Left-Wing Populism and the Arts

© Ben Stainton

All art is political. As Toni Morrison put it in a 2008 interview with Poets and Writers (issue 36.6):

All of that art-for-art’s-sake stuff is BS […] What are these people talking about? Are you really telling me that Shakespeare and Aeschylus weren’t writing about kings? All good art is political! There is none that isn’t. And the ones that try hard not to be political are political by saying, ‘We love the status quo.’ We’ve just dirtied the word ‘politics,’ made it sound like it’s unpatriotic or something.

If this is true (and I believe that it is), what kind of politics does the historical monster mashup represent, and how does this play out in the texts themselves? To help me answer this question, I’ve been reading up on contemporary political theory. Today’s selected readings were on populism – specifically Twenty-First Century Populism: The Spectre of Western European Democracy (2008), edited by Daniele Albertazzi and Duncan McDonnell, and Populism in Europe and the Americas: Threat or Corrective for Democracy? (2012), edited by Cas Mudde and Cristóbal Rovira Kaltwasser. At first glance you might be wondering what the historical monster mashup could possibly have to do with populism or populist art. Surely it’s impossible for the realm of art, as a space of ‘free speech’, to conform to a populist aesthetic.

Populism in Europe and the Americas is specifically interested in whether populism should be seen as a good thing for democracy (and all its associated rights) or a bad one. First, though, it is tasked with actually defining populism and democracy, terms which have both been approached from a wide range of directions. The introduction by Mudde and Kaltwasser ultimately concludes that populism can be minimally defined as an ideology

that considers society to be ultimately separated into two homogeneous and antagonistic groups, ‘the pure people’ and ‘the corrupt elite’, and which argues that politics should be an expression of the volonté générale (general will) of the people […]. This means that populism is in essence a form of moral politics, as the distinction between ‘the elite’ and ‘the people’ is first and foremost moral (i.e. pure versus corrupt), not situational (e.g. position of power), sociocultural (e.g. ethnicity, religion), or socio-economic (e.g. class). Moreover, both categories are to a certain extent ‘empty signifiers’ (Laclau 1977), as it is the populists who construct the exact meanings of ‘the elite’ and ‘the people’ [pp. 8-9, emphasis modified]

In other words, depending on the context and culture populism can potentially be found everywhere – among all social classes and backgrounds, and in both left-wing and right-wing politics.


Twenty-First Century Populism likewise opens by pointing out the frequent abuse of the term ‘populism’ in contemporary politics:

Much like Dylan Thomas’ definition of an alcoholic as ‘someone you don’t like who drinks as much as you’, the epithet ‘populist’ is often used in public debate to denigrate statements and measures by parties and politicians which commentators or other politicians oppose. When an adversary promises to crack down on crime or lower taxes and yet increase spending on public services, it is ‘populist’. When one’s own side does so, it is dealing with the country’s problems [p. 2]

Before we can go about deciding what populism means, then, Albertazzi and McDonnell argue that we first need to account for the term’s negative connotations in politics and popular culture.

After an extensive definition of democracy (and its different facets and levels), Mudde and Kaltwasser conclude that populism is generally beneficial to a democracy when it opposes it from the outside [p. 25], serving to critique the system and keep its power in check. Populism becomes most harmful to democracy when it emerges from within the government, subverting the system’s own ‘checks and balances’ [p. 24] in the name of the popular majority to undermine minority rights, or for personal gain.

Neither of these books comments in any depth on the function of art in populist politics (though Twenty-First Century Populism does contain a chapter by Gianpietro Mazzoleni on the role of the media in promoting and exploiting populist figureheads). Could the historical monster mashup be categorised as populist? That will have to be a future blog post, but at the moment I’m just enjoying learning about a subject area I had never really engaged with. This week’s reading should come in especially handy in light of the American presidential elections.

Calling All Aphantasiacs

This week’s post may at first blush seem entirely unrelated to my research on monsters, mashups, and popular culture, but it has more to do with these topics than you might think.

It will also blow your mind.


A number of years ago I discovered that a member of my immediate family could not visualise things in their mind, or bring to mind what something tasted or sounded like. We both assumed that this was a well-documented phenomenon, like colour-blindness or other altered perceptions. Last year, we learned that no one had actually bothered to study it before, and that it apparently affects a surprisingly large number of people – perhaps one in fifty. It’s called ‘aphantasia’, literally ‘without mental image’.

By far the most insightful (and humorous) piece I’ve read on the subject is ‘Aphantasia: How It Feels To Be Blind In Your Mind’ by Blake Ross, one of the co-creators of the web browser Firefox. Ross describes his inability to create a mental image as follows:

If you tell me to imagine a beach, I ruminate on the “concept” of a beach. I know there’s sand. I know there’s water. I know there’s a sun, maybe a lifeguard. I know facts about beaches. I know a beach when I see it, and I can do verbal gymnastics with the word itself.
But I cannot flash to beaches I’ve visited. I have no visual, audio, emotional or otherwise sensory experience. I have no capacity to create any kind of mental image of a beach, whether I close my eyes or open them, whether I’m reading the word in a book or concentrating on the idea for hours at a time—or whether I’m standing on the beach itself.

Ross’s writing process is equally interesting to those of us who are used to imagining a scene by visualising it. In the (rather lengthy) excerpt below, he describes how he imagines scenes when he is writing general prose:

First I think of a noun in my milk voice: cupcake. Then I think of a verb: cough. Finally an adjective: hairy. What if there was a hairy monster that coughs out cupcakes? Now I wonder how he feels about that. Does he wish he was scarier? Is he regulated by the FDA? Does he get to subtract Weight Watchers points whenever he coughs? Are his sneezes savory or sweet? Is the flu delicious?
If I don’t like the combination of words I’ve picked, I keep Mad Libbing until the concept piques my interest.
This has always struck me as an incredibly inefficient way to imagine things, because I can’t hold the scene in my mind. I have to keep reminding myself, “the monster is hairy” and “the sneeze-saltines are sitting on a teal counter.” But I thought, maybe that’s just how it is.
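Ross’s process is, in effect, a tiny random text generator. For readers who think better in code than in pictures, here is a toy Python version of his Mad Libbing loop – a sketch only, and the word lists are my own invention, not his:

```python
import random

# Invented word lists for illustration -- Ross doesn't publish his vocabulary.
NOUNS = ["cupcake", "lighthouse", "accordion", "beet", "saltine"]
VERBS = ["coughs up", "juggles", "hoards", "auctions off"]
ADJECTIVES = ["hairy", "bureaucratic", "waterproof", "melancholy"]

def mad_lib() -> str:
    """Combine a random adjective, verb, and noun into a story premise."""
    adj = random.choice(ADJECTIVES)
    verb = random.choice(VERBS)
    noun = random.choice(NOUNS)
    return f"What if there was a {adj} monster that {verb} {noun}s?"

# Keep 'Mad Libbing' until a premise piques your interest.
if __name__ == "__main__":
    premise = mad_lib()
    while input(premise + " Keep it? [y/N] ").strip().lower() != "y":
        premise = mad_lib()
    print("Now write about:", premise)
```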
Later Ross describes how he feels about writing fiction:

Overall, I find writing fiction torturous. All writers say this, obviously, but I’ve come to realize that they usually mean the “writing” part: They can’t stop daydreaming long enough to put it on the page. I love the writing and hate the imagining, which is why I churn out 50 dry essays for every nugget of fiction.

Perhaps my favourite part of Ross’s account is where he explains how hard he found it to believe that a ‘mind’s eye’ actually existed:

Even after the Exeter study, I was certain that what we had here was a failure to communicate. You say potato, I say potato. Let me be clear: I know nobody can see fantastical images through their actual eyes. But I was equally sure nobody could “see” them with some fanciful “mind’s eye,” either. That was just a colorful figure of speech, like “the bee’s knees” or “the cat’s pajamas.”
For Ross, it was the people who could picture things in their minds who were unusual. ‘Imagine your phone buzzes with breaking news’, he writes. ‘WASHINGTON SCIENTISTS DISCOVER TAIL-LESS MAN. Well then what are you?’

The University of Exeter’s Adam Zeman, who coined the term last year, remains adamant that aphantasia is ‘not a disorder’, though ‘it makes quite an important difference to their experience of life because many of us spend our lives with imagery hovering somewhere in the mind’s eye which we inspect from time to time; it’s a variability of human experience.’


‘We have known of the existence of people with no mind’s eye for more than a century’, writes the New Scientist:

In 1880, Francis Galton conducted an experiment in which people had to imagine themselves sitting at their breakfast table, and to rate the illumination, definition and colouring of the table and the objects on it. Some found it easy to imagine the table, including Galton’s cousin, Charles Darwin, for whom the scene was “as distinct as if I had photos before me”. But a few individuals drew a total blank.

Zeman’s speculation about whether the ‘mind’s eye’ is a key part of human experience is part of what interests me so much about the condition. What perception do we define as human perception, and how do we account for natural variability in that perception? Are the people who saw a gold and white dress somehow less ‘human’ than those who saw a black and blue dress? Ross comments on this question as well:

I think what makes us human is that we know we’re the galactic punchline, but we can still laugh at the setup. The cosmos got me good on this one. How beautiful that such electrical epiphany is not just the province of the child. And were the bee’s knees real, too? And have the cats worn pajamas all along?

Are people who can’t picture things ‘inhuman’ or ‘disabled’ in some way? Of course not, you’re probably thinking, but why does their existence (or ours, from Ross’s perspective) surprise us so much? Furthermore, how do they then consume popular culture, which is intensely reliant on familiar modes of phantasia, and what does a piece of art geared towards a person with aphantasia look like? For me the question immediately becomes: ‘How many people with aphantasia are working in the arts, and how many have sat before me in an English Literature classroom?’


The answer potentially has huge implications for the humanities, and as someone teaching in English Literature I wonder how effective our current methods of approaching reading and meaning actually are. When I mentioned the condition to my colleagues at Cardiff University, many jokingly responded with something along the lines of: ‘What, you mean someone without an imagination?’

A recent article by Mo Costandi in the Guardian explores the potential impact of aphantasia on education. He writes:

One study shows that using mental imagery helps primary school pupils learn and understand new scientific words, and that their subjective reports of the vividness of their images is closely related to the extent to which imagery enhances their learning. Visualisation techniques are also helpful for the teaching and learning of mathematics and computer science, both of which involve an understanding of the patterns within numbers, and creating mental representations of the spatial relationships between them.

While Costandi’s article is fascinating, the title (‘If you can’t imagine things, how can you learn?’) is poorly chosen. People with aphantasia still imagine things; they just do so very differently from people without it. Their experience is not inherently less rich, or less creative. It is simply rich and creative in a way the rest of us cannot imagine.

The article essentially concludes the same thing – aphantasia is not a disability in any traditional sense:

“We know that children with aphantasia tend not to enjoy descriptive texts, and this may well influence their reading comprehension,” says neurologist Adam Zeman of the University of Exeter who, together with his colleagues, gave the condition its name last year. “But there isn’t any evidence directly linking it to learning disabilities yet.”

While it’s good to know that aphantasia hasn’t been linked to broader learning disabilities, current evidence says little about how metaphorical language is perceived. Descriptive passages in novels may be out, but what kinds of description? My aphantasiac family member is an avid reader, and passages like this one, from the opening of Iain Banks’s A Song of Stone (1997), are no problem:

[Screenshot: the opening passage of A Song of Stone]

So comparing things to ideas still seems to carry impact and emotion, while comparing things to other things, as in the opening passage of Tom Robbins’ Jitterbug Perfume (1984), becomes more problematic:


The beet is the murderer returned to the scene of the crime. The beet is what happens when the cherry finishes with the carrot. The beet is the ancient ancestor of the autumn moon, bearded, buried, all but fossilized; the dark green sails of the grounded moon-boat stitched with veins of primordial plasma; the kite string that once connected the moon to the Earth now a muddy whisker drilling desperately for rubies.

The beet was Rasputin’s favorite vegetable. You could see it in his eyes.


Even more interesting to me (especially in the light of Tom Robbins’ prose) is the question of how aphantasiacs comprehend poetry and poetic imagery, which often relies heavily on a direct link between words and images, senses, or feelings, rather than abstract concepts. What poetry do aphantasiacs prefer, and how do they make sense of the poetry they don’t? What does an aphantasiac make of Alison Croggon’s ‘The Elwood Organic Fruit and Vegetable Shop’, Anne Carson’s Autobiography of Red, or K. Schippers’ ‘Things You Only Need to Remember Briefly’? How can we teach aphantasiacs to read sensory images in an educational system where our most common response is ‘Just read until you feel it’?

Illustration © Maurice Vellekoop

This is part of what Adam Zeman’s AHRC project, ‘The Eye’s Mind’, aims to discover, but I suspect there’s enough work on this subject to go around for a good long while. I would be fascinated to know if any of you reading this identify as aphantasic, especially if you happen to work in the arts or creative industries. You can take an abridged version of the aphantasia test at this link.
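For readers curious what such a test actually measures: self-report questionnaires like David Marks’s Vividness of Visual Imagery Questionnaire (VVIQ), the instrument Zeman’s group used, simply ask you to call scenes to mind and rate their vividness. Here is a minimal sketch of that scoring logic; the prompts paraphrase the VVIQ’s themes, and this is emphatically not the test behind the link above:

```python
# A much-simplified sketch of VVIQ-style scoring (after Marks 1973).
# The prompts paraphrase the questionnaire's four themes; the real VVIQ
# has sixteen items and standardised instructions.
PROMPTS = [
    "the face of a friend or relative",
    "the sun rising into a hazy sky",
    "the front of a shop you often visit",
    "a country scene with trees, a lake, and mountains",
]

def run_test() -> float:
    """Ask for a vividness rating per prompt: 1 = no image at all, 5 = perfectly vivid."""
    total = 0
    for prompt in PROMPTS:
        rating = int(input(f"Visualise {prompt}. Vividness 1-5? "))
        total += min(5, max(1, rating))  # clamp out-of-range answers
    return total / len(PROMPTS)

if __name__ == "__main__":
    mean = run_test()
    print(f"Mean vividness: {mean:.2f}")
    # Zeman's group identified aphantasia in people who reported
    # 'no image at all' across the board.
    if mean <= 1.5:
        print("Ratings this low are the pattern associated with aphantasia.")
```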

Or, better yet, let me know if any of you are researchers at Cardiff, Exeter, Bristol or Bath, and would want to set up a GW4-funded project with me to explore these questions. (You can also contact Adam Zeman about his neurological research into aphantasia at a.zeman@exeter.ac.uk.)

Meaning versus Significance as Explained by Instagram Autumn Mania

Image © Suburban Misfit
I sought fit words to paint the blackest face of woe;
Studying inventions fine her wits to entertain,
Oft turning others’ leaves, to see if thence would flow
Some fresh and fruitful showers upon my sunburn’d brain.
(Sir Philip Sidney, Astrophil and Stella, sonnet 1: 5-8)


The autumn semester has started at Cardiff University, which means lots of new students, and new classes for me to teach. The past few weeks have made me think long and hard (again) about why exactly I study art and culture, what role those two terms play in contemporary society, and how I can explain to students why reading Renaissance sonnets really matters. Inevitably, the first few seminar discussions involve a lot of plot summary rather than analysis, and so much time is spent on figuring out what something says and how it says it that it’s difficult to get beyond that, much less explain why we should.

Fortunately (or unfortunately, depending on your perspective), autumn also means lots of new tweets about #PSL, and lots of new #autumnleaf pics on Instagram, both of which provide rich food for thought if you work in popular culture. Eugene Wolters over at the Critical Theory blog has already explored how the Pumpkin Spice Latte can be used to explain Jean Baudrillard’s concept of the simulacrum. I’m out to make a much less ambitious connection today – specifically, how hashtagged pictures of autumn leaves illustrate the difference between meaning and significance.

I’m referring here to the work of E. D. Hirsch, a critic whose opinions about authorial intention and the origins of meaning I actually fundamentally disagree with, but whose distinction between the terms ‘meaning’ and ‘significance’ I still find useful. As such, I’m going to explain how he defines these two terms and then go on to abuse his definition for my own ends. For Hirsch, the ‘meaning’ of a text is something that does not change, while a text’s ‘significance’ alters depending on where, how, and by whom it is read. If we ignore the way context influences the things we literally see or notice in a text, and thus how significance influences meaning, then we can hijack these terms for a moment to talk about leaf pics. In this instance ‘meaning’ describes what a text is, and ‘significance’ describes what it does.

Image © Mark Nolan

When someone posts a picture of an autumn leaf (#autumnleaf #adventure #travel #wanderlust, etc.), they are reading their surroundings at the level of meaning, and their post essentially replicates that meaning. ‘Look: a leaf’, their picture is saying. Aside from the new associations generated by the hashtags, the process is a closed circuit; the leaves fall, the photo is taken and assumes its place among all the other photos of fallen leaves, and autumn leaf piles are recreated on social media. The season ends, the leaves go away, and the pictures are swept to the gutters of feeds, walls, and profiles everywhere, out of sight and out of mind.

All art potentially functions like this. It’s created, it’s consumed, and then it’s forgotten. It’s meaningful (or potentially full of meaning), but perhaps not significant, or worth commenting on.

This is, of course, a matter of perspective, and given that I’m talking about leaf pics right now, clearly the phenomenon as a whole has enough significance to be noticed by at least one person (two if you count this list). In any case, it’s only by commenting on art, or by giving it ‘significance’, that it is immortalised and made relevant. Why are people so fascinated by autumn leaves? What motivates a person to stop and crouch down in the middle of the street, phone in hand, to document and share (with wonder) an event that happens regularly every year? Is it the experience of sharing they’re after? Why this, and not another experience?

These are the significant questions, and though meaning can help you to answer them, it is a means rather than an end. For me, studying culture and literature is less about unearthing meaning, and more about getting at (and generating) significance. Whether you do it inside or outside a classroom, learning to tell the difference between meaning and significance doesn’t just help you to figure out why reading love sonnets or looking at autumn leaf pics can be productive. It starts to tell you how they have been productive, and for whom, and opens up a new way of looking at things that you hadn’t previously considered. If you believe (as I do) that change is – exhilaratingly – the only real constant, and that we should strive to break out of our comfort zones, being able to think ‘otherwise’ is invaluable.

Image © Ginny

When you stop looking for meaning and start exploring and forging significance, you will find like minds in unexpected places. You may discover that the world is at once a much bigger, and a much smaller place than you first suspected. Isn’t that worth the effort, even if it means scrolling through endless pages of other people’s leaves?

Roland Barthes and Spaces of Attunement


While this week I’m busy preparing two conference presentations at other universities, at the end of March I was a passive observer at two conferences held at my very own Cardiff University. My department hosted the ‘Roland Barthes at 100’ conference, the School of Planning and Geography across the way held a ‘Spaces of Attunement’ symposium, and the two ran concurrently over the same two days at the very end of March.

I was originally only registered for ‘Spaces of Attunement’, but because Neil Badmington, the organiser of ‘Roland Barthes at 100’, is my secondary thesis supervisor, I ended up spending some time there helping out. I even chaired my first panel, on Barthes and visual culture, where I got to hear two very different papers. Stella Baraklianou (from the University of Huddersfield) gave a presentation on the punctum in digital art and photography, citing work by Idris Khan and Eva Stenram. Freelance scholar Jayne Sheridan talked about the border between commercialism and art, using the Chanel N°5 commercial directed by Jean-Pierre Jeunet, ‘Train de Nuit’:

Though I heard many fine presentations during the two-day conference, the one that stuck with me most was Michael Wood’s opening keynote ‘French Lessons’, which I mentioned in last week’s post. Wood talked about many things, but primarily he was concerned with how, even when we read Barthes well, reading him outside the French means misreading him. Our reading may not be wrong, but we are missing something. For Wood, one of the intervening factors in this misreading is that in French, beauty is often more important than exactitude. Practically, this means that French philosophers have a weakness for aphorism – they cannot resist the witty maxim. As I summarised last week, maxims are caricatures of language, and can’t be academically defended. The truth in a maxim is either too trivial to be ‘really’ true, or is not wholly true.

Wood used an example from Barthes’ Camera Lucida (1980), summarised in this old post by Michael Sacasas:

Barthes was taken by the way that a photograph suggests both the “that-has-been” and the “this-will-die” aspects of a photographic subject. His most famous discussion of this dual gesture involved a photograph of his mother, which does not appear in the book. But a shot of [failed assassin Lewis] Powell is used to illustrate a very similar point. It is captioned, ‘He is dead, and he is going to die …’ The photograph simultaneously witnesses to three related realities. Powell was; he is no more; and, in the moment captured by this photograph, he is on his way to death.

The idea that the subject of a photograph is at once dead and going to die may be true, though exceptions could no doubt be found. If it is true, how much inherent meaning does it have? All living things die – this is not something that needs explanation. Rather than genuinely attempting revolution, the maxim merely provides a platform on which we can build the arguments and ideas we want to.

‘He is dead, and he is going to die …’

For Wood, though, this is the entire point of the maxim: it can be used for anything. Citing Adorno’s Minima Moralia (1951), in which he posits that the problem with philosophers is that they want to be right, Wood suggested that being right (or exact) is not so important in literature – at least not in the sense that many people try to impose upon it. Instead, literature expresses a love of arguments without wanting to win.

With that in mind, I trekked across campus to the lovely and imposing Glamorgan building for a fabulous lunch and a change of topic. The order of the day was a posthuman exploration of ‘attunement to the world in all its particularity, strangeness, enchantment and horror’. Suitably prepared for this experience by Wood’s defence of arguments without victories and questions without answers, I sat in on an animal studies panel that included Joanna Latimer on the idea of ‘being/living alongside’ as opposed to ‘being/living with’ nonhumans, Lesley Green on environmental humanities, apartheid, and the baboon problem at Table Mountain National Park, and Karolina Rucinska on transgenic animals and Enviropig. This was followed by a fascinating keynote by Mara Miele, cataloguing the EmoFarm project, an experiment on emotional response in sheep.

I wish my building was this imposing.

Before we closed off the day with a reception, we split into small groups for some discussion, which started off awkwardly but ultimately yielded some interesting ideas and connections. If I can find the time to post about any of these things at greater length, I will definitely do so. Each presentation gave me a lot to think about. For the moment, though, I should probably get back to my other deadlines. Until next week!


Seeing Blue: Science, Art, and Perspective

Absolut Rodrigue, 1993, one of many Blue Dog paintings created by George Rodrigue.

This week’s post will be quick, both because I’m hard at work on research deadlines again after Easter, and because the weather outside is too beautiful to waste any more time typing indoors than absolutely necessary. Sunny weather is incredibly precious here in Northern Europe, and you never know how much of it you’ll get in a given year. I’m eagerly awaiting the day someone makes a screen you can actually see anything on in direct sunlight. Honestly, I can’t believe Apple hasn’t solved this problem yet. Do they never work outside in Cupertino? They probably don’t know how good they have it with their 265 days of sun per year.

Anyway. In honour of my PhD research, today’s post is kind of a mashup between a lecture I attended last week and an article I read a few days ago. The two don’t really have much to do with each other, but I ended up connecting them in my brain anyway.


The article was on the history of the colour blue, and about ‘the way that humans see the world and how until we have a way to describe something, even something so fundamental as a color, we may not even notice that it’s there’. Apparently, in 1858 William Gladstone (followed up by Lazarus Geiger) discovered that ancient cultures had very strange ways of describing colours in their writings, and that this also likely related to how they perceived the world. You can read the complete and fascinating story over on the Business Insider, of all places, but this part of Geiger’s work stood out to me:

Every language first had a word for black and for white, or dark and light. The next word for a color to come into existence – in every language studied around the world – was red, the color of blood and wine.

After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colors to appear in every language is blue.

The only ancient culture to develop a word for blue was the Egyptians – and as it happens, they were also the only culture that had a way to produce a blue dye.
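What Gladstone and Geiger noticed is what linguists, following Berlin and Kay’s 1969 study Basic Color Terms, call an implicational hierarchy: if a language has a basic term for a ‘later’ colour, it will almost always have terms for all the ‘earlier’ ones. Here is a toy Python encoding of the idea – a simplification, since the full scheme has more stages and documented exceptions (the article itself notes that yellow and green sometimes swap):

```python
# The Geiger/Berlin-Kay emergence order as a toy implicational hierarchy.
# Simplified: the full Berlin & Kay (1969) scheme has further stages
# (brown, purple, pink, orange, grey) and plenty of exceptions.
STAGES = ["dark/light", "red", "yellow", "green", "blue"]

def implied_terms(colour: str) -> list[str]:
    """A language with a basic term for `colour` is predicted to have
    basic terms for everything earlier in the sequence too."""
    return STAGES[: STAGES.index(colour) + 1]

print(implied_terms("blue"))
# ['dark/light', 'red', 'yellow', 'green', 'blue'] -- a language that
# names blue should already name all the rest.
```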

I found this idea incredibly striking. To me, the idea that people might not recognise blue is like something out of science fiction. (And now that I’ve got a deluxe edition of The Giver Quartet, I do think it’s high time to re-read Lois Lowry’s Gathering Blue.)

‘What is the color of honey, and “faces pale with fear”? If you’re Homer – one of the most influential poets in human history – that color is green.’

The article, which summarises a fascinating Radiolab podcast series on colour, continues by discussing the work of Jules Davidoff. Davidoff wanted to discover whether you can really see something that you have no word for, so he conducted an experiment with the Himba people of Namibia, whose language has no word for blue and does not distinguish between blue and green, but does have more words for shades of green than most Western languages.

Davidoff discovered the following:

When shown a circle with 11 green squares and one blue, they could not pick out which one was different from the others — or those who could see a difference took much longer and made more mistakes than would make sense to us, who can clearly spot the blue square.

When the tables were turned, however, and the Himba were asked to look at a circle of green squares where one was a slightly different shade, they spotted it immediately. Can you?

Vidipedia/Himba Colour Experiment

You can find out if you were right at this link.
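If you want to recreate a Davidoff-style stimulus yourself, a few lines of Python will do it. The shades below are my own guesses rather than the values used in the actual experiment, so treat this as a sketch and calibrate to taste:

```python
import math
import random
import matplotlib.pyplot as plt

# Draw a Davidoff-style ring: eleven identical green squares plus one
# whose shade differs slightly. The RGB values are invented for
# illustration, not taken from the experiment.
BASE_GREEN = (0.33, 0.62, 0.24)
ODD_GREEN = (0.38, 0.62, 0.24)  # nudge the red channel a little

odd_index = random.randrange(12)
fig, ax = plt.subplots(figsize=(5, 5))
for i in range(12):
    angle = 2 * math.pi * i / 12  # spread the squares around a circle
    x, y = math.cos(angle), math.sin(angle)
    colour = ODD_GREEN if i == odd_index else BASE_GREEN
    ax.add_patch(plt.Rectangle((x - 0.12, y - 0.12), 0.24, 0.24, color=colour))
ax.set_xlim(-1.4, 1.4)
ax.set_ylim(-1.4, 1.4)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
print("The odd square was at position", odd_index)
```

Shrink the gap between BASE_GREEN and ODD_GREEN and you can hunt for your own discrimination threshold.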

Clearly, language has a huge impact on perception. This brings me to the second subject of this blog post, a lecture by Michael Wood called ‘French Lessons’. At the recent Roland Barthes at 100 conference, Wood indirectly talked about the use of literature in discovering the strange within the seemingly obvious. His specific example was the love of French philosophers for aphorisms – snappy, universalising catchphrases. Aphorisms are like little linguistic caricatures, saying things that we know aren’t really true, or that are too trivial to really matter even if they are true.

The Old Guitarist, painted in 1903 by Pablo Picasso, just after the suicide of Picasso’s close friend Casagemas.

Some of the most memorable (and quotable) bits of language aren’t true in the traditional ways. ‘God is dead’, or ‘To be or not to be, that is the question’ don’t really say anything in a factual or practical sense. Despite this they still convey a great deal of meaning, and have a strange power to make us re-think things we once saw as obvious. We need this kind of caricature, Wood argued, to ever have a chance of escaping the limits of language. By the limits of language, what we’re talking about here is the ways we use language to reinforce what is normal, and also to categorise things and people that are not.

I could write a lot more about Wood’s lecture (and I probably will next week), but in the context of the history of blue it really resonates. One of the best side effects of art is how it illuminates the things that hide in plain sight. Life may seem complete, but then art reminds us of all the things our picture of the world leaves out. Just think of all the colours that might still be out there, waiting to be seen.

Vincent van Gogh, Starry Night Over the Rhône, 1888.


On the Front Lines Between ‘Funny’ and ‘Offensive’

Art by Billy Ludwig.

Seeing as today is April Fool’s Day (or April Fools’ Day, as Wikipedia pointedly suggests I should be apostrophising it), and most of the commentary on the day’s festivities seems to border on despair and desperation, I thought it might be fun to post something about the uses and limits of humour.

The line between what’s funny and what isn’t is a fine one, and is often purely a matter of context. Even when successful, humour is always a question of morality, politics, and aesthetics. One need only look at the recent Charlie Hebdo shootings and controversy to confirm this assertion. Humour can be used to point to both serious and trivial issues, and whether it is productive is always a question of perspective.

In the introduction to their 2005 essay collection Beyond a Joke: The Limits of Humour, Sharon Lockyer and Michael Pickering have the following to say about the distinction between what is considered funny and what is offensive:

It is generally regarded as being beneficial to laugh about things, including ourselves; to get problems off our chests and ‘see their funny side’; to look back on what was previously regarded as very serious, maybe even tragic, and ‘have a good laugh about it’. There are clearly many cases where this is so, but equally there are others when it is inappropriate to laugh, when humour does not sit happily with the general tenor of an event or situation, and when a joke is regarded as overstepping the mark, as being beyond a joke [p. 4]

In this definition of the border between humour and harm, the focus is on whether the object of ridicule is the person laughing, or some unspecified other. A few pages later Lockyer and Pickering use the example of a female Muslim comedian, who tells a joke about being felt up during a pilgrimage to Mecca. Were a Christian man to tell this joke, they argue, it would ‘shift from being a joke at the teller’s expense to a joke told at someone else’s’ (p. 9).

From the cover of Beyond a Joke

Nazi jokes and humour are often the target of these kinds of questions. Is the Japanese ‘Nazi chic’ trend excused by the fact that the perpetrators aren’t German, and weren’t even alive during World War II? At what point are we responsible for the images and ideas we appropriate?

Lately I’ve been doing a lot of thinking about the borders of the humorous. I recently finished a first draft of a chapter on humour in the novel-as-mashup (Jane Slayre, Wuthering Bites, etc.), which is also under review as part of an upcoming book collection. In these mashups, the question of humour is mainly one of triviality. The very idea of combining Jane Austen and zombies or sea monsters is so silly it almost crosses the border of humour in the other direction to become boring. Even for those inclined to be disturbed by the potential defacement of a literary classic, is it really worth getting upset over fiction? Because I’ve been working on it so much I’m a bit tired of the whole genre at the moment, but I’m also working on a tangential paper for the upcoming Material Traces of the Past in Contemporary Literature conference in Málaga, Spain (naturally the location had nothing to do with my decision to attend).

Art by Billy Ludwig.

For this paper I’ll be looking at how the creation and inversion of ironic distance complicates readings of historical fiction in the twenty-first century. Rather than looking at the nineteenth century as I do in my chapter, I’ll be covering some texts set during the two World Wars. The first is Kim Newman’s The Bloody Red Baron (1995 | 2012), an alternate history novel that imagines Count Dracula leading the German forces during WWI. The second is Billy Ludwig’s manipulations of old photos from WWII, modified to include iconic images from Star Wars.

The Bloody Red Baron was a largely uncontroversial novel, both at its original 1995 release and at its reprinting in 2012. Ludwig’s photomanipulations have seen some moderate internet blowback, however, as has the similar If Star Wars Was Real photo series. This comes despite the fact that the two sets of images clearly mesh well aesthetically, partly because George Lucas drew liberal inspiration from both World Wars when creating the Star Wars universe, as fan and historian Cole Horton demonstrates. When you get down to it, both texts are technically doing the same thing: inserting fantastical characters and images into historical contexts. What makes the one potentially upsetting, while the other is accepted as harmless?

The 2012 reissue of The Bloody Red Baron has some flashy new cover art.

One answer might be the visual immediacy of photography. It’s inherently more confrontational than text. With a novel we have the right not to read, or to ‘look away’, as it were. In her 1968 article ‘The Social Control of Cognition: Some Factors in Joke Perception’, Mary Douglas argues that humour needs to be permitted as well as understood in order to be perceived as a joke. There is nothing particularly offensive about any version of The Bloody Red Baron’s cover art, or the book’s title. You never know where Ludwig’s photomanipulations might pop up, though, confronting you with something you have no chance to opt out of.

A woman runs for her life from an AT-AT.

Another answer might lie in the question of celebrity. In The Bloody Red Baron, Kim Newman mainly hijacks famous historical figures. By stepping into the public spotlight, these figures have abdicated their right to privacy – at least to a certain degree. Additionally, many of these famous figures remained far from the front lines during the war. The same cannot be said for the lowly foot-soldiers and everyday citizens featured in many of the Star Wars photomanipulations.

What do you think – are these images offensive? If so, where do they cross the line?

‘Everything is Awesome’ is the Anthem of Our Age

So the Oscars were on over the weekend. And although The LEGO Movie may have been snubbed in the nominations for Best Animated Feature, it was very present in the evening’s rendition of ‘Everything is Awesome’, which included Oscar statuettes made out of LEGO blocks and a heavy metal interlude by Will Arnett (as Batman):

‘Everything is Awesome’ feels like the weird theme song of our times. Take this excerpt, rapped by SNL’s The Lonely Island:

Life is good, ‘cause everything’s awesome!
Lost my job, it’s a new opportunity—
more free time for my awesome community!
Stepped in mud, got new brown shoes:
It’s awesome to win, it’s awesome to lose!
Blue skies, bouncy springs, we just named two awesome things!
A Nobel Prize, a piece of string, you know what’s awesome? Everything!
Everything you see, think, or say is awesome!

Are they serious about handling adversity so positively? Maybe, maybe not. But in the wake of things like the credit crisis, the rapidly dwindling job market, armed conflicts in Palestine and Syria, and the attacks in Paris, these words feel at once ridiculous and right. What else can we do but handle the challenges thrown at us as best we can? Over on Vulture, LEGO Movie star Chris Pratt also weighed in on the song’s relevance to our contemporary world:

I think it follows the theme of this movie, which you think is just some Lego movie made to sell toys – and it’s actually a really subversive, interesting, thought-provoking commentary on society […] The song itself represents that, because it’s saying that everything is awesome, but it’s the anthem of this strange world that exists halfway between America and North Korea, you know what I mean? Twenty or 30 years from now, I think people will look back at ‘Everything Is Awesome’ and it’ll be more than just a cool pop song. It’s really reflective of where we are right now, and that’s what art is all about.

‘Everything is awesome! Everything is cool when you’re part of a team.’


While it may seem strange to some that Pratt refers to a song from a children’s film as art, he’s put his finger on something that resonates with a lot of the theories on popular art in which I’m currently immersed. One thing I’m especially into at this moment is called metamodernism. Recently a lot of people have been speculating that postmodernism has run its course, and that something new is taking shape. What is this something, you ask? I’ll let Luke Turner explain:

[R]ather than simply signalling a return to naïve modernist ideological positions, metamodernism considers that our era is characterised by an oscillation between aspects of both modernism and postmodernism. We see this manifest as a kind of informed naivety, a pragmatic idealism, a moderate fanaticism, oscillating between sincerity and irony, deconstruction and construction, apathy and affect, attempting to attain some sort of transcendent position, as if such a thing were within our grasp. The metamodern generation understands that we can be both ironic and sincere in the same moment; that one does not necessarily diminish the other.

Basically, modernism places us in a world that, ontologically speaking, may no longer exist, and postmodernism’s relentless deconstruction makes it epistemologically difficult to move forward. Metamodernist art (and criticism) lets us have our cake and eat it too, in ‘a kind of informed naivety’. It believes in a world that it logically knows will never come to be.

Bo Bartlett, School of the Americas, 2010, oil on board, 76 × 76 in. Click through for an article on metamodern art.

So what does this have to do with ‘Everything is Awesome’ and The LEGO Movie? More than you would think. Seth Abramson’s got a great article from almost exactly a year ago on how The LEGO Movie represents ‘the first unabashedly metamodern children’s film in Hollywood history’. Specifically, he talks about the simultaneously ironic and sincere impulse behind the song that swept the Oscars last weekend:

In the pre-metamodern world, these lyrics would be immediately (and rightly) received as ironic. These days, not so much. In fact, it’s impossible to tell whether the exuberance behind the song above is real or feigned, as it’s simultaneously the anthem for a repressive totalitarian state run by Lord Business and an unbearably catchy, optimistic tune the high-spirited Emmet continues to enjoy even after its sinister intentions are revealed.

Though it’s a lot to wrap your head around in practice, this pairing and blurring of the ‘real’ and the ‘feigned’ are everywhere in contemporary art. Appropriately, in a blog post over on Notes on Metamodernism, Kyle Karthauser explores the notion of ‘the awesome’ as the metamodern equivalent of the sublime:

Etymologically it gestures at the sublime through “awe,” an experience that mingles abject fear and profound ecstasy. In today’s vernacular, it denotes delight (“Hall & Oates is coming to the state fair? That’s awesome!”) And while the everyday use of “awesome” may be used to describe mundane, not-terribly-profound things, in the broadening of its definition it forces us to broaden our aesthetic understanding of the space between the sublime and the beautiful. From a classical or postmodern standpoint, there is no space between the two, and absolutely no overlap.

The entire post is well worth a read, even if you only skip through to Karthauser’s analysis of Evel Knievel as an emblem of the awesome. There’s also an interesting look at the opacity or surface-heavy quality of the metamodern era, and the moments of revelation an encounter with these surfaces can produce:

After 30+ years of the postmodern paradigm, an experience of this sort is genuinely mind-boggling. When the dominant critical practice is that of dissolving texts into various discourses, of tracing allusions, sniffing out irony, and contemplating the terminally arbitrary nature of signification itself, you aren’t prepared for Evel Knievel. You aren’t prepared for the Star Wars Kid. When someone tattoos this permanently onto their body, your belief in irony as the be-all-end-all of enlightened cultural expression has to be shaken.

These experiences certainly don’t negate our cultural and political obligations to be informed in the long term, but they do allow us a few precious moments of naivety. They give us a break from the numbing cynicism that might otherwise threaten to overwhelm and immobilise us, especially given the vast inundation of often-negative information with which we are continually bombarded.

This guy is a pretty perfect example of ‘the awesome’ at work.


‘Everything is Awesome’ (and The LEGO Movie in general) hit a nerve in our culture because it’s simultaneously so simple and so over the top. You can read it on a number of levels – deeply and superficially, ironically and sincerely – and all of those readings would be correct. Is everything awesome? Probably not. But if we behave as though it is, we may well achieve something meaningful through that informed naivety. And we might actually enjoy ourselves in the process.

Embrace your metamodern side and listen to ‘Everything is Awesome’ here. And maybe also watch this video of hamsters eating tiny burritos.

Our Zombies, Ourselves: A Lecture with J. Halberstam

Jack Halberstam during an interview at USC. Photo by Sara Newman.

At guest lectures I usually come prepared to fully understand about half of the references made, and get excited about one or two particular sound bites. Not so at Jack Halberstam’s lecture on Zombie Humanism at the End of the World (originally titled ‘Our Zombies, Ourselves: Queerness at the End of Time’), kindly hosted by the Cardiff School of Journalism, Media and Cultural Studies. Halberstam’s talk on zombie humanism and biopolitics is right in the same theoretical zone as my own PhD work on monster mashups, and her walk through wildness, pets, mess, and bare life left me sagely (and embarrassingly) nodding in agreement for the better part of an hour. I’m most familiar with Halberstam’s early work Skin Shows: Gothic Horror and the Technology of Monsters (1995), which takes a look at the social issues that accompany portrayals of monsters from the nineteenth century to the present. ‘Our Zombies, Ourselves’ presented these issues in a new jacket.

The lecture started with some of the work Halberstam is currently doing on ‘wildness’ as a theoretical concept. Wildness is a word with a lot of connotations in our culture, and can potentially be appropriated for anti-colonial, anti-imperialist, and anti-humanist ends. Wildness isn’t part of the mainstream, but is still a part of our environment and our society. It’s a place we’ve categorised as un-categorisable. A place where ideas about what’s ‘normal’ or ‘civilised’ go to die. This makes it an ideal framework for thinking about queerness, and about zombies.

In Halberstam’s conception of zombie humanism, zombies represent everyone relegated to the role of the ‘living dead’ in our society: people we’ve ‘rescued’ from death, but who only matter in that they make us feel good about that ‘human’ act of charity. Zombie humanism serves to make us more human and everything else less so, and who the ‘us’ and ‘we’ are in this scenario may not be the groups you expected. Zombies are everywhere.

You may even be one yourself. Also, if you haven’t seen this episode of Community (Season 2, Episode 6), you need to go do that. Right now.

The first category of the ‘living dead’ Halberstam brought up in her talk was household pets – cats, dogs, goldfish, and any other animal we may ‘rescue’ from the category of food. Because no guest lecture in the UK (or anywhere else, for that matter) is complete without a reference to Monty Python, she led with the infamous parrot sketch, offering the highly tweetable statement that ‘all pets are dead parrots’. The humour in the sketch lies mainly in the stereotypical dishonesty of shopkeepers (the object could just as easily have been a boat that didn’t actually float, for example), but the irrelevance of a pet’s physical or emotional state as an object is also a factor in this joke. Why couldn’t you have a dead parrot as a pet? What makes a live parrot better, really? We take care of our pets largely without considering whether they enjoy or appreciate what we are doing. We draw arbitrary boundaries (‘Nip, don’t bite’), preferring to overlook the fact that a pet’s whole existence, like that of livestock or other ‘edible’ animals, is primarily for the benefit of humans.

Frankenweenie, everyone’s favourite zombie pet.

The pet discussion sparked some controversy with the more aggressively progressive pet owners in the audience, and came up quite a bit during the Q&A at the end (‘But I’m a vegan’ and ‘I don’t tell people I have cats, I say I live with two cats’), but for me the super-anthropomorphising of pets among wealthy owners only complicates this issue further. It reminds me eerily of a recent trip to South Africa, where a vineyard owner was all-too-eager to tell me how they take care of ‘their blacks’ in the aftermath of apartheid. While I realise this is an extreme comparison to make, it also seems strange to me that we are often so unwilling to even consider that we might be treating the animals that live with us in a demeaning or inappropriate way. This idea of the pet as a zombie is partly a response to Donna Haraway’s humanisation of the pet as a companion species (see The Companion Species Manifesto and ‘Lively Capital’ in particular); it fills in some of the theoretical gaps in the ways we typically think about our ‘furry friends’.

The category of ‘living dead’ isn’t only reserved for animals. Prisoners, refugees, the poor: all categories of people we regularly argue we are ‘helping’ while simultaneously denying them humanity/personhood. While we’re on the topic of race, Halberstam offered a fun illustration from AMC’s The Walking Dead on the ease with which non-white subjects become ‘zombies’ – in both the metaphorical and the literal sense. Human characters die all the time on the show, but never quite in the numbers they do when the predominantly white band of survivors (led by cowboy-archetype Rick) encounters a prison full of predominantly non-white inmates.

Guess how long these guys lived.

For Halberstam, the US zombie is a racialised frontier metaphor: cowboys versus Indians. This also reflects US attitudes towards the socially undead ‘zombie other’. Things are quite different in UK-produced zombie media, for example, and here Halberstam made a reference to my favourite zombie series, In the Flesh. In this series, rather than putting down the zombie uprising through extermination, the Brits turn to a different kind of domination: rehabilitation. Sufferers of Partially Deceased Syndrome (PDS) are given a kit that includes anti-rage meds, special contact lenses, and makeup to mimic living flesh, and are returned to their ‘natural’ environments. Here they are naturally shunned, abused, and sometimes even killed by the angry and frightened survivors of the zombie apocalypse.

The attitudes towards zombies in popular culture again reflect widespread attitudes towards the dehumanised other in our society. We paint pictures of children and adults who are mindless, self-absorbed consumers and a burden on the economy. We rehabilitate those who don’t fit our idea of the normal. We fight wars against foreign terrorist groups, put criminals into prisons and the elderly into nursing homes. The point here is not necessarily whether these people should be rehabilitated, but that the way we go about it is fundamentally dehumanising for the objects of these efforts. Really, though we rarely think about it, our gestures of rehabilitation are designed to make us feel more human, and they transform the recipients of that unsolicited help into people who, while they may still technically be alive, have been stripped of the agency that would imbue that life with meaning.

Bookending her discussion on wildness, Halberstam also brought the concept of ‘mess’ to bear on her analysis of zombie humanism, particularly as it relates to queerness. Citing studies by queer theorists Martin Manalansan and José Muñoz, which both seek newer and more humanising ways of looking at queer lives and spaces, Halberstam explored the use of mess as both an aesthetic and a theoretical approach. In the Western world we love order, and we love binaries – especially in academia. But if we can learn to embrace mess and chaos (whatever such an approach might look like in practice), we may discover a different but equally valid system of categorisation. At the very least we will be opening ourselves to new and much-needed perspectives. This approach will be vital if we ever want to ‘kill’ zombie humanism once and for all.

‘I know exactly where everything is.’

These theories all form part of Halberstam’s current work on fascism and (homo)sexuality. All in all it was a very interesting evening, and I’m definitely looking forward to reading Halberstam’s future work in this area. You can find a link to the original seminar description at organiser Paul Bowman’s academia.edu page.

I even got up the courage to ask a question at the end.

Of Apes and Angels

Image via ‘Awesome and Oops’

“Humans need fantasy to be human. To be the place where the falling angel meets the rising ape.”
–Terry Pratchett, Hogfather (London: Corgi, 1997), p. 422

This blog has recently undergone a move from WordPress.com to a real domain, as well as a re-design that includes a new name – something more vivid and less technical than ‘Neo-Historical Monsters’. The new name, ‘Angels and Apes’, is taken from one of my favourite quotes on the use and nature of fantasy, from one of my favourite authors. To get me in the mood for the holidays I was recently re-reading Terry Pratchett’s Hogfather, which is part of the extensive and excellent Discworld series. In it, Death (who speaks in all-caps) and his granddaughter Susan fight to save the Discworld equivalent of Father Christmas, because otherwise the sun will never rise again. In the novel’s climax, a sceptical Susan asks Death what would really happen if belief in the Hogfather died out. In the following – rather long – conversation, Death replies:

THE SUN WOULD NOT HAVE RISEN.

“Really? Then what would have happened, pray?”

A MERE BALL OF FLAMING GAS WOULD HAVE ILLUMINATED THE WORLD.

They walked in silence for a moment.

“Ah,” said Susan dully. “Trickery with words. I would have thought you’d have been more literal-minded than that.”

I AM NOTHING IF NOT LITERAL-MINDED. TRICKERY WITH WORDS IS WHERE HUMANS LIVE.

“All right,” said Susan. “I’m not stupid. You’re saying humans need… fantasies to make life bearable.”

REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

“Tooth fairies? Hogfathers? Little-”

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

“So we can believe the big ones?”

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

[…]

STARS EXPLODE, WORLDS COLLIDE, THERE’S HARDLY ANYWHERE IN THE UNIVERSE WHERE HUMANS CAN LIVE WITHOUT BEING FROZEN OR FRIED, AND YET YOU BELIEVE THAT A… A BED IS A NORMAL THING. IT IS THE MOST AMAZING TALENT.

“Talent?”

OH, YES. A VERY SPECIAL KIND OF STUPIDITY. YOU THINK THE WHOLE UNIVERSE IS INSIDE YOUR HEADS.

“You make us sound mad,” said Susan. A nice warm bed…

NO. YOU NEED TO BELIEVE IN THINGS THAT AREN’T TRUE. HOW ELSE CAN THEY BECOME? said Death.

[p. 422]

Though this passage is probably not specifically intended as a posthumanist statement, the two fit very well together. The universe would keep on existing without humans, but it would exist in a very different way – especially for us. We are creatures of experience and imagination. Everything in our world ultimately exists because we recognise it’s there; because we have decided it does in fact exist, and that it does so in a particular way. Our existence is, in fact, a kind of fantasy. With that in mind, imagining things beyond our current conception of ‘reality’ is a very important way for us to change that reality, and to push the boundaries of human experience.

This idea is a foundational part of my research. The idea that we create our own reality is what first drew me to the study of genre fiction, and it resonates with me on a deeper level as someone who embraces the postmodern philosophy that the most important questions are ontological rather than epistemological: so not ‘how did I come to be?’ but ‘who and what am I?’. The possibility of infinite imagination in identity creation is an important concept in things like revisionist mythmaking and Afrofuturism, or re-writing the past to make space for different voices in the present. This is also a central question in posthumanism, which continuously tries to redefine the human from the outside in, ultimately rejecting the idea that a ‘perfect human’ can exist.

Angels and apes represent two very different human identities. One is part of classical and religious narrative, the other part of modern and scientific narrative. Death’s comment about falling angels and rising apes in Hogfather is a reference to a famous paragraph from a study by playwright and anthropology writer Robert Ardrey:

We were born of risen apes, not fallen angels, and the apes were armed killers besides. And so what shall we wonder at? Our murders and massacres and missiles, and our irreconcilable regiments? Or our treaties whatever they may be worth; our symphonies however seldom they may be played; our peaceful acres, however frequently they may be converted to battlefields; our dreams however rarely they may be accomplished. The miracle of man is not how far he has sunk but how magnificently he has risen. We are known among the stars by our poems, not our corpses.

–Robert Ardrey, African Genesis: A Personal Investigation into the Animal Origins and Nature of Man (New York: Dell Publishing, 1961), p. 354

Whether we identify as rising apes, falling angels, or something in between, it’s important that we keep on asking ourselves who we are and what we believe, and that we keep on imagining difference.

Unless of course we no longer want to grow as people, or as a species. Then we should definitely never write or read fantasy, especially not fantasy by Terry Pratchett.

Posthumanism

Every piece of research needs to have goals, and mine has two. The problem with these goals is that they’re kind of difficult to understand unless you explain one term: posthumanism. This term is tricky to approach as well, because it actually has two meanings. One of the meanings has to do with altering humanity through technology, with the idea that we’ll someday become something ‘more’ than human. This is a cool idea, but not quite the kind of posthumanism I’m interested in.

The second meaning of the term posthumanism refers to the idea that there is no one definition of the word ‘human’. Starting in the Renaissance we see lots of humanism in the Western world, which, like our first meaning of posthumanism, focuses on improving humanity through humanity’s products. Unlike posthumanism, though, humanism’s tools for improvement were the humanities: grammar, rhetoric, history, poetry and moral philosophy. This also sounds like a good thing, and it was. Partly as a result of the humanist movement, new groups in society (including women) regularly learnt to read and write, and had access to the texts that came along with that new knowledge.

We’ll call our wealthy, white, heterosexual male Steve.

The problem with humanism, at least the way posthumanists see it, is that when you try to make everyone better by your ideals, you also end up sneakily defining what ‘better’ is by those ideals. In Western culture, ‘better’ has traditionally been a wealthy, white, heterosexual man. There are lots of those in our history and culture. There’s absolutely nothing inherently wrong with being a wealthy, white, heterosexual man (some of my best friends are at least well-off, white, heterosexual men). When you assume that the views of wealthy, white, heterosexual men are best, though, you tend to step on other people’s identities in the process of ‘improving’ them.

So posthumanism, despite the ‘post’ in front, is actually about re-defining what it means to be a (better) human. That takes a lot of the straightforwardness out of the whole self-improvement system, but it does hopefully give new groups of people – the poor, women, non-whites, LGBTQs, children – the chance to be heard and accepted on equal footing with wealthy, white, heterosexual men.

But what does all this have to do with monsters? That’s a good question, which is probably best answered in a later post.

You can also read my first post to figure out how we got to this point.