One of the joys (and sorrows) of research is all the interesting information you find on one topic while doing research on something completely different. While researching spirit photography, for instance, I came across this fascinating account of the Victorian stereoscope in the art book for National Museums Scotland’s exhibition ‘Photography: A Victorian Sensation’.*
If you think the 3D film craze is a new thing, think again. The stereoscope is one of its many historical predecessors. Essentially a pair of fancy spectacles, the device presented two nearly identical images side by side, one to each eye, so that they merged into a single, apparently three-dimensional scene. Alison Morrison-Low describes how enthusiastically the Victorians took to the technology:
Hundreds of thousands of stereoscopic images were sold […] in a major craze which reached every middle-class Victorian drawing-room. The demand appeared insatiable. In 1854, George Swan Nottage (1823-85) set up the London Stereoscopic Company. ‘No home without a stereoscope’ was its slogan. It sold a wide range of stereoscopes, costing from 2s 6d to £20 (about £10 and £1550 today), and became the largest photographic publishing company in the world. [p. 63]
The vast numbers of stereo photographs can be divided into four main categories: travel, news, social scenes and comedy. By far the largest group was that of travel. […] The beauty of the English, Welsh or Irish countryside was frequently illustrated, as well as that of Scotland. Rural poverty and derelict cottages were seldom shown, as a Romantic portrayal of scenery prevailed. [p. 67]
And speaking of the Romantics…
Charles Breese (1819-75) of Birmingham and Sydenham sold his highly thought-of quality slides at 5 shillings (£20 today) each. Entitled ‘Breaking Waves’, 1870s-80s, it comes with a quote from Lord Byron: ‘Sea with rocks and a half moon / the deep blue moon of night, Lit by an orb / Which looks like a spirit or a spirit’s world’. [p. 76]
*All page citations refer to Alison Morrison-Low, Photography: A Victorian Sensation (Edinburgh: National Museums Scotland, 2015).
Star Wars has always been a deeply political franchise. Not just in its themes, which include war, totalitarianism, multiculturalism, and civil disobedience, but also through its use in political debates and activism. George Lucas has consistently claimed that the first Star Wars film was an analogy for the Vietnam War, and that the villainous Emperor Palpatine had a specific real-life counterpart: “Richard M. Nixon was his name. He subverted the senate and finally took over and became an imperial guy and he was really evil. But he pretended to be a really nice guy.” The franchise is also steeped in historical references which, while not directly political, certainly contribute to its politicisation by various groups. The Stormtroopers, and other visual parallels between the Empire and Nazi Germany, are just one example.
The franchise has also frequently been read as making specific political statements, as in Ep III, where Anakin Skywalker tells Obi-Wan “If you’re not with me, you’re my enemy.” This caused many US conservatives to protest that the film caricatured former President George Bush’s post-9/11 assertion that “Either you are with us, or you are with the terrorists.” Ronald Reagan’s 1983 missile defense initiative was dubbed “Star Wars”: a move that irked Reagan, but was shrewdly deemed good publicity by assistant secretary of defense Richard Perle, who reasoned: “It’s a good movie. Besides, the good guys won.”
In a 2012 article, Jonathan Gray wrote about how fans used a scene from Ep V, in which a Rebel snowspeeder takes down an Imperial Walker with a grappling hook, as a metaphor to protest Wisconsin Governor Scott Walker’s Budget Repair Bill in 2011. One man played the scene on a loop using his iPad, chanting “The Rebels brought down Walkers. So can we!” Others carried Star Wars slogans on signs, or dressed up as the vehicles from the films. The Star Wars references served as an important point of “morale and community building” among the protestors. In another article, Andreas Jungherr describes how Darth Vader was used by the SPD (a German political party) to discredit Angela Merkel in the 2009 German federal election.
Stay tuned for the full article, on Star Wars and popular feminism, later this year!
Christopher Klein, ‘The Real History That Inspired “Star Wars”’, History.com, 2015, para. 4 <http://www.history.com/news/the-real-history-that-inspired-star-wars> [accessed 24 February 2017].
Derek R. Sweet, Star Wars in the Public Square: The Clone Wars as Political Dialogue (Jefferson, NC: McFarland, 2015), p. 10.
Frances FitzGerald, Way Out There In the Blue: Reagan, Star Wars and the End of the Cold War (Simon and Schuster, 2001), p. 39.
Jonathan Gray, ‘Of Snowspeeders and Imperial Walkers: Fannish Play at the Wisconsin Protests’, Transformative Works and Cultures, 10 (2012), para. 1.2.
Andreas Jungherr, ‘The German Federal Election of 2009: The Challenge of Participatory Cultures in Political Campaigns’, Transformative Works and Cultures, 10 (2011), para. 5.6 <http://journal.transformativeworks.org/index.php/twc/article/view/310> [accessed 10 February 2017].
Could this ragged girl with brindled lugs have spoken like we do she would have called herself a wolf, but she cannot speak, although she howls because she is lonely–yet ‘howl’ is not the right word for it, since she is young enough to make the noise that pups do, bubbling, delicious, like that of a panful of fat on the fire. Sometimes the sharp ears of her foster kindred hear her across the irreparable gulf of absence; they answer her from faraway pine forest and the bald mountain rim. Their counterpoint crosses and criss-crosses the night sky; they are trying to talk to her but they cannot do so because she does not understand their language even if she knows how to use it for she is not a wolf herself, although suckled by wolves.
Her panting tongue hangs out; her red lips are thick and fresh. Her legs are long, lean and muscular. Her elbows, hands and knees are thickly callused because she always runs on all fours. She never walks; she trots or gallops. Her pace is not our pace.
Since I’m deep in piles of academic work at the moment (teaching, articles, conference planning, thesis deadlines, you name it), I thought I would gift myself a lighter week and give you some of my top picks for the absolute worst depictions of academics and academic life in contemporary popular culture.
7. Victor Frankenstein (every Frankenstein adaptation ever since 1818)
Why he’s the worst: Ok, so technically Frankenstein isn’t actually a doctor. Nowhere in Shelley’s novel is he awarded a PhD or MD; he’s just a ‘natural philosopher’. Still, this mad, Romantic genius is one of the classic bad academics, and he’s been giving scientists a bad name for nearly 200 years. Monopolising the entire experiment, ignoring the advice of colleagues, robbing graves: that’s just bad scientific practice.
6. Edward Alcott (Loser, 2000)
Why he’s the worst: This literature professor thinks he knows better than everyone else, and puts down curious students at virtually every opportunity. Plus, he’s sleeping with (and emotionally abusing) one of his young students. While he may sadly not be completely fictional, he’s definitely not someone who belongs in academia, or who will have a place there for much longer.
5. Indiana Jones (Indiana Jones and the Temple of Doom, 1984)
Why he’s the worst: This guy launched 1,000 PhDs in archaeology, but when they finally got there they discovered that no, as an academic you don’t generally get to explore booby-trapped temples, fight natives, or casually destroy priceless artefacts. When you do get to the fun part out in the field, it’s mainly brushing, measuring, and meticulously cataloguing. And unlike Indiana, you certainly don’t get endless months of teaching leave and funding with which to do it.
4. Ted Mosby (How I Met Your Mother, 2005)
Why he’s the worst: Does ‘Professor Mosby’ actually have a PhD in architecture? Does he even have an MA? Does he…does he even actually know what he’s talking about? The show doesn’t really care, since his teaching is just a funny thing he sometimes does to break up the monotony of drinking at MacLaren’s, having an awesome time with his friends, and getting into and out of terrible relationships.
Also, no way he could pay for that Manhattan apartment on an adjunct’s salary.
3. Daniel Jackson (Stargate SG-1, 1997)
Why he’s the worst: Egyptologist Daniel Jackson is the ultimate Gary Stu. He’s not taken seriously by any of his academic colleagues, because he’s basically a crazy conspiracy theorist. Then, all his theories are validated because it turns out aliens actually did build the pyramids, so he becomes a chief advisor to the U.S. Air Force. He speaks a bajillion languages and knows everything about science, mythology, and whatever the show needs him to know. Because that’s apparently part of what Egyptologists learn in grad school. Also, hot women are constantly and unexpectedly attracted to him.
2. Robert Langdon (The Da Vinci Code, 2003)
Why he’s the worst: I take it back—Robert Langdon, ‘Harvard University professor of religious iconology and symbology’, is the real Gary Stu. All Dan Brown’s books have an awesome hero who looks vaguely like Harrison Ford, and this guy is ‘Harrison Ford in Harris tweed’. He is a genius and brilliant and has an eidetic memory, but doesn’t speak Italian or know anything useful outside of what he needs to solve all the mysteries in the story. From the novels we can deduce that what he does all day at work is talk cryptically about things and try to look smart.
Also, fake academic discipline is fake.
1. Clayton Danvers (Bitten, 2014)
Why he’s the worst: This guy, man. I know technically that asking him to be realistic in any way is missing the point, since his real role in this show is to be eye candy, and also to mope around and tell us how awesome Elena is. It wasn’t enough for him to be sexy and loyal, though. Clay is the Man Who Has it All. Seriously, this is the end of his character biography on SyFy.com: ‘Now a Professor of Anthropology, Clay divides his time between his scholastic research and enforcing the pack code while keeping errant Mutts in line.’
From his melodramatic anthropology lectures about ‘deep desires’, ‘the beasts within us’, and ‘the mask behind which we hide’, his students must think he’s Batman or something—and they wouldn’t be too far off. Clay is supported in ridiculous luxury by his pack family, has a fabulous office filled with a treasure trove of ancient artefacts, and a prestigious job that isn’t so demanding he can’t constantly drop everything to romp around the forest with his wolf bros. He can’t even be bothered to type up his own research notes, which is how he actually meets Elena in the first place.
Who do you think is the worst academic in pop culture? Did I miss someone great (i.e. awful) from Victorian popular culture? Who are your favourite on-screen academics? Let me know! I would love to make a follow-up list or two in the future.
This past weekend I was fortunate enough to spend some time in Copenhagen, where I visited the recently-opened Copenhagen Contemporary art museum. Before I stepped into the exhibition space to the left of the ticket desk, I was directed to a dark hall at the back of the museum, where Pierre Huyghe’s 20-minute film Untitled (Human Mask) was playing on a loop. The museum’s website introduces the film as follows:
A monkey wearing a mask of a young woman, trained as a servant, unconscious enactor of a human labour; and a drone, an unmanned camera, programmed to perform tasks, inhabit the same landscape of Fukushima, just after the natural and technological disaster.
Human Mask is dramatically different in tone and style from the viral videos of the monkey waiter that preceded it, but features the same monkey in the same restaurant, following the Fukushima disaster in 2011. In this post-apocalyptic environment, Huyghe deployed a camera drone, capturing the monkey’s fitful movements through the space and creating the impression of an interior and distinctly human life. The resulting film is both supremely uncanny and surprisingly moving.
No words are spoken in Human Mask, aside from several instances of a muffled, automated voice speaking Japanese in the distance, issuing what sounds like a public service announcement. The monkey, too, is silent save for the amplified sound of its breathing behind the mask. Nevertheless, sound has a real, physical presence in the film, especially when rain begins to pound on the tin roof towards the end.
Frieze.com’s Jennifer Higgie has written a brilliant review of the film, from when it was first exhibited in London back in 2014. She concludes:
Animals are indifferent to cameras and, as far as we know, to art, too. You can film them as much as you like, but there will never be any artifice to their performances – they’re anti-actors. It is impossible to know who – or what – a monkey is by imposing our values on them. This is the paradox Huyghe has set up: he has choreographed a deeply artificial scenario in order to explore something profoundly real about the assumed superiority of man over nature and about the ethics of using animals to satisfy very human needs. In all of this, Huyghe obviously implicates himself as well: his own actions demonstrate how inter-species communication is still an enigma – and that art, obviously, isn’t exempt from the problems that this poses. His film is a stark and brilliant reminder that humans are the only species who regularly practice deceit – and that the only ones we are capable of deceiving are ourselves. You can put a monkey in a mask but, however hard you try, you can’t make it believe a lie. It knows it’s a monkey. If only humans were as wise.
As a teacher, I deal with plagiarism all the time—usually in the sense of advising students how to avoid it in their academic essays. As an academic blogger, though, and a web editor before that, I’ve often had to deal with another form of plagiarism: the visual kind. Where most of us are clear on what constitutes textual plagiarism, some of us are less up-to-date on what visual plagiarism might entail. Which images are you allowed to use where, and when are you allowed to appropriate, manipulate, and replicate them without permission from the creator?
With kind permission from Follio.com, in this post you can find a few excerpts from their infographic on image manipulation and international copyright standards. Click here for the complete version.
“To find yourself in the spotlight for plagiarism would be concerning and could even be expensive, even worse when you have fallen foul of copyright laws without even knowing it. Most people have a basic level of understanding relating to copyright law but things have become a lot more complicated since we all started downloading text and images from the internet.
The PhD is a strange thing. You spend three years (or four, or seven, depending on where and how you’re working) fixated on a single topic. You read lots of things you don’t need to read, and explore many avenues that will turn out to be dead ends. Your time is largely yours to spend how you choose, although there are more than enough obligations to choose from. In many ways, it represents a kind of academic freedom that you’re unlikely to ever see again.
After your PhD, potential employers seem interested in everything but your thesis. They want to know what you have published, what you have taught, and what additional impact and engagement skills you can bring to the table. The interstitial space between the PhD and the mythical academic job is feared, densely populated, and vigorously prepared for. Speaking to those who are there already, it can also be incredibly soul-crushing. Applications require a great deal of time and effort, but you are competing against hundreds of other highly qualified people, often your friends, and your chances of success are slim. The sea of tick-boxes, online forms, and buzzwords can be depressingly dehumanising.
Most people at the training day were what we call ECRs (Early Career Researchers), and many were in that uncertain space between the PhD and full-time employment. The first thing that became immediately clear was how much everyone cared about their research. Yes, we exchanged the usual banter about the dire state of the job market, the gruelling commutes between part-time teaching jobs, and our lack of future prospects, but the subject always turned back around to the work. Most had a clear idea of why the research they were conducting into the eighteenth and nineteenth centuries was still important and relevant. If the world couldn’t see that, we would find a way to show them.
This was one important difference Mark Llewellyn, research director at the AHRC and one of the speakers, identified between scholars of his generation and ours. Where some established academics don’t see the need to make their research directly relevant to the public, ECRs tend to immediately see the benefits of getting their research out there. This willingness to get out there and do the work is partly born out of necessity, of course, but Llewellyn sees this as vital to the future of the humanities.
Llewellyn and the other speakers (full programme here) also did a great job of breaking down the meaningless buzzwords that circulate around funding and public events. What does ‘engagement’ actually mean, for instance? Addressing the neo-Victorianists in the room, Llewellyn asked whether the Victorians even mean the same thing to us as they do to the people we’re trying to engage. In the mad dash for employment we often feel it’s our job to somehow make people care about our work, but the process is much more organic than that. It requires connecting with specific people and communities, learning about their needs, and building up a relationship that is fulfilling for both parties. We need the public to engage, but they also need us to be engaged.
Sound a bit saccharine? Fortunately the tone of the day wasn’t at all patronising or abstract. Claire Wood had a few useful tips about identifying who this mystical ‘public’ actually is, and who we should really be talking to. Gillian Dow, Mary Guyatt, and Holly Furneaux all shared direct examples of the strengths and pitfalls of public engagement. The presenters also did a brilliant job of dispelling the Romantic myth of the scholar, who dispenses knowledge from an ivory tower to the ignorant masses. For each speaker, engagement had impacted their own research in profound and resoundingly practical ways. It was precisely the act of doing something for and with the public, without worrying about the immediate relevance to the research, that yielded new and unexpected results.
The training day also did a remarkable job of making us, the participants, feel like human beings again—no mean feat for an event with so many big names attached. Each speaker was very approachable, and was not only excited to talk about our ideas, but also keen to offer help and advice. The staff at Chawton House were kind and very professional, and the day was organised without a hitch. Because there weren’t too many of us—several dozen in total—there was just enough opportunity to chat without making the networking feel like a chore. The location itself was also quiet and intimate, and made the whole thing feel like a relaxing retreat rather than an ideas mill.
The only thing I would have liked more of was the workshops, for which we split into small groups based on our research and expertise. I’m still in the early stages of my public engagement plan, and so was matched with a group designed to generate some ideas for how to bring your research to specific groups of people. Our research was randomly paired with two categories: a target group and a type of project. Target groups included easier audiences (retired adults) and more challenging ones (youth not in education or employment), and it was interesting to think about which groups fit best with which topics. The projects ran the gamut from exhibitions to podcasts to board games. Everyone in my group was encouraging and full of ideas, and though we had to move quickly from project to project, many of us exchanged contact information so we could take these ideas further after the event.
Despite the renewed confidence in both academia and in public engagement this training day has given me, I remain convinced that the current state of affairs is not a good one. As Furneaux pointed out during her talk, the ability to build bridges outside of academia and engage in impact research still requires a fair amount of privilege. It is often done out of pocket or in a volunteer capacity, and not everyone has the luxury of that kind of free time or disposable income. Researchers are still required to be jacks of all trades—extroverts, scholars, teachers, self-publishers—which ignores the realities of twenty-first-century academia and the value of individuals who don’t fit this mould. Until we figure out how to build a fairer system, however, it’s good to know that people on both sides of the job divide are committed to being there for each other, and ensuring that this important research has a future.
While I’m currently an academic by day, by night (and in some of my holidays) I also do translation, editing, and other freelance work. Some of this is for the Adventist church, where my family have been members for several generations. While I’m not the most active member myself, the church and its 19-million-strong membership help out in health, education, and humanitarian aid around the world.
Though organised religion certainly has its drawbacks, I still think it can be a powerful way to mobilise people. This is why I offer my time and skills to the world church organisation, and to several local branches.
At the end of last year I wrote an article that was published in a national church magazine. You can read it here if you speak Dutch. The article gave readers a brief history of feminism. It also addressed an ongoing conflict between current world church leadership and the international communities it is meant to support. A few months after it was published in Dutch, an English version of the article was picked up by Spectrum, an independent Adventist news agency and magazine.
Like Christianity (or even Adventism), feminism is not a static entity, composed of people who think exactly alike and who all move in the same direction. Nor should it be—if it were, it would not be able to do the thing it aims to do: work toward equal rights for all people, regardless of their gender. In fact, the illusion of unity—unity of one group or even of the whole human race—was one of the problems feminism had to overcome along the way. Let me explain what I mean with a short history lesson.
Hillary Rodham Clinton may have been the first woman nominated for president by a major political party in the U.S., but she is certainly not the first woman to run for the office. In 1872, almost fifty years before any woman would be able to legally vote for her, Victoria Woodhull became America’s first female presidential candidate. A campaigner for women’s suffrage, she reasoned: “If Congress refuse to listen and to grant what women ask, there is but one course left to pursue. What is there left for women to do but to become the mothers of the future government?” If the government was not going to listen to women, women would just have to join the government. She lost spectacularly to Ulysses S. Grant, but her campaign drew a great deal of media attention, and she continued to campaign for women’s rights until she died at age eighty-eight—seven years after women were finally granted the right to vote.
Woodhull, and other women like her, formed what is called the “first wave” of modern feminism. The height of first-wave feminism occurred in the nineteenth and early twentieth centuries with the suffragettes and the women’s rights movement. These feminists were largely focused on the legal aspects of equal rights: the vote, the right to be educated, the right to own property.
The “second wave,” generally marked as taking place from the 1960s through the 1990s, came up against a different set of challenges. Equipped with the legal rights won by first-wave feminists, the second wave set out to negotiate questions of identity and social justice. Women were now legally “equal,” but deep-seated cultural biases still kept them from true equality on most fronts. They had to fight for the right to be women in the workplace, and in this new environment, they were forced to reconsider what it actually meant to be a woman and what it meant for a woman to be equal to a man.
Undaunted by these challenges, second-wave feminists succeeded in reforming many elements of society: they transformed higher education, business, politics, and reproductive rights; set up organizations and legislation for the protection of battered women; and raised awareness about the movement at a popular level. Second-wave feminism was loud and proud, and this is the wave we are still most likely to associate with the term “feminism.” These women also changed history in a deeper way. I work at a university, teaching and researching literary and cultural criticism.
Basically, I study how art and literature shape identity. In my field, feminism is hugely important—and not just because the feminist movement ensured my right to work in the first place.
For hundreds of years, people assumed that great art was universal. We believed that it held up a mirror to the world—that it showed us who we were as people. Then, in the middle of the twentieth century, we suddenly and shockingly realized that most of the art we had previously considered “great” was actually only reflecting a very small portion of the world, from a very specific point of view. Most of the art was made by men: specifically, well-off white men from the West. We discovered that “we” were not as united as we had thought and that our unity had only been possible because we were excluding everyone with a different perspective than ours—people who were women, who were black, who were poor or uneducated. These people did not matter in our society, and so their art could not possibly matter either. Then a group of feminist critics came along—at this point still mostly women—who, thanks to their nineteenth-century feminist forerunners, were finally allowed to participate in scholarly discourse. They pointed out, in a language other scholars could understand, that actually these other perspectives were everywhere and could be very valuable indeed.
The impact this realization had on the arts (and later on the sciences as well) cannot be overstated. There were endless, conflicting worlds and perspectives out there, just waiting to be recognized. The effect was revolutionary.
Last month I visited the Store Wars: 40 Years of Merchandise exhibition in Hoorn, NL. It was a small, intimate affair that took a loving look at the way Star Wars has affected merchandising and fan practices. A few weeks ago, I took a trip into London for the travelling Star Wars Identities exhibition at the O2 Centre. Despite sharing a broad subject, the two could not have been more different. Identities features a number of original props, costumes, and concept art from the pre-Disney era. In practice this meant I got to see stuff from the original trilogy (1977-1983), the prequel trilogy (1999-2005), and the Clone Wars animated series (2008-2015). The Force Awakens’ BB-8 also made an appearance.
The exhibition was, perhaps logically, much larger than the one in Hoorn. It also had quite a few more visitors. Tickets had to be booked for specific time slots, and once we arrived we were admitted in groups of 10 to 15. Although you sometimes had to wait a few moments for a path to the next costume or prop to clear, there was plenty of space and time for all of us to enjoy the exhibits—and to take lots of photographs, which almost everyone did.
Exhibits were often grouped by theme: droids, podracers, Jedi, ships. Major characters whose development was especially extensive or technical, like Yoda or Jabba the Hutt, had their own sections. I had no idea that it took so many concepts to arrive at the Yoda we know today. I’m half-relieved that Garden Gnome Yoda didn’t make the final cut, but would also love to see someone edit him into a fan version of Star Wars.
I’m not necessarily a believer in the sacredness of ‘original’ objects, and I won’t say I was paralysed with awe by Luke Skywalker’s jumpsuit, or the mural that hung behind Palpatine’s chair in Revenge of the Sith, but it was pretty amazing to be surrounded by so many objects that made up such a big part of my childhood. I’ve seen Ralph McQuarrie’s art so many times in books that it was somewhat surreal to see the pieces hanging up at an exhibition. Sort of like unexpectedly stumbling across a portrait of a distant relative at the National Portrait Gallery. There were many other great pieces of concept art as well.
The staging and lighting of the exhibits was very well done overall. I won’t lie: Darth Vader’s (or should I say, David Prowse’s?) suit, displayed in all its black glory against neon lights, gave me a little thrill. I was also excited to see the model Slave I and suit of armour belonging to its owner. As a girl I was most interested in the Jedi, but as an adult Boba Fett is my hero. The model Star Destroyer from A New Hope and the AT-AT and Snowspeeder used in the filming of The Empire Strikes Back were also personal favourites.
In addition to being visually stunning, there were a few neat technical aspects to Star Wars Identities as well. Each visitor was given a headset, which would activate when we faced certain exhibits. This let us focus on a particular video or audio clip without any distractions from other corners of the space. It also made me feel a bit like the exhibit was coming to life as I approached.
The highlight of the exhibition from an interactive standpoint, though, was definitely the ‘identities’ component. In addition to their headset, each visitor received a bracelet at the start of their tour. When touched to various sensors throughout the exhibition, this bracelet would allow visitors to create their own Star Wars characters through a series of choices. After choosing things like race, appearance, and name, you are taken through your own Star Wars story—from birth, to crisis, to the ultimate choice between good and evil. At the end of the exhibition you can view the character you created, and e-mail yourself a copy of your character’s story as a memento of your visit.
The exhibition also asked visitors to think about the process of narrative and identity in general. What makes people who they are? What makes a person good or evil? What forces shape the characters of Star Wars, and what forces shape us? While at times this narrative felt a little contrived, it gave visitors of all ages something fun to do while waiting to get a peek at another exhibit.
This is not the exhibition’s first stop, nor will it be the last. Star Wars Identities is at the O2 Centre until 3 September 2017, after which it will set up shop in a new city. If you’re a Star Wars fan near London with £25 to spend (£18 at the concession rate), it’s definitely worth a visit. All in all I spent about two hours looking, reading, and listening.
This week, I finally got a peek at the Spring syllabus for an undergraduate course I’m co-teaching. Sadly my students won’t be watching Blade Runner or reading Do Androids Dream of Electric Sheep? this year. I will be teaching a session on ‘the death of the book’, though, and science fiction plays an increasingly important part in this discussion.
Several years ago, Google released strange, surreal pictures its neural network ‘Deep Dream’ had painted from random noise. In an article entitled ‘Yes, androids do dream of electric sheep’, The Guardian described the process as follows:
What do machines dream of? New images released by Google give us one potential answer: hypnotic landscapes of buildings, fountains and bridges merging into one.
They were created by feeding a picture into the network, asking it to recognise a feature of it, and modify the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.
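The feedback loop described above can be sketched in a few lines. What follows is a toy stand-in, not Google’s actual network: the ‘feature detector’ here is just a hand-written edge detector on a 1D ‘image’, but the structure (recognise a feature, emphasise it, feed the result back in) is the same:

```python
import random

def feature_response(xs):
    # Toy "feature detector": summed squared difference between
    # neighbouring pixels (a crude edge detector).
    return sum((b - a) ** 2 for a, b in zip(xs, xs[1:]))

def emphasise(xs, lr=0.05):
    # One feedback step: nudge each pixel in the direction that
    # increases the detector's response (hand-written gradient ascent).
    n = len(xs)
    grad = [0.0] * n
    for i in range(n - 1):
        d = xs[i + 1] - xs[i]
        grad[i] -= 2 * d
        grad[i + 1] += 2 * d
    return [x + lr * g for x, g in zip(xs, grad)]

random.seed(0)
image = [random.gauss(0, 1) for _ in range(64)]  # start from random noise
before = feature_response(image)
for _ in range(20):                              # feed the result back in
    image = emphasise(image)
after = feature_response(image)
print(after > before)  # prints True: the loop exaggerates what it "sees"
```

Iterate this long enough and the detector’s pet feature swamps everything else, which is exactly why Deep Dream’s pictures dissolve into dogs, eyes, and pagodas.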
Since then, Google has also launched Magenta, which aims to use ‘machine learning to create compelling art and music’. One of its first products was a computer-generated piano variation on ‘Twinkle Twinkle Little Star’ (drum added later by a human).
Early last year, MIT Technology Review’s Martin Gayford looked at several of these examples of robotically generated art to try and get at the question of what makes art ‘art’ in the first place:
The unresolved questions about machine art are, first, what its potential is and, second, whether—irrespective of the quality of the work produced—it can truly be described as “creative” or “imaginative.” These are problems, profound and fascinating, that take us deep into the mysteries of human art-making.
Computers have broken into the art world, then, but what about writing? There, too, AI has been making great progress. The Verge’s Josh Dzieza delved into the strange world of computer-generated novels back in 2014, shortly after Google released its ‘Deep Dream’ images:
Narrative is one of the great challenges of artificial intelligence. Companies and researchers are working to create programs that can generate intelligible narratives, but most of them are restricted to short snippets of text. The company Narrative Science, for example, makes programs that take data from sporting events or financial reports, highlight the most significant information, and arrange it using templates pre-written by humans. It’s not the loveliest prose, but it’s fairly accurate and very fast.
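To make the template approach concrete, here is a deliberately simple sketch in that spirit (not Narrative Science’s actual system; the data, templates, and ‘significance’ rule are all invented for illustration): pull the key facts out of structured data, pick a pre-written sentence, and fill in the blanks.

```python
# Hypothetical box-score data for one basketball game.
game = {"home": "Lakers", "away": "Celtics",
        "home_score": 112, "away_score": 98,
        "top_scorer": ("James", 35)}

# Pre-written templates: a human authors the prose once,
# the program only chooses and fills one in.
templates = {
    "blowout": "{winner} cruised past {loser}, {ws}-{ls}, "
               "behind {star}'s {pts} points.",
    "close": "{winner} edged {loser} {ws}-{ls}; "
             "{star} led all scorers with {pts}.",
}

def write_recap(g):
    home_won = g["home_score"] > g["away_score"]
    winner, loser = (g["home"], g["away"]) if home_won else (g["away"], g["home"])
    ws = max(g["home_score"], g["away_score"])
    ls = min(g["home_score"], g["away_score"])
    key = "blowout" if ws - ls > 10 else "close"  # crude 'significance' rule
    star, pts = g["top_scorer"]
    return templates[key].format(winner=winner, loser=loser,
                                 ws=ws, ls=ls, star=star, pts=pts)

print(write_recap(game))
# → Lakers cruised past Celtics, 112-98, behind James's 35 points.
```

As Dzieza notes, the result is not lovely prose, but it is accurate and instant, which is precisely the trade such systems make.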
Water in Suspense reveals a hidden world. We discover a rich structure immanent in the water droplet, a structure not ordinarily accessible to our senses. In this way it’s similar to the Hubble Extreme Deep Field, which also reveals a hidden world. Both are examples of what I call Super-realist art, art which doesn’t just capture what we can see directly with our eyes or hear with our ears, but which uses new sensors and methods of visualization to reveal a world that we cannot directly perceive. It’s art being used to reveal science.
Although I’m not an artist or an art critic, I find Super-realist art fascinating. Works like the Hubble Extreme Deep Field and Water in Suspense give us concrete, vivid representations of deep space and the interior structure of a water droplet. For most of us, these are usually little more than dry abstractions, remote from our understanding. By creating vivid representations, Super-realist art provides us with a new way of thinking about such phenomena.
Regardless of whether we think machines will kill art, or take it to the next level, I’m very much looking forward to bringing these kinds of questions to my first-years.