What follows is a writeup of the talk I gave at IMPAKT Festival this year in Utrecht on the subject of memory and technology. I have to confess that there was a lot more profanity when I delivered it, largely due to having been ill for months.
I'm trained as a designer but I don't really design anything. I find it a useful thing to say rather than do. I spend most of my time thinking about, writing about and talking about the relationship between technology, politics and design. So when the guys from IMPAKT asked me if I had any opinions about the relationship between memory and technology I was like, 'Hell yeah, I have opinions about everything.' And so here are those opinions.
This talk takes its title from Jean Baudrillard's very famous book, The Gulf War Did Not Take Place, in which Baudrillard argues that the 1991 Gulf War did not take place. Obviously. Not that there was literally no Gulf War, he's not a Gulf War denialist, but that the Gulf War as it was presented through 24-hour rolling media was not the Gulf War that was lived on the ground. The Gulf War was a highly orchestrated media 'atrocity' masquerading as a war, and Baudrillard describes how manipulation of the media changed the narrative that was received in Europe and the US.
So the first part of this talk is about how the manipulation of media changes the remembered narrative of those who receive it. This is a fairly prosaic idea, hardly new. We're used to states and corporations controlling cultural narratives through the media, but I think we live at a point where, interestingly, individuals have a high degree of control over their own personal narrative and, through the connected nature of things, begin to change the wider socio-political narratives we share, perhaps unknowingly.
In the background is a video from a cruise missile nosecam. The Gulf War was also the first war where we had images of the conflict directly from the weapons themselves. The nosecam of a cruise missile shows great power, control and high technology, but at the same time is a mechanical contradiction: the camera is destroyed at the moment the weapon fulfils its purpose. And so we have all the foreplay of war, the showy aspects of control and power, without any of the terror, chaos and suffering that the cruise missile brings. The cruise missile becomes a form of media.
And so the second part of this talk, the second aspect at play, is sight. We're visual animals: most of our memories are visual (even if visual memory is not as strong as olfactory memory) and we tend to trust what we see. So the proliferation of small, mobile cameras leads to new conceptions of how memories are recorded and what they mean.
And so we begin with the cinematic tour de force that is Google's How It Feels (through Google Glass). Two minutes and fifteen seconds of film history. Glass, for those that don't know, was Google's ill-fated attempt to create a consumer augmented reality market. Google Glass was a headset that you would wear on your head. It only did four things that no-one ever badly wanted, and so it didn't get very far. The key thing is that it was a head-mounted camera with which you could record and stream stuff. And that's how Google tried to sell the thing: through the experiences it enables.
You only see the product itself for six seconds in the entire clip, but you get to do a wealth of weird stuff: flying balloons and planes, sculpting tigers from ice, playing with a dog on a deserted beach and so on. Essentially, now that your hands are free you can better experience things, and experience is inseparable from the recording of those experiences. Google are telling you that their technology enables experiences, not how it works, what it does, or what it's for.
Experience has become a really central part of our technological narrative, born largely from the Protestant work ethic, and there are two distinct flavours of experience. The first is the kind of sexy, rarefied Instagram experience: swimming with dolphins, flying planes, eating cakes. And you have to photograph and upload them or they might as well not have happened. They're usually accompanied by some reminder of your own mortality too - 99 Things To Do Before You Die - as if not swimming with dolphins is just a waste of good oxygen.
The second meaning of experience is 'expertise' and the two things are completely interconnected. At job interviews you're asked about your experience. Experience adds to your expertise and makes you more socially valuable and you evidence this experience by photographing it and uploading it.
Here's the advertisement for the Apple iPhone 5. There's very little information about what it actually is - its battery life or processing power, size or weight - just lots of information about what it can do. And that is take photos, as if this were some brand-new, never-before-conceived-of technology. Apple tells you how the technology will enable you to gather and share more experiences than ever before, more photos of cakes and lakes than your mates.
This sales pitch of technology as an experience-enabler is hardly new. We've had 'sex sells' for a damn sight longer than we've had GPS or the Internet. Here's the compulsory vintage futurism of any talk like this: what I believe is the oldest cell phone advert, from Radio Shack in 1989. And already we can see that the technology is sold on its experience-generating potential. You can now phone people from the beach! Work hard and have fun!
But there's a crucial difference between the cell phones of old and the cell phones of now. These vintage cell phones augment your experience. They allow you to multi-task and become more mobile (or, perhaps force you to multi-task and become more mobile.) But they don't record your experiences.
Today's devices, by contrast, aren't just recording your experiences: they're capturing high-resolution, detailed versions of parts of your life and remembering them.
Simplicam, which can act as a stand-in for any one of the 99% of Internet of Things projects that are about surveillance and mass data capture. The Internet of Things is rapidly becoming the world's largest surveillance infrastructure, and one that we're weirdly excited to invite into our homes.
The aim of this infrastructure is to constantly record, monitor and store data on you and your behaviour.
Amazon Echo. It kind of staggers me that when Amazon said they wanted to put an always-on microphone into people's homes, everyone just thought that was OK. A corporation manufactures, sells and distributes spyware and everyone just lapped it up. The purpose of the Amazon Echo is to act as a hub for Internet of Things products, but also to find ways to make it easier for you to buy stuff from Amazon. At its core it is two things: data-gathering, listening to things you say in the house in order to better target products at you; and a personal shopper, directly responding to your impulsive needs for soap or blue pants.
'You can control your SmartTV, and use many of its features, with voice commands. If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. In addition, Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.' So yeah, this is Samsung telling you not to vocalise any sensitive or intimate information. In your living room.
These systems gather huge amounts of data about you, but only the data that's relevant to them. They're not interested in why you like blue pants, only that you do, and they use that to sell you blue pants in the most effective way possible. They build what's called a 'data double': a slightly inaccurate chalk outline of who you are, made only of the data points of relevance to that company or corporation, and inevitably flawed by the technology. This is why it gets weird when these data doubles try to reach out to us. We, and they, imagine that gathering enough data is enough to make them empathetic and human-ish.
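The chalk-outline idea can be made concrete with a toy sketch. This is my own illustration, not any company's actual schema: from a rich lived event, the system keeps only the fields its model cares about and discards the context that made the event meaningful.

```python
# Toy "data double" (illustrative only - not any real retailer's schema).
# A lived event is rich with context; the double keeps only the data
# points relevant to selling you things.

lived_event = {
    "item": "blue pants",
    "price": 12.99,
    "bought_for": "a friend recovering in hospital",  # the part that mattered
    "mood": "worried",
}

# Hypothetical set of fields a retailer's model would care about.
RELEVANT_FIELDS = {"item", "price"}

def to_data_double(event):
    # The chalk outline: same shape as the person, most of them missing.
    return {k: v for k, v in event.items() if k in RELEVANT_FIELDS}

double = to_data_double(lived_event)
print(double)  # {'item': 'blue pants', 'price': 12.99}
```

Everything the double "knows" is accurate, which is exactly why its attempts at empathy go wrong: the fields that would make a response humane were never captured in the first place.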
It's accurate, but completely out of context. And just getting this in the middle of the day out of nowhere kind of throws you for a bit of a loop.
This is Eric Meyer's Facebook page from a year ago. Facebook decided to send him a prompt to share his year with others and showed him an image of his daughter, who had died a few weeks earlier from a terminal illness. Now, Eric Meyer was the first to admit that Facebook aren't dicks. They don't wantonly go about trying to emotionally bully people. It's just that we assume these systems are so advanced and so sophisticated that they have some built-in human sensitivity; they don't. Facebook is a database, not a friend. It's got great data on pain but it can never understand the embodied experience of feeling pain. It can't empathise, it can only react to the data available. It's not good at being your friend, but the narrative we build, and it builds, suggests that it is.
Gaslighting takes its name from the film of the same name, and is essentially a process whereby someone convinces someone else that they are imagining things that are happening, or tells them that things are happening which can't be observed.
The proliferation of fallible, manipulable connected devices and our emotional reliance on them leaves the territory for gaslighting wide open and leaves us with sticky problems as far as questioning our own reality goes.
Desert Shield, with an angry-looking Colin Powell in the background. This was the 'media circus' of the Gulf War. The main reason the broadcast version of the Gulf War was so tightly controlled was that the press were so tightly regimented. The US dubbed this regimentation Annex Foxtrot, forbidding the press from going into the field, talking with ordinary soldiers or civilians, or leaving strict enclosures. This was the mechanism by which the Gulf War story was told to Europe and the US.
Now, it doesn't matter hugely. No one was pointing at this specific drone and saying it specifically did something - it was used as a generic placeholder image. However, it skews the visual narrative of drone warfare when the collective memory reaches for a render whenever it thinks about drone warfare.
It's interesting to wonder whether, in fifty years, visual anthropologists will look back at the realistic renders populating our visual culture and wonder how we so un-cynically accepted the creep of irreality into our collective memory. And who knows what effect it might have in years to come?
Fritz Lang's Woman in the Moon. Lang said that the rocket launch scene wasn't tense enough, so he introduced the countdown clock as a way to build tension. NASA loved it, and now a dependence on time is inexorably tied to the popular imagination of space flight.
The Liverpool and Manchester Railway. For most of human history, people operated on localised time. It wasn't until the construction of massive communications and transport infrastructures that a standardised sense of time was required. Before the railway, Manchester and Liverpool kept different times. This was the first timetabled railway line, and the company had to enforce standardised time so that people could understand when trains departed and arrived. Within a few years the standardised time used in timetabling had spread across the country, and by 1880 the western world was on Greenwich Mean Time, all aligning their clocks to London time. With standardised time, clocks and watches became the first networked devices.
Google's Memories For The Future project. In it, Google put the Street View images of Fukushima from before the tsunami out for people to stroll through and remember their old homes, places of work and so on. It also appeals to the morbidly curious, I suppose. But that possessive title is a bit of a giveaway, implying Google's self-appointed role as guardian of the collective memory. These aren't your memories you're strolling through; they're Google's. And as you stroll through them you feed Google data about who you are, what you're looking at and what you do while you're there. They've constructed a dead town as a data playground.
Sure, I'm being cynical. Google most likely genuinely thought this would be a good idea for those that had lost so much, and for some of them it probably is. But good intentions don't obscure the underlying ideology-as-business-model. Google makes money from you doing stuff. It wants you to do stuff and will make money from all the stuff you do, including looking at images of your destroyed home.
The Manchurian Candidate, which I consider superior to the original. In the film, Liev Schreiber's character undergoes brainwashing during his time in the Gulf War to become a sleeper assassin - his memories, and those of his comrades, entirely fabricated in the lab.
DeepDream is really interesting. By now we've all seen the weird, kinda horrifying images it creates everywhere and enjoyed their novelty. There's something really human about celebrating the aesthetic failures of technology. It's why Instagram is so popular. The best analogy I've heard for how DeepDream works is that it's like asking a child to draw a house. All children draw pretty much the same house because they've had limited exposure to images of houses from which to synthesise the idea of what a house is, which is why children's houses look relatively generic: square, four windows, a door and a triangular roof.
DeepDream is kind of like that, but it's only ever seen pictures of dogs.
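The mechanism behind the dog-hallucination can be sketched in a few lines. This is a toy illustration of the gradient-ascent idea (my own sketch, not Google's actual code): instead of training a network to match images, you hold a "learned" feature detector fixed and adjust the image so the detector fires more strongly. Here a single fixed weight vector stands in for a feature the network already knows, say "dog".

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)      # frozen feature detector ("what a dog looks like")
image = rng.normal(size=64)  # starting image: random noise

def activation(x):
    return float(w @ x)      # how strongly the detector responds to the image

before = activation(image)
for _ in range(100):
    # Gradient ascent on the image itself: the gradient of (w . x)
    # with respect to x is just w, so nudge the image toward w.
    image += 0.1 * w
after = activation(image)

# The image has been pulled toward what the detector expects to see -
# which is why DeepDream hallucinates dog faces into everything.
print(before, after)
```

Each step adds a positive amount to the activation, so the detector ends up seeing its favourite thing everywhere it looks, regardless of what the image started as.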
Deep Stereo is a spin-off of DeepDream using similar neural network technology. Deep Stereo can interpolate between still images to create and understand 3D space and movement. Now, this may just seem like great technology that would make Street View a lot more fun. But think back to our house-drawing child. They might grow up to be an architect and really study houses to understand how they work, then synthesise that knowledge into new types of houses. Or they might design film sets. Or they might run a Ponzi scheme in property. It's all based on the same basic understanding and analysis of the raw data of what a house is.
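To see what "interpolating between still images" actually asks of a system, here's a crude non-learned baseline (my sketch, not Deep Stereo's method): simply cross-fading two frames. With no understanding of depth, an object that moved between views becomes two half-strength ghosts rather than one object at an in-between position; that gap is precisely what the learned model fills.

```python
import numpy as np

# Two 4x4 greyscale "views" of the same scene: a vertical object sits at
# column 1 in the left view and column 2 in the right view.
left = np.zeros((4, 4)); left[:, 1] = 1.0
right = np.zeros((4, 4)); right[:, 2] = 1.0

def blend(a, b, t):
    # Naive view interpolation: t=0 gives the left view, t=1 the right,
    # t=0.5 the pixel-wise midpoint. No geometry is involved.
    return (1 - t) * a + t * b

mid = blend(left, right, 0.5)
# Instead of one object at "column 1.5", we get two half-strength ghosts:
print(mid[0])  # [0.  0.5 0.5 0. ]
```

A system like Deep Stereo learns to reason about where surfaces are in 3D and synthesises a plausible single in-between view instead of this ghosted average, which is what makes it feel like it understands space.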
Think back to the drone and Ikea renders. Those things are innocuous. Deep Stereo implies a future in which Google can quite convincingly write and manufacture memories that never happened, to an incredible degree of accuracy.
And in this desperation to constantly harvest and grab memories, we're making the cameras and eyes that record our experiences ever smaller, more ubiquitous and easier to use. In doing so, we unintentionally invite other actors to control our perception of our experiences. A GoPro, for instance, just happens to fit in a seagull's beak.