This Did Not Take Place, IMPAKT Festival 2015

What follows is a writeup of the talk I gave at IMPAKT Festival this year in Utrecht on the subject of memory and technology. I have to confess that there was a lot more profanity as I delivered the talk, due mostly to having been ill for months.
I'm trained as a designer but I don't really design anything. I find it a useful thing to say rather than do. I spend most of my time thinking about, writing about and talking about the relationship between technology, politics and design. So when the guys from IMPAKT asked me if I had any opinions about the relationship between memory and technology I was like: 'Hell yeah, I have opinions about everything.' And so here are those opinions.

The title of this talk takes its name from the very famous book by Jean Baudrillard, The Gulf War Did Not Take Place. In the book, Baudrillard argues that the 1991 Gulf War did not take place. Obviously. Not that there was literally no Gulf War (he's not a Gulf War denialist), but that the Gulf War as it was presented through 24-hour rolling media was not the Gulf War that was lived on the ground. The Gulf War was a highly orchestrated media 'atrocity' masquerading as a war, and Baudrillard talks about how manipulation of the media changed the narrative that was received in Europe and the US.

So the first part of this talk is about how the manipulation of media changes the remembered narrative of those who receive it. This is a kind of prosaic idea, hardly new. We're used to states and corporations controlling cultural narratives through the media but I think we live at a point where, interestingly, individuals have a high-degree of control over their own personal narrative and through the connected nature of things begin to change the wider socio-political narratives we have, perhaps unknowingly.

In the background is a video from a cruise missile nosecam. The Gulf War was also the first war where we had images of the conflict directly from the weapons themselves. The nosecam of a cruise missile shows great power, control and high technology, but at the same time is a mechanical contradiction: the camera is destroyed at the point where the weapon fulfils its purpose. And so we have all the foreplay of war, the showy aspects of control and power, without any of the terror, chaos and suffering that the cruise missile brings. The cruise missile becomes a form of media.

And so the second part of this talk, the second aspect at play, is sight. We're visual animals, most of our memories are visual (even if visual memory is not as strong as olfactory memory) and we tend to trust what we see. So the proliferation of small, mobile cameras leads to new conceptions of how memories are recorded and what they mean.

And so we begin with the cinematic tour-de-force that is Google's How It Feels (through Google Glass). Two minutes and fifteen seconds of film history. Glass, for those that don't know, was Google's ill-fated attempt to create a consumer augmented-reality market. Google Glass was a headset that you wore on your head. It only did four things, none of which anyone ever wanted badly, and so it didn't get very far. The key thing is that it was a head-mounted camera from which you could record and stream stuff. And that's how Google try to sell this thing: through the experiences it enables.

The product itself appears for only six seconds in the entire clip, but you get to do a wealth of weird stuff: flying balloons and planes, sculpting tigers from ice, playing with a dog on a deserted beach and so on. Essentially, now that your hands are free you can better experience things, and experience is inseparable from the recording of those things. Google are telling you that their technology enables experiences, not how it works, what it does, or what it's for.
Experience has become a really central part of our technological narrative, born largely from the Protestant work ethic, and there are two distinct flavours of experience. The first is the kind of sexy, rarefied Instagram experience: swimming with dolphins, flying planes, eating cakes. And you have to photograph and upload them or it might as well not have happened. They're usually accompanied by some reminder of your own mortality too, 99 Things To Do Before You Die, as if not swimming with dolphins is just a waste of good oxygen.

The second meaning of experience is 'expertise', and the two things are completely interconnected. At job interviews you're asked about your experience. Experience adds to your expertise and makes you more socially valuable, and you evidence this experience by photographing it and uploading it.

Here's the advertisement for the Apple iPhone 5. There's very little information about what it actually is: its battery life or processing power, size or weight. Just lots of information about what it can do. And that is take photos, as if this is some brand-new, never-before-conceived-of technology. Apple tells you how the technology will enable you to gather and share more experiences than ever before, more photos of cakes and lakes than your mates.

This sales pitch of technology as an experience-enabler is hardly new. We've had 'sex sells' for a damn sight longer than we've had GPS or the Internet. Here's the compulsory vintage futurism of any talk like this: what I believe is the oldest cell phone advert, from Radio Shack in 1989. And already we can see that the technology is sold on its experience-generating potential. You can now phone people from the beach! Work hard and have fun!

But there's a crucial difference between the cell phones of old and the cell phones of now. These vintage cell phones augment your experience. They allow you to multi-task and become more mobile (or, perhaps force you to multi-task and become more mobile.) But they don't record your experiences.
The average smartphone has 19 sensors in it: light, proximity, two cameras, three microphones (one of which is ultrasonic) and touch. Positioning comes via GPS, Wi-Fi, cellular, Near Field Communication and Bluetooth. And it contains an accelerometer, magnetometer, gyroscope, pressure sensor, and temperature and humidity sensors.

In fact these devices aren't just recording your experiences, they're capturing high-resolution, detailed versions of parts of your life and remembering them.
Things remember you. This is the Simplicam, which can stand in for any one of the 99% of Internet of Things projects that are about surveillance and mass data capture. The Internet of Things is rapidly becoming the world's largest surveillance infrastructure, and one that we're weirdly excited to invite into our homes.

The aim of this infrastructure is to constantly record, monitor and store data on you and your behaviour.
This is the Amazon Echo. It kind of staggers me that when Amazon said they wanted to put an always-on microphone into people's homes, everyone just thought that was OK. A corporation manufactures, sells and distributes spyware and everyone just lapped it up. The purpose of the Amazon Echo is to act as a hub for Internet of Things products, but also to find ways to make it easier for you to buy stuff from Amazon. At its core are data-gathering (listening to things you say in the house in order to better target products at you) and acting as a personal shopper (directly responding to your impulsive needs for soap or blue pants).
Worse still is something like the Samsung Smart TV. Samsung are kind of explicit that the purpose of the microphone is to allow you to control the TV by voice, but yet again, the microphone is always on, always gathering data and always sending it off to third parties to be analysed. Samsung later had to send out a disclaimer:
You can control your SmartTV, and use many of its features, with voice commands. If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. In addition, Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.
So yeah, this is Samsung telling you not to vocalise any sensitive or intimate information. In your living room.

These systems gather huge amounts of data about you, but only the data that's relevant to them. They're not interested in why you like blue panties, only that you do, and then they use that to sell you blue panties in the most effective way possible. They build what's called a 'data double': a slightly inaccurate chalk outline of who you are, made only of the data points relevant to that company or corporation and inevitably flawed by the technology. This is why it gets weird when these data doubles try to reach out to us. We, and they, imagine that gathering enough data is enough to make them empathetic and human-ish.

Here's Facebook doing its whole memory-lane thing at the moment. I'm really bad at Facebook, I mostly use it for self-promotion, so it's constantly prodding me to interact. Three years, it tells me, I've been friends with Paul Revell. My dad.

It's accurate, but completely out of context. And just getting this in the middle of the day out of nowhere kind of throws you for a bit of a loop.
And it could have been worse. This is Eric Meyer's Facebook page a year ago. Facebook decided to send him a prompt to share his year with others and showed him an image of his daughter, who had died a few weeks earlier from a terminal illness. Now, Eric Meyer was the first to admit that Facebook aren't dicks. They don't wantonly go about trying to emotionally bully people. It's just that we assume these systems are so advanced and so sophisticated that they have some built-in human sensitivity. They don't. Facebook is a database, not a friend. It's got great data on pain but it can never understand the embodied experience of feeling pain. It can't empathise; it can only react to the data available. It's not good at being your friend, but the narrative we build, and it builds, suggests that it is.
Which brings us on to the idea of gaslighting. Gaslighting is a term for psychological and emotional manipulation, bullying and violence, particularly through the control of environmental conditions. The term comes from the film of the same name and describes a process where someone convinces someone else that they are imagining things, or tells them that things are happening which can't be observed.

The proliferation of fallible, manipulable connected devices and our emotional reliance on them leaves the territory for gaslighting wide open and leaves us with sticky problems as far as questioning our own reality goes.
This brings us rather neatly back to our idea of a non-existent Gulf War. Here's Dick Cheney addressing the press during Desert Shield with an angry-looking Colin Powell in the background. This was the 'media circus' of the Gulf War. The main reason that the broadcast version of the Gulf War was so tightly controlled was that the press were so tightly regimented. The US dubbed this regimentation Annex Foxtrot, and they forbade the press from going into the field, talking with ordinary soldiers or civilians, or leaving strict enclosures. This was the mechanism by which the Gulf War story was told to Europe and the US.
We have the same thing today, but perhaps more inadvertently, in our coverage of warfare. If you google the word 'drone', this image crops up in the top ten hits. Five years ago it, and various versions of it, would have filled the top ten. For years this was the de facto image of a drone used in coverage of what we were learning about drone warfare. It littered blogs, newspapers and social media. Last year, though, it was revealed to be a rendering created by a hobbyist.

Now, it doesn't matter hugely. No one was pointing at this specific drone and saying it specifically did something; it was used as a generic placeholder image. However, it skews the visual narrative of how we think about and visualise drone warfare when collective memory reaches for a render.
More prosaically, I discovered last year that 26% of Ikea's images are renders. Probably more now. It makes sense to produce renders rather than pay to set up and photograph these sets. However, there's some strange dissonance about publishing aspirational imagery of a thing that we think is real and isn't. This living room never existed; the people and lifestyle implied by it never did. That's no different to advertising or any other kind of vision, but it exists in a visual language of reality as opposed to futurity, and introduces an interesting element of impossibility.
Stalin was well known for his manipulation of imagery. He erased enemies and the disappeared from the collective memory of Soviet Russia as a way to cement his power and control the narrative of history. Now, and then in the West, we look back on this and see it as crude and dictatorial. We think that citizens must have been cynical and sceptical about these images. But in reality, at the time, like most contemporary media, they slipped seamlessly into the collective memory, just another part of the story. Soviet citizens may have known the images were doctored, but they didn't view them critically as we do now.

It's interesting to wonder whether, in fifty years, visual anthropologists will look back at the realistic renders populating our visual culture and wonder how we so uncynically accepted the creep of irreality into our collective memory. And who knows what effect it might have in years to come?
This is a very famous image and one of much contention. It appears to show a scene in Berlin taken by a Google Street View car where a Smart car has pulled over and a woman (presumably formerly in the car) is giving birth on the side of the road. In 2010 this was in the press a lot with the obvious question - is it real?
But that's not quite the question. There's more nuance to it. You see, we instantly recognise and see through the fact that it's a Street View image. The visual cues are all there: the map in the corner, the watermarking and the camera style. But we could also ask not just whether it's a real birth, but whether it's a real Street View photo. How hard would this be to knock up in Photoshop? We're so familiar with this medium of global representation that we don't really critically question the context of the photo. We're a bit like Soviet citizens being fed doctored photographs: we see through the context and question what's inside rather than the whole setup itself. This might be a real birth and a fake Google Street View image.
In fact, if we look back on the same scene today, even more questions are raised. The ad agency that the image was taken in front of denies having anything to do with it, and yet what appears to be the same Smart car sits at the bottom of the image. The hospital on the other side of the road quite sensibly suggested that had a live birth taken place outside, they would have noticed. Even more interesting are the dates. The original image is watermarked 2010, and this one is copyrighted from 2008 with a 2012 watermark and a significant growth in the trees. Another aspect of this world-remembering machine is the ability to forensically examine the past.
Of course, we can't talk about memory without reference to time. The image here is from Fritz Lang's Woman in the Moon. Lang said that the rocket launch scene wasn't tense enough, so he introduced the countdown clock as a way to build tension. NASA loved it, and now a dependence on time is inexorably tied to the popular imagination of space flight.
In fact, the history of time itself is deeply tied to technology. This is the 1830 Liverpool and Manchester Railway. For most of human history, people operated on localised time. It wasn't until the construction of massive communications and transport infrastructures that a standardised sense of time was required. Before the railway, Manchester and Liverpool kept different times. This was the first timetabled railway line, and the company had to enforce standardised time so that people could understand when trains departed and arrived. Within a few years, the standardised time used in timetabling had spread across the country, and by 1880 the Western world was on Greenwich Mean Time, all aligning clocks to London time. With standardised time, clocks and watches became the first networked devices.
We can look at the Global Positioning System as a continuation of this project: sending satellites into space to more accurately pinpoint the position of objects in time and space, to better synchronise the working of the Earth as a planetary machine.
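That connection between time and position is quite literal. A toy sketch of the principle (my own illustration, in two dimensions, with idealised delay measurements; real GPS also has to solve for the receiver's clock error, which is why it needs a fourth satellite):

```python
import numpy as np

# Each satellite broadcasts the time it sent its signal; the receiver
# measures the arrival delay, so distance = speed of light * delay.
c = 3.0e8                                   # speed of light, m/s
sats = np.array([[0.0, 0.0],                # known satellite positions
                 [10.0, 0.0],
                 [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])             # where the receiver really is

delays = np.linalg.norm(sats - true_pos, axis=1) / c   # "measured" delays
ranges = c * delays                                     # back to distances

# Intersect the range circles: subtracting the first circle's equation
# from the others linearises the problem into A @ p = b.
A = 2 * (sats[1:] - sats[0])
b = (ranges[0]**2 - ranges[1:]**2
     + np.sum(sats[1:]**2, axis=1) - np.sum(sats[0]**2))
est = np.linalg.solve(A, b)                 # recovered position
```

The whole trick rests on the delays being measured against a single shared clock, which is why GPS satellites carry atomic clocks and why the system doubles as a global time-distribution service.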
The Internet is a large physical infrastructure that relies on being very carefully synchronised and standardised to work at all. The construction of this 'world brain' of GPS and the Internet, one that can very accurately and quickly compute the relative position in time and space of everything connected to it, means that we more and more think of the mind as an architectural space, as the ancients did.
For a long time, the modernists considered the body and mind to be liquid: ethers, biles and juices. Yet now we see the popular resurgence of 'memory palaces'. Once again we are starting to visualise the mind as a physical architectural space. After all, if the Internet behaves like a human brain and possesses architectural dimensions, it makes sense to assume that the brain is much the same.
And what does Google do with its memory palace? Well, it allows you to stroll through it, and kind of add to it. This is the Memories For The Future project, in which Google put the Street View images of Fukushima from before the tsunami out for people to stroll through and remember their old homes, places of work and so on. It also appeals to the morbidly curious, I suppose. But that possessive title is a bit of a giveaway, implying Google's self-appointed role of guardian of the collective memory. These aren't your memories you're strolling through; they're Google's. And as you stroll through them you feed Google data about who you are, what you're looking at and what you do while you're there. They've constructed a dead town as a data playground.

Sure, I'm being cynical. Google most likely genuinely thought this would be a good idea for those that had lost so much, and for those people it probably is. But good intentions don't change the underlying ideology-as-business-model. Google makes money from you doing stuff. It wants you to do stuff and will make money from all the stuff you do, including looking at images of your destroyed home.
Which kinda leads us on into the last section: Manufacturing Memories. The still there is from the remake of The Manchurian Candidate, which I consider superior to the original. In the film, Liev Schreiber undergoes brainwashing during his time in the Gulf War to become a brain-dead assassin, his memories and those of his comrades entirely fabricated in the lab.
So DeepDream is really interesting. By now we've all seen the weird, kinda horrifying images created by it everywhere and enjoyed their novelty. There's something really human about us celebrating the aesthetic failures of technology. It's why Instagram is so popular. The best analogy I've heard for how DeepDream works is that it's like asking a child to draw a house. All children draw pretty much the same house because they've had limited exposure to images of houses from which to synthesise the idea of what a house is, which is why children's houses look relatively generic: square, four windows, a door and a triangular roof.

DeepDream is kind of like that but it's only ever seen pictures of dogs.
So when you ask it to draw the aliens meme guy it just draws it in dogs. The purpose of this exercise is two-fold. Firstly, computers can't recognise discrete objects in images well. Humans are blessed with the ability to visually comprehend, describe and represent a cup; a computer needs that data codified. DeepDream allows Google to recognise cups in pictures by comparing them to dogs (in a way). Secondly, much like a child, that analysis allows DeepDream to create new images of cups. Which is where it gets interesting.
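Mechanically, DeepDream is gradient ascent on the image itself: instead of adjusting the network to match the picture, you adjust the picture to excite the network. A toy sketch of that idea (mine, not Google's code; a random linear layer stands in for a trained network layer):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))     # stand-in for a trained layer's weights
x = rng.standard_normal(16)          # stand-in for the input "image"

def activation_energy(x):
    """How strongly the layer fires on this input."""
    return float(np.sum((W @ x) ** 2))

before = activation_energy(x)
for _ in range(50):
    grad = 2 * W.T @ (W @ x)                   # d(energy)/dx, analytically
    x += 0.01 * grad / np.linalg.norm(grad)    # small normalised ascent step
after = activation_energy(x)
# after > before: the input has been nudged toward whatever the layer "sees"
```

In the real system the layer has mostly seen dogs, so nudging any image toward what excites it smears dogs into everything, which is exactly the effect in those images.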

Deep Stereo is a spin-off of DeepDream using similar neural network technology. Deep Stereo can interpolate between still images to create and understand 3D space and movement. Now, this may just seem like great technology that would make Street View a lot more fun. But think back to our house-drawing child. They might grow up to be an architect and really study houses to understand how they work, and then synthesise that knowledge into new types of houses. Or they might design film sets. Or they might run a Ponzi scheme in property. It's all based on the same basic understanding and analysis of the raw data of what a house is.
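To see why learned interpolation matters, compare it with the naive alternative. A toy contrast (my own illustration, not Deep Stereo): simply cross-fading two frames blends pixels, so anything that moved between the shots turns into two ghosts rather than one object at an in-between position. Deep Stereo instead infers scene geometry, so the synthesised frame looks like a photograph from a real in-between camera.

```python
import numpy as np

# Two 4x4 "frames" of the same object seen from two camera positions:
left = np.zeros((4, 4)); left[:, 0] = 1.0    # object at the left edge
right = np.zeros((4, 4)); right[:, 3] = 1.0  # object at the right edge

# Naive view interpolation: a pixel-wise cross-fade between the frames.
halfway = 0.5 * left + 0.5 * right

# Result: two half-strength ghosts at the edges, nothing in the middle,
# because pixel blending knows nothing about the scene's geometry.
```

That gap, between blending appearances and modelling the world, is what the neural network crosses.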

Think back to the drone and Ikea renders. Those things are innocuous. Deep Stereo implies a future in which Google can quite convincingly write and manufacture memories that never happened, to an incredible degree of accuracy.
Milan Kundera's quote is quite literal. Of course some people do rewrite history. But the companies harvesting data are doing it because of the inflated fiction of the value of data. We value our past and so we invest our data back into it: buying surveillance cameras, more aggressively experiential smartphones and more and more accounts. We share stuff because we're vain and want to be remembered or noticed. This gives value to those who guard this endless stream of stuff and in turn makes them more valuable. A feedback loop exists between your future memories and the rapid expansion of technological power today. Google probably do just want to get better and better at capturing the world, but in their quest to do so, they're developing the tools to rewrite future history.
I guess the pithy takeaway here is our relationship with the things that remember us. We're not remembered by people so much as by things: objects and devices constantly busy remembering us, some we asked to, some we didn't. And they send all these memories off to places that we may not know about or understand or want to have them. But like I said, we're vain like that, and we don't have as much control as we'd like. Your Facebook account is not your own; it's everyone's and Facebook's. It is a database of advertising targets, not a family photo album. They're both databases, and they both use similar design to convince us to share and record, but their purposes are vastly different.

And in this desperation to constantly harvest and grab memories, we're making the cameras and eyes that record our experiences ever smaller, more ubiquitous and easier to use. In doing so, we unintentionally invite other actors to control our perception of our experiences. A GoPro, for instance, just happens to fit in a seagull's beak.