At least three people said ‘I love your blog’ in the last week and each time I felt crestfallen that I’ve been so lax with this obligation. I really do try and do everything. I think I’ve seen films where, in sound recording studios, some of the sliders on the big desk of sliders move on their own; I’m not sure why this is, I assume it’s something programmed in; certainly I’ve never been around audio equipment that sophisticated. My own musical recordings were all done with a combination of GarageBand and crates of Kronenbourg. Anyway, that’s how life works sometimes: you turn something up at one end and all the way down the other end of the desk a slider is automatically tweaked down.
Making Five Stories From The Distant Future
For the last two weeks I’ve been spending evenings and early mornings working on a series of renders for the opening keynote I gave at Orgatec in Cologne last week. The whole thing was a massive faff but quite fun and diverting at a time when I really needed something creative to call my own. Sure, I could have thrown some slides together for a 25-minute witter with some data and ‘trends,’ but having spoken with Robert, who was organising the whole shindig, I felt quite inspired to go the extra mile and turn the research I did into a series of short stories from the future. At some point I’ll figure out how to share these but I wanted (realising that people who read this website are actually interested in this stuff) to talk about how I went about doing it.
To start with I had three constraints: I knew I wanted to write and tell short stories (just because); I knew I had about 20 minutes in a 35-minute talk to do this, so I worked out I wanted roughly five short stories of about four minutes each; and I knew I wanted them all to be connected somehow. The next thing I knew was the big picture of the world. I’d already sent off a blurb and pinned down a couple of things that would shape the world: ideas like degrowth, the end of high finance and speculation, the end or weakening of global norms and institutions, and the stuff we know about like climate and demographic change. The final thing was knowing the audience might be futures-curious but like-as-not unfamiliar with most of these concepts.
I started by throwing those big ideas down on a piece of A3 paper and imagining what connected them. For instance, in a world of managed degrowth, people might want to kick against it and you could get a subculture of people looking to engage in high finance and speculation, in the same way that living a fully sustainable lifestyle today could be seen as a subculture. You might also start to see a slowdown in global logistics as a result of climate, degrowth and ending global norms, so rather than a world of next-day delivery, everything takes a long time to move. Between those two there’s an obvious conflict: the drive for speed, power and control versus the reality of sluggish, uncoordinated and messy physical logistics. This was the first one I thought about and it led me to sailing ships, but the rest also flowed quite quickly once I started imagining what occurred at the intersections of different drivers and ideas.
I spent an hour or so doodling away and thinking about the little visuals that emerged, and that actually became the backbone of the whole thing. I started by modelling the scenes I was reasonably confident about (like the sailing ships). Each scene was about making the familiar unfamiliar; the uncanny. For the ships, for instance, I used a boxy cargo ship that might be easily recognised but then put Chinese junk-style sails on it, copied from a modern sailing vessel. I wanted each scene to be recognisable but have something different: diesel ships with sails, an office with a playground, a kitchen with 18 seats, etc. This is the starting point for most speculative design: finding something materially familiar and normalised and twisting it so that the audience is forced to reconcile their expectation (diesel ships have engines) with what they’re seeing (these ones have sails). So it’s also important that both those things are recognisable. Where I was introducing a brand new element – like the ‘d-rhizome’ in the home office scene, an AI-augmented alternative to the Internet that is fully node-based and inspired by slime moulds and mycelium – these would have to be explained in the story.
From here the stories and scenes sort of developed in tandem, along with prompting ideas for the next scenes. Some stories were easy to flesh out to bullet points and pull together, like the Bangladeshi immigrants running a semi-autonomous Norwegian vineyard as part of an international soil restoration programme for migrant workers. The pieces just sort of fell into place. Others took more forcing.
The rooftop scene, for example, is about a building caretaker where the building has so much biomaterial and biotech fitted that it’s almost a living thing, so I wanted the caretaker to be less of a service and more like a doctor; someone who is widely respected and admired for their expertise and time. This is an idea we explored a little in the Future of Making work that went to Singapore the other week. I knew I wanted the top-down view of the roof in a sort of satire of green roofs. So I put cows on it. If you’re going to cover a roof in grass you might as well have cows, and you might as well use their waste to fuel a bioreactor. And the association of the machines with the animals opened up the story beyond the technological to something more like a farmer who cares for their animals, except here it’s a building.
I worked these out by sketching the scenes over and over again in my notebook, adding elements and writing notes on how they might work and how the character relates to them. I didn’t get to writing the prose of the stories until I was literally on the train over to Cologne. Luckily, my head was so in the world that this all came quite quickly. I settled on a model in which, for each scene, a character reflects on how they got there: some exposition, some weirdness. I actually ended up using Copilot quite a lot to figure out details like names, locations, species and so on, which probably saved a bunch of time hunting for an endangered species of bird that eats berries and migrates through Germany to the Arctic.
On anti-AI aesthetics
A quick note on the style. You might note at the top of that paper it says ‘like Frostpunk.’ I knew I had a lot of work to do so I wanted to reduce the workload as much as possible. So, inspired by the game, I adopted three tricks. First, I tried to stick to a fixed view so that I could keep the lighting simple. Apart from the dinner scene, no camera moves through the scene, so I didn’t need to worry about what was ‘behind’ the camera and could build the scene like a set. Second was using simple flat images as parallax backgrounds. The rooftop is a great example: the background here is just a flat image of a street. Third was keeping the style loose and low-poly where possible. I didn’t hit this rule all the time. Ironically, the more time-pressured I got, the easier it became to just pull out pre-made assets from Blender Kit. So while the ships scene is all DIY, with some cardboard-cutout UV mapping, by the time I was doing the office scene I was basically just modelling core bits like the room, the weird screen and table, and the vertical farm. The rest is all found assets.
I realised quite late that, as well as a time-saving effort, these aesthetic decisions were about intentionally distancing the images from the new generative AI aesthetic. I didn’t want to do over-stylised photo-real images with lots of soft blur because I wanted the audience to know that I had made these images by hand, that it took effort and labour to do, and that maybe in that effort and labour I had the opportunity to think about these future scenarios in more depth. That by moving things around, working out how space might function, designing the workarounds people might have to make for their work to fit their lives, I would learn a lot more about the subject, and that this informs the stories.
I know that generative AI image-making has become a popular speculative design tool but I’m pretty sure it’s not actual design. When you put in a prompt for ‘a future retrofit commercial office where people are living in apartments and spending their days trading in high finance derivatives around a massive table’ you’re not actually designing anything. I suppose you’re actually asking the machine to elicit your own head-canon from a cultural median for you. Sure, that thing has probably never existed before, but you’re not really making anything, just skewing a graph.
Design that is also research is about what we learn in the actual designing of things; of keyboards and desks and tables and chairs and lamps and switches. In making those things and thinking about the people who will touch and use them, you generate knowledge, understanding and insight about the future. If you’re just taking your preconceptions and getting a machine to make them ‘real’ then have you really learned anything? A reason these renders take so long is that even adding a chair to a desk scene forces me to ask questions like: how long does this person sit? What kind of things do they like? Are they proud of their work? What else might they need to do? How might their personality be reflected in the chair? And in exploring and answering those questions I feed the knowledge back into the stories and the world-building.
Points of failure
Of course, none of these projects ever go right. Even after so many years of honing my Blender-craft and convincing myself I had plenty of time, there were problems. With about a week to go I lost my notebook and with it all the sketches, notes and annotations I had been pulling together for each scene. I’m pretty sure I dropped it somewhere around Central Saint Martins at an event, but despite a couple of visits it never showed up, so I had to remember a lot of the ideas I had for the last three or four scenes. The second thing was that the PC I was remoting into to do the rendering went offline and took about a week to come back. So I had all the scenes modelled out and backed up but the clock was ticking on actual render time. I ended up sinking about $300 into cloud rendering to meet the deadline. (I missed the deadline, but got it in before the talk, which is what counts.)
And of course, nothing ever looks like you want it to. Each of the renders except the vineyard, rooftop and forest has multiple versions. And even those were re-rendered a bunch to fix bugs or style problems. The original kitchen was just some tables arranged end-to-end with a cooker at the head. It felt like a big party, not like a kitchen purposefully set up for a large group to eat together regularly. The first office was basically just a bullpen with holographic screens, which I threw together at 2am one morning and, in the cold light of day, rejected as unimaginative and clichéd. The idea of having it as a literal live/work retrofit with apartments in a commercial building came later. So really I ended up producing about 16 rendered animations of about two minutes each to get to the final seven.
Finally, and a critical failure for someone who claims to be a designer: I didn’t get to do any testing. There simply wasn’t time to get someone else to cast an eye over the stories. I was writing and editing them right up to the morning of the keynote itself. You should always give time to have someone else edit your work because, though I may know this world inside and out, no one else does. Afterwards several audience members commented that it was ‘very dense,’ meaning, I imagine, that a lot went over people’s heads when spoken rather than read on the page. It also probably meant that I wasn’t as confident in them in presentation as I might have been with more dry-runs, even if I did rehearse the whole thing four or five times.
For example, introducing the d-rhizome, this new type of Internet which prioritises real connection rather than command-and-control, was tough. Think about a classic science fiction book; usually it only introduces one new idea (e.g. there’s time travel, plants are an alien species, spiders are the apex species) but everything else is broadly the same (e.g. people want to preserve their life, get wealthier (in some way), save their loved ones, whatever). But science fiction authors get a whole book and your total attention to introduce and explore that idea. I had five minutes and a trade conference keynote, so I’m not surprised some of it was lost.
Other than that, it’s just all the stuff that goes with anything you’ve worked super hard on: you notice all the things that could be better, but I’m long enough in the tooth to know that that’s life and you just have to move on. Anyway, yes, I will find a way to tell you the stories and show you the full renders. It’s on my to-do list with everything else.
Recent and Upcoming
Couple of recent and upcoming things.
I’ve taken up a teaching role at the London Interdisciplinary School teaching design. I’ve been following the LIS since it launched and have been really interested in what a genuinely interdisciplinary education looks like, so this is an interesting little peek inside.
I took on a role as an industry champion at the Creative Industries Policy and Evidence Center to advise and consult on the future of the creative industries.
22nd November: I’m going to be at the next Design Declares! event with a host of amazing and luminary folks. Really quite worried about what I’m able to bring to that party.
As I said, I’ll find a way to document the Orgatec stories. The other big one was the opening keynote at the Design and AI symposium hosted by TU Delft. I’m not sure if they were recorded, if not I will also seek to document that but it’s basically a PhD walkthrough with a dance in the middle. I also have thoughts about some of the other stuff that was there.
Reading
I’m significantly behind on keeping up with newsletters because of all the above work, though I’ve managed to crawl and skim through about 40 or so in the last few days. There’s an overarching and exasperated message that the amount of money and resource being thrown at AI (hundreds of billions of dollars) is wildly out of proportion to the actual tangible, provable outcomes (around 5% positive impact on various things), which does give the impression that we’re heading for a very real bubble.
The Ethico-Politics of design toolkits by Tomasz Hollanek explores dozens of ethical AI toolkits, with some choice words on ethics- and participation-washing as part of a process that is often depoliticised and fails to match the actual needs of AI development processes. These toolkits often call for alternatives (which, he points out, already exist in abundance) that are ignored or maligned by mainstream AI practice.
Microsoft’s Hypocrisy on AI. This is depressingly unsurprising but it’s useful to have a bunch of evidence. In the PhD I’m circling a bit around how claims about AI’s ‘potential’ (to do things like cure cancer or mitigate climate change) gain credibility despite being completely fabricated assertions. It’s a tricky thing to pin down; the PhD is all about how idea A (it can play games really well or chat with your kid) becomes claim B (it will cure cancer, mitigate climate change), but this article basically shows how big tech is “talking out both sides of its mouth” about these speculative claims by also making a bunch of money selling prospecting tools to fossil fuel companies. I was at an event where I tried to make this self-fulfilling prophecy point to some city leaders:
Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them. Therein lies the crux of the problem: In this new generative-AI paradigm, uncertainty reigns over certainty, speculation dominates reality, science defers to faith.
Brian Merchant has also written up a bit on it here.
Ed Zitron on the Subprime AI Crisis. Zitron (who I like reading but can’t listen to) has been tracking the wobbly finances of big tech in AI for a while and frustratedly pointing out all the inherent contradictions and problems. Zitron extends the usual argument with the specific mechanisms by which AI is sold. One: it’s on you to figure out how to make it useful/valuable (more on this next week). Two: it’s sold through software-as-a-service that binds you to it. This one gave me real dot-com-bubble vibes. Consume alongside reporting on underwhelming productivity impacts.
Wes has finally released his Stories from AI-Free Futures. He’s been working really hard on getting this album together as a continuation of Newly Forgotten Technologies, which I would broadly describe as ‘speculation on what comes after AI.’ Please do check them out.
Paul Graham Raven interviewing George Voss here. Part 2 is now out as well.
Apple did another launch, which is a great excuse to remember how underwhelming things are. (I would 100% get a Mac Mini though; I’ve really always liked them.)
WordPress seems to have got super slow? I have refreshed my browser a bunch but it’s just got really clunky and delayed since I was last here. Perhaps something to do with all the lawsuits? Anyway I love you and assure you that following a very unpleasant summer I am back to regular programming.
This is a rough, edited transcript of the talk I gave for Bartlett Cinematic and Videogame Architecture students on Monday. I recorded it on my phone, which was sat next to me, and then used Otter.ai (which is very good, I think) to transcribe. Back in the day everyone used to blog their talks and I really liked it, so I’m going to try and get back into the habit. I should note that for these types of things I rarely really properly ‘prepare.’ I tend to throw some ideas together that I think/believe will chime with the audience and then have a more discursive meander through those ideas with them. With more professional stuff it tends to be a bit more uni-directional and pro. Also, I don’t have time to scroll through for all the typos, and you know I’m bad at that anyway, so apologies in advance really. Anyway, transmission begins:::
Hi folks, thanks for having me. I have to say I’m pretty jealous; if this course existed ten years ago, I would have done it. So I’m going to talk about this idea called ‘Design in the Construction of Imaginaries.’ And I chose these words for a particular reason; I’ll come on to how I’m going to use them in a second. But I think it’s important to be on the same page here about what these words mean.
So I come from a design background and I’ve taught interaction design and graphic design and product design and UX and all sorts of stuff. But when I talk about design, I really just mean a sort of sophisticated understanding of material culture. So that might mean digital stuff, it can be physical stuff, architecture. And design means thinking about a particular affect or effect that you want to have on the world. Whereas (and this is purely my own definition) art I think is more subjective, it’s about you as a person.
So then imaginaries is an idea from social science; Sheila Jasanoff is probably the person to read if you’re interested. Imaginaries are a sort of collective headcanon for things in the world. So we have an imaginary of artificial intelligence, which I’m going to talk about quite a bit. We have an imaginary called London, we have an imaginary called gender, we have an imaginary of ‘our people,’ nations have imaginaries. So these are all sorts of constructs of certain tropes and myths and stories and visions that we all collectively hold and which often can be quite tricky to pin down. And I’m very interested in how design constructs imaginaries, both to build and reinforce mainstream imaginaries but also: how can we use design or material practice to unpick these imaginaries as well, to challenge them, to question them and to sort of disassemble them and show their parts, which is what this little talk is all about.
So very quickly, who I am and what I do. I’m Design Futures Lead at Arup Foresight. My job is to think about the future of various things for the sake of Arup and our clients, but particularly I lead on using design methods to do that, and we have a small, growing design team who use design both to produce certain types of outputs, like exhibitions and films, and as a research technique. Before that, I was an academic for a long time and I also ran a curatorial and research project called Haunted Machines with Natalie Kane. But a lot of what I’m going to talk about is mostly related to my PhD work, which I’m doing at Goldsmiths.
So I’m going to kick off with a concept called future foreclosure. This is the idea that we’re not very good at thinking about the future and actually the futures we construct and the futures we imagine are quite limited, and increasingly so. This shot is from Star Trek III: The Search for Spock, which is not as well studied perhaps as 2001: A Space Odyssey for its set design. But still, Gene Roddenberry, the creator of Star Trek, put a lot of effort into designing the detail around the Starship Enterprise, including this sign on the transporter, which says ‘No Smoking.’ And I love this because it indicates a world in which the people of the 1980s were able to imagine a future in which you could dematerialise and rematerialise somewhere else completely through this amazing technology. You could jump from a spaceship to a planet or from a ship to another ship but everyone would still be smoking. It shows how we don’t question accepted social norms.
And then there’s this idea that we’re in a kind of bleak place for the future. This is a quote from David Runciman, who’s a Cambridge political scientist. And he was reflecting, I think, on what happened in 2022. And he said…
We were talking about the metaverse earlier as perhaps one of the greatest examples of that. It’s just bootstrapping technology onto a financial instrument, and hoping for the best. So there’s a sort of cynical lack of imagination about what the future might be but also a sense of inevitability. My research looks at how AI has been socially constructed, and design’s role in that. One of the really fascinating things about AI is that everyone has a concept of it. Everyone’s seen films, video games, everybody’s heard hysterical news stories. But that also creates its own problems, because it gives AI this sense of inevitability. So these scholars reviewed I think 200-odd ethics guidelines for the use of AI across governments, nonprofits and companies and said…
So that imaginary of an inevitable AI coming is so secured that all of the discourse is just about limiting harms. And this perceived inevitability – the idea that nothing can stop the status-quo AI future and no alternatives can be imagined – blinds us to the harms. Sun-ha Hong talks about the idea that…
So there’s this idea that a techno-future is inevitable and foreclosed but it wasn’t always so right? There were times where we’ve had alternative visions for what technology could be. Anyone seen David Cronenberg’s Existenz?
[Group of Gen Z’s doggedly keep their hands down] Oh boy. Okay. So this came out the same year as The Matrix, 1999. And The Matrix obviously has become a real hallmark of what people now think about as a retro future; the idea of immersing ourselves in a simulated reality and an artificial intelligence that takes over. But Existenz was looking at the idea that we might be carrying around these bio computers or ‘pods’ that we plug into, and exist in a different sort of virtual reality that existed between these bio pods. At the time, that was another future imaginary that people had, and yet for some reason this is now seen as unreasonable and ridiculous while The Matrix is often held up by journalists as a potential future reality.
So, why do some imaginaries, like an AI apocalypse in The Matrix, take hold and others don’t and what role does design have in the success or failure of them?
Minority Report by Steven Spielberg came out a few years later, in 2002. It’s a hugely influential film on the world of design and technology for reasons I will go into in a second. And it’s a really interesting case study in the cultural impact of one film over a huge collective imaginary of what technological futures are. Minority Report (based on the Philip K. Dick short story of the same name) takes place in a future where we’re able to predict crime before it happens, and so there’s a ‘pre-crime unit’ that arrests people before they commit the crime. But there are lots of other technologies in it, like gestural interfaces, augmented reality, eye tracking and facial recognition. Almost all of these technologies, speculative at the time, are captured here.
And so Minority Report becomes a powerful comparison point for journalists and investors around technology for years to come. Rather than opening us to alternatives or presenting a critical question, Minority Report is used as a story, a metaphor of a particular technofuture, that drives billions of dollars of investment to technologies like gestural interfaces, AI and, worst of all, predictive policing.
We might think that the role of science fiction, fiction and cinema is to broaden our future imaginaries and to help us challenge the status quo, but as philosopher Fredric Jameson said…
Jameson makes a really interesting suggestion that really the role of futures in science fiction, in most cases, isn’t to broaden our imagination, and throw in new ideas and new questions but is to convince us that we’re just in the past of a future that’s inevitable and already pre-decided.
Minority Report becomes incredibly influential. It has a whole Wikipedia page dedicated just to the technologies that are in Minority Report. The production team worked with loads of researchers at places like MIT and all sorts of technology companies to develop these gadgets and gizmos. And then for years and years and years – we’re talking over two decades now – people have been trying to recreate the technology in Minority Report or using it as a metaphor, a framing device, for real-world technologies.
But there’s a very good reason why Minority Report and artefacts like it work, why they stick in culture, and it’s because of design. David Kirby really analyses the use of design to convince people of certain worlds and world-building. John Underkoffler was the guy who designed the gestural interface and then went on to set up a multi-billion-dollar company, based on the excitement generated by Minority Report, to build it. But obviously it didn’t work; we don’t have them…
All that is to say: the reality, believability and tangibility of the designs, for John Underkoffler and actually for Minority Report more broadly, is what makes them stick; the reason they were enticing is that they seemed somehow grounded in reality. And there’s lots of detail in that film to bring that out. For instance, there’s a part where, as Tom Cruise is swiping across the interface, there’s an error and one of the windows doesn’t come with him; he has to go back and pick it up. Those sorts of details bring out the believability of it.
I’m not going to go on about Minority Report anymore. I just wanted to use it to show this connection between imaginaries, design and futures. How the futures we imagine are informed by and inform the stories we tell, and how design is a sort of connective tissue that brings both fictional and future imaginaries to life and makes them convincing. Because, despite being a complete fiction, as a result of Minority Report we’ve seen probably billions of dollars invested into these speculative technologies at the cost of less glamorous or profitable things like climate intervention or medical science.
So now I want to talk about the way that design is used to construct imaginaries, and the way that you can then start to unpick, unsettle and challenge them through critical practice. And this involves the use of metaphors, charisma and tropes that draw on science fiction. Earlier on I mentioned Haunted Machines, which is a project I ran with my friend Natalie Kane, who’s a curator at the V&A. We started this in 2014 and were really interested in the question of why so much of the emerging technology of the time – voice assistants, Internet of Things devices and so on – was wrapped up in occult language and metaphor.
Once you really start to get into the weeds on this, it’s more than just colloquial and coincidental. We did lots of work here and there’s lots of great social science about this. Essentially, magic is a causeless technology: you push a button, you get a thing; there’s no work or labour involved in that process. Secondly, it associates the technology with secret, hidden or forbidden power, which also goes to making technology really aspirational, since power, speed and control are so revered in society. But it also reveals something about how we imagine technology.
We perhaps like to think that technology and innovation are closely aligned to science, but scholars have really shown that technology and innovation respond to deeper, more human and existential desires and fears and dress them up as science in order to gain credibility. For instance, Anthony Enns explored the ongoing hold that psychotechnologies (brain-reading technology) have over the imagination and innovation space (see recent Neuralink news)…
Most technologies aren’t really answers to things; they’re charms to make you more powerful, more beautiful, help you live longer or give you access to secret knowledge. Anyone who’s watched Mad Men would know this, but I think we assume that somehow the development of technology is a rational science that doesn’t tap into desires or emotions, that it’s based on, like, scientific principles. And so, like any field that promises the solution to spiritual, existential crises, it fills quickly with charlatans and criminals.
[A quick game of ‘Name that Criminal’ ensues.]
Charisma is really important here; there’s a reason that we keep revering and looking up to these people. William Stahl, who really analysed this enchanting effect that technology had with the early introduction of the PC, talked about the importance of charismatic figures, sage-like or even messianic, who became idols and prophets as a result of this narrative framework that developed around technology as secret, powerful and tapping into needs and desires.
So as well as metaphors of magic, power, speed and control and great charisma, technology draws on pre-existing tropes and imaginaries we have in order to slip it into mainstream acceptance.
So this is Ai-Da, which is claimed by its creator, Aiden Meller, to be the first artist robot in the world. And obviously, like a lot of these projects, very little is given away about how it actually works, who built it, or what the actual algorithmic processes behind it are, but a lot of work goes into presenting and framing it. In this case, for a hearing in the UK parliament on the future of the creative industries, it is female-presenting, presenting as juvenile, and dressed in these overalls and dungarees to look and feel like a creative or artist as well as, again, somewhat juvenile. This scene is fascinating for many reasons and I have written thousands of words about it. The decision to invite a machine to testify before parliament (which they actually say they can’t admit as real evidence) is ludicrous. Its answers are also pre-recorded, so the whole thing is a performance, but you wouldn’t give a tape player the same platform. But the really interesting thing about it is that, very quickly, the politicians and legislators fall into step treating it like a real human being; they start referring to it as ‘she’ and ‘her,’ and they ask it questions directly. So it’s a really fascinating thing about how the design choices around the presentation of what is essentially an algorithm in a box are eliciting empathy, feeling and sort of status quo relationships from these legislators.
There's also a fascinating part where Meller talks about some of the engineers who worked on it, who said that really it's the worst form of artist robot you can imagine, right? Because if you want a functioning 'artist' robot that produces paintings like Ai-Da does, just use a robot arm. Humans are very complicated and messy, with lots of limbs that don't really do much in terms of art-making.
So why insist on this human form? Obviously, the whole thing is to draw on and reinforce an imaginary that we're super familiar with from science fiction: humanoid robots displaying human-like behaviours. This draws on empathy to make the audience feel an emotional connection (again, those deep desires and fears) and also makes it more easily 'consumable,' as the audience is familiar with this sort of setup from TV and film. There's also a second imaginary at play, one that is perhaps more powerful but less obvious, and that's nation-building, where scholars have shown how states and governments are keen to associate themselves with technology to appear future-facing and high tech.
Another great example is the AlphaGo documentary from DeepMind. In 2016 DeepMind beat the world Go champion, Lee Sedol, and they made this documentary about it, which obviously gives DeepMind the opportunity to frame the whole narrative around what they're doing. And because this is film, they also draw on tropes. So on the left, for instance, is a scene where Lee Sedol realises he's about to lose, and they have this long-lens shot of him outside smoking a cigarette over melancholy violins. And on the right is a scene from the very end where one of the advisors is playing with his daughter at sunrise in a vineyard, saying how excited he is about the AI future. The whole thing is very well done and choreographed to tell a very familiar David-and-Goliath story of DeepMind, a team of dozens of genius computer scientists owned by Alphabet, one of the world's largest and most powerful corporations, beating a Korean man at Go. Which on the face of it is an outrageous framing, which is why there's lots of discussion at the beginning of how complex Go is, how it's 'uncomputable,' as a way of showing how DeepMind have not only beaten the human intuition and gestalt that Go apparently requires but also this insurmountable mathematical problem.
This connection to games is also really interesting, around the same time IBM put out Watson to win at Jeopardy! And there’s a deep history of AI, computers and chess. The mainstream imaginary of AI (and AGI in particular) involves AI being as good as, if not better than a human. We already have computers that can model whole-Earth weather patterns, or simulate huge crowds moving through space, or image distant galaxies, things that no human can do but thanks to collective imaginaries we’ve set benchmarks of ‘good’ AI as one that has the intuitive and gestalt properties of human thinking. Which is why these folks are so focussed on making AI that can make art or win games; it is a way of disenchanting these human activities and skills, showing that they are calculable, controllable and computable. It’s profoundly nihilistic.
The final thing here is the 'so what?' 'So you've built a computer that can win at a game, so what?' And this is where there's usually a clever rhetorical sleight and the story turns speculative. You'll often find here, as you do in AlphaGo and in the shorter Watson documentary, an extended claim that victory at this one very confined benchmark equates to curing cancer, solving climate change or alleviating poverty, even though scholars have shown that there's little result from these displays and performances other than increased funding and hype.
And then you've got things like the design of power and complexity. This is another big trope in AI. This is Alexander Nix, who was the head of Cambridge Analytica, who were famous for stealing a lot of data from Facebook and claiming to be able to predict and influence the outcome of elections. Studies since have shown they had no such power whatsoever. They did steal thirty million Facebook profiles, but they didn't have anything fancier than a big Excel spreadsheet. The point is, when you see him or read about him, he's always described as very charismatic. Like our previous charlatans, that presentation, in this case of a slick, public school guy who's well connected, is really important. And the things that Cambridge Analytica really relied on to bamboozle people are scale and complexity. All of their comms are big numbers and complex terms and ideas. And whether it's in cinema or in real life, this is often used to construct an AI imaginary: this idea that somehow it's bigger than us, and we can't possibly comprehend it. It was used by DeepMind to describe Go as uncomputable and beyond the comprehension of a human. This apparent complexity is used as an invitation to ignore how it works and, again, to secure that secretive, magical power.
Then there’s journalism and media. If you Google ‘artificial intelligence,’ you get these humanoid figures that are usually blue with lots of lines going everywhere accompanied by numbers and data. This is no true representation of AI, and lots of groups are exploring alternatives, but it is a pretty dominant aesthetic metaphor used in mainstream press and reporting which goes to secure an imaginary that…
…is also reinforced in cinema. We can see, and are likely all familiar with, how the same aesthetics are recycled, because if you're going to explain AI to someone, they've probably seen Iron Man, so you can use that to build your story on. It's easy and convenient to hijack those aesthetics for stock imagery, and sort of loop them back through culture over and over again. But of course, at the same time, as we saw in Minority Report earlier, real-world technology is shaped by this set of fictions and stories.
[Pause for chat]
So that was a whistle-stop tour of how imaginaries are constructed and how design is used to build them. I want to quickly look at how they're disseminated and what that means for them. Not only are these imaginaries created and reinforced, they also have to get out there in the world. Has anyone come across the Shazam Effect? So, in 2012, a bunch of Spanish researchers sat down to answer the question, 'does all pop music sound the same?' And they found out it did…
So later, Derek Thompson coined the Shazam Effect to explain this; that through things like Shazam, Spotify, and these increasingly available data platforms, record companies had loads of data about what people like which they could then use to produce more music that conforms with what people are listening to. And, as science fiction author Bruce Sterling says; ‘what happens to musicians happens to everyone.’ And so…
We see this effect in things like International Airbnb Style, as coined by Laurel Schwulst. When you're on Airbnb, you're trying to attract people to stay at your property, and so you look at the other properties that are successful and you design, present, photograph and light yours in the way that has worked for them, which results in this homogeneity.
We see the same thing in cars with the Wind Tunnel Effect. There's so much software and regulation around the design of cars that when you run them through the simulations required to make them as efficient as possible, meet the fuel standards and so on, you basically end up with slight variations on the same forms.
In architecture we also see the same thing thanks to industrial image production. This is Crystal CG, who produce renderings for architectural studios all over the world and have produced hundreds of thousands of images. So they know what worked previously. They know what clients like, they know what works in a particular country or city or region. And so you end up with a homogenisation of style and design and form, as this centralisation of production makes the process less artistic and more industrial.
And these renderings are particularly important because I would contend that these images, printed on these eight-foot-tall hoardings and plastered all over the city, are the most common and everyday way that most people come across the future. They might read the news or watch a film, but every day they're living, working and travelling around these massive, highly saturated, gorgeous images that obscure the real building site and, importantly…
…distract from the actual place where they have a voice in the future of their city, which is in planning notices.
I think the thing to take away from all of this as we move on to talk about what you can do and how you work as critical practitioners is to know that design in this situation is never neutral, that it carries forward pre-existing tropes and assumptions from culture, imaginaries and fiction and embeds them in new objects and technologies. So things like real-world AI are designed, by choice, to conform to expectations from fiction and the imagination, even though it is often wholly inappropriate to dealing with real world problems. Madeleine Akrich writes…
So I want to get on to critical practice, because that’s why we’re all here.
So an assumption that I've seen a lot in AI and, again, have written about quite a bit is the idea that AI will somehow democratise imagination and creativity. And I love this interview with David Holz, the creator of Midjourney, who says…
This idea, that again, is profoundly nihilistic, that creativity, criticality and imagination are just about making the right tool is provably false and very similar to claims that social networks would liberate, democratise and educate. At the same time as claims are made of ‘democratising’ creativity, these tools and platforms are foreclosing imagination to try and make it conform with what AI developers want.
The title of this talk, and it’s related to my thesis title as well is ‘Design and The Construction of Imaginaries’ which is an allusion to an amazing paper by Carl Di Salvo – ‘Design and the Construction of Publics.’ I always point to this work for your one-stop shop on how critical design works. He suggests that design has a role in building public discourse, not just solving problems, he says…
Di Salvo says that publics assemble around issues. This issue might be a new building, it might be a broken toilet, it might be you know, being a parent, it might be unaffordable rent. And very often those issues are designed such as with a building, app or service. So Di Salvo extends this and says that the way that critical design works is by inventing the things for an issue to assemble around. And when you’re talking about things like AI and the sorts of things that are quite ephemeral and tricky to pin down, there’s a powerful role for critical practice to materialise the issue so that people can then assemble around it and talk about it.
So one of the most well-known projects that does this, and one that Di Salvo writes about, is Tom Thwaites' Toaster Project. Tom Thwaites set out to answer a simple question: 'can I make a toaster?' He had to make everything himself, had to smelt all the metal and form all the plastic, and created this great series of YouTube videos, which I think became a little documentary and a book. So why? I mean, we already have toasters, and as Di Salvo says, he's not solving anything. The main thing is how the project reveals how much we take for granted the incredibly complicated supply chain processes that go into a really simple object. Di Salvo calls this a 'tracer': it traces the outlines of a thing that's otherwise invisible, which is the whole supply chain, the industrial set of processes and technologies that produce this really simple object. It reveals them to the audience by saying this is a ludicrously complicated, globalised, exploitative and wasteful product. So by revealing this thing that's otherwise invisible or obscured, he brings an issue to the front which is otherwise quite difficult for people to understand: supply chains, materials, all that kind of stuff. So this is what we mean by using design to unsettle or untangle certain tropes.
So Di Salvo calls this a ‘tracer’ but the other type of project in critical practice of design is what he calls ‘projectors.’ You’ve probably heard of speculative design and Tony Dunne writes in his thesis…
Di Salvo says that these projects work by showing how things might be otherwise in order to reveal the way they are, which can otherwise be hard to see, because it is hard for us to challenge our assumptions about life; think again of the Star Trek 'no smoking' sign.
So this project, for example, is called Robots, but would you think of any of these objects as robots? They all have characteristics of robots. The one that looks a bit like a lamp has to be plugged into the wall in order to work, so it can't move around as much as it would like; the L-shaped one is my favourite but has to be held at a particular angle in order to work, which requires it to basically rest in the crook of your arm or it won't work. All of these things are designed to have the behaviours we might assume of robots: they have movement, they have autonomy, some sort of agency, arguably, but they look nothing like 'robots.' They don't look like metal humanoid figures, or little dogs, or machines, but by projecting forward (or sideways, let's say, because it's not suggesting a particular time) and saying 'this is how things could be otherwise,' it reveals the assumptions that we have taken for granted; all those tropes that we talked about earlier that are constructed around AI and technology.
So that's tracers and projectors, used to reveal hidden assumptions. But I also think the hack or exploit is a really important tool. I've spent far too long going down the speedrunning rabbit hole, which is completely fascinating. Speedrunning is simply people competing to complete a game as fast as possible by any means possible. And the great thing about that challenge is that speedrunners don't see the video game as the designers intended it. They don't see it as a designed world with a narrative and certain mechanics; they almost get to the layer underneath, the actual construction of it, the architecture of the game engine itself, and try to find hacks and exploits in the underlying mathematics to find a way through. And these hacks require really explicit and sophisticated knowledge of game engines and architecture in order to spot the exploits.
And I like seeing the same concepts applied to technical systems in critical practice; that ability to see beyond the thing as it's presented and unpick the reality underneath. For instance, Gabriel Goh took Yahoo's not-safe-for-work image filter and turned it all the way to zero to ask: 'what's the least pornographic image possible? What if we undid that algorithm, laid it out, turned everything to zero and then rendered what it would give us?' You end up with quite pastoral and bucolic scenes. You sort of get the sense of classical architecture, green, beaches, sky, that kind of stuff. And this is a similar sort of tracing project that uses exploits to reveal a system. It's taking the thing that we're given, and saying, 'I'm just gonna lay out all the pieces of it and try to figure out where they come from and the decisions that were made.' Because someone programmed this; it's not accidental. Someone decided what the least pornographic image possible should be and trained this system on it.
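To give a flavour of the technique behind that kind of project: Goh's actual work inverted a deep neural network, but the core idea can be sketched in a few lines with a toy, made-up linear 'scorer' standing in for the real classifier. Everything here (the 8x8 'image,' the weights, the step size) is a hypothetical illustration, not the real system:

```python
import numpy as np

# Toy stand-in for a learned "NSFW score": a fixed linear model over pixels.
# (The real filter is a deep network; this only illustrates the idea of
# optimising an *input* to drive a classifier's *output* down to its minimum.)
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))  # one weight per pixel of an 8x8 "image"

def score(image):
    # Higher = "more NSFW" under the toy model
    return float(np.sum(weights * image))

image = rng.uniform(0.0, 1.0, size=(8, 8))  # random starting image
initial = score(image)

# Projected gradient descent: step against the gradient (which, for a linear
# scorer, is just the weight matrix) and keep pixels in the valid [0, 1] range.
for _ in range(200):
    image = np.clip(image - 0.05 * weights, 0.0, 1.0)

final = score(image)  # the "least pornographic image possible", per this model
```

The point the sketch makes is the same one the artwork makes: the minimum is not accidental; it is entirely determined by the weights someone trained, so laying the optimisation bare lays the system's decisions bare.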
An important note here is how, like a lot of good critical practitioners, Goh documents how this was done, because that's really where the knowledge is. It's not in the thing itself. It's in the journey to get there, and that's really important. They're all documenting what they did and what they revealed.
I'm going to talk about one of my own works really quickly, which was Augury, a project I did in 2018 with Wesley Goatley, who's a sound and data artist by training and with whom I've worked on lots of projects. Augury was an ancient divination technique used by the Greeks and Romans which involved looking at the flight patterns of birds. The augurs would basically say, 'all the birds are flying west, therefore we must go to war' or 'the birds are flying east, therefore we must go to war.' It usually ended in 'we're going to war.' Fundamentally, it was a belief that the birds were messengers from the gods.
So we created a sequence-to-sequence machine learning system that trained on ADS-B data of the flight patterns of planes over a 50-kilometre radius of London for about four or five weeks, put that next to the latest tweets about the 'future' from London at the same time, and then trained this machine learning system so that once it was in the gallery, if you asked it for a prediction, it would just give you complete garbage, because there's absolutely no association or causal connection between these data sets. But what we wanted to do was untangle a lot of the way big tech was talking about AI as almost prophetic in its power while obfuscating the way it actually worked. And so the point of this satire was to say, 'given how little we know about how these corporate machine learning systems actually work, they may as well be reading the flight patterns of birds and planes.'
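The mechanics of that pairing can be sketched without any of the actual Augury code or a real seq2seq model. This toy, assumption-laden example (simulated flight tracks, stand-in 'tweets,' a nearest-neighbour lookup in place of a trained model) just shows what it means to yoke two causally unrelated data streams into (input, target) training pairs, which is exactly why any 'prophecy' that comes out is structurally garbage:

```python
import numpy as np

# Toy illustration (NOT the actual Augury pipeline): pair two causally
# unrelated streams -- simulated flight tracks and stand-in "tweets about
# the future" -- into (input, target) pairs, as any supervised pipeline
# would, then "predict" by nearest-neighbour lookup over the tracks.
rng = np.random.default_rng(42)

tracks = rng.normal(size=(100, 10, 2))  # 100 tracks of 10 (lat, lon)-ish points
tweets = [f"the future is #{i}" for i in range(100)]  # hypothetical corpus

# The "training set": each track is arbitrarily paired with the tweet posted
# at the same moment. There is no causal link whatsoever between the two.
dataset = list(zip(tracks, tweets))

def prophesy(query_track):
    """Return the tweet paired with the nearest stored track."""
    dists = [np.linalg.norm(query_track - t) for t, _ in dataset]
    return dataset[int(np.argmin(dists))][1]

# Ask the oracle about a fresh flight pattern: the answer is always a real
# tweet, delivered with total confidence, and means absolutely nothing.
omen = prophesy(rng.normal(size=(10, 2)))
```

A real seq2seq model trained this way would behave the same in spirit: it will always produce fluent-looking output, and the fluency is the satire, because the mapping it has learned is pure coincidence.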
And, as I said before, documenting, reflecting and talking with a great collaborator about what you're doing, the choices you're making and the thing you want to unpick is super important, especially as this was one of our first times working with machine learning. As we were doing it, it was revealing to us something about the way computer scientists and engineers think about things like corpuses, data sets, epochs and so on. What are the corners cut? The conveniences made? The assumptions inscribed in these tools?
So if that's a 'tracer,' what can 'projectors,' alternatives, look like? This is QT.Bot by Lucas LaRochelle. LaRochelle ran a project for years called Queering the Map, where they gathered stories of queer experiences all over the world and pinned them to Google Earth. So basically, people would give an anecdote about how they met their partner, or perhaps about a negative experience they had as a queer person, at a certain place on Google Earth. This map is now huge; it must be tens of thousands of different people's testimony. LaRochelle then trained a machine learning system on that data, on the stories and on the images of the places, to generate an arguably queer AI, because the data it is fed is very different from a normative, heteronormative data set.
And another speculative trajectory for AI that lots of people are exploring is how it might enhance our relationship with the non-human world, such as with the Ecological Intelligence Agency from Superflux. Sascha Pohflepp captured this idea for me: that what AI gives us, and should be giving us, isn't the ability to replicate things that humans can already do but to extend and enhance our ability to understand things that are too fast, too slow, too huge or too small to be comprehended by us. I guess that's where a lot of the science is pointed, but what if that was the imaginary we held too? Not one of speed, power and control but of understanding, empathy and care, enabled by the ability to crunch vast data down to the human scale.
So what can I leave you with as critical practitioners? I hope I’ve talked about how imaginaries are made and disseminated and how design is used to reinforce those but also how critical practice can use methods like tracing, projecting and exploits to reveal and unpick them, to assemble new publics around the ephemeral and loaded imaginary of AI and other things.
I called this little section 'making traps' because I think that's really what all this is about. Whether you want to convince someone of a science fiction future or invite someone to challenge their assumptions, you're making a trap. Benedict Singleton, reflecting on Vilém Flusser, writes that…
Fundamentally, no one is creating anything new. The trap maker simply re-engineers existing tendencies for a new outcome. Think about a simple rabbit trap: you bend the branch to store its elasticity, you know the rabbit wants to eat a certain type of bait and follows a certain path, and thwack! Rabbit stew. There's no fundamentally new thing here. All of these futures are traps: drawing on a mastery and sophisticated understanding of things that already exist – aspirations of power, control and speed, stories and fictions of robots and all-powerful machines – and pointing them in a new direction that's favourable to whatever imaginary you want other people to buy into. The question is whether you build traps that keep us in the status quo or ones that break us out of it.
Thanks.
Recents
I contributed to the Service Design College's Futures and Foresight course on some of this stuff, but in a more applied way. You can now go and look at it if you want, but I think it is behind a paywall.
Short Stuff
Very short. Trying to spend less time reading newsletters and more time writing PhD even though this transcription took probably four hours.
I haven't come across anything gushingly positive on the Apple Vision Quest (but then I don't tend to read uncritically gushing journalism). Here's some from Paris Marx about why it doesn't make sense.
Max Read suggests that Kyle Chayka gives too much weight to algorithms and feeds in his analysis of why everything is the same.
I've been reaching out to catch up with people now I'm emerging from my convalescence, including folks I haven't spoken to since before the Cov. If you wanna hang out, let me know and let's hang out a bit. Ok, love you bye.
I'm about the only person not playing Zelda this week. Instead I write to you from a very nice hotel in Charlottenburg in Berlin at the tail end of just over a week of travelling the Germanic world. I know I haven't written in a while and I know I say that every time; there is simply no good reason, I'm just a busy person. Every week I don't write, it is harder to start again the next week.
I’m going back to the office more which means introducing two hours of sitting on a bus into each day (as opposed to 50 minutes on a bike #carssuck). As of yet I haven’t found a good way to blog or do anything with that time other than look out the window, listen to podcasts and do some Duolingo; I cannot type on phone keyboards and no one has solved the navigation problems of switching between windows effectively.
Recently, a couple of people were (jovially) saying I was really opinionated and it struck a discord somewhere. We can never know ourselves, of course (I was recently told in an organisational psychology assessment that my results suggested I could be manipulative and I was like 'how would I know?'), but I don't know that it rings true. The opinions I do have are often strong ones, usually because they're developing theories or folk-observations, warning others to respect water, or about how much I hate cars. But there's a huge amount of things (most sport, epidemiology, archaeology, theology, set theory, the Greek classics etc. etc.) that I have no opinion on, largely because I am reluctant to develop an opinion on something I have no understanding of. In the midst of Covid I was very clear that I had no opinion on ventilation, vaccination strategies or macroeconomic responses. Also, I find the intellectual-progressive tendency to tear itself apart over its opinions really counter-productive/counter-revolutionary.
Like, I understand the criticism of the AI letter, but isn’t the spirit of it right? Isn’t the spirit of the AI Dilemma right even if you disagree for critical or intellectual reasons with the individuals? ‘We’ expend so much energy on each other while the other side attack immigrants, ban abortion and gerrymander elections. I was actually sort of happy about the vibe of ambivalence around the coronation in the UK. I was expecting anti-monarchist vitriol from everywhere but happily people seemed to generally leave each other alone to respond how they wished and in a way that made them happy (other than the obvious nefarious arrests); whether that’s a protest, ignoring it or standing in the rain. If this weird spectacle makes you happy; great, people spend all sorts of money on all sorts of stupid things to make themselves happy.
Anyway, a lot has happened to tell you about though I can’t commit to giving you any more hot takes or elite opinions. They’ve all been used elsewhere.
PhD
Last week I capped off the chunk of work I need for the upgrade; the more-than-halfway point that is a submission of a good chunk of the work. Mostly this has involved addressing the front-stuff; tightening up the methodology and introductory sections but I also wrote a new piece on how I’m using theories from other domains. The process is really not straightforward; I loosely collect quotes or notes on things as I read in a blank document then iterate over and over and over again, grouping and cutting until an argument starts to emerge. Sometimes I’ll spend ten minutes trying to get a sentence just right or grab at an idea I can’t quite vocalise properly.
Anyway, it's submitted now, and when I get back tomorrow I'll review the feedback from Matt, Joe and Wes, make any further changes and send it off to the examiners. Feels good to have it done, but I know that really it's just firing the starting gun on a much bigger chunk of work to finish the whole thing.
Recents
Can you believe it’s been almost six weeks? I’ll try and rattle through recent activities as quick as possible but in no particular order.
I was in Milan to do a panel with Viraj Joshi, Alexandra Mihai and Simone Rebaudengo, hosted by Domus Academy and the Speculative Futures Milan guys, on AI and design. Easily one of the most fun and interesting panels I've done in ages. We laughed about the fun stuff and got serious about the serious stuff, and had a wonderful evening rave in the hotel lobby. It was also great to be back in Milan for the first time in years and just run into loads of old friends, meet some new ones and talk about design again. I forget sometimes that people think about me as a designer and want to talk to me about design, not middle management.
Malina and I did a class on speculative design for Milan’s ‘Service Design Masterclass.’ I was also the inaugural guest on the Service Design Network’s YAP podcast talking about design, futures and working at Arup. Back on From Later again with Rob, Syd and Viraj (once again) talking about cloud seeding and CGI and all sorts of nonsense. Did a good research-heavy design talk for the Service Design College. I’m sort of glad no one’s asked me to write anything for a while. I can use my words good with my mouth brain but writing is hard.
Anyway, I’m in Berlin because we’re at the tail-end of Arup Foresight’s Regenerative Futures exhibition and events here for Berlin Design Week. The team made a new film – ‘Common Ground’ – and we ran a small series of public and private events which were bloody lovely. Before that (last week) I was in Austria for Desired Futures 6. Three days of great talks and discussions from a who’s-who of futures nerds. On Friday I hosted Julian in a conversation about imagination which was great.
I'm sure there's things I've missed and there's lots of the same coming up, tbh. I know there's like three talks next week and it's really hard to keep on top of it all. I do try and share them in due course on the social networks. My phone is buzzing to say I have to go elsewhere, so I need to cut this short.
Short Stuff
Sorry, a lot of this is old news.
Not a huge amount, but a bit of insight on tech companies laying off or redeploying ethics teams. They say: 'we're developing AI responsibly anyway.' We say: 'an inconvenient cost in the race to the bottom of the market.'
Also Washington Post mapping out the different camps in the AI debate and, importantly, what their financial incentives are.
I’m very much on the ‘Robots have never and aren’t going to take your job (but they will take the things you love)’ side of things, Noah Smith’s analysis of an analysis of why 300 million jobs aren’t going to be erased.
Brazil is looking to get Chinese investment to start a semiconductor industry.
Emily Bender on tech wsu – ‘critics say’ as a positioning tool
There is no AI from Jaron Lanier on his usual optimistic drive. Unfortunately, he assumes that given a fairer system, everyone will play fair. For example, one suggestion is that people referenced in training data (eg. illustrators of cats) be paid for their effort when a new thing (cat image) is produced. This fails to take into account all the companies that would just spam training data (with cats) to overweight their contribution and profit. Over and over again, problems with AI are problems with capitalism not AI.
Side FX from K Allado-McDowell, really thoughtful reflection on the beginnings of critical practice with AI.