One of the things I’ve been really emphasising about this new technological wave in talking to people is that we’re not in the ‘exciting’ and frenetic days of early social media or the Internet. This isn’t a time when some new technologies are emerging and smart, playful outsiders are coming in and showing us new ways we might do things. Generative AI is characterised by four or five of the world’s wealthiest companies, run by a few dozen of the world’s wealthiest men, concentrated in the two wealthiest states, fighting to maintain the status quo.
Of course there are, and will be, weird and interesting things that happen along the way, but the incumbents are so powerful that they can just hoover up any competition. This was well analysed by Henry Farrell on the political economy of AI. He points out that, just as with the early Internet, a war over IP is emerging between the incumbent corporations that capitalise on culture and the artists and creatives who feed that culture. Only this time the incumbents aren’t Disney, Warner Brothers and the record companies, as in the days of Netflix, Napster and Spotify, but the big tech companies: Microsoft, Google, Amazon and so on, trying to extend the living they’ve made off the back of the work of creatives. The point Farrell makes is that there is a future in which this just kills culture and the Internet; that the well is so poisoned by synthetic media and market disincentives that the whole enterprise of the Internet just sort of ossifies and collapses.
As we know from Gopnik, generative AI is a cultural technology: a way of organising and disseminating knowledge. It doesn’t create anything new but changes the way we order things and value them. The IP fights going on are a symptom of this shift, and in fighting to maintain total supremacy and the status quo over a speculative future market, the incumbents are likely smothering anything new that might emerge as a result.
In a sort of answer to the last post’s provocation (‘If someone tells you what something could do, ask them why it isn’t.‘), why would any of these incumbents seek to change the techno-cultural production machine that has made their bosses billionaires? AI isn’t a disruptive force to them, it’s a compliant one, and the aim is simply to avoid letting any of your three or four competitors claim any space off you. Luckily for us, maybe, it’s actually going quite badly, as OpenAI starts to hit a ceiling, the numbers look unworkable and they keep launching things that flop or provide some novelty but little functional utility.
Short Stuff
I’ve been asked to do an interview for a thing, but the thing I really like is that I’ve been given a long time to do it. I have the questions but now have two months to answer them, which is really, really interesting because it means I can actually think and sit with them rather than dashing them off as often happens, or as on this blog.
A good piece on rituals; it’s a useful antidote to the sometimes lazy framing of ‘smartphones are now rituals’ that you see in popular reporting. Rituals have specific qualities and properties that are not present in most technologically-mediated content binges.
Following on from my last post, Dave Karpf’s review of Dixon’s book on blockchains: “He sees some problems with the Internet that venture capital helped build. The only solution he can imagine is more venture capital.”
Wes on tech’s delusional relationship to Star Trek. (Weirdly I added an overlong footnote about Star Trek to a recent essay. Probably won’t make the cut but it was basically ‘Star Trek is a silly thing to pin your colours to because it is politically infeasible.’)
The fascinating trap OpenAI has got itself into as a result of its arrangements with Microsoft and its nonprofit structure, needing to prove that it has not got anywhere near so-called AGI.
This was a really great interview with Cameron Tonkinwise (and Okskar!). I nodded along enthusiastically with most of what he said about designers in organisations. I was hoping there’d be a more succinct and clear definition of Transition Design, but there’s a lot of great content there.
I have little loyalty or connection to San Francisco but, you know, I detest the hubris and nihilism of tech culture. This is a great piece from Rebecca Solnit on how it has destroyed the city, and on the paradox at the heart of claims of democratisation while Silicon Valley increasingly lives in and encourages isolation, alienation and separation. It puts it better than I did in the last post.
Something really sparked a circuit in this great article from Beth Singler about apocalypticism in AI. I’m paraphrasing, but: apocalypses are a utopia for those who survive them.
Sorry it was very short this week. I feel like I’ve worked through a lot of stuff recently already and have been focussing on work to try and get various things finished and over the line. Ok, love you, bye.
I’m particularly annoyed today. I had a backlog of news and research to go through and mainlined too much horrible shit in one go to remain my usual centrist-Dad balanced self. Instead I’m taking this opportunity to work out some rage at hypocrisy. It all started with this great Rolling Stone article about this year’s Consumer Electronics Show and an idea that stood out to me as a much better articulation of the root of a bunch of work around my PhD:
The whole week [of CES panels and presentations on AI] was like that: specific and devastating harms paired with vague claims of benefits touted as the salve to all of mankind’s ills.
Throughout the show, the author writes about how wealthy tech moguls stood on stages and loudly promised all the ways that AI would make people richer, happier, healthier, live longer and fix whole ecosystems, while in quieter Q&As they brooded and avoided eye contact while discussing the specific and existing harms and exploits going on, from algorithmic injustice to scams and crime. Then the author discusses the actual tech on display: that despite these claims that AI will cure cancer, eliminate road deaths, deal with climate change and uplift society, all that is on display are AI sex toys, a pixelated rabbit that orders you the most normal pizza from the list (famously, in the demo, the creator of the Rabbit R1 just asks for ‘the most popular pizza from Pizza Hut’, which is how everyone orders pizza, right? More on that in a bit) and a telescope that can remove light pollution (admittedly cool). There’s an outsize contrast between the claims of potential AI futures (overpromising, blindly optimistic and disconnected from real-world problems), the reality (quick-buck gadgets that have little utility as demonstrators) and the evidenced harms (fraud, deception, crime, IP theft, injustice and road deaths). And these appear to be drifting further apart.
Dan McQuillan has also put it well: “the social benefits are still speculative, but the harms have been empirically demonstrated.” This is a big motivator in my own research in AI and really has been since the early Haunted Machines days: How and why have the imaginary claims of speculative benefits outweighed the observable harms it is doing? What methods, tricks, tactics and strategies are deployed to make us believe in these fantasies?
Most of the executives hoping to profit off AI are in a similar state of mind. All the free money right now is going to AI businesses. They know the best way to chase that money is to throw logic to the wind and promise the masses that if we just let this technology run roughshod over every field of human endeavor it’ll be worth it in the end.
This is rational for them, because they’ll make piles of money. But it is an irrational thing for us to let them do. Why would we want to put artists and illustrators out of a job? Why would we accept a world where it’s impossible to talk to a human when you have a problem, and you’re instead thrown to a churning swarm of chatbots? Why would we let Altman hoover up the world’s knowledge and resell it back to us?
We wouldn’t, and we won’t, unless he can convince us doing so is the only way to solve every problem that terrifies us. Climate change, the cure for cancer, an end to war or, at least, an end to fear that we’ll be victimized by crime or terrorism, all of these have been touted as benefits of the coming AI age. If only we can reach the AGI promised land.
Lots of others have come at this idea in other ways: Bojana Romic on how AI people frame the present as a ‘transhistorical continuity‘ into an inevitable future, Lucy Suchman and Jutta Weber’s ‘promissory rhetorics‘ where technology is framed by what it will do rather than what it actually does, or Lisa Messeri and Janet Vertesi’s ‘projectories‘ where imaginary and ever-receding future technologies are used as justification for present investments and cover for failures.
Another rhetorical flourish I’ve noticed is the constant reference to ‘technology’ as the agent of all this change, rather than the massive multi-billion-dollar companies, their leaders and their shareholders creating this stuff. Even more critical groups like the Center for Humane Technology ask ‘How to tell if a technology will serve humanity well?‘ rather than the more accurate ‘How to tell if a multibillion-dollar company, its leaders, shareholders and the regulators they have captured will serve us well?’
The irony of this frustrated critique of the discourse around AI is that it has already been captured by the extremists in big tech. If you point out that AI isn’t actually meeting any of these promises and is hurting a bunch of people along the way, it is turned into an excuse for more, faster AI. Effective accelerationists, who tend to lurk at the forefront of the technology and money discussion, will gleefully profess that fuelling the worst excesses of capitalism is a great idea because actually it will lead to all these things they’ve been promising: that really, the problem isn’t that technology developed and deployed through capitalistic mechanisms will always fail to fulfil its promises as long as the motivation is shareholder profit, but that it’s only with more, harder, faster capitalism that these promises can be fulfilled. In the words of the angry man who promised us that blockchain, then the metaverse, was the next big thing and makes all his money from selling military technology: the market is a self-correcting mechanism with the best interests of humanity at heart, and so we must give over more agency to it.
And people keep buying this garbage! Even as the creators are openly, wilfully dismissive of the needs of ‘consumers’ and openly promise to take away their agency! In the run-up to the US election there are reckons going around again about why working-class people vote against their economic interests. I know this is a controversial theory and I’m not a political scientist, so I’m not able to weigh in on the debate, only to say that in the case of Brexit and Trump, data shows that the people hurt most by them made up a majority of the voting bloc. A commonly-heard but dismissive, snobby and deleterious reading of this is to say that all these rhetorical flourishes are effective in convincing people of extremist views (including those of techno-optimist extremists) as the solution to social inequity. But the subtext of that reading implies that people are stupid, which they’re not, though it is exactly what big tech and extremists do think of people.
Perhaps (and this is pure dirty reckons) we should think the other way: a sort of aspiration towards nihilism. As people make decisions about whether to eat or heat their homes, as successive climate records continue to be broken, as geopolitical instability continues to deepen, the answer of big tech is AI sex toys, a pixelated rabbit that orders the most popular pizza and $3,500 VR goggles. AKA Jackpot technologies: preparing the wealthy tech class for a diminished world where society is replaced by technological mediation.
All the promises of democratisation, liberation and creative opportunity are demonstrably disproven by a suite of technologies that isolate, divide and exploit. In the current tech future, the aspiration is to have no common cultural reference points with anyone and instead to compete for the most superior human experience by accumulating more technology and more media. It’s no longer about developing technology that might help people navigate the inequities and complexities of society, government and everyday life in a big complex assemblage, but technologies that isolate and elevate you beyond it such that you no longer have to rely on or work with the state or institutions. Is it this that has an aspirational appeal to people? Imagine if someone could remove your social problems not by solving them per se and making things better for everyone (more efficient bureaucracy, healthcare, schooling, access to good transport systems, good-quality housing etc.) but instead by removing you from having to make any of those decisions at all?
Georgina Voss once made an observation that Silicon Valley tech was about removing the need to take responsibility: cooking dinner, driving yourself somewhere, doing your washing, paying your rent. By extension, the most aspirational status espoused by the vision of big tech is one of diminished responsibility and diminished dependence on society.
I often talk about Lawrence Lek’s ‘Unreal Estate: The Royal Academy is Yours‘ – it’s one of my favourite projects and one of the first good bits of art made in Unity I ever saw. In it, a wealthy oligarch has bought the Royal Academy of Art in London and turned it into a gaudy, tasteless mansion draped in leopard print and the cliches of modern art. The point (at least my interpretation) is that to the ultra-wealthy, the world may as well be a game engine, devoid of consequence, transaction costs and material limitations; everything is reprogrammable or reconfigurable and so, by a perverse logic, nothing really matters because nothing has any real value.
So I’m angry because that’s the logic of big tech evangelists. To drive down the meaning and value of everything so that whatever’s being hoiked this year at CES is seen, by contrast, as the most valuable and important thing ever. That’s why you can stand on stage showing a gadget that orders the most popular pizza for you and, in the same few minutes, have someone equate that technology with repairing crumbling planetary and social health. And people just keep believing it.
PhD
So how is the PhD going? (The three most common questions I get asked are ‘How’s the leg?’ ‘How’s the PhD?’ ‘Can you knock up a powerpoint showing x?’) (The leg is… fine. I have a bit of an early check-up later because I’ve been in more pain than I like, the PhD is- well I’m about to tell you, and yes I can knock up that powerpoint for you.) Good, thank you. I’ve started the second main chapter (which is chapter 4): Enchantment, The Uncanny and The Sublime. This is one of the three ‘substantial’ chapters that get into the meat of the thesis. In this case it’s looking at how enchantment, uncanniness and sublimity are used to reinforce status quo imaginaries of AI. For example, scale and complexity: by making AI appear to be insurmountably large, it gives the impression that intelligence is simply a product of scale and complexity, but also makes it difficult to confront or challenge. This is a technique also used by mainstream artists to dress up what is essentially using lots of energy-intensive computing to make nice pictures as somehow about intelligence or sentience or meaning.
On the flip side are the amazing critical practices that challenge scale and complexity: combing data sets, pointing out gaps, highlighting the labour and so on. There are also aspects of enchantment, like why chatbots convince us that something more than calculation-at-scale is going on.
At the moment I’m chunking through the notes and quotes I’ve grabbed over the last two years or so as I’ve been reading, trying to sort and organise. I’d like to use two case studies because it would reflect the two used in the Spectacles, Performance and Demonstration chapter (Ai-Da and AlphaGo), but I might settle on one. Or it might be two that aren’t evenly weighted. I definitely want to use Cambridge Analytica, because that was very much about enchanting people with the belief in the power of AI through scale and complexity and the (apparently) uncanny results. The other one might be Synthesizing Obama, largely because I did a project on it specifically but also because there’s a recurring theme here about human or life-like behaviour and enchantment.
Anyway, I’ll keep you up to date. I’m hoping to have finished crunching the notes by mid-next-week and then start moving things around to form up subchapters and sections. Then it’s that process of just writing over and over and over and over and over again on each section. I’m not aiming to get these as polished as Spectacles, Performance and Demonstration. I need to look at some of the fundamental structure – particularly around how I’m positioning practice – so all I want to do is get to a point where I have the overall shape of the whole thesis and then look at it from top-to-bottom to make sure it’s coherent before diving into the detail.
If I’m honest I’m not spending enough time on it. I accept that it will take a few weeks to get back into the PhD headspace though so I’m ramping up to it. It might mean a little less blogging from me as I divert more time to it but that won’t necessarily be a bad thing.
Short Stuff
Promoting some friends for you to check out; Crystal’s exhibition and Jay’s talk. This is what the Internet is supposed to be for.
Speaking of LLMs, someone managed to extract ChatGPT’s system prompts (the rules that frame how it responds) and I agree (unusually) with Azeem Azhar that it is brilliant. It is completely fascinating that we can set semantic rules for a trillion-parameter computer. That is actually really cool, no sarcasm at all.
This incredibly complex and evolved Code of Conduct from an online game, which Dan Hon linked to.
I read something recently about how it was quite likely that platforms would start to coalesce again. All of the streamers have had to raise prices and that means consumers have been dropping some. It went like this: ultimately, syndicating some IP to Netflix is significantly more cost-effective than building and maintaining your own platform when people don’t want to pay for a dozen different ones. The maths of then having to keep creating original content to keep your platform ‘full’ so that people don’t get bored also doesn’t work when everyone is doing the same. I think there’s something similar here with Xbox de-exclusifying some games. Entrapping ecosystems were good when times were better; now that times are lean, getting in front of eyeballs is still the priority.
Remarkable story of an Air Canada chatbot making up a refund policy, then Air Canada back-tracking and claiming the bot is a ‘separate legal entity’ and that it shouldn’t have been trusted.
Lots of folks sharing this have commented that ‘running Doom on x’ is now a benchmark for computation. Anyway, running Doom on E. coli bacteria.
OpenAI’s new gizmo named after an entry-level Shimano gearset for some reason is another glossy distraction from the exploitation and misrepresentation at the heart of their business models. I honestly don’t know why nothing stirs in me when I see these things. I sense the genuine glee and excitement that others have for them but I just automatically go ‘oh great, another one, who are they going to hurt this time?’
I finished Tchaikovsky’s ‘Children of…‘ series the other day. I was actually inspired to pick it up because of Matt Jones’ blogging of it. As Matt points out, it’s clear that the corvids in the latest book are meant to be illustrative of the difference between sentience and intelligence, or at least to trouble that distinction. Where the other ‘evolved’ species (spiders and octopuses) demonstrate clear sentience as we might relate to it (intelligence plus decision-making, emotions, sense of self and others, wants, needs, inner worlds etc.; I don’t know the definition), the crows are more ambiguous and in fact claim not to be sentient but to be evolved problem-solving machines. The crows live as pairs – one of the pair can observe patterns and spot new things while the other memorises and catalogues. They also can’t ‘speak’, only repeat things they’ve already come across (a la stochastic parrots). I suppose the point is to question those (particularly AI boosters) claiming that sentience emerges from complexity. That’s why every new ‘behaviour’ from a GPT is loudly touted as being indicative of sentience; we read these emergent patterns from complexity as if they are indications of sentience. (I’m writing about this now in the PhD.) It’s a good metaphor.
I ended up in a hole on LinkedIn the other day of people responding to a very good post, people who in the last year have become coaches and experts in AI. Watch out there, folks, the charlatanism is real. Here’s my advice: any time anyone tells you what something could do, ask them why it isn’t. Ok, I love you, bye.
This is a rough, edited transcript of the talk I gave for Bartlett Cinematic and Videogame Architecture students on Monday. I recorded it on my phone, which was sat next to me, and then used Otter.ai (which is very good I think) to transcribe. Back in the day everyone used to blog their talks and I really liked it, so I’m going to try and get back into the habit. I should note that for these types of things I rarely really properly ‘prepare.’ I tend to throw some ideas together that I think/believe will chime with the audience and then have a more discursive meander through those ideas with them. With more professional stuff it tends to be a bit more uni-directional and pro. Also I don’t have time to scroll through for all the typos, and you know I’m bad at that anyway, so just apologies in advance really. Anyway, transmission begins:::
Hi folks, thanks for having me. I have to say I’m pretty jealous; if this course existed ten years ago, I would have done it. So I’m going to talk about this idea called ‘Design in the Construction of Imaginaries.’ And I chose these words for a particular reason; I’ll come on to how I’m going to use them in a second. But I think it’s important to be on the same page here about what these words mean.
So I come from a design background and I’ve taught interaction design and graphic design and product design and UX and all sorts of stuff. But when I talk about design, I really just mean a sort of sophisticated understanding of material culture. So that might mean digital stuff, it can be physical stuff, architecture. And design means thinking about a particular affect or effect that you want to have on the world. Whereas (and this is purely my own definition) art I think is more subjective, it’s about you as a person.
So then imaginaries is an idea from social science; Sheila Jasanoff is probably the person to read if you’re interested. Imaginaries are a sort of collective headcanon for things in the world. So we have an imaginary of artificial intelligence, which I’m going to talk about quite a bit. We have an imaginary called London, we have an imaginary called gender, we have an imaginary of ‘our people,’ nations have imaginaries. So these are all sort of constructs of certain tropes and myths and stories and visions that we all collectively hold, and often they can be quite tricky to pin down. And I’m very interested in how design constructs imaginaries, both to build and reinforce mainstream imaginaries but also: how can we use design or material practice to unpick these imaginaries as well, to challenge them, to question them and to sort of disassemble them and show their parts, which is what this little talk is all about.
So very quickly, who I am and what I do. I’m Design Futures Lead at Arup Foresight. My job is to think about the future of various things for the sake of Arup and our clients, but particularly I lead on using design methods to do that, and we have a small, growing design team who use design both to produce certain types of outputs, like exhibitions and films, and as a research technique. Before that, I was an academic for a long time and I also ran a curatorial and research project called Haunted Machines with Natalie Kane. But a lot of what I’m going to talk about is mostly related to my PhD work, which I’m doing at Goldsmiths.
So I’m going to kick off with a concept called future foreclosure. This is the idea that we’re not very good at thinking about the future and actually, the futures we construct and the futures we imagine are quite limited, and increasingly so. This shot is from Star Trek III: The Search for Spock, which is not as well studied perhaps as 2001: A Space Odyssey for its set design. But still, Gene Roddenberry, the creator of Star Trek, put a lot of effort into designing the detail around the Starship Enterprise, including this sign on the transporter, which says ‘No Smoking.’ And I love this because it indicates a world in which the people of the 1980s were able to imagine a future in which you could dematerialise and rematerialise somewhere else completely through this amazing technology. You could jump from a spaceship to a planet or a ship to another ship but, everyone would still be smoking. It shows how we don’t question accepted social norms.
And then there’s this idea that we’re in a kind of bleak place for the future. This is a quote from David Runciman, who’s a Cambridge political scientist. And he was reflecting I think, on what happened in 2022. And he said…
We were talking about the metaverse earlier as perhaps one of the greatest examples of that. It’s just bootstrapping technology onto a financial instrument, and hoping for the best. So there’s a sort of cynical lack of imagination about what the future might be but also a sense of inevitability. My research looks at how AI has been socially constructed, and design’s role in that. One of the really fascinating things about AI is that everyone has a concept of it. Everyone’s seen films, video games, everybody’s heard hysterical news stories. But that also creates its own problems, because it gives AI this sense of inevitability. So these scholars reviewed I think 200-odd ethics guidelines for the use of AI across governments, nonprofits and companies and said…
So that imaginary of an inevitable AI coming is so secured that all of the discourse is just about limiting harms. And this perceived inevitability, the idea that nothing can stop the status-quo AI future and no alternatives can be imagined, blinds us to the harms. Sun-ha Hong talks about the idea that…
So there’s this idea that a techno-future is inevitable and foreclosed, but it wasn’t always so, right? There were times where we’ve had alternative visions for what technology could be. Anyone seen David Cronenberg’s eXistenZ?
[Group of Gen Zs doggedly keep their hands down] Oh boy. Okay. So this came out the same year as The Matrix, 1999. And The Matrix obviously has become a real hallmark of what people now think about as a retro future: the idea of immersing ourselves in a simulated reality and an artificial intelligence that takes over. But eXistenZ was looking at the idea that we might be carrying around these bio-computers or ‘pods’ that we plug into and exist in a different sort of virtual reality that existed between these bio-pods. At the time, that was another future imaginary that people had, and yet for some reason this is now seen as unreasonable and ridiculous while The Matrix is often held up by journalists as a potential future reality.
So, why do some imaginaries, like an AI apocalypse in The Matrix, take hold and others don’t and what role does design have in the success or failure of them?
Minority Report by Steven Spielberg came out a few years later, in 2002. It’s a hugely influential film on the world of design and technology for reasons I will go into in a second. And it’s a really interesting case study in the cultural impact of one film on a huge collective imaginary of what technological futures are. Minority Report (based on the Philip K. Dick short story of the same name) takes place in a future where we’re able to predict crime before it happens, and so there’s a ‘pre-crime unit’ that arrests people before they commit the crime. But there are lots of other technologies in it, like a gestural interface, augmented reality, eye tracking and facial recognition. Almost all of these technologies were speculative at the time, captured here.
And so Minority Report becomes a powerful comparison point for journalists and investors around technology for years to come. Rather than opening us up to alternatives or presenting a critical question, Minority Report is used as a story, a metaphor of a particular technofuture that drives billions of dollars of investment to technologies like gestural interfaces, AI and, worst of all, predictive policing.
We might think that the role of science fiction, fiction and cinema is to broaden our future imaginaries and to help us challenge the status quo, but as philosopher Fredric Jameson said…
Jameson makes a really interesting suggestion: that the role of futures in science fiction, in most cases, isn’t to broaden our imagination and throw in new ideas and new questions, but to convince us that we’re just in the past of a future that’s inevitable and already pre-decided.
Minority Report becomes incredibly influential. It has a whole Wikipedia page dedicated just to the technologies that are in Minority Report. The production team worked with loads of researchers at places like MIT and all sorts of technology companies to develop these gadgets and gizmos. And then for years and years and years, and we’re talking over two decades now, people have been trying to recreate the technology in Minority Report or using it as a metaphor, a framing device, for real-world technologies.
But there’s a very good reason why Minority Report and artefacts like it work, why they stick in culture, and it’s because of design. David Kirby really analyses the use of design to convince people of certain worlds and world-building. John Underkoffler was the guy who designed the gestural interface and then went on to set up a multi-billion-dollar company, based on the excitement generated by Minority Report, to build it. But obviously it didn’t work; we don’t have them…
All that is to say is that basically the reality, believability and tangibility of the designs, for John Underkoffler and actually for Minority Report more broadly, is what makes them stick; the reason they were enticing is because they seemed somehow grounded in reality. And there’s lots of detail in that film to bring them out. For instance, there’s a part where, when Tom Cruise is swiping across the interface, there’s an error and one of the windows doesn’t come with him; he has to go back and pick it up, and those sorts of details bring out the believability of it.
I’m not going to go on about Minority Report anymore. I just wanted to use it to show this connection between imaginaries, design and futures: how the futures we imagine are informed by and inform the stories we tell, and how design is a sort of connective tissue that brings both fictional and future imaginaries to life and makes them convincing. Because, despite being a complete fiction, as a result of Minority Report we’ve seen probably billions of dollars invested into these speculative technologies at the cost of less glamorous or profitable things like climate intervention or medical science.
So now I want to talk about the way that design is used to construct imaginaries, and the way that you can then start to unpick, unsettle and challenge them through critical practice. And this involves the use of metaphors, charisma and tropes that draw on science fiction. Earlier on I mentioned Haunted Machines, which is a project I ran with my friend Natalie Kane, who’s a curator at the V&A. We started this in 2014 and were really interested in the question of why so much of the emerging technology of the time – voice assistants, Internet of Things devices and so on – was wrapped up in occult language and metaphor.
Once you really start to get into the weeds on this, it’s more than just colloquial and coincidental. We did lots of work here and there’s lots of great social science about this. Essentially magic is a causeless technology; you push button, get thing; there’s no work or labour involved in that process. Secondly, it associates the technology with secret, hidden or forbidden power, which also goes to making technology really aspirational, since power, speed and control are so revered in society. But it also reveals something about how we imagine technology.
We perhaps like to think that technology and innovation are closely aligned to science, but scholars have really shown that technology and innovation respond to deeper, more human and existential desires and fears, dressed up as science in order to give them credibility. For instance, Anthony Enns explored the ongoing hold that psychotechnologies (brain-reading technologies) have over the imagination and innovation space (see the recent Neuralink news)…
Most technologies aren’t really answers to things; they’re charms to make you more powerful, more beautiful, help you live longer or give you access to secret knowledge. Anyone who’s watched Mad Men would know this, but I think we assume that somehow the development of technology is a rational science that doesn’t tap into desires or emotions, that it’s based on, like, scientific principles. And so, like any field that promises the solution to spiritual, existential crises, it fills quickly with charlatans and criminals.
[A quick game of ‘Name that Criminal’ ensues.]
Charisma is really important here; there’s a reason that we keep revering and looking up to these people. William Stahl, who analysed the enchanting effect technology had during the early introduction of the PC, talked about the importance of charismatic figures, sage-like or even messianic, who became idols and prophets as a result of this narrative framework that developed around technology as secret, powerful and tapping into needs and desires.
So as well as metaphors of magic, power, speed and control and great charisma, technology draws on pre-existing tropes and imaginaries we have in order to slip it into mainstream acceptance.
So this is Ai-Da, which is claimed by its creator, Aidan Meller, to be the first artist robot in the world. And obviously, like a lot of these projects, very little is given away about how it actually works, who built it, or what the actual algorithmic processes behind it are, but a lot of work goes into presenting and framing it. In this case, for a hearing in the UK parliament on the future of the creative industries, it is female-presenting, it presents as juvenile, and it’s dressed in these overalls and dungarees to look and feel like a creative or artist as well as, again, somewhat juvenile. This scene is fascinating for many reasons and I have written thousands of words about it. The decision to invite a machine to testify before parliament (which they actually say they can’t admit as real evidence) is ludicrous. Its answers are also pre-recorded, so the whole thing is a performance, but you wouldn’t give a tape player the same platform. But the really interesting thing is that, very quickly, the politicians and legislators fall into step treating it like a real human being; they start referring to it as ‘she’ and ‘her,’ and they ask it questions directly. So it’s really fascinating how the design choices around the presentation of what is essentially an algorithm in a box elicit empathy, feeling and sort of status quo relationships from these legislators.
There’s also a fascinating part where Meller talks about some of the engineers who worked on it, who said that really it’s the worst form of artist robot you can imagine, right? Because if you want a functioning ‘artist’ robot that produces paintings like Ai-Da, just use a robot arm. Humans are very complicated and messy, with lots of limbs that don’t really do much in terms of art-making.
So why insist on this human form? Obviously, the whole thing is to draw on and reinforce an imaginary that we’re super familiar with from science fiction, of humanoid robots displaying human-like behaviours. This draws on empathy to make the audience feel an emotional connection (again, those deep desires and fears) and also makes it more easily ‘consumable,’ as the audience are familiar with this sort of setup from TV and film. There’s also a second imaginary at play, one that is perhaps more powerful but less obvious, and that’s nation-building, where scholars have shown how states and governments are keen to associate themselves with technology to appear future-facing and high-tech.
Another great example is the AlphaGo documentary from DeepMind. So in 2016 DeepMind beat the world Go champion, Lee Sedol, and they made this documentary about it, which obviously gives DeepMind the opportunity to frame the whole narrative around what they’re doing. And because this is film, they also draw on tropes. So on the left, for instance, is a scene where Lee Sedol realises he’s about to lose, and they have this long lens shot of him outside smoking a cigarette over melancholy violins. And this on the right is a scene from the very end where one of the advisors is playing with his daughter at sunrise in a vineyard, saying how excited he is about the AI future. The whole thing is very well done and choreographed to tell a very familiar David-and-Goliath story of DeepMind, a team of dozens of genius computer scientists owned by Alphabet, one of the world’s largest and most powerful corporations, beating a Korean man at Go. Which on the face of it is an outrageous framing, which is why there’s lots of discussion at the beginning of how complex Go is, how it’s ‘uncomputable,’ as a way of showing how DeepMind have not only beaten the human intuition and gestalt that Go apparently requires but also this insurmountable mathematical problem.
This connection to games is also really interesting; around the same time IBM put out Watson to win at Jeopardy! and there’s a deep history of AI, computers and chess. The mainstream imaginary of AI (and AGI in particular) involves AI being as good as, if not better than, a human. We already have computers that can model whole-Earth weather patterns, or simulate huge crowds moving through space, or image distant galaxies, things that no human can do, but thanks to collective imaginaries we’ve set the benchmark of ‘good’ AI as one that has the intuitive and gestalt properties of human thinking. Which is why these folks are so focussed on making AI that can make art or win games; it is a way of disenchanting these human activities and skills, showing that they are calculable, controllable and computable. It’s profoundly nihilistic.
The final thing here is the ‘so what?’ ‘So you’ve built a computer that can win at a game, so what?’ And this is where there’s usually a clever rhetorical swipe and the story turns speculative. You’ll often find here, as you do in AlphaGo and in the shorter Watson documentary, an extended claim that victory at this one very confined benchmark equates to curing cancer, solving climate change or alleviating poverty. Even though scholars have shown that there’s little result from these displays and performances other than increased funding and hype.
And then you’ve got things like the design of power and complexity. This is another big trope in AI. This is Alexander Nix, who was the head of Cambridge Analytica, who were famous for stealing a lot of data from Facebook and claiming to be able to predict and influence the outcome of elections. Studies since have shown they had no such power whatsoever. They did steal thirty million Facebook profiles, but they didn’t have anything fancier than a big Excel spreadsheet. The point is, when you see him or read about him, he’s always described as very charismatic. Like our previous charlatans, that presentation, in this case of a slick, public school guy who’s well connected, is really important. And the thing that Cambridge Analytica really relied on to bamboozle people is scale and complexity. All of their comms are big numbers, complex terms and ideas. And whether it’s in cinema or in real life, this is often used to construct an AI imaginary; this idea that somehow it’s bigger than us, and we can’t possibly comprehend it. It was used by DeepMind to describe Go as uncomputable and beyond the comprehension of a human. This apparent complexity is used as an invitation to ignore how it works and, again, to secure that secretive, magical power.
Then there’s journalism and media. If you Google ‘artificial intelligence,’ you get these humanoid figures that are usually blue with lots of lines going everywhere accompanied by numbers and data. This is no true representation of AI, and lots of groups are exploring alternatives, but it is a pretty dominant aesthetic metaphor used in mainstream press and reporting which goes to secure an imaginary that…
…is also reinforced in cinema. We can see, and are likely all familiar with, how the same aesthetics are recycled, because if you’re going to explain AI to someone, they’ve probably seen Iron Man, so you can use that to build your story on. It’s easy and convenient to hijack those aesthetics for stock imagery and sort of loop them back through culture over and over again. But of course, at the same time, as we saw with Minority Report earlier, real-world technology is shaped by this set of fictions and stories.
[Pause for chat]
So that was a whistle-stop tour of how imaginaries are constructed and how design is used to build them. I want to quickly look at how they’re disseminated and what that means for them. Not only are these imaginaries created and reinforced, but they also have to get out there into the world. Has anyone come across the Shazam Effect? So, in 2012, a bunch of Spanish researchers sat down to answer the question: ‘does all pop music sound the same?’ And they found out it did…
So later, Derek Thompson coined the Shazam Effect to explain this; that through things like Shazam, Spotify, and these increasingly available data platforms, record companies had loads of data about what people like which they could then use to produce more music that conforms with what people are listening to. And, as science fiction author Bruce Sterling says; ‘what happens to musicians happens to everyone.’ And so…
We see this effect in things like International Airbnb Style, as coined by Laurel Schwulst. When you’re on Airbnb, you’re trying to attract people to stay at your property, so you look at the other properties that are successful and you design, present, photograph and light yours in the way that’s been successful for others, which results in this homogeneity.
We see the same thing in cars with the Wind Tunnel Effect. There’s so much software and regulation around the design of cars that when you run them through the simulations required to make them as efficient as possible, meet the fuel standards and so on, you basically end up with slight variations on the same forms.
In architecture we also see the same thing, thanks to industrial image production. This is Crystal CG, who produce renderings for architectural studios all over the world, and they’ve produced hundreds of thousands of images. So they know what worked previously. They know what clients like; they know what works in a particular country or city or region. And so you end up with a homogenisation of style and design and form, as this centralisation of production makes the process less artistic and more industrial.
And these renderings are particularly important because I would contend that these images, printed on eight-foot-tall hoardings and plastered all over the city, are the most common and everyday way that most people come across the future. They might read the news or watch a film, but every day they’re living, working and travelling around these massive, highly saturated, gorgeous images that obscure the real building site and, importantly…
…distract from the actual place where they have a voice in the future of their city, which is in planning notices.
I think the thing to take away from all of this as we move on to talk about what you can do and how you work as critical practitioners is to know that design in this situation is never neutral, that it carries forward pre-existing tropes and assumptions from culture, imaginaries and fiction and embeds them in new objects and technologies. So things like real-world AI are designed, by choice, to conform to expectations from fiction and the imagination, even though it is often wholly inappropriate to dealing with real world problems. Madeleine Akrich writes…
So I want to get on to critical practice, because that’s why we’re all here.
So an assumption that I’ve seen a lot in AI, and that again I’ve written about quite a bit, is the idea that AI will somehow democratise imagination and creativity. And I love this interview with the founder of Midjourney, David Holz, who says…
This idea, again profoundly nihilistic, that creativity, criticality and imagination are just about making the right tool, is provably false and very similar to claims that social networks would liberate, democratise and educate. At the same time as claims are made of ‘democratising’ creativity, these tools and platforms are foreclosing imagination to try and make it conform with what AI developers want.
The title of this talk, and it’s related to my thesis title as well is ‘Design and The Construction of Imaginaries’ which is an allusion to an amazing paper by Carl Di Salvo – ‘Design and the Construction of Publics.’ I always point to this work for your one-stop shop on how critical design works. He suggests that design has a role in building public discourse, not just solving problems, he says…
Di Salvo says that publics assemble around issues. This issue might be a new building, it might be a broken toilet, it might be, you know, being a parent, it might be unaffordable rent. And very often those issues are designed, such as with a building, app or service. So Di Salvo extends this and says that the way critical design works is by inventing things for an issue to assemble around. And when you’re talking about things like AI, the sorts of things that are quite ephemeral and tricky to pin down, there’s a powerful role for critical practice to materialise the issue so that people can then assemble around it and talk about it.
So one of the most well-known projects that does this, and one that Di Salvo writes about, is Tom Thwaites’ Toaster Project. So Tom Thwaites set out to answer a simple question: ‘can I make a toaster?’ He had to make everything himself, had to smelt all the metal and form all the plastic, and created this great series of YouTube videos, which I think became a little documentary and a book. So why? I mean, we already have toasters, and as Di Salvo says, he’s not solving anything. The main thing is how the project reveals how much we take for granted the incredibly complicated supply chain processes that go into a really simple object. Di Salvo calls this a ‘tracer;’ it traces the outlines of a thing that’s otherwise invisible: the whole supply chain, the industrial set of processes and technologies that produce this really simple object. It reveals them to the audience by saying this is a ludicrously complicated, globalised, exploitative and wasteful product. So by revealing this thing that’s otherwise invisible or obscured, he brings to the front an issue which is otherwise quite difficult for people to understand: supply chains, materials, all that kind of stuff. So this is what we mean by using design to unsettle or untangle certain tropes.
So Di Salvo calls this a ‘tracer’ but the other type of project in critical practice of design is what he calls ‘projectors.’ You’ve probably heard of speculative design and Tony Dunne writes in his thesis…
Di Salvo says that these projects work by showing how things might be otherwise in order to reveal the way they are, which can otherwise be hard to see, because it can be hard for us to challenge our assumptions about life; think again of the Star Trek ‘no smoking’ sign.
So this project, for example, is called Robots, but would you think of any of these objects as robots? They all have characteristics of robots. The one that looks a bit like a lamp has to be plugged into the wall in order to work, so it can’t move around as much as it would like; the L-shaped one is my favourite but has to be held at a particular angle in order to work, which requires it to basically rest in the crook of your arm or it won’t work. All of these things are designed to have the behaviours of what we might assume of robots: they have movement, they have autonomy, some sort of agency, arguably, but they look nothing like ‘robots.’ They don’t look like metal humanoid figures, or little dogs or machines, but by projecting forward (or sideways, let’s say, because it’s not suggesting a particular time) and saying ‘this is how things could be otherwise,’ it reveals the assumptions that we have taken for granted; all those tropes that we talked about earlier that are constructed around AI and technology.
So that’s tracers and projectors, used to reveal hidden assumptions. But I also think the hack or exploit is a really important tool. I’ve spent far too long going down the speedrunning rabbit hole, which is completely fascinating. So speedrunning is simply people competing to complete a game as fast as possible by any means possible. And the great thing about that challenge is that speedrunners don’t see the video game as the designers intended it. They don’t see it as a world in which a narrative and certain mechanics have been designed; they almost get to the layer underneath, the actual construction of it, the architecture of the game engine itself, and try to find hacks and exploits in the underlying mathematics to find a way through. And these hacks require really explicit and sophisticated knowledge of game engines and architecture in order to spot the exploits.
And I like seeing the same approach to technical systems in critical practice; that ability to see beyond the thing as it’s presented and unpick the reality underneath. For instance, Gabriel Goh took Yahoo’s not-safe-for-work image filter and turned it all the way to zero to ask: ‘what’s the least pornographic image possible? What if we undid that algorithm, laid it out, turned everything to zero and then rendered what it would give us?’ You end up with quite pastoral and bucolic scenes. You sort of get the sense of classical architecture, green, beaches, sky, that kind of stuff. And this is a similar sort of tracing project that uses exploits to reveal a system. It’s taking the thing that we’re given and saying, ‘I’m just gonna lay out all the pieces of it and try and figure out where they come from and the decisions that were made.’ Because someone programmed this; it’s not accidental. Someone decided what the least pornographic image possible should be and trained this system on it.
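To give a flavour of the move Goh is making, here’s a minimal, purely illustrative sketch of optimising an image against a classifier’s score. Goh worked with Yahoo’s open_nsfw deep network; the tiny logistic-regression ‘classifier’ below is a hypothetical stand-in so the sketch runs anywhere, but the move is the same: gradient descent on the pixels themselves, pushing the score towards zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8 * 8 * 3  # a tiny 8x8 RGB image, flattened

# Stand-in classifier: a fixed logistic regression over pixels.
# The bias is chosen so the starting image scores exactly 0.5.
w = rng.normal(size=n)
img = rng.random(n)      # start from random noise
b = w @ img

def nsfw_score(x):
    """A made-up 'how pornographic is this image?' score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w @ x - b)))

initial = nsfw_score(img)
lr = 1e-3
for _ in range(200):
    s = nsfw_score(img)
    grad = s * (1.0 - s) * w                   # d(score)/d(pixels)
    img = np.clip(img - lr * grad, 0.0, 1.0)   # keep pixels in [0, 1]
final = nsfw_score(img)                        # driven below the start score
```

Run the same loop against a real trained network and render the result, and what comes out is the system’s own inscribed idea of the ‘least pornographic image possible.’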
An important note here is how, like a lot of good critical practitioners, they document how they do this, because that’s really where the knowledge is. It’s not in the thing itself; it’s in the journey to get there, and that’s really important. They’re all documenting what they did and what they revealed.
I’m going to talk about one of my own works really quickly, which was Augury, a project I did in 2018 with Wesley Goatley, who’s a sound and data artist by training; we’ve worked together on lots of projects. So augury was an ancient divination technique used by the Greeks and Romans which involved looking at the flight patterns of birds. So they would basically say: ‘all the birds are flying west, therefore we must go to war’ or ‘the birds are flying east, therefore we must go to war.’ It usually ended in ‘we’re going to war.’ Fundamentally, it was a belief that the birds were messengers from the gods.
So we created a sequence-to-sequence machine learning system trained on ADS-B data of the flight patterns of planes within a 50-kilometre radius of London over about four or five weeks, paired with the latest tweets about the ‘future’ from London at the same time, so that once it was in the gallery, if you asked it for a prediction, it would just give you complete garbage, because there’s absolutely no association or causal connection between these data sets. But what we wanted to do was untangle a lot of the way big tech was talking about AI as almost prophetic in its power while obfuscating the way it actually worked. And so the point of this satire was to say: ‘given how little we know about how these corporate machine learning systems actually work, they may as well be reading the flight patterns of birds and planes.’
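The structure of the piece can be caricatured in a few lines. This is not the actual sequence-to-sequence system; it’s a hypothetical nearest-neighbour stand-in with fabricated placeholder data, just to show why pairing two causally unrelated data sets produces confident-looking nonsense:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder data: the real piece paired ADS-B plane tracks over London
# with tweets about the 'future'; random arrays and dummy strings stand
# in for them here.
flight_tracks = rng.normal(size=(100, 16))   # 100 'tracks', 16 features each
tweets = [f"future prophecy #{i}" for i in range(100)]

def augur(track):
    """Return the 'prophecy' paired with the nearest training track.
    The pairing is arbitrary, so the answer is deterministic but
    meaningless, which is the point of the satire."""
    dists = np.linalg.norm(flight_tracks - track, axis=1)
    return tweets[int(np.argmin(dists))]

# Ask the oracle about a new flight path: it always answers, with total
# confidence, and the answer tells you nothing.
omen = augur(rng.normal(size=16))
```

The system will happily produce an output for any input; nothing in it distinguishes a meaningful pairing from an arbitrary one, which is exactly the obfuscation the satire targets.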
And, as I said before, documenting, reflecting, talking with a great collaborator about what you’re doing, the choices you’re making and the thing you want to unpick is super important. Especially as this was one of our first times working with machine learning. And as we were doing it, it revealed to us something about the way that computer scientists and engineers think about things like corpuses, data sets, epochs and so on. What are the corners cut? The conveniences made? The assumptions inscribed in these tools?
So if that’s a ‘tracer,’ what can ‘projectors,’ alternatives, look like? This is QT.Bot by Lucas LaRochelle. LaRochelle ran a project for years called Queering the Map, where they gathered stories of queer experiences all over the world and pinned them to Google Earth. So basically, people would share an anecdote about how they met their partner, or perhaps a negative experience they had as a queer person, at a certain place on Google Earth. This map is now huge; it must hold tens of thousands of different people’s testimony. But then LaRochelle trained a machine learning system on that data, on the stories and on the images of the places, to generate an arguably queer AI by feeding it data that is very different from a normative, heteronormative data set.
And another speculative trajectory for AI that lots of people are exploring is how it might enhance our relationship with the non-human world, such as with the Ecological Intelligence Agency from Superflux. Sascha Pohflepp captured this idea for me: that what AI gives us, and should be giving us, isn’t the ability to replicate things that humans can already do but to extend and enhance our ability to understand things that are too fast, too slow, too huge or too small to be comprehended by us. I guess that’s where a lot of the science is pointed, but what if that was the imaginary we held too? Not one of speed, power and control but of understanding, empathy and care, enabled by the ability to crunch vast data down to the human scale.
So what can I leave you with as critical practitioners? I hope I’ve talked about how imaginaries are made and disseminated and how design is used to reinforce those but also how critical practice can use methods like tracing, projecting and exploits to reveal and unpick them, to assemble new publics around the ephemeral and loaded imaginary of AI and other things.
I called this little section ‘making traps’ because I think really, that’s what all this is about. Whether you want to convince someone of a science fiction future, or invite someone to challenge their assumptions, you’re making a trap. Benedict Singleton, reflecting on Vilém Flusser, writes that…
Fundamentally, no one is creating anything new. The trap maker simply reengineers existing tendencies for a new outcome. Think about a simple rabbit trap; you bend the branch to hold the elasticity, you know the rabbit wants to eat a certain type of bait and follows a certain path and thwack! Rabbit stew. There’s nothing fundamentally new here. All of these futures are traps; drawing on a mastery and sophisticated understanding of things that already exist – aspirations of power, control and speed, stories and fictions of robots and all-powerful machines – and pointing them in a new direction that’s favourable to whatever imaginary you want other people to buy into. The question is whether you build traps that keep us in the status quo or ones that break us out of it.
Thanks.
Recents
I contributed to the Service Design College’s Futures and Foresight course on some of this stuff, but in a more applied way. You can now go and look at it if you want, but I think it is behind a paywall.
Short Stuff
Very short. Trying to spend less time reading newsletters and more time writing PhD even though this transcription took probably four hours.
I haven’t come across anything gushingly positive on the Apple Vision Pro (but then I don’t tend to read uncritically gushing positive journalism). Here’s some from Paris Marx about why it doesn’t make sense.
Max Read suggests that Kyle Chayka gives too much weight to algorithms and feeds in his analysis of why everything is the same.
I’ve been reaching out to catch up with people now I’m emerging from my convalescence, including folks I haven’t spoken to since before the Cov. If you wanna hang out, let me know and let’s hang out a bit. Ok, love you bye.