At least three people said ‘I love your blog’ in the last week and each time I felt crestfallen that I’ve been so lax with this obligation. I really do try and do everything. I think I’ve seen films where in sound recording studios some of the sliders on the big desk of sliders move on their own; I’m not sure why this is, I assume it’s something programmed in, certainly I’ve never been around audio equipment that sophisticated. My own musical recordings were all done with a combination of Garage Band and crates of Kronenbourg. Anyway, that’s how life works sometimes: You turn something up at one end and all the way down the other end of the desk a slider is automatically tweaked down.
Making Five Stories From The Distant Future
For the last two weeks I’ve been spending evenings and early mornings working on a series of renders for the opening keynote I gave at Orgatec in Cologne last week. The whole thing was a massive faff but quite fun and diverting at a time when I really needed something creative to call my own. Sure, I could have thrown some slides together for a 25-minute-long witter with some data and ‘trends,’ but having spoken with Robert, who was organising the whole shindig, I felt quite inspired to go the extra mile and turned the research I did into a series of short stories from the future. At some point I’ll figure out how to share these but I wanted (realising that people who read this website are actually interested in this stuff) to talk about how I went about doing it.
To start with I had three constraints: I knew I wanted to write and tell short stories (just because); I knew I had about 20 minutes in a 35-minute talk to do this, so I worked out I wanted roughly five short stories of about four minutes each; and I knew I wanted them all to be connected somehow. The next thing I knew was the big picture of the world. I’d already sent off a blurb and pinned down a couple of things that would shape the world with ideas like degrowth, the end of high finance and speculation, the end or weakening of global norms and institutions, and the stuff we know about like climate and demographic change. The final thing was knowing the audience might be futures-curious but like as not unfamiliar with most of these concepts.
I started by throwing those big ideas down on a piece of A3 paper and imagining what connected them. For instance, in a world of managed degrowth, people might want to kick against it and you could get a subculture of people looking to engage in high finance and speculation, in the same way that living a fully sustainable lifestyle today could be seen as a subculture. You might also start to see a slowdown in global logistics as a result of climate, degrowth and the end of global norms, so rather than a world of next-day delivery, everything takes a long time to move. Between those two there’s an obvious conflict: the drive for speed, power and control versus the reality of sluggish, uncoordinated and messy physical logistics. This was the first intersection I thought about and it led me to sailing ships, but the rest also flowed quite quickly once I started imagining what occurred at the intersections of different drivers and ideas.
I spent an hour or so doodling away and thinking about the little visuals that emerged and that actually became the backbone of the whole thing. I started by modelling the scenes I was reasonably confident about (like the sailing ships). Each scene was about making the familiar unfamiliar; the uncanny. For the ships, for instance, I used a boxy cargo ship that might be easily recognised but then put Chinese junk-style sails on it, copied from a modern sailing vessel. I wanted each scene to be recognisable but have something different: diesel ships with sails, an office with a playground, a kitchen with 18 seats etc. etc. This is the starting point for most speculative design: finding something materially familiar and normalised and twisting it so that the audience is forced to reconcile their expectation (diesel ships have engines) with what they’re seeing (these ones have sails). So it’s important that both those things are recognisable. Where I was introducing a brand new element – like the ‘d-rhizome’ in the home office scene, an AI-augmented alternative to the Internet that is fully node-based and inspired by slime moulds or mycelium – it would have to be explained in the story.
From here the stories and scenes sort of developed in tandem, each prompting ideas for the next. Some stories were easy to flesh out to bullet points and pull together, like the Bangladeshi immigrants running a semi-autonomous Norwegian vineyard as part of an international soil restoration programme for migrant workers; the pieces just sort of fell into place. Others took more forcing.
The rooftop scene, for example, is about a building caretaker where the building has so much biomaterial and biotech fitted that it’s almost a living thing, so I wanted the caretaker to be less of a service worker and more like a doctor; someone who is widely respected and admired for their expertise and time. This is an idea we explored a little in the Future of Making work that went to Singapore the other week. I knew I wanted the top-down view of the roof as a sort of satire of green roofs. So I put cows on it. If you’re going to cover a roof in grass you might as well have cows, and you might as well use their waste to fuel a bioreactor. And the association of the machines with the animals opened up the story beyond the technological to something more like a farmer who cares for their animals, except the animal is a building.
I worked these out by sketching the scenes over and over again in my notebook, adding elements and writing notes on how they might work and how the character relates to them. I didn’t get to writing the prose of the stories until literally on the train over to Cologne. Luckily, my head was so in the world that it all came quite quickly. I settled on a model in which, for each scene, a character reflects on how they got there: some exposition, some weirdness. I actually ended up using Copilot quite a lot to figure out details like names, locations, species and so on, which probably saved a bunch of time hunting for an endangered species of bird that eats berries and migrates through Germany to the Arctic.
On anti-AI aesthetics
A quick note on the style. You might note at the top of that paper it says ‘like Frostpunk.’ I knew I had a lot of work to do so I wanted to reduce the workload as much as possible. So, inspired by the game, I adopted three tricks. First of all, I tried to stick to a fixed view so that I could keep the lighting simple. Apart from the dinner scene, no camera moves through a scene, so I didn’t need to worry about what was ‘behind’ the camera and could build each scene like a set. Second was using simple flat images as parallax backgrounds. The rooftop is a great example: the background there is just a flat image of a street. Finally, I kept the style loose and low poly where possible. I didn’t hit this rule all the time. Ironically, the more time-pressured I got, the easier it became to just pull out pre-made assets from BlenderKit. So while the ships scene is all DIY, with some cardboard-cutout UV mapping, by the time I was doing the office scene I was basically just modelling core bits like the room, the weird screen and table, and the vertical farm. The rest is all found assets.
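For the Blender-curious, here’s roughly what the fixed-camera-plus-flat-backdrop trick looks like in Blender’s Python API. To be clear, this is a minimal sketch rather than my actual setup (I did all of this by hand in the viewport), and the object names, numbers and image path are made up for illustration:

```python
import bpy

# A minimal sketch of the fixed-camera / flat-backdrop trick in Blender's
# Python API. Object names, transforms and the image path are illustrative.

scene = bpy.context.scene

# One fixed camera: nothing moves, so only what it sees needs detail/lighting.
cam_data = bpy.data.cameras.new("FixedCam")
cam_obj = bpy.data.objects.new("FixedCam", cam_data)
scene.collection.objects.link(cam_obj)
cam_obj.location = (0.0, -12.0, 2.0)
cam_obj.rotation_euler = (1.45, 0.0, 0.0)  # pitch the camera up, in radians
scene.camera = cam_obj

# A flat image as a distant backdrop: an upright plane with an emission
# material, so it reads as a parallax background rather than a lit surface.
bpy.ops.mesh.primitive_plane_add(size=30.0, location=(0.0, 20.0, 5.0),
                                 rotation=(1.5708, 0.0, 0.0))
backdrop = bpy.context.active_object
mat = bpy.data.materials.new("BackdropMat")
mat.use_nodes = True
nodes = mat.node_tree.nodes
nodes.clear()
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//street_photo.png")  # hypothetical path
emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links = mat.node_tree.links
links.new(tex.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
backdrop.data.materials.append(mat)
```

The point being: with one locked-off camera, a single emissive image plane reads as a whole street without a single extra light or polygon behind the camera.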
I realised quite late that as well as being a time-saving effort, these aesthetic decisions were about intentionally distancing the images from the new generative AI aesthetic. I didn’t want to do over-stylised photo-real images with lots of soft blur because I wanted the audience to know that I had made these images by hand, that it took effort and labour to do, and that maybe in that effort and labour I had the opportunity to think about these future scenarios in more depth. That by moving things around, working out how space might function, designing the workarounds people might have to make for their work to fit their lives, I would learn a lot more about the subject, and that this informs the stories.
I know that generative AI image-making has become a popular speculative design tool but I’m pretty sure it’s not actual design. When you put in a prompt for ‘a future retrofit commercial office where people are living in apartments and spending their days trading in high finance derivatives around a massive table’ you’re not actually designing anything. You’re asking the machine to elicit your own head-canon from a cultural median for you. Sure, that thing has probably never existed before, but you’re not really making anything, just skewing a graph.
Design that is also research is about what we learn in the actual designing of things; of keyboards and desks and tables and chairs and lamps and switches. In making those things and thinking about the people who will touch and use them you generate knowledge, understanding and insight about the future. If you’re just taking your preconceptions and getting a machine to make them ‘real’ then have you really learned anything? A reason these renders take so long is that even adding a chair to a desk scene forces me to ask questions like: how long does this person sit? What kind of things do they like? Are they proud of their work? What else might they need to do? How might their personality be reflected in the chair? And in exploring and answering those questions I feed the knowledge back into the stories and the world-building.
Points of failure
Of course, none of these projects ever go right. Even after so many years of honing my Blender-craft and convincing myself I had plenty of time, there were problems. With about a week to go I lost my notebook and with it all the sketches, notes and annotations I had been pulling together for each scene. I’m pretty sure I dropped it somewhere around Central Saint Martins at an event but despite a couple of visits it never showed up, so I had to remember a lot of the ideas I had for the last three or four scenes. The second thing was that the PC I was remoting into to do the rendering went offline and took about a week to come back. So I had all the scenes backed up and modelled out but the clock was ticking on the actual render time. I ended up sinking about $300 into cloud rendering to meet the deadline. (I missed the deadline, but got it in before the talk, which is what counts.)
And of course, nothing ever looks like you want it to. Each of the renders except the vineyard, rooftop and forest has multiple versions, and even those were re-rendered a bunch to fix bugs or style problems. The original kitchen was just some tables arranged end-to-end with a cooker at the head; it felt like a big party, not like a kitchen purposefully set up for a large group to eat together regularly. The first office was basically just a bullpen with holographic screens which I threw together at 2am one morning and, in the cold light of day, rejected as unimaginative and clichéd. The idea of having it as a literal live/work retrofit with apartments in a commercial building came later. So really I ended up producing about 16 rendered animations of about two minutes each to get to the final seven.
Finally, and a critical failure for someone who claims to be a designer: I didn’t get to do any testing. There simply wasn’t time to get someone else to cast an eye over the stories. I was writing and editing them right up to the morning of the keynote itself. You should always give time to have someone else edit your work because, though I may know this world inside and out, no one else does, and afterwards several audience members commented that it was ‘very dense,’ meaning, I imagine, that a lot went over people’s heads when spoken rather than read on the page. It also probably meant that I wasn’t as confident presenting them as I might have been with more dry-runs, even if I did rehearse the whole thing four or five times.
For example, introducing the d-rhizome, this new type of Internet which prioritises real connection rather than command-and-control, was tough. Think about a classic science fiction book; usually it only introduces one new idea (e.g. there’s time travel, plants are an alien species, spiders are the apex species) but everything else is broadly the same (e.g. people want to preserve their life, get wealthier (in some way), save their loved ones, whatever). But science fiction authors get a whole book and your total attention to introduce and explore that idea. I had five minutes and a trade conference keynote, so I’m not surprised some of it was lost.
Other than that, it’s just all the stuff that goes with anything you’ve worked super hard on; you notice all the things that could be better, but I’m long enough in the tooth to know that that’s life and you just have to move on. Anyway, yes, I will find a way to tell you the stories and show you the full renders. It’s on my to-do list with everything else.
Recent and Upcoming
Couple of recent and upcoming things.
I’ve taken up a teaching role at the London Interdisciplinary School teaching design. I’ve been following the LIS since it launched and been really interested in what a genuinely interdisciplinary education looks like so this is an interesting little peek inside.
I took on a role as an industry champion at the Creative Industries Policy and Evidence Centre to advise and consult on the future of the creative industries.
22nd November: I’m going to be at the next Design Declares! event with a host of amazing and luminary folks. Really quite worried about what I’m able to bring to that party.
As I said, I’ll find a way to document the Orgatec stories. The other big one was the opening keynote at the Design and AI symposium hosted by TU Delft. I’m not sure if it was recorded; if not, I will also seek to document that. It’s basically a PhD walkthrough with a dance in the middle. I also have thoughts about some of the other stuff that was there.
Reading
I’m significantly behind on keeping up with newsletters because of all the above work. I’ve managed to crawl and skim through about 40 or so in the last few days. There’s an overarching and exasperated message that the amount of money and resource being thrown at AI (hundreds of billions of dollars) versus the actual tangible, provable outcomes (something like 5% positive impact on various tasks) is wildly out of proportion, which does give the impression that we’re heading for a very real bubble.
The Ethico-Politics of design toolkits by Tomasz Hollanek explores dozens of ethical AI toolkits with some choice words on ethics- and participation-washing as part of a process that is often depoliticised and fails to match the actual needs of AI development processes. He also points out that there are loads of alternatives to these toolkits, which are ignored or maligned by mainstream AI practice.
Microsoft’s Hypocrisy on AI. This is depressingly unsurprising but it’s useful to have a bunch of evidence. In the PhD I’m circling a bit around how claims about AI’s ‘potential’ (to do things like cure cancer or mitigate climate change) gain credibility despite being completely fabricated assertions. It’s a tricky thing to pin down; the PhD is all about how idea A (it can play games really well or chat with your kid) becomes claim B (it will cure cancer, mitigate climate change), but this article basically shows how big tech is “talking out both sides of its mouth” about these speculative claims by also making a bunch of money selling prospecting tools to fossil fuel companies. I was at an event where I tried to make this self-fulfilling prophecy point to some city leaders:
Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them. Therein lies the crux of the problem: In this new generative-AI paradigm, uncertainty reigns over certainty, speculation dominates reality, science defers to faith.
Brian Merchant has also written up a bit on it here.
Ed Zitron on the Subprime AI Crisis. Zitron (who I like reading but can’t listen to) has been tracking the wobbly finances of big tech in AI for a while and frustratedly pointing out all the inherent contradictions and problems. Here he extends the usual argument with the specific mechanisms by which AI is sold. One: it’s on you to figure out how to make it useful/valuable (more on this next week). Two: through software-as-a-service that binds you to it. This one gave me real dot-com-bubble vibes. Consume alongside reporting on underwhelming productivity impacts.
Wes has finally released his Stories from AI-Free Futures. He’s been working really hard on getting this album together as a continuation of Newly Forgotten Technologies, which I would broadly describe as ‘speculation on what comes after AI.’ Please do check them out.
Paul Graham Raven interviewing George Voss here. Part 2 is now out as well.
Apple did another launch which is a great excuse to remember how underwhelming things are. (I would 100% get a Mac Mini though, I really always liked them)
WordPress seems to have got super slow? I have refreshed my browser a bunch but it’s just got really clunky and delayed since I was last here. Perhaps something to do with all the lawsuits? Anyway I love you and assure you that following a very unpleasant summer I am back to regular programming.
I’m particularly annoyed today. I had a backlog of news and research to go through and mainlined too much horrible shit in one go to remain my usual centrist-Dad balanced self. Instead I’m taking this opportunity to work through some rage at hypocrisy. It all started with this great Rolling Stone article about this year’s Consumer Electronics Show and an idea that stood out to me as a much better articulation of the root of a bunch of work around my PhD:
The whole week [of CES panels and presentations on AI] was like that: specific and devastating harms paired with vague claims of benefits touted as the salve to all of mankind’s ills.
Throughout the show, the author writes, wealthy tech moguls stood on stages and loudly promised all the ways that AI would make people richer, happier, healthier, live longer and fix whole ecosystems, while in quieter Q&As they brooded and avoided eye contact while discussing the specific and existing harms and exploits going on, from algorithmic injustice to scams and crime. Then the author discusses the actual tech on display: despite these claims that AI will cure cancer, eliminate road deaths, deal with climate change and uplift society, all that is on display are AI sex toys, a pixelated rabbit that orders you the most normal pizza from the list (famously, in the demo, the creator of the Rabbit R1 just asks for ‘the most popular pizza from Pizza Hut,’ which is how everyone orders pizza, right? More on that in a bit) and a telescope that can remove light pollution (admittedly cool). There’s an outsize contrast between the claims of potential AI futures (overpromising, blindly optimistic and disconnected from real-world problems), the reality (quick-buck gadgets that have little utility as demonstrators) and the evidenced harms (fraud, deception, crime, IP theft, injustice and road deaths). And these appear to be drifting further apart.
Dan McQuillan has also put it well: “the social benefits are still speculative, but the harms have been empirically demonstrated.” This is a big motivator in my own research in AI and has been since the early Haunted Machines days: How and why have the imaginary claims of speculative benefits outweighed the observable harms it is doing? What methods, tricks, tactics and strategies are deployed to make us believe in these fantasies?
Most of the executives hoping to profit off AI are in a similar state of mind. All the free money right now is going to AI businesses. They know the best way to chase that money is to throw logic to the wind and promise the masses that if we just let this technology run roughshod over every field of human endeavor it’ll be worth it in the end.
This is rational for them, because they’ll make piles of money. But it is an irrational thing for us to let them do. Why would we want to put artists and illustrators out of a job? Why would we accept a world where it’s impossible to talk to a human when you have a problem, and you’re instead thrown to a churning swarm of chatbots? Why would we let Altman hoover up the world’s knowledge and resell it back to us?
We wouldn’t, and we won’t, unless he can convince us doing so is the only way to solve every problem that terrifies us. Climate change, the cure for cancer, an end to war or, at least, an end to fear that we’ll be victimized by crime or terrorism, all of these have been touted as benefits of the coming AI age. If only we can reach the AGI promised land.
Lots of others have come at this idea in other ways: Bojana Romic on how AI people frame the present as a ‘transhistorical continuity‘ into an inevitable future, Lucy Suchman and Jutta Weber’s ‘promissory rhetorics‘ where technology is framed by what it will do rather than what it actually does, or Lisa Messeri and Janet Vertesi’s ‘projectories‘ where imaginary and ever-receding future technologies are used as justification for present investments and cover for failures.
Another rhetorical flourish I’ve noticed is the constant reference to ‘technology’ as the agent of all this change rather than the massive multi-billion-dollar companies, their leaders and shareholders creating this stuff. Even more critical groups like the Center for Humane Technology ask ‘How to tell if a technology will serve humanity well?‘ rather than the more accurate ‘How to tell if a multi-billion-dollar company, its leaders, shareholders and the regulators they have captured will serve us well?’
The irony of this frustrated critique of the discourse around AI is that it has already been captured by the extremists in big tech. If you point out that AI isn’t actually meeting any of these promises and is hurting a bunch of people along the way, it is turned into an excuse for more, faster AI. Effective accelerationists, who tend to lurk at the forefront of the technology and money discussion, will gleefully profess that fuelling the worst excesses of capitalism is a great idea because actually it will lead to all these things they’ve been promising: that really, the problem isn’t that technology developed and deployed through capitalistic mechanisms will always fail to fulfil its promises as long as the motivation is shareholder profit, but that it’s only with more, harder, faster capitalism that these promises can be fulfilled. In the words of the angry man who promised us that blockchain, then the metaverse, was the next big thing and makes all his money from selling military technology: the market is a self-correcting mechanism with the best interests of humanity at heart and so we must give over more agency to it.
And people keep buying this garbage! Even as the creators are openly, wilfully dismissive of the needs of ‘consumers’ and openly promise to take away their agency! In the run-up to the US election there are reckons going around again about why working-class people vote against their economic interests. I know this is a controversial theory and I’m not a political scientist, so I’m not able to weigh in on the debate, only to say that in the case of Brexit and Trump, data shows that the people hurt most by them made up a majority of the voting bloc. A commonly-heard but dismissive, snobby and deleterious reading is that all these rhetorical flourishes are effective in convincing people of extremist views (including those of techno-optimist extremists) as the solution to social inequity; but the subtext of that reading is that people are stupid, which they’re not, though it is exactly what big tech and the extremists do think of people.
Perhaps (and this is pure dirty reckons) we should think of it the other way: as a sort of aspiration towards nihilism. As people make decisions about whether to eat or heat their homes, as successive climate records continue to be broken, as geopolitical instability continues to deepen, the answer of big tech is AI sex toys, a pixelated rabbit that orders the most popular pizza and $3,500 VR goggles. AKA Jackpot technologies: preparing the wealthy tech class for a diminished world where society is replaced by technological mediation.
All the promises of democratisation, liberation and creative opportunity are demonstrably disproven by a suite of technologies that isolate, divide and exploit. In the current tech future, the aspiration is to have no common cultural reference points with anyone and instead to compete for the most superior human experience by accumulating more technology and more media. It’s no longer about developing technology that might help people navigate the inequities and complexities of society, government and everyday life in a big complex assemblage, but technologies that isolate and elevate you beyond it such that you no longer have to rely on or work with the state or institutions. Is it this that has an aspirational appeal to people? Imagine if someone could remove your social problems not by solving them per se and making things better for everyone (more efficient bureaucracy, healthcare, schooling, access to good transport systems, good quality housing etc.) but instead by removing you from having to make any of those decisions at all.
Georgina Voss once made an observation that Silicon Valley tech was about removing the need to take responsibility: cooking dinner, driving yourself somewhere, doing your washing, paying your rent. By extension, the most aspirational status espoused by the vision of big tech is one of diminished responsibility and diminished dependence on society.
I often talk about Lawrence Lek’s ‘Unreal Estate: The Royal Academy is Yours‘ – it’s one of my favourite projects and one of the first good bits of art made in Unity I ever saw. In it, a wealthy oligarch has bought the Royal Academy of Art in London and turned it into a gaudy, tasteless mansion draped in leopard print and the clichés of modern art. The point (at least my interpretation) is that to the ultra-wealthy, the world may as well be a game engine, devoid of consequence, transaction costs and material limitations; everything is reprogrammable or reconfigurable and so, by a perverse logic, nothing really matters because nothing has any real value.
So I’m angry because that’s the logic of big tech evangelists: to drive down the meaning and value of everything so that whatever’s being hawked this year at CES is seen, by contrast, as the most valuable and important thing ever. That’s why you can stand on stage showing a gadget that orders the most popular pizza for you and in the same few minutes have someone equate that technology with solving crumbling planetary and social health. And people just keep believing it.
PhD
So how is the PhD going? (The three most common questions I get asked are ‘How’s the leg?’, ‘How’s the PhD?’ and ‘Can you knock up a PowerPoint showing x?’) (The leg is… fine. I have an early check-up later because I’ve been in more pain than I’d like; the PhD, well, I’m about to tell you; and yes, I can knock up that PowerPoint for you.) Good, thank you. I’ve started the second main chapter (which is chapter 4): Enchantment, The Uncanny and The Sublime. This is one of the three ‘substantial’ chapters that get into the meat of the thesis. In this case it’s looking at how enchantment, uncanniness and sublimity are used to reinforce status quo imaginaries of AI. For example, scale and complexity: by making AI appear insurmountably large, it gives the impression that intelligence is simply a product of scale and complexity, but also makes it difficult to confront or challenge. This is a technique also used by mainstream artists to dress up what is essentially using lots of energy-intensive computing to make nice pictures as somehow being about intelligence or sentience or meaning.
On the flip side are the amazing critical practices that challenge scale and complexity: combing data sets, pointing out gaps, highlighting the labour and so on. There are also aspects of enchantment, like why chatbots convince us that something more than calculation-at-scale is going on.
At the moment I’m chunking through the notes and quotes I’ve grabbed over the last two years or so as I’ve been reading, trying to sort and organise. I’d like to use two case studies because it would reflect the two used in the Spectacles, Performance and Demonstration chapter (Ai-Da and AlphaGo) but I might settle on one. Or it might be two that aren’t evenly weighted. I definitely want to use Cambridge Analytica because that was very much about enchanting people with the belief in the power of AI through scale and complexity and the (apparently) uncanny results. The other one might be Synthesizing Obama, largely because I did a project on it specifically but also because there’s a recurring theme here about human or life-like behaviour and enchantment.
Anyway, I’ll keep you up to date. I’m hoping to have finished crunching the notes by mid-next-week and then start moving things around to form up subchapters and sections. Then it’s that process of just writing over and over and over and over and over again on each section. I’m not aiming to get these as polished as Spectacles, Performance and Demonstration. I need to look at some of the fundamental structure – particularly around how I’m positioning practice – so all I want to do is get to a point where I have the overall shape of the whole thesis and then look at it from top-to-bottom to make sure it’s coherent before diving into the detail.
If I’m honest I’m not spending enough time on it. I accept that it will take a few weeks to get back into the PhD headspace though so I’m ramping up to it. It might mean a little less blogging from me as I divert more time to it but that won’t necessarily be a bad thing.
Short Stuff
Promoting some friends for you to check out; Crystal’s exhibition and Jay’s talk. This is what the Internet is supposed to be for.
Speaking of LLMs, someone managed to extract ChatGPT’s system prompt (the rules that frame how it responds) and I agree (unusually) with Azeem Azhar that it is brilliant. It is completely fascinating that we can set semantic rules for a trillion-parameter computer. That is actually really cool, no sarcasm at all.
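If you’ve never seen one, here’s a minimal sketch of how a system prompt sits in front of a conversation, using the OpenAI Python client. The prompt wording and model name are invented for illustration; the real leaked prompt is far longer and stranger:

```python
# A minimal sketch of how a system prompt frames a chat model's responses.
# The prompt text and model name here are illustrative, not the leaked original.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        # The system message sets semantic rules the model is asked to obey...
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Answer in at most three "
                "sentences. Never claim to have browsed the web."
            ),
        },
        # ...and every user message is then interpreted within those rules.
        {"role": "user", "content": "Summarise the plot of Hamlet."},
    ],
)
print(response.choices[0].message.content)
```

The strange bit, and I think what Azhar is getting at, is that those ‘rules’ are just more text; there’s no enforcement mechanism beyond the model’s trained tendency to follow them.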
This incredibly complex and evolved Code of Conduct from an online game that Dan Hon linked to.
I read something recently about how it was quite likely that platforms would start to coalesce again. All of the streamers have had to raise prices, which means consumers have been dropping some. The argument went: ultimately, syndicating some IP to Netflix is significantly more cost-effective than building and maintaining your own platform when people don’t want to pay for a dozen different ones. The maths of having to keep creating original content to keep your platform ‘full’ so that people don’t get bored also stops working when everyone is doing the same. I think there’s something similar here with Xbox de-exclusifying some games. Entrapping ecosystems were good when times were better; now that times are lean, getting in front of eyeballs is the priority.
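As a purely illustrative toy of that maths (every number below is invented; it’s the shape of the argument, not real figures):

```python
# A toy back-of-envelope sketch of the syndication-vs-own-platform maths.
# Every number here is invented purely for illustration.

def own_platform_profit(subscribers, price, platform_cost, content_cost):
    """Annual profit from running your own streamer: subscription revenue
    minus the fixed cost of the platform and the original-content treadmill."""
    return subscribers * price * 12 - platform_cost - content_cost

def syndication_profit(licence_fee):
    """Annual profit from licensing the same IP to Netflix: no platform,
    no content treadmill, just the fee."""
    return licence_fee

# Hypothetical figures, in dollars per year.
print(own_platform_profit(subscribers=5_000_000, price=10,
                          platform_cost=300_000_000,
                          content_cost=400_000_000))
# -> -100000000: a loss once subscribers start dropping services
print(syndication_profit(licence_fee=150_000_000))
# -> 150000000: a smaller top line, but no fixed costs to cover
```

The fixed costs of the platform and the content treadmill are what kill you; a licence fee is pure margin.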
Remarkable story of an Air Canada chatbot making up a refund policy, then Air Canada backtracking and claiming the bot is a ‘separate legal entity’ that shouldn’t have been trusted.
Lots of folks sharing this have commented that ‘running Doom on x’ is now a benchmark for computation. Anyway, running Doom on E. coli bacteria.
OpenAI’s new gizmo named after an entry-level Shimano gearset for some reason is another glossy distraction from the exploitation and misrepresentation at the heart of their business models. I honestly don’t know why nothing stirs in me when I see these things. I sense the genuine glee and excitement that others have for them but I just automatically go ‘oh great, another one, who are they going to hurt this time?’
I finished Tchaikovsky’s ‘Children of…‘ series the other day. I was actually inspired to pick it up because of Matt Jones’ blogging of it. As Matt points out, it’s clear that the corvids in the latest book are meant to be illustrative of the difference between sentience and intelligence, or at least to trouble that distinction. Where the other ‘evolved’ species (spiders and octopuses) demonstrate clear sentience as we might relate to it – intelligence plus decision-making, emotions, sense of self and others, wants, needs, inner worlds etc. (I don’t know the definition) – the crows are more ambiguous and in fact claim not to be sentient but to be evolved problem-solving machines. The crows live as pairs: one of the pair observes patterns and spots new things while the other memorises and catalogues. They also can’t ‘speak,’ only repeat things they’ve already come across (a la stochastic parrots). I suppose the point is to question those (particularly AI boosters) claiming that sentience emerges from complexity. That’s why every new ‘behaviour’ from a GPT is loudly touted as being indicative of sentience; we read these emergent patterns from complexity as if they are indications of sentience. (I’m writing about this now in the PhD.) It’s a good metaphor.
I ended up in a hole on LinkedIn the other day, reading replies to a very good post from people who in the last year have become coaches and experts in AI. Watch out there, folks, the charlatanism is real. Here’s my advice: any time anyone tells you what something could do, ask them why it isn’t. Ok, I love you, bye.
This is a rough, edited transcript of the talk I gave for Bartlett Cinematic and Videogame Architecture students on Monday. I recorded it on my phone, which was sat next to me, and then used Otter.ai (which is very good, I think) to transcribe. Back in the day everyone used to blog their talks and I really liked that, so I’m going to try and get back into the habit. I should note that for these types of things I rarely properly ‘prepare.’ I tend to throw some ideas together that I think/believe will chime with the audience and then have a more discursive meander through those ideas with them. With more professional stuff it tends to be a bit more uni-directional and pro. Also, I don’t have time to scroll through for all the typos and you know I’m bad at that anyway, so apologies in advance really. Anyway, transmission begins:::
Hi folks, thanks for having me. I have to say I’m pretty jealous; if this course had existed ten years ago, I would have done it. So I’m going to talk about this idea called ‘Design in the Construction of Imaginaries.’ And I chose these words for a particular reason; I’ll come on to how I’m going to use them in a second. But I think it’s important to be on the same page here about what these words mean.
So I come from a design background and I’ve taught interaction design and graphic design and product design and UX and all sorts of stuff. But when I talk about design, I really just mean a sort of sophisticated understanding of material culture. So that might mean digital stuff, it can be physical stuff, architecture. And design means thinking about a particular affect or effect that you want to have on the world. Whereas (and this is purely my own definition) art I think is more subjective, it’s about you as a person.
So then, imaginaries is an idea from social science; Sheila Jasanoff is probably the person to read if you’re interested. Imaginaries are a sort of collective headcanon for things in the world. So we have an imaginary of artificial intelligence, which I’m going to talk about quite a bit. We have an imaginary called London, we have an imaginary called gender, we have an imaginary of ‘our people,’ nations have imaginaries. So these are all sorts of constructs of certain tropes and myths and stories and visions that we all collectively hold and that often can be quite tricky to pin down. And I’m very interested in how design constructs imaginaries, both to build and reinforce mainstream imaginaries but also: how can we use design or material practice to unpick these imaginaries as well, to challenge them, to question them and to sort of disassemble them and show their parts? Which is what this little talk is all about.
So very quickly, who I am and what I do. I’m Design Futures Lead at Arup Foresight. My job is to think about the future of various things for the sake of Arup and our clients, but particularly I lead on using design methods to do that, and we have a small, growing design team who use design both to produce certain types of outputs like exhibitions and films, and as a research technique. Before that, I was an academic for a long time and I also ran a curatorial and research project called Haunted Machines with Natalie Kane. But a lot of what I’m going to talk about is mostly related to my PhD work, which I’m doing at Goldsmiths.
So I’m going to kick off with a concept called future foreclosure. This is the idea that we’re not very good at thinking about the future; the futures we construct and the futures we imagine are actually quite limited, and increasingly so. This shot is from Star Trek III: The Search for Spock, which is not as well studied perhaps as 2001: A Space Odyssey for its set design. But still, Gene Roddenberry, the creator of Star Trek, put a lot of effort into designing the detail around the Starship Enterprise, including this sign on the transporter, which says ‘No Smoking.’ And I love this because it indicates a world in which the people of the 1980s were able to imagine a future in which you could dematerialise and rematerialise somewhere else completely through this amazing technology – you could jump from a spaceship to a planet or a ship to another ship – but everyone would still be smoking. It shows how we don’t question accepted social norms.
And then there’s this idea that we’re in a kind of bleak place for the future. This is a quote from David Runciman, who’s a Cambridge political scientist. And he was reflecting, I think, on what happened in 2022. And he said…
We were talking about the metaverse earlier as perhaps one of the greatest examples of that. It’s just bootstrapping technology onto a financial instrument and hoping for the best. So there’s a sort of cynical lack of imagination about what the future might be but also a sense of inevitability. My research looks at how AI has been socially constructed, and design’s role in that. One of the really fascinating things about AI is that everyone has a concept of it. Everyone’s seen films and video games, everybody’s heard hysterical news stories. But that also creates its own problems, because it gives AI this sense of inevitability. So these scholars reviewed, I think, 200-odd ethics guidelines for the use of AI across governments, nonprofits and companies and said…
So that imaginary of an inevitable AI coming is so secured that all of the discourse is just about limiting harms. And this perceived inevitability – the idea that nothing can stop the status-quo AI future and no alternatives can be imagined – blinds us to the harms. Sun-ha Hong talks about the idea that…
So there’s this idea that a techno-future is inevitable and foreclosed but it wasn’t always so right? There were times where we’ve had alternative visions for what technology could be. Anyone seen David Cronenberg’s Existenz?
[Group of Gen Zs doggedly keep their hands down] Oh boy. Okay. So this came out the same year as The Matrix, 1999. And The Matrix has obviously become a real hallmark of what people now think about as a retro future: the idea of immersing ourselves in a simulated reality and an artificial intelligence that takes over. But Existenz was looking at the idea that we might be carrying around these bio computers or ‘pods’ that we plug into, and exist in a different sort of virtual reality that existed between these bio pods. At the time, that was another future imaginary that people had, and yet for some reason it is now seen as unreasonable and ridiculous while The Matrix is often held up by journalists as a potential future reality.
So, why do some imaginaries, like an AI apocalypse in The Matrix, take hold and others don’t and what role does design have in the success or failure of them?
Minority Report by Steven Spielberg came out a few years later, in 2002. It’s a hugely influential film on the world of design and technology for reasons I will go into in a second. And it’s a really interesting case study in the cultural impact of one film over a huge collective imaginary of what technological futures are. Minority Report (based on the Philip K. Dick short story of the same name) takes place in a future where we’re able to predict crime before it happens, and so there’s a ‘pre-crime unit’ that arrests people before they commit the crime. But there are lots of other technologies in it, like a gestural interface, augmented reality, eye tracking and facial recognition. Almost all of these technologies were speculative at the time, captured here.
And so Minority Report becomes a powerful comparison point for journalists and investors around technology for years to come. Rather than opening us to alternatives or presenting a critical question, Minority Report is used as a story, a metaphor of a particular technofuture, that drives billions of dollars of investment to technologies like gestural interfaces, AI and, worst of all, predictive policing.
We might think that the role of science fiction, fiction and cinema is to broaden our future imaginaries and to help us challenge the status quo, but as philosopher Fredric Jameson said…
Jameson makes a really interesting suggestion that the role of futures in science fiction, in most cases, isn’t to broaden our imagination and throw in new ideas and new questions, but to convince us that we’re just in the past of a future that’s inevitable and already pre-decided.
Minority Report becomes incredibly influential. It has a whole Wikipedia page dedicated just to the technologies that are in Minority Report. The production team worked with loads of researchers at places like MIT and all sorts of technology companies to develop these gadgets and gizmos. And then for years and years and years – we’re talking over two decades now – people have been trying to recreate the technology in Minority Report or using it as a metaphor, a framing device, for real-world technologies.
But there’s a very good reason why Minority Report and artefacts like it work, why they stick in culture, and it’s because of design. David Kirby has really analysed the use of design to convince people of certain worlds through world-building. John Underkoffler was the guy who designed the gestural interface and then went on to set up a multi-billion-dollar company, based on the excitement generated by Minority Report, to build it. But obviously it didn’t work; we don’t have them…
All that is to say that the reality, believability and tangibility of the designs, for John Underkoffler and actually for Minority Report more broadly, is what makes them stick; the reason they were enticing is because they seemed somehow grounded in reality. And there’s lots of detail in that film to bring that out. For instance, there’s a part where, as Tom Cruise is swiping across the interface, there’s an error and one of the windows doesn’t come with him; he has to go back and pick it up. Those sorts of details bring out the believability of it.
I’m not going to go on about Minority Report any more. I just wanted to use it to show this connection between imaginaries, design and futures: how the futures we imagine are informed by and inform the stories we tell, and how design is a sort of connective tissue that brings both fictional and future imaginaries to life and makes them convincing. Because, despite being a complete fiction, as a result of Minority Report we’ve seen probably billions of dollars invested into these speculative technologies at the cost of less glamorous or profitable things like climate intervention or medical science.
So now I want to talk about the way that design is used to construct imaginaries, and the way that you can then start to unpick, unsettle and challenge them through critical practice. And this involves the use of metaphors, charisma, and tropes that draw on science fiction. Earlier on I mentioned Haunted Machines, which is a project I ran with my friend Natalie Kane, who’s a curator at the V&A. We started this in 2014 and were really interested in the question of why so much of the emerging technology of the time – voice assistants, Internet of Things devices and so on – was wrapped up in occult language and metaphor.
Once you really start to get into the weeds on this, it’s more than just colloquial and coincidental. We did lots of work here and there’s lots of great social science about this. Essentially, magic is a causeless technology: you push button, get thing; there’s no work that has to be done or labour involved in that process. Secondly, it associates the technology with secret, hidden or forbidden power, which also goes to making technology really aspirational since power, speed and control are so revered in society. But it also reveals something about how we imagine technology.
We perhaps like to think that technology and innovation are closely aligned to science, but scholars have really shown that technology and innovation respond to deeper, more human and existential desires and fears, dressed up as science in order to gain credibility. For instance, Anthony Enns explored the ongoing hold that psychotechnologies (brain-reading technology) have over the imagination and innovation space (see recent Neuralink news)…
Most technologies aren’t really answers to things; they’re charms to make you more powerful, more beautiful, help you live longer or give you access to secret knowledge. Anyone who’s watched Mad Men would know this, but I think we assume that somehow the development of technology is a rational science that doesn’t tap into desires or emotions, that it’s based on, like, scientific principles. And so, like any field that promises the solution to spiritual, existential crises, it fills quickly with charlatans and criminals.
[A quick game of ‘Name that Criminal’ ensues.]
Charisma is really important here; there’s a reason that we keep revering and looking up to these people. William Stahl, who really analysed the enchanting effect that technology had with the early introduction of the PC, talked about the importance of charismatic figures, sage-like or even messianic, who became idols and prophets as a result of this narrative framework that developed around technology as secret, powerful and tapping into needs and desires.
So as well as metaphors of magic, power, speed and control, and great charisma, technology draws on pre-existing tropes and imaginaries we have in order to slip into mainstream acceptance.
So this is Ai-Da, which is claimed by its creator, Aidan Meller, to be the first artist robot in the world. And obviously, like a lot of these projects, very little is given away about how it actually works, who built it, or what the actual algorithmic processes behind it are, but a lot of work goes into presenting and framing it. In this case, for a hearing in the UK parliament on the future of the creative industries, it is female-presenting, it presents as juvenile, and it’s dressed in these overalls and dungarees to look and feel like a creative or artist as well as, again, somewhat juvenile. This scene is fascinating for many reasons and I have written thousands of words about it. The decision to invite a machine to testify before parliament (which they actually say they can’t admit as real evidence) is ludicrous. Its answers are also pre-recorded, so the whole thing is a performance, but you wouldn’t give a tape player the same platform. The really interesting thing, though, is that very quickly the politicians and legislators fall into step treating it like a real human being; they start referring to it as ‘she’ and ‘her,’ and they ask it questions directly. So it’s a really fascinating example of how the design choices around the presentation of what is essentially an algorithm in a box elicit empathy, feeling and sort of status quo relationships from these legislators.
There’s also a fascinating part where Meller talks about some of the engineers who worked on it, who said that really it’s the worst form of artist robot you can imagine, right? Because if you want a functioning ‘artist’ robot that produces paintings like Ai-Da, just use a robot arm. Humans are very complicated and messy, with lots of limbs that don’t really do much in terms of art-making.
So why insist on this human form? Obviously, the whole thing is to draw on and reinforce an imaginary that we’re super familiar with from science fiction: humanoid robots displaying human-like behaviours. This draws on empathy to make the audience feel an emotional connection (again, those deep desires and fears) and also makes it more easily ‘consumable,’ as the audience are familiar with this sort of setup from TV and film. There’s also a second imaginary at play which is perhaps more powerful but less obvious, and that’s nation-building, where scholars have shown how states and governments are keen to associate themselves with technology to appear future-facing and high tech.
Another great example is the AlphaGo documentary from DeepMind. So in 2016 DeepMind beat the world Go champion, Lee Sedol, and they made this documentary about it, which obviously gives DeepMind the opportunity to frame the whole narrative around what they’re doing. And because this is film, they also draw on tropes. So on the left, for instance, is a scene where Lee Sedol realises he’s about to lose, and they have this long-lens shot of him outside smoking a cigarette over melancholy violins. And on the right is a scene from the very end where one of the advisors is playing with his daughter at sunrise in a vineyard and saying how excited he is about the AI future. The whole thing is very well done and choreographed to tell a very familiar David-and-Goliath story of DeepMind – a team of dozens of genius computer scientists, owned by Alphabet, one of the world’s largest and most powerful corporations – beating a Korean man at Go. Which on the face of it is an outrageous framing, which is why there’s lots of discussion at the beginning of how complex Go is, how it’s ‘uncomputable,’ as a way of showing how DeepMind have not only beaten the human intuition and gestalt that Go apparently requires but also this insurmountable mathematical problem.
This connection to games is also really interesting; around the same time IBM put out Watson to win at Jeopardy! and there’s a deep history of AI, computers and chess. The mainstream imaginary of AI (and AGI in particular) involves AI being as good as, if not better than, a human. We already have computers that can model whole-Earth weather patterns, or simulate huge crowds moving through space, or image distant galaxies – things that no human can do – but thanks to collective imaginaries we’ve set the benchmark of ‘good’ AI as one that has the intuitive and gestalt properties of human thinking. Which is why these folks are so focussed on making AI that can make art or win games; it is a way of disenchanting these human activities and skills, showing that they are calculable, controllable and computable. It’s profoundly nihilistic.
The final thing here is the ‘so what?’ ‘So you’ve built a computer that can win at a game, so what?’ And this is where there’s usually a clever rhetorical swipe and the story turns speculative. You’ll often find here, as you do in AlphaGo and in the shorter Watson documentary, an extended claim that victory at this one very confined benchmark equates to curing cancer, solving climate change or alleviating poverty. Even though scholars have shown that there’s little result from these displays and performances other than increased funding and hype.
And then you’ve got things like the design of power and complexity. This is another big trope in AI. This is Alexander Nix, who was the head of Cambridge Analytica, who were famous for stealing a lot of data from Facebook and claiming to be able to predict and influence the outcome of elections. Studies since have shown they had no such power whatsoever. They did steal thirty million Facebook profiles, but they didn’t have anything fancier than a big Excel spreadsheet. The point is, when you see him or read about him, he’s always described as very charismatic. Like our previous charlatans, that presentation – in this case of a slick, public school guy who’s well connected – is really important. And the thing that Cambridge Analytica really relied on to bamboozle people is scale and complexity. All of their comms are big numbers, complex terms and ideas. And whether it’s in cinema or in real life, this is often used to construct an AI imaginary: this idea that somehow it’s bigger than us, and we can’t possibly comprehend it. It was used by DeepMind to describe Go as uncomputable and beyond the comprehension of a human. This apparent complexity is used as an invitation to ignore how it works and, again, to secure that secretive, magical power.
Then there’s journalism and media. If you Google ‘artificial intelligence,’ you get these humanoid figures that are usually blue with lots of lines going everywhere accompanied by numbers and data. This is no true representation of AI, and lots of groups are exploring alternatives, but it is a pretty dominant aesthetic metaphor used in mainstream press and reporting which goes to secure an imaginary that…
…is also reinforced in cinema. We can see, and are likely all familiar with, how the same aesthetics are recycled, because if you’re going to explain AI to someone, they’ve probably seen Iron Man so you can use that to build your story on. It’s easy and convenient to hijack those aesthetics for stock imagery and sort of loop them back through culture over and over again. But of course, at the same time, as we saw with Minority Report earlier, real-world technology is shaped by this set of fictions and stories.
[Pause for chat]
So that was a whistle-stop tour of how imaginaries are constructed and how design is used to build them. I want to quickly look at how they’re disseminated and what that means for them. Not only are these imaginaries created and reinforced, but they also have to get out there into the world. Has anyone come across the Shazam Effect? So, in 2012, a bunch of Spanish researchers sat down to answer the question ‘does all pop music sound the same?’ And they found out it did…
So later, Derek Thompson coined the Shazam Effect to explain this: that through things like Shazam, Spotify and these increasingly available data platforms, record companies had loads of data about what people like, which they could then use to produce more music that conforms with what people are listening to. And, as science fiction author Bruce Sterling says: ‘what happens to musicians happens to everyone.’ And so…
We see this effect in things like International Airbnb Style, as coined by Laurel Schwulst. When you’re on Airbnb, you’re trying to attract people to stay at your property, and so you look at the other properties that are successful and you design, present, photograph and light yours in the way that’s been successful for others, which results in this homogeneity.
We see the same thing in cars with the Wind Tunnel Effect. There’s so much software and regulation around the design of cars that when you run them through the simulations required to make them as efficient as possible, meet the fuel standards and so on, you basically end up with slight variations on the same forms.
In architecture we also see the same thing, thanks to industrial image production. This is Crystal CG, who produce renderings for architectural studios all over the world and have produced hundreds, thousands, of images. So they know what worked previously. They know what clients like, they know what works in a particular country or city or region. And so you end up with a homogenisation of style and design and form as this centralisation of production makes the process less artistic and more industrial.
And these renderings are particularly important because I would contend that these images, printed on eight-foot-tall hoardings and plastered all over the city, are the most common and everyday way that most people come across the future. They might read the news or watch a film, but every day they’re living, working and travelling around these massive, highly saturated, gorgeous images that obscure the real building site and, importantly…
…distract from the actual place where they have a voice in the future of their city, which is in planning notices.
I think the thing to take away from all of this, as we move on to talk about what you can do and how you work as critical practitioners, is to know that design in this situation is never neutral; it carries forward pre-existing tropes and assumptions from culture, imaginaries and fiction and embeds them in new objects and technologies. So things like real-world AI are designed, by choice, to conform to expectations from fiction and the imagination, even though those expectations are often wholly inappropriate for dealing with real-world problems. Madeleine Akrich writes…
So I want to get on to critical practice, because that’s why we’re all here.
So an assumption that I’ve seen a lot in AI, and again have written about quite a bit, is the idea that AI will somehow democratise imagination and creativity. And I love this interview with David Holz, the founder of Midjourney, who says…
This idea, which again is profoundly nihilistic, that creativity, criticality and imagination are just about making the right tool, is provably false and very similar to claims that social networks would liberate, democratise and educate. At the same time as claims are made of ‘democratising’ creativity, these tools and platforms are foreclosing imagination to make it conform with what AI developers want.
The title of this talk (and it’s related to my thesis title as well) is ‘Design and The Construction of Imaginaries’, which is an allusion to an amazing paper by Carl Di Salvo, ‘Design and the Construction of Publics.’ I always point to this work as your one-stop shop on how critical design works. He suggests that design has a role in building public discourse, not just solving problems. He says…
Di Salvo says that publics assemble around issues. This issue might be a new building, it might be a broken toilet, it might be being a parent, it might be unaffordable rent. And very often those issues are designed, such as with a building, app or service. So Di Salvo extends this and says that the way critical design works is by inventing the things for an issue to assemble around. And when you’re talking about things like AI, which are quite ephemeral and tricky to pin down, there’s a powerful role for critical practice to materialise the issue so that people can then assemble around it and talk about it.
So one of the most well-known projects that does this, and one that Di Salvo writes about, is Tom Thwaites’ Toaster Project. Tom Thwaites set out to answer a simple question, which is; ‘can I make a toaster?’ He had to make everything himself, smelting all the metal and forming all the plastic, and created a great series of YouTube videos which I think became a little documentary and a book. So why? I mean, we already have toasters and, as Di Salvo says, he’s not solving anything. The main thing is how the project reveals how much we take for granted the incredibly complicated supply chain processes that go into a really simple object. Di Salvo calls this a ‘tracer;’ it traces the outlines of a thing that’s otherwise invisible, which is the whole supply chain, the industrial set of processes and technologies that produce this really simple object. It reveals them to the audience by saying this is a ludicrously complicated, globalised, exploitative and wasteful product. So by revealing this thing that’s otherwise invisible or obscured he brings to the front an issue which is otherwise quite difficult for people to understand; supply chains, materials, all that kind of stuff. So this is what we mean by using design to unsettle or untangle certain tropes.
So Di Salvo calls this a ‘tracer,’ but the other type of project in critical design practice is what he calls ‘projectors.’ You’ve probably heard of speculative design, and Tony Dunne writes in his thesis…
Di Salvo says that these projects work by showing how things might be otherwise, to reveal the way they are, which can otherwise be hard to see, because it’s hard for us to challenge our assumptions about life; think again of the Star Trek ‘no smoking’ sign.
So this project, for example, is called Robots, but would you think of any of these objects as robots? They all have characteristics of robots. The one that looks a bit like a lamp has to be plugged into the wall in order to work, so it can’t move around as much as it would like; the L-shaped one is my favourite but has to be held at a particular angle in order to work, which requires it to basically rest in the crook of your arm. All of these things are designed to have the behaviours we might assume of robots; they have movement, they have autonomy, some sort of agency, arguably, but they look nothing like ‘robots.’ They don’t look like metal humanoid figures, or little dogs or machines, but by projecting forward (or sideways, let’s say, because it’s not suggesting a particular time) and saying ‘this is how things could be otherwise’ it reveals the assumptions that we have taken for granted; all those tropes that we talked about earlier that are constructed around AI and technology.
So that’s tracers and projectors, used to reveal hidden assumptions. But I also think the hack or exploit is a really important tool. I’ve spent far too long going down the speedrunning rabbit hole, which is completely fascinating. Speedrunning is simply people competing to complete a game as fast as possible by any means possible. And the great thing about that challenge is that speedrunners don’t see the video game as the designers intended it. They don’t see it as a world in which a narrative and certain mechanics have been designed; they get at the layer underneath, the actual construction of it, the architecture of the game engine itself, and try to find hacks and exploits in the underlying mathematics to find a way through. And these hacks require really explicit and sophisticated knowledge of game engines and architecture in order to spot the exploits.
And I like seeing the same approach to technical systems in critical practice; that ability to see beyond the thing as it’s presented and unpick the reality underneath. For instance, Gabriel Goh took Yahoo’s open-source not-safe-for-work image filter and turned it all the way down to zero to ask; ‘what’s the least pornographic image possible? What if we undid that algorithm, laid it out and turned everything to zero and then rendered what it would give us?’ You end up with quite pastoral and bucolic scenes. You sort of get the sense of classical architecture, green, beaches, sky, that kind of stuff. And this is a similar sort of tracing project that uses exploits to reveal a system. It’s taking the thing that we’re given and saying, ‘I’m just gonna lay out all the pieces of it and try and figure out where they come from and the decisions that were made.’ Because someone programmed this; it’s not accidental. Someone decided what the least pornographic image possible should be and trained this system on it.
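To give a flavour of what ‘running the classifier backwards’ means in practice, here’s a minimal sketch (in PyTorch, and emphatically not Goh’s actual code; `model` stands in for a pretrained network like Yahoo’s open_nsfw): instead of classifying an image, you treat the pixels themselves as the thing to optimise and push the model’s score towards zero.

```python
# A minimal sketch of inverting a classifier (not Gabriel Goh's actual
# code). The image pixels become the parameters, and gradient descent
# lowers the model's NSFW score: the 'least pornographic image possible.'
# `model` is assumed to map an image tensor to a probability in [0, 1].
import torch

def least_nsfw_image(model, steps=500, lr=0.05, size=224):
    img = torch.rand(1, 3, size, size, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(img).squeeze()  # scalar NSFW probability
        score.backward()              # gradients flow back into the pixels
        opt.step()                    # nudge the pixels to lower the score
        with torch.no_grad():
            img.clamp_(0, 1)          # keep pixels in a displayable range
    return img.detach()
```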
An important note here is how, like a lot of good critical practitioners, they document how they do this, because that’s really where the knowledge is. It’s not in the thing itself; it’s in the journey to get there, and that’s really important. They’re all documenting what they did and what they revealed.
I’m going to talk about one of my own works really quickly, which is Augury, a project I did in 2018 with Wesley Goatley, who’s a sound and data artist by training; we’ve worked together on lots of projects. Augury was an ancient divination technique used by the Greeks and Romans which involved reading the flight patterns of birds. They’d basically say; ‘all the birds are flying West, therefore we must go to war’ or ‘the birds are flying East, therefore we must go to war.’ It usually ended in ‘we’re going to war.’ Fundamentally, it was a belief that the birds were messengers from the gods.
So we created a sequence-to-sequence machine learning system trained on ADS-B data of the flight patterns of planes within a 50-kilometre radius of London over about four or five weeks, paired with the latest tweets about the ‘future’ from London at the same time. We trained this system so that once it was in the gallery, if you asked it for a prediction, it would just give you complete garbage, because there’s absolutely no association or causal connection between these data sets. But what we wanted to do was untangle the way big tech was talking about AI as almost prophetic in its power while obfuscating the way it actually worked. And so the point of this satire was to say, ‘given how little we know about how these corporate machine learning systems work, they may as well be reading the flight patterns of birds and planes.’
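For the curious, the shape of such a system is roughly this (a minimal sketch, not our production code; the feature sizes and names are placeholders): an encoder reads a flight track, and a decoder conditioned on it emits tweet-like tokens. The joke, of course, is that there’s nothing real for it to learn.

```python
# A minimal sketch of the shape of an Augury-like seq2seq model (not our
# actual code). A GRU encodes a plane's track; its final state conditions
# a decoder over tweet tokens, producing 'prophecies' from noise.
import torch
import torch.nn as nn

FLIGHT_FEATURES = 3   # e.g. latitude, longitude, altitude per timestep
VOCAB = 10_000        # size of a tweet-token vocabulary (illustrative)
HIDDEN = 256

class Augury(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FLIGHT_FEATURES, HIDDEN, batch_first=True)
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, flights, tweet_tokens):
        # Encode a flight track (batch, time, features) into a single state...
        _, state = self.encoder(flights)
        # ...and use it to condition a decoder over tweet tokens.
        decoded, _ = self.decoder(self.embed(tweet_tokens), state)
        return self.out(decoded)  # logits over the 'prophecy' vocabulary
```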
And, as I said before, documenting, reflecting, and talking with a great collaborator about what you’re doing, the choices you’re making and the thing you want to unpick is super important. Especially as this was one of our first times working with machine learning. As we were doing it, it was revealing to us something about the way computer scientists and engineers think about things like corpuses, data sets, epochs and so on. What are the corners cut? The conveniences made? The assumptions inscribed in these tools?
So if that’s a ‘tracer’, what can ‘projectors’, alternatives, look like? This is QT.Bot by Lucas LaRochelle. LaRochelle ran a project for years called Queering the Map, gathering stories of queer experiences all over the world and pinning them to Google Earth. Basically, people would give an anecdote about how they met their partner, or maybe a negative experience as a queer person, at a certain place on the map. This map is now huge; it must hold tens of thousands of different people’s testimony. LaRochelle then trained a machine learning system on that data, on the stories and on the images of the places, to generate an arguably queer AI, fed with data that is very different from a normative, heteronormative data set.
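I don’t know the detail of LaRochelle’s actual pipeline, but one plausible shape for the text side of a QT.Bot-style system (sketched with Hugging Face’s transformers; `stories.txt` is a placeholder for the gathered testimonies) is to fine-tune a small pretrained language model on the archive, so its outputs are steered by that corpus rather than a generic web one.

```python
# A hypothetical sketch, not LaRochelle's pipeline: fine-tune GPT-2 on a
# file of gathered testimonies so the model generates from that archive.
from transformers import (AutoModelForCausalLM, AutoTokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chunk the testimony corpus into fixed-length training blocks.
dataset = TextDataset(tokenizer=tok, file_path="stories.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qtbot", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```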
And another speculative trajectory for AI that lots of people are exploring is how it might enhance our relationship with the non-human world, such as with the Ecological Intelligence Agency from Superflux. Sascha Pohflepp captured this idea for me: that what AI gives us, and should be giving us, isn’t the ability to replicate things that humans can already do but to extend and enhance our ability to understand things that are too fast, too slow, too huge or too small to be comprehended by us. I guess that’s where a lot of the science is pointed, but what if that was the imaginary we held too? Not one of speed, power and control but of understanding, empathy and care, enabled by the ability to crunch vast data down to the human scale.
So what can I leave you with as critical practitioners? I hope I’ve shown how imaginaries are made and disseminated and how design is used to reinforce them, but also how critical practice can use methods like tracing, projecting and exploits to reveal and unpick them, to assemble new publics around the ephemeral and loaded imaginary of AI and other things.
I called this little section ‘making traps’ because I think that’s really what all of this is about. Whether you want to convince someone of a science fiction future or invite someone to challenge their assumptions, you’re making a trap. Benedict Singleton, reflecting on Vilém Flusser, writes that…
Fundamentally, no one is creating anything new. The trap maker simply reengineers existing tendencies towards a new outcome. Think about a simple rabbit trap: you bend a branch to store its elasticity, you know the rabbit wants to eat a certain type of bait and follows a certain path, and thwack! Rabbit stew. There’s no fundamentally new thing here. All of these futures are traps; drawing on a mastery and sophisticated understanding of things that already exist – aspirations of power, control and speed, stories and fictions of robots and all-powerful machines – and pointing them in a new direction that’s favourable to whatever imaginary you want other people to buy into. The question is whether you build traps that keep us in the status quo or ones that break us out of it.
Thanks.
Recents
I contributed to the Service Design College’s Futures and Foresight course on some of this stuff, but in a more applied way. You can now go and look at it if you want, but I think it’s behind a paywall.
Short Stuff
Very short. Trying to spend less time reading newsletters and more time writing the PhD, even though this transcription took probably four hours.
I haven’t come across anything gushingly positive on the Apple Vision Pro (but then I don’t tend to read uncritically gushing journalism). Here’s some from Paris Marx about why it doesn’t make sense.
Max Read suggests that Kyle Chayka gives too much weight to algorithms and feeds in his analysis of why everything is the same.
I’ve been reaching out to catch up with people now I’m emerging from my convalescence, including folks I haven’t spoken to since before the Cov. If you wanna hang out, let me know and let’s hang out a bit. Ok, love you bye.