At least three people said ‘I love your blog’ in the last week and each time I felt crestfallen that I’ve been so lax with this obligation. I really do try and do everything. I think I’ve seen films where, in sound recording studios, some of the sliders on the big desk of sliders move on their own; I’m not sure why this is, I assume it’s something programmed in, certainly I’ve never been around audio equipment that sophisticated. My own musical recordings were all done with a combination of GarageBand and crates of Kronenbourg. Anyway, that’s how life works sometimes: You turn something up at one end and all the way down the other end of the desk a slider is automatically tweaked down.
Making Five Stories From The Distant Future
For the last two weeks I’ve been spending evenings and early mornings working on a series of renders for the opening keynote I gave at Orgatec in Cologne last week. The whole thing was a massive faff but quite fun and diverting at a time when I really needed something creative to call my own. Sure, I could have thrown some slides together for a 25-minute witter with some data and ‘trends,’ but having spoken with Robert, who was organising the whole shindig, I felt quite inspired to go the extra mile and turned the research I did into a series of short stories from the future. At some point I’ll figure out how to share these, but I wanted (realising that people who read this website are actually interested in this stuff) to talk about how I went about doing it.
To start with I had three constraints: I knew I wanted to write and tell short stories (just because); I knew I had about 20 minutes in a 35-minute talk to do this, so I worked out I wanted roughly five short stories of about four minutes each; and I knew I wanted them all to be connected somehow. The next thing I knew was the big picture of the world. I’d already sent off a blurb and pinned down a couple of things that would shape the world with ideas like degrowth, the end of high finance and speculation, the end or weakening of global norms and institutions, and the stuff we know about like climate and demographic change. The final thing was knowing the audience might be futures-curious but like as not unfamiliar with most of these concepts.
I started by throwing those big ideas down on a piece of A3 paper and imagining what connected them. For instance, in a world of managed degrowth, people might want to kick against it and you could get a subculture of people looking to engage in high finance and speculation, in the same way that living a fully sustainable lifestyle today could be seen as a subculture. You might also start to see a slowdown in global logistics as a result of climate, degrowth and the end of global norms, so rather than a world of next-day delivery, everything takes a long time to move. Between those two there’s an obvious conflict: the drive for speed, power and control against a sluggish, uncoordinated and messy physical reality. This was the first one I thought about and it led me to sailing ships, but the rest also flowed quite quickly once I started imagining what occurred at the intersections of different drivers and ideas.
I spent an hour or so doodling away and thinking about the little visuals that emerged, and that actually became the backbone of the whole thing. I started by modelling the scenes I was reasonably confident about (like the sailing ships). For each scene it was about making the familiar unfamiliar; the uncanny. For the ships, for instance, I used a boxy cargo ship that might be easily recognised but then put Chinese junk-style sails on it, copied from a modern sailing vessel. I wanted each scene to be recognisable but have something different: diesel ships with sails, an office with a playground, a kitchen with 18 seats, etc. This is the starting point for most speculative design: finding something materially familiar and normalised and twisting it so that the audience is forced to reconcile their expectation (diesel ships have engines) with what they’re seeing (these ones have sails). So it’s also important that both those things are recognisable. Where I was introducing a brand new element – like the ‘d-rhizome’ in the home office scene, an AI-augmented alternative to the Internet that is fully node-based and inspired by slime moulds or mycelium – it would have to be explained in the story.
From here the stories and scenes developed in tandem, each prompting ideas for the next. Some stories were easy to flesh out to bullet points and pull together, like the Bangladeshi immigrants running a semi-autonomous Norwegian vineyard as part of an international soil restoration programme for migrant workers; the pieces just sort of fell into place. Others took more forcing.
The rooftop scene, for example, is about a building caretaker in a building fitted with so much biomaterial and biotech that it’s almost a living thing, so I wanted the role to feel less like a service and more like a doctor: someone who is widely respected and admired for their expertise and time. This is an idea we explored a little in the Future of Making work that went to Singapore the other week. I knew I wanted the top-down view of the roof as a sort of satire of green roofs. So I put cows on it. If you’re going to cover a roof in grass you might as well have cows, and you might as well use their waste to fuel a bioreactor. And the association of the machines with the animals opened the story up beyond the technological to something more like a farmer who cares for their animals, except it’s a building.
I worked these out by sketching the scenes over and over again in my notebook, adding elements and writing notes on how they might work and how the character relates to them. I didn’t get to writing the prose of the stories until I was literally on the train over to Cologne. Luckily, my head was so in the world that it all came quite quickly. I settled on a model in which, for each scene, a character reflects on how they got there: some exposition, some weirdness. I actually ended up using Copilot quite a lot to figure out details like names, locations, species and so on, which probably saved a bunch of time hunting for an endangered species of bird that eats berries and migrates through Germany to the Arctic.
On anti-AI aesthetics
A quick note on the style. You might note at the top of that paper it says ‘like Frostpunk.’ I knew I had a lot of work to do so I wanted to reduce the workload as much as possible. So, inspired by the game, I adopted three tricks. First, I tried to stick to fixed views so that I could keep the lighting simple. Apart from the dinner scene, no camera moves through a scene, so I didn’t need to worry about what was ‘behind’ the camera and could build each scene like a set. Second, I used simple flat images as parallax backgrounds. The rooftop is a great example: the background here is just a flat image of a street. Finally, I kept the style loose and low-poly where possible. I didn’t hit this rule all the time. Ironically, the more time-pressured I got, the easier it became to just pull out pre-made assets from BlenderKit. So while the ships scene is all DIY, with some cardboard-cutout UV mapping, by the time I was doing the office scene I was basically just modelling core bits like the room, the weird screen and table, and the vertical farm. The rest is all found assets.
I realised quite late that, as well as being a time-saving effort, these aesthetic decisions were about intentionally distancing the images from the new generative AI aesthetic. I didn’t want to do over-stylised photo-real images with lots of soft blur because I wanted the audience to know that I had made these images by hand, that it took effort and labour, and that maybe in that effort and labour I had the opportunity to think about these future scenarios in more depth. By moving things around, working out how space might function, and designing the workarounds people might have to make for their work to fit their lives, I would learn a lot more about the subject, and that learning informs the stories.
I know that generative AI image-making has become a popular speculative design tool but I’m pretty sure it’s not actual design. When you put in a prompt for ‘a future retrofit commercial office where people are living in apartments and spending their days trading in high finance derivatives around a massive table’ you’re not actually designing anything. You’re really asking the machine to elicit your own head-canon from a cultural median for you. Sure, that thing has probably never existed before, but you’re not really making anything, just skewing a graph.
Design that is also research is about what we learn in the actual designing of things: of keyboards and desks and tables and chairs and lamps and switches. In making those things and thinking about the people who will touch and use them, you generate knowledge, understanding and insight about the future. If you’re just taking your preconceptions and getting a machine to make them ‘real’ then have you really learned anything? One reason these renders take so long is that even adding a chair to a desk scene forces me to ask questions like: how long does this person sit? What kind of things do they like? Are they proud of their work? What else might they need to do? How might their personality be reflected in the chair? And in exploring and answering those questions I feed the knowledge back into the stories and the world-building.
Points of failure
Of course, none of these projects ever go right. Even after so many years of honing my Blender-craft and convincing myself I had plenty of time, there were problems. With about a week to go I lost my notebook and, with it, all the sketches, notes and annotations I had been pulling together for each scene. I’m pretty sure I dropped it somewhere around Central Saint Martins at an event, but despite a couple of visits it never showed up, so I had to remember a lot of the ideas I had for the last three or four scenes. The second thing was that the PC I was remoting into to do the rendering went offline and took about a week to come back. So I had all the scenes modelled and backed up, but the clock was ticking on the actual render time. I ended up sinking about $300 into cloud rendering to meet the deadline. (I missed the deadline, but got it in before the talk, which is what counts.)
And of course, nothing ever looks like you want it to. Each of the renders except the vineyard, rooftop and forest has multiple versions, and even those were re-rendered a bunch to fix bugs or style problems. The original kitchen was just some tables arranged end-to-end with a cooker at the head. It felt like a big party, not like a kitchen purposefully set up for a large group to eat together regularly. The first office was basically just a bullpen with holographic screens, which I threw together at 2am one morning and, in the cold light of day, rejected as unimaginative and clichéd. The idea of having it as a literal live/work retrofit with apartments in a commercial building came later. So really I ended up producing about 16 rendered animations of about two minutes each to get to the final seven.
Finally, and a critical failure for someone who claims to be a designer: I didn’t get to do any testing. There simply wasn’t time to get someone else to cast an eye over the stories. I was writing and editing them right up to the morning of the keynote itself. You should always give time to have someone else edit your work because, though I may know this world inside and out, no one else does, and afterwards several audience members commented that it was ‘very dense’, meaning, I imagine, that a lot went over people’s heads when spoken aloud rather than read on the page. It also probably meant that I wasn’t as confident in presenting them as I might have been with more dry-runs, even if I did rehearse the whole thing four or five times.
For example, introducing the d-rhizome, this new type of Internet which prioritises real connection rather than command-and-control, was tough. Think about a classic science fiction book: usually it only introduces one new idea (e.g. there’s time travel, plants are an alien species, spiders are the apex species) but everything else is broadly the same (e.g. people want to preserve their lives, get wealthier in some way, save their loved ones, whatever). But science fiction authors get a whole book and your total attention to introduce and explore that idea. I had five minutes and a trade conference keynote, so I’m not surprised some of it was lost.
Other than that, it’s just all the stuff that goes with anything you’ve worked super hard on: you notice all the things that could be better, but I’m long enough in the tooth to know that that’s life and you just have to move on. Anyway, yes, I will find a way to tell you the stories and show you the full renders. It’s on my to-do list with everything else.
Recent and Upcoming
Couple of recent and upcoming things.
- I’ve taken up a teaching role at the London Interdisciplinary School teaching design. I’ve been following the LIS since it launched and been really interested in what a genuinely interdisciplinary education looks like so this is an interesting little peek inside.
- I took on a role as an industry champion at the Creative Industries Policy and Evidence Center to advise and consult on the future of the creative industries.
- 22nd November: I’m going to be at the next Design Declares! event with a host of amazing and luminary folks. Really quite worried about what I’m able to bring to that party.
As I said, I’ll find a way to document the Orgatec stories. The other big one was the opening keynote at the Design and AI symposium hosted by TU Delft. I’m not sure if the talks were recorded; if not, I will also seek to document that one, but it’s basically a PhD walkthrough with a dance in the middle. I also have thoughts about some of the other stuff that was there.
Reading
I’m significantly behind on keeping up with newsletters because of all the above work. I’ve managed to crawl and skim through about 40 or so in the last few days. There’s an overarching and exasperated message that the amount of money and resource being thrown at AI (hundreds of billions of dollars) is wildly out of proportion to the actual tangible, provable outcomes (something like 5% positive impact on various measures), which does give the impression that we’re heading for a very real bubble.
- The Ethico-Politics of Design Toolkits by Tomasz Hollanek explores dozens of ethical AI toolkits, with some choice words on ethics- and participation-washing as part of a process that is often depoliticised and fails to match the actual needs of AI development processes. These toolkits often call for alternatives, of which, he points out, there are loads, but they are ignored or maligned by mainstream AI practice.
- Microsoft’s Hypocrisy on AI. This is depressingly unsurprising but it’s useful to have a bunch of evidence. In the PhD I’m circling a bit around how claims about AI’s ‘potential’ (to do things like cure cancer or mitigate climate change) gain credibility despite being completely fabricated assertions. It’s a tricky thing to pin down; the PhD is all about how idea A (it can play games really well or chat with your kid) becomes claim B (it will cure cancer, mitigate climate change), but this article basically shows how big tech is “talking out both sides of its mouth” about these speculative claims while also making a bunch of money selling prospecting tools to fossil fuel companies. I was at an event where I tried to make this self-fulfilling prophecy point to some city leaders:
Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them. Therein lies the crux of the problem: In this new generative-AI paradigm, uncertainty reigns over certainty, speculation dominates reality, science defers to faith.
Brian Merchant has also written up a bit on it here.
- Ed Zitron on the Subprime AI Crisis. Zitron (who I like reading but can’t listen to) has been tracking the wobbly finances of big tech’s AI push for a while and frustratedly pointing out all the inherent contradictions and problems. Zitron extends the usual argument with the specific mechanisms by which AI is sold: one, it’s on you to figure out how to make it useful/valuable (more on this next week); and two, it’s sold through software-as-a-service that binds you to it. This one gave me real dot-com-bubble vibes. Consume alongside reporting on underwhelming productivity impacts.
- Wes has finally released his Stories from AI-Free Futures. He’s been working really hard on getting this album together as a continuation of Newly Forgotten Technologies, which I would broadly describe as ‘speculation on what comes after AI.’ Please do check them out.
- Paul Graham Raven interviewing George Voss here. Part 2 is now out as well.
- Apple did another launch, which is a great excuse to remember how underwhelming things are. (I would 100% get a Mac Mini though; I’ve always really liked them.)
- Meta/Facebook is returning to its roots of assessing strangers through computers.
Listening
WordPress seems to have got super slow? I have refreshed my browser a bunch but it’s just got really clunky and delayed since I was last here. Perhaps something to do with all the lawsuits? Anyway I love you and assure you that following a very unpleasant summer I am back to regular programming.