I’m spending the week in Eindhoven. I always, always like the Netherlands and, with all my various activities, have probably spent more time here than anywhere else. I’m not sure why; everyone here seems to complain about it all the time, but it’s a country that, other than the occasional imposition of extreme right-wing politics (and who doesn’t have that), seems to pretty much work, and it’s nice to be somewhere that just works. Ok, I’m in a rush to get to my next thing and I’ve landed on four things rather than five this week, so here you go.
~~Five~~ Four Slightly Longer Things
1. The Big Story is a Big Pink Dog
Of course I read the ai-2027 story and it’s really great as a story. But it’s also a story that does a good job of enlivening some pretty tedious technical stuff and some of the ethical and socio-economic concerns about AI. As with the hype over Situational Awareness last year, though, it’s worth asking penetrating questions when imbibing these things.
First of all, there are some great storytelling techniques – they allude to concepts from sci-fi about superhuman capabilities or nefariously misaligned AI. This is not to say that those aren’t real possibilities: AI programmers also watch sci-fi, so all that stuff is swirling around both the discourse and the technical instantiation of things. Does fear of a science-fiction outcome make a science-fiction outcome more likely because of the quality of its articulation?
Imagine, for example, that we’d had 60 years of films, video games and novels about how there was a giant pink dog that was brought to life by technology, or that we all had our own giant pink dogs in some utopian future, or that there was a run on dog food because of all the giant pink dogs, or a game where you and your giant pink dog companion fight criminal gangs. We’d probably have lots of assurances from vets and dog breeders about why not to worry about the giant pink dogs, about how giant pink dogs were safe. Scholars would be writing about the socioeconomic and ethical implications of giant pink dogs (‘who gets to have giant pink dogs?’, ‘what about other, non-giant, non-pink dogs?’), and some breeders would be doing some assuredly safe experimentation with genetic engineering just to explore the utopian possibilities of giant pink dogs.
In this scenario, the idea of giant pink dogs is well articulated across culture and so easy to talk about and build or not build. The point being that science-fiction is both a useful rhetorical trick to drive attention to some technical ideas but also a real (if distinct and specific) possibility. It’s important to read these things as science-fiction stories, continuations of a lineage of tropes and ideas used as a wrapper for some speculative technical advances.
The other thing missing from these is the systemic impact of context. The story basically focuses on one variable – technological advancement – which, as anyone in futures will tell you, is the most wildly unpredictable and opaque variable. Yes, it throws in geopolitics, but almost exclusively by framing it as an arms race with China (which is its own science-fiction trope), and there’s no mention of culture, climate, wider economic changes or how other technological priorities shape ‘progress.’
And was it alarming? Of course it was, because it’s a story and that’s how narrative works: it needs action, turning points, denouement. Reality is sorely lacking in redemption arcs. We should always read these things as playful ways of narrativising current technical and political ambitions and tendencies in a science-fiction wrapper for easy digestion.
So what? This is where we make the argument for futures literacy. That predictions are never right, that stories need hooks, that scenarios are useful props for thinking with but not flat-packed futures to be accepted wholesale. Oh, flatpack futures – that reminds me that Foom is probably better. It seems to be very fashionable for AI people to write these visionary and lengthy essays (in a serif font). For a good one, though, see Machines of Loving Grace.
2. Of course you want to end democracy, democracy is hard
Eryk Salvaggio has written up a great little piece here about hype as an obscurer of hope:

> generative AI offered something else: a way of imagining a better future. Having lost all hope for a better future, we turned to hype. And unlike hope, hype can make you rich.
He draws out a call to action for people to work harder for democracy; against the idea that it can be perfected through technology, because it will always be flawed and broken and demand our constant attention. As I’ve written to you before, I suspect, and would emphasise, that a rarely articulated emotive drive is the sense that all of this (gestures around) is clearly terrible and not working, and technology can give you the privilege of not having to deal with it. Again, to clarify, the argument is not that technology will solve the issues with delivering equitable health care, or skills shortages, or inefficient energy systems or transport, but that you, personally, as a subscriber, as one of those who get it, won’t have to deal with those problems; they will be the problems of other people.
Here’s the rub: democracy is exhausting, institutions are exhausting, life is exhausting. Why wouldn’t you want a technology to make it all go away so you can have a break? As Bassett has written, so many people are already treated with bias, viciousness and brutality by institutions run by humans that they have no reason to suspect that running them by machine could be worse. Perhaps it might even be better! And so, of course, yes, ‘we can take you away from the problem, we can part the velvet rope for $39/month, we can automate it all away for you’ is deeply, deeply appealing.
But how do you make the counter-argument? I’ve had this argument more times than I care to mention with very well-intentioned, very well-read folks. How do you say to someone: doing this harder, more complex thing, at a time when things are harder and more complex than ever, will be better for more people even if it’s a bit worse for you, and so you should do it?
So what? I don’t know. This is probably the greatest problem of our epoch. It used to be that states demanded your effort and investment in taxes, possible military service and the necessary paperwork in exchange for collective protection from threats and, more latterly, collective decision-making over the national projects that were important. It’s entirely reasonable for people to feel like that bargain is falling apart. I’m inspired still by the Finland Solutions book: What would a big, galvanising inter/national project look like? Instead of just solving problems, offering something better than just making the problem go away. I mean, if Starmer were to jump up tomorrow and say ‘Net Zero by 2035 with a regenerative economy expedited by the proliferation of AI tools’ he’d get my vote, is all I’m saying.
3. Design Thinking the Extreme Right Way
I ground my teeth through Christopher Rufo’s interview on the NYT Daily. If you don’t know, Rufo is part of the extreme right intellectual class in the US and is a vociferous and vicious critic of higher education, for all the reasons you might expect from an extreme right-wing intellectual. In the podcast he outlines, with enormous and enviable confidence, his theory of the liberal takeover of higher education and, in precise and threatening detail, the ways he will – and has – punished, abused, destroyed, demagogued and demonised educators. With pride. There’s a lot to be said for the explicit and unabashed precision of what these folks were once trying and are now actually doing, which is missing from progressive ambitions.
What shook me out of my teeth-grinding a little was when Rufo briefly talks about what’s next, having managed to force out various presidents and hold universities financially hostage:
> We’re in prototyping phase right now, doing some A/B testing
To hear words that might otherwise be uttered by product developers or design strategists from the mouth of someone utterly committed to the destruction (or as he sees it ‘rebalancing’) of higher education was sudden and unsurprising whiplash.
So what? If you were generous you might call this the democratisation of design thinking, except that, arguably, those tools are now being used to dismantle democracy, which carries its own irony. That, I won’t lie, made me chuckle, since 99% of the utterances of ‘democratisation’ are delivered with about as much weight and seriousness as ‘next year I’m joining a gym’ at Christmas.
Let’s join this up with the other observation: that, at least as he talks about it, he has a clear and explicit sense of what he’s going to do and the impact it’s going to have. He actually speaks like a Marxist, identifying the exact mechanisms and levers that will force change on the system he so clearly hates. One of the many failures of design thinking has been its thin use as cover to excuse business-as-usual, because it’s rarely matched with a well-articulated ambition to actually change anything. Interesting insights, thoughtful conversation and a bunch of post-its will only have so much political impact, if the ‘p’ word is even ever allowed in the room. Forcing departments to close and bullying people from their jobs via design thinking is something entirely different.
4. Ecological Prisoners and Refusal
This report is in French, but I read Nils Gilman’s summary of it, and it resonated with much of what I worry about with the aestheticisation of the green transition: that things that are aspirational and cool for wealthier folks, like green energy and retrofitting, are punishingly expensive and restrictive for those who can’t afford them, and that, in forcing this transition, a group of ‘ecological prisoners’ is becoming refusers.
> When ecological policies translate into higher fuel costs, unaffordable housing retrofits, inaccessible clean food, or threatened livelihoods, without tangible and immediate improvements in daily life, they are perceived not as pathways to a better future, but as further burdens imposed by out-of-touch elites upon populations already struggling with precarity and economic stagnation.
So what? This is something I’ve been thinking more and more about – how to make futures that are at once distant and new and provide a point of aspiration, but are also connected with real concerns. The Future Mundane is, of course, a brilliant first step.
Shorts
- Renewables provided 41% of global energy in 2024, up from 39% in 2023
- “big banks are now banking on a 5.4º F. (3º Celsius) rise in global temperatures above the pre-industrial average.”
- Because the value of the naira slumped against the dollar, it’s become more affordable for Nigerian tech startups to build their own cloud infrastructure.
- AI to call your grandparents to check in actually seems kind of better than I thought it would be. Still. Very Uninvited Guests.
- Blue Origin’s latest heist dressed up as Space Feminism.
- Thomas Matthews is closing. So many studios now.
- I chuckled 18 times at Marina Hyde’s writeup of girlboss feminism in space.
> commentary was provided by, among others, Kris Jenner and a bottom-tier Kardashian (Khloé). Khloé glossed the moment of landing with the words: “it’s literally so hard to explain right now”. Other insights? “There’s one woman whose grandfather is back there and he is 92 and they didn’t even have transportation back then.” I mean, the guy was literally pre-horse. Historic scenes.
Upcoming
- I’m in Eindhoven right now as visiting Professor at EAISI. I spent most of yesterday meeting some interesting folks and learning a bit about what the department does. I’m taking part in a symposium later today exploring uncertainty, and then I’m giving the inaugural lecture tomorrow, which will be a further extension of the talk I’ve been giving about my PhD work. I’ll report back here on the week.
- Next month (May 15-16th) I’m off to Swabia for the amazing-looking Re Shape in Schwäbisch Gmünd. It really is a great lineup of some of the most interesting people doing research in critical practice, art, design and AI, some people whose work I really, really admire, so I’m very excited to be in the room with them.
- There’s also an event on ‘Positioning AI: Challenges in Architectural Design and Practice’ on May 22nd. But I actually really don’t know what I’m going to say there. Those questions are big.
Listening

This isn’t my favourite Sophie Hunter song at the moment but it’s the only one that has a video.
Ok, that’s it, love you, speak next week. Surprised I even managed to do this tbh.