After I was sick last week I got my screen time report and saw that I’d spent a shocking amount of time on social media over those two days. I put a 15-minute lock on both Instagram and LinkedIn and I tell you it has genuinely changed my behaviour. I’m actually less inclined to open them in the first place because I’m an ‘eat your veg first’ kind of person and I worry: what if future me needs to use them for some reason? Better save that time.
Five Things
1. Everything is broken
Ok, hardly novel, but a big problem in the rush to AI everything is that everything is already broken. A few years ago, at the peak of Haunted Machines, I decided to keep a list of everything that was broken for a day. I thought it would be a neat counter to the hysteria around emerging technology. In that spirit, here are the things that went wrong in the last twelve hours:
- I have to plug my laptop into my monitors, then open it, log in and close it again or else it won’t pick up the monitors.
- MS Authenticator will always fail the first time, saying ‘the number is invalid,’ even when it isn’t. I always have to refresh it on both laptop and phone and then it works the second time.
- Strava on my watch just sometimes doesn’t work. You press record, go to do your thing and look a few minutes later to find it’s crashed.
- Blender crashed thrice.
- The Riverside editor grinds to a halt once you’ve done about ten minutes of edited content and needs a hard refresh. It might have saved, it might not; there’s only autosave.
- PowerPoint keeps reverting to previous saves, deleting lots of text or moving things around.
- Google Drive never actually shows me my recent files.
- Setting up Loop agendas and then wanting them to be in the right folder means they don’t attach to the invite, so you have to go and re-attach them for every instance of a recurring meeting.
As computation and automation creep in, we are forced to find ways to navigate all these partly broken realities; a form of heteromation in which we learn new forms of work just to keep the things that are supposed to make our work easier and more productive working at all: holding the Bluetooth speaker just so, so that it connects properly. Holding the TV button a little longer to get it to turn on. Changing the WiFi passwords on all the devices.
You might remember the XZ Utils vulnerability being exposed around this time last year. Basically, a nefarious state actor almost managed to inject malicious code into most of the world’s servers because it turns out there’s just one guy who writes and maintains the compression library that ends up linked into OpenSSH on many Linux systems, and he got harangued into handing maintenance over to the attacker, who put the code in. It was caught by another guy who was playing around with it in his off time. It turns out that the foundations of the Internet are super weak and the stuff that’s built on top of them is super janky, broken and damaging. I’m not sure adding AI to this teetering pile of broken stuff is the best idea anyone’s ever had. I’d expect that as AI is integrated more closely into more and more things we’ll continue to see increasing vulnerabilities and exploits, but also brownouts as compatibility issues result in conflicts and the open source projects that currently maintain the Internet start to crumble or be forgotten. The underlying things that built the internet – SSH, DNS, Apache etc. – are largely maintained by volunteers who are under huge pressure from nefarious actors and from big tech, which keeps piling more weight on their backs.
2. Slop and costless recombination
Over at 404, they’ve been doing work tracking down the origins of AI slop flooding Instagram in particular. This tasteful AI slop includes:
Dora the Explorer feet mukbang; Peppa the Pig Skibidi toilet explosion; Steph Curry and LeBron James Ahegao Drakedom threesome; LeBron James and Diddy raping Steph Curry in prison; anthropomorphic fried egg strippers; iPhone case made of human skin; any number of sexualized Disney princesses doing anything you can imagine and lots of things you can’t; mermaids making out with fish; demon monster eating a woman’s head; face-swapped AI adult influencers with Down syndrome, and, unfortunately, this.
There’s a lot of nuance in the article about why LeBron James specifically, FYI. Anyway, these shorts can be produced super quickly and on the cheap; the point being that in the endless recombination of characters, contexts and scenarios, you just need one thing that makes people stop, hover, read the comments in disgust, and you basically monetise the account through attention. I was immediately reminded of Elsagate, in which YouTube accounts were chaotically recombining IP from children’s cartoons, sometimes just weirdly and obliquely, sometimes in an overtly sexualised or violent way. Because of the functioning of YouTube’s algorithms and the less-than-discerning nature of children, these channels were raking in huge payouts for their creators.
Both AI slop and Elsagate rely on massive scale and proliferation – quantity way over quality. But while the Elsagate stuff sped things up with cheap production, animation and acting, AI slop is just about daisy-chaining a couple of large language models to video generators to pump out 30-second videos. Just like phishing emails, you only need a few people to click, to linger for a few seconds, and you’ve started to make money.
I was chatting with someone yesterday who was telling me about weaving: in a six-by-six grid there are some 63 billion possible combinations of warp and weft in different configurations, but we know, from hundreds of years of experience with looms, which fraction of a fraction of a percent of those combinations will work out as a good weave that holds itself together. Now (on YouTube) people are going back to first principles and just throwing new patterns at the loom to see what comes out; maybe there are hitherto undiscovered patterns in those 63 billion combinations that might actually work and have never been used.
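For a rough sense of that scale – a sketch only, and my assumption is that the figure comes from treating each of the 36 crossings in the grid as a binary warp-over or weft-over choice, which lands in the same tens-of-billions ballpark:

```python
import random

GRID = 6

# If every one of the 36 crossings is an independent warp-over or weft-over
# choice, the space of possible drafts is 2**36 – around 69 billion.
total = 2 ** (GRID * GRID)
print(f"{total:,} possible drafts")

# 'Throwing patterns at the loom': pick a draft at random and see what comes out.
def random_draft(n: int = GRID) -> list[list[str]]:
    return [[random.choice("XO") for _ in range(n)] for _ in range(n)]

for row in random_draft():
    print(" ".join(row))
```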
The point is – in a very new-media-theory-friendly way – that, just as with vibe-coding or discovering new materials by randomly combining molecules, when the transaction cost of simple chaotic recombination is near zero it becomes the most productive form of creativity and innovation. It’s arguable that music and entertainment streaming services have been heading this way for a while; the cultural zeitgeist is so hard to capture and respond to that you’re forced to either continually recreate things that were historically successful or rapidly churn out things in the hope that a fraction of them stick and become profitable.
The AI slop on Instagram is just the next thing to slide up the pole; it won’t be long before streaming services are trying something similar – personalised AI-generated extensions to your favourite shows, or shorts based on recombinations of your viewing history, for example; these companies don’t tend to have fantastic imaginations. What I wonder is: are these things aimed at the same people? Elsagate was about 7 or 8 years ago. Is it the same children, now teens and pre-teens on social media, who were targeted by Elsagate now being bombarded with AI slop? Once these kids are paying for Netflix subscriptions in a few years, we can expect the slop to come with them.
3. Is there a post-Post-world?
Unfortunately, through little fault of my own, as a European I’m obligated to rubberneck the car crash of American democracy, because it’s like watching a slow-motion explosion in the distance and counting down until the shock wave hits you. It’s hardly a novel observation to suggest that ‘we’ (liberal, progressive, metropolitan, remoaner, anti-growth-coalition, luxury-beliefs-brigade snowflakes) are sort of in paralysis at the unbridled speed and ferocity of what is going on. And one way of dealing with it is to bask in outraged podcasts spitting advice on what should be done about it. All that to say, these opinions are usually of a type and you’ve heard them and don’t need the melancholia of hearing them again. But I did like this writeup from Garbage Day for a particular insight: that we’re not in article world but post world.
In other words, the days when long-form, evidenced, investigative articles were kryptonite for executive wrongdoing are long gone; instead the zeitgeist is maintained by post vibes:
We’ve replaced the largely one-way street of mass media with not even just a two-way street of mass media and the internet, like we had in the 2010s, but an infinitely expanding intersection of cars that all think they have the right of way. Think about it for a second. When was the last time you truly felt consensus? Not in the sense that a trend was happening around you — although, was it? — but a new fact or bit of information that felt universally agreed upon?
This is interesting because it raises the question of what comes next and how this milieu resolves itself. In the same way that ‘article world’ is over, so is the path of painstaking reputation-building that might have accompanied a seasoned journalist or activist; instead it’s a charismatic, momentary, memetic force that propels individuals or ideas into having some political or cultural weight. We already know that operations and systems have been created to try and game these proclivities, through recommendation algorithms as much as troll farms. I suspect that attempts to pin down, control and direct these forces might ultimately undermine them; you don’t necessarily need to teach people to spot misinformation if they suspect that all information is misinformation. But this includes ‘article world’ too – so what arises from the immolation of both the old and the new information spheres?
We were recently doing an exercise at work with some people who work in knowledge services and I was really interested in where legitimacy is located. Is it in people and trust in them? In systems and trust in them? In a community of peer reviewers and trust in them? In popular consensus around an idea and trust in that? Both article world and post world put trust in people, I think. In article world, this trust might be built over a career or else granted by popular consensus; in post world, it’s the memetic, charismatic ability to whip up excitement. What would the information sphere look like if legitimacy was located somewhere else entirely? Oral, interactive or embedded?
4. Taking the ‘con’ out of economics
Economists are finally getting round to tackling selection bias, as part of a thing they’re calling the ‘credibility revolution.’ I’m sharing this mostly because I was aghast that a concept that is well known and widely acknowledged in most fields – selection bias – is seen as ‘revolutionary’ in economics. The author writes about monitoring two groups of software developers, one using generative AI and one not. They found the group using generative AI were 40% faster at merges than the control group, but ah, shock horror, wait a second! Even before the introduction of generative AI the first group were already faster!
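To make the trap concrete, here’s a minimal sketch with invented numbers (not the study’s data): a naive comparison of the two groups after the tool arrives folds the pre-existing speed gap into the ‘AI effect’, while comparing each group’s own change over time – a difference-in-differences style reading – doesn’t.

```python
# Illustrative only: invented merge times, not the study's data.
before = {"ai_group": 10.0, "control": 14.0}   # avg hours per merge, pre-AI
after  = {"ai_group":  8.0, "control": 13.0}   # avg hours per merge, post-AI

# Naive comparison: look only at the two groups after the tool arrives.
naive_speedup = 1 - after["ai_group"] / after["control"]
print(f"naive: AI group looks {naive_speedup:.0%} faster")   # ~38% 'faster'

# Difference-in-differences: compare each group's own change over time,
# which strips out the gap that existed before the tool did.
ai_change = after["ai_group"] - before["ai_group"]        # -2.0 hours
control_change = after["control"] - before["control"]     # -1.0 hours
effect = ai_change - control_change
print(f"diff-in-diff: the tool accounts for {effect:+.1f} hours per merge")
```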
Honestly, I thought this article would be much punchier than it was, but I read the whole thing waiting for a twist and it’s just ‘no, we now acknowledge selection bias is real and are dealing with it.’ I suppose it’s a bigger piece of the puzzle as people try to work out a) is AI actually increasing productivity? Remember that there’s a body of thinking that suggests computers haven’t even increased productivity. And b) if it does, how so and in what tasks? The whole idea of whether AI is saving time or not is still very murky. And saving time is not the same as increasing productivity; sure, I could automate some tasks and save twenty minutes, but what am I doing with those twenty minutes? Am I spending them just doing other menial tasks? As with a lot of technosolutionism, generative AI gets at symptoms rather than causes, and the root of the problem is that so much work is bullshit. Why would work that would be better automated have to exist in the first place?
5. From da club to da curve
It was something o’clock in the morning on May 9th 2013, in a club in North London, when the Keeling Curve passed 400ppm of carbon dioxide in the atmosphere. I remember it because I had a notification set up and it resulted in a round robin on what-was-once Twitter amongst friends who kept an eye on these things. My interest in these ambient data sources became the foundation of Ongoing Collapse and, later, decline.online.
It’s now hovering at around 430ppm and the monitoring programme is likely to be cut by Musk and DOGE. This quiet little station in Hawaii has been dedicatedly providing one of the most solid and consistent records of climate change since it was set up in 1958. RIP.
Upcoming
I’m taking on a role as Visiting Professor at EAISI in Eindhoven this year. I’ll be over there for a handful of weeks over the year, starting with the week commencing 13th April, so give me a shout if you’re about and want to hang out.
Short
Basically, these are things I come across and save to my Raindrop.
- Paris Marx on the other global attempts to replicate DOGE. Yes, shockingly, billionaires are leading the charge.
- Five members of Venezuela’s opposition party have been living in the Argentinian embassy under siege by government forces for a year.
- The US installed 1250MW of domestic battery storage last year, a 60% increase on the previous year, but it’s expected to slow down because of… you know, America.
- And the world is expected to add 700GW of solar in 2025.
- Julian here on Scratching the Surface.
- Madeline Ashby talking with Paul Graham Raven about worldbuilding and scenarios.
- Another Zitron giga-piece here, but the thing that jumped out is cloud providers citing the risk that AI providers completely fluff this so-called AI revolution, making their businesses worthless.
- Harvard has eliminated tuition for families earning under $200k.
- More vibes-based coding here. This time a game that took 30 minutes to make and earns $50k a month.
- BlueSky has made more money from selling copies of that brilliant T-Shirt than selling custom domains.
- I don’t really read about the squabbling of AI models because it’s pretty tedious. Every few days there’s a hysterical ‘game changer!’ or a new ‘paradigm’ because of some relatively straightforward breakthrough. But I was interested here in Azeem Azhar talking about how the open source tendency, predicted to dominate back in 2023, seems to be taking the lead in performance and accessibility; he asks whether AI ends up more like Apache or DNS than Google and Facebook – the pieces that underpin the Internet rather than the fluff on top.
- Juliana v The United States is not being heard by the Supreme Court. Worth reading about the history and impact of this case though.
- Dodai, a Japanese company, is Ethiopia’s fastest-growing EV maker thanks to battery-swapping tech.
- My colleagues (and, I would say, friends) Bree and Rhiannon have a piece out in a journal about Design for Delight here.

I said in passing to someone last week (paraphrasing) that we’ve had 250 years of the wealthiest Americans’ desire to avoid paying tax playing havoc with global politics. I stand by it.
When I joined Arup I was asked about pinning down an operational title. ‘Speculative Design Lead’ was suggested, but that was never going to happen. It’s a true observation that very few graduates from Design Interactions refer to themselves or each other in terms of ‘speculative design’, any more than graduates of graphic design courses refer to themselves in terms of ‘photoshop’ – it’s a thing you can do and have reasonable competence in, not your whole identity. I settled on ‘design futures’ because I thought ‘well, it’s using design for futures and also about the future of design.’ Four years later and, not by my doing at all, the world is replete with ‘design futures’, which seems to be everything from product design that may or may not use so-called AI to service design that just happens to be ten years hence.
Oh, Cassandra. Listen, whatever, I love you. Names aren’t important anyway, just what we do with them. Actually that’s untrue, names are incredibly important. Speak next week.