I had a couple of notes of things I wanted to write about but I haven’t had time to get it all together. I’m trying to balance being a general person-in-the-world with being a PhD student and holding down a day job. They’re different head spaces, which means there are switching protocols, but my body is getting used to getting back into a working rhythm, which is good. I keep seeing promos for efficiency apps that look really great and useful (Dense Discovery always has a great little section) but then I just think of all the switching costs. I like my way of doing things at the moment; I don’t know where I could be more efficient.
Ok, so the two things I was going to write about were how the Old Web is Dying, which is well documented, but what comes next? Ed Zitron has a pretty dense and well-evidenced litany of a decade of failure, deception and broken promises that culminates with the lacklustre arrival of the Apple Vision Pro. There are a couple of weak signals out there about what’s next and I have some thoughts (including an expansion on what the web looks like if the Rabbit R1 wins out). Whatever Web 3.0 becomes, there’s not a lot of clarity between the decentralisers/federalists, the agent-based-AI silos, the degrowth permacomputers and the simple fact that I see no socio-economic pathway by which Silicon Valley loosens its grip on the wheel. So it’s basically a question of which one of those groups of nerds gets in Zuck’s ear. I’ll expand on this next week maybe?
But as a designer, I think it’s important to move beyond the abstract system diagrams and Ponzi schemes and actually think about what an interface looks like, what the UX is: the stuff that actually matters to people more than how their nodes connect on a computer science paper somewhere. It might be cool if someone wants to chat it through, Jay?
The second thing was my harrumph about Refik Anadol’s new art. It claims to be a nature-based AI, a bit like the MoMA-collection-based AI that was previously in the news. The conceit of these works is that you train a model on a dataset and then it tells us something about the world of that dataset. In this case: ‘we trained it on data of nature, therefore it’s like nature talking to us.’ The whole thing immediately falls apart at the slightest critical enquiry, in the face of reams and reams and wonderful reams of work about how data is partial and human-constructed. You only have to look at the ‘beautiful’ images produced to know that this thing is by humans, for humans, and tells us nothing about nature at all. There’s plenty of great work out there that seeks to connect a non-human ontology with the human senses, and does so in a wide and accessible way for big audiences. But this ain’t it.
If I had more time I’d really get into the nuts and bolts of this ‘nature-based’ piece but I don’t, and it’s literally what the PhD is for. And don’t even get me started on the sci-fi UI. A lot of these pieces work by telling us very little about how the technology is built, by whom and through what mechanisms; this is how they enchant and distance us, and it allows them to become non-technical exercises that can occupy whatever narrative they’re given. Anna Ridler’s work is the perfect counter, where it’s all about the process and the spectacle is a side-effect.
But then, being the mercurial centrist I am, I’m like ‘well it’s not for people like me.’ It’s at the World Economic Forum and it’s talking about AI and nature which are both important things and it’s not terrible. It’s just inaccurate and tropey which in creative academic circles is absolutely unforgivable but maybe for the world’s leaders is fine?
PhD
I started rolling back into PhD work last week. If you remember, I’ve done about half of it, did the upgrade exam in October, then immediately found out my leg was broken again and so put it all back on pause. The task at the moment is to start to form up the last two chapters: Enchantment, The Uncanny and The Sublime, and Use, Usefulness, and Users. Both of these exist as 18 months of notes, grabbed quotes and idle thoughts, so I am joyfully running through this iterative process of slicing and dicing to try and see a chapter emerge.
For the Enchantment chapter I have something starting to form. The first part is around prediction and scale: how the promise of predictive power and control over the future becomes enchanting, and the scale sublime and overwhelming. The second part is more about the life-likeness design drive: how we are enchanted by life-like interactions and their uncanniness. I’m not entirely sure of the case studies yet; the first might be Cambridge Analytica or maybe predictive policing in general. The second will likely be early deepfakes because I have a good line of practice there.
Short Stuff
I’ve been playing Cyberpunk since Christmas (which really is excellent, amazing worldbuilding) but I need to finish that so I can concentrate on PhD work without feeling like my head is half in another world.
- Beth Singler has edited the Cambridge Companion to Religion and AI, available here.
- Apple’s quiet AI projects. A colleague sent this to me saying ‘it’s exactly like you said.’ I’ve been pretty bang on with big tech AI predictions.
- An interview with Jesse Lyu, the inventor of the Rabbit R1. He is very conscious of the fact that it doesn’t do anything that a phone can’t do, and yet you have to carry an extra object. But that’s the only way he can make money! How long will that money last? Well, it costs $200 and the company covers the $15 a month you spend using GPT Turbo, so you do the maths until it’s either dead and unsupported or switched to a pricey subscription. And, as above, expect Siri to be able to do all this by the middle of the year.
- I’ve written lots about the misattribution of speculative design. Julian’s done great work elevating design fiction over the last few years to the point where it’s become a keyword for papers and he’s had to step in.
- J-Paul also on the problem with ‘preferable’ futures.
- Both Matt and Rob Horning were also thinking about AI hardware last week apparently.
- Wes suggests that the quest for AI is rooted in human loneliness. I think that’s a part of it but I still prefer the idea that the underlying drive is pure nihilism; a Weberian drive to disenchant the world of any mystery or magic by making it all predictable, computable and controllable. The reason art, game-playing and creativity are used as demonstrations and benchmarks is precisely to show there’s no gestalt or special power in them, that they can be done by computer and thus that the humans who do them are not special.
- Elisava is running a very sick looking talk series; ‘Critical Futures.’
- Alex DS (who features heavily here?) once again capturing the vibes of my job.
- Ed Zitron with possibly the most damning and comprehensive list of big tech failures, cons and deceptions; How Tech Outstayed Its Welcome.
- Nightshade poisons AI models with your art.
- Superflux have a newsletter now as well.
Wes and Scott brought the Star Wars / Withnail & I mashups to my attention after commenting that my unshorn hair gives me an air of Withnail. 15 years ago I would have been flattered. Now I’m concerned. Anyway, I’d never seen those and they’re great. I do occasionally have to search this blog for something I wrote or thought, so I understand how hard that is to do, if you’ve ever tried. I’ll fix it at some point. We are all works in progress but I love you. Speak next week.