I’ve been out of hospital a few days now, largely immobile and struggling once again with the feeling of uselessness and helplessness. I’m trying to close out the ‘To Read’ list by the 3rd so that I can dive in to writing the next PhD chapter ASAP as I’ve sort of set a deadline of mid-Jan for that. I also don’t want this post, which has been open for two weeks, following me into 2023 so I’m shoving it out the door now.
I worry there’s too much opining on this year’s AI explosions going on. Unfortunately my research puts me in the position of needing to engage with a lot of it and also be one of the voices in the echo chamber. Part of this is entirely self-reflexive as me working out my practice and my relationship with the subject. Unlike crypto, I do think AI has incredible potential: AI is doing good stuff quite well. There are now 18 medicines developed this year using AI, it did most of the legwork on solving protein folding, and it was used to successfully identify Covid variants. AI, in applied situations, does seem to live up to most of the hype, or at least the hype has calmed down and the adults are taking over.
Noah Smith’s early December letter made some very good, very rational arguments about AI and the near future. With roon, he points out that AI is profoundly unlikely to result in mass unemployment, something I fully agree with simply because, at the most basic level – what good to anyone is mass unemployment? Why would anyone want to just manage algorithms? Who would buy all the stuff? However, the whole argument is still framed through a productivity/efficiency mindset. As long as exploitative capitalism produces AI, it will reproduce exploitative capitalism. Smith and roon describe the idea that AI will probably become ‘autocomplete for everything’, which I think is just about right and something I would actually enjoy: autocompleted code, blogs, emails, texts, micro-tasks, meetings, note-taking, copying from one document to another and so on.
Smith and roon very briefly touch on the opportunity that: ‘The increased wealth that AI delivers to society should allow us to afford more leisure time for our creative hobbies.’ But this is the bit that’s missing. AI should result in more leisure time, a shorter working week, more diversity of activity and experience and joy in people’s lives. How will all this productivity lead to that? Given everything we know and have experienced, why wouldn’t AI just result in even greater profit being fed to tech leaders as they massively increase wealth inequality? Why would the next big AI tech companies not just operate like Amazon and use every new innovation to squeeze their workers and customers even harder for Bezos’ space boner?
These people want to automate creativity, the bastion of anti-capitalist work; slow, caring, experiential, personal, emotional. That should tell you everything you need to know about what the world of AI work will look like.
The problem of criticality is another dimension. If we are to go about auto-generating content, be it code, art or text, then do we risk losing critical capacity? I would say that critical practice is about making things – and actively thinking about making things – in order to understand how other people think about building and making, so that we can better understand why we have the things we have built and made and how they instantiate certain ways of thinking. (Why car-centric cities, why a drive towards productivity-driven automation, why the sexualisation of automated assistants, why a language of the occult to describe computation, why obfuscate labour in automation?) If you’re putting all that making on a machine then can you still think critically at the same level about what you’re doing and why?
No, probably not, but maybe you don’t need to. In the mid-2000s, Apple were making Macs cool and I was at art school and everyone wanted one and they shipped with Garage Band. The music was amazing. You could start a band on a whim and produce a demo track super quick. I wrote so many drum lines with no experience of drumming at all or any theoretical understanding of music (it was all punk bands to be fair). The feeling of the music coming out then (now coyly termed ‘indie sleaze’) was magical – anyone and everyone could do it without a stuffy sound engineer giving you a hard time about compression. There are real parallels to the narrative around generative AI now and how it is broadening access to creativity. Garage Band didn’t kill drummers or rich, critical and experimental music and I doubt AI will kill critical practice either. Those of us who find it important will keep doing it.
AI could, speculatively, liberate people from work, rebalance wealth, move us away from materially and labour extractive capitalism and towards community-oriented life, giving us time to create, explore, care and flourish. But the very fact that the big breakthrough this year has been about automating creativity (something people actually enjoy doing, but managers and tech people find an annoying expense to be treated with disdain) shows that it is most likely going to reinforce and deepen existing capitalism. AI is no more ‘democratising art’ than Garage Band ‘democratised’ music. Apple and MySpace made all the money from that wave, just as Microsoft, OpenAI and Stability AI and their VCs will make all the money from generative AI.
Metwhatnow
I had a bunch of stuff about the decline and fall of the metaverse written before going to hospital but I realise now that it was quite gleeful and bitter so I’ve just made a list of stuff:
- Cory Doctorow on Facebook’s (sorry, ‘Meta’s’) pivot-to-video and the fraudulent boosterism of Facebook’s (sorry, ‘Meta’s’) aggressive acquisitions towards a failed metaverse.
- The EU held a metaverse launch party where 6 people showed up.
- Decentraland has 38 active users but claims it has 6,000 for its $1bn valuation.
- Facebook’s (sorry, ‘Meta’s’) gamble has, famously, not paid off either – even their own employees don’t like it.
- This is probably why Apple is keeping a pretty low-key metaverse presence – they’re not stupid and they actually know how to make grotesque amounts of money.
- Are failures in research infrastructure to blame for web3 and the metaverse? Yes, probably.
For 2023
Just some notes of things I had to take into next year:
- Take things at face value. If it looks like a duck, walks like a duck etc. The current vibe has a lot of people casting about to get a feel for what’s coming next and just as many people willing to hoik their nonsense to turn a quick buck. A lot of things that appear stupid and nonsensical are in fact stupid and nonsensical, as this year’s crypto and metaverse crashes have shown.
- Saw Hamilton twice this year. Great every time.
- I’m going to have to experiment with image generators / diffusion models. It can’t be avoided anymore so I just need to think about a way to do it that makes it a worthwhile endeavour.
- There are about twenty things that happened in my life that just constantly recur – snippets of conversation, a thing someone said in passing, an experience, an interaction. They’re apparently random, no more meaningful than anything else, but I remember them all the time and they have an outsize role in how I think about myself and my life.
- The only aim of the wealthy is to make more money and avoid paying tax. Any time any tech company issues any big proclamation about the future or democracy, climate or experience or whatever, just know that at the core of the implication wheel is that ambition.
- I discovered that a part of my brain has been sitting idle just holding on to all the lyrics for Million Dead’s I Am The Party, which I was able to sing verbatim, having not even listened to it for maybe ten, fifteen years. What else have I got in storage just taking up space?
- Paying more tax and engaging meaningfully with politics would fix most of the problems they want to innovate in and profit from: climate, public transport, education, equality etc.
- Would this innovation be better as you paying tax and supporting access to voting?
- It really is the sunset of social networks. Goodbye and thanks for all the whatevers. That could mean a lot of introspection but I doubt it.
Short stuff
Sorry these are really badly structured and credited because I was copying stuff on my phone from a hospital bed and it’s hard to do that in a good way. Come on AI, sort it out.
- Timnit Gebru, similar to above, on the role of Effective Altruism in AI safety.
- Linked this above but a deeper dive on the nihilistic disdain AI people have for human creativity.
- Playbook for integrating climate change into your stories from Good Energy.
- The harm from worrying about climate change, from the BBC, and very much that hope is the answer (when well tempered with real dread).
- Shumon Basar ruminating on Endcore.
Ok, love you, you know that. Sometimes it’s hard to because I get so wrapped up in my own fears and worries and get quite selfish and self-important. I’m not very good at thinking about other people outside of transactional circumstances which is why I’m terrible at retaining friendships and I don’t know if I can fix that now. But I do love you and sometimes I suddenly feel it and remember and it sort of makes everything else quite pale.