Folks, the new Sleep Token album is out and it’s been the only thing in my ears. I don’t know what else to tell you. The mysterious cultists have been trailing it for the last few months, and it is a beautiful thing.
AI is life but life is not inevitable
‘AI is Life,’ suggests ASU astrobiologist and theoretician Sara Walker. (Since someone mentioned in passing that ASU is at the centre of controversy-making, I am starting to notice it more.) It’s a compelling argument: in short, that we should imagine all technology as a product of life, in the same way that mineral sediments and oil deposits are. We don’t consider those ‘alive’ but we recognise them as products of ‘life,’ so why not AI? The logic is compelling:
Many of us would not recognize mineral diversity as “life” any more than we would the computer screen or magazine you are reading this text on as “life,” but these are products of a sequence of evolutionary events enacted only on Earth. This is as true for a raven as it is for a large language model like ChatGPT. Both are products of several billion years of selective adaptation: Ravens wouldn’t exist without dinosaurs and the evolution of wings and feathers, and ChatGPT wouldn’t exist without the evolutionary divergence of the human lineage from apes, where humans went on to develop language.
Sara Walker, AI is Life
I’m all for computation as a way of converging human experience with that of the planet: it’s the version of it I want to see, the extension of what Pohflepp described as the human ‘experiential register’ into scales too vast, tiny, slow or fast for humans to understand. Computers could and can translate the worlds of other beings to the scale of the human, making us more empathetic and comprehending and preventing us from projecting human standards onto everything else, but this requires active effort from the activists, artists and scientists driving us to turn away from extractivist technology aimed at ‘efficiency and productivity.’
But positioning ‘AI as life’ so simply serves the evil spirits of inevitabilism. The spectre of ‘Darwinian evolution,’ often misrepresented, has been used as cover for all sorts of nefarious and downright evil human activities because of the implicit argument that ‘if this is what the very building blocks of the universe have decided, then who are we as humans to intervene?’ Describing AI as part of an ‘evolutionary lineage’ (see also Romic on this) removes the agency humans have, and need to exercise, to direct it away from becoming yet another technology used to wreck each other and our planet, and places it in the domain of so-called ‘natural forces’ or ‘laws,’ arguments that have been used for generations to assuage guilt and make excuses for the worst excesses of capitalism.
In strictly typological terms, yes, there is an interesting debate to be had about what constitutes life, and as folks like Bridle have argued, imagining artificial life opens up a re-assessment of what life is and how we relate to it that could inculcate better empathy for and understanding of the life around us. I think Walker is trying to do the same, but the article was missing its corollary. It should have said that we chose to make the Dodo extinct. We chose to decimate the rainforest. We chose to overfish the oceans. There was no inevitability to it, no evolutionary determinism, no natural law: just as with nuclear annihilation, human cloning and the end of the space race, regenerative agriculture or fox hunting, humans have and had a choice over the life that’s left on Earth, and we have a choice over the one we choose for and with AI.
Upcoming
I’m taking part in a chat on design and futures hosted by the Copenhagen Institute of Futures Studies on 30th May. Look, they used the serious picture of me that everyone insists on using even though it looks nothing like me. (I think it’s from a phase where I wanted to look older and more ‘professional.’)
Weirdly, no one in this image is smiling even though we are all quite cheery people. Phil has dominated the shoulder score though. I was reminded last week that I recorded a podcast with Phil and Ben many moons ago that seems to have vanished. I’m not as bothered about this as when I spend days writing an essay only for it to vanish and the publication to disappear, but it’s still weird when something like that just ceases to exist.
Recents
I’ve been doing a lot of talks recently. I know I said I was going to concentrate on Arup and the PhD, but it feels like there’s still enough flex to go around and engage with people. I popped into Common Design Studio (which I helped start many years ago) to look at the student exhibition across Melbourne and London and offer some thoughts.
Reading
- Finally got around to reading Ted Chiang’s ‘Will AI become the New McKinsey?’ which is his second very good bit of writing about AI.
- On Joe’s urging I sought out Design Epistemology. I’ve leant heavily on Cross’ definition of design as knowledge production for years; that it’s not art and science but (waves hands) something else. I usually supplement this with Frayling or Borgdorff’s notions of intuition and serendipity in discovery. The authors suggest that neither this nor the attempt to turn design into a science is a sufficient epistemology: the former obfuscates design knowledge by keeping it esoteric, while the latter glorifies and elevates science as the source of knowledge over all others: “…design is hard enough without making it harder by applying esoteric theories inappropriately or by simplifying to such an extent that it is no longer functional or recognisable as design.”
- One of the most interesting side-effects of the generative AI boom has been increased attention to the role of criticism in driving hype; that criticism can overstate the potential of a technology and do the job of boosters for them. I first came across this in Nordmann & Rip’s work on speculative ethics, but I finally got around to Vinsel’s great Criti-Hype, about the business model of hype-by-criticism. Although I guess his February 2021 claims of a new AI winter missed the mark, it’s a good piece to be mindful of when I catch myself re-quoting Google or McKinsey stats. (Also, pleased to see the tacit recognition that the car is the most dangerous technology in existence.)
- Eryk Salvaggio’s prolific and brilliant blogging continues with In Defence of Human Senses.