I’m really sorry it’s been so long. There’s been a succession of illness, visiting relatives, work overspill and, frankly, recovery keeping me from you. I was in Copenhagen last week for my first work trip since Covid (and certainly since the injury), and I’ve absolutely ruined my back leaning on the stick, so at least now I’m trapped in an ergonomic desk chair with this glaring tab open. PhD is progressing nicely; methodology rewritten, chapter in review. It’s a little less frenetic than earlier because I’m waiting for feedback and making tweaks. I also finally tidied up my Are.na, which I’m v proud of.
Rob Horning (✝ Real Life) wrote a piece about the ‘Paralogisms of AI.’ Paralogism is a new word for me (it means fallacious reasoning) but it names something I’ve been thinking about a lot: how certain rhetorical devices recur in drawing attention to the spectacle of AI even as they sneak in a dismissal of it. This is well documented in Nordmann and Rip’s idea of ‘speculative ethics’ (which is different to the other speculative ethics, where ‘speculative’ means ontological rather than ‘made-up’). They talk about how discussion (positive or negative) of speculative, future ethical problems with a technology ends up distracting from the present reality of the technology and its current ethical dilemmas (ChatGPT might put a load of people out of work, but it is currently exploiting Kenyan labourers). Horning writes a lot about the rhetorical myths and plays computation uses to get humans to engage with it, but I really haven’t seen an argument laid out for the logical loopholes people go through in talking about AI futures, other than perhaps in AI Myths.
Some of these have been explored on recent Ezra Klein episodes, particularly the ‘it’s inevitable, so we must build it’ line. This recent one with Kelsey Piper lays out an interesting paradox though: that attitude is not representative of AI culture more generally. The people who don’t think this line of AI development is a good idea and worry about the dangers aren’t building it or advocating for it. This means there’s a selection problem in what we hear from AI people about the future of the technology; those who disagree aren’t being listened to.
I also want to set up a good-AI thread: somewhere where arguably good breakthroughs and applications of the tech are documented. Time was you would just set up a Twitter bot, but I’m not sure that’s the way to do it nowadays. Maybe a project for a quieter time.
Recent / Upcoming
I was in Copenhagen last week on my first post-Covid, post-nine-smaller-femurs trip. I was there to help open the Designing Narratives and Evoking Change course for the Danish Design Centre, which ends today, I think. I also made some trips to see various friends around Copenhagen, including Crystal, who’s installing some grass at a museum (there’s more to it than that; as with all Crystal’s work, it’s Smart). I think it was an important talk. I started looking back on the What if Our World is Their Heaven material and realised that everything I’ve been talking about there has moved from weak signals to IRL, which required a bit of a rethink.
- Upcoming: I’m doing a talk for Service Design College on design research and critical thinking tomorrow morning (30 March). I think it’s free.
- Going back on From Later next week but I can’t decide what to talk about. All the signals are really loud.
- I’ll be in Milan for Salone April 20-22. I’m doing a panel with some friends, which I don’t think has been announced yet, but lmk if you’re about and want to hang out.
- I’m delivering another masterclass for Milan Poli in May with Malina Dabrowska on speculative design (yeah) and design futures.
Short Stuff
- Language is our Latent Space from Jon Evans is basically a way of redescribing Hayles’ cognitive assemblages, as all the tech people imagine they are inventing STS for the first time.
- Brilliant review of Apple’s Extrapolations and the way it completely misses the mark on how fiction should talk about climate change (“It’s the fossil fuels, stupid”)
- In an extension of the logic around Lil Miquela, Levi’s is going to use computer-generated people of colour as models.
- The reading list from Stochastic Parrots day.
- Ed Zitron on the deflating of expectations and delivery in Silicon Valley.
- Another great John Lanchester piece on silicon and Silicon Valley, via Natalie.
- The Crypto world is still all just crime.
This post was in drafts for ages and then I accidentally clicked ‘restore backup’ out of morbid curiosity and it deleted it all so there you go, lesson learned. Ok I love you love you love you.