I’m trying to rush this out from a cottage in Whitstable where I’m on holiday, but I am sticking with my plan of publishing once per week again. As a consequence, this isn’t a good one.
I’ve become fascinated by this type of image that I’ve seen circulating on Facebook. This one is for a ‘viral article’ about actors and their children and there’s something very interesting going on. To be clear, I didn’t click on it and don’t click on these things so I’m not sure what’s behind the image but it’s not a stretch to suggest that the content would be almost entirely ChatGPT generated.
What catches my eye is the captioning. Something to bear in mind is that this is one image, presented as four in a carousel-style layout. In other words, someone has intentionally tried to make this look like a slideshow even though these ads can only embed one image, which is a clever dark pattern. The top one is straightforward enough but the others are a bit odd. On the left we have Jim Carrey and his presumed child who, from the slice of face visible, appears to be female-presenting yet is captioned with ‘so-‘ implying ‘son.’ This could be another dark pattern, designed to intrigue. The middle one, defying the layout convention, has no caption at all, but the right one is truly tricky: Johnny Depp and his ‘ds’o-.’ Now I’ve seen enough generative content to know when DALL-E is trying to write text, but why? And also the text is different. What are the drivers at play that have made this image possible and necessary?
The Johnny Depp image is easy to source with a cursory search, so I assume that the other celebrity photos are ‘real.’ Some cost saving exists here but I’m not sure why it falls on failing to generate the word ‘son’ or ‘daughter.’ The shadier side of media is always a useful way of gauging the stabilisation of a technology: what corners can be cut, what costs saved? The brutal bottom line and unilateral attention ambitions of spam make it the front line for these things.
Recents
The video from the seminar I did with the Copenhagen Institute of Futures Studies is up. It’s my usual Design Futures at Arup thing but was unusually coherent, so feel free to check it out. Some good questions too. While I’m on Copenhagen, Crystal is at it again with Flora Italica. I got the walkthrough of this incredible project when I visited the city a few weeks back; she’s so smart, she’s so cool, she hates sharing food. Please check out the project if you’re ever there.
There’s a little video report out from Sharing Desired Futures which is what I was up to in Austria the other week. I don’t think I’m featured but I have really great memories of it as an event and then the retreat.
Reading
- Taxonomy of Risks posed by LMs. Does what it says on the abstract. Found it very useful for some strategy stuff at work.
- Gen Z aren’t super jazzed about AI taking away all the things they were looking forward to. VC boosters already took away income security and planetary health; they just wanted to make art.
- Is it the perfect cartoon about AI?
- Took me about two hours to work through Dejan Grba’s Deep Else: A Critical Framework for AI Art, mainly because it’s so comprehensive, with analyses of different projects. There’s a lot to agree with in terms of spectacle but it is very, very, very salty. No one really comes out unscathed, and there’s a sense that practices are skewered or dismissed mostly because they don’t fall into or respect a net-art / new media art lineage. Still very good though.
- Profoundly naive rubbish on AI from one of the biggest names but…
- Eryk Salvaggio is still one of the best:
Sure, communication teams might be deliberately linking existential risk as a strategy to avoid discussing the deeper issues at stake for people on today’s planet. It’s their job to come up with frames that win the media discourse. But someone has to believe those stories — including many engineers and computer scientists who ought to know better. So how does one come to the conclusion that the highest priority for humankind is to stop the very technology that they are working to build?
To understand that belief, it is helpful to view “AI will kill us all” more as a convenient, self-reinforcing delusion than a purely cynical misdirection. That delusion reinforces egos by elevating the importance of the work, and makes every decision messianic: more direct social harms end up diminished by contrast.
This is the power of convenient belief as response to cognitive dissonance. Folks may not be consciously engineering an ideology — not that it matters — but find it emerges from the ways they justify the compromises of their work. It is a comforting myth.
Obviously, AI is not a threat to human survival, or the folks writing these letters would just stop developing it and focus on limiting it, rather than writing letters about how they really ought to stop. It can’t be an immediate concern to them, at least not equal to the level of importance they are asking the rest of us to place on it.
Assume that there is no such thing as pure evil or pure stupidity at play. Instead, it is helpful to see it as the slow self-seduction of the story we tell ourselves about our choices.
Apple released its VR thing to no one’s surprise and you won’t be surprised to know how I feel about it. As someone in the know has just said, ‘I need to return to my day job of building public utilities.’ There are more important things to be done. I love you though so see you later.