Last week saw a couple of the private views around London, at London College of Communication – my old place – and Central Saint Martins. Today it's Goldsmiths and the Creative Computing Institute. I particularly enjoyed the Material Futures exhibition and was somewhat buoyed to see a little less mycelium than usual. Being on the other side of shows is interesting; it really makes a difference when someone approaches you to talk about their work.
Pictures about Computers
A couple of threads have been intersecting lately and as I sat down to just say ‘oh this is interesting’ I ended up pulling them together a bit more. Remember this is a blog not an academic journal.
DeepMind has commissioned some 3D artists and graphic designers to produce artworks. Interestingly, these have been released on Unsplash, which hosts images that are free to use without permission, the implication being that DeepMind would prefer you used these images when writing articles about their work. It's an interesting choice. On the one hand the images, though beautiful, are uncritical and flat. The descriptions read like early undergraduate illustration briefs: one shows a grassy hill-scape and is about the world after nuclear fusion, a project DeepMind are quite rightfully involved in (again, lots of great applications for AI, vanishingly few for crypto). But they also clearly tap into a younger cultural zeitgeist, aimed less at VCs and funders and more at people who get their news from Instagram and satisfying 3D renders. Perhaps this is aimed at attracting future DeepMinders, perhaps it's simply about challenging the dominant aesthetics. Either way it shows an awareness on DeepMind's part that the way AI is culturally assimilated is as important as what it is.
Which is why it’s surprising that none of the images in the series are actually produced using DeepMind technology.
DeepMind first bounded into the public domain with Deep Dream in 2015 (strictly a Google project rather than DeepMind's own, though the two blurred together in the public imagination). The famed 'puppy slug' images – produced not by a GAN but by amplifying the features a convolutional classifier 'sees' in an image – are a long way removed from the incredible power and versatility of the technology today, but they brought with them a marked aesthetic which seized the popular imagination about AI. There were a few years where puppy slugs peppered every article, conference slide deck (including my own) and blog (yes) mentioning AI. It was a product 'of' the machine, and before we really had the language to understand what was going on, let alone describe it to ourselves, it provided a stand-in for what until then had been the stock imagery of androids brushing their hands against curtains of 1s and 0s. It felt like peering under the hood.
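Since we're peering under the hood: the trick behind those images is surprisingly compact. Here's a rough sketch of the Deep Dream idea in PyTorch – my own reconstruction, not Google's code, with the layer choice and step count purely illustrative. You run gradient ascent on the input image itself so a pretrained classifier's mid-level features get hallucinated back into it:

```python
# A rough reconstruction of the Deep Dream trick, not Google's code.
# Assumes torch/torchvision; the layer choice and step count are illustrative.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()  # Inception-style net, as in the 2015 work
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one mid-level layer with a forward hook.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(target=out)
)

# Start from noise (or load a photo) and optimise the *image*, not the weights.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    model(image)
    # Gradient ascent: push the image to over-excite the chosen layer, so the
    # features the network has learned (eyes, fur, puppy slugs) bleed back in.
    loss = -activations["target"].norm()
    loss.backward()
    optimizer.step()
```

That's the whole machine: no generator, no training, just a classifier run in reverse, which is partly why the results felt so much like the network's own imagination leaking out.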
For DeepMind there hasn't been a slate of images or a technology released with as much public and cultural impact since. These images on Unsplash may well be seeking earnestly to explore and shape a new visual language (I don't think that's going to happen with pretty metaphors that could really be about anything if you removed the caption), but then why not use the technology itself to do it? These images are all nice, glossy Cinema 4D or Maya pieces that slot seamlessly into the world of shareable CGI images, but they don't jump out and make a mark like Deep Dream did.
However, on the other side, Open AI, perhaps in the shadow of their founder, are giddily saturating the world with Dall-E and Dall-E 2 memes.
The feeds have been filling up with Dall-E images, so much so that it's starting to become its own meme: people inputting ludicrous scenarios and enjoying the return. It reminds me a little of the Deep Dream glory days. Now Dall-E 2 is open for previews, supposedly even more advanced, and is loudly being heralded as the 'future of design', because you can call anything the future of design.
This really isn't a post about the inherent creative and critical problem of equating design with putting prompts into Discord. But if Dall-E isn't going under the hood, it's certainly letting you into the mechanic's garage to look around. For all my cynicism about Open AI, they do tend to publish quite lengthy analyses of their releases; their own page highlights the inherent biases in the software, something that any designer should really read before giddily proclaiming it the future.
Now, the aspirations of DeepMind as a whole and Dall-E as a small part of Open AI (again, not actually open) are different. There is a sense in which DeepMind, for all of its problems, is quietly and diligently getting on with the business of developing solutions to technical (and sometimes, regrettably, social) problems using its technology, while Open AI like to buoy interest and funding by releasing snippets of culturally shocking pieces like GPT-3 (exclusively licensed to Microsoft – again, not open) and Dall-E for meme creators to play with.
All this to say that, in the process of normalising AI, there seem to be three distinct approaches. The first is the churning out of stock imagery of robots and blue 1s and 0s, as above. These tap into twenty, maybe thirty years of cinema, video games and TV to build suspension of disbelief. We still see this being deployed in those same arenas today.
The second is DeepMind's more cautious, representative approach: tapping into an aesthetic zeitgeist but making it 'about' AI. This is more akin to how the Economist or New Scientist might illustrate a story – interpretive, human-made images. But these are forgettable; they could be about anything, and they don't have a critical impact on our understanding of image production.
And the third is the memetification of AI aesthetics: releasing the technology to the crowd and letting the memes make themselves, whether through GPT-3 or Dall-E. I wouldn't be surprised to start seeing this in cinema, TV and video games as these tools gain a popular foothold.
Really the place you want to go for this is Encoding Futures, Maya Ganesh, some of the BBC R&D stuff and the resulting platform Better Images of AI. I've done a couple of arts about it. Natalie and I have also been chipping away at it for some time at Haunted Machines. Really, if I were better I would have finished my PhD about this by now – it's all about this feedback loop between CGI and the imaginary of AI – but I'm busy and like riding bikes when I have an hour off.
The way to think this forward is through advertising, because that's where the cultural zeitgeist is built and responded to. And advertising relies on convincing people of how something feels – driving, drinking, having comprehensive life insurance. How do you advertise an AI? We've seen it in Google Home and Amazon Alexa: the convenient monolith, part of the family, gleefully responding to commands over a ukulele soundtrack. These are easier to sell; they're domestic products which are basically operating-system doorways to an ecosystem. A more general or advanced AI is a more difficult thing to hero-image.
Reading
You guys write too much and I've only just hit the 95% of gamified me-time needed to actually sit down and read things. I managed to finish Andrew Dana Hudson's Our Shared Storm pretty whip-smart. Grim, positive reading. Will it convince people to change their ways and confront the climate crisis? Probably not – you're either there or you're not going there. Does it offer a really nuanced and interesting critique and celebration of the massive giga-project that is COP? Yup. Also check out the NFL podcast he did, where he talks a little more about Post-Normal Science.
I decided to create an Are.na board for books I’ve read. Why not? You might find it useful and it will convince me to read more when I fall into bed at 21:07 and would usually drift off to a Warhammer painting video in 6 minutes. And it comports with my fully gamified lifestyle: Targets! Data! Badges! ‘Chievos!
Playing
Why not have a section on playing? I started Norco but it was a bit much for a Saturday morning so I'm saving it. I'm now the proud owner of a Switch, so I was browsing for something and got Citizen Sleeper from Gareth Damien Martin of In Other Waters fame. It's a sci-fi text-based RPG with some really nice gameplay mechanics: you have a limited number of things you can do each 'cycle', and each 'cycle' progresses storylines, so you have to make choices about what to focus on. It's definitely a min/maxing game and I have absolutely not been doing that, of course.
Short Stuff
- How did I miss NeRF? Basically machine-learning yourself a 3D scene from 2D images. It's like really good photogrammetry. Not enormously different to DeepStereo of yesteryear either but, really good. (There's a toy sketch of the idea at the end of this list.)
- Real Life, as usual bang on, critiqued Kevin Roose's 'millennial lifestyle subsidy' notion in last week's newsletter pretty much brilliantly:
The word lifestyle, which implies having a choice, is especially galling, given that the business model of the companies in question was to undercut alternatives for the service they offered so that people would have no choice but to use them. … To equate the services these companies offer, which prompt isolation and stratification and dislocation from neighborhood life, with “urban lifestyles” is to entirely misunderstand them
Real Life newsletter
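As promised above, here's a toy sketch of the NeRF idea – my own illustrative reconstruction, not the paper's code, and all the names are hypothetical. A small MLP maps a 3D point (plus viewing direction) to colour and density, and a pixel is rendered by compositing samples along its camera ray; train it so rendered pixels match your 2D photos and the 3D scene falls out:

```python
# A toy sketch of the NeRF idea, not the paper's implementation; all names
# are illustrative. An MLP maps a 3D point + view direction to colour and
# density; pixels come from compositing samples along each camera ray.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),   # input: (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # output: RGB + density
        )

    def forward(self, points, dirs):
        out = self.mlp(torch.cat([points, dirs], dim=-1))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])  # rgb, sigma

def render_ray(model, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Volume rendering: sample the field along a ray, composite front to back."""
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction            # (n_samples, 3)
    rgb, sigma = model(points, direction.expand_as(points))
    alpha = 1 - torch.exp(-sigma * (far - near) / n_samples)
    # Transmittance: how much light survives to reach each sample.
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1] + 1e-10]), 0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(0)              # final pixel colour (3,)

# Training (not shown) just minimises MSE between rendered pixels and the
# pixels of your 2D photos, given known camera poses.
```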
Alright love you see you later.