This one’s been in drafts for a long time and I didn’t want to wait til next Wednesday so here it is.
ICYMI, I got in a pretty bad accident in Richmond Park on 19th August while doing some laps. In the space of two seconds a dog ran into the road, turned around, hesitated and hovered and forced me to slam on the brakes, sending me and the bike flying and shattering my femur into eight pieces. This caused a lot of internal bleeding and I was rushed to Kingston, then St. George’s hospital for surgery. Then I had ten days in hospital with three blood transfusions to get me walking again and now four weeks at home on crutches.
Recovery is going really well generally and I’m hoping to be off crutches next week and possibly even back on a bike, even if only indoor training, by the end of the year. I’m easing back into work gradually and the pain is totally bearable and lessening every day, but the frustration is just how difficult and exhausting everything is. I’ve gone from having super-active, long days of lots of work, exercise and activity to basically needing to take a break every 20 minutes. The fatigue is real and I hope it passes, especially as I get back to exercising and get some more endorphins kicking around.
Then, last week, when I was going to send you this I got knocked out with flu from nowhere (well, not nowhere, the nine-month-old girl that lives with us is most likely the culprit) and spent two and a half days with a fever in bed, which I guess is the side-effect of a generally weakened state.
Where’s the render?
Big format changes over here at the Bounding Box Inc. While in convalescence and tallying up the value of my time I realised that something has to give. Between a full-time job that often bleeds over, my own practice, raising the infant, hopefully returning to bikes and the nascent PhD, there’s just too much. It’s not an unusual or in any way a heroic story but I was just doing too much and frankly, not doing the PhD because it was the hardest thing. Doing the render each week is anything between half a day and a day’s work, and that’s valuable time where I really need to break the back of the PhD work. I’m six years into it and not even halfway, and if I want to combat the anxiety that haunts me every night in bed then I need to make meaningful sacrifices for it.
It’s somewhat ironic because the PhD is very much about computer-generated imagery, and that conversation is accelerating massively all the time now with the AI renderers clogging up LinkedIn; we’re probably only six months away from the first short film made in Midjourney or whatever.
So they may come back. It’s still a part of my practice, and it is a practice-based bit of research but I need to attack some theory and thinking for a bit before I can run more experiments for you.
This has also meant doing some recoding on the site because it was all set up around that header video and now it’s all kerfuffled.
While I’m on that.
My interest in these things (both boutique CGI and the AI stuff) is how they loop in with imagination. This interview with David Holz (formerly of Leap Motion, now of Midjourney) is really fascinating for a bunch of reasons. Not least because social media is so lousy and loud with the opinions of futurists screaming about how exciting it is to just ask for images and get them without pesky designers or artists challenging their assumptions and provoking their imagination, so it’s unusual to hear from someone actually designing these systems.
We started off testing the raw technology in September last year, and we were immediately finding really different things. We found very quickly that most people don’t know what they want. “Here’s a machine you can imagine anything with it — what do you want?” And they go: “dog.” And you go “really?” and they go “pink dog.” So you give them a picture of a dog, and they go “okay” and then go do something else.
David Holz
Holz is keen to frame Midjourney as an ‘engine’ for imagination: it doesn’t actually replace imagination or make up for a lack of creativity but it can augment it. So if you’re lacking in vision in the first place it’s actually kinda useless. It’s worth reading. I don’t agree with it all: for instance, it bears the usual hallmark of a techno-centric worldview, that the social problem of not enough participation in future-making is solvable by a technology as opposed to better education or public policy.
This all nicely overlaps/intersects with this great piece from Eryk Salvaggio – Radicalized by the Game Genie – which does a great job of re-contextualising AI image generators as games that have their own mechanics and interactions and, importantly (and in full agreement with my general scepticism), as processes of play, not ends in themselves – don’t get dazzled by the spectacle.
What if DALLE2 is not strictly an art-making tool like Photoshop, but an art-making Sim, a game engine for art making? DALLE2 encourages creative expression within the rules and structures of its mechanics. This is how game narratives are co-created: imagination enters into a mutually created space, and players interact with that world which the code renders to reach some game-stated goal. In the art game, the goal is the creation of a suitable image, rather than slaying a dragon or building the city of our dreams.
Eryk Salvaggio
Reading
I cracked through Image of the Future by Frederik Polak a week or two ago. This was on reading some of Johannes’ reflections on it as part of his thesis work. I have to be honest; did not enjoy. It’s supposed to be a cornerstone of futures thinking and I’m sure when it was published in the 70s it was, but it’s very hard to read past the western, Christian supremacy in it. Polak relentlessly laments the ‘decline’ of society as ‘evidenced’ by how previous (European) societies have related to images of the future. It’s not a particularly wild thesis: when there are optimistic images of the future, society thrives; when there aren’t, or they’re dystopian, society dies. ‘Thriving’ in these terms is eg. the Industrial Revolution and the Age of Enlightenment, where everything was great and everybody was happy and healthy and things were perfect, everyone agrees.
There’s a little more to it and part one is maybe a useful list of the history of utopias or future visions since pre-Christianity but honestly, I wouldn’t bother.
I also read Smoke and Mirrors by Gemma Milne. I saw her present at a panel discussion around hype and AI a few years ago and bought her book right away hoping to delve a little deeper, since it’s such an important part of my research stuff, but the subtitle (How Hype Obscures the Future and How to See Past It) is a little misleading: it’s not really about hype per se, but about nine hyped-up technologies, and it tries to peel back the curtain a little bit to show you the current state and discourses of eg. urban farming, AI, quantum computing etc. If you’re immersed in the tech/futures space there’s nothing super new here but it’s a useful primer. I did really like her notion that AI isn’t a novel form of black boxing, but that all our social bureaucracy is a black box and we’re just automating it. I might use that.
The Black Box problem is not unique to the current ‘age of AI’, it’s part of our society. AI is not this separate thing we don’t control and which independently came into being – it’s an accentuation of humans and the society in which we live, which we build and affect and can change if we desire.
Gemma Milne, Smoke and Mirrors, p.240.
Other than that I cracked a seam of stuff on AI and expectation that I’ve been delving deep on. Prediction without Futures, from Sun-Ha Hong, looks at bits of computational reductionism as well as how prediction, and the perception of the power to predict, gives permission to entities to make social decisions. Enchanted Determinism, from Alexander Campolo and Kate Crawford, picks up a similar thread about how the construction of myth around AI (which is very much back in vogue, circa Haunted Machines 2014) helps to enchant the seemingly accurate predictive results; it builds a cultural spectacle around the machines that makes them more powerful due to their inscrutability.
I’m sort of on the fence about whether AI is inscrutable really or if we just all agree to say that. But that’s another trail of thought.
Short Stuff
Bleep bloop. More to it than this but here’s what I had in drafts.
- The tap of ‘AI is magic’ is now stuck in the ‘on’ position for futurity. In 2014, I thought, ‘huh, that’s a weird and interesting blip’ and now here we are.
- Using DALL-E 2 to redesign streets. This is a use-case I can really get behind. It’s less about vainglorious futurists defining future aesthetics than everyday folks having the opportunity to imagine the World Without Cars. That’s a good and wholesome application of ‘AI.’ Especially because it’s not trying to be photoreal.
- Speaking of, another good application, this time of Stable Diffusion, in reverse-engineering prompts to reveal the biases in the data used to train it.
- Andrew Dana Hudson coining the idea of ‘Omelianism’, with which we might spot false utopias.
Well, next week I’ve decided to write to you about what this actual damn PhD is about. Not just random musings but a sort of semi-structured breakdown, because I keep mentioning it, keep thinking about it, keep finding more things to read but rarely actually tell you about it.
(Read that back in Mike Duncan closing out an episode voice, cue string music… ‘but rarely, actually, tell you about it.’)
Ok, love you as always, byee.