The anticyclonic gloom has passed! I really loved the anticyclonic gloom. I loved the words and I loved the ways it evoked the idea of being in Silent Hill (the original). It may not surprise you to know that the fog in Silent Hill was a technical convenience to reduce render distance so the PlayStation could handle the map. I will miss the anticyclonic gloom.
Slipping a Disk
And if you do play games (or do renderings), have you ever had that thing where a character or object just glitches and starts spinning around wildly and incomprehensibly? I feel like that’s what hype does to the futures cone. When big, blustery and implausible claims about how a technology is going to solve climate change or cure all social ills are made, you’re briefly forced to shake up the cone in order to even consider and dispel them.
Promissory rhetorics and big speculative assertions about future benefits are, after all, possible. It’s possible that we might one day live in a socioeconomic utopia brought about by generative AI, but it’s not plausible. But when we’re told that this technology will usher in an age of amazing prosperity, that speculative cone is sort of shaken up and, for a brief moment, the possible but implausible slips into the frame of the plausible.
You’re forced to imagine this future if only to discredit it, but in doing so you temporarily move it into the domain of credibility so that you can expose it.
Squirrels
Ok, here’s an analogy I might run with the kid: So let’s say we both know about squirrels but I basically only know things from cursory observation and general understanding of rodents.
And let’s say she says to me one day: ‘Squirrels have six legs,’ and I say, ‘No they don’t,’ and she says, ‘Yes they do,’ and I say, ‘Based on my experience so far with squirrels and my understanding of mammals, it’s just implausible that squirrels have six legs. There might be one with six legs but squirrels, as a rule, do not have six legs.’
‘But they do,’ she insists ‘the ones you’ve seen just happen to have four legs.’ So I go out and spend a day looking at squirrels, hunting for even one example of an elusive six-legged squirrel.
Of course I don’t find one, but by engaging in this activity I’ve given credibility to the idea that there might be one, or even that I’m just extraordinarily lucky in only ever having seen the incredibly rare four-legged squirrel, and what’s more I’ve just spent a bunch of time thinking and talking about squirrels. So just replace squirrels with AI: the four-legged one being the usual annoying, disappointing but sort of charming squirrel/AI, and the six-legged squirrel being this childish fantasy that the kid (multi-billion-dollar technology company) is absolutely adamant is real and, what’s more, you’re the only one who doesn’t see it.
The effect of all this is incredibly disorienting. If you’re constantly having to flex and reorient just to comprehend ludicrous claims, how can anyone have the time or clarity to focus on the actual? So hype, as much as it’s a mechanism to inflate expectations and drive attention, is also a sort of psyop that can undermine the more general credibility of futures. It makes the whole structure of the cone (if it’s still the appropriate metaphor) unstable and unreliable, not because it’s been provably broken (e.g. the implausible turns out to actually be possible) but because you’re constantly having to refresh it to entertain and integrate ludicrous claims.
I guess it’s a similar phenomenon to the idea that as the world is flooded with more and more misinformation and synthetic data, the credibility of everything is undermined and people start to question even things labelled as ‘real.’ If enough VCs have told you that they’re going to end homelessness using their app, when one comes along that might actually do that, you’re disinclined to believe it. You’ve increasingly tightened your cone and ossified it around four-legged squirrels so much that when a six-legged one does come across your view, it must be a trick.
PhD
We’re motoring along again. Today, obviously, as I’m writing this, I’m not doing that, because I’m also trying to get back to blogging since approximately five or six people have asked ‘what’s going on there?’ Next week I might write up how I’m using Obsidian. I’m a few weeks off having a definitive sense of the next chapter (enough to share with you anyway). I’m currently back-filling; taking the (now) 20,000 words of unstructured notes in a massive document in Scrivener and filing and shooting them away into different corners of Obsidian and watching, quite beautifully and remarkably, a structure and narrative naturally emerge.
Recents
Julian and I wrote a chapter for the Practice of Futurecasting, the product of a few days spent in the mountains of Austria last spring. It’s about taking imagination seriously and how that can be done in a ‘business’ setting.
If you’re in London next Friday I’m taking part in a panel with some incredibly luminary folks for ‘Design Declares.’ I think the focus is a little more on the overlap between futures and sustainability and how we can bring them all together.
Reading
Now I’m back in PhD flow, it is all just PhD stuff I’m afraid. Not so many random articles. The newsletter backlist is piling up again (I’d got it down to about 140 the week before last; it’s back up at 250-ish today.)
Why is Retraction Watch so good? Like rubbernecking at a bad-science car crash.
A lot of the assumptions about scaling in generative AI are off, according to AI scaling myths. The AI Snake Oil folks don’t so much question the efficacy of scaling (in this piece) as the fact of it: the assumption that data and power are just out there ready to be used (for instance, yes, there’s a gajillion terabytes of YouTube, but a lot of it has no words in it.)
Absolutely wonderful, wonderful, gorgeous, gorgeous writing from Tim Maughan in Not My Problem. It has the same sort of feeling as The Deluge (which I’m still shook by) in that things don’t just go dystopia; it’s just a future of piling on inconvenience and hypocrisy in the name of speculative utopias. Dispondo-futuro? Something like that. What would a cynic’s (in the classical sense) futurism be called? Cynicofuturism maybe. Anyway, it’s gorgeous and I love him.
Listening
Sometimes I do this dance walking into the office to feel good. Because I love you. Speak to you later.
I decided not to post this yesterday; the feeds were busy. I was up listening to the election results come in from the US. I went to bed thinking “I can’t believe they’ll win” (just as I did twice in 2016), and I’ve woken up to “oh it won’t be so bad”, which made me laugh out loud having read a fascinating appendix about terror management theory this week.
It’s Your Fault if it Doesn’t Work
Last week I mentioned the talk I gave at Orgatec in Cologne. Well, immediately after that I was meant to shoot across the country to Eindhoven for the wonderful Design and AI symposium to take part in a panel that very afternoon. However, due to the massive catastrophe that is Deutsche Bahn (that plenty of people had warned me about) I didn’t make it. I thought, given I had 6 hours in which to make a 2 hour journey, that even with the delays and problems, I’d get there.
Anyway, I didn’t, but I made it for the morning keynote the next day: It’s Still Magic Even if You Know How It’s Done. Now, where I’d spent two weeks dragging the previous day’s talk up from my brain and out from render farms, this one I just sort of threw together the day before. It was broadly a meander through the bulk of my PhD work so, in that sense, it wasn’t that difficult and I was able to draw on a lot of the material I already had.
Of course the title borrows from one of the two ways Terry Pratchett has put down the sentiment that magic is knowable but is still also magic. In Pratchett’s case he is making a case for magic (which I wholly support) while I was, in the context of the theme of ‘enchantment’, talking about how we can both know that AI is a technical, computational thing while also being convinced to feel that it’s magic, usually to try and sell us impossible dreams. You’ve heard all this stuff before. The talk went really well. I hope they recorded it. If not, it’s probably worth writing up here at some point, although I probably won’t be able to capture that old Revell-is-winging-it charm.
But what I wanted to write to you about is this: later that day I experienced both a sense of vindication and slight guilt. Someone from a Very Large Technology Company was on stage later in the day to deliver a presentation as part of a session on ‘Industry Perspectives’ and was in the unfortunate (and, I think, knowing) position of having to repeat all the tropes, narrative tricks, and rhetorical and metaphorical sleights-of-hand I’d just been critiquing not an hour or so before. This, paired with the odd Freudian slip (the phrase ‘replace journalists’ was very briefly left hanging in the air before a rapid backtrack to ‘augment journalists’) and the use of big numbers and graphs showing things like ‘scale’ and ‘growth’ (again, which I’d talked about being used to sell speculative fantasies), meant it can only have been a difficult talk to give in the context of what I’d just walked the audience through.
It’s Because You’re Not Sufficiently Courageous and Curious
The narrative conceit of the talk was how ‘courage’ and ‘curiosity’ ‘make the magic happen’ in AI. There’s a lot to take in there. The obvious part is the reliance on the word ‘magic’ to explain the value of this thing. And again, I’d just stood on stage and dissected how magic is used to over-inflate the efficacy of technology as well as make its operation and construction opaque. But the part that was new to me was the constant circling back to the need for us – the potential user or client – to be courageous and curious in order for the AI to work. I’m paraphrasing but the gist of it was ‘your courage and curiosity plus our technology.’
The preface to this was ‘generative AI is new and confusing and difficult, but you can make money off it if you are sufficiently courageous and curious.’ In other words: it’s not up to us to tell you what this thing is for or how it makes your business/service/product better. If you can’t see that, it’s because of your own failings. I think it was the first time I’d seen it put so plainly.
There’s obviously a huge amount out there at the moment about the lack of tangible utility or ROI of generative AI in real contexts, whether at work or home. There are some great examples of isolated tasks, and my working theory is that there’s much more benefit for people who are information-rich and time-poor (like freelancers or small businesses), but as a general, paradigmatic shift in both work and economy, it’s not particularly weighty. The implication of the logic from the Very Large Technology Company is that that’s your problem. That if you’re not using generative AI to great effect it’s because you’re not taking enough risk, you’re not thinking big enough. These rhetorical tricks don’t feel that unusual (think of Diesel’s ‘Only The Brave’, which also implies that it’s up to you as the consumer to live up to the expectation of the brand.) But that’s about a cultural identity or fashion item (or perfume, actually) – it’s an aesthetic choice.
I wasn’t there (I was doing other things like being silly and having fun) but I imagine there were people from this Very Big Technology Company going around the world in the early 2000s and saying to people at conferences, ‘You know how much you spend on business travel? You can spend less on video conferencing, and it’s quicker and more people can do it, so it’s better.’ And people in those audiences probably looked at a number which looked better and went ‘yeah ok.’
I doubt (but again, I was being silly so can’t testify to it) that they walked on stage and said ‘are you going to pay for video conferencing or are you an intellectually stunted, snivelling coward?’ Because videoconferencing is self-evidently cheaper and more effective than expensive flights for international communication. But, because generative AI isn’t self-evidently useful in any significant way other than for novelty and performative futurity, you have no option but to use a handful of slightly ethically dubious, highly experimental and probably carefully edited examples, wave the made-up big numbers from widely discredited consultant reports around, flash an s-curve and imply that anyone who doesn’t stump up cash is a craven luddite.
PhD
The above is a new one to be added to my ‘AI myths’ list. I think other people have been making these lists of AI myths too, but this week I’ve been bingeing on PhD work to try and get over a bit of a hump and get the gears going again. This has largely also been about transitioning my brain into Obsidian and doing things like, yes, ordering and compiling lists of rhetorical tricks used in AI.
I’m not going to deep-dive into what I’ve been doing (I did but I deleted it), I’ll save that for another week, but it’s been an interesting process of doing something I’ve wanted to do for years in setting up Obsidian and letting it emerge through an intense but organised process of five or six 15-hour days with my head fully inside my PhD. I’m also writing this post in Obsidian right now. I’m using tags with anything I put on the blog so I can also link that into my ‘second brain’ (God, as I said to Maya, I sound like a Linux bro saying these things). Look, I’ll post a screenshot.
That’s what I’ve been staring at for days now. I haven’t got anywhere near as much done as I wanted but I haven’t wasted a minute either so you know, I don’t feel bad about it.
Reading
I don’t post everything I read here by the way. Just things that are interesting or useful or feature friends.
Sean Johnston’s The Technological Fix traces the origins and impacts of the post-war idea that technology could and would fix all social problems. Unsurprisingly perhaps, the perspective is rooted in nuclear science; by creating weapons that would eliminate the need for further conflict and a power source that would mitigate scarcity, the men of the time believed they would end the root causes of social ills through ‘applied science.’
I’m Running Out of Ways To Explain How Bad This Is. When your distrust of political institutions and belief in your own worldview outweigh the admitted fact that the information you are creating, sharing and disseminating is made up or false.
Apparently The Whisperverse is the future of mobile computing with augmented reality plus AI inside your head. I sort of like this idea, I could see it being charming and enjoyable and helpful but it all relies on magical terminology as usual.
Amongst my reading, Consent in Crisis was long but interesting. Basically: websites are increasingly guarded against AI scrapers through their robots.txt. However, this means the bulk of scraping is less and less diverse as the ‘long tail’ of the web fails to be scraped into models. The authors show, for instance, that news sites make up about 40% of tokens in C4 (the Common Crawl dataset they examined) but only 1% of ChatGPT queries, while creative composition makes up almost 30% and sexual content almost 15% of ChatGPT requests. The authors argue that this misalignment leads to heavily biased outcomes.
Listening (and watching)
I don’t watch much TV and (this is going to sound outrageously pretentious) I feel better because of it. I don’t get dragged into staying up past when I want to, and I don’t feel like hours just vanish into nothing. But I’ve been watching Industry while on the turbo trainer and there are two major problems. First, the finance babble makes no sense, but there’s enough tense eye contact or knowing nods to get a read on whether what one character finance-babbles is good news or bad news. Second, I can’t tell the difference between the foppish private school boys. I’m on like episode 5 or 6 and Rob, Ross, Greg, Steve and Simon are all interchangeable to me. Because the writing jumps quite a lot as well, just right over situations or events, they could conceivably all be the same foppish private school boy and I just missed that subtle editing cue. But in listening:
Courtney LaPlante joined by Tatiana Shmayluk to duet Circle with Me is the birthday present I wanted, thank you and I love you. Speak to you next week.
At least three people said ‘I love your blog’ in the last week and each time I felt crestfallen that I’ve been so lax with this obligation. I really do try and do everything. I think I’ve seen films where, in sound recording studios, some of the sliders on the big desk of sliders move on their own; I’m not sure why this is, I assume it’s something programmed in; certainly I’ve never been around audio equipment that sophisticated. My own musical recordings were all done with a combination of GarageBand and crates of Kronenbourg. Anyway, that’s how life works sometimes: you turn something up at one end and all the way down the other end of the desk a slider is automatically tweaked down.
Making Five Stories From The Distant Future
For the last two weeks I’ve been spending evenings and early mornings working on a series of renders for the opening keynote I gave at Orgatec in Cologne last week. The whole thing was a massive faff but quite fun and diverting at a time when I really needed something creative to call my own. Sure, I could have thrown some slides together for a 25-minute witter with some data and ‘trends’, but having spoken with Robert, who was organising the whole shindig, I felt quite inspired to go the extra mile and turned the research I did into a series of short stories from the future. At some point I’ll figure out how to share these, but I wanted (realising that people who read this website are actually interested in this stuff) to talk about how I went about doing it.
To start with I had three constraints: I knew I wanted to write and tell short stories (just because); I knew I had about 20 minutes in a 35-minute talk to do it, so I worked out I wanted roughly five short stories of about four minutes each; and I knew I wanted them all to be connected somehow. The next thing I knew was the big picture of the world. I’d already sent off a blurb and pinned down a couple of things that would shape the world, with ideas like degrowth, the end of high finance and speculation, the end or weakening of global norms and institutions, and the stuff we know about like climate and demographic change. The final thing was knowing the audience might be futures-curious but like-as-not unfamiliar with most of these concepts.
I started by throwing those big ideas down on a piece of A3 paper and imagining what connected them. For instance, in a world of managed degrowth, people might want to kick against it, and you could get a subculture of people looking to engage in high finance and speculation in the same way that living a fully sustainable lifestyle today could be seen as a subculture. You might also start to see a slowdown in global logistics as a result of climate, degrowth and the end of global norms, so rather than a world of next-day delivery, everything takes a long time to move. Between those two there’s an obvious conflict: the drive for speed, power and control versus the reality of sluggish, uncoordinated and messy physical reality. This was the first one I thought about, and it led me to sailing ships, but the rest also flowed quite quickly once I started imagining what occurred at the intersections of different drivers and ideas.
I spent an hour or so doodling away and thinking about the little visuals that emerged, and that actually became the backbone of the whole thing. I started by modelling the scenes I was reasonably confident about (like the sailing ships). Each scene was about making the familiar unfamiliar; the uncanny. For the ships, for instance, I used a boxy cargo ship that might be easily recognised but then put Chinese junk-style sails on it, copied from a modern sailing vessel. I wanted each scene to be recognisable but have something different: diesel ships with sails, an office with a playground, a kitchen with 18 seats, etc. This is the starting point for most speculative design: finding something materially familiar and normalised and twisting it so that the audience is forced to reconcile their expectation (diesel ships have engines) with what they’re seeing (these ones have sails). So it’s also important that both those things are recognisable. Where I was introducing a brand new element – like the ‘d-rhizome’ in the home office scene, an AI-augmented alternative to the Internet that is fully node-based and inspired by slime moulds or mycelium – these would have to be explained in the story.
From here the stories and scenes sort of developed in tandem, along with prompting ideas for the next scenes. Some stories were easy to flesh out to bullet points and pull together, like the Bangladeshi immigrants running a semi-autonomous Norwegian vineyard as part of an international soil restoration programme for migrant workers. The pieces just sort of fell into place. Others took more forcing.
The rooftop scene, for example, is about a building caretaker in a building fitted with so much biomaterial and biotech that it’s almost a living thing, so I wanted the role to be less of a service and more like a doctor: someone who is widely respected and admired for their expertise and time. This is an idea we explored a little in the Future of Making work that went to Singapore the other week. I knew I wanted the top-down view of the roof in a sort of satire of green roofs. So I put cows on it. If you’re going to cover a roof in grass you might as well have cows, and you might as well use their waste to fuel a bioreactor. And the association of the machines with the animals opened up the story beyond the technological to something more like a farmer who cares for their animals, except it’s a building.
I worked these out by sketching the scenes over and over again in my notebook, adding elements and writing notes on how they might work and how the character relates to them. I didn’t get to writing the prose of the stories until, literally, the train over to Cologne. Luckily, my head was so in the world that it all came quite quickly. I settled on a model in which, for each scene, a character reflects on how they got there: some exposition, some weirdness. I actually ended up using Copilot quite a lot to figure out details like names, locations, species and so on, which probably saved a bunch of time hunting for an endangered species of bird that eats berries and migrates through Germany to the Arctic.
On anti-AI aesthetics
A quick note on the style. You might note at the top of that paper it says ‘like Frostpunk.’ I knew I had a lot of work to do, so I wanted to reduce the workload as much as possible. Inspired by the game, I adopted three tricks. First, I tried to stick to a fixed view so that I could keep the lighting simple. Apart from the dinner scene, no camera moves through a scene, so I didn’t need to worry about what was ‘behind’ the camera and could build each scene like a set. Second was using simple flat images as background parallaxes. The rooftop is a great example: the background here is just a flat image of a street. Finally, keeping the style loose and low-poly where possible. I didn’t hit this rule all the time. Ironically, the more time-pressured I got, the easier it became to just pull out pre-made assets from BlenderKit. So while the ships are all DIY, with some cardboard-cutout UV mapping, by the time I was doing the office scene I was basically just modelling core bits like the room, the weird screen and table, and the vertical farm. The rest is all found assets.
I realised quite late that, as well as being a time-saving effort, these aesthetic decisions were about intentionally distancing the images from the new generative AI aesthetic. I didn’t want to do over-stylised photoreal images with lots of soft blur because I wanted the audience to know that I had made these images by hand, that it took effort and labour to do, and that maybe in that effort and labour I had the opportunity to think about these future scenarios in more depth. That by moving things around, working out how space might function, designing the workarounds people might have to make for their work to fit their lives, I would learn a lot more about the subject, and that this informs the stories.
I know that generative AI image-making has become a popular speculative design tool, but I’m pretty sure it’s not actual design. When you put in a prompt for ‘a future retrofit commercial office where people are living in apartments and spending their days trading high-finance derivatives around a massive table’ you’re not actually designing anything. You’re really asking the machine to elicit your own head-canon from a cultural median for you. Sure, that thing has probably never existed before, but you’re not really making anything, just skewing a graph.
Design that is also research is about what we learn in the actual designing of things: of keyboards and desks and tables and chairs and lamps and switches. In making those things and thinking about the people who will touch and use them, you generate knowledge, understanding and insight about the future. If you’re just taking your preconceptions and getting a machine to make them ‘real’, have you really learned anything? A reason these renders take so long is that even adding a chair to a desk scene forces me to ask questions like: how long does this person sit? What kind of things do they like? Are they proud of their work? What else might they need to do? How might their personality be reflected in the chair? And in exploring and answering those questions I feed the knowledge back into the stories and the world-building.
Points of failure
Of course, none of these projects ever goes right. Even after so many years of honing my Blender-craft and convincing myself I had plenty of time, there were problems. With about a week to go I lost my notebook and, with it, all the sketches, notes and annotations I had been pulling together for each scene. I’m pretty sure I dropped it somewhere around Central Saint Martins at an event, but despite a couple of visits it never showed up, so I had to remember a lot of the ideas I’d had for the last three or four scenes. The second thing was that the PC I was remoting into to do the rendering went offline and took about a week to get back. So I had all the scenes backed up and modelled out, but the clock was ticking on actual render time. I ended up sinking about $300 into cloud rendering to meet the deadline. (I missed the deadline, but got it in before the talk, which is what counts.)
And of course, nothing ever looks like you want it to. Each of the renders except the vineyard, rooftop and forest has multiple versions, and even those were re-rendered a bunch to fix bugs or style problems. The original kitchen was just some tables arranged end-to-end with a cooker at the head. It felt like a big party, not like a kitchen purposefully set up for a large group to eat together regularly. The first office was basically just a bullpen with holographic screens, which I threw together at 2am one morning and, in the cold light of day, rejected as unimaginative and clichéd. The idea of having it as a literal live/work retrofit with apartments in a commercial building came later. So really I ended up producing about 16 rendered animations of about two minutes each to get to the final seven.
Finally, and a critical failure for someone who claims to be a designer: I didn’t get to do any testing. There simply wasn’t time to get someone else to cast an eye over the stories. I was writing and editing them right up to the morning of the keynote itself. You should always give time to have someone else edit your work because, though I may know this world inside and out, no one else does. Afterwards several audience members commented that it was ‘very dense’, meaning, I imagine, that a lot went over people’s heads when spoken rather than read on the page. It also probably meant that I wasn’t as confident presenting them as I might have been with more dry-runs, even if I did rehearse the whole thing four or five times.
For example, introducing the d-rhizome, this new type of Internet which prioritises real connection rather than command-and-control, was tough. Think about a classic science fiction book: usually it only introduces one new idea (e.g. there’s time travel, plants are an alien species, spiders are the apex species) while everything else is broadly the same (e.g. people want to preserve their lives, get wealthier (in some way), save their loved ones, whatever). But science fiction authors get a whole book and your total attention to introduce and explore that idea. I had five minutes and a trade conference keynote, so I’m not surprised some of it was lost.
Other than that, it’s just all the stuff that goes with anything you’ve worked super hard on: you notice all the things that could be better. But I’m long enough in the tooth to know that that’s life and you just have to move on. Anyway, yes, I will find a way to tell you the stories and show you the full renders. It’s on my to-do list with everything else.
Recent and Upcoming
Couple of recent and upcoming things.
I’ve taken up a teaching role at the London Interdisciplinary School teaching design. I’ve been following the LIS since it launched and have been really interested in what a genuinely interdisciplinary education looks like, so this is an interesting little peek inside.
I took on a role as an industry champion at the Creative Industries Policy and Evidence Center to advise and consult on the future of the creative industries.
22nd November: I’m going to be at the next Design Declares! event with a host of amazing and luminary folks. Really quite worried about what I’m able to bring to that party.
As I said, I’ll find a way to document the Orgatec stories. The other big one was the opening keynote at the Design and AI symposium hosted by TU Delft. I’m not sure if it was recorded; if not, I will also seek to document that, but it’s basically a PhD walkthrough with a dance in the middle. I also have thoughts about some of the other stuff that was there.
Reading
I’m significantly behind on keeping up with newsletters because of all the above work. I’ve managed to crawl and skim through about 40 or so in the last few days. There’s an overarching and exasperated message that the amount of money and resource being thrown at AI (hundreds of billions of dollars) versus the actual tangible, provable outcomes (around 5% positive impact on various things) is wildly out of proportion, which does give the impression that we’re heading for a very real bubble.
The Ethico-Politics of Design Toolkits by Tomasz Hollanek explores dozens of ethical AI toolkits, with some choice words on ethics- and participation-washing as part of a process that is often depoliticised and fails to match the actual needs of AI development processes. These toolkits often call for alternatives, of which, he points out, there are loads, but they are ignored or maligned by mainstream AI practice.
Microsoft’s Hypocrisy on AI. This is depressingly unsurprising, but it’s useful to have a bunch of evidence. In the PhD I’m circling a bit around how claims about AI’s ‘potential’ (to do things like cure cancer or mitigate climate change) gain credibility despite being completely fabricated assertions. It’s a tricky thing to pin down; the PhD is all about how idea A (it can play games really well or chat with your kid) becomes claim B (it will cure cancer, mitigate climate change), but this article basically shows how big tech is “talking out both sides of its mouth” about these speculative claims by also making a bunch of money selling prospecting tools to fossil fuel companies. I was at an event where I tried to make this self-fulfilling prophecy point to some city leaders:
Microsoft is reportedly planning a $100 billion supercomputer to support the next generations of OpenAI’s technologies; it could require as much energy annually as 4 million American homes. Abandoning all of this would be like the U.S. outlawing cars after designing its entire highway system around them. Therein lies the crux of the problem: In this new generative-AI paradigm, uncertainty reigns over certainty, speculation dominates reality, science defers to faith.
Brian Merchant has also written up a bit on it here.
Ed Zitron on the Subprime AI Crisis. Zitron (who I like reading but can’t listen to) has been tracking the wobbly finances of big tech in AI for a while and frustratedly pointing out all the inherent contradictions and problems. Here he extends the usual argument with the specific mechanisms by which AI is sold: one, it’s on you to figure out how to make it useful/valuable (more on this next week), and two, through software-as-a-service that binds you to it. This one gave me real dot-com-bubble vibes. Consume alongside reporting on underwhelming productivity impacts.
Wes has finally released his Stories from AI-Free Futures. He’s been working really hard on getting this album together as a continuation of Newly Forgotten Technologies, which I would broadly describe as ‘speculation on what comes after AI.’ Please do check them out.
Paul Graham Raven interviewing George Voss here. Part 2 is now out as well.
Apple did another launch, which is a great excuse to remember how underwhelming things are. (I would 100% get a Mac Mini though; I really have always liked them.)
WordPress seems to have got super slow? I have refreshed my browser a bunch but it’s just got really clunky and delayed since I was last here. Perhaps something to do with all the lawsuits? Anyway I love you and assure you that following a very unpleasant summer I am back to regular programming.