The anticyclonic gloom has passed! I really loved the anticyclonic gloom. I loved the words and I loved the ways it evoked the idea of being in Silent Hill (the original). It may not surprise you to know that the fog in Silent Hill was a technical convenience to reduce render distance so the PlayStation could handle the map. I will miss the anticyclonic gloom.
Slipping a Disk
And if you do play games (or do renderings), have you ever had that thing where a character or object just glitches and starts spinning around wildly and incomprehensibly? I feel like that’s what hype does to the futures cone. When big, blustery and implausible claims about how a technology is going to solve climate change or cure all social ills are made, you’re briefly forced to shake up the cone just to consider and dispel them.
Promissory rhetorics and big speculative assertions about future benefits are, after all, possible. It’s possible that we might one day live in a socioeconomic utopia brought about by generative AI, but it’s not plausible. Yet when we’re told that this technology will usher in an age of amazing prosperity, that speculative cone is shaken up, and for a brief moment the possible-but-implausible slips into the frame of the plausible.
You’re forced to imagine this future if only to discredit it, but in doing so you temporarily move it into the domain of credibility so that you can expose it.
Squirrels
Ok, here’s an analogy I might run with the kid: So let’s say we both know about squirrels but I basically only know things from cursory observation and general understanding of rodents.
And let’s say she says to me one day, ‘Squirrels have six legs,’ and I say, ‘No they don’t,’ and she says, ‘Yes they do,’ and I say, ‘Based on my experience so far with squirrels and my understanding of mammals, it’s just implausible that squirrels have six legs. There might be one with six legs but squirrels, as a rule, do not have six legs.’
‘But they do,’ she insists, ‘the ones you’ve seen just happen to have four legs.’ So I go out and spend a day looking at squirrels, hunting for even one example of the elusive six-legged squirrel.
Of course I don’t find one, but by engaging in this activity I’ve lent credibility to the idea that there might be one, or even that I’ve just been extraordinarily lucky in only ever seeing the incredibly rare four-legged squirrel. What’s more, I’ve just spent a bunch of time thinking and talking about squirrels. So replace squirrels with AI: the four-legged one is the usual annoying, disappointing but sort of charming squirrel/AI, and the six-legged squirrel is the childish fantasy that the kid (multi-billion-dollar technology company) is absolutely adamant is real and, what’s more, insists you’re the only one who doesn’t see.
The effect of all this is incredibly disorienting. If you’re constantly having to flex and reorient just to comprehend ludicrous claims, how can anyone find the time or clarity to focus on the actual? So hype, as much as it’s a mechanism to inflate expectations and drive attention, is also a sort of psyop that undermines the more general credibility of futures. It makes the whole structure of the cone (if it’s still the appropriate metaphor) unstable and unreliable, not because it’s been provably broken (e.g. the implausible turns out to actually be possible) but because you’re constantly having to refresh it to entertain and integrate ludicrous claims.
I guess it’s a similar phenomenon to the idea that, as the world is flooded with more and more misinformation and synthetic data, the credibility of everything is undermined and people start to question even things labelled as ‘real.’ If enough VCs have told you that they’re going to end homelessness with their app, then when one comes along that might actually do it, you’re disinclined to believe them. You’ve increasingly tightened your cone and ossified it around four-legged squirrels, so much so that when a six-legged one does cross your view, it must be a trick.
PhD
We’re motoring along again. Today, obviously, I’m not doing it, because I’m writing this instead; I’m trying to get back to blogging since approximately five or six people have asked ‘what’s going on there?’ Next week I might write up how I’m using Obsidian. I’m a few weeks off having a definitive sense of the next chapter (enough to share with you, anyway). I’m currently back-filling: taking the (now) 20,000 words of unstructured notes in a massive Scrivener document and filing them away into different corners of Obsidian, watching, quite beautifully and remarkably, a structure and narrative naturally emerge.
Recents
Julian and I wrote a chapter for The Practice of Futurecasting, the product of a few days spent in the mountains of Austria last spring. It’s about taking imagination seriously and how that can be done in a ‘business’ setting.
If you’re in London next Friday, I’m taking part in a panel for ‘Design Declares’ with some incredibly luminary folks. I think the focus is a little more on the overlap between futures and sustainability and how we can bring them together.
Reading
Now I’m back in PhD flow, it’s all just PhD stuff I’m afraid; not as many random articles. The newsletter backlist is piling up again (I’d got it down to about 140 the week before last; it’s back up to 250-ish today).
Why is Retraction Watch so good? Like rubber necking a bad science car crash.
A lot of the assumptions about scaling in generative AI are off, according to AI scaling myths. The AI Snake Oil folks don’t so much question the efficacy of scaling (in this piece) as the feasibility of it: the assumption that the data and power are just out there ready to be used (yes, there’s a gajillion terabytes of YouTube, for instance, but a lot of it has no words in it).
Absolutely wonderful, wonderful, gorgeous, gorgeous writing from Tim Maughan in Not My Problem. It has the same sort of feeling as The Deluge (which I’m still shook by) in that things don’t just go dystopian; it’s a future of piling-on inconvenience and hypocrisy in the name of speculative utopias. Dispondo-futuro? Something like that. What would a cynic’s (in the classical sense) futurism be called? Cynicofuturism, maybe. Anyway, it’s gorgeous and I love him.
Listening
Sometimes I do this dance walking into the office to feel good. Because I love you. Speak to you later.
One of the things I’ve been really emphasising about this new technological wave, in talking to people, is that we’re not in the ‘exciting’ and frenetic days of early social media or the Internet. This isn’t a time when new technologies are emerging and smart, playful outsiders are coming in to show us new ways we might do things. Generative AI is characterised by four or five of the world’s wealthiest companies, run by a few dozen of the world’s wealthiest men, concentrated in the two wealthiest states, fighting to maintain the status quo.
Of course there are, and will be, weird and interesting things that happen along the way, but the incumbents are so powerful that they can just hoover up any competition. This was well analysed by Henry Farrell on the political economy of AI. He points out that, just as with the early Internet, a war over IP is emerging between the incumbent corporations that capitalise on culture and the artists and creatives who feed that culture. Only this time the incumbents aren’t Disney, Warner Brothers and the record companies, as with Netflix, Napster and Spotify, but the big tech companies (Microsoft, Google, Amazon and so on) trying to extend the living they’ve made off the back of the work of creatives. The point Farrell makes is that there’s a future in which this just kills culture and the Internet; that the well is so poisoned by synthetic media and market disincentives that the whole enterprise of the Internet just sort of ossifies and collapses.
As we know from Gopnik, generative AI is a cultural technology, a way of organising and disseminating knowledge. It doesn’t create anything new but changes the way we order things and value them. The IP fights going on are a symptom of this shift and in fighting to maintain total supremacy and status quo over a speculative future market, the incumbents are likely smothering anything new that might emerge as a result.
In a sort of answer to the last post’s provocation (‘If someone tells you what something could do, ask them why it isn’t.’), why would any of these incumbents seek to change the techno-cultural production machine that has made their bosses billionaires? AI isn’t a disruptive force to them, it’s a compliant one, and the aim is simply to avoid letting any of your three or four competitors claim any space off you. Luckily for us, maybe, it’s actually going quite badly, as OpenAI starts to hit a ceiling, the numbers look unworkable and they keep launching things that flop or provide some novelty but little functional utility.
Short Stuff
I’ve been asked to do an interview for a thing, but the thing I really like is that I’ve been given a long time to do it. I have the questions but now have two months to answer them, which is really, really interesting because it means I can actually think and sit with them rather than dashing them off, as often happens, or as on this blog.
A good piece on rituals; it’s a useful antidote to the sometimes lazy framing of ‘smartphones are now rituals’ that you sometimes see in popular reporting. Rituals have specific qualities and properties that are not present in most technologically-mediated content binges.
Following on from my last post, Dave Karpf’s review of Dixon’s book on blockchains: “He sees some problems with the Internet that venture capital helped build. The only solution he can imagine is more venture capital.”
Wes on tech’s delusional relationship to Star Trek. (Weirdly I added an overlong footnote about Star Trek to a recent essay. Probably won’t make the cut but it was basically ‘Star Trek is a silly thing to pin your colours to because it is politically infeasible.’)
The fascinating trap OpenAI has got itself into, as a result of its arrangements with Microsoft and its nonprofit status: having to prove that it has not got anywhere near so-called AGI.
This was a really great interview with Cameron Tonkinwise (and Okskar!). I nodded enthusiastically along with most of what he said about designers in organisations. I was hoping there’d be a more succinct and clear definition of Transition Design, but there’s a lot of great content there.
I have little loyalty or connection to San Francisco, but you know I detest the hubris and nihilism of tech culture. This is a great piece from Rebecca Solnit on how it has destroyed the city, and on the paradox at the heart of claims of democratisation while Silicon Valley increasingly lives in, and encourages, isolation, alienation and separation. Putting it better than I did in the last post.
Something really sparked a circuit in this great article from Beth Singler about apocalypticism in AI. I’m paraphrasing, but: apocalypses are a utopia for those that survive them.
Sorry it was very short this week. I feel like I’ve worked through a lot of stuff recently already and have been focussing on work to try and get various things finished and over the line. Ok, love you, bye.
J-Paul quickly pointed out that last week’s post was a little unclear, which is fair enough; these ideas aren’t super clear in my mind, which is why I blog them, to have exactly these conversations. I guess the thrust of it was: ‘Is/was the end of hollowed-out Design Thinking a sign of a forthcoming proper engagement with proper design from the mainstream world?’ Anyway, it sparked some interesting thoughts on the blue job site, even if mostly on my doodle of what my design world looks like.
I got good medical news on Monday. Short version: the surgery worked and I’m allowed to start weight-bearing on my leg. This means I only need to use a stick rather than two crutches everywhere and can now do things like carry a cup, hold my child, stand up for more than five minutes, etc. I started back at work this week and am hoping to get into the office next week. Let me know if you want to catch up about anything.
You don’t really want an AI button
Lots of folks have been excited by the Rabbit R1 and the Humane Pin. They’re among the first forays into AI products, which is exciting to industrial and UX designers bored of black rectangles. Each has a gimmick to knock it over the line: the Rabbit carries the illustrious design pedigree of Teenage Engineering, and the Humane Pin draws on sci-fi tropes to have another crack at gestural interfaces. But I just don’t see them working. I can’t help it; the intuitive, designerly bit of me sucks air through my teeth and exhales with a ‘naaaaaah.’
They’re drumming up hype because they claim a new typology of object that some see as more appropriate for so-called AI, but they do it purely on the back of science-fiction fantasies, with little consideration for how people actually use their technology and how it fits into society. There’s a series of misunderstandings and assumptions here: first, that generative AI immediately demands different types of physical interaction; second, that hardware innovations are driven purely by software rather than by social impetuses.
The first one is tricky. Generative AI might mean that people interact with information differently (just as the Internet heralded) and so demand different types of hardware to serve that need, but it won’t happen overnight. I personally buy into the theory that AI is just a nooscope: a new way of examining and organising information rather than new information per se. And these devices seem to be ways of organising information for those who are money-rich and time-poor:
who is a person that’s an early adopter of gadgets, but is so disengaged with what they eat and where they travel, that they’ll just accept the default choices from a brand new platform that will certainly have bugs?
We don’t access information in this linear way any more. Expedia killed off travel agents because we could see, on a screen, the range of options across time, cost and convenience, and then make a decision ourselves that feels better informed than one made by talking to someone or something. Expedia is the absolute worst, but it gave us what we wanted.
There’s also the simple truth that a keyboard and mouse are the best, most versatile and highest-fidelity way of interacting with computers, and billions invested in gestural and voice interfaces have failed to show otherwise; they give you none of the power, dexterity or flexibility of a mouse, even if, twenty years on, thinkfluencers are still telling us the Minority Report interface is coming. (See also: elite PC gamers for keyboard maximalism.) As well as the sci-fi tropes and gorgeous industrial design being pulled on to make implausible designs desirable, there’s also the ‘push’ of the past: the idea, buried in all this, that smartphones are a temporary stepping stone on the path to a new form of ubiquitous AI interface, a technique common in tech to position our present as part of a ‘transhistorical continuum’ of an inevitable future.
Don’t forget your phone
I think it’s fair to say (and I’m correct) that the last good phone was the iPhone 5, and it’s been shit since. All the hardware worked; it had a good form, size and shape, where the camera didn’t stick out and you didn’t need to carry around extra batteries. It was completely fine. It was so completely fine that Apple have now gone back, got rid of the annoying slippy-as-a-fish bevels and put back the 5’s rugged industrial edges so that you can actually feel and grip it with your fingers in the dark first thing in the morning. Now, I’m not being hyperbolic when I say I honestly don’t know what iPhone model we’re on now, what the other companies are doing, or (other than parroting Apple marketing) what’s better since the 5 beyond ‘better battery, better screen, better camera.’
I remember watching the Apple conferences when they were exciting! Now it’s just a series of incremental ‘improvements’ (‘best battery/screen/camera ever’) and some faffy apps that tell you when it’s time to have a biotic yoghurt based on the colour of the moon, or whatever. So why is it all so crappy and boring? Why has the novelty and excitement worn off? One interpretation might be that innovation around mobile devices has stagnated; that the industry has become bloated and is waiting on fictional ‘breakthroughs’ like AI or the metaverse. However, it would be more accurate to say that we’ve stabilised the smartphone (built a series of norms and expectations around it) and that the inventors actually have very little wiggle room with which to do anything new.
The car provides a useful historical analogy. Over the last hundred or so years, the car has been stabilised such that there’s very little that can be done to change its fundamental design or role in our lives. Roads, tunnels and bridges have been designed around its size and speed, traffic signs and management around its power, legislation around its efficiency, design and safety. As these things have been pinned down, the software used to design cars has absorbed them all, framing and limiting the range of design, mostly. More annoyingly, we’ve grown social norms and impetuses around the car, placing shopping areas and parks, and deciding where people live, on the assumption of driving.
The same has happened with phones, though less perceptibly. Sure, we’ve designed pockets and bags around them, which (compared to bridges) are relatively mutable, but we’ve also evolved social norms around them: when and how to use them, the role they have in our commute, during meetings or family dinners, and so on. This might limit so-called ‘innovation’ and make them seem really boring, but it also makes them very powerful social signallers and, just like the car, even if we have the technology, it’s going to be very hard to unpick them from our lives, much harder than the AI product people think.
Signalling
You see, the good thing about a phone is that it is a very visible social symbol. For instance, laying it out on the table, face down, is a way of saying that you want to be aware of it but not unduly distracted. You might be expecting a call or, indeed, broadcasting to other people that you are giving them your attention. A ‘no phones at the table’ rule is a more enforced version of this. You might then pick it up and carelessly flip it over to signal your intention to leave. You might have it on loud or silent depending on how much disdain you have for those around you relative to any notification you might receive. On public transport you can use it to blast music to annoy people, or as a concealment mechanism to dissuade eye contact.
For a little rectangle it has a remarkable role in instantiating and mediating social relations. For example, you can massively expand or shrink the range of your personal space with it. Drawing it close to your face on the bus shrinks that space to effectively the bit of air between the screen and your eyes, making it easily defensible in a very packed public environment. Conversely, putting it on a stand with a ring light in a busy public place massively inflates your personal space and pushes others’ out of the way. That’s why wandering into the back of a dance video on a busy high street feels like walking through someone else’s living room to get to the other side of their house.
Remember why Google Glass flopped? Ostensibly it was ‘privacy,’ but I don’t think that’s exactly right. People are recorded all the time by CCTV, their browsers and the institutions around them, and only the very most paranoid or activist care that much. It’s also not necessarily about consent; you don’t actively consent to being on CCTV. No, I think it’s about the disruption of personal space. These devices are agents of, or extensions of, your personal space, and all those norms I’ve described above are ways of negotiating this augmented space. Google Glass users and dance influencers expand themselves to fill and claim your space, which is why it feels awkward and horrid. Or, from the other extreme: it’s very hard to see or know what someone else is doing on their phone without physically invading their space, peeking over their shoulder or pulling it from them. It fits within the human boundaries we’ve had for tens of thousands of years. Possibly longer.
AI-in-a-box
The Humane Pin, with its outward-facing projector, camera and obnoxious position on the user’s body, is attempting to hurl itself bodily into these norms as if Google Glass never happened, and it will fail, because only the most obnoxious and socially ambivalent have no empathy for how other people see them. The Rabbit might have an easier time here; its interactions are familiar, a sort of walkie-talkie-Pokédex, but the question has to be asked: what does it do that a phone can’t? If anything, it does less, just in a lovely Teenage Engineering box. These things aren’t smartphone killers; they don’t offer nearly the same practical or social utility. They’re for time-poor, cash-rich people whose main focus is signalling to other people that they’re into AI.
I’m reminded of Alex Deschamps-Sonsino saying something in passing many years ago about ‘putting an Arduino in a box and seeing what happens.’ That was when the Internet of Things was in full overdrive and everyone thought we’d soon ditch our phones for a suite of sensors and actuators all around us. A decade or so on, the ideology hasn’t really changed: the phone is still seen as a temporary stepping stone into a future dreamed up by old men decades ago, only now it’s putting AI in a box and seeing what happens.
The Rabbit does look gorgeous tho.
Short Stuff
Alan Warburton has released his new film at the thewizardof.ai. Like his other works it’s a brilliant essay on the critical issues around a technology. I think what marks Alan’s work out for me is that he is a (self-described) ‘jobbing animator.’ As well as an artist and academic he works for commercial clients which I think gives him a uniquely grounded perspective in talking about critical issues.
I also really enjoyed this talk from Eryk Salvaggio on AI as imaginary.
More in #breezepunk; mobile kite wind power for temporary generation. I was wondering if this would be more efficient than solar for mobile use but the inventors appear to propose using it in combination with solar and fossil fuel generators.
Molly White on the US Securities and Exchange Commission’s reluctant approval of Bitcoin ‘ETPs’ (no, I don’t understand, and US financial regulation is not something I have time to get my head around, but it’s important and, more interestingly, hamstrung).
Some former colleagues have launched ‘CoDesign4Transitions‘ (I love them but who lets academics name things?) they have some money for PhD places.
I was a massive fan of the Magnus Archives, which I binged and wiki’d through Covid: Lovecraftian mystery horror taking place through archival recordings. They’ve started releasing the follow-up series, the Magnus Protocol, with a new cast of characters, and the first episode is just dripping with easter eggs.
Matt is releasing his mad clock. He’s made it poetic, refined and beautiful, but I want a dial on the back that I can crank up from ‘prosaic’ to ‘profound’ to ‘unhinged’ and fully bathe in a generative AI psychosis.
George’s book, Systems Ultra is out. It’s been a long old journey so very jazzed to see it hit shelves. Go buy a copy:
Systems Ultra explores how we experience complex systems: the mesh of things, people, and ideas interacting to produce their own patterns and behaviours.
What does it mean when a car which runs on code drives dangerously? What does mass-market graphics software tell us about the workplace politics of architects? And, in these human-made systems, which phenomena are designed, and which are emergent? In a world of networked technologies, global supply chains, and supranational regulations, there are growing calls for a new kind of literacy around systems and their ramifications. At the same time, we are often told these systems are impossible to fully comprehend and are far beyond our control.
I was listening to this Ezra Klein episode with Kyle Chayka about taste, in which they discuss the difference between curation proper and ‘curated feeds,’ which really just feed you more of what you want without regard for creator or context. I wonder if my personal curation method of reading (focused on relevance) is limiting my exposure to new ideas. I’m going to make more of a conscious effort this week to read things I would normally dismiss after the first few paragraphs.