J-Paul quickly pointed out that last week’s post was a little unclear, which is fair enough; these ideas aren’t super clear in my mind, which is why I blog them: to have exactly these conversations. I guess the thrust of it was: “Is/was the end of hollowed-out Design Thinking a sign of a forthcoming proper engagement with proper design from the mainstream world?” Anyway, it sparked some interesting thoughts on the blue job site, even if it was mostly on my doodle of what my design world looks like.
I got good medical news on Monday. Short version: the surgery worked and I’m allowed to start weight-bearing on my leg. This means I only need to use a stick rather than two crutches everywhere and can now do things like carry a cup, hold my child, stand up for more than five minutes, etc. I started back at work this week and I’m hoping to get into the office next week. Let me know if you want to catch up about anything.
You don’t really want an AI button
Lots of folks have been excited by the Rabbit R1 and the Humane Pin. They’re the first forays into AI products which is exciting to industrial and UX designers bored of black rectangles. Each has a gimmick to knock it over the line; the Rabbit represents the illustrious design pedigree of Teenage Engineering and the Humane Pin draws on sci-fi tropes to have another crack at gestural interfaces. But I just don’t see them working. I can’t help it; the intuitive, designerly bit of me sucks air through my teeth and exhales with a ‘naaaaaah.’
They’re drumming up hype because they claim a new typology of object that some see as more appropriate for so-called AI, but they do it purely on the back of science-fiction fantasies with little consideration for how people actually use their technology and how it fits into society. There’s a series of misunderstandings and assumptions: the first is that generative AI immediately demands different types of physical interaction, and the second is that hardware innovations are driven purely by software as opposed to social impetuses.
The first one is tricky. Generative AI might mean that people interact with information differently (just as the Internet heralded) and so demand different types of hardware to serve that need, but it won’t happen overnight. I personally buy into the theory that AI is just a nooscope: a new way of examining and organising information rather than new information per se. And these devices seem to be ways of organising information for those who are money-rich and time-poor:
who is a person that’s an early adopter of gadgets, but is so disengaged with what they eat and where they travel, that they’ll just accept the default choices from a brand new platform that will certainly have bugs?
Anil Dash via Steve Messer
We don’t access information in this linear way any more. Expedia killed off travel agents because we could see, on a screen, the range of options across time, cost and convenience and then make a decision ourselves that feels better informed than by talking to someone or something. Expedia is the absolute worst but it gave us what we wanted.
There’s also the simple truth that a keyboard and mouse are the very best, most versatile and highest-fidelity way of interacting with computers, and the billions invested in gestural and voice interfaces have failed to show any different; they give you none of the power, dexterity or flexibility of a mouse, even if, twenty years on, thinkfluencers are still telling us the Minority Report interface is coming. (See also: elite PC gamers for keyboard maximalism.) As well as the sci-fi tropes and the élan of gorgeous industrial design being pulled on to make implausible designs desirable, there’s also the ‘push’ of the past: the idea, buried in all this, that smartphones are a temporary stepping stone on a path to a new form of ubiquitous AI interface, a technique common in tech to position our present as part of a ‘transhistorical continuum‘ of an inevitable future.
Don’t forget your phone
I think it’s fair to say (and I’m correct) that the last good phone was the iPhone 5 and it’s been shit since; all the hardware worked, it had a good form, size and shape where the camera didn’t stick out and you didn’t need to carry around extra batteries. It was completely fine. It was so completely fine that Apple have now gone back and got rid of the annoying slippy-as-a-fish bevels and put back the 5’s rugged industrial edges so that you can actually feel and grip it with your fingers in the dark first thing in the morning. Now, I’m not being hyperbolic when I say I honestly don’t know what iPhone model we’re on now, what the other companies are doing and (other than parroting Apple marketing) what’s better since the 5 other than ‘better battery, better screen, better camera.’
I remember watching the Apple conferences when they were exciting! But now it’s just a series of incremental ‘improvements’ (‘best battery/screen/camera ever’) and some faffy apps that tell you when it’s time to have a biotic yoghurt based on the colour of the moon or whatever. So why is it all so crappy and boring? Why has the novelty and excitement worn off? One interpretation might be that innovation around mobile devices has stagnated; that the industry has become bloated and is waiting on fictional ‘breakthroughs’ like AI or the metaverse. However, it would be more accurate to say that we’ve stabilised the smartphone – built a series of norms and expectations around it – and that the inventors actually have very little wiggle room with which to do anything new.
The car provides a useful historical analogy: over the last hundred or so years, the car has been stabilised such that there’s very little that can be done to change its fundamental design or role in our lives. Roads, tunnels and bridges have been designed around its size and speed, traffic signs and management around its power, legislation around its efficiency, design and safety. As these things have been pinned down, the software used to design them has taken it all on board to frame and limit the range of design, mostly. More annoyingly, social norms and impetuses have sprung up around the car, such as placing shopping areas and parks on the assumption of driving, and shaping where people live.
The same has happened with phones, although less perceptibly. Sure, we’ve designed pockets and bags around them, which (compared to bridges) are relatively mutable, but we’ve also evolved social norms around them: when and how to use them, the role they have in our commute, during meetings or family dinners and so on. This might limit so-called ‘innovation’ and make them seem really boring, but it also makes them very powerful social signallers and, just like the car, even if we have the technology, it’s going to be very hard to unpick them from our lives, much harder than the AI product people think.
Signalling
You see, the good thing about a phone is it is a very visible social symbol. For instance, laying it out on the table, face down is a way of saying that you want to be aware of it but not distracted unduly. You might be expecting a call or indeed, broadcasting to other people that you are giving them attention. A ‘no phones at the table rule’ is a more enforced version of this. You might then pick it up and carelessly flip it over to signal your intention to leave. You might have it on loud or silent depending on how much disdain you have for those around you relative to any notification you might receive. On public transport you can use it to blast music to annoy people or use it as a concealment mechanism to dissuade eye contact.
For a little rectangle it has a remarkable role in instantiating and mediating social relations. For example, you can massively expand or shrink the range of your personal space with it: drawing it close to your face on the bus shrinks that space to effectively the bit of air between the screen and your eyes, making it easily defensible in a very packed public environment. Conversely, putting it on a stand with a ring light in a busy public place massively inflates your personal space and pushes others’ out of the way. That’s why wandering into the back of a dance video on a busy high street feels like walking through someone else’s living room to get to the other side of their house.
Remember why Google Glass flopped? Ostensibly it was for ‘privacy’ but I don’t think that’s exactly right. People are recorded all the time by CCTV, their browsers and the institutions around them and only the very most paranoid or activistic care that much. It’s also not necessarily about consent; you don’t actively consent to be on CCTV. No, I think it’s about the disruption of personal space. These devices are agents of or extensions of your personal space and all those sorts of norms I’ve described above are ways of negotiating this augmented space: Google Glass users and dance influencers expand themselves to fill your space and claim it which is why it feels awkward and horrid. Or, from the other extreme; it’s very hard to see or know what someone else is doing on their phone without physically invading their space; peeking over their shoulder or pulling it from them. It fits within the human boundaries we’ve had for tens of thousands of years. Possibly longer.
AI-in-a-box
The Humane Pin, with its outward-facing projector, camera and obnoxious position on the user’s body, is attempting to hurl itself bodily into these norms as if Google Glass never happened, and it will fail because only the most obnoxious and socially ambivalent have no empathy for how other people see them. The Rabbit might have an easier time here; its interactions are familiar as a sort of walkie-talkie-Pokédex, but the question has to be asked about what it does that a phone is incapable of doing – if anything, it does less, just in a lovely Teenage Engineering box. These things aren’t smartphone killers; they don’t offer nearly the same practical or social utility. They’re for time-poor, cash-rich people whose main focus is signalling to other people that they’re into AI.
I’m reminded of Alex Deschamps-Sonsino saying something in passing many years ago about ‘putting an Arduino in a box and seeing what happens.’ This was when the Internet of Things was in full overdrive and everyone thought that we’d soon ditch our phones for a suite of sensors and actuators all around us. Probably a decade on, the ideology hasn’t really changed; the phone is still seen as a temporary stepping stone into a future dreamed up by old men decades ago, only now it’s by putting AI in a box and seeing what happens.
The Rabbit does look gorgeous tho.
Short Stuff
- Alan Warburton has released his new film at thewizardof.ai. Like his other works it’s a brilliant essay on the critical issues around a technology. I think what marks Alan’s work out for me is that he is a (self-described) ‘jobbing animator.’ As well as being an artist and academic, he works for commercial clients, which I think gives him a uniquely grounded perspective when talking about critical issues.
- I also really enjoyed this talk from Eryk Salvaggio on AI as imaginary.
- More in #breezepunk; mobile kite wind power for temporary generation. I was wondering if this would be more efficient than solar for mobile use but the inventors appear to propose using it in combination with solar and fossil fuel generators.
- Video of plants responding to danger.
- Molly White on the US Securities and Exchange Commission’s reluctant approval of Bitcoin ‘ETPs‘ (no, I don’t understand, and US financial regulation is not something that I have time to get my head around – but it’s important and, more interestingly, hamstrung.)
- Some former colleagues have launched ‘CoDesign4Transitions‘ (I love them, but who lets academics name things?). They have some money for PhD places.
- Our nascent Madrid function has a role open for someone early in their career.
- I started a collection of things about how the Old Web is Dying. Just added What Happened to My Search Engine? and The Internet is Full of AI Dogshit, both of which are from the latest Creative Destruction.
- I was a massive fan of the Magnus Archives, which I binged and wiki’d through Covid: Lovecraftian mystery horror taking place through archival recordings. They’ve started releasing the follow-up series, the Magnus Protocol, with a new cast of characters, and the first episode is just dripping with easter eggs.
- Matt is releasing his mad clock. He’s made it poetic, refined and beautiful, but I want a dial on the back that I can crank up from ‘prosaic’ to ‘profound’ to ‘unhinged’ and fully bathe in a generative AI psychosis.
- George’s book, Systems Ultra is out. It’s been a long old journey so very jazzed to see it hit shelves. Go buy a copy:
Systems Ultra explores how we experience complex systems: the mesh of things, people, and ideas interacting to produce their own patterns and behaviours.
What does it mean when a car which runs on code drives dangerously? What does mass-market graphics software tell us about the workplace politics of architects? And, in these human-made systems, which phenomena are designed, and which are emergent? In a world of networked technologies, global supply chains, and supranational regulations, there are growing calls for a new kind of literacy around systems and their ramifications. At the same time, we are often told these systems are impossible to fully comprehend and are far beyond our control.
I was listening to this Ezra Klein episode with Kyle Chayka about taste, in which they discuss the difference between curation-proper and ‘curated feeds’ which really just feed you more of what you want without regard for creator or context. I wonder if my personal curation method of reading (focus on relevance) is limiting my exposure to new ideas. I’m going to make more of a conscious effort this week to read things I would normally dismiss after the first few paragraphs.
Ok, you know I love you. Speak next week.