One of the things I’ve been really emphasising when talking to people about this new technological wave is that we’re not in the ‘exciting’, frenetic days of early social media or the Internet. This isn’t a time when new technologies are emerging and smart, playful outsiders are coming in and showing us new ways we might do things. Generative AI is characterised by four or five of the world’s wealthiest companies, run by a few dozen of the world’s wealthiest men, based in the two wealthiest states, fighting to maintain the status quo.
Of course there are, and will be, weird and interesting things that happen along the way, but the incumbents are so powerful that they can just hoover up any competition. This was well analysed by Henry Farrell on the political economy of AI. He points out that, just as with the early Internet, a war over IP is emerging between the incumbent corporations that capitalise on culture and the artists and creatives who feed that culture. Only this time the incumbents aren’t Disney, Warner Brothers and the record companies, as with Netflix, Napster and Spotify, but the big tech companies: Microsoft, Google, Amazon and so on, trying to extend the living they’ve made off the back of the work of creatives. The point Farrell makes is that there’s a future in which this just kills culture and the Internet; that the well is so poisoned by synthetic media and market disincentives that the whole enterprise of the Internet just sort of ossifies and collapses.
As we know from Gopnik, generative AI is a cultural technology, a way of organising and disseminating knowledge. It doesn’t create anything new but changes the way we order things and value them. The IP fights going on are a symptom of this shift, and in fighting to maintain total supremacy and the status quo over a speculative future market, the incumbents are likely smothering anything new that might emerge as a result.
In a sort of answer to the last post’s provocation (‘If someone tells you what something could do, ask them why it isn’t.’), why would any of these incumbents seek to change the techno-cultural production machine that has made their bosses billionaires? AI isn’t a disruptive force to them, it’s a compliant one, and the aim is simply to avoid letting any of your three or four competitors claim any space off you. Luckily for us, maybe, it’s actually going quite badly as OpenAI starts to hit a ceiling, the numbers look unworkable and they keep launching things that flop or provide some novelty but little functional utility.
Short Stuff
I’ve been asked to do an interview for a thing, but the bit I really like is that I’ve been given a long time to do it. I have the questions but now have two months to answer them, which is really, really interesting because it means I can actually think and sit with them rather than dash them off, as often happens, or as on this blog.
A good piece on rituals; it’s a useful antidote to the lazy ‘smartphones are now rituals’ framing you sometimes see in popular reporting. Rituals have specific qualities and properties that are not present in most technologically-mediated content binges.
Following on from my last post, Dave Karpf’s review of Dixon’s book on blockchains: “He sees some problems with the Internet that venture capital helped build. The only solution he can imagine is more venture capital.”
Wes on tech’s delusional relationship to Star Trek. (Weirdly I added an overlong footnote about Star Trek to a recent essay. Probably won’t make the cut but it was basically ‘Star Trek is a silly thing to pin your colours to because it is politically infeasible.’)
The fascinating trap OpenAI has got itself into as a result of its arrangements with Microsoft and its nonprofit structure: having to prove that it has not got anywhere near so-called AGI.
This was a really great interview with Cameron Tonkinwise (and Okskar!). I nodded along enthusiastically with most of what he said about designers in organisations. I was hoping there’d be a more succinct and clear definition of Transition Design, but there’s a lot of great content there.
I have little loyalty or connection to San Francisco, but you know I detest the hubris and nihilism of tech culture. This is a great piece from Rebecca Solnit on how it has destroyed the city, and on the paradox at the heart of claims of democratisation while Silicon Valley increasingly lives in and encourages isolation, alienation and separation. Putting it better than I did in the last post.
Something really sparked a circuit in this great article from Beth Singler about apocalypticism in AI. I’m paraphrasing, but: apocalypses are a utopia for those who survive them.
Sorry it was very short this week. I feel like I’ve worked through a lot of stuff recently already and have been focussing on work to try and get various things finished and over the line. Ok, love you, bye.
J-Paul quickly pointed out that last week’s post was a little unclear, which is fair enough; these ideas aren’t super clear in my mind, which is why I blog them, to have exactly these conversations. I guess the thrust of it was: “Is/was the end of hollowed-out Design Thinking a sign of a forthcoming proper engagement with proper design from the mainstream world?” Anyway, it sparked some interesting thoughts on the blue job site, even if it was mostly on my doodle of what my design world looks like.
I got good medical news on Monday. Short version: the surgery worked and I’m allowed to start weight-bearing on my leg. This means I only need to use a stick rather than two crutches everywhere and can now do things like carry a cup, hold my child, stand up for more than five minutes etc. etc. I started back at work this week and am hoping to get into the office next week. Let me know if you want to catch up about anything.
You don’t really want an AI button
Lots of folks have been excited by the Rabbit R1 and the Humane Pin. They’re the first forays into AI products, which is exciting to industrial and UX designers bored of black rectangles. Each has a gimmick to knock it over the line: the Rabbit represents the illustrious design pedigree of Teenage Engineering and the Humane Pin draws on sci-fi tropes to have another crack at gestural interfaces. But I just don’t see them working. I can’t help it; the intuitive, designerly bit of me sucks air through my teeth and exhales with a ‘naaaaaah.’
They’re drumming up hype because they claim a new typology of object that some see as more appropriate for so-called AI, but they do it purely on the back of science fiction fantasies with little consideration for how people actually use their technology and how it fits into society. There’s a series of misunderstandings and assumptions: the first is that generative AI immediately demands different types of physical interaction; the second is that hardware innovation is driven purely by software rather than by social impetuses.
The first one is tricky. Generative AI might mean that people interact with information differently (just as the Internet heralded) and so demand different types of hardware to serve that need, but it won’t happen overnight. I personally buy into the theory that AI is just a nooscope: a new way of examining and organising information rather than new information per se. And these devices seem to be ways of organising information for those who are money rich and time poor:
who is a person that’s an early adopter of gadgets, but is so disengaged with what they eat and where they travel, that they’ll just accept the default choices from a brand new platform that will certainly have bugs?
We don’t access information in this linear way any more. Expedia killed off travel agents because we could see, on a screen, the range of options across time, cost and convenience and then make a decision ourselves that feels better informed than by talking to someone or something. Expedia is the absolute worst but it gave us what we wanted.
There’s also the simple truth that a keyboard and mouse are the very best, most versatile and highest fidelity way of interacting with computers, and billions invested in gestural and voice interfaces have failed to show otherwise; they give you none of the power, dexterity or flexibility of a mouse, even if, twenty years on, thinkfluencers are still telling us the Minority Report interface is coming. (See also: elite PC Gamers for keyboard maximalism.) As well as the sci-fi tropes and gorgeous industrial design being pulled on to make implausible designs desirable, there’s also the ‘push’ of the past: the idea, buried in all this, that smartphones are a temporary stepping stone on a path to a new form of ubiquitous AI interface, a technique common in tech to position our present as part of a ‘transhistorical continuum’ of an inevitable future.
Don’t forget your phone
I think it’s fair to say (and I’m correct) that the last good phone was the iPhone 5 and it’s been shit since; all the hardware worked, it had a good form, size and shape where the camera didn’t stick out and you didn’t need to carry around extra batteries. It was completely fine. It was so completely fine that Apple have now gone back, got rid of the annoying slippy-as-a-fish bevels and put back the 5’s rugged industrial edges so that you can actually feel and grip it with your fingers in the dark first thing in the morning. Now, I’m not being hyperbolic when I say I honestly don’t know what iPhone model we’re on now, what the other companies are doing or (other than parroting Apple marketing) what’s better since the 5, beyond ‘better battery, better screen, better camera.’
I remember watching the Apple conferences when they were exciting! But now it’s just a series of incremental ‘improvements’ (‘best battery/screen/camera ever’) and some faffy apps that tell you when it’s time to have a biotic yoghurt based on the colour of the moon or whatever. So why is it all so crappy and boring? Why has the novelty and excitement worn off? One interpretation might be that innovation around mobile devices has stagnated; that the industry has become bloated and is waiting on fictional ‘breakthroughs’ like AI or the metaverse. However, it would be more accurate to say that we’ve stabilised the smartphone – built a series of norms and expectations around it – and that the inventors actually have very little wiggle room with which to do anything new.
The car provides a useful historical analogy: over the last hundred or so years, the car has been stabilised such that there’s very little that can be done to change its fundamental design or role in our lives. Roads, tunnels and bridges have been designed around its size and speed, traffic signs and management around its power, legislation around its efficiency, design and safety. As these things have been pinned down, the software used to design them has taken it all on board, framing and limiting the range of possible designs, mostly. More annoyingly, we’ve grown social norms and impetuses up around the car, such as placing shopping areas and parks, and deciding where people live, on the assumption of driving.
The same has happened with phones, although less perceptibly. Sure, we’ve designed pockets and bags around them, which (compared to bridges) are relatively mutable, but we’ve also evolved social norms around them: when and how to use them, the role they have in our commute, during meetings or family dinners and so on. This might limit so-called ‘innovation’ and make them seem really boring, but it also makes them very powerful social signallers and, just like the car, even if we have the technology, it’s going to be very hard to unpick them from our lives, much harder than the AI product people think.
Signalling
You see, the good thing about a phone is that it is a very visible social symbol. For instance, laying it out on the table, face down, is a way of saying that you want to be aware of it but not unduly distracted. You might be expecting a call or, indeed, broadcasting to other people that you are giving them your attention. A ‘no phones at the table’ rule is a more enforced version of this. You might then pick it up and carelessly flip it over to signal your intention to leave. You might have it on loud or silent depending on how much disdain you have for those around you relative to any notification you might receive. On public transport you can use it to blast music to annoy people or use it as a concealment mechanism to dissuade eye contact.
For a little rectangle it has a remarkable role in instantiating and mediating social relations. For example, you can massively expand or shrink the range of your personal space with it: drawing it close to your face on the bus shrinks that space to effectively the bit of air between the screen and your eyes, making it easily defensible in a very packed public environment. Conversely, putting it on a stand with a ring light in a busy public place massively inflates your personal space and pushes others’ out of the way. That’s why wandering into the back of a dance video on a busy high street feels like walking through someone else’s living room to get to the other side of their house.
Remember why Google Glass flopped? Ostensibly it was ‘privacy’, but I don’t think that’s exactly right. People are recorded all the time by CCTV, their browsers and the institutions around them, and only the very most paranoid or activist care that much. It’s also not necessarily about consent; you don’t actively consent to being on CCTV. No, I think it’s about the disruption of personal space. These devices are agents of, or extensions of, your personal space, and all those norms I’ve described above are ways of negotiating this augmented space: Google Glass users and dance influencers expand themselves to fill your space and claim it, which is why it feels awkward and horrid. Or, from the other extreme, it’s very hard to see or know what someone else is doing on their phone without physically invading their space: peeking over their shoulder or pulling it from them. It fits within the human boundaries we’ve had for tens of thousands of years. Possibly longer.
AI-in-a-box
The Humane Pin, with its outward-facing projector, camera and obnoxious position on the user’s body, is attempting to hurl itself bodily into these norms as if Google Glass never happened, and it will fail because only the most obnoxious and socially ambivalent have no empathy for how other people see them. The Rabbit might have an easier time here; its interactions are familiar as a sort of walkie-talkie-Pokedex, but the question has to be asked about what it does that a phone is incapable of doing. If anything it does less, just in a lovely Teenage Engineering box. These things aren’t smartphone killers; they don’t offer nearly the same practical or social utility. They’re for time-poor, cash-rich people whose main focus is signalling to other people that they’re into AI.
I’m reminded of Alex Deschamps-Sonsino saying something in passing many years ago about ‘putting an Arduino in a box and seeing what happens.’ This was when the Internet of Things was in full overdrive and everyone thought that we’d soon ditch our phones for a suite of sensors and actuators all around us. Probably a decade on, the ideology hasn’t really changed; the phone is still seen as a temporary stepping stone into a future dreamed up by old men decades ago, only now it’s by putting AI in a box and seeing what happens.
The Rabbit does look gorgeous tho.
Short Stuff
Alan Warburton has released his new film at thewizardof.ai. Like his other works, it’s a brilliant essay on the critical issues around a technology. I think what marks Alan’s work out for me is that he is a (self-described) ‘jobbing animator.’ As well as being an artist and academic, he works for commercial clients, which I think gives him a uniquely grounded perspective when talking about these issues.
I also really enjoyed this talk from Eryk Salvaggio on AI as imaginary.
More in #breezepunk; mobile kite wind power for temporary generation. I was wondering if this would be more efficient than solar for mobile use but the inventors appear to propose using it in combination with solar and fossil fuel generators.
Molly White on the US Securities and Exchange Commission’s reluctant approval of Bitcoin ‘ETPs’ (no, I don’t understand, and US financial regulation is not something I have time to get my head around – but it’s important and, more interestingly, hamstrung).
Some former colleagues have launched ‘CoDesign4Transitions’ (I love them, but who lets academics name things?). They have some money for PhD places.
I was a massive fan of the Magnus Archives, which I binged and wiki’d through Covid: Lovecraftian mystery horror taking place through archival recordings. They’ve started releasing the follow-up series, the Magnus Protocol, with a new cast of characters, and the first episode is just dripping with easter eggs.
Matt is releasing his mad clock. He’s made it poetic, refined and beautiful, but I want a dial on the back that I can crank up from ‘prosaic’ to ‘profound’ to ‘unhinged’ and fully bathe in a generative AI psychosis.
George’s book, Systems Ultra is out. It’s been a long old journey so very jazzed to see it hit shelves. Go buy a copy:
Systems Ultra explores how we experience complex systems: the mesh of things, people, and ideas interacting to produce their own patterns and behaviours.
What does it mean when a car which runs on code drives dangerously? What does mass-market graphics software tell us about the workplace politics of architects? And, in these human-made systems, which phenomena are designed, and which are emergent? In a world of networked technologies, global supply chains, and supranational regulations, there are growing calls for a new kind of literacy around systems and their ramifications. At the same time, we are often told these systems are impossible to fully comprehend and are far beyond our control.
I was listening to this Ezra Klein episode with Kyle Chayka about taste, in which they discuss the difference between curation-proper and ‘curated feeds’ which really just feed you more of what you want without regard for creator or context. I wonder if my personal curation method of reading (focus on relevance) is limiting my exposure to new ideas. I’m going to make more of a conscious effort this week to read things I would normally dismiss after the first few paragraphs.
I want to run a thought experiment by you that I’ve run before with several folks: what happens when ‘Design’ is just ‘design’? What happens when it’s no longer seen as a specialist field with accompanying specialist training but has broad comprehension by lots of people? What happens when everyone can claim to be a designer?
Let me explain by way of a historic parallel. Occasionally, I still come across folks who refer to ‘The Digital’ as if it is a discrete, modular layer of stuff, to be inserted into everyday life and switched on and off as convenient when in fact, for most people, it doesn’t even make sense to refer to ‘digital’ or ‘computer’ technology any more – it’s just technology. Even more than that, it’s not conceivably separate from the way we live, work, consume, interact and socialise.
‘The Digital’ is a colloquial hangover from a time when digital technology was novel, poorly understood and required some degree of expertise, training or experience in transitioning organisations and individuals to working ‘digitally.’ Organisations saved time by digitising files and documents, making them searchable and editable. Individuals communicated and shared more effectively through the web, and so on. To you this may seem deadly obvious stuff; as per last week, we grew up on the Internet. Our files have (almost) always been digitised, we met our friends on the Internet, we learned MS Office as pre-teens and spent breaks making animated slide shows and our evenings on MSN.
Now, ten or twenty years on, I want to suggest design is at a similar early stage in its normalisation. But this maybe requires a little backstep into what ‘design’ I’m talking about and the different ways I seem to encounter it in professional life. I’ll freely caveat that this is highly partial to my particular perspective from London. And obviously lots of people have tried to diagrammatise the different design disciplines; some notable examples include the Dubberly design map, Dan Hill and Elliot’s version, which I also hacked at.
What even is design?
There strikes me as being a tension within design around whether it is approached as a way of getting certain outcomes (products, services, answers – ‘for’) or a way of understanding something better (research – ‘about’), although these overlap heavily. This second approach often feels opaque and hard to grasp. After all, design museums, shops and exhibitions are full of objects and images which are evidently outcomes. This isn’t the post to go through the different forms of design research and how knowledge is produced in design, but even something like the greater popular comprehension of UX is evidence of an acknowledgement that design has a unique way of understanding systems, people and problems. This is the principle on which Design Thinking predicated itself. But unlike Design Research, which I (and others) would characterise as open-ended, iterative and exploratory, Design Thinking is solution-oriented and discriminative.
But, zooming out (again, a highly limited and partial view), there are three main parts here. All the green parts originate in, and place their intellectual and practical heritage in, the ‘design school tradition.’ E.g. someone here actually has one or two design degrees. A service designer emerging from a London university will likely have looked at grids, modernist architecture and affordance theories; the whole caboodle of design school front-loading that’s built into curriculums. This, again, is another blog post (‘why do I need to know about Le Corbusier to make apps?’ maybe) but, just as with other domains in the sciences and humanities, design, as a legitimate academic domain, is built on intellectual foundations proven in existing practice and theory. This is why design short courses bother me: I would never do a two-week short course in virology and proclaim to be developing a cure for the flu.
As this big green zone has expanded, it’s started to brush up against other fields, particularly Social Science Seashore and Engineering Continent. I’ve been talking recently about how design borrowed from and piggybacked on the social sciences as it sought academic legitimacy in the early 2000s. The formal structures of journals, papers, the cycle of conferences, the underlying constructivist theories, references to Foucault and Derrida were all ways of shoehorning this practice into a form that was palatable to the academic world. Of course, the unresolved question of practice is still being dealt with: how does making things produce knowledge?
On the opposite side is Engineering Continent. I’m personally less familiar with this end of my map because, frankly, I don’t find it as interesting. This is where things like human factors, bits of architecture, human-computer interaction and so on start to bleed over between design and engineering. There is a whole cottage theory I have here about engineers and technical people who pick up design, and designers who pick up engineering. I think this is largely where UX is, at least from what I can glean. See also: engineering inventing critical theory.
Ok, so design then?
So how does this relate to my original proposition that, much like with ‘The Digital,’ big-D Design is becoming normalised? It’s my contention that as design has rubbed up against other fields, it has started to transfer into them as well. Design has already spread across the top of business and government, and those organisations will be pressing its value back into graduates and new workers, who will press it back into universities. We’re also seeing elite universities use design as a crutch for interdisciplinary study and research, and design seems to be featuring heavily in sustainability transitions. Even the American MBA Design Thinking world is reckoning with the green bits on the map and acknowledging the shortcomings of parachuting design into business and innovation. So let’s assume then (from this, again, meagre and partial perspective) that more non-designers get design (in some way) than five or ten years ago and that it’s seen as a core competency of future workers. After all, I’d say that a good design education will get you most of the WEF’s future skills.
And then let’s look at the other not-so-weak signal, the Design Thinking Deflation, and extend our historic parallel with ‘The Digital.’ The promise of total home automation, of all-pervasive servile AI, of a life of luxury enabled by digital technology and a world brought together by social media obviously failed to materialise. Really we just got cheap crap bombarded at our eyeballs a bit faster. Similarly, the over-hyped promises of Design Thinking to fix social inequity, climate change, poverty and world hunger really just downgraded to making customers slightly less inconvenienced. Now, to my memory, the end of ‘The Digital’ was marked by flailing attempts to get everyone to code and to get everyone reskilled in ‘cyber’ as if the mere act of doing digital would somehow lead to a fabulously wealthy future (something something scaling technopolitical national visions). I remember talking to prospective students at open days desperate to know what coding language they would learn. What if, I would ask, the government decides in a few years that making ceramic violas is the next big industry? The point is, are we seeing a similar Rubicon being crossed in Design→design? Everyone gets it and wants a piece of it but they’re not sure what it is, how it works or what to do with it. Yet.
So then what happens? What happens in five years when your electrical engineering course has a critical design module? When your political science degree features a unit looking at affordances? When your job description for a financial analyst asks for comprehension of design-led research? What does it mean when design is everywhere and what does it look like? And what happens to big-D Design?
Well, the end of ‘The Digital’ absolutely didn’t kill off computer science; in fact, I would argue it did the opposite. It elevated computer programmers, developers and inventors to prophetic, even messianic status in popular culture, and it opened the doors to the tech boom as computers went into every home and people normalised putting screens in their pockets. Good things came of it too, like all the stuff I’m using now to type and share this, and MSN Messenger. What if a wider, richer understanding of design and what designers do opens doors to time, funding, responsibility and respect that leads to greater innovations, research and developments in the field? What if, instead of sending managers off on two-week courses, people actually seek out broadly experienced, curious and critical designers with years of experience that can only be gained through practice? What if designers aren’t a curious novelty, prodded and poked and, when push comes to shove, asked to make slideshows, but are seen as leaders just as legitimate as lawyers, accountants, political scientists and even engineers?
One more thing
Final thing, and this really is another piece: the conflicting operating logics of design and digital. Remember how, along the way, Design ingested all that yummy, gooey social theory? Well, as it transitions into small-d design it’s going to take all that with it. All this social theory says that things don’t necessarily have or call for ‘solutions,’ that facts and artefacts are partial and subjective, that objectivity is a social construct, that uncertainty and complexity are real and just have to be lived in, that computers can be wrong, and that play, iteration, testing and failing are research and development tools. And, as we know from the Post Office scandal (and a PhD about the social construction of AI), the premise of digital is the opposite: certainty, solutions, prediction. These two logics might increasingly run into conflict and fight for supremacy; do you, as an organisational leader (ok, suppose you are one though), listen to your digital team who say the algorithm definitely says x? Or your design team whose research suggests y with a bit of z and, yes, some x.
Short Stuff
IKEA’s visions for the future home are ‘semi-apocalyptic,’ claims Fast Company. This is a sentiment I’ve encountered directly in some of my work: that images of a resilient, sustainable future in which we necessarily live, consume, travel and interact more sparingly are somehow ‘apocalyptic.’ It was a common bit of feedback we got on Abundance, that it was somehow dystopian. I noticed it most across generations; older folks would generally see it as negative while younger folks might see it as challenging but exciting. ‘Dystopian’ and ‘apocalyptic’ are useful for labelling anything which challenges the status quo, I guess.
Alex DS here on the importance of spreadsheets for designers, something I’m also pretty zealous about. I think I’ve pointed out on podcasts that the real work of organisational change happens in the spreadsheet, not the slide deck. I also think she’s on to something: when organising a team, it really does all come back to spreadsheets. I’ve tried a bunch of other things and they feel like a lot of effort. Something about human logic works on spreadsheets, or maybe the other way around.
Eryk Salvaggio has a new film out called SWIM. I can’t help but wonder if it’s a reflection or dig at Refik Anadol’s MoMA piece.
Fun piece on why all websites look the same. This is more in that ‘the web is dying’ signal. I should collate these a bit more. Ok I started a collection here.
The Apple Vision Quest is out in a week or so. Matt asks, in typical style, what the ‘fart app’ for it is. Really, it’s an important point: the thing that lets people ‘get’ a technology is rarely useful or serious.
Paris Marx has written about Musk’s latest labour struggles in Scandinavia. I’ve been reading a lot that references collective bargaining and organised labour as a crucial component of the green transition. E.g. we need these new technologies and ways of living, that’s going to take a lot of work, people need to be fairly compensated, and giving them a greater voice in the process can drive the green transition quicker.
I’d never heard of the ‘Walt Calculation‘ – basically that some problems are so hard that it’s easier/more efficient to wait for technology to develop to help solve them than to try and solve them with today’s technology.
This one was hard to write. It’s an idea that has emerged a lot in recent conversations and talks and really works better in that format, but this is what this blog is for; I have to write it down to make it make sense in written words. Well, I gave it a shot. I should point out that I was kicked into doing it by a post about Silvio Lorusso’s What Design Can’t Do (the post I’ve now lost), which he has sent to me and I have yet to read. But I was wondering how the cultures he’s critiquing might receive it. Would they recognise the critiques of design thinking? I need to read it first before reckoning on that, I reckon.
I still use Twitter to post things I’m reading but there really is no interaction there. And Threads, copying a model that we’re all over, seems to have struggled to pick up, so other than WhatsApps I’m not really engaging intellectually much online, just posting. I’ve found that Twitter has a lineup of first-name-only female bots that like tweets and seem to rotate weekly. This week it’s Josephine and Hazel. Sometimes I glance into Discord but that feels like a full-time job.
I need to look at the layout of this place. Looks great on a big screen or mobile but in between it’s poopy. I’ll get on that at some point. Love you, speak next week.