About
Blogroll

Tag: Technology

  • Box110: We Just Haven’t Been Capitalisming Hard Enough

    DS097: Been a while since I did an animation. I thought that focussing on motion would mean I’d spend less time geting myopic about materials and lighting but no, it made no difference. There’s a lot of daily artists I follow who just seem to through up some simple shapes, add some fog and bam, beautiful render.

    I’m particularly annoyed today. I had a backlog of news and research to go through and mainlined too much horrible shit in one go to remain my usual centrist-Dad balanced self. Instead I’m taking this opportunity to work some rage at hypocrisy. It all started with this great Rolling Stone article about this year’s Consumer Electronics Show and an idea that stood out to me as a much better articulation of the root of a bunch of work around my PhD:

    The whole week [of CES panels and presentations on AI] was like that: specific and devastating harms paired with vague claims of benefits touted as the salve to all of mankind’s ills. 

    The Cult of AI

    Throughout the show, the author writes about how wealthy tech moguls stood on stages and loudly promised all the ways that AI would make people richer, happier, healthier, live longer and fix whole ecosystems while in quieter Q&As, they brooded and avoided eye contact while discussing the specific and existing harms and exploits going on; from algorithmic injustice to scams and crime. Then the author discusses the actual tech on display; that despite these claims that AI will cure cancer, eliminate road deaths, deal with climate change and and uplift society, all that is on display are AI sex toys, a pixelated rabbit that orders you the most normal pizza from the list (famously in the demo the creator of the Rabbit R1 just asks for ‘the most popular pizza from Pizza Hut’ which is how everyone orders pizza right? More on that in a bit) and a telescope that can remove light pollution (admittedly cool). There’s an outsize contrast between the claims of potential AI futures (overpromising, blindly optimistic and disconnected from real-world problems), the reality (quick buck gadgets that have little utility as demonstrators) and the evidenced harms (fraud, deception, crime, IP theft, injustice and road deaths.) And these appear to be drifting further apart.

    Dan McQuillian has also put it well as “the social benefits are still speculative, but the harms have been empirically demonstrated.” This is a big motivator in my own research in AI and has been really since the early Haunted Machines days: How and why have the imaginary claims of speculative benefits outweighed the observable harms it is doing? What methods, tricks, tactics and strategies are deployed to make us believe in these fantasies?

    Most of the executives hoping to profit off AI are in a similar state of mind. All the free money right now is going to AI businesses. They know the best way to chase that money is to throw logic to the wind and promise the masses that if we just let this technology run roughshod over every field of human endeavor it’ll be worth it in the end. 

    This is rational for them, because they’ll make piles of money. But it is an irrational thing for us to let them do. Why would we want to put artists and illustrators out of a job? Why would we accept a world where it’s impossible to talk to a human when you have a problem, and you’re instead thrown to a churning swarm of chatbots? Why would we let Altman hoover up the world’s knowledge and resell it back to us?

    We wouldn’t, and we won’t, unless he can convince us doing so is the only way to solve every problem that terrifies us. Climate change, the cure for cancer, an end to war or, at least, an end to fear that we’ll be victimized by crime or terrorism, all of these have been touted as benefits of the coming AI age. If only we can reach the AGI promised land. 

    The Cult of AI

    Lots of others have come at this idea in other ways; Bojana Romic on how AI people frame the present as a ‘transhistorical continuity‘ into an inevitable future, Lucy Suchman and Just Weber’s ‘promissory rhetorics‘ where technology is framed by what it will do rather than what it actually does or Lisa Messeri and Janet Vertesi’s ‘projectories‘ where imaginary and ever-receding future technologies are used as justification for present investments and cover for failures.

    Another rhetorical flourish I’ve noticed is the constant reference to ‘technology’ as the agent of all this change rather than massive multi-billion dollar companies, their leaders and shareholders creating this stuff. Even more critical groups like the Centre of Humane Technology ask ‘How to tell if a technology will serve humanity well?‘ Rather than the more accurate ‘How to tell if a multibillion dollar company, it’s leaders, shareholders and the regulators they have captured will serve as well?’

    The irony of this frustrated critique of the discourse around AI is that it has already been captured by the extremists in big tech. If you point out that AI isn’t actually meeting any of these promises and is hurting a bunch of people along the way, it is turned into an excuse for more, faster AI. Effective accelerationists who are tend to lurk at the forefront of the technology and money discussion will gleefully profess that fuelling the worst excesses of capitalisms is a great idea because actually it will lead to all these things they’ve been promising: That really, the problem isn’t that technology developed and deployed through capitalistic mechanisms will always fail to fulfil its promises as longs as the motivation is shareholder profit, but that it’s only with more, harder, faster capitalism that these promises can be fulfilled. In the word of the angry man that promised us that blockchain, then the metaverse was the next big thing and makes all his money from selling military technology, the market is a self-correcting mechanism with the best interests of humanity at heart and so we must give over more agency to it.

    And people keep buying this garbage! Even as the creators are openly, wilfully dismissive of the needs of ‘consumers’ and openly promise to take away their agency! In the run-up to the US election there’s reckons going around again about why working class people vote against their economic interests. I know this is a controversial theory and I’m not a political scientist so not able to weigh in on the debate only to say that in the case of Brexit and Trump, data shows that the people to be hurt most by them were a majority of the voting block. A commonly-heard but dismissive, snobby and deleterious reading of this is to say that all these rhetorical flourishes are effective in convincing people of extremist views (including those of techno-optimist extremists) as the solution to social inequity but the subtext of that reading implies that people are stupid, which they’re not but is exactly what big tech and extremists do think of people.

    Perhaps (and this is pure dirty reckons) we should think the other way: a sort of aspiration towards nihlism. As people make decisions about whether to eat or heat their homes, as successive climate records continue to be broken, as geopolitical instability continues to deepen, the answer of big tech is AI sex toys, a pixelated rabbit that orders the most popular pizza and $3500 VR goggles. AKA Jackpot technologies, preparing the wealthy tech class for a diminished society where society is replaced by technological mediation.

    All the promises of democratisation, liberation, creative opportunity are demonstrably disproven by a suite of technologies that isolate, divide and exploit. In the current tech future, the aspiration is to have no common cultural reference points with anyone and instead to compete for the most superior human experience by accumulating more technology and more media. It’s no longer about developing technology that might help people navigate the inequities and complexities of society, government and every day life in a big complex assemblage but technologies that isolate and elevate you beyond it such that you no longer have to rely on or work with the state or institutions. Is it this that has an aspirational appeal to people? Imagine if someone could remove your social problems not by solving them per se and making it better for everyone (more efficient bureaucracy, healthcare, schooling, access to good transport systems, good quality housing etc.) but by instead by removing you from having to make any of those decisions at all?

    Georgina Voss wrote about or made a an observation once that Silicon Valley tech was about removing having to take responsibility; cooking dinner, driving yourself somewhere, doing your washing, paying your rent. By extension, the most aspirational status espoused by the vision of big tech is one of diminished responsibility and diminished dependence on society.

    I often talk about Lawrence Lek’s ‘Unreal Estate: The Royal Academy is Yours‘ – it’s one of my favourite projects and one of the first good bits of art made in Unity I ever saw. In it, a wealthy oligarch has bought the Royal Academy of Art in London and turned it into a gaudy, tasteless mansion draped in leopard print and the cliches of modern art. The point (at least my interpretation) is that to the ultra-wealthy, the world may as well be a game engine, devoid of consequence, transaction costs and material limitations; everything is reprogrammable or reconfigurable and so, by a perverse logic in which nothing really matters because nothing has any real value.

    So I’m angry because that’s the logic of big tech evangelists. To drive down the meaning and value of everything so that whatever’s being hoiked this year at CES is seen, by contrast, as the most valuable and important thing ever. That’s why you can stand on stage showing a gadget that orders the most popular pizza for you and in the same few minutes have someone equate that technology with solving crumbling planetary and social health. And people just keep believing it.

    PhD

    So how is the PhD going? (The three most common questions I get asked are ‘How’s the leg?’ ‘How’s the PhD?’ ‘Can you knock up a powerpoint showing x?’) (The leg is… fine. I have a bit of an early check up later because I’ve been in more pain than I like, the PhD is- well I’m about to tell you and yes I can knock up that powerpoint for you.) Good, thank you. I’ve started the second main chapter (which is chapter 4); Enchantment, The Uncanny and The Sublime. This is one of the three ‘substantial’ chapters that get into the meat of the thesis. In this case it’s looking at how enchantment, uncanniness and sublimity are used to reinforce status quo imaginaries of AI. For example; scale and complexity – by making AI appear to be insurmountably large it gives the impression that intelligence is simply a product of scale and complexity but also makes it difficult to confront or challenge. This is a technique also used by mainstream artists to dress up what is essentially using lots of energy intensive computing to make nice pictures as somehow about intelligence or sentience or meaning.

    On the flip side or the amazing critical practices that challenge scale and complexity; comb data sets, point out gaps, highlight the labour and so on. There’s also aspects of enchantment like why chatbots convince us that something more than calculation-at-scale is going on.

    At the moment I’m chunking through the notes and quotes I’ve grabbed over the last two years or so as I’ve been reading, trying to sort and organise. I’d like to use two case studies because it would reflect the two used in the the Spectacles, Performance and Demonstration chapter (Ai-Da and AlphaGo) but it might settle on one. Or it might be two that aren’t evenly weighted. I definitely want to use Cambridge Analytica because that was very much about enchanting people with the belief in the power of AI through scale and complexity and the (apparently) uncanny results. The other one might be Synthesiszing Obama, largely because I did a project on it specifically but also because there’s a recurring theme here about human or life-like behaviour and enchantment.

    Anyway, I’ll keep you up to date. I’m hoping to have finished crunching the notes by mid-next-week and then start moving things around to form up subchapters and sections. Then it’s that process of just writing over and over and over and over and over again on each section. I’m not aiming to get these as polished as Spectacles, Performance and Demonstration. I need to look at some of the fundamental structure – particularly around how I’m positioning practice – so all I want to do is get to a point where I have the overall shape of the whole thesis and then look at it from top-to-bottom to make sure it’s coherent before diving into the detail.

    If I’m honest I’m not spending enough time on it. I accept that it will take a few weeks to get back into the PhD headspace though so I’m ramping up to it. It might mean a little less blogging from me as I divert more time to it but that won’t necessarily be a bad thing.

    Short Stuff

    Promoting some friends for you to check out; Crystal’s exhibition and Jay’s talk. This is what the Internet is supposed to be for.

    • Speaking of LLMs, someone managed to ChatGPT’s system prompts (the rules that frame how it responds) and I agree (unusually) with Azeem Azhar that it is brilliant. It is completely fascinating that we can set semantic rules for a trillion parameter computer. That is actually really cool, no sarcasm at all.
    • This in credibly complex and evolved Codes of Conduct from an online game that Dan Hon linked to.
    • Crystal Bennes is exhibiting When Computers Were Women in London this March.
    • Jay on Myth-Making Mechanisms in Autonomous Worlds. Basically that Dungeons & Dragons is the most important techno-social technology of the modern world.
    • Cohere for AI’s new open source multi-language LLM that addresses the language gap in LLMs.
    • I read someting recently about how it was quite likely that platforms would start to coalesce again. All of the streamers have had to raise prices and that means consumers have been dropping some. It went like: Ultimately the cost of syndicating some IP for Netflix to run is significantly more cost effective than building and maintaining your own platform when people don’t want to pay for a dozen different ones. The maths of then having to keep creating original content to keep your platform ‘full’ so that people don’t get bored is also pointless when all are doing the same. I think there’s something similar here with XBox de-exclusifying some games. Entrapping ecosystems were good when times were better, now when times are lean, getting in front of eyeballs is still the priority.
    • Eryk Salvaggio highlights how Sora can extend a video 46 seconds in either direction, posing a disinformation risk.
    • Remarkable story of Air Canada chatbot making up a refund policy then Air Canada back-tracking and claiming the bot is a ‘separate legal entity’ and that it shouldn’t have been trusted.
    • Lots of folks sharing this have commented that ‘running Doom on x’ is now a benchmark for computation. Anyway, running Doom on E Coli bacteria.
    • OpenAI’s new gizmo named after an entry-level Shimano gearset for some reason is another glossy distraction from the exploitation and misrepresentation at the heart of their business models. I honestly don’t know why nothing stirs in me when I see these things. I sense the genuine glee and excitement that others have for them but I just automatically go ‘oh great, another one, who are they going to hurt this time?’
    • I’ll be honest I’ve only read half of this at posting because it’s very long but a bit of a state-of-the-union for design leaders.

    I finished Tchaikovsky’s ‘Children of…‘ series the other day. I was actually inspired to pick it up because of Matt Jones’ blogging of it. As Matt points out, it’s clear that the corvids in the latest book are meant to be illustrative of the difference between sentience and intelligence or at least to trouble that distinction. Where the other ‘evolved’ species (spiders and octopuses) demonstrate clear sentience as we might relate to it; intelligence plus decisions-making, emotions, sense of self and others, wants, needs, inner worlds etc. (I don’t know the definition) the crows are more ambiguous and in fact claim not to be sentient but to be evolved problem solving machines. The crows live as pairs – one of the pair can observe patterns and spot new things while the other memorises and catalogues. They also can’t ‘speak’ only repeat things they’ve already come across (a la stochastic parrots). I suppose the point is to question those (particularly AI boosters) claiming that sentience emerges from complexity. That’s why every new ‘behaviour’ from a GPT is loudly touted as being indicative of sentience; we read these emergent patterns from complexity as if they are indications of sentience. (I’m writing about this now in the PhD) It’s a good metaphor.

    I ended up in a hole on LinkedIn the other day of people responding to a very good post who in the last year have become coaches and experts in AI. Watch out there, folks, the charlatanism is real. Here’s my advice; any time anyone tells you what something could do, ask them why it isn’t. Ok, I love you, bye.