It was supposed to rain, which would have made it all the more reasonable to spend the weekend sat at the same desk I sit at every day, tapping away on the same keyboard I tap away on every day.
I feel like nothing happened this weekend. Maybe because I didn’t leave the house for a ride. We watched Parasite on Saturday, which I hadn’t seen yet, so that feels like something that’s been lingering for a while and needed doing. Mrs Revell finished Schitt’s Creek (I watched it once before) and we both got a little teary. My plan of meeting people and staying in touch for chitter chatter has been working out great. I’ve met a few new people and caught up with old friends and collaborators. I really enjoy the distraction, so do let me know if you want to chitter chatter or else I might just come for you myself.
Mistaking Statistical Likelihood for Meaning
A few years ago I read Can Computers Create Meaning? by N. Katherine Hayles. It was recommended after I finished Finite State Fantasia and, to be honest, I probably should have read it before; it would have made that project much better. Anyway, the paper kicked off a chain of interest in how humans derive meaning from the activities of machines. Charismatic Megapigment was a bit of a jab at this – a computer ostensibly attempting to discern meaning from a painting while an audience attempts to discern meaning from the machine’s actions, both of which are ludicrous and meaningless. The problem I’ve always found is how to position this meaninglessness in a way that is graspable. People look at machines doing things and immediately project agency onto the machine: ‘it’s doing this’ or, as I’ve tried to talk about before, ‘what’s it doing?’, as if it were somehow able to perform complex meaning-making and communicate it.
Elizabeth Losh did a great job of describing what we might see as a machine communicating or performing higher meaning-making as simply ‘exigence’ – another concept that’s stuck with me. Losh draws on various anthropological studies to unlearn the rather romantic idea that human beings are in a naturally communicative state. The problem with believing this is that we project it onto non-human things and assume that they too are always trying to communicate. Alexander Galloway talks about this too in Excommunication – that the resting state of humans is non-communication and that we communicate when something has changed or requires drawing others’ attention to it. This is, I believe, Losh’s loose idea of exigence. My cat, for example, is not saying ‘I’m hungry’ when it yowls at me every morning – it isn’t capable (as far as I understand, happy to be wrong here) of that complexity of meaning-making. It is expressing exigence – an urgent change in, or need for a change in, circumstance or environment. This is not to say that it doesn’t have some internalised concept of what it needs (food) and how to go about getting it (yowling), but it isn’t literally saying ‘I’m hungry’ in a ‘cat language.’
So, in other words (assuming I’m not getting this twisted): the most fundamental form of communication is non-communication, or the expression of exigence. And because we’re human and believe that everything means something, we project or interpret meaning-making from expressions of exigence by non-human things like cats or machines: yowls become a sort of language, and processes become ‘thinking’ or ‘doing.’ So I was pleased and amused (because like all good papers it’s actually quite funny) by this paper unpicking GPT-3 a bit and exposing its lack of meaning-making power:
GPT-3 is an extraordinary piece of technology, but as intelligent, conscious, smart, aware, perceptive, insightful, sensitive and sensible (etc.) as an old typewriter.
The authors conducted a bunch of experiments that successfully demonstrate a technical separation between meaning-making and what is basically fancy predictive texting, and show how it conforms to one of the oldest mistakes in the book: mistaking correlation for causation. Or, in their own words, ‘statistical likelihood for meaning.’ GPT-3 can’t make meaning because it has no internalised understanding of the collations it performs, any more than a printer understands a paper-cut. It may be a threat to copywriters where artistry or sophistication is not required, such as in advertising or instruction texts, but it’s not convincing enough at scale to pass for human writing. The authors speculate that we might see ‘cut and paste’ replaced with ‘prompt and collate’, but only in the kind of industries heavily reliant on cutting and pasting in the first place.
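Just to make ‘statistical likelihood’ concrete (this is my own toy sketch, nothing to do with the authors’ actual experiments, and absurdly simpler than GPT-3): a model that only counts which word follows which in a scrap of text and then emits the likeliest continuation. It never holds anything like a meaning; it only looks up frequencies.

```python
from collections import Counter, defaultdict

# Toy 'language model': count which word follows which in a tiny corpus,
# then always emit the statistically likeliest continuation.
# (Illustrative only; real models predict over huge token distributions,
# not greedy bigram counts.)
corpus = "the cat yowls at me every morning because the cat wants food".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt_word, length=5):
    """Greedily append the most frequent follower of the last word.

    Pure frequency lookup: no grounding, no intent, no meaning."""
    out = [prompt_word]
    for _ in range(length):
        followers = follows.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # likely prints: the cat yowls at me every
```

Scale that up by a few orders of magnitude of parameters and training text and the output starts to read convincingly, but the relationship to meaning hasn’t changed.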
One final thing I like is the idea of a ‘least unsuccessful AI.’ One of the big factors in AI hype is this sense of inevitability in speed, power and accuracy. Critically repositioning GPT-3 as the ‘least unsuccessful’ AI is a good way of bringing it all down a notch, which I might borrow. The AI that gets us will be the least unsuccessful AI.
Short Stuff
I opened up Evernote to check some links for the above and found they have once again done an update that completely shatters my organisational system. They also removed the ability to save documents opened in other programmes, and since I can’t stand Evernote’s annotation system I’m sort of stuck without being able to annotate things I read or even find them again now. Do you have a good system for saving, organising, searching through and annotating papers, articles etc.? Preferably one with a web clipper like Evernote? But one that allows me to highlight text (bafflingly not a feature in Evernote, just the wobbly, free-hand highlighter) and search by author (of the paper)?
- More machine learning hype; not really anything qualitatively meaningful here but OpenAI (the very much not open company behind GPT-3) has also been messing around with music for some reason.
- Well, OK, the reason is pretty obvious: it’s part of a larger ideology of proving that everything – art, writing, music etc. – is computable and thus knowable and thus controllable and thus can be industrialised and exploited for profit. See last week’s post for the same thing but for a planet.
- I also read Ground Truth to Fake Geographies by Abelardo Gil-Fournier and Jussi Parikka. I can’t claim to understand any more than half of it but it introduced me to Google’s PlaNet project. It’s super interesting on models and truth, but one of those papers that’s 1% on the too-dense side.
- Long read on How Eugenics Shape Statistics via long-time fan and diagram meme diva David Benque.
- Thinking about mistaking statistical likelihood for meaning, I wonder if there are instances where it happens the other way round? Where something meaningful happens or is communicated but it is just dismissed as probability?
- Did you see Crystal’s great lecture on the early days of CERN? She’s so good, how is she so good?
- New Blender stuff is coming with 2.91, including editable volumes, finally, finally, a boolean system that maybe works, and ‘fuzzy search’, which gives you inexact matches for things you might be looking for in Blender’s labyrinthine features.
This week’s going to be busy. I don’t find busy-ness to be produced by a quantity of work. I can plan for quantity; e.g. if I have to write 5000 words (which I do) I know how long that will take and can plan for it (which I haven’t). I find actual busy-ness tends to emerge for me when I have to deal with a mutually contradictory set of tasks or decisions. There are a lot of tasks and decisions this week that, while on the surface quite simple, will require some careful processing and analysis to avoid having to double back and retrace my steps. This makes the length of the tasks difficult to discern because it’s more like tacking in changing conditions than a long straight line. Do you know what I mean?
‘Oh yes, very coded Tobias, very sophisticated and smart.’ Thank you.
Well, you know I love you at least. Stay in touch and speak to you next week.