Idle Thoughts on Seeing and Knowing

A few weeks back I was at the excellent Systems Literacy event at the Whitechapel, hearing words from good friends James Bridle, Georgina Voss and Tom Armitage. A thoroughly enjoyable event, but I left considering our rhetorical hang-ups on 'seeing' and 'sensing' (which, one could argue, are different things). They're terms that have become clichéd, with itchy fingers hovering over metaphorical bingo cards of 'Seeing Like an X' or 'Making X Visible.' Powerful curatorial bumf to be sure, but we're now all asking: where does making things visible get us? Years ago I aired this as my frustration with what was then the state of Speculative and Critical Design. Once you've created your public, constructed your god (another post for another, darker time), highlighted your issue, then what?

I guess the title of the event (Literacy) sums up what I'm thinking through here: types of knowledge, speaking and sensing, and the use of these terms when we talk about complex technological systems and literacy.

As Tom Armitage noted in his talk, literacy is the ability to both read and write, and since that relies on the technology of an alphabet - in the broadest sense - it demands a certain amount of 'knowing how' as opposed to 'knowing that.' Here we run up against John Searle's famed Chinese Room: the occupant of the room has a propositional system of knowledge, shorthanded to 'knowing that.' They know that one Chinese character follows another according to a prescribed (geddit?) set of rules. The Chinese literates on the other side of the door have the 'know-how', having internalised the rules and developed a natural affinity with the technology of written Chinese. I'm probably way out here, but this suggests two ways in which something could be seen to be 'literate': through knowing how, or through knowing that. Knowing how is a tacit, internalised literacy; knowing that is a technical, external literacy.

James Bridle brought up the power of source code as something real and legible that contains knowledge. But again, we have two levels here: the literate, who 'know how' to read the source code and correlate it with what is presented on the web, and those who simply 'know that' it's there and forms a central part of the way we 'see.'

Now, in a general way, machines can 'read' and 'write' and in that sense, they are literate. But their literacy is that second type of technical literacy that comes from knowing that not knowing how; they understand the rules of language but usually can't discern its abstracted meaning. Or, that meaning has no meaning. Or the meaning of that meaning has no meaning. I'm no AI expert.

Round and round it goes: who is literate? What is literate? What does it mean to be literate? But this skips over the most interesting thing: what are these literate things talking about, and how? What I was thinking is that if the machine is literate, or can at least convince others of its literacy, then it is capable of, and in a sense is engaging in, rhetoric. Machines can persuade us to do something - 'plug in to a wall socket or your computer will run out of power.'

Luckily, someone much more qualified than me has figured this out. In Sensing Exigence, Elizabeth Losh suggests a rhetorical standpoint for machines. Since expression springs from '...a defect, an obstacle, something needing to be done, a thing that is other than it should be...' and machines process things and call on us (or other machines) when there's a need to address an issue, they have 'exigence' - an urge or need to speak, even if this speech is produced from knowing that rather than knowing how. In the same sense that machines 'speak' when they have exigence, they listen and read. Machines can read and write, and this makes them audiences. In that case, she suggests, we can't seriously critique connected objects and complex systems unless we see them as more than things to communicate through or about - as things to communicate with, too. So:

How can we talk with things instead of to, through or about them? 
i) Is that worth doing? 
ii) Is it mad? 
iii) Will people laugh at you?

As someone who regularly and inadvertently thanks cash machines and mumbles at ticket barriers, I can answer part iii with a resounding 'yes.' Part ii is more a matter of opinion and depends on what else you were doing before and after talking to the machine.

Talking with is important. Talking 'to', however, implies uni-directionality. Here are some badly improvised definitions:

  • Talking At: Communicating toward another, unaware, uncaring or unknowing of whether the other is listening or is capable of listening. OR: Writing, whether it can be Read or not. 
  • Talking To: Communicating toward, aware and knowing that the other is listening, but without listening oneself. OR: Writing, knowing it can be Read. 
  • Talking With: A two-way exchange of both talking and listening between parties. OR: Reading and Writing each other. 

So, we're already deep in 'talking to.' There are two directional forces at work trying to get us to talk 'to' the machines. The first is the whole 'learn to code' lobby: this idea that everyone should, no, absolutely must learn to code. Holistic and fulfilling education be damned, if only everyone could code! This is trying to launch humans out of knowing that and into knowing how: out of knowing that code exists and into knowing how to use it. The second is making computers more 'intelligent' so that they can better understand our complex and abstract language. This is trying to brute-force computers out of knowing that there are certain rules to human language that can be followed convincingly to simulate literacy, and into knowing how to extract and communicate meaning and the values of that meaning. (Or the meaning of the values of that meaning, etc.)

I think I approached this in my original Haunted Machines talk, suggesting that in any conversation between a human and a machine, one party is lying to the other about the nature of their reality. For instance, machines might be imbued with human characteristics, while we might adapt our behaviours to suit the needs of a machine.

All these methods to get us to talk 'to' machines, or them to talk 'to' us, are pointless. The technology of an alphabet, no matter how good, is a codification of meaning, and by forcing one party to codify its meaning further we concede the exchange to the other party, who will get more meaning out of it. In the Chinese Room, the people in the room cannot discern meaning but give plenty of meaning to the Chinese literates outside. There's no talking 'with' between the two parties.

Prior to the Whitechapel event, I'd been trying to bend my brain around the ongoing ontology/epistemology debate in anthropology (I still haven't managed it). This little circle of frustration centres on how animism is dealt with in the literature, with Eduardo Viveiros de Castro suggesting that, since 'things define themselves,' animism is a surety, an ontology, a way in which things are, as opposed to an epistemology, an approach to knowing.

So, hurriedly, I scribbled 'Internet Epistemology' and thought 'yeah, I'll come back to that and know what I meant.' Well, notes never work out like that, and a quick search reveals that it's already a pretty well-defined way of looking at either how knowledge travels and is validated on the Internet, or how distributed, networked knowledge works. So, not that.

What I meant, it turns out, is that rather than simply 'seeing' machines and systems, we should try to work out how we can talk 'with' them, allowing a two-way reading and writing between subjects that helps us to construct knowledge. Then we go from 'seeing like an x' to 'knowing like an x' or, even better, 'understanding an x.'

Machines sense. To say they 'see' is something of a misnomer. A person looking through a camera 'sees,' but the camera is only sensing. We humans, meanwhile, find ourselves in the business of 'sense-making' - trying to convert what we see into sense. And all of this happens at the same time as machines convert what they sense into what we see in the first place.