Sight, Scanning, Screens, Sophistication

Desktop Panorama:

A little while back I had a go at shooting what I called 'desktop panoramas' using my iPhone and laptop. I didn't make much of a fuss about them at the time, beyond posting some to Twitter. They're actually remarkably hard to capture, requiring almost acrobatic skill: turning the computer 180 degrees while simultaneously sliding the phone along the screen at some semblance of a regular speed, without tilting the screen back or forth too much. I wouldn't recommend it. I'm surprised either device survived.

There was something quite beautiful about the images. (Some of them anyway.) They have an almost painterly quality; the sharp edges of pixels and typekits become a smudgy, shaky mess as the screen, phone, hand, and body work in a complex dance of decoding and recoding, rendering the scanned information into a jumbled form of colour and shape.

Desktop Panoramas: Facebook

There are obvious physical limitations to the taking of desktop panoramas - I only have two hands, so most of them were done with the phone resting at the bottom of the screen, which fixes the line of the camera across the screen at 11.6cm, roughly halfway up the visible screen. In the second Facebook attempt I tried to 'free hand' the panorama to get a more interesting capture, with predictable but equally interesting results - the panorama appears to double back on itself and take on a rendered depth.

Panoramic shooting technology was obviously designed to capture huge, sweeping landscapes in dramatic fashion. It achieves deep views with complex focal-length distortion that lends grandeur and a sense of scale and depth by elongating the frame so substantially. Panoramas also bear two other remarkable artifacts. Firstly, a representation of time. I'm not a photographer, and most folk would probably rattle on about the relationship between photography and time, but I'm being more literal here: you start at the left and turn to the right, so the left of the image is earlier in time than the right. This has all sorts of horrid consequences, but also more subtle ones. The panorama below manages to catch the resonance between the camera's shutter speed and my desktop lamp - something unseen to human eyes. (Again with that lovely painterly effect caused by my human hands.)

Again, there's something fascinating about the decoding and recoding of all the technologies and media that combine into the shot, my hands, the text, the flickering of the light, the movement of the camera. All of those ambiguities are codified into a single artifact where the compromises made show themselves as 'glitches.'

The point about ambiguity is, I think, particularly important. I'm currently reading (slowly, it's annoyingly verbose) Evil Media by Matthew Fuller and Andrew Goffey. Early on they make a point about how ambiguity is a form of power in a system that requires discrete and confined coding of information. Using language ambiguously but confidently can provide plausible deniability and offset responsibility in a human interaction (see also Graeber's The Utopia of Rules on which direction this interaction takes place (clue: top to bottom)), and ambiguity gives an individual the power and opportunity to assert their interpretation as the most truthful or objective. There's no scientific progress without contention, and power vacuums tend to arise in periods of uncertainty. As the chapter suggests - Leverage[d] Anxiety. This is also what the authors term 'sophistication' - the ability to leverage ambiguity towards your own meaning.

In the panorama, the camera tries to compromise between light levels, colour, flickering inputs, my shit hand and so on to build as objective a view of reality as possible for me. (Obviously within the confines of the biased engineering of the machine in the first place which, for instance, makes it small enough to fit in a man's trouser pocket at the cost of perhaps some ability.) It's a system designed to disambiguate what it sees where perhaps in reality we would embrace the ambiguity. The human eye simply can't see 180 degrees in focus all the time, and we're happy with that uncertainty; the camera isn't.

This theme of images trying to resolve themselves through machine thinking is taken up by Boris Anthony in Puppyslugs R Us - over there - where he explains the logic of how we construct descriptions (images) out of memory. Some of these memories are shared, and almost all of them are definitely ambiguous. When I type 'house' you'll see a very different house to me. Even something as specific as 'Tobias' shoes' will elicit different and ambiguous thoughts. When you say 'house' to Google's neural net, it constructs an image out of very specific, unambiguous images of dogs and so on that form its own memory. The compromise - the lack of sophistication - is in the Puppyslugs: sympathetic imagery where we can see the outline and structure of a house but what is clearly not one, an assemblage of puppies.

If Google or Alphabet or whatever do decide to go evil with neural nets, then it'll be in turning our ambiguity around the shared memory of 'house' into a more objective and truthful vision of 'house.' In this world, our versions (no matter which) of 'house' are definitely wrong because they're formed in ambiguity, while Google/Alphabet's is formed in specific definition. Similar, I guess, to what they're doing to maps and territories. Intuition, metis and opportunities for sophistication would go out of the bay window as Google told us what the definitive image of a house is.

This ties into larger stuff around the construction of objective reality which I won't go into. The last thing to tie in here is Joanne McNeil's excellent piece - again, over there - on screengrabs as POV shots for social networks and the Internet.

I'm right with her when she says, 'I can’t remember taking screenshots until about five or six years ago.' I was 'trained' to take them in university as a way of keeping evidence of my largely digital work but quickly dropped it as an annoying interruption. Now I find myself taking screenshots all the time. Better hardware, software and more generous data plans explain that shift well enough; no need to go into it here.

In almost the opposite way to a desktop panorama - which imposes limitations to increase ambiguity and give a new type of sight to the screen - screenshots allow users to smash through the post-optimal limits of the technology. There’s an element of cunning and sophistication in doing so, a sense of hijacking the media or platform and turning its imposed limitations to new uses while thoroughly working within its bounds. The 140-character rule is the most obviously imposed limitation; McNeil points out that it was first defined by early interactions with Twitter largely being over SMS. Now it is a nostalgic post-optimal quirk used to create value for the platform by simulating limitations.

A limit of 140 characters forces new and sophisticated behaviours, from the invention of '@' to the increasingly common use of numbered tweets to make long points, screenshots, and even the sophisticated understanding of how handles work in embedded tweets.
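The numbered-tweet workaround is mechanical enough to sketch. As a rough illustration (the function name, the 'n/total' prefix format, and the word-boundary splitting are all my assumptions, not how any real tweetstorm tool works), a splitter that breaks a long thought into numbered, limit-sized chunks might look like:

```python
def split_tweet(text, limit=140):
    """Split text into numbered tweets that each fit within the limit.

    A minimal sketch: splits only on word boundaries and reserves a
    fixed budget for the "n/total " prefix. Real tools also handle
    links, handles, and words longer than the limit.
    """
    words = text.split()
    # Reserve room for a numbering prefix like "12/34 " (6 characters).
    budget = limit - 6
    chunks, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) > budget and current:
            # Current chunk is full; start a new one with this word.
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    total = len(chunks)
    # Prefix each chunk with its position, e.g. "2/4 ...".
    return [f"{i}/{total} {chunk}" for i, chunk in enumerate(chunks, 1)]
```

So `split_tweet("short thought")` yields `["1/1 short thought"]`, while a longer rant comes back as a numbered sequence of sub-140-character parts.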

Screengrabbing provides its own form of sophisticated manipulation - McNeil mentions folk screengrabbing Snapchat and Tinder, but consider also the invention of 'regrams,' a feature still not purposely integrated by Instagram. But she also types about the incredible contextualisation that screengrabs offer, not only in the content being screengrabbed, which in itself reveals so much, but in that thin bar of metatext at the top: Why is this person screengrabbing Daily Mail comments at 4 in the morning with only 3% battery? Oh, they've got wi-fi, they must be at home.

Whole stories can be built out of that thin blue bar. This is almost opposite to what James Bridle was searching for in the disembodied camera - the screengrab tells you so much about a person's proclivities, position, time, state of mind and interests. It is one of our best inward-looking cameras. As McNeil says:
'Like old GoPro footage of an afternoon cycling, these screenshot images bring you back to where you were looking at that minute.'
The sophisticated manipulation of sight, scans and screens provides a space for new narratives as well as the leveraging of power over how these images are interpreted and what makes them absolute, and what that even is. After all, faking screenshots isn't hard.

I used to play a trick when I worked in a shop where I'd screengrab the entire desktop of the computer and then set it as the desktop background and move the icons off-screen. Hilarious I know, but folk had a comprehension of that desktop, they understood the iconography and how the image translated to action and I understood how easy it was to destroy that rigid comprehension.

(Below are some scans I made when trying to fix my printer the other day. They didn't end up being particularly relevant but they're too damn pretty to just leave out.)