Super brief today. I did read some stuff but it was terribly boring and I won’t waste your time like it did mine.
Doing
Natalie and I did a live event on Sunday for Peckham Digital. It was the first time I’ve stood in a room with an audience for about two years and it was really great. We of course rattled on about our usual stuff – haunting this, futures that – but had some great discussions and questions from a lovely group of folks. It made me feel a little more confident about doing live events again.
Also, I wrote a little piece for Dirty Furniture about the connection between necromancy and phones for their new issue. It’s out on pre-order now. The issue also features Crystal Bennes, Jay Owens, Disnovation and a bunch more.
Short Stuff
Computing during collapse and sustainable security: what happens if the infrastructures that support computing go under? Also: Unplanned Obsolescence.
A Biography of the Pixel. The crux of the argument in this history of the pixel is that the pixel is a mathematical, zero-dimensional thing that can’t be seen, and that the little glowing squares on your screen are actually ‘display elements’ – which, if I follow the same logic, would be ‘dixels.’ I’m not super convinced by this level of pedantry or Alvy Ray Smith’s generally patronising tone (which I suppose he’s earned) but it is an illuminating and thorough history.
Ahmed Ansari’s opening keynote at DHS. I keep reading it in snippets. The equivalent of watching Twin Peaks on my phone over ten years.
This is the last day of summer. The closest the UK got to a heatwave. The country remarkably escaped the extreme weather events a lot of the world saw this summer and so we huddled in the gloom and grey. Back to school this week.
Reading
The last ‘object’ examined in Jacob Gaboury’s self-same book is the GPU. He’s keen to point out that though many histories of graphics start with the invention and dissemination of the Graphics Processing Unit, his ends there. The book takes pains to focus only on a particular bit of foundational history at the University of Utah, which is truly fascinating, but I feel more connections could have been made with contemporary discourse and newer players like GANs. For instance, the last chapter, on GPUs, also talks about how GPUs were themselves designed using graphics software, so that:
In this moment we see an epistemic break, in which the design and function of the computer itself becomes a recursive process – one that is always already computational, operating at a scale and speed that demands the mediation of graphics software… For [Friedrich] Kittler, “the last act of writing may well have been the moment when, in the early seventies, Intel engineers laid out some dozen square meters of blueprint paper (64 square meters in the case of the later 8086) in order to design the hardware architecture of their first integrated microprocessor.” In off-loading the design of hardware from the hand of the engineer into graphical software, the computer ceases to cohere under the dispositif of writing or inscription, transformed by the logic of simulation.
Gaboury, J. (2021), Image Objects, 176.
The thread of argument running through Gaboury’s work is that computer graphics shape the way we conceive of the world as they visualise it, so understanding the history of the science – the decisions that were made, and by whom – is a useful gateway to understanding this ordering system. Again, he doesn’t go into it in the same depth as other writers, but this ordering system is fundamentally about simulation: the reproduction of physical phenomena in a computer. In this final part he suggests – after a neat idea that computers in the age of object-oriented programming are just a series of nested, recursive machines – that GPUs are themselves a simulation, designed in computers and for computers.
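As a toy of what ‘hardware described in software’ means (my illustration, not Gaboury’s): even a few lines of Python can define a scrap of logic hardware – a half-adder built entirely from NAND gates – and simulate it, one machine nested inside another:

```python
# A toy of hardware described in software (my illustration, not Gaboury's):
# logic gates defined as functions, composed into a half-adder, simulated
# entirely inside the machine that might one day run the resulting chip.

def nand(a, b):
    return not (a and b)

def half_adder(a, b):
    """One-bit addition, built from nothing but NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR from four NANDs
    c = nand(n1, n1)                    # AND from two NANDs
    return s, c

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```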
There are nice parallels here with other ideas. Sara Hooker, in ‘The Hardware Lottery’, writes about how much of the development of machine learning is just down to the chance of the hardware architecture available. As we know, much of that hardware is GPUs, inherited from the need to drive graphical processes and displays and now forming the backbone of the quest for artificial intelligence. Hooker asks how it might have been different; how different notions, theories and models of computation might have emerged if we had started with different hardware. Companies are increasingly building specialised GPUs aimed at crypto mining and machine learning – both workloads that GPUs suit better than CPUs – but the underlying logic is the same one invented in the 1970s to drive displays.
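A rough way to see the inheritance Hooker describes (all shapes and numbers below are made up): a pixel-shading pass and a neural-network layer are the same kind of workload – one small piece of arithmetic repeated at millions of independent points – which is exactly the shape of work GPU silicon was laid down for:

```python
import numpy as np

# Two workloads, one shape: "do this small arithmetic everywhere, independently."
# Shapes and numbers are made up for illustration.

# Graphics: shade every pixel of a frame independently (what GPUs were built for)
pixels = np.random.rand(1080, 1920, 3)
shaded = np.clip(pixels * 1.2 + 0.05, 0.0, 1.0)

# Machine learning: the same affine map + nonlinearity applied to every input row
inputs = np.random.rand(1024, 256)
weights = np.random.rand(256, 128)
activations = np.maximum(inputs @ weights, 0.0)  # a ReLU layer

# Both reduce to massively parallel multiply-adds -- the lottery ticket
# graphics bought decades ago and machine learning later cashed in.
```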
I’m sort of wary of the bombast in the quote above – that designing chips on computers transformed them into the logic of simulation. I will certainly be using the argument when talking about graphics software, but the first CAD software was incredibly manual and involved a lot of specialised expertise; it wasn’t simply a case of running a request through a computer and taking what it spat out. Even today, this complexity is exposed when AI is used to design chips: ‘many tasks involved in chip design cannot be automated, so expert designers are still needed.’
Modern microprocessors are incredibly complex, featuring multiple components that need to be combined effectively. Sketching out a new chip design normally requires weeks of painstaking effort as well as decades of experience. The best chip designers employ an instinctive understanding of how different decisions will affect each step of the design process. That understanding cannot easily be written into computer code, but some of the same skill can be captured using machine learning.
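For a flavour of what automating placement even involves – this is emphatically not the machine-learning method the quote describes, just a classic simulated-annealing toy with made-up blocks and wires – the search problem looks something like this:

```python
import math
import random

# Toy chip placement by simulated annealing. Not the ML floorplanning the
# quote describes -- just a sketch of placement as a search problem.
# Blocks and nets below are invented for illustration.
blocks = ["alu", "cache", "io", "decoder"]
nets = [("alu", "cache"), ("alu", "decoder"), ("decoder", "io"), ("cache", "io")]

pos = {b: (random.random(), random.random()) for b in blocks}

def wirelength(p):
    """Total Manhattan length of all nets -- the cost we want to minimise."""
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

temp = 1.0
while temp > 1e-3:
    b = random.choice(blocks)
    old, cost = pos[b], wirelength(pos)
    pos[b] = (random.random(), random.random())    # propose a random move
    delta = wirelength(pos) - cost
    # Accept improvements; accept some bad moves early on to escape local minima
    if delta > 0 and random.random() > math.exp(-delta / temp):
        pos[b] = old                               # reject the move
    temp *= 0.995                                  # cool down

print(f"final wirelength: {wirelength(pos):.3f}")
```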
If we’re to compare this dizzying notion of computers designing their own chips with Hooker’s contention that development is unknowingly hogtied by the hardware it’s inherited, then is there a lower or upper limit? Sure, machine learning can probably make marginal gains, but could a machine learning system have the capacity to design a computer much more complex than itself, like Deep Thought creating the Earth? I remember becoming fascinated by people building computers in Minecraft and wondering if it was possible to build a computer faster than the one Minecraft was being played on. Someone could probably do the calculation properly – we know the ‘size’ of Minecraft, and the data on clipping distance and so on must be easy to find – but a rough version is sketched below.
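Here’s my back-of-envelope version, every number a loose assumption:

```python
# Can a computer built inside Minecraft outrun its host?
# A Fermi estimate -- every figure below is a rough assumption.

host_clock_hz = 3e9  # a typical modern CPU core, ~3 GHz

# Minecraft's game loop runs at 20 ticks/second and redstone components
# update at most every other game tick, i.e. 10 times per second.
redstone_tick_hz = 10

# A redstone CPU needs several redstone ticks per instruction for signals
# to propagate and settle; assume a generous 4.
ticks_per_instruction = 4

minecraft_ips = redstone_tick_hz / ticks_per_instruction  # ~2.5 instructions/sec
slowdown = host_clock_hz / minecraft_ips

print(f"in-game computer: ~{minecraft_ips} instructions/sec")
print(f"host is roughly {slowdown:.0e} times faster")  # on the order of a billion
```

So, on these numbers, the nested machine sits about nine orders of magnitude below its host. Each layer of simulation pays a ruinous tax, which rather answers the question.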
I’m sure I read somewhere that it is impossible to conceive of the universe because there are more galaxies than neurons.
Upcoming
On Sunday, Natalie and I are giving a talk at Peckham Digital about Myths of AI. I might include some of the above if I get time. It looks like a really great, locally-focussed event with lots of young folks, creative technologists and interesting ideas bouncing around. Do come along if you can.
Short Stuff
The People’s Graphic Design Archive, built on Notion – which I haven’t yet used but am curious about, and will dig into if I ever get the chance.
I opened up Reply All yesterday morning to find that they’d had a full-blown crisis at the beginning of the year. There’s an article about it here and they ran two episodes on it. They really step around ‘it’ though, constantly saying ‘mistakes,’ ‘learning’ and so on.
I went out the other day to try and get some photo scans using a variety of apps and found all the results a bit poor and glitchy. The new iPhone 12 Pro has LiDAR on it, which a lot of the good results I’m seeing online seem to rely on. ARKit is OK for macro-scale stuff but for capturing scenery it’s just bumpy and ineffective.
Reading
There has to be a word for that feeling of seeing the work you’re really into done better than you ever could. It’s a sort of deflation met with excitement. I’m about halfway through Image Objects by Jacob Gaboury and very much in that headspace. Gaboury tells a sweeping history of computer graphics based around the work done at the University of Utah that drove the field forward. In doing so he untangles the particular logic of computer graphics as computational objects first and images second, and the paradoxes of visual representation. The first chapter, for instance, goes through the work done on the Hidden Surface Problem. To a computer:
…graphical objects exist in their totality – as a collection of coordinates, points, image files and object databases – prior to their manifestation as a visible image. Graphical objects are in this sense non-phenomenological, known in their entirety prior to our perception of them.
Page 32
The Hidden Surface Problem lies in programming a computer to hide things which would not normally be perceptible to a human because they are behind other things. You have to programme perspective and perception into a machine that was never designed to be visual; one classic solution is sketched below. Gaboury then examines perspective – not as a linear path through art history but as something that has emerged in different contexts at different times.
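That classic solution is the z-buffer, which also came out of Utah (Ed Catmull’s): resolve occlusion pixel by pixel, keeping only the surface nearest the viewer at each point. A minimal sketch with two made-up overlapping squares:

```python
import numpy as np

# Minimal z-buffer sketch: solve the hidden surface problem per pixel by
# keeping, at every pixel, only the surface nearest the viewer.
W, H = 8, 8
depth = np.full((H, W), np.inf)       # nearest depth seen so far
color = np.zeros((H, W), dtype=int)   # 0 = background

def draw_rect(x0, y0, x1, y1, z, c):
    """Draw an axis-aligned square at constant depth z with colour id c."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y, x]:       # nearer than anything drawn here yet
                depth[y, x] = z
                color[y, x] = c

draw_rect(1, 1, 6, 6, z=5.0, c=1)  # far square
draw_rect(3, 3, 8, 8, z=2.0, c=2)  # near square, occluding the far one

print(color)  # where they overlap, only colour 2 survives
```

Note that the machine knows both squares completely – coordinates and depths – before a single pixel exists; visibility is computed, not seen, which is exactly the point of the quote above.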
So far he seems to be heading in the same direction as, for instance, Deborah Levitt: that computer graphics are distinct from photography and cinema rather than a technological continuation of them; that the underlying logics and experience of working with computer graphics are more akin to painting or sculpture, and that their potential lies in moving beyond the logics of cinema and into new aesthetic sensibilities. (I’m paraphrasing Levitt’s words, don’t @ me.)
Anyway, it’s a remarkable book and I’m bummed and excited by it. Gaboury is doing a much more rigorous and rich job of what I attempted with Computers Making Pictures (he even uses a similar phrase). This is probably because he’s been working on it for a decade across a bunch of different research projects while I scribbled some notes for a week and a half. That’s a useful delineation of my work ethic against that of actual scholars.
Doing
Natalie and I are heading all the way over to Peckham Digital Festival on the 12th of September to roll out our AI schtick. This looks like a really fun and interesting event for folks from the local area to get into and share work in creative technology. I’m sure we’ll see lots of people we know. It’ll also be my first in-person event for two years. It’s free, so you should go.
Short Stuff
A sort of cheery story from the intersection of automation, open source and ‘democratisation’ in design: The Rise of Semi-automated Illustration. Several interviewees make a point about the homogenisation effect of automation but stress that the memetic nature of visual culture is hardly new, it just happens faster.
Why is it So Hard to Be Rational? via Shannon Mattern. A sort of review of rationalism, taking in the ‘rationalist community’ and all the pitfalls, including the need for meta-cognition. I like that things Spock deemed ‘logically impossible’ happened 80% of the time, largely because he assumed everyone else was as logical as he was.
Deb Chachra on Care at Scale. Brilliantly demonstrating the political and infrastructural imbalance (‘Carbon dioxide in the atmosphere is allowed to go everywhere. People are not.’) and calling for an ultrastructure. I’ve also never seen such a comprehensible version of Rawls’s Veil of Ignorance – I finally get it.
Ok, super brief I know, I’m about to head off for a few days. Love you.