This is the last day of summer. The closest the UK got to a heatwave. The country remarkably escaped the extreme weather events a lot of the world saw this summer and so we huddled in the gloom and grey. Back to school this week.
Reading
The last ‘object’ examined in Jacob Gaboury’s self-same book is the GPU. He’s keen to point out that though many histories of graphics start with the invention and dissemination of the Graphics Processing Unit, his ends there. The book takes pains to focus only on a particular bit of foundational history at the University of Utah, which is truly fascinating, but I feel more connections could have been made with contemporary discourse and newer players like GANs. For instance, the last chapter here on GPUs also talks about how GPUs were themselves designed using graphics software, so that:
In this moment we see an epistemic break, in which the design and function of the computer itself becomes a recursive process – one that is always already computational, operating at a scale and speed that demands the mediation of graphics software… For [Friedrich] Kittler, “the last act of writing may well have been the moment when, in the early seventies, Intel engineers laid out some dozen square meters of blueprint paper (64 square meters in the case of the later 8086) in order to design the hardware architecture of their first integrated microprocessor.” In off-loading the design of hardware from the hand of the engineer into graphical software, the computer ceases to cohere under the dispositif of writing or inscription, transformed by the logic of simulation.
Gaboury, J. (2021), Image Objects, 176.
The thread of argument running through Gaboury’s work is that computer graphics shape the way we conceive of the world as they visualise it, so understanding the history of the science – the decisions that were made and by whom – is a useful gateway to understanding this ordering system. Again, it’s not something he goes into in the same depth as other writers, but this ordering system is fundamentally about simulation: the reproduction of physical phenomena in a computer. In this final part he suggests – after a neat idea that computers in the age of object-oriented programming are just a series of nested, recursive machines – that the GPU is itself a simulation, designed in computers and for computers.
There are nice parallels here with other ideas. Sara Hooker, in ‘The Hardware Lottery’, writes about how much of the development of machine learning comes down to the chance of the hardware architecture available. As we know, much of that hardware is GPUs, inherited from the need to drive graphical processes and displays and now forming the backbone of the quest for artificial intelligence. Hooker asks how it might have been different; how different notions, theories and models of computation might have emerged if we had started with different hardware. Companies are increasingly building specialised GPUs aimed at crypto mining and machine learning – both processes that GPUs are better suited to than CPUs – but the underlying logic is the same one invented in the 1970s to drive displays.
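To make the GPU-versus-CPU point concrete, here’s a toy sketch in plain NumPy (no GPU involved, and the shapes are arbitrary) of the sort of workload all three of these uses share:

```python
import numpy as np

# A toy stand-in for the workloads in question: graphics, crypto hashing
# and neural-network training all boil down to huge batches of small,
# repetitive arithmetic.
a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)

# A single matrix multiply at this size is roughly 2 * 2048**3, about
# 17 billion multiply-adds, and each of the ~4 million output cells can
# be computed independently of the others. That structure is what a GPU's
# thousands of simple cores exploit; a CPU's handful of complex cores
# can't keep up, however clever they are individually.
c = a @ b
print(f"~{2 * 2048**3 / 1e9:.0f} billion multiply-adds for one {c.shape} product")
```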
I’m sort of wary of the bombast in the quote above, that designing chips on computers transformed them into the logic of simulation. I will certainly be using the argument when talking about graphics software, but the first CAD software was incredibly manual and involved a lot of specialised expertise; it wasn’t simply a case of running a command through a computer and taking what it spat out. Even today, this complexity is exposed when AI is used to design chips: ‘many tasks involved in chip design cannot be automated, so expert designers are still needed.’
Modern microprocessors are incredibly complex, featuring multiple components that need to be combined effectively. Sketching out a new chip design normally requires weeks of painstaking effort as well as decades of experience. The best chip designers employ an instinctive understanding of how different decisions will affect each step of the design process. That understanding cannot easily be written into computer code, but some of the same skill can be captured using machine learning.
Samsung Has Its Own AI-Designed Chip, WIRED
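For a rough sense of what such systems are actually optimising, here’s a deliberately crude sketch of chip floorplanning reduced to placing blocks on a grid and scoring layouts by estimated wire length. Everything here (the block names, grid size, connections) is invented for illustration, and random search stands in for the learned policies the article describes:

```python
import random

# Hypothetical toy floorplanning problem: place four blocks on a 10x10 grid.
# Real chips have thousands of blocks and many competing objectives
# (timing, power, congestion); this keeps only the wirelength idea.
BLOCKS = ["cpu", "cache", "memctrl", "io"]
NETS = [("cpu", "cache"), ("cpu", "memctrl"), ("memctrl", "io")]  # connections

def random_placement():
    return {b: (random.randint(0, 9), random.randint(0, 9)) for b in BLOCKS}

def wirelength(placement):
    # Score a placement by the total Manhattan distance of its connections,
    # a crude proxy for the metrics a real placer juggles.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

# Random search as a stand-in for the learned designer: try many layouts,
# keep the best. The point of the ML systems is to learn *where to look*
# rather than guessing blindly like this.
best = min((random_placement() for _ in range(10_000)), key=wirelength)
print(wirelength(best), best)
```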
If we’re to compare this dizzying notion of computers designing their own chips with Hooker’s contention that development is unknowingly hogtied by the hardware it has inherited, then is there a lower or upper limit? Sure, machine learning can probably make marginal gains, but could a machine learning system have the capacity to design a computer much more complex than itself, like Deep Thought creating the Earth? I remember becoming fascinated by people building computers in Minecraft and wondering if it was possible to build a computer faster than the one Minecraft was being played on. Someone could probably do the calculation – we know the ‘size’ of Minecraft, and the data on clipping distance and so on must be easy to find.
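The back-of-envelope version is easy enough. A sketch, assuming Minecraft’s standard 20 game ticks per second (redstone components update at most every 2 ticks, so 10 Hz at best) and a roughly 3 GHz host CPU, and ignoring everything else that makes in-game machines slow:

```python
# Back-of-envelope: can a computer built inside Minecraft outrun its host?
# Assumes the standard 20 game ticks per second, redstone updating at most
# every 2 ticks (i.e. 10 Hz), and a ~3 GHz host CPU. Both figures are rough;
# the point is the size of the gap, not the precision.
redstone_clock_hz = 20 / 2          # 10 cycles per second, at best
host_clock_hz = 3e9                 # ~3 billion cycles per second

gap = host_clock_hz / redstone_clock_hz
print(f"The host clock is roughly {gap:,.0f}x faster")   # ~300,000,000x

# And that's before counting how many redstone cycles the in-game machine
# needs per instruction, or how much host work it takes to simulate every
# tick of the world in the first place.
```

So the answer looks like a firm no: the in-game computer is hundreds of millions of times slower per cycle than the machine simulating it.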
I’m sure I read somewhere that it is impossible to conceive of the universe because there are more galaxies in it than neurons in the brain.
Upcoming
On Sunday, Natalie and I are giving a talk at Peckham Digital about Myths of AI. I might include some of the above if I get time. It looks like a really great, locally-focussed event with lots of young folks, creative technologists and interesting ideas bouncing around. Do come along if you can.
Short Stuff
- The People’s Graphic Design Archive, built on Notion, which I haven’t used yet but am curious about and will dig into if I ever get the chance.
- I opened up Reply All yesterday morning to find that they’d had a full-blown crisis at the beginning of the year. There’s an article about it here and they ran two episodes on it. They really step around ‘it’ though, constantly saying ‘mistakes,’ ‘learning’ and so on.
- Caroline Sinders introducing her practice at Site Gallery. There’s a great taxonomy of critical, technical and practical practices here.
Love you, love you, love you.