I wrote a foreword for this book, Computer Generated, coming out from Gingko Press early next year. There are dozens, maybe hundreds, of artists featured, and it was nice to write something for a non-academic audience. It was mostly about the history and future of computer graphics, drawing on a bunch of different stories but looking particularly at the role social media has played in the explosion of the art form.
Digital Sketch (DS058)
Lots of folks tell me they enjoy my ‘boxes’, which is great! But some readers conflate the weekly render (which I call a ‘digital sketch’) with the blog itself – the box. I guess it’s not important. Many more people respond to the weekly render than read the blog but, partly in the spirit of clear delineation and partly to force some reflexive practice, I’ve decided to add this section talking about the render, which you can skip to or over.
DS058 was a slow one. Lately I’ve become more interested in building environments with colours, textures and lighting than in focussing on animations or simulations; I use an are.na board as a starting point. This one was another where the stage was set and then something had to happen. The exploding sphere has no real significance – I just decided on a whim that it might be nice. I used the inbuilt Cell Fracture addon to shatter the sphere, including a recursive pass for smaller pieces. Then I made all the pieces active rigid bodies, made the room and pillars passive rigid bodies, chucked a force field in the middle, turned gravity off, set the simulation speed to 0.0001 and hey presto.
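For anyone curious to script a similar setup, here’s a rough bpy sketch of those steps. It only runs inside Blender, and the object names (“Room”, “Pillar.001”) are placeholders for whatever is in your scene:

```python
import bpy

# Enable the bundled Cell Fracture addon, then shatter the selected sphere,
# with one recursive pass for smaller shards.
bpy.ops.preferences.addon_enable(module="object_fracture_cell")
bpy.ops.object.add_fracture_cell_objects(recursion=1)

# Make every shard an active rigid body...
for obj in bpy.context.selected_objects:
    bpy.context.view_layer.objects.active = obj
    bpy.ops.rigidbody.object_add(type='ACTIVE')

# ...and the room and pillars passive ones (names are hypothetical).
for name in ("Room", "Pillar.001"):
    bpy.context.view_layer.objects.active = bpy.data.objects[name]
    bpy.ops.rigidbody.object_add(type='PASSIVE')

# A force field in the middle, gravity off, and the simulation slowed right down.
bpy.ops.object.effector_add(type='FORCE', location=(0, 0, 0))
bpy.context.scene.use_gravity = False
bpy.context.scene.rigidbody_world.time_scale = 0.0001
```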
You may recognise the barriers from DS010. Blender has finally introduced an asset library in 3.0, a feature that’s been sorely missing. It makes it much easier to reuse common objects and materials, though it’s quite janky to use: you need a master file into which you import assets and then save it – there may be other workarounds.
Short Stuff
Jemimah Knight is asking for input on a project with BBC R&D looking for better AI visuals. This is a big part of my own work – why these particular imaginaries appear, embodied through the particular images we have. I think this is a more illustrative direction than critical tech.
Takram have worked with Hitachi to produce Three Transitions, an interactive web project encouraging change. It’s an interesting visualisation of transition design as a process and indicates a way that people might meaningfully engage with it.
Paperweight is a bleakly funny reflection on bureaucracy and design – a cautionary tale of onerous oversight, about the attempt to implement UK Government Digital Service-style working in the Canadian equivalent.
This week’s render was probably the toughest ever to produce. I built the set before knowing what was going in it and then decided to return to my attempt at making a fully rigged robot arm.
The model is actually a beefed-up version of a desktop arm with added hydraulics and cabling, plus my own materials. The tricky bit was the rigging. I decided to learn inverse kinematics after avoiding it my whole life. This means the rig follows a target, with the rest of the armature automatically adjusting based on a series of weights and limits, rather than forward kinematics, where you animate from the base outwards. The difference is easily explained by how you pick things up: if you reach out your hand to pick up a cup, your arm automatically follows without you having to direct it. If you were a forward kinematic system you would first have to position your upper arm, then your lower arm, then your hand, then your fingers.
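To make the distinction concrete outside Blender, here’s a toy illustration in plain Python – not my actual rig, just a minimal sketch of one common IK technique, cyclic coordinate descent (CCD), for a flat two-link arm:

```python
import math

def ccd_ik(lengths, target, iterations=100):
    """Solve joint angles for a planar arm so its tip reaches `target`.

    Each pass sweeps from the end joint back to the base, rotating every
    joint so the end effector swings towards the target - the 'follow the
    leader' behaviour of an IK rig.
    """
    angles = [0.0] * len(lengths)

    def positions(angles):
        pts, heading = [(0.0, 0.0)], 0.0
        for ang, length in zip(angles, lengths):
            heading += ang
            x, y = pts[-1]
            pts.append((x + length * math.cos(heading),
                        y + length * math.sin(heading)))
        return pts

    tx, ty = target
    for _ in range(iterations):
        for i in reversed(range(len(lengths))):
            pts = positions(angles)
            jx, jy = pts[i]      # joint being adjusted
            ex, ey = pts[-1]     # current end-effector position
            # Rotate this joint so the effector lines up with the target.
            angles[i] += (math.atan2(ty - jy, tx - jx)
                          - math.atan2(ey - jy, ex - jx))
    return angles, positions(angles)[-1]

# Two links of length 1 reaching for a point well inside their range.
angles, tip = ccd_ik([1.0, 1.0], (1.2, 0.8))
```

In forward kinematics you’d set the two angles yourself; here you only move the target and the angles fall out of the solver – which is exactly why one misaligned constraint in a real rig can make the whole chain glitch.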
I even rigged the cables to work this way so that they move and flex realistically. It was definitely one of the most grinding renders I’ve done because it takes a lot more forward-thinking about workflow: if you set things up in the wrong order, or if one piece is just slightly misaligned, you can end up with a broken and glitchy system. Having spent two or three days carefully and intricately setting up an enormously complicated rig, I then decided to chuck in an element of danger by using cloth simulation to anchor a balloon to the claw. You can set different parts of a cloth mesh to behave differently, so I kept the string floppy and the balloon pressurised, inverted gravity and weakened it for the whole scene (which is a lot easier than telling something how to float) and then ran the simulation about three dozen times. My only advice here: ramp the simulation quality steps way up – sky’s the limit. I think the final version is something like 300 against the default of 5.
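For reference, the relevant knobs as a bpy fragment. This is Blender-only; the object, modifier and vertex-group names are placeholders, and the pressure value is a guess rather than what I actually used:

```python
import bpy

cloth = bpy.data.objects["Balloon"].modifiers["Cloth"].settings

cloth.quality = 300                  # simulation quality steps, up from the default of 5
cloth.use_pressure = True            # keeps the balloon inflated
cloth.uniform_pressure_force = 5.0   # placeholder value
cloth.vertex_group_mass = "Pin"      # pin group anchoring the string to the claw

# Inverting and weakening gravity for the whole scene is much easier
# than teaching something to float (the default is (0, 0, -9.81)).
bpy.context.scene.gravity = (0.0, 0.0, 0.5)
```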
Fallacies
This week’s Exponential View focussed on quantum computing, with an interview with a founder of one of the companies developing it. The science is fascinating and it’s something I know blissfully little about. What was interesting was the way the interviewee, Chad Rigetti, trod the line between the mundanity of technological innovation and the existential premise of computing at the quantum level. For example, when asked about applications he talked about the quantum theory of gravity, something that we cannot really experiment with on Earth with our current computers. However, when pushed on everyday applications he defaulted to national security, intelligence and finance, citing Moore’s law and saying that ‘computing technology has always been a fundamental driver of economic development… an inevitable march of better computing power.’ I fully believe him when he says the science is what’s most interesting; it’s just predictably tragic that something as incredible as a unifying theory of gravity isn’t going to drum up as much funding as crypto.
There are two fundamental, interconnected fallacies in technological innovation, both of which have been explored extensively by scholars of STS: first, that technological ‘evolution’ is inevitable – that the next thing will be better – and second, that everything will be modellable or simulatable. The narrative of tech innovation is that through these twin fallacies, some supremacy over the ‘messiness’ of ‘nature’ (both human and non-human) can be achieved. But they are fallacies. Each new innovation is never quite good enough to model things with sufficient accuracy, and so the next innovation becomes the promissory one, with the current one excused as an exception for its failures: ‘…past failures are often isolated as special or peculiar cases with little technically or organizationally in common with the newly proposed promissory solution.’ (Borup et al.)
This is not to say that innovation is pointless; it’s more complex than that. Is there a way to present new technological innovation as neither inevitable nor final? Rather than an all-or-nothing approach, something that presents technology in a constant state of imperfect flux? Open-source stuff has some of this. Blender’s development pipeline is fascinating because it’s totally open and done by volunteers. There’s some fanfare around releases but there’s never any promissory rhetoric of finality; it’s treated as incomplete (and all the more charming for it) and in constant development, which is a useful way to inspire the community to contribute. I’m sure there are loads of other examples.
Short Stuff
Venkatesh Rao is literally giving away his OODA loop work for anyone to use. OODA loops were all the rage about five or six years ago but Rao has stuck with them and made something really rich. Also, I’m super into giving stuff away and it’s great to see such an influential character leading the way there.
Meredith Whittaker’s Steep Cost of Capture – a pretty concise overview of the big-tech-so-called-AI research industry nexus.
A piece on Vox here talks more about that idea of inevitability and its ties to the American manifest destiny worldview.
Everyday Robots seems like a complex project – to build robots that can perform everyday chores. I’m always torn on these types of projects. On the one hand, it’s a super interesting and remarkably complicated set of technical goals to teach robots to do things we take for granted, like folding sheets and wiping surfaces. On the other hand, it feels like something we don’t really need robots for – we’re good at household chores already, and in most cases the house was built around the able human body, so why adapt robots to it? They cite economic productivity as a rationale but, again, there are better uses for robots; see the ever-citable Paro.
A brief intro from IGN on speedrunning and tools. It’s a bit hyperbolic in places and skips over some of the interesting things in specific controversial runs. For more on the exact maths of that 1 in 7 trillion Minecraft speedrun, check out this Stand-up Maths.
I’m migrating blogging to Monday because of my new training schedule. It’s best to get a solid block in Tuesday-Friday. Ok, love you, love you, love you. Have an amazing week.
This is the last day of summer. The closest the UK got to a heatwave. The country remarkably escaped the extreme weather events a lot of the world saw this summer and so we huddled in the gloom and grey. Back to school this week.
Reading
The last ‘object’ examined in Jacob Gaboury’s self-same book is the GPU. He’s keen to point out that though many histories of graphics start with the invention and dissemination of the Graphics Processing Unit, his ends there. The book takes pains to focus only on a particular bit of foundational history at the University of Utah, which is truly fascinating, but I feel like more connections could have been made with contemporary discourse and newer players like GANs. For instance, the last chapter on GPUs also talks about how GPUs were themselves designed using graphics software, so that:
In this moment we see an epistemic break, in which the design and function of the computer itself becomes a recursive process – one that is always already computational, operating at a scale and speed that demands the mediation of graphics software… For [Friedrich] Kittler, “the last act of writing may well have been the moment when, in the early seventies, Intel engineers laid out some dozen square meters of blueprint paper (64 square meters in the case of the later 8086) in order to design the hardware architecture of their first integrated microprocessor.” In off-loading the design of hardware from the hand of the engineer into graphical software, the computer ceases to cohere under the dispositif of writing or inscription, transformed by the logic of simulation.
Gaboury, J. (2021), Image Objects, 176.
The thread of argument running through Gaboury’s work is that computer graphics shape the way we conceive of the world as they visualise it, and so understanding the history of the science – the decisions that were made, and by whom – is a useful gateway to understanding this ordering system. Again, he doesn’t go into it in the same depth as other writers, but this ordering system is fundamentally about simulation: the reproduction of physical phenomena in a computer. In this final part he suggests – after a neat idea that computers in the age of object-oriented programming are just a series of nested, recursive machines – that GPUs are themselves a simulation, designed in computers and for computers.
There are nice parallels here with other ideas. Sara Hooker, in ‘The Hardware Lottery’, writes about how much of the development of machine learning is down to the chance of the hardware architecture available. As we know, much of that hardware is GPUs, inherited from the need to drive graphical processes and displays and now forming the backbone of the quest for artificial intelligence. Hooker asks how it might have been different; how different notions, theories and models of computation might have emerged if we had started with different hardware. Companies are increasingly building specialised GPUs aimed at crypto mining and machine learning – both processes GPUs are better suited to than CPUs – but the underlying logic is the same one invented in the 1970s to drive displays.
I’m somewhat wary of the bombast in the quote above – that designing chips on computers transformed them into the logic of simulation. I will certainly be using the argument when talking about graphics software, but the first CAD software was incredibly manual and involved a lot of specialised expertise; it wasn’t simply a case of running a command through a computer and taking what it spat out. Even today, this complexity is exposed when AI is used to design chips: ‘many tasks involved in chip design cannot be automated, so expert designers are still needed.’
Modern microprocessors are incredibly complex, featuring multiple components that need to be combined effectively. Sketching out a new chip design normally requires weeks of painstaking effort as well as decades of experience. The best chip designers employ an instinctive understanding of how different decisions will affect each step of the design process. That understanding cannot easily be written into computer code, but some of the same skill can be captured using machine learning.
If we compare this dizzying notion of computers designing their own chips with Hooker’s contention that development is unknowingly hogtied by the hardware it has inherited, is there a lower or upper limit? Sure, machine learning can probably make marginal gains, but could a machine learning system have the capacity to design a computer much more complex than itself, like Deep Thought creating the Earth? I remember becoming fascinated by people building computers in Minecraft and wondering whether it was possible to build a computer that was faster than the one Minecraft was being played on. Someone could probably do the calculation – we know the ‘size’ of Minecraft, and the data on clipping distance and so on must be easy to find.
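A hedged back-of-the-envelope version of that calculation: Minecraft simulates the world at 20 game ticks per second and redstone state changes at most once per redstone tick (two game ticks), which caps any in-game computer at roughly 10 Hz – so the answer is a fairly firm no:

```python
# Minecraft runs at 20 game ticks per second; redstone updates at most
# every other game tick, so a redstone computer's clock tops out ~10 Hz.
game_ticks_per_second = 20
redstone_clock_hz = game_ticks_per_second / 2

# An illustrative host machine clock for comparison (not a measured figure).
host_clock_hz = 3e9  # 3 GHz

slowdown = host_clock_hz / redstone_clock_hz
print(f"The host clock is ~{slowdown:.0e}x faster than a redstone clock")
```

And that’s before counting how many redstone ticks a single in-game addition takes – the clock alone puts the host hundreds of millions of times ahead.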
I’m sure I read somewhere that it is impossible to conceive of the universe because there are more galaxies than neurons.
Upcoming
On Sunday, Natalie and I are giving a talk at Peckham Digital about Myths of AI. I might include some of the above if I get time. It looks like a really great, locally-focussed event with lots of young folks, creative technologists and interesting ideas bouncing around. Do come along if you can.
Short Stuff
The People’s Graphic Design Archive, built on Notion – which I haven’t yet used but am curious about and will dig into if I ever get the chance.
I opened up Reply All yesterday morning to find that they’d had a full-blown crisis at the beginning of the year. There’s an article about it here and they ran two episodes on it. They really step around ‘it’, though, constantly saying ‘mistakes’, ‘learning’, etc.