I wrote a foreword for this book Computer Generated coming out from Gingko Press early next year. There are dozens, maybe hundreds, of artists featured and it was nice to get to write something for non-academic audiences. It was mostly about the history and future of computer graphics, drawing on a bunch of different stories but looking particularly at the way social media has played a role in the explosion of the art form.
Digital Sketch (DS058)
Lots of folks tell me they enjoy my ‘boxes’, which is great! But some readers conflate the weekly render (which I call a ‘digital sketch’) with the blog – the box. I guess it’s not important. Many more people respond to the weekly render than read the blog but, partly in the spirit of clear delineation and partly to force some reflexive practice, I have decided to add this section talking about the render, which you can skip to or over.
DS058 was a slow one. I’ve recently become more interested in building environments with colours, textures and lighting than in focussing on animations or simulations; I use an are.na board as a starting point. This was another one where the stage was set and then something had to happen. The exploding sphere really has no significance; I just decided on a whim that it might be nice. I used the built-in Cell Fracture add-on to shatter the sphere, including a recursive pass for smaller pieces. Then I made all the fragments active rigid bodies, made the room and pillars passive rigid bodies, chucked a force field in the middle, turned gravity off, set the simulation speed to 0.0001 and hey presto.
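The recipe above can be sketched as a toy in plain Python – this is nothing to do with Blender’s actual solver, and the function and all its parameters are my own invention – just fragments scattered on a small sphere, a radial force field at the origin, gravity off, and a tiny time scale so everything creeps:

```python
import math
import random

def simulate_explosion(n_fragments=50, steps=100, dt=1.0, time_scale=0.0001,
                       force_strength=5.0, seed=42):
    """Toy rigid-body explosion: fragments start on a small sphere, a
    central force field pushes them outward, there is no gravity term,
    and time_scale slows everything to a crawl (mirroring the simulation
    speed setting mentioned above)."""
    rng = random.Random(seed)
    positions, velocities = [], []
    for _ in range(n_fragments):
        # Scatter fragments uniformly on a sphere of radius 0.5.
        theta = rng.uniform(0, 2 * math.pi)
        phi = math.acos(rng.uniform(-1, 1))
        positions.append([0.5 * math.sin(phi) * math.cos(theta),
                          0.5 * math.sin(phi) * math.sin(theta),
                          0.5 * math.cos(phi)])
        velocities.append([0.0, 0.0, 0.0])
    for _ in range(steps):
        for p, v in zip(positions, velocities):
            dist = math.sqrt(sum(c * c for c in p)) or 1e-9
            for i in range(3):
                # Radial push away from the force field at the origin.
                v[i] += (p[i] / dist) * force_strength * dt * time_scale
                p[i] += v[i] * dt * time_scale
    return positions

frags = simulate_explosion()
```

With the time scale at 0.0001 the fragments barely drift outward over a hundred steps, which is exactly the slow-motion shatter the render goes for.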
You may recognise the barriers from DS010. Blender has finally introduced an asset library in 3.0, a feature that’s been sorely missing. This makes it easier to reuse common objects and materials. It is, however, quite janky to use: you need a master file into which you import assets and then save it – there may be other workarounds.
Short Stuff
Jemimah Knight is asking for input on a project with BBC R&D looking for better AI visuals. This is a big bit of my own work – why these particular imaginaries appear, embodied through the particular images we have. I think this is a more illustrative direction than critical tech.
Takram have worked with Hitachi to produce Three Transitions, an interactive web project encouraging change. It’s an interesting visualisation of transition design as a process and indicates a way that people might meaningfully engage with it.
A bleakly funny reflection on bureaucracy and design: Paperweight, a cautionary tale of onerous oversight. It’s about the attempt to implement UK Government Digital Service-style working in the Canadian equivalent.
This week’s render was probably the toughest ever to produce. I built the set before knowing what was going in it and then decided to return to my attempt at making a fully rigged robot arm.
The model is actually a beefed-up version of a desktop arm with added hydraulics and cabling, plus my own materials. The tricky bit was doing all the rigging. I decided to learn inverse kinematics after avoiding it my whole life. This means that the rig follows a leader, with the rest of the armature automatically adjusting based on a series of weights and limits, rather than forward kinematics, where you animate from the base outwards. The difference is easily explained in how you pick things up: if you reach out your hand to pick up a cup, your arm automatically follows without you having to direct it. If we were forward kinematic systems you would first have to position your upper arm, then your lower arm, then your hand, then your fingers.
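The base-outwards versus target-first distinction can be shown with a toy two-link arm in plain Python – not Blender’s rigging system, and every name here is my own invention. Forward kinematics places each joint from the base out; the inverse solve works backwards from where the hand should be, using the standard two-link analytic solution:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """Base-outwards: place the elbow from the shoulder angle, then the
    hand from the elbow angle - the 'position your upper arm, then your
    lower arm' ordering described above."""
    elbow = (l1 * math.cos(theta1), l1 * math.sin(theta1))
    hand = (elbow[0] + l2 * math.cos(theta1 + theta2),
            elbow[1] + l2 * math.sin(theta1 + theta2))
    return elbow, hand

def inverse_kinematics(l1, l2, x, y):
    """Target-first: given where the hand should be, solve for the joint
    angles (standard two-link analytic solution)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_t2 = max(-1.0, min(1.0, cos_t2))  # clamp, a bit like joint limits
    theta2 = math.acos(cos_t2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Round trip: ask IK for angles that reach (1.2, 0.8), then check with FK.
t1, t2 = inverse_kinematics(1.0, 1.0, 1.2, 0.8)
_, hand = forward_kinematics(1.0, 1.0, t1, t2)
```

The round trip lands the hand back on the target, which is the whole appeal: you move one leader and the joint angles fall out automatically.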
I even rigged the cables to work this way so that they move and flex realistically. It was definitely one of the most grinding renders I’ve done because it takes a lot more forward-thinking about workflow; if you set things up in the wrong order, or if one piece is just slightly misaligned, you can end up with a broken and glitchy system. Having spent two or three days carefully and intricately setting up an enormously complicated system, I then decided to chuck in an element of danger by using cloth simulation to anchor a balloon to the claw. You can set different parts of a cloth mesh to behave differently, so I kept the string floppy and the balloon pressured, inverted gravity and weakened it for the whole scene (which is a lot easier than telling something how to float) and then ran the simulation about three dozen times. My only advice here: ramp the simulation quality steps way up – sky’s the limit. I think the final version is something like 300 against the default of 5.
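Why do more quality steps help? A cloth sim is basically a huge pile of stiff springs, and stiff springs integrated with too few substeps per frame blow up. Here’s a toy in plain Python – one stiff spring, nothing like Blender’s actual cloth solver, all names mine – comparing a default-ish 5 substeps against a cranked-up 300:

```python
def spring_max_amplitude(stiffness, substeps, frames=24, frame_dt=1 / 24):
    """Symplectic-Euler mass-spring (a crude stand-in for one stiff cloth
    spring, unit mass). Returns the largest |x| seen: a stable run stays
    near the initial stretch of 1.0, an unstable one explodes."""
    h = frame_dt / substeps  # more substeps = smaller integration step
    x, v = 1.0, 0.0
    peak = abs(x)
    for _ in range(frames * substeps):
        v -= stiffness * x * h
        x += v * h
        peak = max(peak, abs(x))
    return peak

# One second of animation with a very stiff spring.
unstable = spring_max_amplitude(stiffness=90_000, substeps=5)
stable = spring_max_amplitude(stiffness=90_000, substeps=300)
```

At 5 substeps the spring’s amplitude explodes within a few frames; at 300 it stays politely near its starting stretch. That’s the glitchy-cloth failure mode in miniature, and why ramping the steps way up is cheap insurance.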
Fallacies
This week’s Exponential View focussed on quantum computing with an interview with a founder of one of the companies developing it. The science is fascinating and it’s something I know blissfully little about. What was interesting was the way the interviewee, Chad Rigetti, trod the line between the mundanity of technological innovation and the existential premise of computing at the quantum level. For example, when asked about applications he talked about the quantum theory of gravity, something that we cannot really experiment with on Earth with our current computers. However, when pushed on the everyday application he defaulted to national security, intelligence and finance, citing Moore’s law and saying that ‘computing technology has always been a fundamental driver of economic development… an inevitable march of better computing power.’ I fully believe him when he says the science is what’s most interesting, it’s just predictably tragic that something as incredible as a unifying theory of gravity isn’t going to drum up as much funding as crypto.
There are two fundamental, interconnected fallacies in technological innovation, both of which have been explored extensively by scholars of STS: first, that technological ‘evolution’ is inevitable – that the next thing will be better – and second, that everything will be modellable or simulatable. The narrative of tech innovation is that through these twin fallacies, some supremacy over the ‘messiness’ of ‘nature’ (both human and non-human) can be achieved. But these are fallacies. Each new innovation is never quite good enough to model things with enough accuracy, and so the next innovation becomes the promissory one, with the current one excused for its failures: ‘…past failures are often isolated as special or peculiar cases with little technically or organizationally in common with the newly proposed promissory solution.’ (Borup et al.)
This is not to say that innovation is pointless; it’s more complex than that. Is there a way to present new technological innovation as neither inevitable nor final? Rather than an all-or-nothing approach, something that presents technology in a constant state of imperfect flux? Open-source stuff has some of this. Blender’s development pipeline is fascinating because it’s totally open and done largely by volunteers. There’s some fanfare around releases but there’s never any promissory rhetoric of finality; the software is treated as incomplete (and all the more charming for it) and in constant development, which is a useful way to inspire the community to contribute. I’m sure there are loads of other examples.
Short Stuff
Venkatesh Rao is literally giving away his OODA loop work for anyone to use. OODA loops were all the rage about five or six years ago but Rao has stuck with them and made something really rich. Also, I’m super into giving stuff away and it’s great to see such an influential character leading there.
Meredith Whittaker’s Steep Cost of Capture – a pretty concise overview of the nexus between big tech and the so-called AI research industry.
A piece on Vox here talks more about that idea of inevitability and its ties to the American manifest destiny worldview.
Everyday Robots seems like a complex project – to build robots that can perform everyday chores. I’m always torn on these types of projects. On the one hand, it’s a super interesting and remarkably complicated set of technical goals to be able to teach robots to do things that we take for granted like folding sheets and wiping surfaces. On the other hand it feels like something we don’t really need robots for – we’re good at household chores already and the house was built around the able human body in most cases, so why adapt robots for it? They cite economic productivity as a rationale but again, there are better reasons for robots; see the ever-citable Paro.
A brief intro from IGN on speedrunning and tools. It’s a bit hyperbolic in places and skips over some of the interesting things in specific controversial runs. For more on the exact maths of that 1 in 7 trillion Minecraft speedrun check out this Standup Maths.
I’m migrating blogging to Monday because of my new training schedule. It’s best to get a solid block in Tuesday-Friday. Ok, love you, love you, love you. Have an amazing week.
I was watching this Veritasium about snowflakes the other evening. The episode features ‘snowflake scientist’ Ken Libbrecht talking about his work and it is all really interesting. Of course, they get to the inevitable question: ‘Is it true that no two snowflakes are the same?’ At this point, Libbrecht laughs and points out that the question is somewhat ludicrous; no two of anything are the same: no two trees, animals, rocks, grains of sand or salt crystals are the same, so why would it be any different for snowflakes? I imagine the spirit of the thought that ‘no two snowflakes are the same’ comes from the seeming contradiction between their visual geometric-ness and their lack of any uniformity. Something in us imagines there must be some mass production at work behind such forms, but I find this reminder that everything is unique quite beautiful.
Simulate this
‘To make our future real, simulation is the answer.’ The lede is buried a few paragraphs down in this Nvidia blog on Earth-2, the audaciously named new hardware project Nvidia are building in their metaverse, ‘Omniverse.’ Nvidia will invest more in faster computation to try and achieve this, with the aim of modelling climate change – and, in doing so, contribute significantly to climate change. Matt Webb pointed out another thing this week: a chip with 1.2 trillion transistors that can apparently simulate things ‘faster than the laws of physics.’
As Matt points out, this is a rather spurious claim. We’ve been able to simulate things faster than physics – the movement of celestial bodies, say – for ages, long before computers, but there may be an application at the atomic or micro scale. In fact the test run on it was fluid dynamics, which, as anyone who’s ever got me on this in the pub will know, is almost impossible to do in real time partly because a) it’s really hard and b) mathematicians haven’t actually solved fluid dynamics.
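To make the celestial-bodies point concrete, here’s a toy in plain Python (obviously nothing to do with Nvidia’s or Cerebras’s hardware – the function and units are my own): it integrates a body on a circular 1 AU orbit through a full simulated year and reports how much wall-clock time that took. Even naive Python runs it millions of times faster than the physics itself.

```python
import math
import time

def orbit_one_year(steps=10_000):
    """Symplectic-Euler integration of a body on a circular 1 AU orbit,
    in units where G * M_sun = 4 * pi^2 (distances in AU, time in years).
    Returns the final position and the wall-clock seconds the year took."""
    gm = 4 * math.pi ** 2
    x, y = 1.0, 0.0            # start at 1 AU
    vx, vy = 0.0, 2 * math.pi  # circular-orbit speed
    h = 1.0 / steps            # one simulated year, split into steps
    t0 = time.perf_counter()
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= gm * x / r3 * h  # gravitational pull toward the origin
        vy -= gm * y / r3 * h
        x += vx * h
        y += vy * h
    return (x, y), time.perf_counter() - t0

pos, wall = orbit_one_year()
speedup = (365.25 * 24 * 3600) / wall  # simulated seconds per wall second
```

The body ends the year roughly back where it started, and the sim runs ‘faster than the laws of physics’ by an enormous factor – which is exactly why that phrase only becomes impressive for things like fluid dynamics, where the maths fights back.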
Nonetheless, the provocatively named ‘Wafer Scale Engine’ manages to run water physics faster than real time – no small feat. There’s an interesting reverse hardware tendency here as well; the designers of the engine suggest that the prevailing logic of daisy-chaining a bunch of CPUs and GPUs together is not as effective as simply building bigger chips – a reversal of decades of miniaturisation.
This tracks with something that’s been floating around since I read that Hardware Lottery paper, the crux of which is: we only got this type of computation because these were the bits and bobs that were lying around in labs, but they’re probably not the best bits for the job. See also the well-documented subversion of the GPU for AI – despite it originally being designed to send pixels to a monitor – in Image Objects, and as Murray Shanahan describes in this week’s Exponential View: ‘[I]t’s a sort of hack. It’s repurposing of a technology that was meant for something completely different.’
All this tracks back to this metaverse thing as well – the big Internet rebrand. The socials are trying to construct captive economies but Nvidia are more interested in simulating the Earth; again, weird flex, but ok. An interesting thought experiment is to run a classic design-school crit on the metaverse: what demand is the metaverse responding to? What audience or user need has been identified that the metaverse is a response to? What is the problem it is trying to solve? I know this seems a fickle thing to ask of a lot of so-called innovations but it quickly becomes obvious that the problem is the hard ceiling on attention, time and property that the big socials have hit, and the need to artificially raise that ceiling by crafting whole new economies. And the audience for these metaverses is really the companies themselves, not users. So the metaverse is basically a guarded open-pit mine built with coincidental, repurposed hardware that’s being used as the hastily assembled barricades. It will probably work too, simply because of all the things I go on about: the language of inevitability, the foreclosure of alternatives, the co-opting of everyday interaction inside these walled gardens.
Maybe it’s just the people I hang out with but a common complaint I have never heard is ‘I wish I just had more Facebook in my life and it could store all my stuff.’
Is there a possibility for us to construct our own metaverses? That might be nice. I suppose something like Minecraft, or even the popularity of Animal Crossing, hints at that – a world of your own that you can build and bring people into, which isn’t quite as hostile and extractive? (Ironic, given the title of Minecraft.) Even then, as J-Paul Neeley pointed out in a recent chat, it would still be built on the back of Amazon Web Services. This is perhaps (again, coming up in the same chat) a space to revisit the Deweyan ‘public’ – to start from scratch. Rather than focusing on platforms, what are the objects, issues and loci around which a public assembles, and then what is the appropriate metaverse for that public that they can build? What tools might they use?
Is there something better than the open-pit mines of the socials, the deep-sea mining of NFTs or the hubristic folly of whole-Earth simulation that you can do with fast computers?
Short Stuff
A marketing company has set up ‘Earth’s Black Box’ – a plan for a megalith in Tasmania that will record 30–50 years’ worth of data documenting the demise of human civilisation. Sort of like a fancier decline.online with a physical presence. Some notes of scepticism: the website appears to show a stream of tweets with no indication of methodology from the marketing company behind the project. It may be a somewhat speculative proposal to drum up activism on climate change, in which case fair enough really.
Spotify founder Daniel Ek has invested a bunch of money in a startup doing military AI. In another type of shareholder rebellion, artists and users of the platform are calling for a boycott.
You probably saw that I shared the news. I’m leaving London College of Communication after an amazing long journey to go to Arup Foresight as Design Futures Lead. I’ll be starting around the end of February but leaving LCC imminently to take some time off. I’m very sad to go, I love LCC and my friends and colleagues, but it was the right time and the right opportunity. My old job is up and advertised and if you want to ask any questions, get in touch.
Anyway, I also love you, but you know that. Speak later.