When god wishes to punish a man he first deprives him of reason.
I really don’t know what to write about this week. Normally I start and then it all sort of falls into place, but I’ve been feeling scattered and tired and I’m not sure I have it in me. I’ve watched a lot of horror on TV, which is a nice escape – have you got any good recommendations? I’ve been enjoying the Netflix haunting anthologies. I’m doing a lot of cycling this weekend: overnight to Whitstable and back, which is around 200km, then a club ride of 80 on Sunday. I’m just looking forward to those, to be honest. Is escapism a mark of age?
I’m sure you saw two blunders this week from the UK government in its inability to value the arts. Firstly, a skills-matching quiz that decided everybody should be a boxer, and then a quickly rescinded campaign suggesting ballet dancers retrain in ‘cyber.’ There’s nothing I can write that hasn’t been said elsewhere and better, but I worry that in this crisis we lose sight of value. I’ve seen designers and artists fall into this trap recently too; extolling the financialisation of education and design, positioning graduates as arch-capitalists who should be ‘hustling.’ The financial dimensions of design and academic practice are real but are not ends in and of themselves. You need to understand them, but only so that you can move through them and change them. Competitiveness is only going to be more and more encouraged; government believes in herd immunity for culture too.
Here’s Sony’s own teardown of the PS5. I actually uttered ‘Christ, look at the size of those fans.’
I was in Otford on the bicycle last week and came across Uranus – part of a scale model of the solar system that I had no idea existed. The model is based in Otford, but its furthest extents are in New Zealand and Los Angeles.
Rain on a window has this strange effect of collapsing sound and space. You can’t hear dogs and sirens and traffic anymore – the things that give the city scale and make you feel small in it. There’s just the immediacy of rippling and tapping water on the window, as if there’s nothing and no-one else at all.
Sorry this is another one in the ‘I got too lazy to stop’ type of post. Thank you 148-ish weekly readers. If you like this let me know, if you don’t, keep it to yourself. [The first bit here is a bit introspective, self-aggrandising, pathetic and reflexive, though it does talk a bit about how I go about making these sketches. If you’d rather some maths and a fun speculation, skip to part 2.]
I Can’t Keep Up With My Dissatisfaction
I was watching Grayson’s Art Club last night. I really like it in the same way that I like watching videos of animals being rescued on Facebook; it’s sort of meaningless but it makes you feel good. It’s probably quite staged, or at least misrepresented, but it feels intimate and real and I’m perfectly happy to shut down critical faculties to smile a bit. Anyway, he was talking about flow and where to start and where to stop when making art, and I was disheartened at how satisfying he finds that process and those questions. I find it endlessly frustrating. I’m not trying to claim these sketches as art, so perhaps the comparison is moot, but I get to this horrible point every week where the idea of the sketch starts receding into impossibility. Like this week: I started with a really simple premise, ‘What would a frozen explosion look like?’ Not like a bullet-time Hollywood thing, more like: what if someone froze an explosion and put it in a museum?
I was inspired by the images of the Challenger shuttle disaster which are almost like a museum piece; like a moment in time that was captured by dozens of cameras from dozens of angles. It’s profoundly sad for many reasons but it’s a spectacle in its sadness. The subject also provided a technical challenge which was to use Blender’s new Mantaflow smoke simulator to try and get as close as possible to a specific explosion. So, I began by endlessly reiterating the simulation; running it, tweaking, running it, tweaking, running it ad infinitum. Most of my Saturday morning was just fram-ing (I’m using this word I just made up in the same way that you might say ‘inching’ but moving a frame at a time) keyframes to get the effect as close as possible.
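For the curious, the setup itself isn’t the hard part – the loop is. Here’s a minimal sketch of that run-tweak-run cycle in Blender’s Python API, assuming Blender 2.83+ with Mantaflow; the resolution and frame numbers are illustrative, not the ones I actually used:

```python
import bpy

# Assumes a mesh emitter is selected; quick_smoke wraps it in a domain.
bpy.ops.object.quick_smoke()

domain = bpy.context.active_object
settings = domain.modifiers["Fluid"].domain_settings

settings.resolution_max = 128        # voxel resolution: higher = finer smoke, slower bakes
settings.use_adaptive_domain = True  # shrink the domain to where the smoke actually is

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 60                 # pick the 'frozen' frame somewhere in here

# The Saturday-morning part: tweak a value, re-bake, scrub, repeat.
bpy.ops.fluid.bake_all()
```

Every tweak invalidates the bake, which is how the mornings disappear.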
There was lots of new stuff to figure out here. Mantaflow is quite different to Blender’s previous in-house system. It’s faster and more accurate but needs more care to get there. Then I wanted to get the lighting right, so I imagined it would be artificially lit with blue indirect sky light and yellow-white from the sun. So I set up area lights and positioned them, and then modelled big stadium floodlights. Then it needs something to control it, so I put in rods to hold it up and a server to keep the fire ‘warm.’ Anyway, it goes on and on, and after six or seven hours I just realise that the point where I’m happy is only getting further away. And that’s when I chose to stop and send it to render. And it’s the same every time. I’ve never reached the fabled ‘yeah, I’m happy with that, time to stop.’ I give up at ‘I can’t keep up with my dissatisfaction.’
It’s a feeling I get a lot in other things too; that I can keep working forever and never get it right, so I stop there. It’s the same, for example, with hard books. If after a few chapters I’m still in ‘no, I don’t get it’ I give up because I know there’s just more to re-read if I keep going. The acceleration of my not-understanding outpaces the possibility of my understanding and there’s only so much damn time.
Anyway, this all made me think about another accelerating relative cost – the energy required to accurately simulate energy:
The Solar Cost of Realism
This week’s digital sketch was produced in Cycles, which is Blender’s ray-tracing engine. As a quick explanation of what those words mean: Blender is a 3D rendering program that can model and simulate things in 3D. You can use it for architecture or 3D printing, but it’s best geared for images and animation, which is what I do. Blender has a couple of engines, which are basically the ways it runs certain processes. An engine is a good analogy, actually: it’s similar to having a car which you can switch between electric and diesel; it moves forward, but the way in which it does that is different and has different qualities. (This is a bad analogy in that electric engines are clearly superior to diesel engines, while the different engines in Blender serve different purposes.)
Last year, Blender released Eevee to great excitement, and rightly so. Eevee is an attempt to compete with Epic’s Unreal engine, which is basically what a load of video games and an increasing amount of cinematic special effects are made in. I’ve been playing around with Eevee since its release last summer and all the sketches I’ve done here have been made with it. The difference between Eevee and Cycles is simple: Eevee is quick, Cycles is accurate. The way each of them does this is a bit more complex, but not too complex:
Cycles uses ray-tracing. This is a process where the engine attempts to trace the path of all the light rays from all the light sources as they bounce around all the stuff in the scene and end up in the camera. This means you get a greater degree of accuracy the more sampling you do: the more time you leave the engine to iterate over and over again on the rays being traced, the more accurate the lighting. So this week’s rendering used 256 samples; each pixel has been checked 256 times for the right colours based on the light bouncing around in the scene. This takes about 8 and a half minutes per full HD frame. This is not real-time rendering, when you consider that a frame is about a twenty-fifth or so of a second on a good day.
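In Blender’s Python API those settings amount to a couple of lines – a hedged sketch, and the sample count is the only number taken from this week’s render:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 256          # each pixel gets traced 256 times
scene.render.resolution_x = 1920    # full HD frame
scene.render.resolution_y = 1080

bpy.ops.render.render(write_still=True)  # ~8.5 minutes a frame on my GTX 1070
```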
Eevee is a real-time(ish) rendering engine that uses light-mapping AKA rasterizing AKA direct lighting instead of ray-tracing. Instead of tracing the path of the light sources, the engine sort of ‘paints’ things like shadows on to the surfaces receiving them and generates a flat image (or ‘map’) of the colours of those surfaces. This is how video games generally work, although contemporary hardware is getting pretty good at real-time ray tracing. The downside of this process is a loss of accuracy: reflections, bounces, indirect lighting, ambient light and other incidental artefacts of physical reality are very difficult to get in real-time. But a frame takes about 20 seconds to render.
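The switch is one setting, plus the screen-space tricks that stand in for properly bounced light – again a sketch, assuming Blender 2.8x:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'
scene.eevee.use_ssr = True    # screen-space reflections: a cheat for mirror-ish bounces
scene.eevee.use_gtao = True   # ambient occlusion: a cheat for soft indirect shadowing

bpy.ops.render.render(write_still=True)  # ~20 seconds a frame rather than ~8.5 minutes
```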
Unreal is a remarkable exception to this law of cost vs accuracy: they’re very close to closing the gap between accuracy and speed, as the Unreal 5 demos and their use in recent on-set special effects show. So, while tapping my feet for 58 hours waiting for this week’s sketch to render, I wondered about the relative costs. How far do we go for realism? When does realism actually start to tug at reality in the cost of producing it? How much light do you need to produce realistic light?
Well, Cycles runs off a graphics card and I can calculate, to a reasonable degree of accuracy, the power that card draws…
The NVIDIA software on the PC shows it averaging 67.5% of maximum power, which is 150 x 0.675 = 101.25 watts.
The rendering is taking about 8 minutes 30 seconds per frame for 415 frames: 8.5 x 415 = 3527.5 minutes or 58.791667 hours or 58 hours and 47 minutes.
So, in kilowatt-hours that’s 101.25 x 58.79 = 5,952.7 watt-hours, or about 5.95 kilowatt-hours.
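Or, if you’d rather let the machine do the arithmetic, the same sums in a few lines of Python:

```python
GPU_MAX_WATTS = 150        # GTX 1070 rated board power
GPU_UTILISATION = 0.675    # average reported by the NVIDIA software
MINUTES_PER_FRAME = 8.5
FRAMES = 415

draw_watts = GPU_MAX_WATTS * GPU_UTILISATION     # 101.25 W
render_hours = MINUTES_PER_FRAME * FRAMES / 60   # ~58.79 hours
energy_kwh = draw_watts * render_hours / 1000    # ~5.95 kWh

print(f"{draw_watts:.2f} W for {render_hours:.2f} h = {energy_kwh:.2f} kWh")
```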
How much power a solar panel produces is 🤷 dependent on a bunch of stuff but seems to be in the range of 250-400 watts. If we had continuous sunlight for those 58.8 hours, this render would only be using somewhere between a quarter and two-fifths of a single panel’s output. At the moment the UK is getting around 11.5 hours of sunlight a day but… it’s Britain. On a cloudy day, solar panels produce approximately 10-25% of their rated output. So…
Best-case scenario: 400 watts x 0.25 x 11.5 hours = 1150 watt-hours or 1.15 kilowatt-hours.
Worst-case scenario: 250 watts x 0.1 x 11.5 hours = 287.5 watt-hours or 0.2875 kilowatt-hours.
Let’s average it out to 0.71875kwh per day.
It’s going to take (58.79 / 24) = 2.45 days to render. 0.71875 x 2.45 = 1.76kwh produced by our imaginary solar panel in that time in British weather, while we need 5.95 kwh. So we’d need three and a bit – call it four – solar panels, just to run the GTX 1070 to produce this week’s digital sketch. This obviously doesn’t include any other part of the computer, the screen or my flat.
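And the cloudy-Britain supply side, same assumptions as the prose, panels rounded up because you can’t buy 0.4 of a solar panel:

```python
import math

HOURS_OF_SUN = 11.5

best_kwh_day = 400 * 0.25 * HOURS_OF_SUN / 1000    # 1.15 kWh: big panel, thin cloud
worst_kwh_day = 250 * 0.10 * HOURS_OF_SUN / 1000   # 0.2875 kWh: small panel, proper gloom
avg_kwh_day = (best_kwh_day + worst_kwh_day) / 2   # 0.71875 kWh

render_days = 58.79 / 24                           # ~2.45 days
produced_kwh = avg_kwh_day * render_days           # ~1.76 kWh per panel
needed_kwh = 5.95                                  # from the GPU sums above

panels = math.ceil(needed_kwh / produced_kwh)      # 4
print(f"{panels} panels to feed the GTX 1070")
```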
So Cycles is terrible for the environment and I’m never happy anyway, so back to Eevee next week. At least it doesn’t put a big dent in the planet to use it.
Short Stuff
I already opined about the ‘oh my god they kept all the health data in Microsoft Excel’ story over on the hateful Irish microblog. Listen, what did you think big data was, exactly? It was always just bigger Excel files. In parallel, this story about Palantir’s upcoming IPO came out recently. The short version is that Palantir basically makes nice interfaces and isn’t actually that smart.
I loved the new Watchmen series. I don’t know if that’s the right thing to say or if it’s problematic or not, but I just thought it was so fucking smart. I’m sure someone will tell me why I’m wrong.
OpenAI continues to undermine its credentials, its mission and, you know, its actual name by licensing GPT-3 exclusively to Microsoft. (I made a promise, which I’ve definitely told you about before, to stop spending energy being outraged at hypocrisy, so this is just an FYI rather than a ‘can you fucking believe it?!’)
I’ve been spending a lot of time on the proper bike and not much on the fixed gear. I got on it the other day to go into town and was shocked at how hard it was to push along, for something that for the last fifteen years has been as natural as walking. Anyway, I’ve definitely posted this before but here again is Ana Puga’s Hotline, which I always turn to for inspiration. She’s an incredible mover; proper ‘I’m a leaf on the wind’ stuff. Which is nice to see again after how janky men’s racing has been over the last few weeks.
I’ve been trying to call people I haven’t spoken to in ages just to chat. Lots of folks have reflected on how the various states of lockdown have given them perspective and I realised that I don’t spend time talking to the people who inspire me enough. Anyway, if you’d like to just have a chat, let me know. I’d love to chat with you. If not, I’ll write you again next week. Love you as always.
The sky was briefly beautiful this afternoon. I looked up from the screen to our east-facing balcony, holding my neck at an unfamiliar angle, and couldn’t really think about anything except how rich the clouds were and how the gradient of blue to dirty gold at the horizon looked like the backdrop of a massive set I’d accidentally wandered into.
Normally this blog and the digital sketch are mostly done by the weekend; I chuck in things here and there over the beginning of the week, give it a quick glance over in the morning and then post. This time I’m cutting it pretty close to the wire, staying up late the night before. This may be because I kept fiddling with the sketch; I’m not happy with it. It feels too cliched, there’s no focus, the lighting is a bit janky. Interiors are hard. So I chucked some volumetrics in and so infinitely multiplied the rendering time, which means it was only finished about 15 minutes ago. I also didn’t get to really play with the new global illumination addon because I realised I didn’t know enough about Blender’s existing global illumination setup; hence the new learning curve, hence the fiddling, hence the lateness.
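(For the record, ‘chucking some volumetrics in’ was roughly this – a hedged sketch of a world-level Volume Scatter, which is the usual way to fill an interior with haze; the density value is illustrative:)

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

# A Volume Scatter node plugged into the World Output's Volume socket.
scatter = nodes.new(type='ShaderNodeVolumeScatter')
scatter.inputs['Density'].default_value = 0.02  # a little haze multiplies render time a lot

links.new(scatter.outputs['Volume'], nodes['World Output'].inputs['Volume'])
```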
We Can Simulate It For You Wholesale
I’ve been chewing over a thought that popped up in some writing a few weeks back: it feels like realism is entering a really contested space. There are so many different versions: cinematic, animatic, climatic, natural, mathematical. All of these have their own versions of ‘realism.’ Even then, within each of these larger versions of realism are contests based on the limits of the technical frame through which most folks experience them. I want to try and unpick this with two examples at the boundaries of video game-esque simulation: Microsoft’s new Flight Simulator (which I haven’t and won’t ever use) and the interactive cycling ‘simulator’ Zwift (which I do use).
Both of these, though based on video game engines and technology, are edge-case video games. They aim to simulate certain effects of a real-world experience – of flight and of cycling – to a degree of realism. It’s important to note that the briefs are different: Flight Simulator does fall into the fantasies of video games in the sense that it aims to connect (most of) its ‘players’ with an experience they can never have – flying a plane. Zwift, meanwhile, doesn’t and can’t seek to simulate cycling, only to give a passably entertaining experience of cycling on the spot in your flat. And, unlike Flight Simulator, everyone who uses Zwift is necessarily also a cyclist – because you need a bike.
What do I mean by a contested space? Should you ask rhetorical questions when writing? Well, most desktop computers are now very good at rendering good-quality graphics relatively easily, thanks both to advances in software and hardware and to the standardisation of certain technical processes that make them more interoperable. However, there have to be some concessions; there’s only so much that can be simulated, and so the world has to be built in response to imagined use-cases. So what decisions are made when someone, creating a world, defines the boundaries of realism? How do you predict the actions of an actor or individual in your simulation such that all the affordances of the world appear as realistic, and not as simulation? Let’s briefly explore a corollary from Blender:
Given a limited technical frame, limited either by money, memory, power or speed, decisions have to be made about what is primarily constitutive of reality in constructing the simulation and what is secondary.
For instance, Blender’s ocean modifier tool is remarkably good at really quickly simulating what appears to be the surface of a body of water. This might otherwise take hours of messing around with procedural displacement, but in a few clicks a relatively new user can make an ocean, pond or lake surface and animate it, even simulate foam and spray. (I know this because I do it in day one of my Blender 101 class.) The good folks at Blender (which, remember, is produced for free by a foundation) decided that this was a tool users would need to improve their workflow, but crucially they also decided that this tool would mostly be used at a scale of 10-100 meters. If you attempt to render a scene any closer than 10 meters-ish then the geometry and lack of surface detail become obvious. You can fix it up with some extra stuff, but that’s not part of the drag-and-drop modifier that Blender have designed. Any further out than about 100 meters and you’re forced to tile it, resulting in an obvious repetitive pattern. Again, this is fixable but requires an extra bit of know-how and experience.
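Here’s what that judgement call looks like from the scripting side – a sketch, with the bpy property names as of Blender 2.8x and the numbers only illustrative:

```python
import bpy

plane = bpy.context.active_object              # any mesh will do, e.g. the default plane
ocean = plane.modifiers.new(name="Ocean", type='OCEAN')

ocean.geometry_mode = 'GENERATE'  # let the modifier build the water grid for you
ocean.resolution = 16             # surface detail, tuned for mid-distance viewing
ocean.spatial_size = 50           # metres covered by one simulated patch: the 10-100m zone
ocean.use_foam = True             # the foam-in-a-few-clicks mentioned above

# Past ~100 metres you tile the patch, and the repetition starts to show:
ocean.repeat_x = 4
ocean.repeat_y = 4
```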
Blender had to make a judgement call, given the limited technical frame of the average user’s computing power, technical skill and aesthetic preferences, to deploy these particular algorithms to make this particular modifier that works in these particular cases. Given a limitless amount of computing power this would not be a problem, but realism has been zoned to 10-100 meters. Blender have circumscribed a technology based on certain expectations about its use. To draw on my favourite paraphrased definition of technology from Alfred Gell, it is a tool (the computer, Blender), the knowledge of how to use the tool (which has been made as simple as possible by a drag-and-drop modifier) and the social necessity of its use (which is limited to 10-100 meters).
(Don’t even ask about horizons. There’s a reason you don’t do horizons.)
So, back to Flight Simulator and Zwift. The nearly universally, hysterically positive reviews of the new Flight Simulator have pointed to its astounding accuracy, both of the experience and in attention to detail. This has been achieved, as previously noted, by Microsoft plugging various bits of its ecosystem – mapping, weather, 3D scans and advances in procedural rendering – together into Flight Simulator to produce what appears to be a vast, realistic world.
I think, with this type of thing, we sit at the edge of an interesting sea-change where it’s in fact easier to conceive of the game as a god-mode tour of Microsoft’s GIS data trove than as a simple flight sim. Similar to the way video-game publisher and faceless world-eaters Electronic Arts have become the de facto gatekeepers of all football data, real or virtual, such that it’s easier to see them as data brokers than game developers. But even with all this global data, the trick of realism only works at the scale within which you were intended to interact with it.
For instance, it uses Bing Maps (lol) to generate the whole world and then extrudes buildings and landscapes from available data. However, limited by its technical frame (internet connection speeds, desktop computation), it makes best guesses about how to generate these 3D forms, and so you end up with Buckingham Palace rendered as a drab office building. In the form of interaction intended by the designers, these oversights are forgivable, given that you might, for instance, decide to cross the whole of Eurasia in one session; as a gestalt experience, the whole thing only needs to be accurate enough. The social necessity (believable, global flight) makes use of an imperfect tool.
Briefly then, Zwift also offers a rendering of Buckingham Palace, hence the useful parallel. Again, it’s worth pointing out that the designers have no intention of suspending disbelief; the Zwift maps are playgrounds – everything is out-sized, brightly coloured and simplistic. Zwift took a punt on the chance that most users would either not be Londoners or not be familiar enough with London to judge it by its landmarks. London in Zwift is a trope of London, and Buckingham Palace is a trope of Buckingham Palace: all stone and imposing. But this isn’t streamed, it’s all on your hard drive, so the models are low-resolution with simple geometry and recycled assets. In fact, the palace model in Zwift is comparable with the one in Flight Simulator for geometric detail. The realism of Zwift has nothing to do with the visuals; it’s all in the relentless stream of data and the feeling in the legs. It’s never going to convince you that you aren’t in your living room and it doesn’t want to. It just needs to hit the tropes and symbols of London to make it cartoon fun.
Flight Simulator‘s glitches are a counter-factual rendering of reality in which the expectations are well-defined (Buckingham Palace is a big stone, classical building) but the reality (it is an office block from like, Swindon) conflicts. In Zwift, the expectations are cartoonishly reduced and out-sized, enough to trigger acknowledgement but not enough to suspend disbelief.
Really, I’ve been writing a lot about expectations and reality, so this stuff is just rattling around up there.
I almost made the whole post about this ↓ but I got distracted by renders of Buckingham Palace. Anyway, when Epic Games aren’t busy trying to disassemble Apple and Google’s app gatekeeping monopoly, they’re also making significant leaps in special effects and developing the Unreal engine (of ‘Holy shit! Have you seen the Unreal 5 demo?‘ x 16/week fame), the world’s most successful video game engine. This totally slipped me by, but it was used on The Mandalorian: a massive LED screen with an Unreal real-time rendering on it, on a sound stage. The death of green-screen? They’ve basically collapsed a massive part of the post-production chain. This is super cool but very upsetting because of all the Manovich and Levitt theory I’ve read that draws on the aesthetics of post-production, which may not be around much longer. Oh well.
You ever get the sense that Facebook still hasn’t figured out what it’s for? That the pomp and bombast is wearing thin? Like Robespierre toward the end, it has failed both to bring the promised land and to keep the advertisers happy. It’s sort of easy to forget that it’s a dying empire. The latest 20 Minutes Into The Future bulletin concerns the world after Facebook. The signs of ageing are already there; it’s slow to respond to changes and threats, can only survive by copying or stealing and is hitting an upper limit of the viability of an out-dated model. (Literally. The number of dead users is set to surpass live users in the 2060s.) But what happens to cultural memory and heritage when one private company holds all of it?
In my apparently weekly reading on Disco Elysium (why haven’t you played it yet? It’s only like 20 hours, take the weekend, my friend), I was frustrated by this article on Vice about how a lack of sincerity holds it back from ‘being the game about Communism it could have been.’ I’m not sure I agree with all of it. The author seems angry that the game’s steer towards characters that self-deceive and obfuscate their feelings makes the fabled revolution impossible. I agree that a better world requires sincerity and honesty, but calling on the main character to get over his own very real internal emotional turmoil because his ‘solidarity should be elsewhere’ is a flawed political strategy that we should not be inspired by. If you read any interviews with the creators (who are avowed communists), they highlight self-deprecation as a starting point in levelling the playing field of political discourse. And boy, does Disco Elysium self-deprecate.
Here’s beloved collaborator, colleague and friend Charley Peters on the Art Fictions podcast. It’s all structured around The Yellow Wallpaper, which I’ve never read, but there are so many great things there: ‘might as well talk of a female liver.’ I listened to it while writing this (me n Wes get a shout out).
Since this whole thing has been about simulation, go to this, which looks at the other side of the whole thing.
I had more short stuff including something related to the new Epic/Unreal tech but once again I am using A Content Strategy and saving it for next week.
Those short stuffs weren’t very short, I appreciate that. But, you know, once I start I get too lazy to stop. Anyway, I love you as always, I’ll write you next week.