News: June 2017

Every two months ain't bad I reckon. I've migrated this to my blog too so I don't have to overwrite every time. 

Well, the big news I suppose is that I've taken on a new job at LCC. As of last Wednesday I'm Course Leader for MA Interaction Design Communication. I'm really excited to get knee-deep in a new challenge and craft something amazing with a great course, but I guess I'm also a bit sad to be leaving the undergraduates behind. I'm still going to be teaching on BA Interaction Design Arts and BA Information and Interface Design but my focus will be on working on a vision for IDC. There's lots of other stuff going on at LCC too. I'm currently deep in planning for a few other side projects:
  • London Design Festival. I'm curating the show for the Critical Design and Digital Futures cluster at the design school. We've got big plans for a showcase of work and research being done by staff in this area, and I'm looking forward to showing it to everyone.
  • Speculative and Critical Design Summer School. We've got another edition organised by Ben Stopher and myself coming up in July, this time on the theme of 'Limits to Growth.' With Anab Jain and Georgina Voss we produced a short podcast discussing some of the ideas, which you can listen to here.
  • Interaction Design Arts' graduation show opens on June 15th. There's some great work. Please come along if you can. 
  • One point eight million other things.

Studio
Finite State Fantasia is at De Brakke Grond in Amsterdam until July in a very different form. I'm genuinely excited to keep playing with it and find other ways of getting it to talk and say stuff. My admin/creative work balance is way out of alignment right now and so any time I get to just sit down and play around with actual projects is a joy. This latest iteration is probably my favourite and it's had some good feedback. I've blogged about it here. Also at De Brakke Grond is mine and Natalie's Alchemy podcast. Rumour has it that it was one of the key inspirations behind this year's Fiber festival so I'm really happy that it's found a place.

Over in Athens at the 'Tomorrows' exhibition I've got New Mumbai on show. They've also made prints of some of the images that I never really meant to exhibit but actually work quite well.



The video from my talk at STRP, covering some of the context around the project, has been put up. I've been working through the related themes of 'What's It Doing,' 'Swimming With Submarines' and 'Haunted Machines' over the last few months, and if you've been at any of the talks I've been giving you've probably seen a progressive tightening of the narrative.

Strange Telemetry
Georgina and I just returned from Athens where we were running a workshop on speculative design and the city as part of the Tomorrows exhibition mentioned above. We had a great crew of participants from across the city and some really interesting discussions. I'm sure there'll be a writeup soon.



Upcoming
I've got Mercenary Cubiclists going back on exhibition at the Vienna Biennale 'How Will We Work' show curated by the Superflux folks. The lineup for that show is stellar. I don't have any plans to go out and see it yet, but I'm really hoping I will soon.

Natalie and I are going to be chairing a seminar on Haunted Machines and Wicked Problems at the HKU in Utrecht, which puts me in Utrecht again every weekend through the end of June. Engaging students in the festival is a big part of what we want so I'm excited to be running this little summer school for them.

There are literally hundreds of other things. I've got nested to-do lists hidden inside other to-do lists set to ten minute timers. But if I look at them I'll never finish this, so I'm just going to call it quits.

I can't embed an Instagram as normal because Blogger is rubbish, as everyone knows, but I'm too stubborn to leave.

Love you all,

Tx

Finite State Fantasia at De Brakke Grond

The Finite State Fantasia is on exhibition at the moment as part of On Alchemy and Magic, a continuation of the original The Act of Magic show at STUK, Leuven, now at De Brakke Grond in Amsterdam. I've made some pretty significant changes to the functioning of it for this iteration.


The format of the installation has been changed to fit the space. From the beginning we were faced with the problem of not actually being able to fit the room inside De Brakke Grond. Including projector throw, the full installation requires around ten square metres of space. Instead, the project had to be refigured. This was an opportunity to address some of the failings in the original installations, the most significant being that it was hard to 'get.'

Following the installations at STUK and STRP I'd interviewed the curators and various people associated with the project to gain feedback. This problem of not 'getting it' came up both times and is a self-contradictory one. The piece is, to a degree, intentionally meant to alienate viewers. The sounds it makes are harsh and inhuman, the space is illegible. The project is meant to put the viewer inside the sensorium of a machine and for them to experience alienation and cognitive dissonance as the machine represents its interpretation of the space. I've always been hesitant as to how much explanation needs to be given. Sometimes 'an invisible machine is moving around the space and making a point every time it hits something, including you' is enough to indicate what the audience is seeing and hearing; sometimes it requires deeper explanation. One thing that always came out was a need to understand what this process added up to. It's clear from the motion and dynamics of the presentation that something is happening, but what is the end result?


An example of a remeshed simulation from the second iteration of the project at STRP. 

At STRP I took the data produced from each simulation and ran it through an incredibly arduous rendering pipeline to produce re-meshes of the space. These meshes were made by taking the points produced in each simulation, calculating their normals - roughly, the direction the point faces - and running various remeshing algorithms in MeshLab to get a model out of the other side.
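
For anyone curious what that step looks like in practice, here's a minimal sketch of the same idea - points in, normals estimated, surface out - using the open-source Open3D library rather than MeshLab, and with an invented filename standing in for the exported simulation points:

```python
# A rough sketch of the points -> normals -> mesh step. The actual pipeline used
# MeshLab's remeshing filters; Open3D is swapped in here because it can be scripted
# in a few lines. The input filename and parameters are illustrative only.
import numpy as np
import open3d as o3d

points = np.loadtxt("simulation_points.xyz")  # hypothetical (N, 3) export from the simulation

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# Estimate a normal for each point from its local neighbourhood -
# roughly, the direction the point faces.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30)
)
pcd.orient_normals_consistent_tangent_plane(30)

# Reconstruct a surface from the oriented point cloud (Poisson is one of several options).
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("remeshed_space.ply", mesh)
```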

This had a significant effect on audience interaction with the piece. During the last weekend of the STRP Biennale, the outputs were printed and, according to the feedback, the audience were immediately drawing comparisons and similarities between the two spaces. Both are inherently cuboid and indicate a relationship, and once the idea of the invisible machine is seeded, it's easy to see the connection.

For De Brakke Grond I decided to make this process live. This would give me the opportunity to break the process down into its constituent parts and draw a direct connection between them. For the Unity parts - the simulation and the point cloud - this is a relatively simple procedure of moving cameras around. The remeshing was more complex and relied on a lot of Python to connect Unity's output to Blender (I'm forever grateful to whoever instigated the idea of being able to right click on any Blender object to grab its Python code.) Additionally, a script runs FFmpeg every day to package the images produced by each Blender render into a video file.
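
That daily packaging step can be as simple as pointing FFmpeg at the folder of rendered frames. A rough sketch of what such a script might look like - the paths, frame naming and encoding settings here are invented for illustration, not the ones running in the show:

```python
# Collect the frames Blender has rendered and hand them to FFmpeg as an image
# sequence. Everything here (folders, frame naming, codec settings) is a guess
# at a plausible setup rather than the installation's actual script.
import datetime
import subprocess

frames_dir = "/data/fsf/renders"  # hypothetical Blender output folder
out_file = datetime.date.today().strftime("/data/fsf/video/fsf_%Y-%m-%d.mp4")

subprocess.run(
    [
        "ffmpeg",
        "-y",                                   # overwrite today's file if it already exists
        "-framerate", "24",
        "-i", f"{frames_dir}/frame_%04d.png",   # frame_0001.png, frame_0002.png, ...
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",                  # keeps the output playable almost anywhere
        out_file,
    ],
    check=True,
)
```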

One simulation sped up 600%. The real simulation time is one hour. 

An example of a remesh, sped up significantly.

The obvious loss with this setup was the interactivity. Without the room to enter, there was no way to introduce human obstacles that play with the mesh and affect the way the machine interacts with the space. To make sure that the simulations were varied and interesting, we introduced a script that generated between zero and three random objects for each simulation. These are simple shapes - cubes, cylinders, cones, spheres - generated at random that interfere with the space. In the above video you can in fact see the two spheres that were generated for that simulation in the last few seconds of the remeshing.
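
The logic of that obstacle script is tiny. Here it is sketched in Python for illustration (in the installation itself it would live inside Unity, presumably as a C# script); only the shape list and the zero-to-three count come from the piece, while the position and scale ranges are made up:

```python
# A sketch of the random-obstacle logic. Only the shape list and the 0-3 count
# reflect the actual piece; the position and scale ranges are invented here.
import random

SHAPES = ["cube", "cylinder", "cone", "sphere"]

def generate_obstacles():
    """Return specs for zero to three random obstacles for one simulation run."""
    obstacles = []
    for _ in range(random.randint(0, 3)):
        obstacles.append({
            "shape": random.choice(SHAPES),
            "position": (random.uniform(-2.0, 2.0),   # x
                         0.0,                          # sit on the floor plane
                         random.uniform(-2.0, 2.0)),   # z
            "scale": random.uniform(0.3, 1.0),
        })
    return obstacles

if __name__ == "__main__":
    print(generate_obstacles())
```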

The piece therefore became more 'explanatory.' It's easy to see how the first screen leads to the second and the second to the third. As previously noted, however, there are significant deviations which make little sense from the programming side. In the video above for instance, the machine appears to become 'stuck' to the surface of one of the spheres in the simulation. This results in a knotting of the space around increasingly small and dense movements. The way the Blender Point Cloud Skinner algorithm works is that it weights points that are closer together more heavily and gives them preference, so the entire mesh appears to start knotting and contorting around the trapped machine.
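
For a sense of why a dense cluster ends up dominating the mesh, here's a rough illustration of that kind of density weighting - not the Point Cloud Skinner's actual code, just the general principle that points with very close neighbours carry disproportionate weight:

```python
# Not the Point Cloud Skinner addon's code - a toy illustration of density
# weighting: points in a tight cluster (e.g. where the machine gets 'stuck')
# get far higher weights than points scattered through the rest of the space.
import numpy as np
from scipy.spatial import cKDTree

def density_weights(points, k=8):
    """Weight each point by the inverse of its mean distance to its k nearest neighbours."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first column is the point itself (distance 0)
    mean_neighbour_dist = dists[:, 1:].mean(axis=1)
    return 1.0 / (mean_neighbour_dist + 1e-9)

rng = np.random.default_rng(0)
knot = rng.normal(0.0, 0.05, (200, 3))          # dense knot of points near one spot
scatter = rng.uniform(-2.0, 2.0, (50, 3))       # sparse points around the rest of the space
weights = density_weights(np.vstack([knot, scatter]))
print(weights[:200].mean(), weights[200:].mean())  # the knot's weights dwarf the scatter's
```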

The concern here is also that the piece is too explanatory. The conceptual basis of building an experiential installation that intentionally tries to alienate human beings is lost in something that is essentially a debugging tool for human legibility.

Next year The Finite State Fantasia may be worked up into a much larger version but for now, after De Brakke Grond, it's being parked. Working with simulation tools as a way to bring ideas of computation into an experiential installation was an interesting new approach but I'm not entirely satisfied by the audience-facing nature of it. All the iterations are a way to create dialogue with other humans, using the machine as a prop, and I'm more interested in examining the critical structures of the machines themselves in a wider historical context: why are certain machines a certain way, and what does this indicate about future shapes of human space?

FSF: I have no idea

I'm just setting up the new Finite State Fantasia ready for the show next week at De Brakke Grond. It's something that I've had my head stuck in for the last few months while I've been building it and taking it around Europe.

Due to the space constraints here it's had to be radically reimagined for exhibition and so I've ended up taking a bit more of a systematic, 3rd-person way of showing it. Something that is maybe a little less experiential and mysterious but tells you more about the work and its process.

The irony is that I've been looking at it for the last few hours and I have no idea what it's doing. Everything's calibrated and working properly but the emergent properties of this new setup mean that it's almost totally illegible to me. I can see things happening, but I don't know why they're happening or what's making it work in a way that is completely unexpected. I suppose, having set it up, I had a really strong idea of its function and so was never really that in awe of it. I would struggle to explain its actions now if someone asked. Which is genuinely great.


Blackout In Render Street

I've recently been thinking about the structural and computational limits of rendering. I've written a lot previously about how this medium is rapidly becoming a key popular orientating mechanism for the future, but unlike print, it obviously has dependencies. I was going to write a blog post about it, and then I wrote a story. ...
-

The flickering lights on the nearby crane, diffused by the thick city air, caught her attention. It was coming up to 9pm and the evening brownout would be starting soon, with power cut to non-vital services. During winter there were four a day, each for an hour, and she was constantly amazed at how quickly the city had adjusted and normalised this dramatic infraction over the last two years. Gazing out of the window over her laptop into the never-dark of the city, she glared pointedly at the ever-shifting urban landscape. The rapidity of construction was staggering but it was hard to remember it being any different, adjusted and normalised as everything in the city was.

She examined the site in front of her block: a massive pit of gravel and half-finished foundation that extended to the next street over. Trucks and machinery littered the site and huge halogens illuminated the ground, casting harsh and perfectly black shadows under the machines and concrete and wire. All ringed by the defensive palisade of the standard 8 foot construction hoardings. ‘What used to be there?’ she wondered. Last week there was one crane, now three, next week… six? Who knew?

She walked past the site and others like it every day. The buildings grew and changed, were demolished and rebuilt all over again. The city was pockmarked with pits and piles of rubble and foundations forever. Her streetscape was mostly digital hoardings: glaring and glowing LED screens proclaiming property lifestyle choices. Smiling families catching frisbees in opulent green spaces that never seemed to materialise in the ceaseless churn of construction and deconstruction.

She flicked open a new tab in her browser and went to Street View. Entering the time machine, she scrobbled back through bookmarks from the last few months when the Google cars had been through: hoarding, hoardings, mesh fence, hoardings, hoardings, hoardings… all the same but different. Eventually she found something. Five years back, the time before the hoardings had arrived. A nondescript, squat concrete block of two or three storeys receded from the road with a ramp leading up to the heavy wooden security door. No indication of what it was. A community centre or hall, maybe some shops or classrooms. It didn’t actually matter. She felt no personal nostalgia for the boring building. It’s just.

It’s just that it’s been five years since anything on this street had been real. Not a promise or a speculation, a future, a rendering or an investment opportunity. A future ever receding behind the CGI towers and greenscape. Five years since that squat concrete bulk of sink-estate architecture had been replaced by ‘City Living With A Wholesome Lifestyle’ or ‘Urban Choices’ or ‘Metropolitan Family Quality’ or ‘Quality. Efficiency. Living’ or ‘Completely Global. Completely Local’ or ‘Live Your Dream’ or ‘Living: Redefined’ or ‘The New Face of Iconic’ or ‘Celebrating Heritage, Creating Opportunity’ or, or, or…

She studied the concrete block on her screen. It sat, blissfully unaware of its fate, obscured by a mix of vegetation and the stretched distortion of the Google car’s camera. The trees were gone for sure. The block eradicated, how was it the same place? Google’s rendering made it so she supposed. Using her mouse, she tried to rotate around and zoom, find out more about the hideous little thing but to no satisfaction. She was stymied by Google’s limited data capture of the time and a generation of architecture that refused to broadcast any intentions.

She looked back out of the window. The hoardings on the site opposite were still going, looping their kitsch little animations, the gleeful renders and future promises unaware of the brutal power cut on its way. Her laptop battery was charged in advance as always; the blackouts were scheduled and she was well-attuned to making time for them, and normally she’d continue through the hour. But. She hesitated.

After a few moments she closed her laptop, put on a coat and left the flat. She descended the stairs and crossed the street.

She concentrated on the hoarding on the other side of the road from her apartment building door. From this perspective she could see none of the concrete foundations, rubble or machinery behind its dimming digital glare. In front of gleaming towers, on a glorious sunny day, a young woman in sunglasses gliding down a non-existent gravel path smiled back at her; her hand reached out, she laughed, beckoned and turned around. The CGI view panned up to the sky, the promised apartment towers gleamed in the rendered skybox, constructed from some foreign atmosphere. The view faded to white. ‘Live in Tomorrow’s City.’

She stepped over the road in the chill and quiet and walked over to the hoarding. The animation restarted. The woman walked towards her, smiled, laughed, beckoned, turned, the view panned. ‘Live in Tomorrow’s City.’ Fade to white. The animation restarted.

She moved closer, following the woman’s face as she smiled, laughed, beckoned. ‘Live in Tomorrow’s City.’ Fade to white.

Closer still. She reached out with both hands and touched the dirty surface of the hoarding, layered with grime and soot from the street and the site. Smiling, laughing, beckoning. ‘Live in Tomorrow’s City.’ Fade to white.

Closer. Smiling, laughing, beckoning, ‘Live in Tomorrow’s City.’ White.

Closer. The image became blurred, the laughing, the beckoning, the tag line became enmeshed and blurred with the bright LEDs.

She felt the cool plastic of the hoarding touch the tip of her nose. In front of her eyes, a handful of impossibly bright LEDs struggled to maintain focus, blinding her and filling her vision. Somewhere else there was laughing, beckoning, ‘Live in Tomorrow’s City.’ White.

And then just gone. Just black. Her eyes adjusted. Her ears suddenly attuned to the absence of the gentle buzz of electricity. In front of her eyes, the LEDs resolved to dark grey bulbs and she could see the smudged brown of the dirt. No one laughed or beckoned. The future had turned black.

She stepped back. All along the street, the hoardings had cut out, leaving nothing but grimy screens. She was alone and the street was empty but it was more and less than that. It was empty of itself. As if she was looking at the shell of the street. It came from nowhere and belonged to anywhere. Cracked pavement, unmarked road, black walls. With only the blank LED screens becoming obelisks in her eye line she felt trapped. She looked at her hands, her fingers rubbed with the grime of the LED hoarding. Her hands were grey. She tried to remember the concrete block that stood here five years ago but could only summon Google’s rendering, an overcast day, trees, blurred edges where the software stitched the mediocre panorama into reality. She turned around in the grey, everything suddenly real between renders of past and future.

Then the most brilliant illumination. She saw her hands cast in shadows and white then blues and greens. Colours so bright and real, impossible reality.

Smiling, laughing, beckoning. ‘Live in The City of Tomorrow.’

I, Renderer.

I recently came across Alan Warburton's new short video essay Spectacle, Speculation, Spam. It's a pretty comprehensive and understandable breakdown of how software itself is a unifying point of theory and practice. Using the tool as the means of production invites you to theorise on what it means to be using that tool in a critical way. It's worth watching because there's a lot of nuance in the argument and it's something that in my practice I spend a lot of time thinking about. I'm kind of into opening up and talking about the tools used in art and design production, not in an open-source way but more because I think those tools have an interesting relationship of shaping and being shaped by their application. The project I'm working on at the moment is a simulation of a simulation of a machine inside a gallery. I made the conscious decision to make what it is very visible to the audience - none of the workings will be hidden, nor will the fact that it's essentially just a simulation running on Unity and not a 'real' machine at the highest level. This is a way of opening up the layers of production to the audience, not to present a spectacle (though I hope it will be spectacular) but to create a kind of Pompidou Centre effect.

Art-Labour

Warburton also raises the vexed problem of labour. Though this new project is built in Unity, I didn't build it. I've dabbled in Unity but am nowhere near experienced enough to build what I envisioned to the degree of production I'd be happy with, so I've employed someone else and taken a more directorial role. This is a first for me; I've always done 95% of the labour on my projects, barring extremely specialist services like industrial 3D printing or where assembling a team is necessary, like film-making. I was faced with the problem of wanting to do something new and ambitious that was outside my skill set but also not having the time to actually learn those new skills, so I had to bite the bullet and get someone else in.

Rather than a one-way relationship, the result has been a lot of interesting critical conversations between myself and the Unity developer as a result of our understandings of how different software packages work. My 3D software expertise is in Blender (where I would say I'm at a high level of skill) while his is in Unity. We've ended up spending a lot of time in discussion about the different ways these software packages work out things like physics, particles and even basics like colours. It's exactly these kinds of base-level operating structures that I find great points of critical enquiry in working with software: why does Unity render particles such-and-such a way and Blender in such-and-such a way? I'm not going to go into a big software comparison, this isn't that type of blog, but it leads to interesting questions about who these packages are for, why they were designed that way and by whom. They're both free, but in different ways; they both offer similar functions, but for different outcomes; and so on.

Another interesting point on the problem of art-labour is that I'm working with a developer normally used to games. These games need to be made quick and dirty but look slick and polished. They need to go straight to users' phones and devices, so inconsistencies and bugs need to be ironed out. In my work, I'm trying to bring out exactly these software flaws and allow the audience to see the simulation fail and break, which I think creates a cognitive dissonance in the working relationship, reminding me of Jeremy Hutchinson's Err. project: 'I want you to make something for me but it's fine if it doesn't work.' I've been inviting the developer to leave unintentional flaws in the simulation setup; if it gets stuck or glitches and resets early, that tells us much more about these technologies than faking it would.



Protorenderer

I've never been able to draw (everybody says that, and everyone thinks that everyone else can draw). I suppose what I mean is: I was always frustrated with my inability, in drawing, to properly represent my ideas at any more than the most basic, diagrammatic level. When I was at the RCA I decided to play around with 3D printing and a classmate showed me Blender as a way to quickly knock up a model. After that I started playing around with it to make my own models of things, not as renders in themselves but as ways to think about objects and their physicality. I enjoyed rotating around them, zooming in, catching different angles and trying to get the thing on screen to fit with what I imagined. This new project, before even hitting Unity, had me building a dozen iterations of the setup and functional behaviour in a model of the exhibition space (see above) as a way to think about audience impressions and experience as well as to work out very technical constraints like projector throws.


My first render from 88.7 Stories From The First Transnational Traders - 2011-ish?

This is less of a product-design process than a cinematic process. I've always thought about things cinematically and have previously written somewhere or other about how cinematic visuals so easily slip into popular culture and so make a powerful vehicle for designers. The purpose of rendering for me is to get an impression of the thing with one order of reality removed, as in cinema. The shot above is probably the first render I ever did. It's terrible. But at the time I was overjoyed. It contains millions of faces and only about three materials. I teach students Blender and I think one of the hardest conceptual jumps to make is that you're not trying to model reality - you're faking reality. Game engines use this technique all the time, with things like clipping distance and object cutoffs lowering the processing cost of things that are far away. You can't tell from this image but in that glass room at the back are dozens of hand-modelled office chairs that no one can ever see but that suck up valuable processing time. I had yet to learn my own lessons.



So rendering became a method of prototyping. Most of my projects now involve Blender at some stage of their production - in some, for their entirety. The remake of 88.7 was an entirely rendered 28-minute film and performance. With that project I didn't want to do something that 'represented' the fiction I'd come up with but something that augmented it, offering a more interpretive eye into the characters whose stories I was reading. 'The Manager' is all gold and clockwork, 'The Engineer' a ghostly schematic-like first-person viewpoint, 'The Trader' has a sea of data, and we only ever see flickering fluorescent lights for the scientist as she gazes at the ceiling, wondering about her situation.



Watching Mephitic Air, made with Wesley Goatley (who's also doing the sound for the new project - another skill and theory set I haven't time to become expert in), is just over 30 minutes of rendered pollution data. It takes something that is classically 'immaterial' (in popular discourse, not in reality) and gives it the illusion of a materiality that is totally simulated. This influenced our decision to project on thin dust sheets rather than hard projector screens. Walking around it you could never fully grasp the shape of what was happening, just a vague sense of motion and change with the occasional explosion of visuals and sound. The materials are close representations of the substances we were aestheticising at room temperature. Things never normally seen in isolation because they're too small, volatile and diluted, suddenly made hard and irreal.

-

It's also a little bit a question of comfort. Through my practice, I've found the tautology that 'you end up doing what you do' really applies. I once spent a year writing and all I got was people asking me to do writing. I now find rendering so comfortable that I seem to always turn to it whenever anything new comes along. That's fine for prototyping but a part of this new project is pushing it aside once it comes to production. Instead of producing a render, which can only ever live on screen or paper, the aim is to produce an 'artwork' in the fullest sense - made with the optimum tools for the ideas, so that it's less about the methods of production and what they represent than about what the thing itself is. I think my attachment to rendering would otherwise make that impossible. The benefit of unifying practice and theory through the critical use of software (or whatever tools) only goes so far before they start to obfuscate the ideas you're trying to talk about in the first place - yet another difficult balance to strike.

I'm probably going to write and think more about why I render as time progresses, and I've already done endless little bits of writing on it. It's a hard skill. I wouldn't say there's a rendering package, apart from maybe SketchUp, that you could pick up and get running with in a day, and there are some really interesting conceptual barriers to get around when working with 3D on a 2D screen which lead to difficulties that are more than just learning functionality.