Last week I posted a writeup of the talk I gave at the Speculative Futures meetup in London. Loads of folks read and shared it, which is great, and no one (at least publicly or to me) expressed outrage at the thoughtlessness of my polemics. It sort of refreshes my faith in the version of Twitter we had, when it was fun and supportive, before the vicious McCarthyist version we have now. It's funny that in a week it had almost as many reads as my original Critical Exploits writeup from six years ago.
Six years ago feels a Hitchcock zoom away. I suppose it demonstrates some sort of consistency of thought that I've kept it going that long, and the practice has expanded and grown so much since - sometimes for the better, sometimes not. I'm delivering Critical Exploits to some students this afternoon, in fact, and it always elicits a fun discussion.
It's a conversation I'm keen to continue and something I'm actively exploring in my PhD (when I have time), so the feedback from folks was super useful. I'm always grateful for links to other texts or examples of projects that I can measure my ideas against, so please keep them coming.
decline.online is cooked and on the table! I've just uploaded and launched a working version (2.2) and I'm really happy with it. I made a last-minute decision last night to do some redesigning and remove the black border that was sitting over everything before. There were a couple of other, more technical and conceptual things that I've been working through and that will have to keep evolving as it grows:
- Dealing with the growing data set is going to be the most pressing problem. The background scraper runs every four hours, which means the data will quickly become quite heavy. I've implemented a fixed-width visualisation for the line charts so that they scroll left to go back in time, but I'll have to keep an eye on how that feels as the amount of data grows. I played around with a click-to-zoom feature, so that you could click on a line and it would zoom to show you the whole historical record, but I couldn't get it working properly.
- I put in some static sources – links to PDFs, maps and so on that are interesting but not necessarily scrape-able. There are many more of these than scrape-able data sets, so I need to be careful not to go overboard with them and to stick with the idea of building a historical record that can be visualised as single data points.
- I've tested in Chrome and Safari on various screens, but who knows what problems will occur as people start to play with it.
- I also need data sources. I've got a list of things to investigate and see what I can do with, but I want more from other people. If there are things you'd like to see, or data sources you know of, let me know and I'll roll them into an update for version three if I can.
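The append-every-four-hours-then-window pattern behind the charts can be sketched in a few lines. This is a hypothetical Python sketch only - the real scraper and its storage format aren't shown here, so the file name, field layout, and window size are all assumptions:

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Assumed layout: one growing CSV per scraped source, a row per reading.
DATA_FILE = Path("series.csv")
WINDOW = 60  # assumed number of points visible in the fixed-width chart

def record_point(value, path=DATA_FILE):
    """Append one timestamped reading (imagined as called every four hours
    by whatever scheduler drives the background scraper)."""
    with path.open("a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), value])

def chart_window(path=DATA_FILE, window=WINDOW):
    """Return only the newest `window` rows; older points 'scroll off' to
    the left, which is how a fixed-width chart stays light as data grows."""
    with path.open(newline="") as f:
        rows = list(csv.reader(f))
    return rows[-window:]
```

The nice property of windowing like this is that render cost stays constant no matter how big the historical file gets; the trade-off is that the full record needs a separate view (the zoom feature mentioned above) to be seen at all.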
Building this again was a whole learning process, and I got to grips with some things I've never done before, particularly working with Python for web scraping (comparatively easy) and using d3 (often very frustrating). As it keeps growing I'll try to make the most of the opportunity to keep learning new things. Send links, send feedback.
Like I say, I was on holiday, so this week I'm catching up on paperwork and finishing off a little project for Haunted Machines, which might launch soon; I can tell you about it then. I'm going to be at the Graphic Design Educator's Network event next Wednesday at the RCA. It'll be interesting to see what the movers and shakers of graphics are thinking about.
Love you. x