I took 10 days off a computer, which was genuinely wild, and came back to hundreds of emails, most of which were newsletters and marketing. So I’ve been on one of my periodic unsubscribe binges, being pretty brutal about whether I’ve actually ever read a newsletter and, if not, deleting it. I mostly read aggregating things because I just want the bottom line, quick, not thousands of words of hot reckons – I save my PhD time for that kind of reading. Nonetheless, here I am scribing out a few hundred words of hot reckons.
I am sort of constantly side-eyeing the app that I want to help me organise my brain, but I’m not sure it exists. I never look at the Twitter or Threads feed (I just post there) so most things come through the 50 or so newsletters I’m subscribed to. Between other things I scroll through these and, similar to books, have a pretty hard ‘I’m not obligated to read this’ rule: if it’s not grabbing me, skip it.
If it does grab me, then the filing begins. Anything PhD-relevant, e.g. papers and journal articles, gets downloaded and added to Zotero where it can be read, tagged and annotated properly. I try to read 1-3 things in Zotero a day to keep on top of it. Any pertinent quotes or ideas are dragged out and dumped in Scrivener for later sorting as part of the PhD process as described previously.
Then there might be stuff that’s more transportable or mutable or work-relevant: resources, signals or reckons, which all get put into Raindrop via the bookmark bar applet. Resources are lists, maps, toolsets, kits of… whatever that might be useful one day. Reckons – any long-form essays or articles – I’ve been less good at looking after, because there are so many of them and they don’t tend to have much staying power. Finally, signals should be broadly familiar to people: little snippets of change or interesting things that are worth keeping hold of. These actually get used a lot for work.
Regardless, I have another applet that I wrote myself that will quote whatever I’ve selected into a tweet, which I then copy to the Meta one – that’s about my only interaction with them, and I do it more as a sort of ‘I’m still alive’ to the Internet. The thing is, this is such a perfectly honed workflow for my reading and research that I’m not sure what I’m looking for, or even if there’s a problem – how would I know if I’m missing or forgetting things? People recommend new tools and ways of working all the time, and things pop up that I scroll through. Like, the ‘obsidian method‘ looks great, but does it just take up more time to learn a new way of doing things, and maybe my way of doing things is fine? My issue isn’t organising methods, it’s the time to do everything. And maybe friction is good?
Upcoming
A lot of things. I saw a thing about how taking on lots of bits and bobs is a sign of low stimulation, which is something I probably need to do some more soul searching on. But, presented for your consideration:
- HCID conference 2024 – 18th September, London. With absolute heroes and legends Laura Forlano, Sara Heitlinger and Paulina Yurman amongst others. There should be a word for when you suddenly find yourself platformed with people you’ve looked up to your whole career but feel massive inferiority next to. I’ll be wittering on about design and futures, and all you have to do is sit through that to get to the good stuff.
- Wherever, Whenever Festival – 22 October, Cologne. This looks huuuuuuge. I love chatting with Rob, who is the organiser of this shindig and a super lovely guy. I’m going to be looking at five speculative future-of-work scenarios. Then getting a train up to Eindhoven for:
- Design and AI Symposium at Dutch Design Week – 22/23 October, Eindhoven. I think I’m doing something both days: a keynote thing one day and a panel chat the other. More folks I’m a big fan of at the event, but I don’t think the full programme is released yet. I’m really looking forward to this one. Should be similar to the amazing panel we did in 2022.
Reading
- How to Raise your Artificial Intelligence. I’m a huge fan of Melanie Mitchell and Alison Gopnik’s work, and this light-touch chat seems to brush over most of their theories. It may have even been Mitchell’s ‘Why AI is Harder Than We Think‘ that firmed up my PhD thinking a few years ago with a sort of ‘ok, then why do people keep going on about it?’ This interview with them both seeks to redirect attention away from hysterical men talking about the end of the world or enormous profit, and towards the still worrying but more likely outcomes. Bonus points as well for pointing out that we already have a technology that kills millions of people each year: the car.
- Poking Holes in Reality with Prototypes from Libby Miller references some amazing practices, including Strange Telemetry’s work, but is essentially about how shonky, half-finished, sketchy prototypes challenge the inevitabilism of Silicon Valley’s technological visions.
- The Politics of Scaling from Sebastian Pfotenhauer, Brice Laurent, Kyriaki Papageorgiou, and Jack Stilgoe. Yes, this is once again going to become more of a PhD blog, and on my reading list was this paper on why and how scaling has become the normative political and economic imperative of social and political change, driven by technology.
- The Simple Macroeconomics of AI by Daron Acemoglu landed a few months ago but I only just got to reading through it. Well, reading through the bits that aren’t dense economic calculations. It basically argues that productivity gains won’t be that great: AI is mostly automating things that were already automated to some degree; the ease of integration for contextually complex tasks is overstated; AI can’t theoretically exceed human productivity on ‘hard tasks’ (like diagnosis) because it can only be trained on historic human performance; and the costs of fighting ‘bad AI’ (like crime and social media manipulation) dig deep into any productivity gains. He closes by basically saying that if you really want AI to have an (arguably) positive impact on society and the economy, then focus on making it good and reliable rather than messing around with sci-fi fantasies of human-like intelligence:
To put it simply, it remains an open question whether we need foundation models (or the current kind of LLMs) that can engage in human-like conversations and write Shakespearean sonnets if what we want is reliable information useful for educators, healthcare professionals, electricians, plumbers and other craft workers.

- AI Scaling Myths from the AI Snake Oil substack has some useful data to point to in the ongoing debate about pumping more data into LLMs to make them ‘better.’ What’s interesting about it is that they quite succinctly identify two collective myths that AI people hold to be true. One is that ‘emergence’ (surprising new behaviour, novelty, gestalt; e.g. doing things not in the training data, or inferring) is a product of more and more data. I’m not sure on this one because I don’t understand the science, but there are essentially two sides depending on whether you believe an LLM can surpass its training data or not. Logically I’d say not, but AI people are banking on it. The second is the idea that benchmark capability is equal to social usefulness. This one is a pretty obvious no from me, and a lot of my PhD is about trying to pull it apart. There’s a sort of baked-in assumption that with enough power and novelty, mass AI adoption and social change is inevitable, which is just a fundamental misunderstanding of how the world works. This is why AI people spend so much time talking about power and only vaguely hand-wave at what their AI is actually for.
- Contraption Theory, Venkatesh Rao’s love letter to helicopters, explores the typology of ‘contraptions’ – objects of enormous complexity and highly limited design variability that defy the tendency to simplify or variegate through design because of their ‘One Weird Trick.’ Arguably, the first ever talk I did at an event, while an undergrad, was about a similar thing, though I think I called it ‘Designs that Don’t Make Sense’ and talked about tourbillons (we have digital watches now), the SR-71 (a plane that barely flies, made out of Russian titanium) and Concorde (a giant stonking battleship at the end of battleships). I guess the point of all of those is that they’re more political or social objects than practical ones.
Listening
I listen to a lot of K and J-pop because it’s beaty, catchy, vibey and I can’t understand the lyrics so I’m not distracted. It’s that or techno or metal but at the moment I’m enjoying the good vibes.
Alright that’s about it. Another brief one with no big hot reckons but I’m sluggish. Love you, speak later.