Journal tags: language

Feedback

If you wanted to make a really crude approximation of project management, you could say there are two main styles: waterfall and agile.

It’s not as simple as that by any means. And the two aren’t really separate things; agile came about as a response to the failures of waterfall. But if we’re going to stick with crude approximations, here we go:

  • In a waterfall process, you define everything up front and then execute.
  • In an agile process, you start executing and then adjust based on what you learn.

So crude! Much approximation!

It only recently struck me that the agile approach is basically a cybernetic system.

Cybernetics is pretty much anything that involves feedback. If it’s got inputs and outputs that are connected in some way, it’s probably cybernetic. Politics. Finance. Your YouTube recommendations. Every video game you’ve ever played. You. Every living thing on the planet. That’s cybernetics.
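To make the feedback idea concrete, here’s a minimal sketch of a cybernetic loop in JavaScript: a thermostat-style controller whose output feeds back into its next input. The names and the gain of 0.5 are illustrative, not taken from any real system.

```javascript
// A minimal feedback loop: measure the output, compare it to a goal,
// and feed the difference back into the input.
function regulate(temperature, target, steps) {
  const readings = [];
  for (let i = 0; i < steps; i++) {
    // Output: how far are we from where we want to be?
    const error = target - temperature;
    // Feedback: adjust the input based on that output.
    temperature += error * 0.5;
    readings.push(temperature);
  }
  return readings;
}

const readings = regulate(10, 20, 5);
// Each reading is closer to the target than the last.
```

The loop never hits the target exactly; it keeps correcting based on what it learns from the previous step, which is the whole point.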

Fun fact: early on in the history of cybernetics, a bunch of folks wanted to get together at an event to geek about this stuff. But they knew that if they used the word “cybernetics” to describe the event, Norbert Wiener would show up and completely dominate proceedings. So they invented a new alias for the same thing. They coined the term “artificial intelligence”, or AI for short.

Yes, ironically the term “AI” was invented in order to repel a Reply Guy. Now it’s Reply Guy catnip. In today’s AI world, everyone’s a Norbert Wiener.

The thing that has the Wieners really excited right now in the world of programming is the idea of agentic AI. In this set-up, you don’t do any of the actual coding. Instead you specify everything up front and then have a team of artificial agents execute your plan.

That’s right; it’s a return to waterfall. But that’s not as crazy as it sounds. Waterfall was wasteful because execution was expensive and time-consuming. Now that execution is relatively cheap (you pay a bit of money to line the pockets of the worst people in exchange for literal tokens), you can afford to throw some spaghetti at the wall and see if it sticks.

But you lose the learning. The idea of a cybernetic system like, say, agile development, is that you try something, learn from it, and adjust accordingly. You remember what worked. You remember what didn’t. That’s learning.

Outsourcing execution to machines makes a lot of sense.

I’m not so sure it makes sense to outsource learning.

Magic

I don’t like magic.

I’m not talking about acts of prestidigitation and illusion. I mean the kind of magic that’s used to market technologies. It’s magic. It just works. Don’t think about it.

I’ve written about seamless and seamful design before. Seamlessness is often touted as the ultimate goal of UX—“don’t make me think!”—but it comes with a price. That price is the reduction of agency.

When it comes to front-end development, my distrust of magic tips over into being a complete control freak.

I don’t like using code that I haven’t written and understood myself. Sometimes it’s unavoidable. I use two JavaScript libraries on The Session: one for displaying interactive maps and another for generating sheet music. As dependencies go, they’re very good, but I still don’t like the feeling of being dependent on anything I don’t fully understand.

I can’t stomach the idea of using npm to install client-side JavaScript (which then installs more JavaScript, which in turn is dependent on even more JavaScript). It gives me the heebie-jeebies. I’m kind of astonished that most front-end developers have normalised doing daily trust falls with their codebases.

While I’m mistrustful of libraries, I’m completely allergic to frameworks.

Often I don’t distinguish between libraries and frameworks, but the distinction matters here. Libraries are bits of other people’s code that I call from my code. Frameworks are other people’s code that calls bits of my code.

Think of React. In order to use it, you basically have to adopt its idioms, its approach, its syntax. It’s a deeper level of dependency than just dropping in a regular piece of JavaScript.
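The difference is inversion of control. Here’s a toy sketch in plain JavaScript (the names are made up, not real packages): with a library, my code owns the control flow; with a framework, I hand over callbacks and it decides when they run.

```javascript
// A library is a bag of functions; my code decides when to call them.
const library = {
  slugify: (title) => title.toLowerCase().replace(/\s+/g, "-"),
};
const slug = library.slugify("Hello World"); // my code → their code

// A framework inverts that: it owns the control flow and calls my code.
function runFramework(routes, path) {
  const handler = routes[path]; // the framework decides which of my
  return handler();             // functions to invoke, and when
}
const html = runFramework({ "/": () => "<h1>Home</h1>" }, "/");
```

With the library, I could stop calling `slugify` tomorrow and nothing else changes. With the framework, my code only runs at all because the framework chooses to run it.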

I’ve always avoided client-side React because of its direct harm to end users (over-engineered bloated sites that take way longer to load than they need to). But the truth is that I also really dislike the extra layer of abstraction it puts between me and the browser.

Now, whenever there’s any talk about abstractions someone inevitably points out that, when it comes to computers, there’s always some layer of abstraction. If you’re not writing in binary, you don’t get to complain about an extra layer of abstraction making you uncomfortable.

I get that. But I still draw a line. When it comes to front-end development, that line is for me to stay as close as I can to raw HTML, CSS, and JavaScript. After all, that’s what users are going to get in their browsers.

My control freakery is not typical. It’s also not a very commercial or pragmatic attitude.

Over the years, I’ve stopped doing front-end development for client projects at work. Partly that’s because I’m pretty slow; it makes more sense to give the work to a better, faster developer. But it’s also because of my aversion to React. Projects came in where usage of React was a foregone conclusion. I wouldn’t work on those projects.

I mention this to point out that you probably shouldn’t adopt my inflexible mistrustful attitude if you want a career in front-end development.

Fortunately for me, front-end development still exists outside of client work. I get to have fun with my own website and with The Session. Heck, they even let me build the occasional hand-crafted website for a Clearleft event. I get to do all that the long, hard, stupid way.

Meanwhile in the real world, the abstractions are piling up. Developers can now use large language models to generate code. Sometimes the code is good. Sometimes it’s not. You should probably check it before using it. But some developers just YOLO it straight to production.

That gives me the heebie-jeebies, but then again, so did npm. Is it really all that different? With npm you dialled up other people’s code directly. With large language models, they first slurp up everyone’s code (like, the whole World Wide Web), run a computationally expensive process of tokenisation, and then give you the bit you need when you need it. In a way, large language model coding tools are like a turbo-charged npm with even more layers of abstraction.

It’s not for me but I absolutely understand why it can work in a pragmatic commercial environment. Like Alice said:

Knitting is the future of coding. Nobody knits because they want a quick or cheap jumper, they knit because they love the craft. This is the future of writing code by hand. You will do it because you find it satisfying but it will be neither the cheapest or quickest way to write software.

But as Dave points out:

And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.

I shudder!

But again, this too is nothing new. We’ve all seen those codebases that contain mysterious arcane parts that nobody dares touch. coughWebpackcough. The issue isn’t with the code itself, but with the understanding of the code. If the understanding of the code was in one developer’s head, and that person has since left, the code is dangerous and best left untouched.

This, as you can imagine, is a maintenance nightmare. That’s where I’ve seen the real cost of abstractions. Abstractions often really do speed up production, but you pay the price in maintenance later on. If you want to understand the codebase, you must first understand the abstractions used in the codebase. That’s a lot to document, and let’s face it, documentation is the first casualty of almost every project.

So perhaps my aversion to abstraction in general—and large language models in particular—is because I tend to work on long-term projects. This website and The Session have lifespans measured in decades. For these kinds of projects, maintenance is a top priority.

Large language model coding tools truly are magic.

I don’t like magic.

The premature sheen

I find Brian Eno to be a fascinating chap. His music isn’t my cup of tea, but I really enjoy hearing his thoughts on art, creativity, and culture.

I’ve always loved this short piece he wrote about singing with other people. I’ve passed that link onto multiple people who have found a deep joy in singing with a choir:

Singing aloud leaves you with a sense of levity and contentedness. And then there are what I would call “civilizational benefits.” When you sing with a group of people, you learn how to subsume yourself into a group consciousness because a capella singing is all about the immersion of the self into the community. That’s one of the great feelings — to stop being me for a little while and to become us. That way lies empathy, the great social virtue.

Then there’s the whole Long Now thing, a phrase that originated with him:

I noticed that this very local attitude to space in New York paralleled a similarly limited attitude to time. Everything was exciting, fast, current, and temporary. Enormous buildings came and went, careers rose and crashed in weeks. You rarely got the feeling that anyone had the time to think two years ahead, let alone ten or a hundred. Everyone seemed to be passing through. It was undeniably lively, but the downside was that it seemed selfish, irresponsible and randomly dangerous. I came to think of this as “The Short Now”, and this suggested the possibility of its opposite - “The Long Now”.

I was listening to my Huffduffer feed recently, where I had saved yet another interview with Brian Eno. Sure enough, there was plenty of interesting food for thought, but the bit that stood out to me was relevant to, of all things, prototyping:

I have an architect friend called Rem Koolhaas. He’s a Dutch architect, and he uses this phrase, “the premature sheen.” In his architectural practice, when they first got computers and computers were first good enough to do proper renderings of things, he said everything looked amazing at first.

You could construct a building in half an hour on the computer, and you’d have this amazing-looking thing, but, he said, “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”

I went to visit him one day when they were working on a big new complex for some place in Texas, and they were using matchboxes and pens and packets of tissues. It was completely analog, and there was no sense at all that this had any relationship to what the final product would be, in terms of how it looked.

It meant that what you were thinking about was: How does it work? What do we want it to be like to be in that place? You started asking the important questions again, not: What kind of facing should we have on the building or what color should the stone be?

I keep thinking about that insight: “It didn’t help us make good buildings. It helped us make things that looked like they might be good buildings.”

Substitute the word “buildings” for whatever output is supposedly being revolutionised by generative models today. Websites. Articles. Public policy.

Bóthar

England is criss-crossed by routes that were originally laid down by the Romans. When it came time to construct modern roads, it often made sense to use these existing routes rather than trying to designate entirely new ones. So some of the roads in England are like an early kind of desire path.

Desire paths are something of a cliché in the UX world. They’re the perfect metaphor for user-centred design; instead of trying to make people take a pre-defined route, let them take the route that’s easiest for them and then codify that route.

This idea was enshrined into the very design principles of HTML as “pave the cowpaths”:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

Ireland never had any Roman roads. But it’s always had plenty of cowpaths.

The Irish word for cow is bó.

The Irish word for road is bóthar, which literally means “cowpath”.

The cowpaths were paved in both the landscape and the language.

Cryosleep

On the last day of UX London this year, I was sitting and chatting with Rachel Coldicutt who was going to be giving the closing keynote. Inevitably the topic of conversation worked its way ’round to “AI”. I remember Rachel having a good laugh when I summarised my overall feeling:

I kind of wish I could go into suspended animation and be woken up when all this is over and things have settled down one way or another.

I still feel that way. Like Gina, I’d welcome a measured approach to this technology. As Anil puts it:

Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

I very much look forward to using language models (probably small and local) to automate genuinely tedious tasks. That’s a very different vision to what the slopagandists are pushing. Or, like Paul Ford says:

Make it boring. That’s what’s interesting.

Fortunately, my cryosleep-awakening probably isn’t too far off. You can smell it in the air, that whiff of a bubble about to burst. And while it will almost certainly be messy, it’s long overdue.

Paul Ford again:

I’ve felt so alienated from tech over the past couple of years. Part of it is the craven authoritarianism. It dampens the mood. But another part is the monolithic narrative—the fact that we live in a world where there seem to be only a few companies, only a few stories going at any time, and everything reduces to politics. God, please let it end.

Coattails

When I talk about large language models, I make sure to call them large language models, not “AI”. I know it’s a lost battle, but the terminology matters to me.

The term “AI” can encompass everything from a series of if/else statements right up to Skynet and HAL 9000. I’ve written about this naming collision before.

It’s not just that the term “AI” isn’t useful, it’s so broad as to be actively duplicitous. While talking about one thing—like, say, large language models—you can point to a completely different thing—like, say, machine learning or computer vision—and claim that they’re basically the same because they’re both labelled “AI”.

If a news outlet runs a story about machine learning in the context of disease prevention or archaeology, the headline will inevitably contain the phrase “AI”. That story will then gleefully be used by slopagandists looking to inflate the usefulness of large language models.

Conflating these different technologies is the fallacy at the heart of Robin Sloan’s faulty logic:

If these machines churn through all media, and then, in their deployment, discover several superconductors and cure all cancers, I’d say, okay … we’re good.

John Scalzi recently wrote:

“AI” is mostly a marketing phrase for a bunch of different processes and tools which in a different era would have been called “machine learning” or “neural networks” or something else now horribly unsexy.

But I’ve noticed something recently. More than once I’ve seen genuinely useful services refer to their technology as “traditional machine learning”.

First off, I find that endearing. Like machine learning is akin to organic farming or hand-crafted furniture.

Secondly, perhaps it points to a severing of the ways between machine learning and large language models.

Up until now it may have been mutually beneficial for them to share the same marketing term, but with the bubble about to burst, anything to do with large language models might become toxic by association, including the term “AI”. Hence the desire to shake the large language model grifters from the coattails of machine learning and computer vision.

Donegal to Galway to Clare

After spending a week immersed in the language and the landscape of Glencolmcille, Jessica and I were headed to Miltown Malbay for the annual Willie Clancy music week.

I could only get us accommodation from the Monday onwards so we had a weekend in between Donegal and Clare. We decided to spend it in Galway.

We hadn’t booked any travel from Glencolmcille to Galway and that worked out fine. We ended up getting a lift from a fellow student (and fellow blogger) heading home to Limerick.

Showing up in Galway on a busy Saturday afternoon was quite the change after the peace and quiet of Glencolmcille. But we dove right in and enjoyed a weekend of good food and music.