Instance: programming.dev
Joined: 5 months ago
Posts: 133
Comments: 45
London-based software development consultant
Posts and Comments by codeinabox, codeinabox@programming.dev
I think you’re misconstruing the author’s argument, at no point does the author imply that Claude knows best, or that Electron apps are better. Their closing argument is certainly not an endorsement for Electron or AI slop.
Don’t get me wrong: writing this brings me no joy. I don’t think web is a solution either. I just remember good times when native did a better-than-average job, and we were all better for using it, and it saddens me that these times have passed.
I just don’t think that kidding ourselves that the only problem with software is Electron, and that it all will be butterflies and unicorns once we rewrite Slack in SwiftUI, is productive. The real problem is a lack of care. And the slop; you can build it with any stack.
Imagine being such a slop-brainwashed fanboi
Do you have any evidence for this? Looking through the post, and the author’s other blog post titles, there is very little mention of AI or Claude.
Instead of throwing labels at the author, it’s much more worthwhile to discuss their key argument about the challenges of developing native apps.
I wonder if we’ll end up in a situation of open source projects with closed source tests. Though I don’t know how that would work, because how would you contribute a new feature if the tests are closed? 🤔
There are some really good tips on delivery and best practice, in summary:
Speed comes from making the safe thing easy, not from being brave about doing dangerous things.
Fast teams have:
- Feature flags so they can turn things off instantly
- Monitoring that actually tells them when something’s wrong
- Rollback procedures they’ve practiced
- Small changes that are easy to understand when they break
Slow teams are stuck because every deploy feels risky. And it is risky, because they don’t have the safety nets.
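The feature-flag item from that list can be this simple in practice. A minimal Python sketch, where the flag names and in-memory store are purely illustrative (real systems back flags with a config service so they can be flipped without a deploy):

```python
import threading

class FeatureFlags:
    """Tiny in-memory flag store; real systems back this with a config service."""

    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})
        self._lock = threading.Lock()  # flags may be flipped from another thread

    def is_enabled(self, name):
        with self._lock:
            return self._flags.get(name, False)  # unknown flags default to off

    def set(self, name, value):
        with self._lock:
            self._flags[name] = value

flags = FeatureFlags({"new_checkout": True})

def checkout(cart):
    if flags.is_enabled("new_checkout"):
        return "new flow"  # risky new code path
    return "old flow"      # battle-tested fallback

# Something breaks in production: turn the new path off instantly, no deploy.
flags.set("new_checkout", False)
```

The point of the sketch is the last line: switching back to the safe path is a data change, not a code change, which is what makes deploys feel less risky.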
I think there’s many solutions to this, including setting a minimum account age to accept pull requests from, or using Vouch.
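The minimum-account-age idea is only a few lines to express; a sketch in Python, with the 90-day threshold purely illustrative (a Vouch-style endorsement could act as an override):

```python
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=90)  # illustrative threshold

def may_submit_pr(account_created, now=None):
    """Gate pull requests on account age; endorsements could override this."""
    now = now or datetime.now(timezone.utc)
    return now - account_created >= MIN_ACCOUNT_AGE
```

In a real setup this check would run in CI against the forge's account metadata rather than as a standalone function.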
Guys, can we add a rule that all posts that deal with using LLM bots to code must be marked? I am sick of this topic.
How would you like them to be marked? AFAIK Lemmy doesn’t support post tags
What I’m saying is the post is broadly about programming, and how that has changed over the decades, so I posted it in the community I thought was most appropriate.
If you’re arguing that articles posted in this community can’t discuss AI and its impact on programming, then that’s something you’ll need to take up with the moderators.
In fact, this garbage blogspam should go on the AI coding community that was made specifically because the subscribers of the programming community didn’t want it here.
This article may mention AI coding but I made a very considered decision to post it in here because the primary focus is the author’s relationship to programming, and hence worth sharing with the wider programming community.
Considering how many people have voted this up, I would take that as a sign I posted it in the appropriate community. If you don’t feel this post is appropriate in this community, I’m happy to discuss that.
My nuanced reply was in response to the nuances of the parent comment. I thought we shared articles to discuss their content, not the grammar.
Regardless of what the author says about AI, they are bang on with this point:
You have the truth (your code), and then you have a human-written description of that truth (your docs). Every time you update the code, someone has to remember to update the description. They won’t. Not because they’re lazy, but because they’re shipping features, fixing bugs, responding to incidents. Documentation updates don’t page anyone at 3am.
On a previous project I worked on, we had a manually maintained Swagger document, which was meant to be the source of truth for the API and kept in sync with the code. However, no one kept it in sync, except when I reminded them to.
Based on that and other past experiences, I think it’s easier for the code to be the source of truth, and use that to generate your API documentation.
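As a stdlib-only sketch of that idea in Python (the route functions and the helper are hypothetical, just to show documentation being derived from the code rather than maintained alongside it):

```python
import inspect

def describe_api(funcs):
    """Generate a Markdown API reference from the code itself.

    A sketch only; real projects would use tooling that emits OpenAPI
    from route definitions, so the docs can never drift from the code.
    """
    lines = []
    for func in funcs:
        lines.append(f"### `{func.__name__}{inspect.signature(func)}`")
        lines.append(inspect.getdoc(func) or "_No description._")
    return "\n\n".join(lines)

def get_user(user_id: int):
    """Fetch a single user by numeric id."""
    ...

def list_users(active_only: bool = True):
    """List users, optionally filtering to active accounts."""
    ...

print(describe_api([get_user, list_users]))
```

Rename a parameter or change a signature and the generated docs change with it, which is the whole appeal of code as the source of truth.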
There are plenty of humans using the em dash; how do you think large language models learnt to use it in the first place? NPR even did an episode on it called Inside the unofficial movement to save the em dash — from A.I.
There is much debate about whether use of the em dash is a reliable signal of AI-generated content.
It would be more effective to compare this post with the author’s posts before gen AI, and see if there has been a change in writing style.
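A crude sketch of what that comparison could look like in Python; the sample texts are invented, and em-dash rate is only one of many possible style signals:

```python
def em_dash_rate(text):
    """Em dashes per 1,000 words; a crude but comparable style metric."""
    words = text.split()
    if not words:
        return 0.0
    return 1000 * text.count("\u2014") / len(words)

# Hypothetical corpora: the author's posts before and after late 2022.
pre_ai = "The build failed again. We fixed it by pinning the compiler version."
post_ai = "The build failed \u2014 again \u2014 so we pinned \u2014 carefully \u2014 the version."

print(round(em_dash_rate(pre_ai), 1), round(em_dash_rate(post_ai), 1))
```

A real comparison would need a decent sample of posts from both periods and more than one metric before any conclusion could be drawn.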
This quote on the abstraction tower really stood out for me:
I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.
They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.
But sure. AI is the moment they lost track of what’s happening.
The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.
Even if the bubble pops, the existing large language models will remain, as will AI assisted coding.
Instead, most organisations don’t tackle technical debt until it causes an operational meltdown. At that point, they end up allocating 30–40% of their budget to massive emergency transformation programmes—double the recommended preventive investment.
I can very much relate to this statement. Many contracts I’ve worked on in the last few years have been transformation programmes, where an existing product is rewritten and replatformed, often because of the level of tech debt in the legacy system.
I originally shared this after stumbling upon it in one of Martin Fowler’s posts.
The article reminds me of how my mother used to buy dress patterns, blueprints if you will, for making her own clothes. This no-code library is much the same, because it offers blueprints you can use to build your own implementation.
So the thing that interests me is what has more value: the code or the specifications? You could argue in this age of AI assisted coding that code is cheap but business requirements still involve a lot of effort and research.
To give a non-coding example, I’ve been wanting to get some cupboards built, and every time I contact a carpenter about this, it’s quite expensive to get something bespoke made. However, if I could buy blueprints that I could tweak, then in theory, I could get a handyman to build it for a lower cost.
This is a very roundabout way of saying I do think there are some scenarios where the specifications would be more beneficial than the implementation.
Thank you everyone for your input. I have created a separate community, !aicoding@programming.dev, for AI coding related discussions.
I agree with you on that point, and the same could be said about the meat and dairy industry. However I don’t think the answer is censoring discussions about cooking beef or chicken.
You can’t compare racist posts, which are a form of hate speech and a breach of this instance’s code of conduct, with discussions about topics that you don’t agree with.

The End of Coding? Wrong Question (architecture-weekly.com)
What LLMs revealed is how many people in our industry don’t like to code.
Cloudflare rewrites Next.js as AI rewrites commercial open source (blog.pragmaticengineer.com)
Relicensing with AI-assisted rewrite (tuananh.net)
In the world of open source, relicensing is notoriously difficult. It usually requires the unanimous consent of every person who has ever contributed a line of code, a feat nearly impossible for legacy projects. chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.
What Is Code Review For? (blog.glyph.im)
Claude is an Electron App because we’ve lost native (tonsky.me)
API-wise, native apps lost to web apps a long time ago. Native APIs are terrible to use, and OS vendors use everything in their power to make you not want to develop native apps for their platform.
Why 83% of organizations reportedly trust open source with their most sensitive assets (thenewstack.io)
Meta gave React its own foundation. But it's not letting go just yet. (thenewstack.io)
npmx: A Lesson in Open Source's Collaboration Feedback Loops (opensourcepledge.com)
Gram 1.0 released (gram.liten.app)
This is the first release of Gram, an open source code editor with built-in support for many popular languages. Gram is an opinionated fork of the Zed code editor. It shares many of the features of Zed, but is also different in some very important ways.
Nobody knows how the whole system works (surfingcomplexity.blog)
Yes, and... (htmx.org)
I teach computer science at Montana State University. I am the father of three sons who all know I am a computer programmer and one of whom, at least, has expressed interest in the field. I love computer programming and try to communicate that love to my sons, the students in my classes and anyone else who will listen.
Welcoming the Open Source Endowment (opensourcepledge.com)
The Generative AI Policy Landscape in Open Source (redmonk.com)
I compiled and analyzed the generative AI policies of 32 open source organizations, including foundations like the Linux Foundation, Apache, and Eclipse, as well as individual projects like the Linux Kernel, Gentoo, curl, and Matplotlib.
Tests Are The New Moat (saewitz.com)
It used to be that good documentation, strong contracts, well designed interfaces, and a comprehensive test suite meant users could trust your platform. Help you develop it further. That it was rigid and well designed. And yet, all of these things actually just make it easier for competing companies to re-build your work on their own foundations.
Don't Accomplish Everything (tidyfirst.substack.com)
If you haven’t heard of 3X, it’s a framework for thinking about how Facebook in those fairly-early days (2011) managed to:
Insider amnesia (seangoedecke.com)
Speculation about what’s really going on inside a tech company is almost always wrong.
Programming as theory building is true now more than ever (slightknack.dev)
I’ve repeatedly brought up the paper Programming as Theory Building in conversation with friends this past week, so I figured it would be good to write up the common thread of these conversations and discuss how the ideas in the paper are relevant today.