I’ve maintained for a while that LLMs don’t make you a more productive programmer; they just let you write bad code faster.
90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.
Glad to see there’s other programmers out there who actually take pride in their work.
It’s interesting that all the devs I already respected don’t use it, or use it very sparingly, while many of the devs I least respected sing its praises incessantly. Seems to me like “skill issue” is what leads to thinking this garbage is useful.
I’d rather hone my skills at writing better, more intelligible code than spend that same time learning how to make LLMs output slightly less shit code.
Whenever we don’t actively use and train our skills, they will inevitably atrophy. Something I think about quite often on this topic is Plato’s argument against writing. His view is that writing things down is “a recipe not for memory, but for reminder”, leading to a reduction in one’s capacity for recall and thinking. I don’t disagree with this, but where I differ is that I find it a worthwhile tradeoff when accounting for all the ways that writing increases my mental capacities.
For me, weighing the tradeoff is the most important gauge of whether a given tool is worthwhile or not. And personally, using an LLM for coding is not worth it when considering what I gain vs. lose from prioritising that over growing my existing skills and knowledge.
That’s certainly one possibility. But another possibility is that the people who praise LLMs are not very good at judging whether the code they generate is of good quality or not…
I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:
And if you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.
At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM: yes, I can give it guard rails and detailed prompts, but it can’t learn in the same way a teammate can. It will always require supervision and review of its output. Whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.
+10000 to this comment
A simple but succinct summary of the real cost of LLMs: literally everything human, traded for something that is just a twisted reflection of the greed of the richest.
This is exactly how I feel about LLMs. I will use them if I have to, to get something done that would otherwise be time-consuming or tedious. But I would never willingly sign up for a job where that’s all it is.
If it allows me to kick out code faster to meet whatever specs/acceptance criteria are laid out before me, fine. The hell do I care if the code is good or bad? If it works, it works. My company doesn’t give af about me. I’m just a number, no matter how many “we are family” speeches they give or how hard they push the “we are all a team and will win” line. We aren’t all a team. Why should I care about anything more than “does it work”? As long as profits go up, the company is happy. They don’t care how good or pretty my code is.
Tell me again how you’ve never become the subject matter expert on something simply because you were around when it was built.
Or had to overhaul a project due to a “post-live” requirements change a year later.
I write “good enough” code for me, so I don’t want to take a can opener to my head when I inevitably get asked to change things later.
It also lets me be lazier, as 9 times out of 10 I can get most of my code from a previous project and I already know it front to back. I get to fuck about and still get complex stuff out fast enough to argue for a raise.
Been the SME and completely architected and implemented the entire middleware server farm for my last company. First in IBM, after taking it over from someone else who started it: just a “here you go” takeover. Then moving from an IBM shop to Oracle, because the VP wanted a gold star and wouldn’t listen to anyone. I left when they were moving to Red Hat after the next VP came in and wanted their own gold star. A little over 400 servers. Been there, done that.
This person is right. But I think the methods we use to train them are what’s fundamentally wrong. Brute-force learning? Randomised datasets past the coherence/comprehension threshold? And the rationale is that this is done for the sake of optimisation and in the name of efficiency? I can see that overfitting is a problem, but did anyone look hard enough at this problem? Or did someone just jump a fence at the time, and then everyone decided to follow along and roll with it because it “worked”, and it somehow became the gold standard that nobody can question at this point?
The generalized learning is usually just the first step. Coding LLMs typically go through more rounds of specialized training afterwards in order to tune and focus them towards solving those types of problems. Then there’s RAG, MCP, and simulated reasoning, which are technically not training methods but do further improve the relevance of the outputs. There’s a lot of ongoing work in this space still. We haven’t even seen the standard settle yet.
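For anyone unfamiliar with the retrieval part, here’s a minimal, purely illustrative sketch of the RAG idea: pull the most relevant snippets out of the project and prepend them to the prompt so the model has context. All the snippet data and function names below are made up, and real setups use embedding models and vector stores rather than this toy bag-of-words similarity.

```python
# Toy sketch of retrieval-augmented prompting (RAG). Everything here is
# hypothetical: real systems use embeddings + a vector store, not word counts.
from collections import Counter
import math

# Stand-in for an indexed codebase.
SNIPPETS = [
    "def load_config(path): parse YAML config and return a dict",
    "def retry(fn, attempts=3): retry a callable with exponential backoff",
    "class UserRepo: CRUD helpers around the users table",
]

def bag_of_words(text: str) -> Counter:
    # Crude tokenisation: lowercase whitespace split.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two term-frequency vectors.
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank snippets by similarity to the task and keep the top k.
    q = bag_of_words(query)
    ranked = sorted(SNIPPETS, key=lambda s: cosine(q, bag_of_words(s)), reverse=True)
    return ranked[:k]

def build_prompt(task: str) -> str:
    # Prepend retrieved project context to the user's task.
    context = "\n".join(retrieve(task))
    return f"Relevant project code:\n{context}\n\nTask: {task}"

print(build_prompt("add retry logic when loading the config file"))
```

The point is only that the retrieval step happens outside the model: the weights don’t change, the prompt just gets more relevant context stuffed into it.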
Yeah, but what I meant was: we took a wrong turn along the way, but now that it’s set in stone, the sunk cost fallacy has taken over. We (as senior developers) are applying knowledge and approaches obtained through a trap we would absolutely caution and warn a junior against until the lesson sticks, because it IS a big deal.
Reminds me of this gem:

The researchers in the academic field of machine learning who came up with LLMs are certainly aware of their limitations and are exploring other possibilities, but unfortunately what happened in industry is that people noticed that one particular approach was good enough to look impressive and then everyone jumped on that bandwagon.
That’s not the problem though. Because if I apply my perspective I see this:
Someone took a shortcut because of an external time-crunch, left a comment about how this is a bad idea and how we should reimplement this properly later.
But the code worked and was deployed in a production environment despite the warning, and at that specific point it transformed from being “abstract procedural logic” to being “business logic”.
I think you’re misunderstanding that paragraph. It’s specifically explaining how LLMs are not like humans, and one way is that you can’t “nurture growth” in them the way you can for a human. That’s not analogous to refining your nvim config and habits.
Argument doesn’t check out. You can still manage people, and they can use whatever tools make them productive. Good understanding of the code & the ability to pass PR reviews aren’t going anywhere, nor is programmer skill.
Not unless the claims that companies are hiring fewer junior devs in favour of LLMs with senior coder oversight turn out to be true. If this is indeed a real trend and AGI is not achieved, we might have a senior coder shortage in the future.
I think this is true to some degree, but not exclusively true; new grads still get jobs. However, I think it’ll take some time for universities to catch up with the changes they need to make to refocus on architecture, systems design & skilled use of LLMs.
My opinion is that the demand for software is still dramatically higher than what can be met by hiring every single senior dev + LLM. I.e., there will need to be more people doing it in the future regardless of efficiency gains.




