Roles aren’t disappearing; capabilities are expanding. And often the problem isn’t the system - it’s the prompt. I saw that firsthand at this year’s Pragmatic Summit in San Francisco.
I was at Pragmatic Summit when Chip Huyen reframed the AI conversation: if any product can be generated from a clear description, code isn’t the constraint, and the true value lies elsewhere.
After spending time with OpenClaw and seeing how it actually works, I’m convinced the hype is real. It shows that autonomous AI agents are finally living up to their promise.
I get it - goals often feel like extra homework. But I’ve found they don’t have to be. Done right, they can keep you focused, accelerate your learning, and guide better decisions.
I was in the room at this year’s Pragmatic Summit when Laura Tacho dropped the numbers: nearly all developers use AI coding assistants, over a quarter of production code is AI-written - and yet productivity gains haven’t budged past 10%.
The share of AI-written code is expected to rise to 65% within two years. Yet according to Sonar research, 96% of developers say they don’t fully trust AI-generated code.
With AI building features, teams must shift from doing tasks to orchestrating them - PMs guide intent, engineers oversee systems, designers review output live, and QA builds self-healing processes.
Building with LLMs is nothing like building traditional software. If we want something that actually works in production, we have to test it, monitor it, and keep iterating on real customer data.