General Discussion
"Tokenmaxxing" is making developers less productive than they think (TechCrunch, April 17)
Another article on AI code generators not helping as much as many developers believe.
No paywall:
https://techcrunch.com/2026/04/17/tokenmaxxing-is-making-developers-less-productive-than-they-think/
He says that engineering managers are seeing code acceptance rates of 80% to 90% (the share of AI-generated code that developers approve and keep), but they're missing the churn that happens when engineers have to revise that code in the following weeks, which drives the real-world acceptance rate down to between 10% and 30% of generated code.
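As a rough illustration (a hypothetical sketch, not the article's actual methodology), the gap between the surface acceptance rate and the retained share of generated code can be expressed as:

```python
# Hypothetical sketch of the gap the article describes: code can be
# "accepted" at commit time but later rewritten, so the share of
# AI-generated lines that actually survive is much lower.

def retained_rate(accept_rate: float, churn_rate: float) -> float:
    """Fraction of generated lines still in the codebase after churn.

    accept_rate: share of generated code initially approved and kept.
    churn_rate: share of that accepted code rewritten in later weeks.
    (Both parameters are illustrative, not figures from the article.)
    """
    return accept_rate * (1.0 - churn_rate)

# A 90% acceptance rate with 80% of that code later revised leaves
# only 18% of generated code in place, inside the 10-30% band the
# article reports.
print(retained_rate(0.9, 0.8))
```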
-snip-
GitClear, another company in this space, published a report in January that found AI tools increased productivity, but its data also showed that regular AI users averaged 9.4x higher code churn than their non-AI counterparts, more than double the productivity gains the tools provided.
Faros AI, an engineering analytics platform, drew on two years of customer data for its March 2026 report. The finding: code churn (lines of code deleted versus lines added) had increased 861% under high AI adoption.
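For reference, a minimal version of the churn metric as defined in that excerpt (lines deleted relative to lines added) might look like this; the function name and figures are illustrative, not taken from the Faros AI report:

```python
# Illustrative computation of code churn as defined in the article:
# lines of code deleted versus lines of code added over some window.

def churn_ratio(lines_deleted: int, lines_added: int) -> float:
    """Deleted-to-added ratio; higher means more rework per new line."""
    if lines_added == 0:
        return 0.0  # no additions in the window, treat churn as zero
    return lines_deleted / lines_added

# Example: a team that deletes 430 lines while adding 1000 has a
# churn ratio of 0.43, i.e. 43 lines removed per 100 added.
print(churn_ratio(430, 1000))  # 0.43
```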
Jellyfish, which bills itself as an intelligence platform for AI-integrated engineering, collected data on 7,548 engineers in the first quarter of 2026. The firm found that the engineers with the largest token budgets produced the most pull requests (proposed changes to a shared codebase), but the productivity improvement didn't scale. They achieved two times the throughput at ten times the cost in tokens. In other words, the tools are generating volume, not value.
-snip-
patphil
(9,142 posts)
AI is inherently unable to have the kind of "Aha" moment where an idea gets understood, and the best possible solution is seen.
I don't see AI being anything more than a GIGO generator if it's not working with human engineers.
There's no elegance in AI solutions; no creativity, no sense of ownership of the code it creates. How does something with those kinds of limitations test the code? How can it really challenge what it has built?
End users are also a necessary part of the process.
We always had end users as part of our team, because nobody can break code better than someone whose expectations of use aren't always in the build documents.
AI may be there some day, but not for quite a while. Right now it's dangerous to allow AI to do complex coding without human review and intervention.