General Discussion
Last year a study showed experienced software developers were slowed down by AI. This year they balked at being tested.
The same research organization, METR, wanted to test them again.
My thread about that study, from last July:
AI slows down some experienced software developers, study finds
https://www.democraticunderground.com/10143494159
Link to last year's study: https://arxiv.org/abs/2507.09089
They ran into problems conducting a study this year:
https://metr.org/blog/2026-02-24-uplift-update/
-snip-
Our raw results show some evidence for speedup. Our early 2025 study found the use of AI causes tasks to take 19% longer, with a confidence interval between +2% and +39%. For the subset of the original developers who participated in the later study, we now estimate a slowdown of -18% with a confidence interval between -38% and +9%. Among newly-recruited developers the estimated slowdown is -4%, with a confidence interval between -15% and +9%.
-snip-
Recruitment and retention of developers has become more difficult. An increased share of developers say they would not want to do 50% of their work without AI, even though our study pays them $50/hour to work on tasks of their own choosing. Our study is thus systematically missing developers who have the most optimistic expectations about AI's value.
-snip-
Some developers were less likely to complete tasks that they submitted if they were assigned to the AI-disallowed condition. One developer did not complete any of the tasks that were assigned to the AI-disallowed condition.
-snip-
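A quick note on the sign convention in those numbers - this is a minimal sketch, not METR's actual analysis code, and the task times below are invented purely for illustration:

def slowdown_pct(time_with_ai, time_without_ai):
    # Positive = tasks took longer with AI allowed; negative = they finished faster.
    return (time_with_ai / time_without_ai - 1) * 100

# Last year's headline result: tasks took 19% longer with AI allowed.
print(slowdown_pct(71.4, 60.0))   # ~ +19.0

# This year's point estimate for the returning developers: -18%, i.e. faster with AI.
print(slowdown_pct(49.2, 60.0))   # ~ -18.0

An estimate of -18% with a confidence interval running from -38% to +9% straddles zero, which is why METR describes it only as "some evidence for speedup" rather than a firm result.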
As for what's going on here... I'd guess a lot of developers who like using AI for coding don't want to know if it isn't making them much more productive. Last year's findings were probably an unpleasant shock for them. They might not want to find out just how slow they'd be doing that coding without AI, either - whether other devs being tested could write software faster without AI than they could.
Much easier for them to continue using AI and assuming they're more productive.
And a very bright undergrad student (computer science and AI) at a college in India suggested the developers were addicted to AI.
https://www.linkedin.com/pulse/has-ai-already-crossed-addiction-threshold-developers-aditya-tomar-trvnc
Aditya Tomar
Published Feb 26, 2026
-snip-
The effect that is obvious? The refusal. The psychological dependence. The "I really like using AI!" confession from someone paid to help science.
So let's get uncomfortable: Is AI the first widely adopted technology that creates more skilled helplessness than actual skill?
We've seen this before with calculators and mental math, GPS and spatial reasoning, social media and attention spans. But never at the core of high-leverage intellectual work like software engineering. Never with tools that market themselves as "augmenting" while quietly eroding the very faculties they claim to enhance.
-snip-
Developers aren't choosing AI because the data convinced them. They're choosing it because not choosing it now feels like self-harm.
And what that student wrote last week about METR's study seems in agreement with this Business Insider article published today:
https://www.businessinsider.com/claude-outages-anthropic-ai-software-engineers-developers-coding-dependance-2026-3
By Hugh Langley and Pranav Dixit
-snip-
Gauresh Pandit, a senior software engineer at Meta, told Business Insider that tools like Claude have quickly become embedded in engineers' day-to-day work. He said that when Claude went down, he turned his attention to non-coding tasks because he believed it might be slower to tackle the coding manually.
"It might not be that the muscle is lost but it feels like it's just simple to use an LLM even for the simplest things now, because it acts like a single button action to get things done," he said, referring to large language models.
"Claude outages hit way harder when you realize you've outsourced half your brain to it," one Redditor posted. Another joked: "I guess I'll write code like a caveman."
-snip-
highplainsdem
(61,431 posts)

meadowlander
(5,119 posts)

In almost every case, it takes longer to write a prompt that will get me 90% of the way to what I would write myself, and then rewrite and recheck everything for that last 10%, than it would take me to just do the thing myself.
The one thing I do find it useful for is executive summaries of something I've already written, but even there the polishing and polishing and polishing (and double-checking that the AI hasn't dropped something important or slipped in something untrue) takes more time and is less satisfying work than writing it myself.
I do like AI but I can't say hand on heart that it saves me any meaningful amount of time or significantly improves the quality of my work.
Where it's a nightmare is when new college grads with imposter syndrome use it for everything to sound smart, and then I have to spend hours in peer review rewriting all the screeds of AI slop they produce. That significantly increases my workload, because it takes twice as long and there's no learning happening when the newbie didn't write it themselves and has no intention of even trying to learn how to write it better in the future.
highplainsdem
(61,431 posts)

There's a big difference between people who are actually experts on whatever genAI is generating, who point out that just catching and correcting its errors can eat up or exceed the time it supposedly saves, and people who aren't experts but want to pretend they are - imposter syndrome, as you said - who don't check the AI results carefully enough, and in some cases don't check at all.
There are now AI-written papers in scientific and medical journals online - papers that got past peer review that apparently either never happened or was itself done by AI - and they include what are clearly AI responses to prompts.
I posted an OP recently that referred to an article mentioning a survey of developers that found about half trusted AI enough they didn't bother to check results. It's scary to think how much risky AI code is already in use.
I can't say the same, because of the way it was trained: illegally, on stolen intellectual property, and with the intention of eliminating as many jobs as possible. AI companies tell workers the AI is meant to make their jobs easier, while the AI peddlers tell execs and company owners that the goal is to lay off those employees.
hunter
(40,609 posts)

... as one realizes the problems they are solving have been solved thousands of times before and one, like Sisyphus, is pushing a boulder up an ever-growing mountain of abstractions, only to see it roll down again before it reaches the top.
New creatures are not born and do not grow in the underworld of generative AI. It is a dead place of eternal repetition and toil.