
highplainsdem

(61,431 posts)
Wed Mar 4, 2026, 07:52 PM Yesterday

Last year a study showed experienced software developers were slowed down by AI. This year they balked at being tested.

The same company, METR, wanted to test them.

My thread about that study, from last July:

AI slows down some experienced software developers, study finds
https://www.democraticunderground.com/10143494159

Link to last year's study: https://arxiv.org/abs/2507.09089

Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down.


They ran into problems conducting a study this year:

https://metr.org/blog/2026-02-24-uplift-update/

Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup. We additionally believe there have been selection effects due to a lower pay rate (we reduced the pay from $150/hr to $50/hr), and that our measurements of time-spent on each task are unreliable for the fraction of developers who use multiple AI agents concurrently.

-snip-

Our raw results show some evidence for speedup. Our early 2025 study found the use of AI causes tasks to take 19% longer, with a confidence interval between +2% and +39%. For the subset of the original developers who participated in the later study, we now estimate a speedup of -18% with a confidence interval between -38% and +9%. Among newly-recruited developers the estimated speedup is -4%, with a confidence interval between -15% and +9%.

-snip-

Recruitment and retention of developers has become more difficult. An increased share of developers say they would not want to do 50% of their work without AI, even though our study pays them $50/hour to work on tasks of their own choosing. Our study is thus systematically missing developers who have the most optimistic expectations about AI’s value.

-snip-

Some developers were less likely to complete tasks that they submitted if they were assigned to the AI-disallowed condition. One developer did not complete any of the tasks that were assigned to the AI-disallowed condition.

-snip-
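METR's sign conventions can be confusing: "+19%" means tasks took longer with AI, while a "speedup of -18%" means completion time went *down* 18%. A minimal sketch of the arithmetic, assuming the signed figures are percent changes in completion time and using a hypothetical 10-hour task (the baseline here is illustrative, not from the study):

```python
def completion_time(baseline_hours, pct_change):
    """Apply a signed percent change in completion time.

    Positive pct_change = slower with AI; negative = faster.
    """
    return baseline_hours * (1 + pct_change / 100)

baseline = 10.0  # hypothetical 10-hour task

print(round(completion_time(baseline, 19), 1))   # 11.9 -- 2025 study: 19% longer with AI
print(round(completion_time(baseline, -18), 1))  # 8.2  -- returning devs in the new study
print(round(completion_time(baseline, -4), 1))   # 9.6  -- newly recruited devs
```

Note that both of the new study's confidence intervals (-38% to +9%, and -15% to +9%) cross zero, so the point estimates are weak evidence either way — which is part of why METR calls the signal unreliable.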


As for what's going on here... I'd guess a lot of developers who like using AI for coding don't want to know whether it's actually making them more productive. Last year's findings were probably an unpleasant shock for them. They might not want to find out how slowly they'd code without AI, either - or whether other devs in the study could write software faster without AI than they could.

Much easier for them to continue using AI and assuming they're more productive.

And a very bright undergrad student (computer science and AI) at a college in India suggested the developers were addicted to AI.

https://www.linkedin.com/pulse/has-ai-already-crossed-addiction-threshold-developers-aditya-tomar-trvnc

Has AI Already Crossed The Addiction Threshold For Developers?

Aditya Tomar
Published Feb 26, 2026

-snip-

The effect that is obvious? The refusal. The psychological dependence. The “I really like using AI!” confession from someone paid to help science.

So let’s get uncomfortable: Is AI the first widely adopted technology that creates more skilled helplessness than actual skill?

We’ve seen this before with calculators and mental math, GPS and spatial reasoning, social media and attention spans. But never at the core of high-leverage intellectual work like software engineering. Never with tools that market themselves as “augmenting” while quietly eroding the very faculties they claim to enhance.

-snip-

Developers aren’t choosing AI because the data convinced them. They’re choosing it because not choosing it now feels like self-harm.



And what that student wrote last week about METR's study seems in agreement with this Business Insider article published today:

https://www.businessinsider.com/claude-outages-anthropic-ai-software-engineers-developers-coding-dependance-2026-3

Claude outages lay bare software developers' growing reliance on AI: 'I guess I'll write code like a caveman'

By Hugh Langley and Pranav Dixit

-snip-

Gauresh Pandit, a senior software engineer at Meta, told Business Insider that tools like Claude have quickly become embedded in engineers' day-to-day work. He said that when Claude went down, he turned his attention to non-coding tasks because he believed it might be slower to tackle the coding manually.

"It might not be that the muscle is lost but it feels like it's just simple to use an LLM even for the simplest things now, because it acts like a single button action to get things done," he said, referring to large language models.

"Claude outages hit way harder when you realize you've outsourced half your brain to it," one Redditor posted. Another joked: "I guess I'll write code like a caveman."

-snip-

meadowlander

(5,119 posts)
2. I'm not a developer, but I'll say in my job it has honestly probably slowed me down.
Wed Mar 4, 2026, 10:19 PM Yesterday

In almost every case, it takes longer to write a prompt that gets me 90% of the way to what I would write myself, and then to rewrite and recheck everything for that last 10%, than it takes to just do the thing myself.

The one thing I do find it useful for is executive summaries of something I've already written, but even there, polishing and re-polishing the result (and double-checking that AI hasn't dropped out something important or slipped in something untrue) takes more time and is less satisfying work than writing it myself.

I do like AI but I can't say hand on heart that it saves me any meaningful amount of time or significantly improves the quality of my work.

Where it's a nightmare is when new college grads with imposter syndrome use it for everything to sound smart and then I have to spend hours in peer review rewriting all the screeds of AI slop they produce. That significantly increases my workload because it takes twice as long and there's no learning curve when the newbie didn't write it themselves and has no intention of even trying to learn how to write it better in the future.

highplainsdem

(61,431 posts)
3. Thanks for the reply! Ever since genAI tools became widely available, I've seen this contrast between
Thu Mar 5, 2026, 09:44 AM 12 hrs ago

people who are actually experts on whatever genAI is generating, who point out that just catching and correcting its errors can eat up or exceed the time it supposedly saves, and people who aren't experts but want to pretend they are - imposter syndrome, as you said - who don't check the AI results carefully enough, and in some cases don't check at all.

There are now AI-written papers in scientific and medical journals online - papers containing what are clearly AI responses to prompts, which got past peer review that apparently either never happened or was itself done by AI.

I posted an OP recently that referred to an article mentioning a survey of developers that found about half trusted AI enough they didn't bother to check results. It's scary to think how much risky AI code is already in use.

I do like AI


I can't, because of the way it was trained, illegally, on stolen intellectual property. And with the intention of eliminating as many jobs as possible. AI companies tell workers the AI is meant to make their jobs easier, while the AI peddlers tell execs and company owners that the goal is to lay off those employees.

hunter

(40,609 posts)
4. Participating in the study evokes a feeling of existential dread...
Thu Mar 5, 2026, 12:59 PM 9 hrs ago

... as one realizes the problems they are solving have been solved thousands of times before and one, like Sisyphus, is pushing a boulder up an ever growing mountain of abstractions, only to see it roll down again before it reaches the top.

New creatures are not born and do not grow in the underworld of generative AI. It is a dead place of eternal repetition and toil.
