
highplainsdem

(61,486 posts)
3. Thanks for the reply! Ever since genAI tools became widely available, I've seen this contrast between
Thu Mar 5, 2026, 09:44 AM

people who are actually experts in whatever the genAI is generating, who point out that merely catching and correcting its errors can eat up or even exceed the time it supposedly saves, and people who aren't experts but want to pretend they are - imposter syndrome, as you said - who don't check the AI's output carefully enough, and in some cases don't check it at all.

There are now AI-written papers in scientific and medical journals online - papers that got past a peer review process that apparently either never happened or was itself done by AI - that still include what are clearly AI responses to prompts.

I posted an OP recently that referred to an article about a survey of developers, which found that about half trusted AI enough that they didn't bother to check its results. It's scary to think how much risky AI code is already in use.

I do like AI


I can't, because of the way it was trained - illegally, on stolen intellectual property - and with the intention of eliminating as many jobs as possible. AI companies tell workers the tools are meant to make their jobs easier, while the same peddlers tell execs and company owners that the goal is to lay off those employees.

