
AZJonnie

(3,829 posts)
11. I used to see this all the time, but with the latest models of Claude and Gemini
Tue Apr 14, 2026, 01:14 PM

They are both doing this a LOT less lately with Sonnet 4.6 and Gemini 3.1.

I've seen entirely too many examples of chatbots offering one wrong answer after another, very confidently, and apologizing for each error and then offering another wrong answer, just as confidently.


Part of my point here is that they're getting better at figuring stuff out, i.e., not repeatedly giving wrong answers, promising to do better next time, and then making the same mistake again and again. I know exactly what you're talking about with that. Less than a year ago I was still seeing it a lot, especially on cheaper/free models. But they are moving past that, very rapidly, with the latest paid models.

And I'm sorry, but the question I posed required the model to do something that is not demonstrably different from "thinking." It couldn't have derived that answer from the mess of words I posted (some of it was redundant and unclear once I read it over again, yet the model wasn't misled by that) without understanding what I was asking. These things are thinking more and more like humans with every passing iteration. You can believe me or not, but I'm telling you, I use the shit every day. Yes, it still makes mistakes, but overall it's getting freaking smart as hell. It's scary.

