
highplainsdem

(62,658 posts)
4. Those chatbots are designed both to sound authoritative and to flatter the AI user to keep them
Wed Apr 22, 2026, 01:17 PM

using that chatbot. It's a very dangerous combination: there's no real intelligence there, and the generative AI model can hallucinate in almost any way at any time.

AI companies, to avoid legal liability, warn users that the AI can make mistakes and that they should check its results. Yet they also market AI primarily as a time-saver, which directly contradicts the advice to check the results. Nor do they warn people that the AI getting part of the results right does NOT mean the rest is right — part or even all of the remainder may be wrong. Many AI users never check at all, or verify a detail or two and assume that if those are correct, the rest must be too.

GenAI models should never have been released.

