
General Discussion


highplainsdem

(62,660 posts)
Wed Apr 22, 2026, 12:32 PM

Real-life horror story about a leukemia patient who died after AI gave him a misdiagnosis & he rejected doctors' advice

Found out about this today after reading Gary Marcus's Substack post from yesterday:

Please don’t trust your chatbot for medical advice
https://substack.com/home/post/p-194902044

That links to a NY Times article from April 13 about Ben Riley, a friend of Gary's - and the son of that leukemia patient who died after refusing treatment for too long because he believed AI over his doctors:

He Warned About the Dangers of A.I. If Only His Father Had Listened.
https://www.nytimes.com/2026/04/13/well/ai-chatbots-cancer.html

By summer 2025, Joe had become much sicker. He had gained 80 pounds from steroids he was taking to manage his symptoms. Lymph nodes all over his body had swelled, including one on his neck that made it painful to move his head. His white blood cell count was 10 times higher than when Dr. Marzbani first started recommending treatment, a sign the cancer had rapidly spread.

Joe’s window for treatment was quickly closing. The more frail Joe became, the less likely he was to tolerate the medications. Dr. Marzbani decided to confront him.

“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”

Joe sent him a research report he generated with Perplexity.



Ben Riley's Substack post about what happened to his father:

The role of AI in the death of my father
https://buildcognitiveresonance.substack.com/p/the-role-of-ai-in-the-death-of-my

It was a shock when I discovered what was happening, as you might imagine. I only discovered what was going on when my father gave me access to his online medical record, allowing me to peer into his long-running correspondence with his oncologist. From that I learned that my dad had used Perplexity to self-diagnose his condition and had sent the Perplexity report, if it can be called that, along to his very perplexed and frustrated doctor. Given that I’d spent the better part of a year talking with my father about the unreliability of factual statements made by AI, you can only imagine my extreme frustration discovering that my efforts had utterly failed within my own family.

AI enthusiasts, whether in education or more broadly, will often try to cover their asses from responsibility for non-factual statements by AI models by saying, “well, you always need to check their output.” As a general matter, that’s a ludicrous claim, since the whole value proposition of these tools is to spare us cognitive effort—but in this instance, it’s exactly what I did. I contacted the doctors who led the study that Perplexity cited in support of its statement that refraining from Ven-Obi was the proper course of action for someone with Richter’s. Much to my surprise, both doctors replied straightaway, and confirmed what I already knew to be true, that Perplexity had misstated the conclusion of their research, and that my father should follow the course of treatment his oncologist was recommending.

Of course I immediately passed this information along to my dad, desperately hoping to appeal to his scientific and empirically oriented belief system. But he didn’t respond at all. I was yelling into the void. It was only after several more months passed, and after his physical condition continued to worsen dramatically, before he finally agreed to start the Ven-Obi treatment his oncologist had recommended a year prior. It didn’t seem to matter at that point, sadly. Although the treatment immediately reduced his white blood cell count, his pain endured, and culminated in his death just a few weeks ago.


Despite horror stories like this, AI companies continue to push their deeply flawed generative AI tools for researching health and medical topics, and the Trump regime wants ever wider use of genAI.