
highplainsdem

(62,658 posts)
Wed Apr 22, 2026, 12:32 PM 16 hrs ago

Real-life horror story about a leukemia patient who died after AI gave him a misdiagnosis & he rejected doctors' advice

Found out about this today after reading Gary Marcus's Substack post from yesterday:

Please don’t trust your chatbot for medical advice
https://substack.com/home/post/p-194902044

That links to a NY Times article from April 13 about Ben Riley, a friend of Gary's - and the son of that leukemia patient who died after refusing treatment for too long because he believed AI over his doctors:

He Warned About the Dangers of A.I. If Only His Father Had Listened.
https://www.nytimes.com/2026/04/13/well/ai-chatbots-cancer.html

By summer 2025, Joe had become much sicker. He had gained 80 pounds from steroids he was taking to manage his symptoms. Lymph nodes all over his body had swelled, including one on his neck that made it painful to move his head. His white blood cell count was 10 times higher than when Dr. Marzbani first started recommending treatment, a sign the cancer had rapidly spread.

Joe’s window for treatment was quickly closing. The more frail Joe became, the less likely he was to tolerate the medications. Dr. Marzbani decided to confront him.

“Why do you believe this?” he remembered asking Joe during one appointment. “Where’s this coming from?”

Joe sent him a research report he generated with Perplexity.



Ben Riley's Substack post about what happened to his father:

The role of AI in the death of my father
https://buildcognitiveresonance.substack.com/p/the-role-of-ai-in-the-death-of-my

It was a shock when I discovered what was happening, as you might imagine. I only discovered what was going on when my father gave me access to his online medical record, allowing me to peer into his long-running correspondence with his oncologist. From that I learned that my dad had used Perplexity to self-diagnose his condition and had sent the Perplexity report, if it can be called that, along to his very perplexed and frustrated doctor. Given that I’d spent the better part of a year talking with my father about the unreliability of factual statements made by AI, you can only imagine my extreme frustration discovering that my efforts had utterly failed within my own family.

AI enthusiasts, whether in education or more broadly, will often try to cover their asses from responsibility for non-factual statements by AI models by saying, “well, you always need to check their output.” As a general matter, that’s a ludicrous claim, since the whole value proposition of these tools is to spare us cognitive effort—but in this instance, it’s exactly what I did. I contacted the doctors who led the study that Perplexity cited in support of its statement that refraining from Ven-Obi was the proper course of action for someone with Richter’s. Much to my surprise, both doctors replied straightaway, and confirmed what I already knew to be true, that Perplexity had misstated the conclusion of their research, and that my father should follow the course of treatment his oncologist was recommending.

Of course I immediately passed this information along to my dad, desperately hoping to appeal to his scientific and empirically oriented belief system. But he didn’t respond at all. I was yelling into the void. It was only after several more months passed, and after his physical condition continued to worsen dramatically, before he finally agreed to start the Ven-Obi treatment his oncologist had recommended a year prior. It didn’t seem to matter at that point, sadly. Although the treatment immediately reduced his white blood cell count, his pain endured, and culminated in his death just a few weeks ago.


Despite horror stories like this, AI companies continue to push the use of their very flawed generative AI tools to research health and medical subjects, and the Trump regime wants more and more use of genAI.
Real-life horror story about a leukemia patient who died after AI gave him a misdiagnosis & he rejected doctors' advice (Original Post) highplainsdem 16 hrs ago OP
Take it from a current nursing student Sympthsical 16 hrs ago #1
Have these tools improved at all over the course of your time at school? fujiyamasan 4 hrs ago #12
Or for legal advice or for a girlfriend displacedvermoter 16 hrs ago #2
It's weird how much trust people put into these things JI7 15 hrs ago #3
Those chatbots are designed to both sound authoritative and to flatter the AI user to keep them highplainsdem 15 hrs ago #4
Indeed. One particular reader comment confirms as much: markpkessinger 13 hrs ago #7
Humans aren't rational creatures Kaleva 15 hrs ago #5
Don't trust AI for anything! mwmisses4289 14 hrs ago #6
At the end of the day AI is nothing more than a computer program PCB66 11 hrs ago #8
Now everyone can ask an ersatz doctor for an ersatz diagnosis! struggle4progress 10 hrs ago #9
One of the reasons I'm very glad I retired from being a peds NP was people wanting telehealth... 3catwoman3 5 hrs ago #10
I use it to get advice about my pt but then Tree Lady 4 hrs ago #11

Sympthsical

(11,039 posts)
1. Take it from a current nursing student
Wed Apr 22, 2026, 12:39 PM
16 hrs ago

Chatbots are often extremely wrong on medical matters and interventions.

I don't trust it with my assignments. I would never attempt to actually help someone with it.

That said. Nursing schools use AI to teach students. Sherpath AI is a thing.

It's only middlingly useful to me, and I often find myself double-checking even that. And that is something they tell us to use, something our program is forcing us to pay for.

Gonna be a funny little world in five to ten years.

fujiyamasan

(1,904 posts)
12. Have these tools improved at all over the course of your time at school?
Thu Apr 23, 2026, 12:35 AM
4 hrs ago

I’m not familiar with Sherpath AI, so I just looked it up. I’m assuming it’s trained specifically on the content of your books… or the general curriculum?

I could see some uses for this (just for general assistance or a study tool) but the false sense of confidence it generates concerns me.

And doctors used to complain about Dr. Google!

displacedvermoter

(4,784 posts)
2. Or for legal advice or for a girlfriend
Wed Apr 22, 2026, 12:41 PM
16 hrs ago

as the former could cost you your business while the AI girlfriend might just tell you to commit suicide, apparently.

JI7

(93,772 posts)
3. It's weird how much trust people put into these things
Wed Apr 22, 2026, 12:59 PM
15 hrs ago

I get using it to see what is said, but to actually reject what the doctor said? Maybe go for a second opinion from another doctor. But to just trust the AI...

highplainsdem

(62,658 posts)
4. Those chatbots are designed to both sound authoritative and to flatter the AI user to keep them
Wed Apr 22, 2026, 01:17 PM
15 hrs ago

using that chatbot. It's a very dangerous combination when there's no real intelligence there and the generative AI model can hallucinate in almost any way at any time.

AI companies, to avoid legal liability, warn users the AI can make mistakes and so they should check results. They also market AI primarily as a way to save time, which is in complete opposition to the advice to check the results.

They also don't warn people that the AI getting part of the results right does NOT mean that it didn't get part or even all of the rest of the results wrong. A lot of AI users never check at all, or check a detail or two and assume that if that's correct, the rest is.

GenAI models should never have been released.

markpkessinger

(8,928 posts)
7. Indeed. One particular reader comment confirms as much:
Wed Apr 22, 2026, 03:39 PM
13 hrs ago
Michaeldg
Durham CT ·
April 13
As a primary care physician, I have used AI tools to explore possible causes of difficult-to-diagnose patient symptoms. While helpful in bringing forth diagnoses that might have been missed, as one drills down on how to manage a particular disease I have seen it confidently assert erroneous conclusions. When I point out the scientific inconsistencies of what it has asserted, it reverts to sycophantic praise for my intelligence and abruptly changes its recommendations. What worries me is that I was only able to spot its inaccuracies because of my depth of knowledge and experience in the subject, something that the average user does not have. I view AI as being like a precocious teenager, but do not trust it with important decisions. It can provide difficult-to-find information but often comes to incorrect conclusions. While it can be useful to familiarize a patient with a certain condition, I see considerable danger in laypeople believing the AI chat over the advice of an experienced clinician, as this story so amply demonstrates.

Kaleva

(40,391 posts)
5. Humans aren't rational creatures
Wed Apr 22, 2026, 01:30 PM
15 hrs ago

People tend to accept that which reinforces their already formed world views and reject that which challenges their world views.

My guess is that AI told him what he wanted to hear.

PCB66

(138 posts)
8. At the end of the day AI is nothing more than a computer program
Wed Apr 22, 2026, 05:38 PM
11 hrs ago

created by a human. One thing we have learned about computer programs over the decades is: shit in, shit out.

AI is a useful tool for some things, but not for everything.

3catwoman3

(29,608 posts)
10. One of the reasons I'm very glad I retired from being a peds NP was people wanting telehealth...
Wed Apr 22, 2026, 11:40 PM
5 hrs ago

...visits. I did not trust my ability, or anyone else's, to accurately evaluate a sick young child via a computer screen. No way I would want to be involved with AI.

Tree Lady

(13,336 posts)
11. I use it to get advice about my pt but then
Thu Apr 23, 2026, 12:24 AM
4 hrs ago

I also check that with the therapist. It does give good advice about when you are overworking muscles and how to slow down while still getting exercise.

It’s hard when you have a lot of questions and doctors or therapists are not available.

I wouldn’t use it by itself for anything serious.
