General Discussion
Real-life horror story about a leukemia patient who died after AI gave him a misdiagnosis & he rejected doctors' advice
Found out about this today after reading Gary Marcus's Substack post from yesterday:
Please don't trust your chatbot for medical advice
https://substack.com/home/post/p-194902044
That links to a NY Times article from April 13 about Ben Riley, a friend of Gary's - and the son of that leukemia patient who died after refusing treatment for too long because he believed AI over his doctors:
He Warned About the Dangers of A.I. If Only His Father Had Listened.
https://www.nytimes.com/2026/04/13/well/ai-chatbots-cancer.html
Joe's window for treatment was quickly closing. The more frail Joe became, the less likely he was to tolerate the medications. Dr. Marzbani decided to confront him.
"Why do you believe this?" he remembered asking Joe during one appointment. "Where's this coming from?"
Joe sent him a research report he generated with Perplexity.
Ben Riley's Substack post about what happened to his father:
The role of AI in the death of my father
https://buildcognitiveresonance.substack.com/p/the-role-of-ai-in-the-death-of-my
AI enthusiasts, whether in education or more broadly, will often try to cover their asses from responsibility for non-factual statements by AI models by saying, well, you always need to check their output. As a general matter, that's a ludicrous claim, since the whole value proposition of these tools is to spare us cognitive effort, but in this instance, it's exactly what I did. I contacted the doctors who led the study that Perplexity cited in support of its statement that refraining from Ven-Obi was the proper course of action for someone with Richter's. Much to my surprise, both doctors replied straightaway, and confirmed what I already knew to be true: that Perplexity had misstated the conclusion of their research, and that my father should follow the course of treatment his oncologist was recommending.
Of course I immediately passed this information along to my dad, desperately hoping to appeal to his scientific and empirically oriented belief system. But he didn't respond at all. I was yelling into the void. It was only after several more months passed, and after his physical condition continued to worsen dramatically, that he finally agreed to start the Ven-Obi treatment his oncologist had recommended a year prior. It didn't seem to matter at that point, sadly. Although the treatment immediately reduced his white blood cell count, his pain endured, and culminated in his death just a few weeks ago.
Despite horror stories like this, AI companies continue to push their deeply flawed generative AI tools as a way to research health and medical subjects, and the Trump regime wants more and more use of genAI.
Sympthsical
(11,039 posts)
Chatbots are often extremely wrong on medical matters and interventions.
I don't trust it with my assignments. I would never attempt to actually help someone with it.
That said. Nursing schools use AI to teach students. Sherpath AI is a thing.
It's only middlingly useful to me, and I often find myself double-checking even that. And that's something they tell us to use, something our program is forcing us to pay for.
Gonna be a funny little world in five to ten years.
fujiyamasan
(1,904 posts)
I'm not familiar with Sherpath AI, so I just looked it up. I'm assuming it's trained specifically on the content of your books, or the general curriculum?
I could see some uses for this (just for general assistance or a study tool) but the false sense of confidence it generates concerns me.
And doctors used to complain about Dr. Google!
displacedvermoter
(4,784 posts)
as the former could cost you your business, while the AI girlfriend might just tell you to commit suicide, apparently.
JI7
(93,772 posts)
I get using it to see what is said, but to actually reject what the doctor said? Maybe go for a second opinion from another doctor. But to just trust the AI...
highplainsdem
(62,658 posts)
using that chatbot. It's a very dangerous combination when there's no real intelligence there and the generative AI model can hallucinate in almost any way at any time.
AI companies, to avoid legal liability, warn users the AI can make mistakes and so they should check results. They also market AI primarily as a way to save time, which is in complete opposition to the advice to check the results. They also don't warn people that the AI getting part of the results right does NOT mean that it didn't get part or even all of the rest of the results wrong. A lot of AI users never check at all, or check a detail or two and assume that if that's correct, the rest is.
GenAI models should never have been released.
markpkessinger
(8,928 posts)Durham CT ·
April 13
As a primary care physician, I have used AI tools to explore possible causes of difficult-to-diagnose patient symptoms. While helpful in surfacing diagnoses that might have been missed, as one drills down on how to manage a particular disease I have seen it confidently assert erroneous conclusions. When I point out the scientific inconsistencies of what it has asserted, it reverts to sycophantic praise for my intelligence and abruptly changes its recommendations. What worries me is that I was only able to spot its inaccuracies because of my depth of knowledge and experience in the subject, something the average user does not have. I view AI as being like a precocious teenager, and I do not trust it with important decisions. It can provide difficult-to-find information but often comes to incorrect conclusions. While it can be useful for familiarizing a patient with a certain condition, I see considerable danger in lay people believing the AI chat over the advice of an experienced clinician, as this story so amply demonstrates.
Kaleva
(40,391 posts)
People tend to accept that which reinforces their already formed world views and reject that which challenges them.
My guess is that AI told him what he wanted to hear.
mwmisses4289
(4,463 posts)
PCB66
(138 posts)
created by a human. One thing we have learned about computer programs over the decades is: shit in, shit out.
AI is a useful tool for some things, but not for everything.
struggle4progress
(126,462 posts)
3catwoman3
(29,608 posts)
...visits. I did not trust my ability, or anyone else's, to accurately evaluate a sick young child via a computer screen. No way would I want to be involved with AI.
Tree Lady
(13,336 posts)
I also check that with the therapist. It does give good advice about when you are overworking muscles and how to slow down while still getting exercise.
It's hard when you have a lot of questions and doctors or therapists are not available.
I wouldn't use it by itself for anything serious.