Ready to be diagnosed by ChatGPT?

FIREd_2015 · Joined May 18, 2015 · 341 messages · NorCal
"...In a 2023 study...researchers fed...ChatGPT information on 30 ER patients. Details included physician notes...physical exams, and lab results. ChatGPT made the correct diagnosis in 97% of patients compared to 87% for human doctors...JAMA Cardiology reported in 2021 that an AI trained on nearly a million ECGs performed comparably to or exceeded cardiologist clinical diagnoses...Google's medically focused AI model (Med-PaLM2) scored 85%+ when answering US Medical Licensing Examination–style questions. That's an "expert" physician level and far beyond the accuracy threshold needed to pass the actual exam...AI is coming, and it will be a transformative tool for physicians and improve patient care...Anthony Philippakis, MD, PhD, left his cardiology practice in 2015..."I am not a bitter physician, but to be honest, when I was practicing, way too much of my time was spent staring at screens and not enough laying hands on patients,"..."Can you imagine what it would be like to speak to the EHR naturally and say, 'Please order the following labs for this patient and notify me when the results come in.'...the future of medicine belongs not so much to the AI practitioner but to the AI-enabled practitioner..."

 
Anything to speed things up, also. I just saw a news story about it sometimes taking several months to see a specialist and even weeks to see a PCP. They talked about the major physician shortage and the time physicians spend on administrative tasks.
 
Someone else should be doing admin stuff, not the doctors.
This does not sound like a bad use of AI.
 
By all accounts AI diagnosis is going to be the way to go whether we like it or not. Faster and more accurate, doesn't get tired or grouchy, what's not to like?

Oh, and since it's done by machine it'll be cheaper, like transistors vs. vacuum tubes. Well, one can dream....
 
Is AI one set of computers, or is it like googling something? What’s the point of going to specialized hospitals or doctors if the information comes from one place? AI could take down the healthcare system if that were the case. We would still need research to generate new information.
 
I read somewhere about a year back that an AI was looking at X-rays, ECGs (or something) and could determine with about 90% accuracy whether you'd be dead within a year. The experts and doctors have no idea how it does that.

As the OP's article states, the AI was able to review over a million ECGs and learn from them. No human could review, compare, and collate that many...that's where the power comes from.
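The "learn from a million examples" idea can be sketched with the simplest possible classifier: average the feature vectors of each diagnosis class, then label new cases by the nearest class average. Everything here is synthetic and invented for illustration -- the feature names, the numbers, and the two-class setup are nothing like a real ECG model.

```python
def centroid(rows):
    # average each feature across a list of feature tuples
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, centroids):
    # pick the label whose centroid is closest (squared distance)
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Hypothetical labeled training data: (heart_rate_bpm, qrs_duration_ms)
normal = [(70, 90), (75, 95), (68, 88), (72, 92)]
afib   = [(130, 110), (140, 115), (125, 108), (135, 112)]

centroids = {"normal": centroid(normal), "afib": centroid(afib)}

print(classify((71, 91), centroids))    # -> normal
print(classify((138, 113), centroids))  # -> afib
```

The class averages get more reliable as the training set grows, which is a toy version of why a model trained on a million ECGs can outperform any one reader's experience.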

Exciting and terrifying at the same time.
 
Can you sue a computer program when things go wrong?
Maybe the programmer. I think of myself as a techie...an old techie but a techie nonetheless. If given a choice, or at least a heads up, I would not want to be diagnosed by a computer. Waaay too early in the development.
 
Maybe the programmer. I think of myself as a techie...an old techie but a techie nonetheless. If given a choice, or at least a heads up, I would not want to be diagnosed by a computer. Waaay too early in the development.
My thoughts, exactly.
 
Maybe the programmer. I think of myself as a techie...an old techie but a techie nonetheless. If given a choice, or at least a heads up, I would not want to be diagnosed by a computer. Waaay too early in the development.
The suing argument is silly.

Personally, I’m all for eliminating human error. I have no doubt that AI-assisted diagnosis and care will lead to better outcomes, which the research so far supports.

As humans, we overestimate our abilities.
 
Someone else should be doing admin stuff, not the doctors.
This does not sound like a bad use of AI.
+1. With the shortage of doctors, each doc should have one or two medically trained admins who take care of the paperwork, with the doc just reviewing what the admins input to make sure there are no errors in the records.
 
No doubt some docs are already using this type of technology and are acting only as the human interface, which I have no problem with whatsoever. If it improves the overall quality of healthcare, that's a real plus in my book.
 
And what happens when the AI is down, and the doctor has lost some of his/her knowledge because of not being challenged? How many times have you been told that "the system is down." And no one can do a simple task because they've become too accustomed to the computer performing that task.
AI is a tool that should supplement a doctor. That to me is the danger. When doctors start relying on AI and just treat what AI tells them to do, then we're at risk of dumbing down the doctors.
 
And what happens when the AI is down, and the doctor has lost some of his/her knowledge because of not being challenged? How many times have you been told that "the system is down." And no one can do a simple task because they've become too accustomed to the computer performing that task.
AI is a tool that should supplement a doctor. That to me is the danger. When doctors start relying on AI and just treat what AI tells them to do, then we're at risk of dumbing down the doctors.
I suspect this new tech will reduce malpractice insurance premiums, since wrong outcomes will probably relieve doctors of blame for bad calls.
 
I'm not trying to say that AI is good or bad in healthcare, but I thought I could add to the conversation with what I heard (as I remember it!). About 10 years ago I sat in on presentations by Google X (the R&D arm of Google) and IBM Watson. They both said they were going to get into healthcare through "big data" and become major players. While ChatGPT is a different company, the use of AI and data in this space has been in development for a while. The two examples they gave back then were:

Dermatology: it takes forever to get an appointment, and conditions like a growing mole could advance before a doctor could see it. They ran a program where a patient took a picture and sent it to the AI, and the AI had a high accuracy rate in diagnosing whether it was benign or needed to be seen quickly. The patients who needed to be seen quickly got into the practice much faster than previously.
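The triage idea above reduces to a score plus a routing rule: a model assigns a concern probability to each photo, and anything above a threshold gets fast-tracked. The scoring function below is a mock stand-in for a real image classifier, and the filenames, scores, and threshold are all invented for the sketch.

```python
URGENT_THRESHOLD = 0.5  # hypothetical cutoff for fast-tracking

def mock_lesion_score(image_name):
    # placeholder for model inference -- a fixed lookup stands in for
    # a trained classifier's output probability of "concerning"
    scores = {"mole_a.jpg": 0.08, "mole_b.jpg": 0.91}
    return scores[image_name]

def triage(image_name):
    # route the appointment request based on the model's score
    if mock_lesion_score(image_name) >= URGENT_THRESHOLD:
        return "fast-track"
    return "routine"

print(triage("mole_a.jpg"))  # -> routine
print(triage("mole_b.jpg"))  # -> fast-track
```

The interesting design question in a real deployment is where to set that threshold: too low and the fast-track queue fills with benign moles, too high and the system misses the cases it exists to catch.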

Oncology: They analyzed several conditions (can't remember which, but some could be gene-typed). By analyze, I mean they scoured the internet for every peer-reviewed study, every treatment protocol they could find, and every poster presentation made at various oncology conferences, interviewed about 100 leading oncologists, ran it all through the AI they had at the time, and came up with optimal treatment protocols for those conditions. Their finding was that, for the conditions studied, following the protocols gave a 90% accuracy rate for diagnosis and treatment. The other 10% were flagged to the practice for more intensive care and other protocols.
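The 90/10 split described above is essentially a routing rule: conditions with a vetted protocol get it automatically, and everything else is flagged back to the practice. A minimal sketch, with made-up condition and protocol names:

```python
# Hypothetical mapping from condition to vetted treatment protocol --
# in the described system this table was distilled from studies,
# conference posters, and oncologist interviews.
PROTOCOLS = {
    "condition_a": "standard_protocol_1",
    "condition_b": "standard_protocol_2",
}

def route(condition):
    # unmatched cases fall through to human review instead of
    # getting a best-guess protocol
    protocol = PROTOCOLS.get(condition)
    if protocol is None:
        return ("flag_for_review", None)
    return ("treat", protocol)

print(route("condition_a"))  # -> ('treat', 'standard_protocol_1')
print(route("condition_x"))  # -> ('flag_for_review', None)
```

The key safety property is the fallback: the system never invents a protocol for a case it doesn't recognize; it escalates.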

My 2 cents of editorial comment: doctors have incredibly difficult jobs. Shrinking reimbursement, more demand as boomers age, and an expectation from the public that they are always available on a minute's notice and know everything about all the conditions they treat. I knew an oncologist who worked at the Dana-Farber Cancer Institute. He saw his hospital patients from 7 to 8:30 am, did paperwork for about an hour, then saw his in-office patients during the day. Many days after 5 he went back to the hospital to see patients. With this type of schedule he was also expected to keep up with the newest treatments and take continuing education courses to stay current. This is impossible, as there are only so many hours in a day. If AI can help doctors be more current and accurate, remove mundane tasks, and help with some diagnoses to free up their time for the more challenging cases, I'm all for it. Time will tell if AI lives up to the hype.
 
The DW and I have seen more doctors in the past 5 years than we had in the previous 65+ years together. In general, I take whatever they tell me with a grain of salt. It's not rare at all for them to be wrong, but they are still way better than my guesses. I'd say they are maybe 80/20. So if AI can get that to 95/5, then it sounds pretty good to me. Of course, applying human common sense is still needed. It's not always so black and white, and I'm not convinced AI is really up to that..... yet.
 
And what happens when the AI is down, and the doctor has lost some of his/her knowledge because of not being challenged? How many times have you been told that "the system is down." And no one can do a simple task because they've become too accustomed to the computer performing that task.

"...I'm sorry Dave, I cannot do that."

But seriously, I wonder if future AIs will prevent the notorious system down events.

Over the centuries we've already lost a lot of things that our ancestors took as basic life skills and now don't think anything of it. How many of us would perish if electricity permanently went away, especially us city dwellers? No A/C in the southern states? Catch and kill your own food? Make a garment for yourself?

Even within medicine today, there are tools that your everyday physician uses that years ago required exceptional knowledge and skill, often limited to very few practitioners.
 
I consult in the area of clinical decision support. ChatGPT is like using a machete for what they were trying to do. There is some recent research in this area using better data (actual live-streaming physiological monitoring data, episodic physiological sensor data, and lab data) to build the prediction algorithm. This was research and development, and the algorithm had a fairly decent accuracy rate for some specific diagnoses and/or predictions of cardiac issues when a patient presented at the ER. Nevertheless, the operational process needed to support this algorithm is not readily available at scale (the engineering part of the idea).
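The streaming-monitoring idea can be sketched as a rolling-baseline check: flag any sample that deviates sharply from the recent window of readings. Real clinical decision support algorithms are far richer; the window size, z-threshold, and simulated heart-rate stream here are all invented for illustration.

```python
from collections import deque
import statistics

def rolling_alerts(samples, window=5, z=2.0):
    """Yield indices where a sample deviates from the mean of the
    preceding `window` samples by more than z standard deviations."""
    buf = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(buf) == window:
            mu = statistics.mean(buf)
            sd = statistics.pstdev(buf) or 1e-9  # avoid divide-by-zero
            if abs(x - mu) > z * sd:
                yield i
        buf.append(x)

# Simulated heart-rate stream with a sudden jump at index 8
hr = [72, 74, 71, 73, 72, 74, 73, 72, 140, 138]
print(list(rolling_alerts(hr)))  # -> [8]
```

Even this toy version shows the engineering-at-scale problem the post mentions: the statistics are trivial, but keeping a per-patient rolling state alive against live monitor feeds, around the clock, is the hard part.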

The key to an accurate algorithm is the quality and specificity of the data used to train it. Most who work in this area are also exploring the use of digital twins to replicate human uniqueness. I am wary of articles like the one in the WSJ: as posted above, Google, Microsoft, and other very large corporations have come in and out of the healthcare realm touting how they could make things much better. They've all left and/or downgraded their prognostications about what they can do. In my 40 years as a biomedical engineer, I have realized, even more than when I went into the profession, that the human body is a very complex set of systems that is adaptive and constantly changing. It's going to take a long time for any computing system to replicate and/or provide highly accurate diagnoses, prognoses, and therapy for humans (see the statement on uniqueness above).
 