
The healthcare diagnosis one may be wrong. For existing, well-known diagnoses (or at least the sliver of diagnoses covered in this one study), AI can beat doctors, and doctors don't like listening when it challenges them, so it will disrupt them badly as people learn they can feed test data directly to AI agents. Sure, this doesn't cover novel diagnoses, but the vast majority of failures to diagnose are for existing, well-classified diagnoses.

https://www.advisory.com/daily-briefing/2024/12/03/ai-diagno...

Edit: yeah, people don't like this.



I'm familiar with the linked study, which presents legitimately challenging analytic problems. There's a difference between challenging analytic problems and new analytic problems.

A new platform poses new analytic problems. A new edition of the WHO's classification of skin tumors (1), for example, presents new analytic problems.

(1) https://tumourclassification.iarc.who.int/chapters/64


Right, but the vast majority of patient issues today are missing existing diagnoses, not new ones.


I think OP was referring to the case of new illnesses that aren't part of the training set, which AI is never going to diagnose.


It's only a problem if hospitals replace doctors with AI. If they employ AI as well then outcomes will improve. Using AI to find the ones AI can identify means doctors have more time to focus on the ones that AI can't find.

Of course, that's not what's going to happen. :/


> Using AI to find the ones AI can identify means doctors have more time to focus on the ones that AI can't find.

That's not how that would work in the real world. In a lot of places a doctor has to put their signature or stamp on a medical document, making them liable for what is on that paper. Just because the AI can do it doesn't mean the doctor won't have to double-check it, which negates the time saved.

I would wager AI assistance would be more helpful for reducing the things doctors might miss than for partially or completely replacing them.


Interesting. Do you see any versions of the future where use of AI could actually make the physician take more time?


Let's assume you program it so that if it believes with 95% certainty that a patient has a certain condition, it presents that condition to the doctor. Even if the doctor doesn't agree with it, the whole process between doctor, patient, hospital, and insurer might be automated to the point where it's simpler to put the patient through the motions of getting additional checks than for the doctor to fight the wrong diagnosis, so the doctor ends up spending more time following up just to confirm that the condition is not actually present.

I don't have a crystal ball, so this is a made-up scenario.
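
A minimal sketch of that kind of threshold rule, purely to make the scenario concrete. The model output, the function name, and the 0.95 cutoff are all hypothetical, not from any real system:

    # Hypothetical triage rule: surface an AI-suggested diagnosis to the
    # doctor only when the model's confidence crosses a fixed threshold.
    # The 0.95 cutoff mirrors the 95% figure in the made-up scenario above.
    CONFIDENCE_THRESHOLD = 0.95

    def triage(predictions):
        """predictions: list of (condition, confidence) pairs from some model."""
        flagged = [(c, p) for c, p in predictions if p >= CONFIDENCE_THRESHOLD]
        # Anything flagged goes to the doctor for review; the worry in the
        # comment above is that downstream automation (hospital, insurer) is
        # already in motion before the doctor can push back on a wrong call.
        return flagged

    print(triage([("condition A", 0.97), ("condition B", 0.60)]))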


Never is a long time.

Sure, LLMs might not do this anytime soon, but once models understand enough biology, they're going to identify patterns we don't and propose new diagnoses. There's no reason why they wouldn't.


Unfortunately, that's not how LLMs work.


It has been interesting to see the excuses from doctors for why we need error-prone humans instead of higher-quality robots.

>Empathy (lol... from doctors?)

>New undetectable cases (lol... AI doesn't have to wait a year for an optional continuing-education class. A few years ago I had doctors recommending a dangerous, expensive surgery over a safer, cheaper laser procedure)

>corruptible (lmaooo)

We humans are sympathetic to the thought that our 'friendly' doctor might end up unemployed. However, we shouldn't let that cause negative health outcomes just because we were being 'nice'.


So… we put all of our trust (wait, at that point it might be called faith) into this machine…

If it ever turns on us, begins to malfunction in unforeseen ways, or goes away completely—then what?

Shortsighted, all of it.



