The healthcare diagnosis one may be wrong. For existing, known diagnoses (or at least the sliver of diagnoses in this one study), AI can beat doctors - and doctors don't like listening when it challenges them, so it will disrupt them badly as people learn they can feed test data directly to AI agents. Sure, this doesn't extend to novel diagnoses, but the vast majority of failures to diagnose involve existing, well-classified conditions.
I'm familiar with the linked study, which presents legitimately challenging analytic problems. There's a difference between challenging analytic problems and new analytic problems.
A new platform poses new analytic problems. A new edition of the WHO's classification of skin tumors (1), for example, presents new analytic problems.
It's only a problem if hospitals replace doctors with AI. If they employ AI as well, then outcomes will improve. Using AI to catch the cases it can identify means doctors have more time to focus on the cases it can't.
> Using AI to catch the cases it can identify means doctors have more time to focus on the cases it can't.
That's not how that would work in the real world. In a lot of places a doctor has to put their signature or stamp on a medical document, which makes them liable for what is on that paper. Just because the AI can do it doesn't mean the doctor won't have to double-check it, which negates the time saved.
I would wager AI assistance would be more helpful for reducing the things doctors might miss than for partially or completely replacing them.
Let's assume you program it so that if it believes with 95% certainty that a patient has a certain condition, it presents that diagnosis to the doctor. Even if the doctor doesn't agree, the whole doctor-patient-hospital-insurer process might be automated to the point where it's simpler to put the patient through the motions of additional checks than for the doctor to fight the wrong diagnosis, so the doctor ends up spending more time confirming that the condition is not actually present.
I don't have a crystal ball, so this is a made-up scenario.
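To put the made-up rule in concrete terms, it's basically a confidence threshold feeding a review queue. Everything below (the 0.95 cutoff, the model interface, the queue) is invented for illustration, not any real clinical system or vendor API:

```python
# Hypothetical sketch of the threshold rule from the scenario above.
# The 0.95 cutoff, the predict() interface, and the review queue are
# all assumptions for illustration only.
from typing import Callable, List, Tuple

CONFIDENCE_THRESHOLD = 0.95

def triage(record: dict,
           predict: Callable[[dict], List[Tuple[str, float]]],
           review_queue: list) -> List[Tuple[str, float]]:
    """Flag every predicted condition at or above the threshold for doctor review."""
    flagged = [(cond, p) for cond, p in predict(record) if p >= CONFIDENCE_THRESHOLD]
    for cond, p in flagged:
        # In the scenario, landing in this queue is what forces the doctor
        # to either sign off or spend time ruling the condition out.
        review_queue.append((record.get("patient_id"), cond, p))
    return flagged

# Toy usage with a stand-in model:
if __name__ == "__main__":
    fake_predict = lambda r: [("condition_a", 0.97), ("condition_b", 0.40)]
    queue: list = []
    print(triage({"patient_id": "p-001"}, fake_predict, queue))  # [('condition_a', 0.97)]
    print(queue)
```

The point of the sketch is just that once something crosses the threshold it enters an automated pipeline, and the doctor's disagreement doesn't remove it from that pipeline.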
Sure, LLMs might not do this anytime soon, but once models understand enough biology, they're going to identify patterns we miss and propose new diagnoses. There's no reason why they wouldn't.
It has been interesting to see the excuses from doctors for why we need error-prone humans instead of higher-quality robots.
>Empathy (lol... from doctors?)
>New undetectable cases (lol... AI doesn't have to wait a year for an optional continuing-education class. A few years ago I had doctors recommending a dangerous, expensive surgery over a safer, cheaper laser procedure)
>Corruptible (lmaooo)
We humans are empathetic to the thought that our 'friendly' doctor might end up unemployed. However, we shouldn't let that cause negative health outcomes just because we were being 'nice'.
https://www.advisory.com/daily-briefing/2024/12/03/ai-diagno...
Edit: yeah, people don't like this.