Maybe it's OK to worry about both? Not trusting "arbitrary thing A" does not logically make "arbitrary thing B" more trustworthy. I do realise that these models are intended to (incrementally) represent collective knowledge and may get there in the future. But if you worry about A, why not also worry about B, which is based on A?
You seem to be assuming, without any evidence, that LLMs giving medical advice are likely to be roughly as accurate as doctors who actually examine the patient rather than just processing language, simply because you know that medical mistakes are common.
"Six patients 65 years or older (2 women and 4 men) were included in the analysis. The accuracy of the primary diagnoses made by GPT-4, clinicians, and Isabel DDx Companion was 4 of 6 patients (66.7%), 2 of 6 patients (33.3%), and 0 patients, respectively. If including differential diagnoses, the accuracy was 5 of 6 (83.3%) for GPT-4, 3 of 6 (50.0%) for clinicians, and 2 of 6 (33.3%) for Isabel DDx Companion"
Six patients is a long way from persuasive evidence, because with so few patients randomness is going to be a large factor. And it appears that the six were chosen from the set of patients that doctors were having trouble diagnosing, which may put a thumb on the scale against doctors. But yes, it certainly suggests that a larger study might be worth doing (also including patients diagnosed correctly by doctors, to catch cases where GPT-4 doesn't do as well).
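To make the "randomness" point concrete, here is a back-of-the-envelope sketch (not part of the study; it just plugs the quoted 4/6 vs 2/6 figures into scipy):

    # How much can 6 patients tell us? Figures taken from the quote above:
    # GPT-4 got 4/6 primary diagnoses right, clinicians 2/6.
    from scipy.stats import fisher_exact, binomtest

    gpt4_correct, clinicians_correct, n = 4, 2, 6

    # Fisher's exact test on the 2x2 correct/incorrect table.
    table = [[gpt4_correct, n - gpt4_correct],
             [clinicians_correct, n - clinicians_correct]]
    _, p_value = fisher_exact(table)
    print(f"Fisher exact p-value: {p_value:.2f}")  # ~0.57, nowhere near significance

    # Exact (Clopper-Pearson) 95% confidence intervals for each accuracy.
    for name, k in [("GPT-4", gpt4_correct), ("clinicians", clinicians_correct)]:
        ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
        print(f"{name}: {k}/{n} correct, 95% CI {ci.low:.2f}-{ci.high:.2f}")
    # GPT-4:      ~0.22-0.96
    # clinicians: ~0.04-0.78  -> the intervals overlap almost completely

With only six patients, a 4/6 vs 2/6 split is entirely compatible with chance.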
It's not whataboutism at its best, no. Just as with self-driving cars, medical AIs don't have to be perfect, or even to cause zero deaths. They just have to improve the current situation.
It depends on who the end user is. As an aid for a trained physician, who is in a better position to spot the hallucinations, it may be fine, whereas a self-medicating patient could be at risk.
We absolutely need more resources in healthcare throughout the world, and it may be that these models, or even AGI, have great potential as a companion for, e.g., Doctors Without Borders or even the local hospital in the future. But there's quite a bit more nuance to giving medical advice than to perfecting a self-driving car.
A self-driving car can cause incredible damage straight away; I don't think you should underestimate that. But we also don't have enough healthcare access, so the need there is more urgent than the need for automated drivers, whose health benefit is often mainly about reducing the risk of driving while tired or intoxicated.
Yes, a patient could be at risk - they're at risk from everything, including a poorly trained or out-of-date doctor. And even more at risk from not having access to a doctor at all. That's the point: there's risk on both sides; weighing competing risks is not whataboutism.