I feel fortunate that I happened to check into HN today because this is an area where I have a lot of experience and I have made something similar (albeit much less advanced) that is currently in use by some physicians.
Quick background: I am an internal medicine physician, and my focus is hospital medicine, so I actually see a lot of cancer patients suffering from complications related to the progression of their disease and its treatment, and I have lots of end-of-life discussions.
A couple of years ago I had a case where I wished I had more data to drive my prediction for a family who wanted to know when their loved one was going to die, and I felt like I didn't do a good job. I ended up contacting an oncologist, Dr. David Hui, who has done a lot of research into end-of-life prognostication.
There are dozens if not hundreds of "prognostic calculators" out there that different groups have made over the years to try and find which variables X, Y, Z, can be used to better predict how a patient with cancer A, B or C will do.
Most of them are not great, and many are very specific. Some of them are general to all cancers, some are only for "advanced cancer", etc. So this isn't really a new idea; it's just that this is the first time someone has applied AI to it, as far as I'm aware.
The problem is these calculators are all buried in journals and have different inputs. What we did was collate some of the best validated ones and make it easier to check multiple prognostic calculators at once, with the idea that you could get more of a range or gestalt on the patient and use that info to help guide further decisions.
In case anyone is interested the site is www.predictsurvival.com but it isn't for lay use so may not be of much interest to most here. I built it using Python and Flask and it's what I'm most proud of in my programming.
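For the technically curious, the core pattern is just fan-out: normalize one set of patient inputs and run it through every calculator. Here's a stripped-down Flask sketch of the idea, with hypothetical stub calculators standing in for the real validated scores (the names and numbers below are made up):

```python
# Minimal sketch of "run several prognostic calculators at once".
# The calculators here are hypothetical stubs, NOT real validated scores.
from flask import Flask, request, jsonify

app = Flask(__name__)

def calculator_a(inputs):
    # Stand-in for a published score; a real one would apply the
    # validated coefficients from the original paper.
    return {"name": "Calculator A", "median_survival_weeks": 12}

def calculator_b(inputs):
    return {"name": "Calculator B", "median_survival_weeks": 9}

CALCULATORS = [calculator_a, calculator_b]

@app.route("/predict", methods=["POST"])
def predict():
    # One shared set of patient inputs fans out to every calculator,
    # so the clinician sees a range of estimates, not a single number.
    inputs = request.get_json()
    return jsonify([calc(inputs) for calc in CALCULATORS])

if __name__ == "__main__":
    app.run()
```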
Anyways, I knew someone would eventually use AI to try and tackle this question. I'll read their paper later tonight and I wouldn't be surprised if it ends up being far better than what I've done. I hope they make it open source.
I'd be happy to try and answer any questions people might have about medicine and end-of-life care.
Better end-of-life care isn't a technological problem.
When people have terminal cancer they are still usually advised or persuaded to take chemo and/or radiation, which royally destroy their bodies, as anyone who has had a loved one go through treatment knows.
With many late-stage cancers it's generally known that the person is at the end of their life, yet the medical establishment still tends to focus on treatment rather than actual care. How does machine learning help the cultural problem?
> yet the medical establishment still tends to focus on treatment rather than actual care
Have you actually talked to doctors about it? As in those who give the initial talk, not those who get patients who have already decided to try something? I know quite a few GPs, and they often have a serious talk with people with cancer who want "everything possible done". Or with families who want that for their grandparents.
They try to discourage useless treatment as much as possible, often by listing the side effects that are just going to make quality of life even worse.
(Just in case: yes, there will be different doctors and different opinions. What I'm trying to say is generalising to medical establishment is not accurate and not helpful. Also I'm glad that if someone makes the decision to get treatment they can get it - it may be a really bad decision, but it's their decision.)
When my mom had terminal cancer, her GP told her she was out of options.
Then a nearby "cancer center" got hold of her, I believe through one of the diagnostic partners she was using. Two weeks later she was in heavy chemo -- as an elderly person who had almost zero chance of surviving.
She ended up with MRSA and in intensive care in isolation. I lived far away and traveled to see her when I heard her condition had gotten worse.
I'll never forget her GP walking in. My mom couldn't hear well and had no idea how she had ended up in the ICU. Her GP was both sad and angry.
He explained to her again that all we had left was palliative care, perhaps hospice. I could tell this wasn't the first time he had had this talk. There had been many other patients.
When he left my mom looked at me and said "What'd he say?" So I sat next to her, took her hand, and explained that the doctors had done all they could for her and she would die soon.
I'll never forget that, either the talk or the look on that GP's face. We have a terribly broken system.
In cancer, cure and palliative care are often one and the same. Controlling the growth of cancer cells is needed to control pain; often the pain caused by cancerous tumours cannot be suppressed by the usual painkillers.
The kind of medication given for cancer treatment has vastly improved in the last 10 years, at least for the less aggressive forms of cancer (and yes, there is such a thing). As a corollary, there are also highly aggressive forms, the kind that sends chills into your heart.
I gather that you exclude opiates from "usual painkillers". That seems odd. For someone dying of cancer, what's the problem with being addicted to opiates? Also, an old-school "peaceful death" option is alcohol + morphine + cocaine. The patient dies without pain, and happy. And their family and friends get closure.
I believe the usual painkillers are things like ibuprofen and acetaminophen (paracetamol for you non-US folks). Cancer pain is a deep chronic pain that we can't really reduce with those standard painkillers; it stems from a variety of factors that are not adequately treated by what is essentially an anti-inflammatory. For more info, check out this review article on cancer pain physiology: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4616725/
So the alternatives for patients are drugs like opioids, gabapentin, and even cannabinoids.
> it's generally known that the person is at the end of their life, yet the medical establishment still tends to focus on treatment rather than actual care
That is exactly what they are trying to address. These doctors are basing their treatment on statistics that don't account for the intricacies of an individual patient's condition. A machine learning model can account for correlations between obscure attributes to predict the success of treatment with much higher confidence than they have now.
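Concretely, the generic shape of such a model is just supervised learning on patient features. Here's a toy sketch on synthetic data (the features, coefficients, and outcome are all invented; the actual paper reportedly uses deep learning on EHR data, which this does not reproduce):

```python
# Toy sketch: predict 12-month mortality risk from a handful of
# synthetic EHR-style features. Everything here is made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(65, 12, n),    # age
    rng.normal(3.5, 0.6, n),  # serum albumin
    rng.poisson(1.0, n),      # admissions in the last year
    rng.integers(0, 2, n),    # metastatic disease flag
])
# Invented outcome loosely tied to the features.
logit = (0.04 * (X[:, 0] - 65) - 1.2 * (X[:, 1] - 3.5)
         + 0.3 * X[:, 2] + 1.0 * X[:, 3] - 1.0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]  # P(death within 12 months)
print("held-out AUC:", round(roc_auc_score(y_te, risk), 3))
```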
The machine learning model can be just as wrong as the doctors at prediction. What happens then? Surely not a headline of "Stanford death model is worthless", right?
Even worse if treatments are forgone or triaged based on the output of some incomplete model. Older people already have trouble getting access to life-saving surgeries due to mortality statistics. (And no doctor wants to have a dead patient on the table, liabilities notwithstanding.)
What about taking into account the societal benefit from participation in clinical trials? Farber would have never discovered what became curative chemotherapy for pediatric leukemia if it weren't for the ability to try to cure what pediatricians had dismissed as hopeless cases.
Pediatrics seems like a reasonable exception. If you're seven weeks old, seven months old, or seventeen years old it might be worth a shot. Probably not so much if you're ninety-seven.
Quite often they just don't know. Is it pneumonia in the lungs or has the cancer spread? Likely a bit of both. If we can clear out the infection can we start chemo again and buy another year or two? If people want to err on the side of taking a chance at living, let them.
Is there a minimum empirical success rate you would be comfortable setting for a procedure that could significantly extend a patient's life?
Personally, I think the best solution is to push large medical groups to mandate professional development on palliative care in their oncology departments. Start with the nurse practitioners.
> How does machine learning help the cultural problem?
I think you're right that it is a cultural problem. In the US, the end-of-life decision lies in the hands of the patient's caregivers, leaving them with the mental pain of "did I do the right thing" for years afterward. I believe this isn't the case in the UK/France, where the docs advise on the matter, relieving the caregivers of said agony.[1] It's a mixed bag in India.
Since we're talking about a system that predicts end of life, what if we replace it with a game of "let's play life toss" - where heads is a survival prediction and tails isn't - so that both doctors and caregivers can have a discussion about either scenario. Would that serve the same purpose as the "AI"? If the AI's purpose is to be studied, something like this game might need to be a control.
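To make the coin control concrete, here's a toy sketch (made-up outcomes, an invented weakly informed "model") of how the two would be compared on the same discrimination metric:

```python
# Toy sketch: a fair-coin "life toss" as the control arm against a model.
# All data below is simulated; nothing comes from the actual study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, 1000)        # 1 = died within the horizon

coin_scores = rng.random(1000)             # the coin knows nothing
model_scores = 0.3 * outcomes + 0.7 * rng.random(1000)  # weakly informed

print("coin  AUC:", round(roc_auc_score(outcomes, coin_scores), 3))   # ~0.5
print("model AUC:", round(roc_auc_score(outcomes, model_scores), 3))  # well above 0.5
```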
I'm far from dystopian on this, as we already use technology for various aids in the care process - declaring coma patients "unconscious", "flat lining" as indicative of death, and such. What I do have a problem with is declaring a data-fitting function to be an "AI" instead of just saying "statistically there is a 90% chance of death within the next 7 months" or something more straightforward like that. However complicated that function is, the fact that it is derived solely from EHR data doesn't make it an "AI" for additional cred. The system must at least be able to evaluate a number of scenarios it hasn't seen and map out their outcomes to even begin to qualify... like a game-playing engine.
PS: A relative of mine helps terminal cancer patients come to terms with their death. The mental journeys are multi-dimensionally diverse from what I hear.
Everyone from doctors to philosophers has been saying this for ages (literally). No healthcare provider, public or private, is likely to pick this up, because people who are afraid of death will freak out about 'computer decides when you should die' and will whip up a big enough moral panic to prevent any further discussion of the idea. Given that, I think it's literally a waste of time and resources to program a computer to tell us what we already know.
Edit: by 'program a computer' I don't mean to put down the work of the Stanford team, which I'm sure is great, but rather to highlight the reductionist way it will play in public debate.
I am afraid to live past a certain point in time. When you have no direct descendants or significant other, when your bucket list is done, what's the point of keeping a slowly failing mind alive in a more rapidly failing body? I don't want to grow old, I fail to see why I should, and there's no law, not even in Canada, that would allow me to choose a peaceful end date of my own. And they call this suicidal, which I am not. I have my life planned and I know when I want to end it, more than a decade from now. Some people plan to travel the world when they retire; I just want to stop living, but I can't.
> there's no law, not even in Canada, that would allow me to choose a peaceful end date of my own
Canada is getting there. Slowly. For those not in the loop...
Since 2016, medically assisted death in Canada has been available in some circumstances. (Prior to that, arranging or participating in such a death could have been considered homicide, which put doctors and families at risk.) The "Carter case" helped get this through the courts and parliament.
Our current legislation still requires patients to have a "grievous and irremediable medical condition" (this is not liberal enough, yet, for some), with more than one doctor supporting the assisted death (this is apparently not difficult to find), and a reflection period for the patient (seems reasonable).
So the next battle is apparently to eliminate the requirement for a patient to have a "grievous and irremediable medical condition". That's being challenged in the courts in BC by the BC Civil Liberties Association under the "Lamb case". https://bccla.org/wp-content/uploads/2016/08/2016-06-27-Noti...
So the Canadian situation has improved immensely in the last couple of years but not for everyone yet. I hope you (and everyone) will eventually find death on your own terms without resorting to feeling like you're doing something "wrong" and without having to criminally implicate supportive family and friends.
Aside: I've worked with the BCCLA in the past, and IMO they have long been a challenger of unjust and uncivilized laws for all Canadians. So if you foresee their fight(s) benefiting yourself or your loved ones, then I encourage you to volunteer or donate or otherwise support them. Moral advancement is the goal.
Nor is there one allowing assistance :/ I want to end in peace and dignity. Is that too much to ask? Apparently.
There is a lot of assistance available that leads you to extend your life beyond the point where you can keep your dignity, but none that lets you just ... bow out.
Whenever a trainer has asked what my goal is, my only answer was: I don't want to end up in a wheelchair in twenty years. My back already hurts a little pretty much constantly; I don't want to live long enough for it to become unbearable and/or until I can't even walk any more. Truly, what's the point? And voicing these -- in my opinion entirely rational -- thoughts is almost taboo.
Every single assisted dying statute at this point requires you to be terminally ill. The Netherlands is on the path to waive this https://www.theguardian.com/world/2016/oct/13/netherlands-ma... but only for the elderly. We will see what age they set but I feel like it'll be like 70 or something.
True, but it used to be a crime in many jurisdictions, and if you're open about it you can be forcibly placed in psychiatric care in many places right now.
> In fact, in my jurisdiction a 3rd party saying you said you were going to kill yourself is enough to detain you in a psychiatric hospital for a few hours at least.
this would be a really good application of AI in healthcare, but it needs to be considered in light of other predictive tools. a sr exec at a large health system said earlier this year that they evaluated a ton of AI tools for predicting death to inform end of life care, but none did materially better than simply asking physicians which of their patients they thought would die in 12-18 months. this AI may be better than the ones this exec studied of course
even if AI performs better, it would need to offer improvements significant enough to justify the cost of implementing it, which depending on what data it uses could be non-trivial
another unfortunate issue is that patient preference is only one consideration in determining how end of life care is managed. profitability is another concern. the same exec said that the health system only agreed to implement their improved end of life plan once they realized that it would be profitable to the system
Having worked in this space - dialysis - sometimes it's better to let the patient gracefully decline rather than degrading their quality of life with treatments that will not lengthen life and make what time they have left miserable. There are a few companies already using ML to predict when a patient is a candidate for hospice / palliative care. It's often cheaper, which is why it's kind of a taboo topic.
>> It’s often cheaper, which is why it’s kind of a taboo topic.
That's a very strange point. Sometimes people think the doctors want to do everything they can "to save you" just to run up the bill. With that thinking, patients and their families would want to "save money". OTOH, one may also interpret saving money as giving up and be angry that the doctor would put a price on life. You just can't win. Ultimately, people and physicians need to put all of that aside and be able to objectively assess the situation. That's really hard to do, particularly for the patient and family - I have no idea what it's like from the doctor's side, but we had one who had his eyes wide open yet couldn't drive the point home enough for the soon-to-be widow to get it.
I feel like this AI is an attempt at making death marketable.
And I mean that seriously. I think you and eismcc are right, that it is taboo even when it is arguably the best option for the patient.
It would be difficult for a doctor to say "this multivariate optimization found a global maximum by stopping treatment" in a way that sounds ethical. But by attributing it to an AI that is mysterious enough to sound divine, the message might have a chance at being received.
Obviously it is cheaper. Nobody can count days of life lost due to forgone treatments without a real trial with a control group.
Which means using these tools is unethical if not outright criminal.
Why is it unethical / criminal? This is already a calculus doctors, patients, and loved ones have to do. How does adding historical statistics to the discussion introduce an unethical / criminal aspect?
Add deep learning and even boring old survival analysis can make headlines as "AI predicts Death". (Not bashing the paper here, just the reporting.)
Though I suppose making use of the large amount of unstructured health data is a good thing, and something that classical statistical models, which require relatively clean study data, could have problems with.
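For what it's worth, the "boring old survival analysis" baseline looks roughly like this; a sketch on a made-up toy cohort using the lifelines package (all numbers illustrative):

```python
# Cox proportional hazards on a tiny, clean, made-up cohort --
# the classical baseline that headlines rebrand as "AI predicts death".
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months":     [3, 14, 7, 24, 5, 18, 2, 30, 9, 16],   # follow-up time
    "died":       [1, 0, 1, 0, 1, 1, 1, 0, 0, 1],        # 1 = death observed
    "age":        [72, 58, 80, 49, 77, 61, 84, 55, 66, 70],
    "metastatic": [1, 0, 1, 0, 0, 1, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # hazard ratios for age and metastatic status

# Median predicted survival for a hypothetical new patient.
new_patient = pd.DataFrame({"age": [70], "metastatic": [1]})
print(cph.predict_median(new_patient))
```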
I'm not quite sure that doctors need AI to see the state of a terminally ill patient for what it is, but if the authority of a mystical-sounding blackbox helps them accept and act on that tough reality, that's fine by me.
If I were in the position of the patient, I would be horrified to hear an analysis tell me how much time I have left to live and how likely death is. No matter what the system tells me, I will still opt for whatever treatment is possible, in case the prognosis is wrong or I fall into the, say, 10% chance-of-survival category.
I just can't picture the doctor saying "the AI says there is a 90% chance of dying in the next month" and me choosing to die in peace at home. I'll keep fighting and take my chances.