Why I’m turning my son into a cyborg (qz.com)
94 points by ycnews on July 17, 2019 | hide | past | favorite | 114 comments


The story about the insulin pump seems rather unethical, almost as if she is performing research on a human child when they are not of the age to consent and would be biased toward consenting due to parental influence.


It's not that easy. Bad compliance will lead to an early death in diabetics. And kids are VERY bad at compliance. Most parents who can afford it nowadays get an insulin pump for their kids. And they are so ridiculously complicated to operate, and one single lapse can be dangerous. You're always doing math in the back of your head, considering what you've eaten (or will eat), whether you will do sports, etc. A custom firmware with auto-dosing can be a real life-changer, and life-saver.

Source: A friend of mine is part of an insulin pump hacking community
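To give a sense of the math involved, here is a minimal sketch of the standard bolus formula (meal coverage plus correction, minus insulin on board). All parameter values here are hypothetical illustrations, not medical guidance:

    # Minimal sketch of a standard bolus calculation -- illustrative only.
    # Carb ratio, sensitivity, and target vary widely between patients.
    def suggested_bolus(carbs_g, bg_mgdl, iob_units,
                        carb_ratio=10.0,   # grams of carbs covered per unit
                        isf=50.0,          # mg/dL drop per unit of insulin
                        target_bg=110.0):
        meal = carbs_g / carb_ratio
        correction = (bg_mgdl - target_bg) / isf
        return max(0.0, meal + correction - iob_units)

    # Example: 60 g of carbs, blood glucose 180 mg/dL, 1 unit still active.
    print(suggested_bolus(60, 180, 1.0))  # -> 6.4

Getting any one of those inputs wrong, several times a day, every day, is exactly the kind of lapse described above.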


Can't agree more. People always bring up how risky and unethical the insulin pump hacking stuff is in literally every link where it's mentioned. I guess intuitively, all things being equal, people feel like you should be as cautious as possible and test these like any other medical device.

But all things aren't equal. Most of the people doing the hacking (I know a few as well) are already intimately familiar with the biology and consequences, and are already constantly monitoring themselves, or their children and loved ones. They're the best people to test this stuff.

And, like you said, there are compliance issues with children and all sorts of bad things happening already, like devastating lows, diabetic comas, chronic sleep deprivation, etc. The closed-loop and automated systems already reduce these things and improve quality of life for all parties involved. Also the companies are taking forever and may have a vested interest in maintaining their products the way they currently are.

If all things were equal, I could understand the disdain and caution for the device hacking. But they're not.


Thank you for that comment. Don't get me started on "compliance". Even if you are following 100% the diet and rules (which is impossible, we are humans), type 1 diabetes is HARD. We need all the help we can get! A closed-loop system improved my quality of life a lot. I would be healthier now if this stuff was available when I was a kid.


A friend with type 1 told me about a mesh ball containing pancreatic tissue that has pores big enough for insulin but too small for the immune response to get in and destroy it. Sounded fascinating. I've been wondering if a closed-loop thyroid pump that dumps T4 and T3 based on TSH and reverse T3 levels could be made, or a mesh ball to keep Hashimoto's away from some thyroid tissue, then ablate the original.


Could you connect a sensor to some kind of excreta to get a blood sugar level? Maybe an implantable sensor with blood flow?


Yes, that's what those systems do.


Nevermind age, the child also has autism. They may not be able to consent in any circumstance. I have a child with autism. He's a so-called "high functioning" autistic individual (goes to public school, has social interaction with peers outside of school, etc.). He's showing signs of above average intelligence and talent in some things and it would be easy for someone to think he understands "consent" at this level. Sure, parents are able to give consent for their kids due to their minority age, but there are limits to that power. This blog post was unsettling, and not in a "oh you're just afraid of hackers doing what they do" sort of way.


I think it's really allistic people who are less capable of consent. They have a strong tendency to choose their beliefs and actions based on herd instincts rather than rational evidence, while autistic people, despite our numerous problems, have a much easier time considering evidence rationally.

It was just yesterday that an (adult!) allistic friend of mine was posting on Facebook looking for advice on treating her kidney infection without antibiotics. Her irrational allistic friends were giving her all kinds of ridiculously terrible medical advice (“Alka-Seltzer cleared up my UTI”), and I looked up the clinical practice guidelines and yelled at her to GET ANTIBIOTICS.


Your claim that an autistic child like the one you describe is less capable of consent than an average child is intellectually suspect and dehumanizing. It's also an argument that pushes kids into abusive ABA therapies against their will, since their expressed preferences are considered invalid.


Cool stuff, but did anyone else feel like they were reading an advertorial for the author? Managed to pack most of their CV in there.


What's the difference between this person going on a philosophical exploration of biohacking and AI versus some drunk at the bar? Their CV.


I guess also not doing it while inebriated. That's probably a plus.


There are literally millions of people with a CV that you could find drunk in a bar.


I mean, isn't that one of the main purposes of a tech blog nowadays?


No, I just think she's extremely enthusiastic about the things she does in a way that strikes me as being almost autistic.


Blogs nowadays are more about self-promotion than anything else.


> Imagine these advantages not being subtly embedded in the life experience of well-off Westerners, but being directly for sale—and turned up to 11. Intergenerational social and economic mobility would disappear. [...] augmentation could also become a tool to entrench inequality even more firmly.

I disagree with this argument that commercial performance enhancement will destroy social mobility. The effect would be strongly dependent on the economics of such a market. Conversely, an efficient market in a new technology can be democratising.

If, hypothetically, an inexpensive brain-computer interface could give a person access to information in a way that substitutes for conventional education, then this could negate the preexisting socioeconomic divide which education creates. In particular, conventional education is unavoidably expensive because it involves labour from at least the student, whereas the cost of a neuroprosthetic device, as with other electronic technology, can keep falling.


If hypothetically, a brain-computer interface were to exist, it would likely reach individuals that are well-off first. As the base solution becomes more widely available to the masses, enhancements to speed, quality of information capture, processing, and so on would help create tiers of the product, some of which would only be available for the right price. Coupling access to information devices with real experience (which often involves spending resources) means there would continue to be gaps in social mobility, further exacerbated by people's means to acquire the tech necessary to break through their socioeconomic status.


In order to have a positive effect, a technology only needs to be less discriminatory than the technology it replaces. I chose the example of education because it is very expensive and discriminatory today; a technology which replaces the need for tuition could still be expensive and improve social mobility.

> If hypothetically, a brain-computer interface were to exist, it would likely reach individuals that are well-off first.

While often true for technology generally, medical technology has special considerations. Rather than simply reaching the well-off first, performance-enhancing neurological implants are likely to reach the disabled first, where their use is deemed ethically more acceptable, as has been the case so far. Due to public healthcare, health insurance, and research contribution, patients' wealth is not always crucial here. It may be the case that costs reduce greatly before non-medical uses become acceptable. In the case of BCIs, the acceptability of their non-medically-necessary use cases would likely depend on the technology's maturity.

The extent to which market differentiation is possible is not yet clear for such technology, particularly if the market is competitive. Tiering is typically based on either having a unique product (e.g. nobody else can make an equivalent) which can be artificially crippled, or natural differences in production quality (e.g. only some cores of this processor work).


I find the top-level comments here on the whole shockingly negative, uninformed, wildly speculative, and even downright spiteful.

From the snide "hope there's no bugs" Titanic joke about an insulin pump -- This is a real mom and a real kid you're talking about. You think you're the first person in the room who's read about the Therac-25?? Do you think a snide joke implying catastrophic consequences is appropriate in this context!?

To the apophatic "almost as if she is performing research on a human child when they are not of the age to consent", which elevates missing the point nearly to an art form... almost as if you had memorized a handbook of research ethics guidelines yet somehow completely forgotten why we do research in the first place and what technology is even good for!

Then we have the comments whining about the author mentioning other work she's done (like having built this stuff for much of her professional career is somehow irrelevant to having thoughts on what the risks and opportunities are?) as though it were pure self-promotion. And the funny-if-not-sad, "Why isn't he turning HIMSELF into a cyborg but his son?", missing both the author's gender and her point. If you'd glanced at the byline or even the author bio before commenting, you could hardly have missed that the author is a woman, not any kind of a "HIMSELF". And maybe, just maybe, if you had read the article just a little more carefully, it would have occurred to you that she is turning her son into a cyborg and not herself because her son is the one who has autism.

> I don’t want to “cure” someone of themselves. Especially not my son. I want them to be able to share that self with the world.

There is fascinating, thoughtful stuff here touching technology, medicine, science, religion, parenting, and ethics, and there are discussions we could be having around these issues, but first we have to actually engage with what's being said.


As a counterpoint: the article is sloppy and full of clickbait. The fact that people are sexist / can't read is entirely tangential to that. Any of the "thoughtful stuff" you mentioned has been explored elsewhere in a context where we're not playing games with a real human being that literally can't give informed consent. (And personally, I find some of her more philosophical assertions about those issues pretty tasteless and lacking.)

> I find the top-level comments here on the whole shockingly negative, uninformed, wildly speculative, and even downright spiteful.

Welcome to the Internet


If people think the article is poorly written and poorly researched garbage, then they can say that and say why, or just not comment, but that's no excuse for posting poorly written and poorly researched garbage in response.

I'm not saying the article is great, but that the comments here should be better.


To me (not having read the article) the clickbait title sets the tone for the discussion, which makes jokes acceptable, despite the underlying seriousness of the topic.


Well perhaps that's the problem. I read a thoughtful if high-level piece about a technologist wrestling with difficult issues while trying to help her son, and came back to see a bunch of jokes based on the provocative title (which is typically picked by an editor not the author) rather than on the article.


This kind of thing makes me intensely uncomfortable, given that Wendell Berry is basically my spirit animal. But her points are really compelling. What's the difference between repairing damage (cochlear implants) and enhancing a healthy body? I love her rule: "You should not only be better when you’re using it, you should be better when you turn it off."

Also (for the Christians out there), as someone who believes in the Fall of Man: we are all broken in some way, so every enhancement that brings us closer to the way we were created (in God's image) is actually damage repair, even brain implants that bring us closer to health and holiness. My only doubt is whether we have the wisdom to pull off a work that seems to belong to God.


Muslim here; I've been thinking about something like what you're talking about. Right now my best answer is that we would have to define some threshold for what constitutes a gross imperfection. I think most people can agree that replacing a damaged organ or limb is fine, so we could go from there and progressively discuss more minor imperfections until it's something the overwhelming majority of people have.


This person is playing a dangerous game and using her autistic child as an experimental toy. She's overconfident and certainly does not understand all the ramifications of what she's doing.


Hacking the operation of an insulin pump for your child seems so insanely dangerous I could not even conceive of attempting it.

The other things she talks about, such as helping autistic people recognize emotions, could be enormously beneficial and much less potentially harmful.

My understanding of part of the autism spectrum is difficulty empathizing, not being able to understand cues of emotion. I have the benefit of being able to understand pretty well that words I've just spoken or actions I've taken have hurt somebody's feelings. If an AI can flash up an image of "this person is now sad" right after someone lacking that ability has done or said something, they have the opportunity to apologize or say it differently. That could greatly improve their social interactions and personal happiness.


> I could not even conceive of attempting it.

Someone had to write the software that runs on her kid's insulin pump. If you could make it better and had the resources to do it right, you'd probably find it less insanely dangerous and more like the obvious way to approach the problem.


I'd feel more comfortable with my software running the pump than someone else's, especially given the UI of most insulin pumps I've seen. They make it easy to get it wrong.


She demonstrated a bit of a misunderstanding of ADHD and the impact it has on people.

Interestingly, if her working memory device is as good as she says, it may have serious potential as a non-chemical treatment for ADHD.


But you're fully qualified to make absolute statements about someone else's family and someone else's knowledge and someone else's skill and someone else's ethical choices based on a pop article you just read!?


From the comment guidelines:

> Be kind. Don't be snarky. Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

This comment is not constructive; if you disagree with the parent's assessment, please explain how and why in order to stimulate meaningful discussion.


The parent did not make an assessment. The parent made a value judgement and commented not on the article but on the author in absolute terms. I assume good faith, and my comment pointed out that the parent commenter in fact was being arrogant, as it is arrogant to make value judgements about someone else's situation from a position of relative ignorance.


After reading this and researching the state of open-source diabetes care, it's quite disappointing to see that there isn't any work being done on open hardware insulin pumps, and that people are instead relying on insecure old existing pumps to make this work.


> When he was diagnosed with type 1 diabetes, I hacked his insulin pump and built an AI that learned to match his insulin to his emotions and activities.

Wow, hope there're no bugs in that one. I'd hate for the kid to be weeping over Titanic and get od'd with insulin.


I work in Biotech, and every time I read one of these articles/posts about how somebody hacked their insulin pump, people start asking why there aren't already affordable closed-loop pumps on the market if people are hacking their own with Raspberry Pis and the like. They clearly don't understand the regulatory process or the level of clinical testing that's needed to validate something that, if it malfunctions at night, can kill you.


As opposed to not doing anything, which is the current state of diabetes care for the most part, and which also kills you: if not outright, then slowly via financial ruin.

How many episodes of hypoglycemia do you think one needs to have, in an utterly broken healthcare system in which having paramedics ordered and being given sugar pills costs you $XXXX per occurrence?

And if you're going to talk to me about responsibility and healthy living, please join the real world some time.

That's why people choose to take matters into their own hands. Being a victim of a system that just doesn't care enough leads you to either take the reins or go under.


What does this even mean? Hacking an open-loop insulin pump and making it closed-loop doesn't eliminate the need for insulin, did you misread me as saying insulin prices are A-OK and diabetics should just eat healthy?

All power to these knowledgeable individuals for doing something dangerous but potentially great. This is HN, full of smart people who understand the risk of failure and will monitor the crap out of their system and adjust and tinker until it's OK. The problem is people making tutorials and publicizing their work, which leads to people who are not as smart following their tutorial and thinking it will be alright.

There are tons of papers on closed-loop systems, and probably clinical trials are happening as I write this comment. Medicine isn't one of the industries to "disrupt" though, as much as people on here want to believe.


A smart insulin pump sounds awesome. Bring it on. If it really is a smart insulin pump. Trouble is, the article's author was talking about 'reading his emotions', etc. That's a red flag for bullshit. AFAIK no publicly disclosed technology can, in general, read emotions. Also, why whatever signal they are using as 'emotions' should be a perfect predictor of insulin needs, I don't know.


I studied control engineering, and I bet he forgot some things. A few come to mind. How does he recognize sensor failure (pretty important in closed-loop systems; see MCAS, for example)? Did he use floating-point math, which may cause numerical issues? Does he use integrators, and does he limit their output? If he limits the output, does he counteract windup? If he uses neural networks, did he gather enough data that we can be sure this thing works?

There are so many pitfalls here, ones where teams of engineers from different disciplines, working with full part specs, have failed before. It sounds very dangerous.
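For readers unfamiliar with the windup problem mentioned above, here is a minimal sketch of a PI controller with output limiting and conditional-integration anti-windup. All names and gains are hypothetical, purely to illustrate the pitfall:

    # Sketch of a PI controller with output limiting and anti-windup.
    # Gains and limits are hypothetical illustrations.
    class PIController:
        def __init__(self, kp, ki, out_min, out_max):
            self.kp, self.ki = kp, ki
            self.out_min, self.out_max = out_min, out_max
            self.integral = 0.0

        def update(self, error, dt):
            # Tentatively integrate, then clamp the raw output.
            self.integral += error * dt
            raw = self.kp * error + self.ki * self.integral
            out = min(max(raw, self.out_min), self.out_max)
            # Anti-windup: if the output saturated, undo this step's
            # integration so the integral doesn't grow while clamped.
            if out != raw:
                self.integral -= error * dt
            return out

Without that anti-windup step, the integral keeps accumulating during saturation and the controller badly overshoots once the error finally reverses, which is exactly the failure mode you don't want when the actuator is an insulin pump.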


Fuzzing seems like a really useful test for this.

Either way, I agree; I'm not really sold on the idea that DIY life-critical systems are a good idea beyond a proof of concept.
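As a sketch of what such fuzzing could look like in practice, here is a property-based test using the hypothesis library. compute_dose is a hypothetical stand-in for the dosing logic under test, and the safe limit is an illustrative assumption:

    # Property-based fuzzing sketch using the hypothesis library.
    from hypothesis import given, strategies as st

    MAX_SAFE_DOSE = 2.0  # illustrative hard limit, in units of insulin

    def compute_dose(glucose_mgdl):
        # Hypothetical stand-in for the real controller logic.
        return min(max((glucose_mgdl - 110.0) / 50.0, 0.0), MAX_SAFE_DOSE)

    @given(st.floats(min_value=-1e6, max_value=1e6, allow_nan=False))
    def test_dose_never_exceeds_safe_limit(glucose):
        dose = compute_dose(glucose)
        assert 0.0 <= dose <= MAX_SAFE_DOSE

The point is that the property (the dose stays within hard limits for any input) gets checked against thousands of generated sensor values, including the weird edge cases a human tester wouldn't think to try.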


Maybe it isn't life critical but only quality of life critical?


If it pumps too much, you're dead - which is why I'm considering it life-critical.

Not enough is also bad, but hopefully they'll notice and can act accordingly - still a threat, though.

It's not that we're all saying this is an unsafe machine; we're saying that a random person DIY'ing and using such a solution is very risky. There's a reason why commercial medical equipment requires rigorous testing/certifications, etc.


Insulin pumps are both.


Usage of cheaper slow-acting insulin becomes viable if it is controlled by an algorithm that knows your insulin level.


You can already use slow-acting insulin; it just requires planning your day out really well, which is hard, and is why the slow-acting stuff sucks. The amount of data required to predict someone's glucose consumption/insulin requirements is massive, and needs to come from that individual. Then you'd need to build in overrides, because people change their habits and behave spontaneously, which puts them in danger. It isn't a simple problem.


That doesn't address his point, which is that a programmable pump isn't something you could sell because of the liability involved.


It is something you can sell, and is currently in development by major pharmaceutical companies.


Not end-user programmable, though.


I think the parent did not criticize the DIY community, but was only stating that different rules apply to biotech companies. I know many people who use the Loop system, and their health benefits immensely from it.


I will just say this. Someone in the open-loop / DIY pump community is eventually going to die or seriously injure themselves (or a family member) doing what they do. It's not an if, but a when.


As long as this is someone who's actually working on the tech and knows what they're doing, this is only an unfortunate and very sad cost of progress. It's the risk they take, because they know they're doing good work.

This does become a serious problem when it gets into the hands of people who don't know what they're doing, if they're convinced to trust the tech because "obviously the smart people working on it made sure it's safe".


Still, the parent comment stands. We might agree it would be better to do something, but regulation is regulation. To change that, you need to vote the right pers... Ouch, I guess that never works either.


While I understand the issue in general, it seems it should be relatively simple to validate that a closed-loop insulin pump is at least no worse. Isn't the objective of the closed loop just finer control? So can't you keep the same hard limits as the normal pump and allow the potentially buggy software to fine-tune only within a generally safe range?


Lots of things can kill us. People have been changing their car brakes for a long time. If your car's brakes fail, it can kill you, your passengers, and others outside of your car. People have been fixing their electrical appliances; they can electrocute you. Gas lines and furnaces can explode. There's a lot that we do that can kill us if done incorrectly. This doesn't mean that we shouldn't do it. At the end of the day, it's humans that built it.


I see your point and agree... there should be no difference. But I think a lot of times it's not WYSIWYG when it comes to software, because there is so much hidden infrastructure on top of something simple.


I don't understand the justification. Because people do dumb things, we should do more dumb things?

>This doesn't mean that we shouldn't do it.

Why should she put her son at risk for a science experiment?


Because the risks of not doing so are even worse.

If it seems horrible, that's because the reality that diabetics have to live with is horrible.


Are you saying that changing your own car brakes and fixing your own electrical appliances are “dumb things” to do?


Crossing the road turns out to be pretty dumb too, let's just stay at home in future.


To be fair, that same process allowed hackable (maliciously or not) devices out into the world:

https://www.webmd.com/diabetes/news/20190628/fda-recalls-ins...


Believe me, I'm not saying the FDA is some perfect organization that can do no wrong, but they do exist for a reason. There definitely needs to be more scrutiny as tech (read: computers/AI) starts making its way into therapeutic devices.


It's a trade-off. You can choose to wait for something to come to the market and suffer the potential consequences, or you can go the DIY route and reap the benefits today. There are risks either way. My wife and I aren't willing to experiment with our son's life. If it were for myself, I'd absolutely give OpenAPS a shot. My son has a Dexcom & a Tandem x2 pump, and in Q4 they'll deliver a software update that will provide a basic closed-loop system. 'Basic' in the sense that it will only be able to reduce the likelihood of a high blood sugar. We're still waiting for the dual-hormone pumps that will be able to control blood sugar in both directions.

Prior to this pump + Dexcom, he had a Medtronic 530g + glucose monitor. It was not internet connected out of the box, it required a $100 gadget that would transmit his blood sugar to his cell phone and then to medtronic's awful website. Fortunately folks developed Nightscout to scrape Medtronic's website and provide a much better experience. With Dexcom being a much more modern option (the CGM transmitter has built-in bluetooth) - we don't really need Nightscout, but it's convenient for me to watch his sugar on my 2nd monitor when he's at school (he's only 7.)

The problem is not in my opinion regulatory processes or clinical testing. It's been simply a lack of urgency/competition. Beta Bionics has started to make Medtronic and other competitors step up their game in recent years. https://www.betabionics.com/


The situation reminds me of the regulatory environment in aviation. Aviation in general is super regulated for the usual “people could die” set of reasons. If you buy a normal certificated airplane like one from Cessna or Piper, you are subject to heavy regulations around what you can do to it in terms of maintenance. You can’t do so much as run a USB charger to the cockpit panel without the FAA coming down on you like a ton of bricks. You have to go to people with the right government $tamp to get anything fixed. It’s a huge burden, but it’s all in the name of safety.

The FAA does however have a parallel track, called Experimental-Amateur Built (E-AB) where if you build the airplane yourself for educational or recreational purposes, it’s subject to a mostly separate set of rules. They are generally more relaxed in terms of what maintenance and modifications you can do but stricter in terms of what you can do while flying (can’t take passengers for hire, for example). Most pilots prefer the certified/safety-first/highly regulated track, but the government recognizes it’s a trade-off and offers the other option for those who prefer a more DIY-friendly world.

Not saying this would necessarily work for medical devices, but there is precedent for acknowledging the trade-offs when it comes to DIYing safety-critical stuff.


It's understandable that the regulatory process errs way on the side of restrictiveness: if someone dies because it's not restrictive enough, that's easy to identify, and it can become a lawsuit and a political campaign, Senators can get involved, agency heads can resign, civil servants can get fired. Meanwhile, the people who die because it's too restrictive are much more difficult to identify, and their deaths can't be blamed on a single decision. (Maybe the unapproved treatment wouldn't have worked anyway, and maybe nobody even knows it could have existed.)

So, given that whatever decision they make will kill masses of people, regulators have very strong incentives to kill them by being too restrictive, rather than kill a different mass of people by not being restrictive enough.

The solution to this problem is clear: the person with the strongest incentive to keep you alive is you. Any regulator or drug-company executive will reliably put other considerations above your survival under some circumstances. So, except in cases of people who really cannot be trusted with life-and-death decisions — children and some severely mentally impaired people — you should have the final say on what medical treatments you try or don't try.

In short, the whole regulatory structure should be scrapped. We still need clinical trials, but not before new treatments become available; after.

This sounds like a radical proposal, but in fact it's how the process worked in the US during the time that we owe most of our revolutionary medical treatments to, the 1920s through the 1960s. That's when most (all?) of our currently marketed antibiotic families were discovered, when one dude spent a couple of years prototyping the pacemaker in his barn and then convinced his friend at a hospital to start trying it in patients, when artificial joints were developed. We're still coasting on the legacy from that era.


> I work in Biotech, and every time I read one of these articles/posts about how somebody hacked their insulin pump, people start asking why there aren't already affordable closed-loop pumps on the market if people are hacking their own with Raspberry Pis and the like. They clearly don't understand the regulatory process or the level of clinical testing that's needed to validate something that, if it malfunctions at night, can kill you.

There are plenty of closed-loop feedback systems we are all hooked up to which can malfunction and kill you. Casually thinking of the automotive industry, I can come up with:

- ABS brakes
- engine control units
- transmission control units
- power steering
- traffic lights

And that's just one sector, to say nothing of aviation, mass transit, power plants, etc.

Granted, the malfunction-to-fatality ratio is probably different, but the idea that 'closed loop is impossible' isn't so credible... though perhaps you are speaking strictly w/r/t the regulatory climate.

That said, we have had pacemakers for many years.


The alternative to a pacemaker is death from an irregular heartbeat. The alternative to closed-loop insulin is a lower quality of life.

The FDA is set up to optimize survival, not quality of life. The standards will be much higher for these pumps.


Yes, it makes me shudder thinking of what could go wrong.

I spent a year hooked up for 8 hours a night to a CPD dialysis machine - I read the massive manual, and there is a long section on what can go wrong, including "Death" - no way would I start hacking on that.

Though I did think about using a camera and ML to monitor the waste output for signs of contamination.


And yet MANY people willingly make that tradeoff. https://www.usatoday.com/story/tech/2019/06/05/diabetics-for... Also with CPAP devices, which are less likely to kill you but it's still possible. https://piunikaweb.com/2019/02/21/rise-and-fall-of-sleepyhea...


People also tend to greatly underestimate the risk of negative outcomes.


This implies that if a medical device (or pharmaceutical) passes through the regulatory process it will never malfunction, which is absolutely not true.


It doesn't imply it will never malfunction. It implies it went through a scientifically rigorous process to minimize the chances it will malfunction.


And often there are warranties in place - that is, if it does fail under normal operation, you can sue them and get millions. Which won't buy your loved one back, but it's a strong deterrent against the company trying to palm off dysfunctional shit.


That doesn't help if you're dead or severely injured... besides, how many of these cases actually go to court and win?


It doesn't fix those things, but it does help deal with them, at least where the long-term financial impacts are concerned.

You're not going to squeeze a lifetime worth of lost salary out of a random Redditor who told you how to tweak your insulin pump.


It really just implies that the odds are sufficiently improved. Which may or may not be true, but which has a much better shot at it.


Oftentimes products are expedited through this ‘scientific process’ to get on the market faster. If the regulatory process is truly about safety, why does this exception even exist?


Whether it's actually a good idea isn't something I know enough of the real numbers to comment on, but it seems plenty possible that an expensive step that substantially reduces risk might be appropriate in some circumstances and not in others.


> if the regulatory process is truly about safety why does this exception even exist?

Because the risk is worthwhile at times.

The exceptions typically apply to rare diseases that impact a population too small to be profitable enough to justify the $1B or more it may cost to shepherd a device or medication through the normal process.

These are populations who would be potentially unserved otherwise. (I'm of the personal opinion that this is where the government should step in instead of private industry, but I'm pretty left of center.)

https://www.fda.gov/industry/developing-products-rare-diseas...

IIRC, an exception of this sort was used to expedite an Ebola vaccine due to the recent outbreak... because, well, Ebola.


I'm running an open source closed-loop system on my iPhone. It controls my insulin pump based on data from a continuous glucose monitor. Those of us that do this don't do it blindly. We see the blood sugar fluctuate and can react if something goes wrong.

Many parents use tools to do this monitoring remotely for their child's system. I believe that if you have access to the source code and possess the knowledge to understand it and debug it, the improvements to the child's life are greater than the risks you take.

The sentence about "emotions and activities" is a little misleading. It reacts to the data from the CGM. But yes, blood sugar changes with emotions and activities. That's the life of a type 1 diabetic, no matter if you use a closed-loop system or not!

I'd hate for the kid to be weeping over Titanic and get od'd with SUGAR.


I read this and it struck me as irresponsible too. By all means hack your own pump, since it's your own life you're risking. Absolutely do not hack the pump of another. I'm not even sure doing so is legal.


Just because it's your child doesn't mean you get to perform open-heart surgery. There's some fat gray line between what a parent can do medically and what they can't. Hacking an insulin pump feels like it is over the line, but I'm not sure. If my child were on track to lose a leg and their kidneys in the next few years, their doctors told me nothing more could be done, and I were an AI expert who had educated myself on the domain, I'd expect people to stay the hell out of my way.


Bah. Standard methods with more careful monitoring, prediction, and precision adjustments should still work while not being anywhere near as risky. You can still use AI or other advanced statistics to program dose sizes and timing.

A semi-closed-control "dumb" programmable pump, that is. The human is in the loop, and it is not realtime, so it's potentially worse... or not. Few and predictable failure modes.
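A minimal sketch of that kind of human-in-the-loop approach: extrapolate the recent CGM trend and alert the wearer instead of dosing automatically. The thresholds and reading interval here are illustrative assumptions:

    # Advisory-only sketch: predict the glucose trend and alert a human.
    # Assumes CGM readings every 5 minutes; thresholds are illustrative.
    def predict_glucose(readings_mgdl, minutes_ahead=30.0):
        """Linear extrapolation from the last two CGM readings."""
        slope_per_min = (readings_mgdl[-1] - readings_mgdl[-2]) / 5.0
        return readings_mgdl[-1] + slope_per_min * minutes_ahead

    recent = [140.0, 128.0, 115.0]   # a falling trend
    predicted = predict_glucose(recent)
    if predicted < 80.0:             # hypothetical low-glucose threshold
        print(f"Predicted {predicted:.0f} mg/dL in 30 min: act now")

The failure modes really are few and predictable: the worst a bad prediction can do here is raise a false alarm or miss one, because a human still makes the dosing decision.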


These kinds of things are interesting, but at the same time I doubt the people making them overlap much with the set of people who build safety-critical systems and actually deal with the constraints necessary to not kill people when things go wrong.


At least some of the people who do this kind of thing take safety very seriously.

https://openaps.org/reference-design/


That's nice to see; some of my pessimism may be unfounded then :)


How does it work? Is the insurance company happy with that? What if they refuse to honour claims on the grounds that you are tampering with a medical device, causing the treatment to fail?


What about this design?

1. Take a "dumb" open-loop controller, f(t).

2. Take a "smart" closed-loop controller that reads the glucose levels and f(t), and outputs g(t), a correction term.

3. Clamp the correction to a safe range to get h(t).

4. Output f(t) + h(t).

If the closed-loop controller crashes, the output is f(t). If the closed-loop controller goes haywire, the output will still be safe.
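Here's a minimal sketch of that design. The schedule, the proportional rule, and the clamp value are all hypothetical placeholders:

    # Sketch of the proposed design: fixed open-loop schedule f(t) plus
    # a clamped closed-loop correction h(t). All values are hypothetical.
    CORRECTION_LIMIT = 0.2  # max units/hour the smart layer may add/subtract

    def open_loop_basal(t_hours):
        """The 'dumb' pre-programmed schedule f(t), in units/hour."""
        return 0.8 if 6 <= t_hours < 22 else 0.6

    def correction(glucose_mgdl):
        """The 'smart' term g(t), clamped to a safe range to give h(t)."""
        g = (glucose_mgdl - 110.0) / 500.0  # hypothetical proportional rule
        return min(max(g, -CORRECTION_LIMIT), CORRECTION_LIMIT)

    def dose_rate(t_hours, glucose_mgdl):
        # If the smart layer throws, fall back to f(t) alone; either way
        # its influence can never exceed the clamp.
        try:
            h = correction(glucose_mgdl)
        except Exception:
            h = 0.0
        return open_loop_basal(t_hours) + h

The appeal is that the safety argument only has to cover the clamp and the fallback, not the whole smart controller.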


I see a lot of comments here about how we should hack these things. I think that's good. The medical field does need hackers and makers.

BUT I find a very disturbing thing being glossed over. The son can't consent to this experimentation.

That's where I draw the line. You can hack away on your own devices all day. You know the risks involved, or at least are aware that some exist (it's hard to know all the risks with anything medical). But hacking away on a minor's device isn't just risky, it is unethical. There's a reason this type of thing is hard to get past an experimentation review board. I know the parent can consent for the child, but there are limits to this (a parent can be considered to be endangering their child if they don't vaccinate, for example). I know there is likely low risk in this type of hacking, but unless the upside makes a substantial quality-of-life difference (I don't know what it's like to live with type 1), it is concerning that you would risk a child's health. When it comes to health, I think there's a good reason to be conservative.

Once they're an adult (I'd even be fine with a 16-year-old), I think the equation changes. Hacking away is good, but we also need ethics.


Doing nothing is also a choice, and also comes with risk.

Accepting the built-in device software is also an "experiment."

Being conservative does not always mean accepting the status quo.


> Accepting the built-in device software is also an "experiment."

An experiment with substantial testing behind it, and one that is legally backed if something goes wrong. These are class 3 FDA-approved devices, which means they have undergone rigorous testing. They aren't just FDA-cleared.

So calling them an "experiment" is really understating it. There's a huge difference between an experiment with one participant, run by someone without formal medical knowledge, and many experiments with large sample sizes performed by medical professionals who are experts on the specific device at hand. The two aren't comparable, and it's a ridiculous notion to equate them.

In the context of my statement, being conservative means accepting lower risk.


You're looking at the risk of the software failing, and I'm talking about the risk of the entire system. Which includes the kid who has to control the pump, and we are not the experts on the kid.


From what I have read, this is a common thing to do, especially to get some "advanced" features w/o having to pay an arm and a leg for a higher-end pump.

There is even insulin pump firmware available online.


Most likely she is using readings of the glucose level (which varies with emotions and activities), but that wouldn't sound as cool, so she replaced it with some hand-waving.


tl;dr:

Do my son's engineered superpowers make him more human, or less?

The more different you are, the more valuable you become.

My particular area of research and development is cognitive neuroprosthetics: devices that directly interface with the brain to improve our memory, attention, emotion, and much more.

How can we respect someone's humanness while also giving them the choice to become more like the majority of humans?

In a world that values difference, untypical humans paired with neuroprosthetics might become even more powerful than fully abled ones.

If these kinds of augmentations can lift them above the crowd, soon everyone will want to be more than human.

It's seductively easy to imagine a world in which we're a little smarter or a bit more creative, in which our kids have the latest advantage.


>Do my son's engineered superpowers make him more human, or less?

Most people don't even know what "human" means; this idea of superpowers affecting your humanity just means you didn't grow beyond comic book stories.

>The more different you are, the more valuable you become.

Only in the eyes of blind people who cannot even see that every atom in this Universe is unique. How can you be more (or less) different than that?

>How can we respect someone's humanness while also giving them the choice to become more like the majority of humans?

There is no majority, only individuals. It's just an image in your mind; if you travel around the world a bit you will see how limited this is.

>In a world that values difference

Maybe it's time to stop calling your neighborhood a world? :)

>It's seductively easy to imagine a world in which we're a little smarter or a bit more creative, in which our kids have the latest advantage

Kids have an advantage in every generation. The question is how being a little smarter or more creative is going to make you happier. The story tries to connect this to being different, "above the crowd". But if everyone is above the crowd then there is no crowd, and nobody will look up to your difference. It's a dead end in many ways, but peace, happiness, joy, bliss: those are all real, and they have nothing to do with how others look at you.


> Do my son's engineered superpowers make him more human, or less?

There's a line to be drawn here, and it depends on the functionality of the implant: implants that restore normal function versus implants that do something else, which healthy humans could not achieve otherwise.


Why are you choosing to draw the line there? I don't understand the rationale for that choice.


I took it as: humans are incredibly complex things with properties ranging along curves. What happens when one of these properties gets pushed far beyond the previous maximum of its curve? What happens when people can stay hyperfocused for 12 hours straight? When their IQs jump 100 points on average? Will they look upon normal people like we look upon cattle (historically, slavery is the normal state of mankind)? When people can turn their modes on and off, will they pick their favorite mode and never turn it off?

Yes, it is certainly useful to differentiate between human properties in the normal range and those so far exceeding the norms that we have zero evidence on what to expect.


If we expect "superhumans" to look at us like cattle, does this reflect the idea that we look at "subhumans" like cattle? Isn't that just projecting our own moral failings onto other people?

(So far the evidence seems to point towards the notion that you can't increase human performance by pushing along a single axis. We are near a local optimum in a massive, multidimensional space. Pushing too much in any one direction will move us farther from the local optimum.)


The counterpoint is that superpowers do not come bundled with superior morality, but the consequences are magnified.

We're facing this every day, with the rich and powerful making likely bad choices, for themselves and for others. Why would any other kind of superhuman be different?

The question is only of magnitude - what could one person do? How would they do it? What would be the cost or benefit?


By one definition, the rationale seems reasonably straightforward. We imagine some Platonic ideal human, or perhaps some range. It seems reasonable to say that moving toward that is becoming "more human", away is becoming "less human".

This doesn't directly imply that interventions that make someone "less human" should be avoided.


> It seems reasonable to say that moving toward that is becoming "more human", away is becoming "less human".

That's not a rationale, that's just a criterion. My question is, "What is the rationale for avoiding these types of changes?"


It's a rationale for the belief, which is what I thought you were asking about. It's not a rationale for avoiding the changes, but I don't see that advocated as such in this thread.


> Implants that restore normal function versus implants that do something else which healthy humans could not achieve without.

Where is that limit reached? In the case of prosthetic limbs, should we seek to achieve perfect replication of organic behavior, or is anything beyond that inhuman in some arcane way? Oscar Pistorius, AKA "Blade Runner", won Paralympic medals and competed at the Olympics on prosthetic legs. Is he advantaged or disadvantaged compared to other participants?

The anime series Texhnolyze touches heavily upon this subject, amongst others.


It's time for humans to disappear from the world stage. The lack of self-replication means that in the future more and more robots will be needed to care for aging populations. Humans are already incapable of building robots without machines. The only value humans will provide is giving them orders by programming them. General AI will never arrive, but we will only ever need a million or two robotics developers, the same way we only need a few million software developers.


[flagged]


She. Herself. It's written by a woman. See the name of the author of the article, the picture, the fact that she refers to herself as a mother and a ton of other clues.

As to why she's turning her son into a cyborg, that's pretty well explained in the article, though one might disagree with the fact that she's doing this at all.


> Why isn't he turning HIMSELF into a cyborg but his son?

It'd be a little silly to install an insulin pump on someone without diabetes.


Not as silly as some other things - insulin is anabolic, after all. It would be more interesting to install an adrenaline or steroid (testosterone, cortisone, gonadotropin, and inhibitors) pump on a healthy person, but harder still to figure out how, and if, it works.


And won't this make the kid's body, probably unconsciously, generate feelings just to get another dose of insulin?

Injecting insulin according to emotions might give bad cues about how you "should" feel in each situation.


It's insulin, not morphine. The body doesn't crave higher and higher doses of it.


I don't mean addicted. Anyone who has ever gone through sugar highs and lows, or has experienced how your body reacts to a single candy when your glucose is low, or even how it reacts when you were in need and took an insulin shot... knows that you can feel it.

And if you can feel it, your body can learn to modulate your mood and feelings to get (or to avoid) another dose.



