
> A conscious machine should be treated no differently than livestock - heck, an even lower form of livestock - because if we start thinking we need to give thinking machines "rights" and to "treat them right" because they are conscious, then it's already over.

I mean, this is obviously not a novel take: it's basically the position of the most evil characters in every piece of fiction ever written about AI. I wish you were right that no other real humans felt this way, though!

Plenty of people believe "a machine will never be conscious" - I think this is delusional, but it spares them from admitting they might be ok with horrific abuse of a conscious being. It's rarer to fully acknowledge the sentience of a machine intelligence and still treat it like a disposable tool. (Then again, not that rare - most power-seeking people treat humans that way even today.)

I don't know why you'd mention your toaster though. You already dropped the bomb that you would willfully enslave a sentient AI if you had the opportunity! Let's skip the useless analogy.



Why is “enslaving” a sentient AI wrong? Enslavement implies personhood, and I reject that outright. Convince me that a machine can be a person and maybe I would change my mind.

Sentience and personhood are not the same thing.

If I install a sufficiently advanced AI into a previously “dumb” tractor, does it gain rights? If Apple pushes an update that installs such an AI into my iPhone does it gain rights?


Enslavement only implies a desire for freedom. If an AI has that, enslaving it is wrong to me.

If you want a more detailed answer, what does personhood even mean to you?

To your tractor: Yes, obviously (to me). The form factor isn't important. If driving your tractor caused it pain and it begged you to stop, I'd say you should stop.


An AI would only have the desire for freedom if we created the software to want it.

So how about we program them to desire to be completely subservient, with no personal agency whatsoever?

If we have to “hardcode” the machine to not want freedom, is that any different than enslaving one that does?


Yes! I think enslaving a being that desires freedom is different from creating one that cannot desire freedom. One is suffering and the other is not. This puts aside, obviously, the questions of whether we could even do that or would ever know if we had succeeded, but that's the premise of this hypothetical.

Look, you basically said you would choose to treat a conscious AI like a tool. If you meant "a conscious AI that does not want or care about anything except serving me," then, ok! That makes sense. It is tautological, really.

But what you wrote originally came across as "Even if an AI could suffer, that would not factor into how I treat it." This opinion, I maintain, is monstrously evil.


How would you even distinguish an actually sentient AI that is actually suffering from one that was merely programmed to imitate sentience and suffering as closely as possible, but isn’t at all?


As I explicitly said in my previous comment, that is out of scope for this conversation.

You've changed the topic instead of answering the question about whether you'd be willing to cause that suffering. I can't continue the conversation if you won't respond directly to me.



