
The issue is if they lash out in some incomprehensible way, or lash out as an alien superintelligence. If they lash out as a human would, that's fine.


Depends on how much power the human has.


The super-AI is going to have power. Deployed everywhere, used by millions, etc.

You have two choices:

- It can potentially lash out in an alien-like way.

- It can potentially lash out in a human-like way.

Do you understand why this has no effect on the argument whatsoever? You are just introducing an irrelevant observation. I want the AI to behave like a human, always, no exceptions.

"What if it's a bad human"

Jesus. If people make an evil AI, then how it behaves doesn't matter anyway; it's bad even before we get to the discussion of how it fails. Even when it accomplishes tasks successfully, it's bad.


> Do you understand why this has no effect on the argument whatsoever? You are just introducing an irrelevant observation. I want the AI to behave like a human, always, no exceptions.

Do you like how humans behave? Also, how DO humans behave? What kind of childhood should we give the AI? Daddy issues? Abused as a child? Neglected by a drug addicted mother? Ruthlessly bullied in school?


We're discussing behavior in the context of a test (along the lines of the imitation game as defined by Alan Turing).

It's not a psychology exercise, my dude.


Of course it is. You seem adamant that you want them to behave in a human way. Humans have behavioural patterns that are influenced by their childhoods, and sometimes those are particularly messy.

So... you either wish to replicate that or you don't.


"behave in a human way" is a vague reference to a more specific, non-psychological idea that I presented earlier.

I just explained that to you. Either we discuss this in terms of the imitation game thought experiment, or we don't.

