> "Those human brains are just synapses firing. They depend on nature to survive. Not a problem"

When it comes to AI, we have just a lone detached brain, not in control of anything, which cannot even "fire" by itself: someone has to provide its inputs.

> Suppose we create a server emulation of a highly intelligent, manipulative serial killer <...> How do you feel about this?

Quite indifferent: the only field I can see for such a simulation is game development, but that would be huge overkill.



> When it comes to AI, we have just a lone detached brain, not in control of anything, which cannot even "fire" by itself: someone has to provide its inputs.

By assumption, this AI has already been created, so presumably someone is willing to do that -- and given its superhuman manipulation abilities, their willingness will probably not change.


Then the AI's abilities are effectively limited by those of this malicious operator, who has to understand and carry out what the AI suggests and feed the results back.


Not true, for a couple of reasons:

* If the malicious operator has a superhuman advisor, that will increase their ability

* If the emulation gets connected to the internet, it can work way faster. Many jobs can be done remotely

The broader point is: abandon wishful thinking and actually consider the possibility of a worst-case scenario.


> If the malicious operator has a superhuman advisor, that will increase their ability

Only when it comes to information processing. The inputs may (and will) be incomplete, incorrect, ambiguously formulated... The outputs may (and will) be misunderstood. And misunderstood instructions may (and will) be poorly performed.

> If the emulation gets connected to the internet, it can work way faster. Many jobs can be done remotely

What one man has connected, another can always disconnect... And not everything is on the internet.

> abandon wishful thinking and actually consider the possibility of a worst-case scenario

The problem with all those scenarios is equating superintelligence with omniscience and omnipotence, which is plain wrong. Physics matters.



