> One challenge with stimulating areas of the brain associated with mood, he says, is the possibility of overcorrecting emotions to create extreme happiness that overwhelms all other feelings.
Terrifyingly, this risk does not seem far from the wireheading described in gwern's article, [Terrorism is not Effective](http://www.gwern.net/Terrorism-is-not-Effective).
> I have just laid out a scheme whereby agents extraordinary only in dedication have exerted world-shaking power. Similar scenarios are true of other sectors. (The Secret Service works hard, but can they protect the President against the 100 fanatics?) Destruction and offense is always easier than construction and defense, but it’s hard to see why the fanatic advantage would be completely negated in constructive enterprises. (Small groups of programmers and engineers routinely revolutionize sectors of technology, without being especially fanatical.) But of course, we see very few such schemes in either direction. That is the point. There is a very large gap between what we can do and what we will do. Coordination is extremely hard (see again the principal-agent problem).
> But the scary thought is - will things remain that way? I have been at pains to keep the agents ordinary. Is there any way now or in the future to create such agents? [...]
> In short, is there any reason to believe wireheading will not work in humans like it works in mice? [...] That is one scenario. Here is another: the electrode is under the control of a program connected to metrics chosen by the subject, like going to the gym. (Related topic: nicotine & habit-formation.) The incentives are much more closely aligned: the subject could gain control of the stimulation, but that would frustrate another goal of his (going to the gym). Imagine the program hooked up to a comprehensive plan for attacking Goldman Sachs; one rather doubts that an agent will break the plan and not eat bulgur pilaf if that means he is simultaneously sabotaging the plan and also depriving himself of pleasure.
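To make the incentive structure of that second scenario concrete, here is a minimal sketch of the kind of control program gwern gestures at. Everything in it is hypothetical and purely illustrative (`goal_met`, `deliver_stimulation`, and the gym example are stand-ins, not real APIs); the point is only that the reward is gated on a goal the subject chose, so tampering with the device means sabotaging their own plan.

```python
# Purely hypothetical sketch of gwern's second scenario: stimulation
# is released only when a self-chosen metric is met, so defecting
# also means forgoing the pleasure. No name here refers to a real API.
from typing import Callable


class CommitmentWirehead:
    """Gate a reward behind a goal the subject picked themselves."""

    def __init__(self, goal_met: Callable[[], bool],
                 deliver_stimulation: Callable[[], None]):
        self.goal_met = goal_met                      # e.g. "went to the gym today"
        self.deliver_stimulation = deliver_stimulation  # e.g. triggers the electrode

    def check_in(self) -> None:
        if self.goal_met():
            self.deliver_stimulation()  # goal met: pleasure delivered
        # goal missed: no stimulation -- and hacking the device would
        # defeat the subject's own reason for installing it in the first place


# Usage sketch with stub callbacks:
device = CommitmentWirehead(
    goal_met=lambda: True,                        # pretend the gym visit happened
    deliver_stimulation=lambda: print("reward"),  # stand-in for the electrode
)
device.check_in()
```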
> (Small groups of programmers and engineers routinely revolutionize sectors of technology, without being especially fanatical.)
In my opinion, that's an artifact of the current state of software as a field. As the field grows and the distance between its subfields widens, it's going to get more and more difficult for small groups to get anything revolutionary done.
You could argue that abstraction will offset this difference, but I doubt it, as you generally need people who are invested in those layers of abstraction to fix bugs. Occasional bugs will be fixable by the small team, but we're already outsourcing the Big Bugs to other, more specialized teams, even if we don't realize it (using a library is just outsourcing work to another 'research team'). But then the question becomes: if a bug fix for team X draws on seven different teams unconnected to team X's project, does it still count as a 'small group'? At what point do you account for the critical-yet-unconnected effort of those other teams?
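As a rough illustration of how much 'outsourced' work even a small project leans on, here is a sketch that walks an installed Python package's transitive dependencies using the standard library's `importlib.metadata`. The package name at the bottom is just an example, and the requirement-string parsing is deliberately crude.

```python
# Rough sketch: count the transitive dependency tree of an installed
# Python package, as a proxy for how many external 'research teams'
# a small project implicitly relies on.
import re
from importlib import metadata


def transitive_deps(package: str, seen: set | None = None) -> set:
    """Recursively collect the names of a package's dependencies."""
    if seen is None:
        seen = set()
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # dependency not installed locally; stop here
    for req in requires:
        if "extra ==" in req:
            continue  # skip optional extras like '; extra == "test"'
        name = re.match(r"[A-Za-z0-9_.-]+", req).group(0)
        if name not in seen:
            seen.add(name)
            transitive_deps(name, seen)
    return seen


if __name__ == "__main__":
    deps = transitive_deps("requests")  # any installed package works here
    print(f"requests leans on {len(deps)} other projects: {sorted(deps)}")
```

Even a handful of direct imports typically fans out into dozens of projects maintained by people the 'small team' has never met.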