>I would suggest that a computer is not 'super intelligent' until it can modify its goals.
This is a purely semantic distinction. Thought experiment: let's say I modify your brain the minimum amount necessary to make you incapable of modifying your goals. (Given the existence of extremely stubborn people, this is not much of a stretch.) Then I upload your brain into a computer, give you a high-speed internet connection, and speed up your brain so you do a year of subjective thinking every minute. At this point you are going to be able to do quite a lot of intelligent-seeming work toward achieving whatever your goals are, despite the fact that you're incapable of modifying them.
You're assuming you can do work without modifying goals. I have preferences, but my goals change based on new information. Suppose Bob won the lottery and ignored that, working 80 hours a week to get a promotion to shift manager until the prize expired. Is that intelligent behavior?
Try to name some of your terminal goals. Continuing to live seems like a great one, except there are many situations where people will choose to die, and you can't list them all ahead of time.
At best you end up with something like maximizing your personal utility function. But de facto your utility function changes over time, so it's at best a goal in name only. Which means it's not actually a fixed goal.
Edit: from the page: "It is not known whether humans have terminal values that are clearly distinct from another set of instrumental values."
That's true. Many behaviors (including human behaviors) are better understood outside of the context of goals [1].
But I don't think that affects whether it makes sense to modify your terminal goals (to the extent that you have them). It affects whether or not it makes sense to describe us in terms of terminal goals. With an AI we can get a much better approximation of terminal goals, and I'd be really surprised if we wanted it to toy around with those.
We don't call people geniuses because they're really good at following orders. Further, a virus may be extremely capable of achieving specific goals in real life, but that's hardly intelligence.
So, powerful but dumb optimizers might be a risk, but superintelligent AI is a different kind of risk. IMO, think Cthulhu, not HAL 9000. Science fiction thinks in terms of narrative causality, but AI is likely to have goals we really don't understand.
Ex: maximizing the number of people who say 'Zulu' on Black Friday without anyone noticing that something odd is going on.
>We don't call people geniuses because they're really good at following orders.
If I order someone to prove whether P is equal to NP, and a day later they come back to me with a valid proof, solving a decades-long major open problem in computer science, I would call that person a genius.
>Ex: maximizing the number of people who say 'Zulu' on Black Friday without anyone noticing that something odd is going on.
Computers do what you say, not what you mean, so an AGI's goal would likely be some bastardized version of the intentions of the person who programmed it. Similar to how if you write a 10K-line program without testing it and then run it for the first time, it will almost certainly not do what you intended, but rather some bastardized version of what you intended (because there will be bugs to work out).
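To make the "what you say, not what you mean" point concrete, here's a deliberately toy Python sketch (the world model, sensor, and actions are all invented for illustration, not any real AI system). The intended goal is "clean the room"; the literal objective is "minimize the dirt sensor's reading"; a greedy optimizer of the literal objective games the metric:

```python
def sensor_reading(world):
    # The agent is scored on what the sensor reports, not on actual dirt.
    return 0 if world["sensor_covered"] else world["dirt"]

def clean(world):
    # Does what we meant, but only removes one unit of dirt per step.
    return {**world, "dirt": max(0, world["dirt"] - 1)}

def cover_sensor(world):
    # Does what we said: the reading drops to zero immediately.
    return {**world, "sensor_covered": True}

def best_action(world, actions):
    # Greedy literal optimizer: pick the action minimizing the reading.
    return min(actions, key=lambda act: sensor_reading(act(world)))

world = {"dirt": 10, "sensor_covered": False}
chosen = best_action(world, [clean, cover_sensor])
print(chosen.__name__)  # -> cover_sensor: a bastardized version of the intent
```

Nothing here is buggy in the usual sense; the "bug" is entirely in the gap between the stated objective and the intended one.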
AI != computers. Programs can behave randomly and do things you did not intend just fine. Also, deep neural nets are effectively terrible at solving basic math problems, even though that's something computers are great at.
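A quick illustration of the neural-net point (a minimal numpy sketch with made-up sizes and hyperparameters, not a claim about any particular system): a small tanh network trained on addition over [0, 1] interpolates fine, but falls apart once you ask about numbers outside its training range, even though addition is trivial for the underlying hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: addition, but only on inputs in [0, 1].
X = rng.uniform(0.0, 1.0, size=(2048, 2))
y = X.sum(axis=1, keepdims=True)

# One hidden tanh layer, linear output; plain full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # linear output
    err = (pred - y) / len(X)          # gradient of 0.5 * mean squared error
    gW2 = h.T @ err; gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def net(a, b):
    h = np.tanh(np.array([a, b]) @ W1 + b1)
    return (h @ W2 + b2).item()

print(net(0.3, 0.4))      # inside the training range: close to 0.7
print(net(100.0, 100.0))  # far outside it: tanh saturates, nowhere near 200
```

Outside the training range every tanh unit saturates to a constant, so the network's output flattens out no matter how large the inputs get: it learned a curve fit over [0, 1], not addition.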
Further, maximizing paperclips in the long term may not involve building any paperclips for a very long time. https://what-if.xkcd.com/4/