Hacker News — ParanoidAltoid's comments

https://twitter.com/thiagovscoelho/status/172650681847663424...

Here's a tweet transcribing OpenAI interim CEO Emmett Shear's views on AI safety, or see the YouTube video for the original source. Some excerpts:

Preamble on his general pro-tech stance:

"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Every technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."

On why AI would be dangerous to humanity:

"If you build something that is a lot smarter than us—not just somewhat smarter, but as much smarter than we are as we are than dogs, for example, a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."

On his path to safe AI:

"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."

On why skeptics need to stop side-stepping the debate:

"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.

"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."


THE FEAR AND TENSION THAT LED TO SAM ALTMAN’S OUSTER AT OPENAI

https://txtify.it/https://www.nytimes.com/2023/11/18/technol...

NYT article about how AI safety concerns played into this debacle.

The world's leading AI company now has an interim CEO, Emmett Shear, who is basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, which is spending 50 billion a year on datacenters.

Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. Emmett Shear's views are best laid out here:

https://twitter.com/thiagovscoelho/status/172650681847663424...

And note that Shear has tweeted that the Sam firing wasn't safety related. These might be weasel words, though, since all players involved know the legal consequences of publicly admitting to any safety concerns.


Most of us will be at home fighting about whether it should really be called WWIII, while only an unlucky few fight the actual war. Global capital will soldier on.

There are different possible outcomes of course, but I believe only in the absolute worst case (dozens of nukes across Europe and North America) do the banks stop caring who owns Amazon, even if Amazon is manufacturing attack quadcopters.


>https://www.metaculus.com/questions/2534/world-war-three-bef...

Metaculus hasn't moved at all. There are some recent comments debating why this is, and some posting reasons to think the market should move upward.

My guess is this is some evidence against WWIII, though Metaculus can easily be assumed to be at least 10% off at any given moment, given that it's play money and relies on the interest of its users. And they might not be interested in spending their play money on long-term predictions.

The needle has moved on nuclear detonation by 2050:

https://www.metaculus.com/questions/4779/at-least-1-nuclear-...

I'm no geopolitics expert; there might be a reason why nukes went up but WWIII didn't. Or it might just be that the nukes question has one-third the activity and so is more movable. Not sure which of the two explanations to take more seriously.


Not to nitpick your word choice, but taking you literally:

WWIII isn't synonymous with nuclear armageddon, and isn't just contingent on Russia's stockpile. To give an example:

China invades Taiwan, but all the conflicts remain frozen like Ukraine. Death tolls reach the hundreds of thousands, smaller regional conflicts emerge, and more is happening than the news can cover. But 95% of people are living their lives as normal, fighting online about whether this should be called WWIII instead of actually fighting in WWIII. The economy becomes weirder, but no worse than during Covid; shortages emerge and disappear as global capital continues to operate.

Or a more dire example:

An avalanche of wars breaks out, and only grifters deny it's WWIII. Pakistan becomes completely overrun and drops a warning nuke, feeling as though it has nothing to lose. Fearing the normalization of nukes, most world leaders shift their focus to preventing that, planning a conventional retaliation against Pakistan alongside threats of totally annihilating it if it launches another.

That said, the tail risk absolutely exists and should be our primary concern, given that billions dead is much much worse than millions.

