
> we commit to stop competing with and start assisting this project

That bit I never understood. It doesn't even include a qualification about who is stewarding the project, as if any AI should be assisted no matter who ends up owning it. The whole thing is just so utterly disconnected from the real world.



If you believe that aligned AGI will create a post-scarcity society, there's no reason for its creator to hoard the benefits for themselves. The main thing is to ensure that post-scarcity society actually comes into being.

Edit: Note also the "value-aligned, safety-conscious" caveat.


There is no such thing as general post-scarcity, though. There is post-scarcity of certain needs, but not general post-scarcity. So its creator would still have reason to hoard certain benefits for themselves while distributing some for all.


It's generally meant that material scarcity becomes secondary to social wellbeing. I.e., material needs become so easy to take care of that the dynamics of social interaction undergo a fundamental shift, as they are currently strongly tied to material conditions.


Once you get past a certain level of wealth, the esteem of your fellow humans matters more than acquiring even more goods and services.


Statistically speaking, that's not very well supported. The most common trait among those who are past a certain level of wealth is that they want more wealth, more power, or both.


I don't believe that.

What I believe is that the first party to create an AGI will likely end up being nationalized and that that AI will be weaponized the next day to the exclusion of the rest of the world.


That's fine, but the OpenAI founders (including Sam Altman![1]) explicitly had in mind "AI kills everyone" as one of the main problems they were attempting to tackle when they started the organization. (Obviously starting OpenAI as a solution to this problem was a terrible mistake, but given that they did so, having a board charged with pulling the plug is better than not having that.)

[1] https://blog.samaltman.com/machine-intelligence-part-1


That whole charter is bullshit. You don't invite Microsoft to the party if you care about ethics and you don't disable your fail-safe by giving them full access to all of your stuff.

Hence my position that the charter was a fig leaf designed to throw regulators off.


Elsewhere in this thread you stated: "if OpenAI was 'the good guys' then any chance of the good guys getting a head start has just been blown out of the water"

So basically we have two cases here:

1. The charter is a deceptive fig leaf, OpenAI aren't the good guys. In which case blowing up OpenAI looks reasonable

2. The charter is not a fig leaf, Helen is doing the right thing by upholding it

I'd say upholding the charter is the right thing in intermediate situations as well. Basically, OpenAI are the good guys, in part, to the degree to which the board (and other stakeholders) actually uphold the charter instead of being purely profit-driven.


There's a third case:

It's #1 to some, and #2 to others.


Did you miss the part where it says "value-aligned"? Ones that aren't value-aligned are not subject to that clause.



