
> If few or none, does that not indicate that OpenAI has done a good job at safety?

No. It only indicates that OpenAI is less technically capable than it claims. To judge OpenAI's competence at safety, you have to look at the ethics of their behavior, since building the right ethics into an AI before it gets powerful enough to do serious harm to humans is the essence of AI safety. And the ethics of OpenAI's behavior, to put it as gently as possible, does not look good.



Ok, what actions have they taken that are unethical?


Um, closing access to their model code when "Open" is right there in their name and was the original promise they made?

Plenty of other posts in this discussion give other examples.

That's without even getting into all that happened with Altman leaving and then coming back, which has already been discussed to death in other HN threads.


I can see how the switch to closed source could be annoying to some, but it definitely doesn’t rise to the level of “unethical” IMO. In fact, the explanation of the rationale for the decision seems perfectly reasonable. WRT Altman’s ousting and return, I still haven’t seen any evidence of compromised ethics on his part. It looked a lot more like clumsy doomers botching a coup. A supermajority of OpenAI’s employees, people who I’d venture to guess have a far better handle on Altman’s ethics than you or I, insisted on his reinstatement. That’s enough of a refutation on that point for me. What else?


> I can see how the switch to closed source could be annoying to some, but it definitely doesn’t rise to the level of “unethical” IMO.

Sure it does. Particularly for a company that claims to be a responsible steward of a technology that could pose existential risk to humans. If you can't even do a simple thing like keep your promises, how can I possibly trust you as a steward of a potential existential risk?

> WRT to Altman’s ousting and return, I still haven’t seen any evidence of compromised ethics on his part.

I commented on this in a number of previous HN threads and don't want to rehash it all here. (And btw, I also commented that I didn't think the people who ousted Altman showed very good ethics or judgment either. I didn't think anybody came out of that whole brouhaha looking good.) But just the fact that it happened at all should be a red flag. Again, these are people who claim to be stewards of a technology that they say can pose an existential risk. You'd expect them to at least be able to cooperate with each other like responsible adults.

> A supermajority of OpenAIs employees, people who I’d venture to guess have a far better handle on Altmans ethics than you or I

They might well know Altman's ethics better than we do, and be perfectly fine with them, because their own ethics are just as bad. They're getting paid well and the fact that core promises of the company were broken is simply Not Their Problem.


It's closed source but freely available to over 100 million people. To everyone but tech people, it's open.

What's more valuable for the normal person: Llama, which I will never be able to use because I don't know how to get it to run, or ChatGPT, which I can use for whatever I want?


OpenAI not being open means it will be used in ways that benefit shareholders, not humanity as they initially planned, unless it so happens that humanity's goals are aligned with shareholders'. But I've rarely seen that happen with the big four.


> ChatGPT that I can use for whatever I want

For whatever OpenAI wants, which is a shrinking set.


> Um, closing access to their model code when "Open" is right there in their name and was the original promise they made?

Tech people who don't like OpenAI have got to come up with a better line. Every time I mention this complaint to a normal person, their eyes immediately roll.

No one says "North Korea is bad because they call themselves a Democratic People's Republic despite having a dictator," because the name is wildly unimportant, and no one ever expected it to be accurate.


Sharing a CEO with Worldcoin.



