> I can see how the switch to closed source could be annoying to some, but it definitely doesn’t rise to the level of “unethical” IMO.
Sure it does. Particularly for a company that claims to be a responsible steward of a technology that could pose existential risk to humans. If you can't even do a simple thing like keep your promises, how can I possibly trust you as a steward of a potential existential risk?
> WRT to Altman’s ousting and return, I still haven’t seen any evidence of compromised ethics on his part.
I commented on this in a number of previous HN threads and don't want to rehash it all here. (And btw, I also commented that I didn't think the people who ousted Altman showed very good ethics or judgment either. I didn't think anybody came out of that whole brouhaha looking good.) But just the fact that it happened at all should be a red flag. Again, these are people who claim to be stewards of a technology that they say can pose an existential risk. You'd expect them to at least be able to cooperate with each other like responsible adults.
> A supermajority of OpenAIs employees, people who I’d venture to guess have a far better handle on Altmans ethics than you or I
They might well know Altman's ethics better than we do, and be perfectly fine with them, because their own ethics are just as bad. They're getting paid well, and the fact that core promises of the company were broken is simply Not Their Problem.