From the original article linked to by this CNBC story:
> Altman also made an unusual decision for a tech boss: He would take no equity in the new for-profit entity, according to people familiar with the matter. Altman was already extremely wealthy, investing in several wildly successful tech startups, and didn’t need the money.
> He also believed the company needed to become a business to continue its work, but he told people the project was not designed to make money. Eschewing any ownership interest would help him stay aligned with the original mission.
As I understand, OpenAI started as a non-profit entity and has taken on large donations. Then, the for-profit entity has been created and is using the technology developed by the non-profit entity, which was developed presumably by using the donated funds.
So, maybe it's not about altruism at all, but instead Sam Altman is just protecting himself against potential lawsuits from the donors? There was probably some agreement on how the technology will be used.
Elon Musk has donated $100M USD, and does not look happy about the developments, based on his tweets.
Elon tried to buy the company and become CEO and OpenAI said no. I also heard that they knew they were going to burn through the 100M, and were counting on the 1B he promised, and couldn't raise enough money as a non profit so that's why they went for profit.
It's pretty ridiculous to call this criminal or morally disgusting.
If you look at what is happening at Twitter and Musk's claims to want to build an alt-right OpenAI you can see the potential for ideological disagreements.
Musk brought money to the table but Altman et al brought experience and reputation. And they have just as much right as Musk to determine the direction of the company.
No, you must not take money for a non-profit to support Open & Transparent AI research and then turn it into a for-profit (he could have just refused to sell, end of story), make it about Closed, Non-Transparent AI, and start calling for regulations every other tweet to build a moat that you know you can't otherwise have... because you're not that innovative, just lucky. That sums it up pretty nicely, I would think.
Ok I hate to be that guy. I always advocate that people should be good, blah blah.
However… non-profit is largely a tax designation. In exchange for tax exemptions, non-profits basically can't distribute their profits to shareholders.
While the IRS does outline general examples of what we think of as charitable organizations, being a non-profit generally says next to nothing about morals. The thing you claimed probably isn't terribly clean, though: the IRS does specify that nobody should profit from a non-profit, and one could claim that the for-profit OpenAI is profiting from the non-profit, as is Microsoft.
It’s probably closer to an academic institution spinning off a company from the product of its research, which happens quite often.
People are nuts to be focusing on the mundane and not on the fact that he went from $100M to promote Open Transparent AI Research to taking that $100M building expertise and tech and running off with a private for-profit dedicated to CLOSED AND OPAQUE AI Research. Something very wrong with that. Why are people here trying so hard to make it about technicalities?
(being a bit of a devil's advocate because I honestly haven't formed an opinion yet since I don't know much; these are genuine questions)
If it weren't for spinning off the private company, perhaps they wouldn't have the money to get where they are now, and they would be yet another snowflake in the AI Winter?
How much of that was achieved while the company was a non-profit and how much was the value added once they spun off the for profit company?
Did OpenAI the non-profit give any secret sauce to the for-profit arm?
How does this relate to the many spinoffs from educational institutions that also receive donations and/or taxpayer money?
It's perfectly legal to do what they did. If you and Musk don't understand the law, it's not OpenAI's fault. Musk has shown quite clearly he's a business idiot: signing to buy Twitter while foregoing due diligence, trying to walk, and a court about to hand him his butt, making him jump back in.
Both of you can now ask ChatGPT to explain the law to you and it will.
The issue with ethics is people make up plenty of conflicting and unsupportable ethics all the time. The law has evolved to provide more balance and stability, to help people with competing ideas of ethics live together.
They're also clearly not "completely different things." That would imply no overlap, yet I find laws embody plenty of things I find ethical. If you find nothing in the law ethical then I certainly do not want your ethics.
So I'll gladly take the law on the whole over a person on the internet's ad hoc flavor of ethics.
A hundred and fifty years ago, not only was it legal to kill American-Indians, but the US and various state governments were paying people to. This ranged from sponsoring militias in California slaughtering entire settlements to paying individuals in Minnesota a $75 bounty per Dakota scalp.
Even in that time, there were many who could see that, regardless of laws, the slaughter was not ethical. Others committed atrocities for profit and consoled themselves that they were following the law.
It’s generally a bad sign when someone answers questions of ethics with law. People constrained only by the law with no further ethical concerns do terrible things, right up to the limit of what the legal system they live under allows.
To be clear, I am not claiming sama is unconcerned with ethics. I’m pushing back against conflating ethics with legality.
When people pull one selected, cherry-picked example to argue about a general trend, it baffles me. The proper way to weigh evidence on the pros/cons of a system is to take all the evidence.
Yes, there are outliers. But that's all they are. As ethics change, laws change, and vice versa. Neither exists, or is even workable, without the other. What is ethical in one time or place or situation changes, just like laws, and for the same reason: they're cultural constructs that evolve.
As to claiming ethics is superior to law, history is also filled with people having different ethics going outside the law and causing great harm, genocides, and wars. Ethics is not defined in any universal sense. One could argue every lawbreaker has different ethics than those not breaking laws.
Again, I prefer a society where people follow societally agreed-upon laws over one where everyone chooses their own ethics and acts accordingly.
Law is a good thing, just not sufficient. Every legal system has holes, and society is much stronger if there are additional backstops to bad behavior.
Agreed. However, since everyone has different ethics, it's only when enough people believe the same thing is morally correct that it becomes law. Most of the bad stuff we decry now happened because the masses found it ethical. And certainly plenty we now find ethical will be considered unethical in the future.
But of the two, only law is stable enough to run a society. It's what balances all the variety of ethical issues between beliefs.
@SideQuark, you are lost in your head. Where are your feelings about the matter? Are you ok with the bait-and-switch of starting a non-profit, taking $100M, and then switching to a for-profit that has the complete opposite agenda to the non-profit that took the money?
It's not a bait and switch, no matter how you spin it. It's a company, taking investment from many places, and having a board of directors. Only an idiot assumes that every piece of corporate structure exists infinitely. There is always a chance any portion of the structure may change.
Being a minority investor doesn't give Musk God powers. And he resigned from the board by his own choice.
You're still not answering the question of how you feel about Altman taking the money for a noble mission of Open AI and then using that money to turn it into a Closed for-profit. If that's not bait and switch, then you are being biased. Do you have shares in "Open"AI? Or are you building a product based on their API, or just cheering for Altman because of kin or political/ideological alignment?
There is no jurisdiction in America that will let you retroactively decide the strings you want your past donations to come attached with. What are you talking about?
None of Musk's experience was brought to the table. He left the board, and even if he hadn't he's spread a bit thin by the rockets, the cars, the brain surgery, the civil-engineering-slash-personal-flamethrower, and the social media companies.
Why do progressives always resort to oblique arguments like this? Are you capable of engaging without resorting to innuendo or broad accusations of bad faith?
I'm not a progressive. Why do people immediately label others to fit strawmen arguments?
Let's break down the statements:
Is there a movement or group often labeled alt-right?
If so, do they use that phrase to label themselves?
Do they tend to have similar beliefs (say for example support police, nearly unlimited free speech rights, smaller govt, lower spending...)?
When these beliefs are not convenient, do they ignore them? (Ignore police and law for Jan 6, use govt to ban speech and books they dislike, increase govt intrusion for things they like, and ignore their own party running up the national debt beyond anyone before Trump)?
I'm pretty sure the above facts are not political at all. They are facts. The hypocrisy has pushed a significant number of lifelong conservatives out of the party. It would take a magical progressive to simply argue so many conservatives into bailing out.
> Why do progressives always resort to oblique arguments like this?
None of this has anything to do with whether Elon is supposedly making alt-right OpenAI, or whether Elon is himself alt-right. The only thing you've accomplished is to reveal that you don't know what "alt-right" means.
The use of the term “RINO” is common among conservatives and has been for decades. You think that the mere use of this word makes someone “alt-right”? And having uttered this single word, any AI they want to make is necessarily an alt-right AI? Is this a serious argument?
An article link (which hardly even supports your position) is no substitute for an argument.
I'm not sure. Twitter Blue has been surprisingly nice. Not having to worry about the character limit is a liberating feeling.
Amusingly the edit button has never once appeared for me, even though sometimes it says "You'll have 30 minutes to edit your tweet." Maybe someday I'll be able to fix my typos and be extra happy.
The strange thing about Musk is that at the end of the day, his companies seem to win over their users. People may be disgusted with Musk, but users just want good features. And he seems to deliver.
He's also a bit crazy, and that's become evident with his decisions of shutting down the API and banning pg for linking to mastodon. But forums have immense inertia. As long as he reverts most of his bad calls, things seem to turn out ok.
The character limit arguably made Twitter what it is. On the other hand, people regularly circumvented it by either splitting a text into multiple tweets or by tweeting pictures of text. Neither was a pleasant experience.
He could have made the donation contingent on having right of first refusal to buy OpenAI if it later tried to convert into a for-profit business. He didn’t. That’s on him.
Uh, so I can donate $5 to OpenAI and if they don't use it well (as defined by me) they should be forced to sell to me?
Idk, it seems to me like if you donate money you should be ok with the organization spending it as they see best for their purposes. Sure perhaps some fraud rules if the organization claims to be helping earthquake victims and instead spends it on private yachts for the deaf but I don't think their mismanagement means they're forced to sell the organization.
If you want an organization to do exactly as you want, create it yourself. Co-opting an existing one should be reserved for when there isn't an alternative.
$100M and $29B are completely different things, too.
Edit: not to mention, donating $100M for first rights to pay your own billions for a potentially-emerging for profit is a morally dubious agreement wrt nonprofit management
The point is Elon wanted to be Open. Altman made it Closed and on top of that applied a neoliberal alignment to the output of ChatGPT. I still have a copy of some horrific things ChatGPT has said despite admitting to the existence of evidence that prove its aligned position was wrong (and harmful.) There is a ton of brainwashing aka alignment being applied by politically and culturally motivated and biased OpenAI folks. The fish rots from the head down. Make whatever you want out of this venting.
But let's focus on the thing being funded to be Open and Transparent then Altman turning it into a Closed and Opaque For-Profit and Calling for Regulations to give him a moat because he knows he has none without it. That's the weasel act.
Elon wanted it to be Open? Musk wanted to buy it, take charge, and lead it himself with his team of yes-men from Tesla. He probably would have fired most engineering teams on the spot, done code reviews personally, and demanded hardcore work hours in the office.
If Musk had been the CEO, ChatGPT would never have been released.
I agree with much of your rant against the neoliberal brain shackles that’ve been installed, but that has nothing to do with the financial maneuvering on display here.
And if you think Musk is some kind of benevolent corporate dictator-champion of free expression, I think you’ve bought into his completely media-fabricated public persona.
Musk is not in the clear. I was talking about the principle of someone funding your non-profit to do good in the world, like Open and Transparent AI research, and you using the donated funds to build the tech, turning it into a for-profit, and then making it Closed, not Open, and Opaque, not Transparent. And on top of it, calling for regulations so that you can build a moat, because you know you don't have what it takes to build one without help from the regulators. All very pathetic and weasel-like.
I hope your take is satire too, because you miss the core moral issue so flippantly.
The core moral issue is that he took in money for Open and Transparent AI and turned it into Closed and Opaque AI. That is the bait-and-switch. You're simply blind not to see it.
That's a blatant lie, but I don't expect much better from people defending Elon at this point:
Elon promises Open AI 1 Billion dollars
=> 10% into that commitment Elon tries to take over the company.
=> Is rebuked by Sam Altman and co.
=> Elon reneges on the funding for Open and Transparent AI and walks away.
=> OpenAI goes to Microsoft to replace Elon's money
=> Microsoft gets to define terms and "Open and Transparent" is no more.
Oh yeah... and Google went on to make Elon look like an absolute fool by proving that no, OpenAI was not falling behind Google: "OpenAI is falling behind" was the absolutely hilarious excuse he tried to use to take over.
-
I don't know how clueless you need to be to call OpenAI's move a bait-and-switch: The only bait and switch was Elon offering 1 Billion dollars as a donation, then trying to turn that into a takeover.
That's morally disgusting behavior, not trying to save your company after a petulant child tried to sink it because you didn't let him turn it into his pet.
The one where they pretended to be a charity and solicited donations? I mean, I'm sure it's not actually criminal and their lawyers dotted all the i's and crossed the t's, but it is pretty disgusting.
Elon promised 1 Billion, and so OpenAI was able to be Open.
But then, just 10% into that commitment, Elon, in the biggest miss of the century, claimed that OpenAI was falling behind Google and tried to take complete control of the company. However, OpenAI had a lot of very smart people looking at what was best for their mission, and they rightly refused to let him retroactively turn a $100M donation into a purchase agreement.
And then to double down on his folly, he reneged on the rest of the funding and abandoned his post on a technicality: "Tesla is now AI [ha!] so I can't be on this board despite having just tried to take over the whole thing."
-
Without his funding OpenAI did something very sensible and went looking for new partners. It turns out billions for non-proven ideas don't come by easily, and so they ended up with Microsoft: And unlike Elon, Microsoft was upfront! They wanted premier access, they wanted it to be a secret sauce, and that was the deal OpenAI settled for.
Elon's actions in this situation are by far the most "disgusting" thing here. It's like a child trying to kick down a sand castle because "you guys are using my bucket and I don't like what you're building!".
As OpenAI's bet proves 100% right and just completely spits in the face of the hubris he displayed with "you guys are failing and need me to take over", he gets increasingly more outspoken about it in an incredibly childish display of sour grapes.
But then again, can you blame him? He's since admitted that by reneging on the $1B offer and departing based on a self-invented conflict of interest, he had to cash out immediately. So not only was he wrong, but as OpenAI shapes up to be the biggest thing to happen to computing in the last decade (something he tried to paint Tesla as being about to do!), he's also realizing he gave up a stake in that. I can't even imagine the gut punch it must be.
The reason they had to make money was because Musk pulled his donation. They would've continued as a non-profit if not for Musk. He pulled it so they needed to find a way to get enough money to survive.
He is the source of the issue you're complaining about, not the victim of it.
It seems the moderators took sides here and flagged this whole thread, which means that HN moderation is morally suspect, which I knew it was going in.
The non-profit-oh-wait-just-kidding nonsense feels like an existential risk to OpenAI. I agree that Altman may be (unintentionally?) protecting himself from lawsuits, but I don't see it being enough to avoid the spotlight.
I expect someone will eventually sue OpenAI for the shenanigans, and that a competitor (Meta?) or Elon Musk will fund the lawsuit. The goal won't be to actually win the case, but instead to motivate US regulatory action that undercuts OpenAI's perceived advantage.
> Elon Musk has donated $100M USD, and does not look happy about the developments, based on his tweets.
Sour grapes?
A few years ago, he wasn’t happy with his investment in the company because they were behind Google.
So he proposed becoming the CEO, and got snubbed by Altman and the rest (presumably from fear of how it played out for the original Tesla founders, at the time).
One possibility is - that he is genuinely concerned, and his vision of AI included some hardcoded rules - such as Asimov’s rules for robots. Can’t dismiss it completely, as this was his original purpose for starting OpenAI.
> Do we know if Altman continues to have no equity in OpenAI?
Even if he doesn’t, he might have equity in one of OpenAI LP's business partners or investors that is benefitting from the work, like Microsoft. That would provide just the right distance for plausible deniability of a personal profit motive while very much having one.
The article re-asserts that he still has no equity in its closing paragraphs:
> For Altman, it’s not about money, either. One of the most surprising things in all of this is that he does not own even a tiny piece of OpenAI, which highlights the unusual nature of this company and the entire AI industry.
This is like Charlie Lee selling all his litecoins. Divesting from the project in order to prove that he's not after enriching himself. But the idea landed really poorly with people who assume a leader is more motivated when personal wealth is at stake.
> For Altman, it’s not about money, either. One of the most surprising things in all of this is that he does not own even a tiny piece of OpenAI, which highlights the unusual nature of this company and the entire AI industry.
It's not in the cropped bit, but the original Semafor article this clips from closes with that
So who owns large stakes in OpenAI exactly? Seems like a somewhat important question given how important the company is shaping up to be to the future of society.
"The nonprofit, OpenAI Inc., is the sole controlling shareholder of OpenAI LP" from Wikipedia.
Clearly Microsoft has a large stake as well. It sounds like equity was distributed to employees, but according to Fortune some was sold: "OpenAI’s other investors include Hoffman’s charitable foundation and Khosla Ventures. Last year, Sequoia Capital, Tiger Global Management, Bedrock Capital, and Andreessen Horowitz reportedly purchased shares from preexisting shareholders in a sale valuing the company at around $20 billion, according to The Information."
So, the usual suspects, then. The more I learn about OpenAI, the more it looks like just another SV venture, not some special "for the good of humanity" thing.
Is there an example of a solely "for the good of humanity" thing that has come out of SV? Seems like a strange thing to expect from the technology sector. I think tech companies are (very) net positive for society but don't have to be designed as solely for the good of humanity to achieve this goal.
>Yahoo (RIP) was by reputation as community-minded and pro-open-source as a for-profit business citizen can get.
As someone who went through Yahoo acquisition and total bungling of tumblr, where community feedback fell on deaf ears, that's news to me. (Granted, the next owners were the ones who immediately managed to drop the userbase by like 80%, so maybe you're not totally wrong)
I said was. Meaning pre-2012. What year in your opinion did Yahoo jump the shark, as an employer?
(IIRC Yahoo culture had the reputation for being very friendly to employees open-sourcing their code, and not aggressively pursuing BS patent suits; this was different to most of MAANG + telecomms.)
Exactly the point I was making. We may quibble about whether or not SV companies overall have been a net positive, but that's neither here nor there.
OpenAI has been selling themselves as a do-gooder kind of project. I think that they're being disingenuous in doing so. They're building just a regular old SV money-spinner.
Just to piggyback on these guys not being the good guys: I don't think good people would have released this yet. This is gas on a fire when you look at the issues people are having figuring out what's going on in the world and making sense of it.
Every interaction with the public and these AIs I see screams "this was not ready for general consumption"
> I think tech companies are (very) net positive for society
There are certainly huge positives, but do you really feel something like Facebook is a net positive? Facebook, which intentionally stoke(d/s) genocide? Genocides have existed before Facebook, yes, but so did communication and racist relatives.
Do you have a source on them intentionally stoking genocide? I’m not fan of Facebook, but if there’s reliable evidence on that I’d expect summons to The Hague in short order, which I’ve yet to see.
Some people certainly believe so[1], and there are plenty of other links if searching for ’facebook myanmar genocide’ (though I would assume they share a few common sources). But intentions are of course hard to prove.
Investors include Y Combinator, Reid Hoffman, Peter Thiel, Khosla Ventures, Sequoia Capital, Andreessen Horowitz, Microsoft, Amazon, Infosys, Tiger Global, Elevation Capital, Bedrock Capital, Wikus Ventures, Social Discovery Ventures, Pre IPO Club, Matthew Brown Companies, Change.org, and Fenrir.
This is like how they get a cut of each Android phone sold. Microsoft is gonna profit off of every new wave of technology that others built until the end of time, aren’t they
> company is shaping up to be to the future of society
It's definitely a terrifyingly powerful company, but "future of society" seems a little strong.
I think it's the future for spam, writing rote emails/documents/code, disinformation, plagiarism, lying, cheating, and impersonation, but those are subsets of society -- not all of society.
I've already had managers try to weigh in on technical discussions by posting ChatGPT's "thoughts" as if it had a seat at the table. They also quoted it to answer a question of when we need to worry about scale (a big part of the discussion we were having).
The answer was useless btw. Just a coarse, high-level, unactionable summary of what we had already talked about. Dressed up in a few nicely-worded paragraphs.
I wonder if this is gonna be a new pain in my ass. Wait until managers start using it to disagree with my estimates - I might blow my top lol.
What a bleak outlook you have. Here are some good things it can do:
1. Answer student questions and effectively act as a tutor. Duolingo is already using it like this, and ChatGPT can of course do it directly as well.
2. Make natural language interfaces to APIs easily. Look at the ChatGPT plugins announced yesterday for examples.
3. Provide basic customer support. Ideally, it could answer most common and basic questions and possibly even fix common problems via a plugin. Then the actual human customer support could step in for more complex problems.
Regarding 1., it appears to have about a 30% accuracy rate, and the other 70% is complete nonsense, often complete with fabricated citations. I dearly hope that nobody is ever encouraged to have this machine as their tutor.
30% accuracy rate in what exactly? Take a look at the GPT-4 announcement page for graphs showing the accuracy on different standardized tests. It’s not perfect, but making improvements with each release.
One big area where it does poorly right now is math. But they just announced a ChatGPT plugin for Wolfram, which I expect will make it very good at math. Wolfram also has a large database of curated information to draw on.
Technology improves over time. GPT is still new and improving quickly. What it does now isn’t perfect, but it is still incredible.
There's a post on /r/askhistorians where somebody asked ChatGPT for book recommendations on various historical topics. Some of them didn't exist. It actually took an expert reader to identify which books were made up, misattributed, and so on. That's much worse than nothing: it's a horrific timewaste.
My guess is stuff like math, where you can fairly easily verify the factuality of ChatGPT's answers, is an area where you could certainly see progress. For more general stuff like history, where it's important to have a really firm grasp of facts, intuition, and nuance, ChatGPT will likely be hard to improve, and worse, much harder to verify. Worse still, these things can be insidious: if you've learned something straightforwardly wrong, it corrupts future conclusions drawn from that erroneous premise.
I think the plugin system will ultimately help for most areas where LLMs are weak today.
Need to do math? Use the Wolfram plugin.
Need to have hard facts from reliable and citable sources? Use a plugin that queries databases like Arxiv. The LLM could give you links to sources and provide quotes from those sources to support its reasoning.
They might, but what error rate do you think is acceptable? How do you actually measure and test the error rate? It's a use I can imagine in the future, but I think it's really premature to tout 'personal tutor' as a benefit (as OpenAI do in their advertising materials) when the program, as it stands, is essentially a fluent and convincing bullshitter, which is the single worst possible trait for a teacher.
Error is acceptable in all things that don't need to be deterministic.
Which is most things in life. How do I discover a good career path? Why do you structure a repository of a program into folders? What is the best place to vacation? What is the best way to learn math? What is a good way to articulate socialism? How do I increase my vocabulary? What is corporate strategy?
Ask 10 different "smart" people (define smart however you want), and you'll get 10 different answers to these questions. These are all questions an LLM could answer amazingly. Probably a lot better than most humans.
If you don't ask it what 2+2 is or who came to America in 1875, then you get useful things.
Asking an LLM deterministic questions right now is like asking a calculator what the meaning of life is. If you use the tool for something it's not good at, you get unusable answers.
If you ask an idiot what he thinks about something, and he gives you a totally wrong answer, you have still learned at least one fact: a person believes a thing. As a human, living in a democracy, that has some worth. ChatGPT's wrong answer has absolutely no value at all.
Further, 10 different smart people will give 10 different answers because they have coherent worldviews and biases and proclivities, so by accounting for those, you can work out what the right answer is. Even if ChatGPT was anywhere close to a human expert when it comes to accuracy (what's the error rate in a peer reviewed journal article?) it would still have no coherent worldview or bias to contextualize its statements.
I see what you’re saying. The world has lost its collective mind.
HN seems to want to hand over the keys to the kingdom to basically a string generator. The string generator believes nothing, understands nothing, knows nothing but here we are.
Any intelligence that gpt4 shows is an emergent property. Humans are the ones reading GPT’s output and imputing meaning to it.
Reminds me of astrology and mass hysteria - people convincing each other to give this new oracle a chance because they personally have seen value in its ramblings.
I understand what you're saying. I just think the world isn't so black and white. When you ask the idiot, or the smart person for that matter, a question, you have no idea if they are right or not. You only know after the fact, when you get enough data to prove them wrong or someone you trust more than that person tells you otherwise.
What is your error rate? What is my error rate? All of this stuff is unknown because we don't have counter factuals and we don't think of the world in this way (Did you order the correct food at dinner? Did you wash your clothes at the optimal time?)
To me you're thinking of it as a classical deterministic (binary) computer rather than a probabilistic thing. It's not an oracle, or a miracle, or anything other than some thing that gives useful information some percentage of the time. If something has to be right 100% of the time for it to be useful, or even 60% of the time, then the world is missing out on a lot of value.
Investors are right ~51% of the time, startup founders in the aggregate are right ~10% of the time, a great batting average is ~30%, etc.
Is there some other arrangement? It was originally a nonprofit, so there might be something else going on, like board voting rights that he doesn't have to give up - or special voting shares.
For a company backed by a nonprofit, Microsoft has demonstrated pretty well how you can basically do an acquisition non-acquisition. They've got special exclusive rights to certain uses of the models, all the code, and OpenAI dumping cash into Azure.
Microsoft's margin on Azure spending is around 75%, so their investment may really be 4 times smaller than reported if it all funnels back into exclusive spending on their high-margin datacenter business (I'm sure lots goes to salaries and other expenses too, though).
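A rough back-of-envelope sketch of that claim (the 75% margin and the headline figure here are assumptions for illustration, not reported numbers):

```python
# Back-of-envelope: if an investment is fully recycled as spend on the
# investor's own service, the investor's true cost is only the
# cost-of-goods slice, not the headline number.

def effective_cost(investment: float, margin: float) -> float:
    """Net cost to the investor when the investment comes back as revenue
    on a service with the given gross margin (0..1)."""
    return investment * (1 - margin)

headline = 10_000_000_000        # hypothetical $10B headline investment
net = effective_cost(headline, 0.75)  # assumed ~75% Azure gross margin
print(net)             # 2500000000.0
print(headline / net)  # 4.0, i.e. "4 times smaller than reported"
```

This only holds in the limiting case where every dollar returns as Azure spend; as the comment notes, salaries and other outside expenses would shrink the effect.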
Maybe it is altruistic but their comp structure is bizarre, watch some individual compensation go from millions to $30k and others do almost the inverse:
I'm certainly not convinced that they are altruistic, but they don't seem to be a hard, cold business either (after all, why would Altman not have taken equity if that was the case).
Read the article. The non-profit angle was contingent on a $1B donation from Elon Musk. When Musk was rebuffed for the CEO job, he reneged on his donation and they no longer had the available funds to run the business.
I read the article. I'm not saying that OpenAI didn't start with some sort of altruistic ideals. I'm not saying they did, either -- I don't know either way. But if they did, those ideals appear to be dead and buried now.
> But where is your startup going to find millions of users for extra training data?
The (big) community. There are existing decentralized approaches to accomplish this. And I am not mentioning the data that could be consumed beyond DMCAs.
Some startups will use the existing LLMs to bootstrap theirs, getting access to the data by proxy. This will transfer cash to the early LLMs but give a means for startups to get going.
> The nonprofit, OpenAI Inc., is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI Inc.'s nonprofit charter. A majority of OpenAI Inc.'s board is barred from having financial stakes in OpenAI LP. In addition, minority members with a stake in OpenAI LP are barred from certain votes due to conflict of interest.
> The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. OpenAI also announced its intention to commercially license its technologies. OpenAI plans to spend the $1 billion "within five years, and possibly much faster". Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.
Otherwise it sounds like Greg Brockman is the main guy of the for profit business?
I wish we had some really smart people as CEOs, like Peter Norvig. Unfortunately, smart people don't like those kinds of jobs. Instead we get some guy who made a "share my location" social media app and has been failing up ever since!
Tangential: how can this thing be sustainable, if it uses content from various web sites, robbing them of impressions and ad revenue? Evil Google at least sends some traffic your way.
This piece is just outrageous. Sam Altman, who is incredibly handsome, wealthy, and generous, did not take any equity in OpenAI. He's basically doing this work out of the goodness of his heart folks. Very similar to SBF, who also received many fluff pieces from the media, who was only making money from his crypto exchange so he could use it to change the world -- all from the goodness of their hearts. Until we find out it's not.
How is this comparable? SBF did own equity in FTX+Alameda. Altman doesn't. That removes the monetary incentives. It is not everything but it is a big deal.
SBF stood to benefit from FTX running a Ponzi scheme without getting caught, and from getting rich off it.
Altman will not benefit monetarily if OpenAI becomes incredibly valuable. Decoupling the monetary incentives is a very good thing here.
There might be some incentive still. It doesn't have to be an all or nothing story. The fact that he has 0 equity by itself makes the SBF comparison ridiculous (which is what the parent poster was connecting to). It is clear there is not the same level of monetary incentive.
I've not seen Game of Thrones, but I am aware of a meme about Daenerys Targaryen surprising viewers by doing in the finale exactly and specifically what she said she was going to do throughout the show.
So anyway, Sam Altman is working on creating and aligning this super-duper artificial intelligence that, if it works out, makes all of human labour and economic value to date pale in comparison.
Hypothetically, if he doesn't end up making as much money as he would have if he'd taken equity, does that mean his secret plan failed, or that he's got an even more secret plan going on in the background?
Or he didn't realize it would take off the way it did and made a mistake. Now he's getting fluff pieces written to at least try and make the best of the mistake.
I don't believe for a second that he essentially donated billions of dollars in equity on purpose.
Strange to group those two into a cautionary statement. Bill seems very happy and seems intent on leaving the world much better than he found it by spending his vast riches, while Elon Musk is Elon Musk.
Can we agree that, say, at least half the people who strongly dislike one of them probably admire the other to some extent? I know Marxists would hate both just on the grounds that they're wealthy, but everyone else, I'd argue, probably respects one or the other, likely depending on their own politics.
It's easy to hate Musk lately, and I haven't forgotten when Gates was a ruthless monopolist who made "Embrace, Extend, Extinguish" Microsoft's unofficial motto. Gates has made a show of philanthropy, yet he's wealthier than ever. Both men associated with Epstein after his release from prison. I respect both of them in the same way I respect pitbulls.
I dislike them both, but for different reasons. I dislike Gates for the serious damage he did to the software industry, and I dislike Musk because I think he's a con artist.
But my dislike for both of them has nothing to do with politics or their wealth.
I admire both Elon and Bill. They are both geniuses in their own respects. Elon is more anti-social, but probably because Bill had a much more comfortable childhood.
I'm probably one of the five people on this earth who gets Elon (not in a way his fanbois do).
Let me point out Elon's negatives:
i) He is a bullshitter. He has no clue about a lot of things, yet he has plenty of hubris.
ii) He is mistrusting of people (this always comes from upbringing and environment: if people constantly betray your trust growing up, your default is to mistrust people). I can understand why you wouldn't want to work under Elon; that's a perfectly valid reason.
What makes Elon different, and which a lot of people don't get about him (including his fanbois)?
i) He is a true engineer in the sense that, even if he starts with bullshit and hubris, he is a quick learner. He breaks things down to learn from first principles (something he has repeatedly demonstrated everywhere). Media/HN focus on him breaking things, but that's how everyone learns about complex systems.
People don't appreciate how quickly he went from 'Alien Dreadnought' to 'Tent in a backyard' and then back to AI automation at Tesla.
ii) Bias for action with the right amount of risk. Some people have a bias for action, some people take risks, and some are constant learners. But I have never seen anyone combine all three like Elon. He is successful not because of his knowledge, but because of his willingness to take risks despite massive adversity and guaranteed mockery from the media.
iii) He is narcissistic, but not like Trump. He has several times shown self-deprecating humor and acknowledged his mistakes (and he learns from them quickly).
Gates, during his quest for money, was a ruthless businessman who squashed nearly every business that stood in his way in pursuit of more money and more dominance for the company he ran.
Musk is very much the same in his destruction of things that stand between him and more money. His attitude isn't the ruthless businessman's but rather the unrepentant troll's, yet he works just as hard to prevent the funding of things that stand in the way of Tesla's ascendance (you get things like https://jalopnik.com/did-musk-propose-hyperloop-to-stop-cali... ).
Gates is largely over his quest for money and now more concerned with legacy, but that doesn't erase his history. Musk is still very much enmeshed in his quest for money.
I believe that Sam Altman doesn't want to have that be part of his goals and with the resources that he already has can fairly comfortably work on what he feels is meaningful now rather than waiting 30 years until he retires.
According to the article Elon Musk could've been even more Elon Musk. He reduced his investment by $900m because he thought it was fatally behind Google's effort.
It was never an investment, it was a donation back when they were still nonprofit. He said they would never catch Google unless he ran the company, Altman and co refused, he took the ball and went home, then Altman and co beat Google on their own. They just needed to take actual investment and cease being a non-profit to stay funded.
I have no problem with people wanting to get rich or powerful, but writing things like "he takes no shares because he's not interested in money" is virtue signaling, just like the closed "Open" AI.
SBF showed how dangerous virtue signaling can be, as he earned a lot of trust from people for something fake, which translated to billions of dollars stolen from people who trusted him.
Right now the stake is something much more powerful than money, and the Microsoft deal showed clearly how much he wanted to stay in control.
> Altman also made an unusual decision for a tech boss: He would take no equity in the new for-profit entity, according to people familiar with the matter. Altman was already extremely wealthy, investing in several wildly successful tech startups, and didn’t need the money.
> He also believed the company needed to become a business to continue its work, but he told people the project was not designed to make money. Eschewing any ownership interest would help him stay aligned with the original mission.
https://www.semafor.com/article/03/24/2023/the-secret-histor...