
Charity is only a moat if it’s not profitable.


This is the timeline that's scaring the shit out of them:

Feb 24, 2023: Meta launches LLaMA, a relatively small, open-source AI model.

March 3, 2023: LLaMA is leaked to the public, spurring rapid innovation.

March 12, 2023: Artem Andreenko runs LLaMA on a Raspberry Pi, inspiring minification efforts.

March 13, 2023: Stanford's Alpaca adds instruction tuning to LLaMA, enabling low-budget fine-tuning.

March 18, 2023: Georgi Gerganov's 4-bit quantization enables LLaMA to run on a MacBook CPU.

March 19, 2023: Vicuna, a 13B model, achieves "parity" with Bard at a $300 training cost.

March 25, 2023: Nomic introduces GPT4All, an ecosystem gathering models like Vicuna at a $100 training cost.

March 28, 2023: Cerebras trains an open-source GPT-3 architecture, making the community independent of LLaMA.

March 28, 2023: LLaMA-Adapter achieves SOTA multimodal ScienceQA with 1.2M learnable parameters.

April 3, 2023: Berkeley's Koala dialogue model rivals ChatGPT in user preference at a $100 training cost.

April 15, 2023: Open Assistant releases an open-source RLHF model and dataset, making alignment more accessible.


This really ought to mention https://github.com/oobabooga/text-generation-webui, which was the first popular UI for LLaMA, and remains one for anyone who runs it on GPU. It is also where GPTQ 4-bit quantization was first enabled in a LLaMA-based chatbot; llama.cpp picked it up later.
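
For anyone unfamiliar: 4-bit quantization here roughly means storing weights as small integers plus a per-block floating-point scale, and dequantizing on the fly at inference time. A toy NumPy sketch of the idea (the block size, function names, and symmetric rounding scheme are illustrative; this is not the actual GPTQ or llama.cpp on-disk format):

  import numpy as np

  def quantize_4bit(w, block_size=32):
      # Toy symmetric block quantization: one fp32 scale per block,
      # weights rounded to integers in [-8, 7] (4 bits).
      blocks = w.reshape(-1, block_size)
      scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
      q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
      return q, scale  # real formats pack two 4-bit codes per byte

  def dequantize_4bit(q, scale):
      return (q.astype(np.float32) * scale).reshape(-1)

  w = np.random.randn(4096).astype(np.float32)
  q, s = quantize_4bit(w)
  err = np.abs(w - dequantize_4bit(q, s)).max()
  print(err)  # small per-weight error, ~4x less memory than fp16

The point is that memory use drops roughly 4x versus fp16 at a modest accuracy cost, which is what lets a 7B or 13B model fit in laptop RAM and run tolerably on a CPU.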


This doesn't even include the stuff around agents and/or LangChain.


The post mentions that they consider "Responsible Release" to be an unsolved hard problem internally. It's possible that they are culturally blind to agents.


They're basically saying that Pandora's Box, assuming it exists, has already been opened. Even if OpenAI, Facebook AI Research and Google DeepMind all shut down tomorrow, research capable of producing agents will continue worldwide.


Interesting! It's like nothing has happened in the field for the last three weeks, heh


OpenLLaMA came out last week, I think.


The doc was written a while ago.


There's "immediately profitable" and "eventually profitable". Vast compute scale allows collection of customer generated data so the latter is possible, AI as of yet is not the former.

So GP's point still stands. FAAMG can run much larger immediate deficits in order to corner the market on the eventual profitability of AI.


All this talk that every investment pays off in the end is faulty and dangerous. Many investments don't pan out; 95% of the firms you see in the ticker this decade might be gone. And yet everyone is very confident in underwriting these "losses for future gains", when really it's economies of scale: it doesn't cost MSFT much more to run the GPU than it did to turn it on in the first place.


The amount of valuable data generated by professionals using these services to work through their problems and find solutions to industry problems is immense. It essentially gives these companies the keys to automating many industries by just letting people try to make their jobs easier and collecting all the data.


> FAAMG can run much larger immediate deficits in order to corner the market on the eventual profitability of AI.

This assumes that there is a corner-able market. Previously, the cost of training was the moat. That appears to have been more of a puddle under the gate than an actual moat.


In other words, engage in anti-competitive behavior.


It seems the plan is to be a loss leader until scale is sufficient to reach near AGI levels of capability.


There was some indication recently that OpenAI was spending over $500k/day to keep it running. Not sure how long that's going to last. AGI is still a pipe dream. Sooner or later, they're going to have to make money.


Oh no, they're going belly up in 20,000 days! (i.e. $10B / $500k per day) Compute is going to keep getting cheaper, and they're going to keep optimizing to reduce how much compute it needs. I'm more curious about their next steps than about how they're going to keep the lights on for ChatGPT.
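
Spelling out the back-of-the-envelope math, using the $10B and $500k/day figures from the comments above (both rough, neither official):

  cash = 10e9               # assumed war chest, ~$10B
  burn = 500e3              # reported spend, ~$500k/day
  days = cash / burn
  print(days, days / 365)   # 20000.0 days, i.e. roughly 55 years of runway at that burn rate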



Assuming you're talking about the free ChatGPT product, it's important to consider the value of the training data that users are giving them.

Beyond that, they are making a lot of money from their enterprise offerings (API products, custom partnerships, etc.) with more to come soon, like ChatGPT for Business.


I know there are use cases out there, so it's not a dig. I'm curious how many enterprises are actually spending money with OpenAI right now to do internal development. Have they released any figures?


$500k/day for a large tech company is absolutely peanuts. OpenAI could probably even get away with justifying $5M/day right now.



