I agree AI is useful, but not to the extent reflected in its market valuations. I do not think AI companies can deliver as much as they promise. With the driving core at OpenAI basically gone, I bet they will soon implode under the weight of their promises, which means investors will start pulling out their stakes. Boom.
Speaking for my own n of 1, ChatGPT Pro has almost entirely (>90%) replaced the Google search engine in my daily life. The results from ChatGPT are just so much better and faster.
That's got to be worth something, since Alphabet is a $1.7T company mostly on the strength of ads associated with Google search.
Google doesn’t care if you’re going elsewhere to ask deep questions about Rust or whatever. They care way more that people go to them to look for the best bread mixer, or find a good restaurant, or a local massage therapist. In that regard I think Amazon is still a much bigger threat to them.
GPT is very useful as a knowledge tool, but I don’t see people going there to make purchasing decisions. It replaces stackoverflow and quora, not Google. For shopping, I need to see the top X raw results, with reviews, so I can come to my own conclusion. Many people even find shopping fun (I don’t) and wouldn’t want to replace the experience with a chatbot even if it were somehow objectively better.
There is a wide variety of services available for specific use cases. When Stack Overflow came along, I used that for programming questions instead of Google. But I still use Google for most other searches.
I go to Amazon if I want to find a book or a specific product.
For the latest news, I come here, or Reddit, or sometimes Twitter.
If I want to look up information about a famous person or topic, I go to Wikipedia (usually via Google search). I know I can ask ChatGPT, but Wikipedia is generally more up to date, well written, and highly scrutinized by humans.
The jury’s still out on exactly what role ChatGPT will serve in the long term, but we’ve seen this kind of unbundling many times before and Google is still just as popular and useful as ever.
It seems like GPT’s killer app is helping guide your learning of a new topic, like having a personal tutor. I don’t see that replacing all aspects of a general purpose search engine though.
ChatGPT is not a good source of truth, so it can't be used for information retrieval at scale. You might have a specific usage pattern that is very different from the majority of Google Search users, which is why it works for you.
Personally, I don't have a use case for comparing Google and ChatGPT that has truth as a requirement in the output.
For the majority of my use of ChatGPT and Google, I need to be able to get useful answers to vague questions - answers that I can confirm for myself through other means - and I need to iterate on those questions to home in on the problem at hand. ChatGPT is undoubtedly superior to Google in that regard.
Searching Google is not a good source of truth either; especially their infoboxes which have been infamously and dangerously wrong. And if you follow a random search result link - well, who knows if the content on that site is trustworthy either!
But you’re in control of your information retrieval; you don't have an unreliable agent synthesising bits in the middle.
Again - to each their own. But GPT doesn't replicate what people actually use Google for anyway (and what the Google business was built on): commercial info retrieval.
As of a recent update, ChatGPT can do an internet search to answer "find a Thai restaurant near me." Of course, it uses Bing, not Google.
And for my single query above, ChatGPT searched multiple sources, aggregated the results, and offered a summary and recommendations, which is a lot more than Google would have done.
ChatGPT's major current limitation is that it just refuses to answer certain questions [what is the email address for person.name?] or gets very woke with some other answers.
Google is not a good source of truth at all, for anything other than hard facts. And nowadays, even the concept of "hard fact" is getting a bit fuzzy.
Google search reminds me of Amazon reviews. Years ago, basically trustworthy, very helpful. Now ... take them with a tablespoon of salt and another of MSG.
And this is separate from the time-efficiency issue: "how quickly can I answer my complex question which requires several logical joins?", which is where ChatGPT really shines.
Even if OpenAI implodes it will hardly impact other LLM-focused startups. In fact it would probably be a boon for them as people search for GPT alternatives.
Sam & Greg could start a new AI company by Monday and instantly achieve unicorn valuation. Hardly a burst.
This is almost certain to happen if they can snag the talent. I bet his phone is blowing up with VC calls right now: a revenge move, now unshackled from the non-profit nature of OpenAI.
Honestly, this is exciting. Are they going to be the first company to achieve a $1 billion valuation within 3 days? If they file the incorporation papers on Monday, does that mean they get that valuation within 24 hours?
There have been bubbles in the housing market in the past - houses are quite useful.
It's a bubble if the valuation is inflated beyond a reasonable expectation of future value. Usefulness isn't part of that. The important bit is 'reasonable', which is also the subjective bit.
Could be. The housing bubble happened even though most people lived in houses, and still do. It's all in price vs. utility. If the former gets way ahead of the latter, and people start trading just on future price rises, you've got a bubble.
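To make "price vs. utility" concrete, here's a minimal sketch with entirely made-up numbers - the cash flow, growth, and discount figures are illustrative assumptions, not claims about any real company:

```python
# Made-up numbers, purely to illustrate "price vs. expected future value".
expected_annual_value = 10.0  # hypothetical yearly utility/cash flow
growth = 0.05                 # assumed long-run growth rate
discount = 0.10               # assumed discount rate

# Gordon growth approximation of a "reasonable expected future value":
fair_value = expected_annual_value / (discount - growth)  # = 200.0

market_price = 800.0          # what speculators are actually paying

print(f"fair ~{fair_value:.0f}, market {market_price:.0f}, "
      f"premium {market_price / fair_value:.1f}x")
# -> fair ~200, market 800, premium 4.0x
```

When the market price sits at a multiple of any defensible fair value, people are trading on expected price rises rather than utility - which is the definition of a bubble above.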
Electric cars are useful as well, and yet most electric-car startups are down 90% from their peaks. A financial bubble does not mean the underlying product is bad.
It can be useful in certain contexts, most certainly as a code co-pilot, but that, and your and others' usage, doesn't change the fundamental mismatch between the limits of this tech and what Sam and others have hyped it up to do.
We've already trained it on all the data there is; it's not going to get "smarter", and it'll always lack true subjective understanding. So the overhype has been real, indeed to bubble levels, as per the OP.
> it's not going to get "smarter" and it'll always lack true subjective understanding
What is your basis for those claims? Especially the first one; I would think it's obvious that it will get smarter; the only questions are how much and how quickly. As far as subjective understanding, we're getting into the nature of consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.
My basis for these claims is my research career, with work described so far at aolabs.ai; it's still very much in progress, but from what I've learned I can respond to the two claims you're poking at:
1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate the way GPT does", or, at a higher level, "has a subjective understanding of its own that another agent can reliably come to trust". I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.
2) This connects to the first point. Humans and animals of course aren't infinitely "smart"; we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and emotional experience that grounds our rational experience, or the consciousness you talk about.
So my claim is really one claim: that AI cannot perform the same tasks, or reach the "true" intelligence level of a human in the sense of not hallucinating like GPT, without having a subjective experience of its own.
There is no answer or understanding "out there;" it's all what we experience and come to understand.
This is my favorite topic. I have much more to share on it including working code, though at a level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).
I don't see why "does not hallucinate" is a viable definition for "intelligent." Humans hallucinate, both literally, and in the sense of confabulating the same way that LLMs do. Are humans not intelligent?
Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT itself, so its performance involves associations between data, not associations between data and its own experience of that data.
Compare ChatGPT to a dog -- a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (i.e. if you associate treat + "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit", and we always have our own understanding of those words, even if we can also agree on them to certain degrees through shared linguistic corpora. In fact, the linguistic corpora are borne out of our experiences, our individual understandings, and that's a one-way arrow: something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.
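To make "associations between data" concrete, here's a toy character-level next-token model in PyTorch - a minimal sketch in the spirit of that argument, not ChatGPT's actual training code, with every name and number made up for illustration:

```python
# Toy illustration: a character-level next-token model trained purely
# on text. The point: the loss only ever compares predicted tokens to
# actual tokens from the corpus. No term in the objective ties a token
# to any sensory or embodied signal -- the grounding argued to be
# missing above.
import torch
import torch.nn as nn

text = "the dog sat. the dog sat down. sit, dog, sit."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)      # the "weights"
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)          # the "biases" live here too

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

xs = data[:-1].unsqueeze(0)  # input: each character
ys = data[1:].unsqueeze(0)   # target: the next character

for step in range(200):
    logits = model(xs)
    # Cross-entropy between predicted and actual next tokens:
    # text statistics in, text statistics out. Nothing else.
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), ys.reshape(-1))
    opt.zero_grad()
    loss.backward()  # the backprop step referred to above
    opt.step()
```

Unlike the dog's treat, nothing outside the corpus ever enters the gradient: the model's "sit" is defined only by which characters tend to follow which.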
But I'm not seeing an explicit reason why experience is needed for intelligence. You're repeating this point over and over again but not actually explaining why; you're just assuming it's a kind of given.
I would appreciate another example where a major new communications technology peaks in its implementation within the first year after it is introduced to the market.
Look, I'm an AGI/AI researcher myself. I believe in and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is itself so far from product-market fit that people are playing off the novelty of AGI, of GPT/AI being on its way to "smarter than human", rather than any real usage.
Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of overhype.