There's definitely something of value here. This could be a useful new medium. However, I hate the tone of the two hosts. It sounds like two pompous millennials talking about things they don't really understand.
The ridiculous overuse of the word "like" is like nails on a chalkboard to me. It's bad enough hearing it from many people around me; the last thing I need is for it to be part of "professional" broadcasting.
I'm super impressed with this, but that one flaw is a really big flaw to me.
I live in Idaho currently, but have lived in many different regions in the US at various points in the past (though, not California). It does seem particularly strong in California and increasingly in western Oregon, western Washington, southern Nevada, and northern Utah (which, probably not coincidentally, have been top destinations for people moving out of California over the last 10 to 20 years).
Out of curiosity, how long ago did you move to California from the UK? And is "like" commonly used this way in the UK?
I moved ten years ago, so I couldn't tell you about "like" prevalence in the UK today - I think it was a lot less common than in California a decade ago.
I really want to like it more. It could be interesting to drop in a textbook and get a dedicated series of podcasts about each chapter, for example, but the tone is so off-putting that I can't listen for more than a few minutes. It's pure cringe.
To anyone with eyes? Job seekers looking for startups to join, investors looking for places to put money, etc.
I'm sorry if your company got accepted into YC; better luck next time. At least you can hang out with the founders of... 100 AI-assisted code editors, "The first Travel Credit Card for Gen Z", "Starbucks memberships for restaurants", "a video-first food delivery app, TikTok meets DoorDash", and "the operating system for vacation rentals". Truly a staggering group of talent.
Not going to name names, but so far my favorite has been “AI for [somewhat arcane process]”.
I had no idea how “AI” could possibly be of use, so clicked through out of curiosity. Hilariously, it boiled down to “we occasionally use an LLM to email people for you.”
A) Security is always an afterthought at YC companies - I know from firsthand experience.
B) YC companies are risky to use. Obviously we meme about people using IBM for "safety", but there is an opposite side of that, which is going with a seed-stage company: it's very risky.
C) Even if you are a happy customer, if you are too niche they will typically abandon you. I've been on the decision-making side of this: sometimes your early customers don't fit your new market, so you have to let them down slowly.
I mean, I can't speak for that commenter, but I hold any VC-backed startup in suspicion for a good amount of time. If they can't reach the size demanded by their investors (which runs the gamut from ambitious-but-achievable all the way to not-in-my-or-your-lifetime), a perfectly profitable, modestly sized business is almost bound to be shut down and its services terminated with little drama, leaving behind potentially useless products, or yet another fucking ZIP file to add to the pile of things I have yet to open up and sort into other mediums.
Given what Sam has done by clearing out every single person who went against him in the initial coup and completely gutting every safety-related team, the entire world should be on notice. If you believe what Sam Altman himself and many other researchers are saying, that AGI and ASI may well be within reach inside this decade, then every possible alarm bell should be blaring. Sam cannot be allowed to be in control of the most important technology ever devised.
I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.
There are two main interpretations of what he's saying:
1) He sincerely believes that AGI is around the corner.
2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.
Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.
This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.
I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).
It's interesting, because one of the points Sam has emphatically stressed over and over on most podcasts he's gone on in the past four years is that a single person, a single company, or a collection of companies controlling ASI would be absolutely disastrous, and that there needs to be public, democratic control of ASI and the policies surrounding it.
Personally I still believe he thinks that way (in contrast to what ~99% of HN believes) and that he does care deeply about potential existential (and other) risks of ASI. I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.
I don't know why the promises to the safety team weren't kept (thus triggering their mass resignations), but I don't think it's something as silly as him becoming extremely power-hungry, or no longer believing there are risks, or thinking the risks are acceptable. Perhaps he thought it wasn't the most rational and efficient use of capital at the time, given current capabilities.
It's possible that both things could be true. He may be a greedy liar while still being very concerned about ASI safety and wanting it to be controlled by humanity collectively (or at least the population of each country, via democratic means).
Maybe he is only a greedy liar. I don't know. I'm just stating my personal belief/speculation.
>I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.
Not sure how you can believe this given all of his recent actions and the ever-growing list of whistleblowers leaving OpenAI who explicitly say safety is not taken seriously.
I mean, just generally, the ability to actually stop and reorient around working on safety seems incredibly non-trivial. To say nothing of the race dynamic he has perpetuated; the other frontier companies are unlikely to do the same.
Having seen some of the aftermath, I find it extremely hard to believe this was the result of overloaded batteries. It looks like small grenades exploded in their hands. If lithium batteries could indeed explode like this, I suspect no one would ever carry one again after this. They should certainly be illegal to have on planes, for example.