Hacker News | humansareok1's comments

The dismal level of discourse about this bill shows that Humanity is utterly ill equipped to deal with the problems AI poses for our society.


There's definitely something of value here. This could be a useful new medium. However, I hate the tone of the two hosts. It sounds like two pompous millennials talking about things they don't really understand.


Indeed, you nailed it.

The ridiculous overuse of the word "like" is as nails on a chalkboard to me. It's bad enough hearing it from many people around me, the last thing I need is it to be part of "professional" broadcasting.

I'm super impressed with this, but that one flaw is a really big flaw to me.


Out of interest, where do you live?

I’m wondering if people’s tolerance for “like” is affected by their geography.

I live in California (from the UK originally) so I honestly don’t even notice this any more.


I live in Idaho currently, but have lived in many different regions in the US at various points in the past (though, not California). It does seem particularly strong in California and increasingly in western Oregon, western Washington, southern Nevada, and northern Utah (which, probably not coincidentally, have been top destinations for people moving out of California over the last 10 to 20 years).

Out of curiosity, how long ago did you move to California from the UK? And is "like" commonly used in the UK?


I moved ten years ago, so I couldn't tell you about "like" prevalence in the UK today - I think it was a lot less common than in California a decade ago.


I really want to like it more. It could be interesting to drop in a textbook and get a dedicated series of podcasts about each chapter, for example, but the tone is so off-putting that I can't listen for more than a few minutes. It's pure cringe.


They seem to have traded away quality as well. It's now a net negative signal for a company's success if they are accepted into YC.


You’ll have to expand on this for us plebs. To whom is it a net negative signal?


To anyone with eyes? Job seekers looking for startups to join, investors looking for places to put money, etc.

I'm sorry if your company got accepted into YC, better luck next time. At least you can hang out with the founders of... 100 AI-assisted Code Editors, 'The first Travel Credit Card for Gen Z', 'Starbucks memberships for restaurants', 'a video first food delivery app, tiktok meets doordash', and 'the operating system for vacation rentals'. Truly a staggering group of talent.

Those are all real companies in W24 btw...


"100 AI assisted code editors" is not even an exaggeration.

I checked, and over 300 (of ~500) 2024 YC startups have some sort of AI tag. I'm quite curious how the current AI hype is gonna end...


Have a look at this, it's hilarious:

https://www.ycombinator.com/companies?batch=F24&batch=S24&ba...

I expect "AI Nip Alert" to show up any day now


Many of these are "Use AI For Something" startups. A few seemed meaningful but most seem destined to fail.


Not going to name names, but so far my favorite has been “AI for [somewhat arcane process]”.

I had no idea how “AI” could possibly be of use, so clicked through out of curiosity. Hilariously, it boiled down to “we occasionally use an LLM to email people for you.”


> The first Travel Credit Card for Gen Z

Huh. Is this just that Fyre Festival guy’s thing, only with the generation incremented by one?

More generally, most of these read as parody.


I was laughing, thinking you'd made up a list of the most stupid ideas, only to be baffled that they are really fucking there.


hahahaha, wow! I really thought this was a joke list. That’s stunning.


Yeah not a joke. Straight from the W24 batch page... I'm sure these weren't even the most absurd.


A) Security is always an afterthought at YC companies - I know from firsthand experience.

B) YC companies are risky to use. Obviously we meme about people picking IBM for "safety", but the opposite end of that spectrum is going with a seed-stage company - it's very risky.

C) Even if you are a happy customer, if you are too niche they will typically abandon you. I've been on the decision making side for this, sometimes your early customers don't fit your new market, so you have to let them down slowly.


I'm bearish on the giant YC classes but (C) is an entirely necessary evil at any successful startup anywhere.


I mean, I can't speak for that commenter, but I hold any VC-backed startup in suspicion for a good amount of time. If they can't reach the size demanded by their investors, which runs the gamut from ambitious-but-achievable all the way to not-in-my-or-your-lifetime, a perfectly profitable, modestly sized business is almost bound to be shut down and its services terminated with little drama, leaving behind potentially useless products, or yet another fucking ZIP file to add to the pile of things I have yet to open up and sort into other mediums.


> if they can't reach the size demanded by their investors

In this sense, how is YC any different from any other VC firm?


It's not, which is probably why I said VC not YC.


Given what Sam has done by clearing out every single person who went against him in the initial coup and completely gutting every safety related team the entire world should be on notice. If you believe what Sam Altman himself and many other researchers are saying, that AGI and ASI may well be within reach inside this decade, then every possible alarm bell should be blaring. Sam cannot be allowed to be in control of the most important technology ever devised.


I don't know why anyone would believe anything this guy is saying, though, especially now that we know he's going to receive a 7% stake in the now-for-profit company.

There are two main interpretations of what he's saying:

1) He sincerely believes that AGI is around the corner.

2) He sees that his research team is hitting a plateau of what is possible and is prepping for a very successful exit before the rest of the world notices the plateau.

Given his track record of honesty and the financial incentives involved, I know which interpretation I lean towards.


This is a false dichotomy. Clearly getting money and control are the main objectives here, and we're all operating over a distribution of possible outcomes.


I don't think so. If Altman is prepping for an exit (which I think he is), I'm having a very hard time imagining a world in which he also sincerely believes his company is about to achieve AGI. An exit only makes sense if OpenAI is currently at approximately its peak valuation, not if it is truly likely to be the first to AGI (which, if achieved, would give it a nearly infinite value).


What's the effective difference between exiting now and if it does achieve in your words "nearly infinite value" to him personally?

Either way he is set for life, truly one of the wealthiest humans to have ever existed... literally.


...or he's just Palpatine and wants a shitload of money regardless of future speculations, end of story.


It's interesting because one of the points Sam emphatically stresses over and over on most podcasts he's gone on in the past 4 years is how crucial it is that a single person or a single company or a collection of companies controlling ASI would be absolutely disastrous and that there needs to be public, democratic control of ASI and the policies surrounding it.

Personally I still believe he thinks that way (in contrast to what ~99% of HN believes) and that he does care deeply about potential existential (and other) risks of ASI. I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.

I don't know why the promises to the safety team weren't kept (thus triggering their mass resignations), but I don't think it's something as silly as him becoming extremely power hungry or no longer believing there were risks or thinking the risks are acceptable. Perhaps he thought it wasn't the most rational and efficient use of capital at that time given current capabilities.


Or maybe he is just a greedy liar? From the outside looking in, how can you tell the difference?


It's possible that both things could be true. He may be a greedy liar while still being very concerned about ASI safety and wanting it to be controlled by humanity collectively (or at least the population of each country, via democratic means).

Maybe he is only a greedy liar. I don't know. I'm just stating my personal belief/speculation.


>I would bet money/Manifoldbux that if he thought powerful AGI/ASI were anywhere near, he'd hit the brakes and initiate a massive safety overhaul.

Not sure how you can believe this given all of his recent actions and the ever growing list of whistleblowers dropping out of OpenAI explicitly saying Safety is not taken seriously.

I mean, just generally, the ability to actually stop and reorient around working on safety seems incredibly non-trivial. To say nothing of the race dynamic he has perpetuated; the other frontier companies are unlikely to do the same.


There's literally a dedicated major called History of Science. They teach fundamentally different things for different reasons.


XAi seems to be able to dump 10-20x more compute into their Grok models each time. Don't see any signs this is slowing down...


Having seen some of the aftermath I find it extremely hard to believe this was the result of overloading batteries. It looks like small grenades exploded in their hands. If lithium batteries can indeed explode like this I would suspect no one would ever carry one again after this. They should certainly be illegal to have on planes for example.


3Blue1Brown has a pretty great video series walking through Transformers:

https://www.youtube.com/watch?v=wjZofJX0v4M


How confident are you that these brain organoids are incapable of qualia and thus suffering?


Username relevant.

PS. Also, while congratulating the company, I also wanted to appreciate and second the question ...


We have no information on this topic. Consider that at this point these have a number of neurons equivalent to a fly larva.


Flies have demonstrated signs of consciousness so if that's supposed to be reassuring, it's not.


Compared to chips, where would flies be? 1-bit microcontroller level? With our brain at the top, of course (latest-generation AMD or Intel).


Sometimes the Torment Nexus is allegorical. Sometimes they build the actual thing...

