Hacker News new | past | comments | ask | show | jobs | submit | kermit___'s comments | login

Maybe it’s time to pivot?


I hear you, and there are even greater concerns than the “you kids get off of my lawn” vibe, e.g. the massive amounts of water required for cooling data centers.

But the “bubble” threat the post mentions is just emotional; things are accelerating so quickly that what is hype one day isn’t hype in a few months. LLM usage is just going to get heavier.

What should get better are filters that prevent bots from easily scraping content. Required auth could work. Yes, it breaks things. Yes, it may kill your business. If you can’t deal with that, find another solution, which might be crowdfunding, subscription, or might be giving up and moving on.

Working towards a solution makes more sense than getting angry and ranting.


The playing field is very asymmetric. Can you negotiate with these guys? No. They're faceless when it comes to their data operations. Secretive, covert even.

Creating a solution requires cooperation. Making these models fit on smaller systems and optimizing them to run with fewer resources needs R&D engineering, which no company wants to spend on right now. Because hype trains need to be built, monies need to be made, moats need to be dug (which is almost impossible, BTW).

My 10-year-old phone can track 8 people in its camera app in real time, with no lag. My A7 III can remember faces, prioritize them, focus on mammals' and humans' eyes, and track them and keep them in focus at 30 FPS with a small DSP.

Building these things is possible, but nobody cares about it right now, because AI datacenters live in a state similar to the ZIRP economy. They're almost free for what they do, even though they harm the environment in irreparable ways just to give you something that is not right.

It's as if people have lost their minds about this thing.


People’s basic needs are food and water, followed by safety, belonging, esteem, and finally self-actualization. On top of that, people prefer convenience.

With LLMs comes a promise of safety in that they will solve our problems, potentially curing disease and solving world hunger. They can sound like people who care and make us feel respected and useful, making us feel like we belong and boosting our esteem. They do things we can’t do and help us actualize our dreams, whether in code or elsewhere. We no longer have to work hard to use them, and they’re readily available.

So, people are going to focus on that.


Ah.

We were using models, and accurate ones at that, for drug discovery, nature simulation, weather forecasting, ecosystem monitoring, etc. well before LLMs with their nondescript chat boxes arrived. The AI was there; the hardware was not. Now we have the hardware, and the AI world is much richer than generative image models and stochastic parrots, but we live in a world where we assume the noisiest thing is the best and most worthy of our attention. It's not.

The only thing is, LLMs look like they can converse, while giving back worse results than the models we already have, or could develop, without conversation capabilities.

LLMs are just shiny parrots which can taunt us from a distance, and look charitable while doing that. What they provide is not correct information, but biased Markov chains.
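For readers unfamiliar with the analogy, a Markov chain text generator just samples the next word from the successors it has observed in training text; whatever biases the corpus has, the output inherits. A minimal Python sketch (the toy corpus and function names are illustrative, not from any real system):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Sample a sequence by repeatedly picking a random successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy corpus: the chain can only echo word pairs it has already seen.
corpus = "the model predicts the next word and the next word follows the last"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Real LLMs condition on far longer contexts than one word, but the fluent-yet-source-bound sampling behavior is what the comparison is pointing at.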

It only proves that humans are as gullible as other animals. We are drawn to shiny things, even if they harm us.


This work seems like some sort of advanced LLM output. Thought-provoking and curious, but with no soulful thread of consciousness. It could also be schizoid rambling, but it seems too detached for that. I read it bottom to top and it made no more sense. I guess this is what makes the front page of HN these days. It’s probably the third or fourth thing over the past month that was obvious garbage.


I've no idea if TFA is good or not, but it is very new and has only 4 upvotes. The HN front page is a mix of new and popular-despite-age, so it's not surprising to find some barely-vetted things there. (AFAIK the idea is that we're all supposed to vet them, and the survivors stay on the front page longer.)


They should be vetted by people who pay attention and read the content. If LLM garbage is regularly getting around the filter, that’s a stone’s throw from dangerous content that could damage the community.


Like any mind-altering substance, I would suggest caution. Bad decisions are made and dangerous thoughts are had. Relationships can end, lives can be ruined, and people can die. It’s not a hallucinogen-specific problem, but here be dragons.


Like driving a car.


https://archive.ph/2025.03.18-084516/https://gizmodo.com/rem...

(Had a page crash and strange refreshes from ads.)

