> A company (Sun King, SunCulture) installs a solar system in your home
> * You pay ~$100 down
> * Then $40-65/month over 24-30 months
But also:
> The magic is this: You’re not buying a $1,200 solar system. You’re replacing $3-5/week kerosene spending with a $0.21/day solar subscription (so with $1.5 per week half the price of kerosene)
And earlier they say “$120 upfront might as well be a million when you’re making $2/day”. The whole article reads like it was vomited up by an LLM trained exclusively on LinkedIn posts. The math errors are consistent with that.
They say $120 might as well be a million and they immediately follow that up with “$100 down payment to get started”. But I thought it was like a million dollars??
LLM slop. Author couldn’t even be bothered to read the slop before clicking publish.
Yeah way too many tell-tale ChatGPT rhetorical devices in this article, which is a shame because the topic and premise are fascinating, but those turned me off from finishing it.
AI slop hits 700+ upvotes on Hacker News. The Dead Internet and the triumph of quantity over quality loom. A sign of things to come.
I think it's more of a link with a nice title getting 700+ upvotes. IMO many (and most non-tech) articles linked on HN are bad and most of their value lies in them being a conversation starter.
Yeah I think this is right. I want to read an HN conversation about solar in Africa, and all the interesting anecdotes that come out of the woodwork. Valuable even if the article itself is mediocre.
It was a hard read, but actually a fairly interesting article. It's a shame the author chose to format it how they did.
The key takeaways - that solar is super cheap, that technology has unlocked offering hire purchase to incredibly poor remote communities, and that westerners looking to buy carbon credits are basically subsidising that - are pretty interesting. There was a lot of fluff around those points though.
> The Dead Internet and the triumph of quantity over quality loom
always_has_been.jpg
The Internet drowned in garbage back when they coined the term "content marketing". That was long before transformer models were a thing.
People have this weird impression that LLMs created a deluge of slop and reduced overall quality of most text on-line. I disagree - the quality arguably improved, since SOTA LLMs write better than most people. The only thing that dropped is content marketing salaries. You were already reading slop, LLMs just let the publisher skip the human content spouter middleman.
I am too, and the wisdom of old age has taught me: once Eternal September hits a community, it's time to walk away.
I dunno if HN is at that point yet, but it's certainly creeping closer compared to where it was 5-10 years ago. Reddit passed the point of no return within the last few years.
Back in my day, we lamented the loss of bang paths for email... and you had to pay Robert Elz to bring in a news group because munnari connected Australia to the world...
It's getting boring how every. single. article. has comments claiming it's AI generated. The funny thing is that the writing styles are completely different, yet every one apparently has "tell-tale" signs of "AI slop". It just gets tiring.
For what it's worth I pasted the first couple of paragraphs into several AI detectors and 4/5 said it's clean, while one said mixed (partly AI generated, partly human). So either all these AI detection tools are crap, or the text is not so "obviously" AI generated. Not saying either way, but it seems to at least not be so obvious.
All of those tools are garbage. There is no reliable automated way to detect AI-generated text. In 2023 OpenAI had a tool for this as well, and they eventually took it down because it wasn't accurate enough. The major AI labs are probably best positioned to make such a tool work. If even they can't, then some random company with access to a fraction of the data and a fraction of the compute almost certainly also cannot.
Agree those tools are unreliable. Unless you have a massive number of ML models trained on individuals' writing[0], the best you can do is vibe-checking[1].
You realise the irony, right? You say AI "slop" has a distinctive structure, but at the same time you (and the other poster) say that AI tools cannot detect it? For what it's worth I'm an AI sceptic, but one thing AI tools are good at is pattern matching (that's really all they do). Yet somehow pattern matching AI writing is so obvious to humans while it completely fails all AI tools (just tried another tool, which said 100% human).
It doesn't add up. Moreover, it's getting tiring, because every single article gets these comments, and I've seen enough examples where the author showed up in the discussion, or the text predated LLMs being widely available, yet posters were still adamant that it was AI generated.
I highly doubt that people here would reliably pick out (success rate > 60%, i.e. correctly guessing whether a given text was written by a human or an LLM at least 60% of the time) LLM-generated text that completely fools 90% of AI detectors.
Regarding the setup-punchline format: guess what, that was popular way before LLMs (not surprising, since LLMs must have learned it from somewhere).
What detection tools are you using and why do you have such confidence in them? How reliable are they and how do you know? Why do you think these particular tools are better pattern matchers than actual humans (on HN no less)?
Food for thought, fwiw I think you have some valid points.
> A company (Sun King, SunCulture) installs a solar system in your home
> * You pay ~$100 down
> * Then $40-65/month over 24-30 months
But also:
> The magic is this: You’re not buying a $1,200 solar system. You’re replacing $3-5/week kerosene spending with a $0.21/day solar subscription (so with $1.5 per week half the price of kerosene)
$1.5 a week is about $6 a month, not $60.
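If anyone wants to sanity-check the thread's arithmetic, here's a quick sketch using only the figures quoted above; the 52/12 weeks-per-month conversion is my assumption, but any reasonable factor gives the same conclusion:

```python
# Quick check of the article's numbers as quoted in this thread.
daily_rate = 0.21          # "$0.21/day solar subscription"
plan_monthly = (40, 65)    # "Then $40-65/month over 24-30 months"
kerosene_weekly = (3, 5)   # "$3-5/week kerosene spending"

weeks_per_month = 52 / 12  # ~4.33, assumed conversion factor

weekly_from_daily = daily_rate * 7                        # ~ $1.47/week
monthly_from_daily = weekly_from_daily * weeks_per_month  # ~ $6.37/month

print(f"$0.21/day ~ ${weekly_from_daily:.2f}/week ~ ${monthly_from_daily:.2f}/month")
print(f"Article's payment plan: ${plan_monthly[0]}-{plan_monthly[1]}/month")
print(f"Kerosene: ${kerosene_weekly[0]}-{kerosene_weekly[1]}/week "
      f"~ ${kerosene_weekly[0] * weeks_per_month:.0f}-"
      f"{kerosene_weekly[1] * weeks_per_month:.0f}/month")
# The $0.21/day figure works out to roughly $6/month, nowhere near the
# $40-65/month payment plan quoted from the same article.
```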