Angostura's comments | Hacker News

I rather disagree with this position.

To risk an analogy, if I throw petrol onto an already smouldering pile of leaves, I may not have ‘caused’ the forest fire, but I have accelerated it so rapidly that the situation becomes unrecognisable.

There may already have been cracks in the edifice, but they were fixable. AI takes a wrecking ball to the whole structure.


This is fair as a criticism of the leading AI companies, but there's a catch.

When you attribute blame to technologies, you make it difficult to use technologies in the construction of a more ethical alternative. There are lots of people who think that in order to act ethically you have to do things in an artisanal way, whether it's growing food, making products, delivering services, or whatever. The problem with this is that it's outcompeted by scalable solutions, and in many cases our population is too big to apply artisanal solutions. We can't replace the incumbents with just a lot of hyper-local boutique businesses, no matter how much easier it is to run them ethically. We have to solve how to enable accountability in big institutions.

There's a natural bias among people who are actually productive and conscientious, which is that an output can only be ethical if it's the result of personal attention. But while conscientiousness is a virtue in us as workers, it's not a substance that is somehow imbued in a product. If the same product is delivered with less personal attention, it's just as good - and much cheaper and therefore available to more people, which, if the product is good for them, makes it more ethical and not less.

(I'm making a general point here. It's not actually obvious to me that AI is an essential part of the solution either)


I agree with this. We've made existing problems 100x worse overnight. I just read the curl project is discontinuing bug bounties. We're losing so much with the rise of AI.

That seems a bit fatalistic, "we have lost so much because curl discontinued bug bounties". That's unfortunate, but it's very minor in the grand scheme of things.

Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.

Indeed, intelligent researchers have used AI to find legitimate security issues (I recall a story last month on HN about a valid bug being found and disclosed intelligently with AI in curl!).

Many tools can be used irresponsibly. Knives can be used to kill someone, or to cook dinner. Cars can take you to work, or take someone's life. AI can be used to generate garbage, or for legitimate security research. Don't blame the tool, blame the user of it.


Blaming only people is also incorrect. It's incredibly easy to see that once the cost of submission fell low enough compared to the possible reward, bounties would become unviable.

AI just made the cost of entry very low by pushing it onto the people offering the bounty.

There will always be a percentage of people desperate enough, or without scruples, who can do that basic math. You can blame them, but it's like blaming water for being wet.


"Guns don't kill people, people kill people."

> Also, the fault there lies squarely with charlatans who have been asked/told not to submit "AI slop" bug bounties and yet continue to do so anyway, not with the AI tools used to generate them.

I think there's a general feeling that AI is most readily useful for bad purposes. Some of the most obvious applications of an LLM are spam, scams, or advertising. There are plenty of legitimate uses, but they lag compared to these because most non-bad actors actually care about what the LLM output says and so there are still humans in the loop slowing things down. Spammers have no such requirements and can unleash mountains of slop on us thanks to AI.

The other problem with AI and LLMs is that the leading edge stuff everyone uses is radically centralized. Something like a knife is owned by the person using it. LLMs are generally owned by one of a few massive corps, and the best you can do is sort of rent them. I would argue this structural aspect of AI is inherently bad regardless of what you use it for, because it centralizes control of a very powerful tool. Imagine a knife where the manufacturer could make it go dull or sharp on command depending on what you were trying to cut.


I suppose, to belabor the analogy, it's still not the petrol's fault - the same fuel is also used to transport firefighting resources; in fact, a controlled burn might have effectively mitigated the risk of a forest fire in the first place. Who left those leaves to smolder in the first place, anyway? Why'd you throw petrol on the pile?

You just have to be careful not to say “this is AI's fault” - it's far more accurate, and constructive, to say “this is our fault; this is a problem with the way some people choose to use LLMs; we need to design institutions that aren't so fragile that a chatbot is all it takes to break them.”


> we need to design institutions that aren’t so fragile that a chatbot is all it takes to break them.

Like, we need to design leaves that aren't so fragile that a petrol fire can burn them.

I don't agree that's more constructive. We need to defend the institutions we've got.


Destruction is always easier than creation, and humans really prefer to be lazy.

It took 2 world wars to motivate us to create the current institutions. You think we will be less lazy and more motivated than those people were?


Or: having a glass of wine with dinner or a few beers on the weekend is fine, but drinking a 6-pack per day or slamming shots every night is reckless and will lead to health consequences.

I agree and disagree with parts of what you said.

AI may have put the problem on a distinct trajectory, but the old system was already broken and collapsing. Whether the building falls over or collapses in place doesn't change that the building was already at its end.

I think the fact that AI is allowed to go as far as it has is part of the same issue, namely, our profit-at-all-costs methodology of late-stage capitalism. This has led to the accelerated destruction of many institutions. AI is just one of those tools that lets us sink more and more resources into the grifting, faster.

(Edit: Fixing typos.)


Could it not be as simple as aspiration (we want to move to digital sovereignty) versus pragmatism (we need to implement this thing next month)?

If there were ratings, presumably the incentive would be to have your beans rated as higher quality.

This doesn’t feel particularly evil to me - though it treats beans as fungible.

Something similar is done with milk sales from individual farms in England.


Is the issue with this that mobile OSs - iOS in particular - are rather aggressive about shutting down apps in the background after a while?

iOS definitely made a name for itself, to the ire of many, for this many moons ago, but it's now a fairly ubiquitous default behavior for mobile phone operating systems (because battery life), even on Android.

I work at a hospital. I think this could be a really interesting emergency fallback system in the event that there is a catastrophic failure of mobile, bleep and WiFi.

I’ve only read the first one. My main thought was ‘I wish he could write people as well as he could write spiders’ :)

I think humans and spiders and octopuses and viruses are, for him, just a backdrop for the object he wants to narrate, in contrast to much other fiction, where the people are the objects. I also missed the human part of it.

Well, it helped me. So thanks!

So, no need to speak of them at all.

I'd love to see you substantiate that - bet you can't.

I certainly can't. I'm running GrapheneOS.

And you don't think they will include an abstraction layer?

An abstraction layer doesn’t prevent Google from seeing the data. Last year the story was that Apple would be running a Google model on their (Apple’s) own server hardware.

This story says the custom model will run on-device and in Apple's Private Cloud Compute. The implication is that Google will not see the data. The "promise" of Private Cloud Compute is that Apple wants it to be trusted like "on-device".

Presumably cutting Google out of getting the data is part of why this story was first mentioned last year but only now sounds close to happening. I think it's the same story/project.


Yes, and that's still the story, as far as I can tell. So an abstraction layer would let them swap out the underlying model.
