Hacker News | intended’s comments

The articles didn’t blame LLMs; they talked about how they would get used, precisely through the lens of systems, incentives, and culture.

MIT actually has a paper on how ChatGPT use impacted cognitive skills for essay writing.

> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

> https://arxiv.org/abs/2506.08872

> Cognitive activity scaled down in relation to external tool use. …

> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.


I’d add another layer: for American tech workers, regulation also reduces profits. This hurts salaries, stock options, and career growth.

Incentives make the world go round, so even if people recognize the issue, they would rather it become someone else’s problem than willingly harm their own future.


Our society, pre internet, built systems to manage trust. The conditions that allowed those systems to exist (the speed of transmission of data, the ratio of content generation to verification, the ability to shape consensus), have changed.

You are sounding the clarion call for community and cooperation, and it will not work. Not because people don’t want community or the better things, but because incentives make the world go round.

The choice between making some money and preserving the information commons is no choice at all. That degradation of the commons means no one can escape: no community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.

We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.

Instead of building communities of people, build collections based on rules of engagement. Participants, be they bots or humans, must follow prescribed rules of conflict and debate.

That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.


Very interesting, I've thought in a completely different direction, towards human verification. "IRL KYC for friends" or something

I always hit problems with it though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with intent of discussion, with those I've met in real life. Well, at some point, someone I've met face to face would be incentivized to maybe share a link to their friend's concert. Perhaps there's a free guest list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:

I want to expand the network we've created together, and it means trusting someone else to bring in people to the online space I've never met in real life. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?

(I recognize astroturfing is different from the impending slop tsunami, but both feel like they’re in the same stadium.)


Proof of human is the natural first stop.

Your solution shares its essence with a club, a WhatsApp group, or an interest group.

It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.

That is the broader environment you are a part of.

Everyone from FAANG firms and governments to game companies struggles to distinguish real people from bots.

If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.

The issue is that verification is logistically expensive, legally complex, prone to infringing on rights, and, on top of all that, error-prone.

To top it off: if proof of humanity ends up gatekeeping any form of value, you will set up incentives to break verification.


People have measurably lower levels of ownership and understanding of AI-generated code. The people using GenAI reap major savings in time and cognitive effort, but the task of verification is shifted to the maintainer.

In essence, we get the output without the matching mental structures being developed in humans.

This is great if you have nothing left to learn; it’s not so great if you are a newbie or have low confidence in your skills.

> LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.

> https://arxiv.org/abs/2506.08872

> https://www.media.mit.edu/publications/your-brain-on-chatgpt...


While I agree with this intuitively, I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc.

> "I also just can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran etc."

There was no "switch"; the transition took literally decades. Assembler and high-level languages co-existed in the mainstream all the way into the 1990s because the trade-off was well understood: assembler for the best performance (e.g. DOOM's renderer in 1993), high-level languages for ease of development and portability (something that really mattered when there were a dozen different CPU architectures around).

There is no need to get past the argument because it doesn't exist. Nobody said that.


No one's saying 100% of code will be LLM-generated starting in June this year either, though (at least if you're not named Dario or Sam).

There is a massive difference between outright transformation of something you created yourself and a collage of snippets plus some sauce based on stuff you did not write yourself. If all you did was train your AI exclusively on work product you created during your lifetime, I would have absolutely no problem with it; in fact, in that case I would love to see copyright extended to the author.

But in the present case the authorship is simply removed by shredding the library and then piecing the sentences back together. The fact that under some circumstances AIs will happily reproduce code that was in the training data is proof positive that they are, to some degree, lossy compressors. The more generic something is ("for (i=0;i<MAXVAL;i++) {"), the lower the claim to copyright protection. But higher-level constructs past a couple of lines that are unique in the training set and are reproduced in the output, modulo some name changes and/or language changes, should count as automatic transformation (and hence as infringing or as creating a derivative work).
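That "lossy compressor" claim can be illustrated with a deliberately crude sketch (a bigram lookup table, nothing like a real transformer): a model overfit to a single training snippet reproduces it verbatim under greedy decoding, which is the sense in which memorization is compression.

```python
from collections import defaultdict

def train_bigram(text):
    """Toy 'model': memorize which token follows which in the training data."""
    model = defaultdict(list)
    tokens = text.split()
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, start, max_tokens=20):
    """Greedy decoding: always emit the first continuation ever seen."""
    out = [start]
    for _ in range(max_tokens):
        continuations = model.get(out[-1])
        if not continuations:
            break
        out.append(continuations[0])
    return " ".join(out)

# Every token in this snippet is unique, so the table stores it exactly.
training = "static const size_t MAXVAL = 1024 ;"
model = train_bigram(training)
print(generate(model, "static"))  # reproduces the training snippet verbatim
```

Add more training data with repeated tokens and the reconstruction degrades, which is why the compression is lossy rather than exact.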


>they are to some degree lossy compressors

Is this even a controversial statement? Seems very clearly correct to me.

My original point wasn't worried about the copyright though. I'm completely ignoring it for now because I do agree it's a problem until Congress says something (lol) or courts do.


>can't get past the argument that people said the same thing when we switched from everyone using ASM to C/Fortran

That's a bad comparison, for two reasons. One is that C is a transparent language that requires understanding of its underlying mechanics. Using C doesn't absolve you from understanding lower-level concepts and was never treated as such. The power of C comes squarely with a warning label that it is a double-edged sword.

Secondly, insofar as people have used higher-level languages as a replacement for understanding and introduced an "everyone can code now" mentality, the criticism has been validated. What we got, long before AI tooling, were shoddy, slow, insecure, tower-of-Babel codebases that were awful for the exact same reason these newest practices are awful.

Introducing new technology must never be an excuse for ignorance; the more powerful the tool, the greater the knowledge required of the user. You don't hand the most potent, dangerous weapon to the least competent soldier.


Sure, a lot of people are incompetent. But the world generally works. Which is, of course, the problem. The only time anything really gets questioned is when you have a GitHub-style "zero nines" outage.

The study compares ChatGPT use, search engine use, and no tool use.

The issues with moving from ASM to C/Fortran are different from using LLMs.

LLMs are automation, and general-purpose automation at that. The Ironies of Automation came out in the 1980s, and we’ve known there are issues ever since, like the vigilance decrement that comes when you switch from operating a system to monitoring it for rare errors.

On top of that, previous systems were largely deterministic; you didn’t have to worry that the instrumentation was going to invent new numbers on the dial.

So now automation moves from flight decks and assembly lines to mom-and-pop stores, and from deterministic to non-deterministic.


The HLL-to-LLM switch is fundamentally different from the assembler-to-HLL switch. With HLLs, there is a transparent homomorphism between the input program and the instructions executed by the CPU. We exploit this property to write programs in HLLs with precision and awareness of exactly what is going on, even if we occasionally have to drop to ASM because all abstractions are leaky.

The relation between an LLM prompt and the instructions actually executed is neither transparent nor a homomorphism. It's not an abstraction in the same sense that an HLL implementation is. It requires a fundamental shift in thinking.

This is why I say "stop thinking like a programmer and start thinking like a business person" when people have trouble coding with LLMs. You have to be a lot more people-oriented and worry less about the technical details, because trying to prompt an LLM with anywhere near the precision of an HLL is just an exercise in frustration. But if you focus on the big picture, the need you want your program to fill, LLMs can be a tremendous force multiplier in getting you there.
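The contrast can be sketched with two toy functions (purely illustrative; neither is a real compiler or a real model): a compiler-like mapping is a pure function from source text to instructions, while sampling-based generation is not a function of the prompt at all.

```python
import random

def toy_compile(expr):
    """Toy 'compiler' for postfix arithmetic: the same source text
    always maps to the same instruction sequence."""
    return [f"PUSH {tok}" if tok.isdigit() else f"APPLY {tok}"
            for tok in expr.split()]

def toy_generate(prompt, seed=None):
    """Toy stochastic 'generator': the same prompt can yield a
    different instruction sequence on every run."""
    rng = random.Random(seed)
    vocab = ["PUSH 1", "PUSH 2", "APPLY +", "APPLY *"]
    return [rng.choice(vocab) for _ in range(len(prompt.split()))]

# Deterministic: you can reason about exactly what will execute.
assert toy_compile("1 2 +") == toy_compile("1 2 +")

# Stochastic: two runs over the same prompt need not agree.
print(toy_generate("add one and two"))
print(toy_generate("add one and two"))
```

The point of the sketch: with the compiler-like mapping you can prove properties of the output by inspecting the input, whereas with the sampler the best you can do is characterize a distribution of outputs, which is a different kind of reasoning entirely.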

> The people using GenAI reap a major time and cognitive effort savings, but the task of verification is shifted to the maintainer.

The people using GenAI should be the ones doing the verification. The maintainer's job should not meaningfully change (other than the maintainer using AI to review incoming code, of course).

Why does everyone who hears "AI code" automatically think "vibe-coded"?


Because that's what they're seeing? If only a small fraction of submissions can use the tool correctly, that's on the tool.

Drones have upended the unit economics of combat and made older doctrines less relevant. They seem to combine missile-level payloads, aircraft-level control, and the ability to project force over a distance.

I don’t see any technical way we can stop them - but it’s not like we stopped guns.

The drone and LLM era are the end of many things we older folk are used to. The information commons are sunk with LLMs - we simply do not have the capacity (resources, manpower, bandwidth, desire) to verify the content being churned out every second.


I dunno, seems to me that they're slow enough and fly low enough that you could shoot them down with .50 cal rounds. The hard part is aiming and hitting them. But it seems to me that someone could make a radar-assisted point-defense system that automatically aims and fires a .50 cal gun, like automatic skeet shooting. Such a system would have limited range and could not hit very fast-moving or high-altitude targets, but it would be cheap enough to deal with the cheap, slow drones.

Look into what Ukraine is doing, they are at the forefront of all this.

They already have the system you describe. The German ones with radar are expensive compared to technicals aimed by soldiers. Neither has a very long effective range, and the drones do not fly straight paths. You'd need millions of them.

They are mass producing interceptor drones for a reason.


I'm skeptical about the "cheap drones: who knew?" narrative. Such drones have existed since 1944.

There's been a massive step change in their capability per unit cost.

What used to cost millions per unit now costs tens of thousands. That's significant.

It's like saying artillery isn't that big a deal in 1914. After all, it's been around since 1452.


It’s basically smart grenades that “throw” themselves, high-tech shit. There’s def going to be some kind of automatic helmet-mounted counter-device coming.

In many examples, LLMs betray the fact that they are not reasoning: when given problems whose solution requires reasoning, they fail.

Even in this discussion, someone provided an example of coming up with board game rules. The LLM found all the proposed rules valid, because they looked and sounded like board game rules, even when they were not.

In short: you can learn a subject, make a mental model of it, play with it, and rotate it or infer new things about it.

LLMs are more analogous to actors who have learnt a stupendous number of lines and know how those lines work.

They are, by definition, models of language.

If you want a better version, GenAI needs to be able to generate working voxel models of hands and 3D objects just from images.


I don’t believe the board game rules example. I think this would be a piece of cake for an LLM. I’m happy to be proven wrong here if you share an example.

This is the user I took the example from: https://news.ycombinator.com/item?id=47689648#47696789

This assumes the limiting factor is content generation, not ability to read and verify.

You make the point later in your comment, but consider it a minor issue (“randos”).

The actual limits are verification, and then attention. Verification is always more expensive than generation.

However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.


> This assumes the limiting factor is content generation, not ability to read and verify.

Content generation is the thing copyright applies to. If you want to create a reward system for verification, it's not going to look anything like that.

It mostly looks like things we already have, like laws against pretending you're someone else to trade on their reputation, so that people can build a reputation for trustworthiness and make money from subscriptions or ads by being the one people turn to when they want trustworthy information.

> However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.

I suspect the real problem here is the voting thing. When people derive significant value from information, they're quite willing to pay for it. Wall St. pays a lot of money for Bloomberg terminals, companies pay for R&D and market research, individuals often pay for financial software or games and entertainment content, etc.

But voting is a collective action problem. Your vote isn't very likely to change the outcome so are you personally going to spend a lot of money to make sure it's informed? For most people the answer is going to be no, so we need something that gives them access to high quality information at minimal cost if we want them to be informed.

Annoyingly, one of the common methods of mitigating collective action problems (government funding) has a huge perverse incentive here, because the primary things we want people to be informed about are political issues and official misconduct, so you can't give the incumbent politicians the purse strings, for the same reason the First Amendment bars them from governing speech.

So you need a way to fund quality reporting the public can access for free. Advertising kind of fit, but it never really aligned the incentives: you can often get more views by being entertaining or inflammatory than by being factual.

The question is basically, who can you get to supply money to fund factual reporting for everyone, whose interest is for it to be accurate rather than biased in favor of the funder's interests? Or, if that's not a thing, whose interests are fairly aligned with those of the general public? Because with that you can use a patronage model, i.e. the content is free to everyone but patrons choose to pay money because they want the work to be done more than they want to not pay.

The obvious answer for "who" is then "the middle class" because they're not so poor they can't pay a few bucks while still consisting of a large diverse group that won't collectively refuse to fund many classes of important reporting. But then we need two things. The first is for the middle class to not get hollowed out, which we're not doing a great job with right now.

And the second is to have a cultural norm where doing this is a thing, i.e. stop teaching people illiterate false-dichotomy nonsense where the only two economic camps are "Soviet Communism," in which the government is required to solve everything through central planning, and "greed is good," where being altruistic makes you a doofus for not spending all your money on blackjack and cocaine. People instead need to be encouraged to notice that once their basic needs are met, wanting to live in a better world is just as valid a use for free time and disposable income as designer shoes or golf.


I don’t think that’s how fair use works.

Yes.

1) Quantity has a quality all its own: scale makes a difference.

2) The tools themselves automate tasks and consolidate their outputs. The “sale” of a piece of content, and its consumption, shifts away from the people producing it. For example, we have entire networks and systems that depended on consumption occurring on the site itself: news websites or indie sites that depend on ad revenue.

