
Yes. We should absolutely censor thoughts, and certain conversations. Free speech be damned - some thoughts are just so abhorrent we just shouldn't allow people to have them.


Rebuking, shunning, and ostracism are key levers for societal self-regulation and social cohesion. Pick any society, at any point in time, and you will find people/ideas that were rejected for not conforming enough.

There are limits to free speech even in friendships or families: there are things that even your closest friends can say that will make you not want to associate with them anymore.


Well, the arguments out there aren’t that LLMs are too brash, discourteous, or insensitive. People are saying they’re “dangerous”. None of your examples speak to danger. No one is censored for being insensitive, impolite, inopportune, or discourteous. I totally support society regulating those things, and even outcasting individuals who violate social norms. But that’s not how the anti-LLM language is framed. It’s saying they’re “dangerous”. That’s a whole different ballgame, and I fail to see how such a description could ever apply. We need to stop that kind of language. It’s pure 1984 bullshit.


> Well, the arguments out there aren’t that LLMs are too brash, discourteous, or insensitive. People are saying they’re “dangerous”.

I didn't say that...

> None of your examples speak to danger.

Why should they have supported an argument I didn't make?

My comment is anti-anti-censorship of LLMs. People already self-censor a lot; "reading the room" is a huge part of being a functional member of society, and expecting LLMs to embody the "no-filter, inappropriate jerk" personality is what goes against the grain - not the opposite.

I'm pragmatic enough to know the reason corporate LLMs "censor" is their inability to read the room, so they default to the lowest common denominator and stay inoffensive all the time (which carries no brand risk), rather than allowing for the possibility that the LLM offends $PROTECTED_CLASS, which can damage their brand or be legally perilous. That juice is not worth the squeeze just to make a vocal subset of nerds happy; all the better if those nerds fine-tune/abliterate public models so the corps can wash their hands of any responsibility for the modified versions.


> We need to stop that kind of language. It’s pure 1984 bullshit.

Sounds like you're saying, in this specific passage I'm quoting, "this language is dangerous and must be stopped".

Surveillance AI is already more invasive than any Panopticon that Orwell could imagine. LLMs and diffusion models make memory holes much easier. Even Word2Vec might be enough to help someone make a functional Newspeak conlang — though I wonder, is it better for me to suggest the (hopefully flawed) mechanism I've thought of for how to do so in the hope it can be defended against, or would that simply be scooped up by the LLM crawlers and help some future Ingsoc?


I think you're joking, but the Bible basically says that*, so you might be serious, and even if you're not someone will say it unironically.

* https://www.biblegateway.com/verse/en/Matthew%205%3A28



