Hacker News | cowboylowrez's comments

how can crime be bad if it forces us to police crime?

Depends on the observer and your definition of "bad".

The police are happy they are paid. The victims are sad they are hurt. Is society better as a whole because it can handle crime? I'm not sure.

What does bad mean? Seems like an overloaded concept, ask around and good luck.

You can have a lot more fun by completely reducing the original question and plugging in different values for "strictly awful" and "AI content" and "it forces us to..."

How can eating be good if we just get hungry again? Implies eating is bad despite the value we derive from it.

How can hard work be bad if it produces meaningful results? Implies hard work is good despite the pains we take on from it.

I would argue that this kind of reduction and replacement significantly changes the original question, but it is a fun thing to explore. I'm not sure we'll get closer to an answer to the original, though. And I'm not sure it's safe to take the answer from one of the derived questions and use it for the original.

But don't take my word for it, I'm mostly restating one of the key points from Thinking, Fast and Slow.

Can I safely assume that what you were implying is that AI content is undesirable because it is a strain on human systems? I think that's the point the article was trying to make.


it was more simply a reply to

>How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

I should have quoted what I just did in my original reply; I feel like I wasted your time by not including it. Still, you did post interesting things, so not all is lost.

>how can crime be bad if it forces us to police crime?

crime can be bad whether we police it or not. we actually police crime because it's bad, at least in societies that are so inclined to have a police force. a desire to reduce something's occurrence is not speaking positively of such occurrences.

> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?

this is neither a disqualifier for being "strictly awful", nor the newly arrived unique event finally necessitating fixes to trust and reward systems. I would hope that we don't evaluate the goodness of AI based on whether we have functioning trust and reward systems.


Fair points, and thanks for clarifying.

Your last point helps me tease out what I think rubs me the wrong way. Another analogy, "these newly introduced, extremely fast cars make it entirely unsafe to drive drunk."

Of course, to be fair, we'd have to point out that the purchase, operation, and production (and more) of said vehicles have a terrible impact.

I'd just love to hear that we are going to crack down on drunk driving, which was a problem even when we were going slower. Obviously, the metaphor falls apart - trust and reward are much more interesting nuts to crack.

It's a really hard point to make because expressing an interest in wanting to see one part of the problem solved seems to indicate to others that I don't care about all the other aspects.


We're not cracking down on LLMs in any meaningful way. They're built on copyright infringement at an unprecedented scale. It's the kind of thing where the law is looking the other way while people's lives are destroyed, and some, if lucky, will be compensated 30 years from now, probably with pitiful amounts of money.

Corruption generally works by inflicting a diluted, distributed harm. Everyone else ends up a bit worse off except for the agent of corruption, which ends up very well off.


I try to deceive my opponent in a few openings, simply by artificially delaying my move in order to appear that my PREVIOUS move was an error on my part and that I am currently at a loss as to what to do. I don't know how effective it is, but I'm low rated and all sorts of dumb schemes would work at my rating.

As a counterpoint, I found wikipedia's "perennial_sources_list" to be a pretty reasonable efficiency measure. Additionally, what's the problem with wikipedia's entry about "Arctic Frost"? (your [1] did not link to anything regarding that entry)

>This is broadly demonstrated by Wikipedia's constant decline in traffic from 2022 (~165M visits/day) through the present (~128M visits/day)[3].

This demonstrates only the decrease in web traffic, and there are plenty of discussions about the reasons why. I suspect that conservatives didn't all of a sudden decide to hate wikipedia starting in 2022, as you seem to imply.


Sure sure, but what happens if someone isn't 100 percent behind their opinions? Initial assessments, for instance, could very well attempt to surface problems or anticipate arguments for or against particular viewpoints.

That's fine, but you should attribute your certainty or uncertainty, as the case may be, to yourself and not to "the devil."

It's vastly more honest to say, "I'm not sure about this" than "Devil's advocate: blah blah blah". Besides hiding behind the devil, the devil's advocate makes the devil look more confident than he should be.


yeah but you're sort of attributing dishonesty to someone's post when I don't think it merits it.

>There's a big difference between listening to other perspectives and inventing other perspectives.

while there's a big difference, the difference doesn't invalidate thinking through issues and searching for the actual conflicting views. "Devil's advocate" is a common enough term, what's the big deal? Is it the word "devil"? Do you think someone is calling you Satan?


> yeah but you're sort of attributing dishonesty to someone's post when I don't think it merits it.

There's a potential for dishonesty, but lack of honesty can also mean just opacity or reticence. Either way, openness and honesty are superior.

I do think that sometimes people say "devil's advocate" when it's their own opinion but an "unpopular" opinion that they may be embarrassed to admit, so they hide behind the devil, pretending they're not the devil themselves.

> "Devil's advocate" is a common enough term, what's the big deal? Is it the word "devil"? Do you think someone is calling you Satan?

No. The issue is not the term. A different term would not help. But the term is instructive about its own usage. In the Catholic Church, nobody wanted to argue against a potential saint, so someone had to be specifically appointed by the Church to argue the other side, a position the arguer didn't necessarily believe. The problem with devil's advocates online is that they're self-appointed for some reason, despite the fact that usually there are already people who sincerely believe that opinion and would argue for it, without the need for a devil's advocate. The Catholic Church canonization process is completely different from online arguments, and there's no need for the special role of the devil's advocate.


I actually like the role of devil's advocate and can appreciate it. This fondness is not decreased by your assertion that there is no need. I do like your history on the term's origin, but again I don't think it follows that there is "no need" for the role; maybe the role can exist without the appropriation of the historical term.

a small percentage of them will, that's for sure!


My guess is MSSQL, as I've seen the term quite a bit with those guys.


honestly I'd be ok with a being treated as different than b in this colorized use case. sure, maybe it'd be "bug inducing" or a subconscious push toward human brains forgetting they produced two aliases, etc., but I think what you're proposing with the c colorizing is impractical. and yes, I get it, "the tools caused the crash that brought down civilization," but c'mon, if you're still using c, folks have already talked you out of safety nets lol


yeah but can't you use the ipad's built-in ssh to just use bc on a linux box that you remote to?


Is this a parody of the Dropbox comment or is this sincere? I don’t think iPads have built in ssh… and even if they do, this is a far cry from an app. It assumes you have a Linux machine on your local network and are willing and able to set up ssh to connect to it as well as learn command line tooling for making calculations.


ooooh boy, gotta mentally prepare myself for this one

<press enter>

damn these AIs are good!

<begins shopping for new username>


"The user will start a comment with 'I'm a social libertarian but...' only to be immediately downvoted by both libertarians and socialists. The irony will not be lost on them, just everyone else."

I can't say I'm not impressed. That's very funny


I don't think it's too problematic; it's hard to say something is "reasoning" without saying what that something is. For another example of terms that adjust their meaning to context, take the word "cache": in "processor cache" we know what that is because it's in the context of a processor, and then there's "cache me outside", which comes from some tv episode.


It's a tough line to tread.

Arguably, a lot of unending discourse about the "abilities" of these models stems from using ill-defined terms like reasoning and intelligence to describe these systems.

On the one hand, I see the point that we really struggle to define intelligence, consciousness etc for humans, so it's hard to categorically claim that these models aren't thinking, reasoning or have some sort of intelligence.

On the other, it's also transparent that a lot of the words are chosen somewhat deliberately to anthropomorphize the capabilities of these systems for pure marketing purposes. So the claimant needs to demonstrate something beyond rebutting with "Well the term is ill-defined, so my claims are valid."

And I'd even argue the marketers have won overall: by refocusing the conversation on intelligence and reasoning, the more important conversation about the factually verifiable capabilities of the system gets lost in a cycle of circular debate over semantics.


sure, but maybe the terms intelligence and reasoning aren't that bad when describing what human behavior we want these systems to replace or simulate. I'd also argue that while we struggle to define what these terms actually mean, we struggle less to remember what they represent when using them.

I'd even argue that it's appropriate to use these terms because machine intelligence kinda sorta looks and acts like human intelligence, and machine reasoning models kinda sorta look like how a human brain reasons about things, or infers consequences of assertions, "it follows that", etc.

Like computer viruses, we call them viruses because they kinda sorta behave like a simplistic idea of how biological viruses work.

> currently-accepted industry-wide definition of "reasoning"

The currently-accepted industry-wide definition of reasoning will probably only apply to whatever industry we're describing, i.e., are we talking about human-built machines, or the biological brain activity we kinda sorta model these machines on?

marketing can do what they want; I've got no control over either the behavior of marketers or their effect on their human targets.

