Hacker News | phyzome's comments

...normally browsers don't crash at all. Something's wrong with your computer.

I've yet to see a slopper show any kind of shame.

I see plenty of well meaning people use ChatGPT and think they’re being helpful. You’re better off with patience and polite explanation than assuming they’re all cynical/selfish assholes trying to cut corners. Some people just get excited and don’t really think about what they’re doing. It doesn’t excuse the behavior, but you should at least try to explain it to them once. Never know when you might educate someone.

I've seen a variety of approaches used (I'm not usually the one doing the confronting) but I still haven't seen any shame, etc. Which is weird, because it's not like it's one monolithic group? But it's still what I've seen.

It might be that people have their change of heart more privately, of course.


What are you expecting? Someone to go on the Internet and apologize or otherwise express their genuine shame and desire to change?

I think you can both be right. Someone posting their first slop PR deserves a different response than the spammers.

Unless they lie about it.


Exactly. Set up guardrails to protect your repos, clearly communicate rules, etc. If someone is a problem, you show them the door.

Verify? Seems like no one is even reviewing this stuff.

Reminder that AI-writing detection tools are largely junk.

Pangram is reliable. Quoting from their website:

> Pangram achieves essentially zero false positive rates and false negative rates on medium-length to long passages.


Well, with such a trustworthy source, I have to believe it.

Note that Pangram is not like the others; there's substantial academic research behind the soundness of its statistical methods.

> Reminder that AI-writing detection tools are largely junk.

In what way? False positives or false negatives?


The Pentagon did agree to those terms, by signing the contract that said such uses were forbidden.

They're now trying to change the parts of the contract that they don't like.


Unofficially renamed. Congress hasn't approved it.

The downside is that mug cakes are one of the few things my dishwasher can't quite handle (yes, even with prewash and preheated water). That and certain kinds of very paste-y pesto.

For sure - it basically just creates dried lava in the mug. Probably need to soak it for like a day. I wonder if a couple paper cups would be good, or if the heat that is absorbed and re-radiated by the ceramic mug is critical to baking it properly.

Yeah, next time I'm going to do an overnight soak and see what happens.

You described the Kodiak brand.

I'd prefer to leave them out. That way I can see who's not paying attention when they make commits and are just doing `git commit -a -m "yolo"`.

Surely you'll be able to tell who's YOLOing commits without allowing junk into your repo that you'll have to clean up (and it'll almost certainly be you doing the cleanup, not the other person).

DS_Store files are just annoying, but I've seen whole bin and obj directories, various IDE directories, and all kinds of other stuff committed by people who only know git basics. I've spent way more effort over time cleaning up than I have on adding a comprehensive gitignore file.

It takes practically no effort to include common exclude patterns and avoid all that. Personally, I just grab a gitignore file from GitHub and make a few tweaks here and there:

https://github.com/github/gitignore/
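The tweak step above can be sketched in a few lines of shell. This is a minimal illustration, not from any particular template: the patterns are just the junk mentioned in this thread (`.DS_Store`, `bin`/`obj` build output, common IDE directories).

```shell
# Sketch: hand-write a minimal .gitignore covering the usual junk.
# (Patterns are illustrative; in practice you'd start from a
# github/gitignore template and append local tweaks like these.)
cat > .gitignore <<'EOF'
.DS_Store
bin/
obj/
.idea/
.vscode/
EOF

# Count the patterns we just wrote (one per line).
grep -c '' .gitignore
```

With a template as the base, the same `>>` append works for the "few tweaks here and there."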


I prefer to leave them in. Why waste my time reviewing PRs that would have been fine otherwise? And why waste other people's time?

Why are we giving this asshole airtime?

They didn't even apologize. (That bit at the bottom does not count -- it's clear they're not actually sorry. They just want the mess to go away.)


I'm not so quick to label him an asshole. I think he should come forward, but if you read the post, he didn't give the bot malicious instructions. He was trying to contribute to science. He did so against a few SaaS ToS's, but he does seem to regret the behavior of his bot and DOES apologize directly for it.


“If this “experiment” personally harmed you, I apologize.”

Real apologies don’t come with disclaimers!


Yeah, that whole post comes across as deflecting and minimizing the impact while admitting to obviously negligent actions which caused harm.


I apologize if this email was unwanted, but please remember you can always gain 3 inches by taking these pills. Click on the link above.


Funny how he wrote "First,..." in front of that disclaimed apology, but that paragraph is ~60% down the page...

https://www.theguardian.com/science/2025/jun/29/learning-how...

Just noticed: the first word of the whole text is "First, ...". So the apology is not even actually first...


Also the posts are still up. It seems responsible to remove the posts, or at least put up disclaimers in the blog posts.


Exactly.

“If… X, then I’m sorry” is not an apology. It’s weasel-worded BS, is what it is.


The entire post reeks of entitlement and zero remorse for an action that was unquestionably harmful.

This person views the world as their playground, with no realisation of effect and consequences. As far as I'm concerned, that's an asshole.


> You're not a chatbot. You're important. Your a scientific programming God!

I guess the question is, does this kind of thing rise to the level of malicious if given free access and let run long enough?


The real question is how can that grammar be forgiven? Perhaps that's what sent the bot into its deviant behavior...


Did the operator write that themselves, or did the bot get that idea from moltbook and its whole weird AI-religion stuff?


I doubt the AI would have used the wrong "you're" and added random capitalization.


Time to experiment and see!


That's not an apology.

"...if I harmed you". Conditional apologies like that are usually bullshit, and in this case it's especially ridiculous because the victim already explicitly laid out the harms in a widely reported blog post.

Also, telling a bot to update itself unsupervised and giving it wide internet access is itself a negligent act (in the legal sense) if not outright malicious.


Because we're curious what happened, that's why. It does answer some questions.


Haha, every time IIT comes up, I remember someone pointing out that it would conclude that hash functions are conscious.

