I'm on your side in this argument (approximately; asking what ethics even is and where it comes from can be productive, but it shouldn't conclude "and therefore AI agents working with humans don't need to integrate a human moral sense" -- at the very least, that would be a really bad conclusion for humanity as AI scales up).
Can't recommend letting an LLM write for you directly, though. I found myself skipping your third paragraph in the reply above.
Yeah, but nobody is going to read it if they already waded through five paragraphs of insubstantial LLM slop from you before. You betrayed the trust of everyone reading that post: you wasted their time and energy, and frankly made us feel a little dirty for reading in good faith something you put zero effort into generating and that took us real effort to read. Fool me once, shame on you; fool me twice, shame on me, and all that.
This is exactly, genuinely, 100% what I was talking about when I said you were being disrespectful of good discussion culture. You're turning it from high-trust into low-trust, and soon nobody will be reading any comment longer than two sentences by default.