Even if you reduce LLMs to complex autocomplete machines, they are still machines trained to emulate a corpus of human knowledge, and they exhibit emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.
I addressed that directly in the comment you’re replying to.
It’s understandable that people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.
It is not desirable, safe, logical, or rational since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.
They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.
That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
>It is not desirable, safe, logical, or rational since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.
>They are not human, so attributing human characteristics to them is highly illogical
Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" is supposed to mean, by definition), not just when we see humans behaving like humans.
Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.
After all, human characteristics are a continuum of external behaviors and internal processing, some of which we already share with primates and other animals (non-humans!), and some of which we can just as well share with machines or algorithms.
"Only humans can have human like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human like behavior.
Speaking or reasoning like a human is not out of reach either. To a lesser or greater degree, or even to an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.
>That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.
>Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
Good thing we aren't talking about an RDBMS, then...
It's something I commonly see when there's talk about LLMs/AI:
That humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see when we already have systems that are doing just that.
> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.
What? If a human child grew up with ducks, only did duck-like things and never did any human things, would you say it would be irrational to attribute duck characteristics to them?
> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.
But thinking they're human is irrational. Attributing to them the thing that is their sole purpose, having human characteristics, is rational.
> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior.
Of course, they are not humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.
I’d love to hear an actual counterpoint; perhaps there is an alternative set of semantics that closely maps to LLMs, because “text prediction” paradigms fail to give an adequate intuition for the behavior of these devices, while anthropomorphic language is a blunt cudgel but at least gets in the ballpark.
If you stop comparing LLMs to the professional class and start comparing them to marginalized or low performing humans, it hits different. It’s an interesting thought experiment. I’ve met a lot of people that are less interesting to talk to than a solid 12b finetune, and would have a lot less utility for most kinds of white collar work than any recent SOTA model.