
You know, it doesn't really seem like a mistake for people to anthropomorphize the thinking machines.


Attributing "thinking" is also a mistake. Anthropic have shown that the "thoughts" it produces to explain what it's "thinking" are (like all the rest of its output) just plausible text, unrelated to the actual node activation happening inside the model: https://transformer-circuits.pub/2025/attribution-graphs/bio...

These tools don't have the capacity for introspection, and they are not doing anything that really resembles the thinking done by a human or an animal.


I think we need to break our definition of “thinking” down along lines like:

  - contextualising 
  - packaging / assembling
  - parsing
  - recognizing / labeling
  - comparing / contrasting 
  - analyzing / subsetting
  - checking
  - reasoning 
  - introspecting
In my view, both humans and LLMs perform most of these aspects of thinking in a similar way; I suspect that a large fraction of the human brain performs LLM-like language manipulation.

For sure, today’s LLMs lack the last two on the list, and there is probably a rational debate to be had over whether these can simply emerge in the substrate provided by an LLM-like setting, or whether the brain provides some hardwired additions (a loop around focus selection and perceive-model-decide-act) that will need to be grafted onto LLMs to achieve AGI.
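
Roughly, the kind of loop grafted around an LLM that I have in mind could be sketched like this (a toy illustration only; llm_complete, perceive, select_focus, decide, and act are hypothetical placeholders rather than any real API):

  # Purely illustrative sketch of a perceive-model-decide-act loop around an LLM.
  # All names below are hypothetical placeholders, not a real library or API.
  def llm_complete(prompt: str) -> str:
      # Stand-in for a call to any LLM; returns a canned string here.
      return f"(model output for: {prompt[:40]}...)"

  def perceive(environment: list[str]) -> list[str]:
      # Gather raw observations; here, just the most recent items.
      return environment[-3:]

  def select_focus(observations: list[str]) -> str:
      # Hardwired attention step: pick a single observation to reason about.
      return max(observations, key=len) if observations else ""

  def decide(focus: str, goal: str) -> str:
      # The LLM does the language-manipulation part: propose the next action.
      return llm_complete(f"Goal: {goal}\nObservation: {focus}\nNext action:")

  def act(action: str, environment: list[str]) -> None:
      # Apply the action; here we just record its result as a new observation.
      environment.append(f"result of {action}")

  def agent_loop(goal: str, steps: int = 3) -> list[str]:
      environment = ["initial observation"]
      for _ in range(steps):
          focus = select_focus(perceive(environment))
          action = decide(focus, goal)
          act(action, environment)
      return environment

  print(agent_loop("summarize the thread"))

The point is just that the "model" step is the LLM itself, while focus selection and the loop are scaffolding bolted on around it.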



