
I think C.S. Peirce's distinction between corollarial reasoning and theorematic reasoning[1][2] is helpful here. In short, the former is the grindy, rule-following sort of reasoning, and the latter is the kind associated with new insights that are not determined by the premises alone.
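The corollarial side is the part machines handle easily: mechanically applying rules until nothing new follows. A toy sketch of that idea as naive forward chaining (the rules and fact names below are purely hypothetical illustrations, not anything from Peirce):

```python
# Toy forward-chaining engine: "corollarial" reasoning as mechanical
# rule application. Each rule is (set of premises, conclusion).

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts emerge.
    Every derived fact is fully determined by the premises."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical example rules
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]
print(forward_chain({"socrates_is_human"}, rules))
```

Theorematic reasoning, by contrast, is precisely what a loop like this cannot do: it requires introducing something (an auxiliary construction, a new concept) that is not already in the premises or the rules.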

As an aside, students of Peirce over the years have quite the pedigree in data science too, including the genius Edgar F. Codd, who invented the relational model largely inspired by Peirce's approach to relations.

Anyhow, computers have been quite good at corollarial reasoning for some time, even before LLMs. On the other hand, they struggle with theorematic reasoning. Last I knew, the absolute state of the art performs about as well as a smart high school student. And even there, the tests are synthetic, so how theorematic they truly are is questionable. I wouldn't rule out the possibility of some automaton proposing a better explanation for gravitational anomalies than dark matter, for example, but so far as I know nothing like that has been done yet.

There's also the interesting question of whether an LLM that produces a sequence of tokens inducing a genuine insight in the human reader can be said to have had that insight itself.

[1] https://www.cspeirce.com/menu/library/bycsp/l75/ver1/l75v1-0...

[2] https://groups.google.com/g/cybcom/c/Es8Bh0U2Vcg


