But these AI researchers don't even understand these figures except as advertising reference points. The Socratic dialogue in the "sparks of AGI" paper https://arxiv.org/abs/2303.12712 has nothing whatsoever to do with Socrates or the way he argued.
Fourteen authors and not a single one seemed to realize there's any possible difference between a Socratic dialogue and a standard hack conversation where one person is named "Socrates."
> Prompt: Can you compare the two outputs above as if you were a teacher? [to GPT-4, the "two outputs" being GPT-4's and ChatGPT's attempts at a Socratic dialogue]
Okay, that's kinda funny lol.
It's a bit worrying how much the AI industry seems to be focusing on the superficial appearance of success (grandiose marketing claims, AI art that looks fine at first glance, AI mimicking people's appearances and speech patterns, etc.). I'm just your random layperson in the comment section, but it really seems like the field needed to be stuck in academia for a decade or two more. It hadn't quite finished baking yet.
As far as I can see, there are pretty much zero incentives in the AI research arena for being careful or intellectually rigorous, or for being at all cautious in proclaiming success (or imminent success); industry incentives have thoroughly invaded elite academia (Stanford, Berkeley, MIT, etc.) as well. And culturally speaking, the top researchers seem to uniformly overestimate their own intelligence and perceptiveness by orders of magnitude. Looking in from the outside, it's a very curious field.
> there are pretty much zero incentives in ____ for being careful or intellectually rigorous
I would venture that most industries built on top of other research fields are the same. Oil & Gas, Pharma, manufacturing, WW2, going to the moon... the world is full of examples where people put progress or profits above safety.
> I would venture most industries, with foundations on other research fields, are likely the same.
"Industries" is a key word though. Academic research, though hardly without its own major problems, doesn't have the same set of corrupting incentives. Although the lines are blurred, one kind of research shouldn't be confused with another. I do think it's exactly right to think of AI researchers the same way we think of R&D people in oil & gas, not the same way we think of algebraic topologists.
Andrej Karpathy (the one behind the OP project) has been in both academia and industry. He's far more than a researcher; he also teaches and builds products.
"Richard Feynman and Socrates were primarily known for their contributions to science and philosophy, respectively. Feynman was a renowned theoretical physicist, and Socrates was a foundational philosopher.
Bill Gates, on the other hand, is primarily known as a businessman and co-founder of Microsoft, a leading software corporation. While he also has made contributions to technology and philanthropy, his primary domain is different from the scientific and philosophical realms of Feynman and Socrates."
Thank you for this AI slop. It's the right answer, but the reasoning is incoherent. It could just as reasonably have said:
"The one that doesn't fit in is Socrates.
Richard Feynman and Bill Gates are primarily known for their contributions to science and philanthropy, respectively. Feynman was a renowned theoretical physicist, and Gates is a world-famous philanthropist.
Socrates, on the other hand, is primarily known for foundational contributions to philosophy. His primary domain is thus distinct from the scientific and philanthropic realms of Feynman and Gates."
One of these doesn't quite belong ;)