"Neural networks (at least in the configurations that have been dominant over the last three decades) have trouble generalizing beyond the multidimensional space that surrounds their training examples. That limits their ability to reason and plan reliably."
But humans also have this limitation. Every scientific discovery is just new data we can train on and build models from, both mentally and mathematically.
Agreed. It sounds like the author’s bar for intelligence is materializing new knowledge out of nothing, which sounds more like magic than intelligence.