Hacker News

This article feels like it came from some alternate universe where the history of AI is exactly the opposite of where it is in ours, and specifically where “The Bitter Lesson” [0] is not true. In our world, AI was stuck in a rut for decades because people kept trying to do exactly what this article suggests: incorporate modeling and how people think consciousness works. And then it broke out of that rut because everyone went fuck it and just threw huge data at the problem and told the machines to just pick the likeliest next token based on their training data.

All in all this reads like someone who is deeply stuck in their philosophy department and hasn’t seen anything that has happened in AI in the last fifteen years. The symbolic AI camp lost as badly as the Axis powers and this guy is like one of those Japanese holdouts who didn’t get the memo.

[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html



The idea that symbolic AI lost is uninformed. Symbolic AI essentially boils down to different kinds of modeling and constraint solving systems, which are very much in use today: linear programming, SMT solvers, datalog, etc.

Here is where symbolic AI lost: anything where you do not have a formal criterion of correctness (or goal) cannot be handled well by symbolic AI. For example, perception problems like vision, audio, robot locomotion, or natural language. It is very hard to encode such problems in a formal language, which in turn means symbolic AI is bad at these kinds of problems. In contrast, deep learning has won because it is good at exactly this set of things. Throw a symbolic problem at a deep neural network and it fails in unexpected ways (yes, I have read about neural networks that solve SAT problems, and no, a percentage accuracy is not good enough in domains where correctness is paramount).
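To make the contrast concrete: a toy sketch of what "formal criterion of correctness" means for SAT. This is not any particular solver's algorithm, just a hypothetical brute-force checker. Either it returns an assignment that provably satisfies every clause, or it returns None; there is no notion of being 97% right, which is exactly the guarantee a neural approximation cannot give.

```python
from itertools import product

def solve_sat(clauses, n_vars):
    """Brute-force SAT solver. Clauses are lists of literals,
    where +i means variable i and -i means NOT variable i.
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if any of its literals is true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # certifiably correct: checkable in linear time
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
model = solve_sat(clauses, 3)
```

Real solvers (DPLL, CDCL, or SMT engines like Z3) are vastly faster than this exponential loop, but the point stands: the output comes with a correctness certificate that can be verified independently.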

The saying goes, anything that becomes common enough is not considered AI anymore. Symbolic AI went through that phase, and we use symbolic AI systems today without realizing we are using old-school AI. Deep learning is the current hype because it solves a class of problems that we couldn't solve before (not all problems). Once deep learning is common, we will stop considering it AI and move on to the next set of problems that require novel insights.


Symbolic AI didn't lose, it just stopped being called AI. We call it "decision trees" and stuff now.


Today's symbolic software is just software that was written by humans. Software has existed as long as computers have; AI was never just another term for software. I don't think any human-written software today captures what proponents of symbolic AI wanted to achieve 50 to 60 years ago. Well, okay, it beat Kasparov at chess in 1996, but chess algorithms were old news even in 1970. I don't think Deep Blue used anything fundamentally new. It was not an AI breakthrough; it was a feat that showed how fast computers are.

The fact is, "AI" was always about much higher ambitions, about solving truly fuzzy tasks. Recognizing handwritten digits is exactly such a problem, and it has been solved, even if you don't want to call it "AI" anymore because it has stopped being impressive.


It's from 2018. Time was not kind to Pearl's picture of AI.




