
I'm trying to suggest (perhaps badly) that formal logic models like LS/CS have an innate Achilles heel (from what I've seen): either A) the knowledge base is precise but tiny, and will remain so because it had to be meticulously hand-crafted to comply with a formal semantic model, or B) the KB is representationally sloppy but big and scalable, because it was populated automatically and informally (probabilistically), thereby forever limiting its amenability to precise models of reasoning. To wit, you may have your cake, but the morsel is so tiny you'll starve.
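To make the contrast concrete, here's a toy Python sketch, entirely mine, with hypothetical names and made-up confidence numbers, not drawn from LS/CS or any real KB:

    from dataclasses import dataclass

    # Regime A: tiny, meticulously hand-crafted; every fact fits the schema.
    CURATED = {
        ("Socrates", "is_a", "human"),
        ("human", "subclass_of", "mortal"),
    }

    def curated_entails(subj, cat):
        """Sound but narrow: only answers what the hand-built taxonomy covers."""
        if (subj, "is_a", cat) in CURATED:
            return True
        # one-step transitivity over subclass_of, the only rule we bothered to write
        return any(s == subj and r == "is_a" and (o, "subclass_of", cat) in CURATED
                   for (s, r, o) in CURATED)

    # Regime B: big and scalable because it was populated automatically;
    # broad coverage, but any given triple may simply be wrong.
    @dataclass
    class NoisyTriple:
        subj: str
        rel: str
        obj: str
        conf: float  # produced by an extraction model, never verified

    NOISY = [
        NoisyTriple("Socrates", "is_a", "philosopher", 0.93),
        NoisyTriple("Socrates", "is_a", "football club", 0.41),  # extraction error
        NoisyTriple("philosopher", "subclass_of", "mortal", 0.88),
    ]

    def noisy_entails(subj, cat, threshold=0.8):
        """Broad but unsound: the answer depends on an arbitrary confidence cutoff."""
        facts = [t for t in NOISY if t.conf >= threshold]
        direct = any(t.subj == subj and t.rel == "is_a" and t.obj == cat for t in facts)
        chained = any(t.subj == subj and t.rel == "is_a" and
                      any(u.subj == t.obj and u.rel == "subclass_of" and u.obj == cat
                          for u in facts)
                      for t in facts)
        return direct or chained

    print(curated_entails("Socrates", "mortal"))            # True, and provably so
    print(noisy_entails("Socrates", "mortal"))              # True, but only above the cutoff
    print(noisy_entails("Socrates", "football club", 0.3))  # True: garbage in, garbage out

The point isn't the code; it's that regime A can prove its answer while regime B can only rank guesses, and piling on more triples doesn't change that.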

I think this quality-vs-quantity tradeoff has been central to the inability of most Good Old Fashioned AI (GOFAI) formal symbolic methods to 1) scale up beyond toy academic clean-room problems (from the admittedly little I've seen of commercial LS/CS), and 2) support the flexibility and imprecision needed to deal with a messy, ill-defined, very big world with a seemingly unknowable number of unknowns.

It may be that this venerable yin vs. yang of knowledge modeling will forever bedevil AI. I don't have the sense that any technique of the past 30 years has really made headway in breaking this logjam. Deep nets have just tilted the playing field toward the quantity side, so long as we're willing to ignore fussy details like dependence, FOL, and causality.



That describes the weaknesses of existing academic attempts, not the theory in any way. I could say the same things of machine learning, actually.



