
As a long-time user of Prolog I didn't find the explanation so horrible. Unification is indeed like variable assignment in the constraint-satisfaction sense: bindings are not stateful. As for equality, I believe the author meant == rather than =, or more accurately the 'is' operator in Python and Perl. But maybe regular expression matching would be a better analog for Prolog's theorem proving.
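To make the distinction concrete, here is a small sketch (SWI-Prolog syntax, but any standard Prolog behaves the same) of how unification, term identity and arithmetic evaluation come apart:

  ?- X = 1+1.     % unification: X is bound to the *term* 1+1, not to 2
  X = 1+1.

  ?- X == 1+1.    % term identity: fails, an unbound X is not the term 1+1
  false.

  ?- X is 1+1.    % arithmetic evaluation: the right-hand side is computed
  X = 2.

The last query is the one that behaves most like assignment in Python or Perl, which is presumably why the author reached for that analogy.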

Your statement 2==1+1 is (I'd claim) a weird case in Prolog. Unless you want to code the Peano postulates, you need some way to do numerical calculations, and "2 is 1+1" does indeed work in the body of a clause.
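For instance, a minimal (made-up) predicate using is/2 in its body:

  double(X, Y) :- Y is X + X.

  ?- double(3, N).
  N = 6.

  ?- 2 is 1+1.
  true.

Note that is/2 only evaluates its right-hand side, so the second query works, while "1+1 is 2" would fail.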

Your mathematical explanation of Prolog's theorem proving is precise and correct but almost impenetrable to the average software engineer trying to "get" Prolog. Such explanations only make sense once someone already mostly understands what's going on, and hence aren't useful for beginners. As a first approximation I thought the OP's work wasn't bad.



The simplest way to test equality between two arithmetic expressions in Prolog in a syntax that anyone familiar with imperative programming languages will be comfortable with is to use =:=/2, like this:

  ?- 2 =:= 1+1.
  true.
Prolog has a few different predicates to test (dis-) equality, (non-) equivalence, (non-) unification, subsumption and so forth, and most of them (but not all) use the "=" symbol in some combination, so it's easy for a novice (or an advanced user) to get twisted into knots trying to figure out what exactly each should be used for. My strong intuition is that stating that unification is "like testing equality" will only help to confuse matters further.
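A quick, non-exhaustive tour of the look-alikes (all standard predicates; results as printed by SWI-Prolog):

  ?- 2 = 1+1.      % unification: 2 does not unify with the compound term 1+1
  false.

  ?- 2 == 1+1.     % term identity: also false
  false.

  ?- 2 =:= 1+1.    % arithmetic equality: both sides are evaluated first
  true.

  ?- 2 =\= 1+1.    % arithmetic inequality
  false.

  ?- 1+1 = 1+1.    % two identical terms do unify
  true.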

The same goes for "simple" explanations that completely miss the point. Logic programming and Prolog are not simple subjects. That's not by design: theorem proving is bloody hard by nature, and making a first-order logic language run on a digital computer is harder still. First-order logic is undecidable in the general case, and we must constantly jump through hoops to stick to the special cases where a theorem can be proved automatically and the proof procedure is sound and complete, or at least sound. Without an understanding of this fundamental difficulty in automated theorem proving, Prolog does not make sense in the long run. For example: why those weird Horn clauses? Why not arbitrary clauses? Why depth-first search? Why unification? And so on.
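To see why the search strategy matters, consider this sketch of a (contrived) program where Prolog's depth-first search loops even though a proof exists:

  p :- p.    % tried first, clauses are selected top to bottom...
  p.         % ...even though this fact would prove the goal immediately

  ?- p.      % does not terminate

A breadth-first or otherwise fair strategy would find the proof from the second clause; depth-first search trades that completeness for efficiency.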

It is certainly possible to learn Prolog incrementally, without first having to digest, I don't know, the proof of the Subsumption Theorem. But every small step of the way must be taken on firm ground, else the student loses her footing and falls down a bottomless pit of despair. I know this from first-hand experience, and there is even some scholarship on the trouble that university students get into with Prolog because they form the wrong mental models for it. Let me see if I can locate it online.

Yep, here it is:

What's wrong? Understanding beginners' problems with Prolog

https://link.springer.com/article/10.1007/BF00116441

I'm quoting a bit of the abstract:

  The problems that underlie these and other errors seem to be (a) the
  complexity of the Prolog primitives (unification and backtracking) and (b)
  the misfit between students' naive solutions to a problem and the constructs
  that are available in Prolog (e.g. iterative solutions do not map easily to
  recursive programs). This suggests that learning Prolog could be helped by
  (1) coherent and detailed instruction about how Prolog works, (2) emphasis
  on finding recursive solutions that do not rely on primitives such as
  assignment and (3) instruction in programming techniques that allow students
  to implement procedural solutions.



