No it's not. Might as well say that SQL is a brute force search against your database. If you don't add indexes, you end up with slow queries.
The art of efficiency in Prolog is understanding your database of facts and your search space, and knowing how to trim it. If you need 3 items, search for 3 items only; don't search for all possible items and then try to pick 3.
If you can figure out when to cut your search, do so.
For deterministic operations that you can memoize, do so.
If your system supports tabling, use it.
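For example, with tabling (this uses SWI-Prolog's :- table directive; other systems use similar declarations, and the edge/path names are just for illustration), a query over a cyclic graph terminates instead of looping:

    :- table path/2.            % memoize answers and detect repeated subgoals

    edge(a, b).
    edge(b, c).
    edge(b, a).                 % cycle back to a

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    % ?- path(a, Where).
    % Where = b ; Where = c ; Where = a.    (answer order may vary)

Without the table declaration, exhaustive enumeration of path(a, Where) falls into the cycle and never terminates.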
Difference lists are like the TCO of Prolog; the efficiency gains can be amazing.
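To illustrate that point, here's the standard trick (dl_append is just an illustrative name): represent a list as an open list paired with its unbound tail, and concatenation becomes a single unification instead of walking the first list the way append/3 does:

    % Xs-Ys is an open list Xs whose tail is the variable Ys.
    dl_append(Xs-Ys, Ys-Zs, Xs-Zs).

    % ?- dl_append([1,2|T1]-T1, [3,4|T2]-T2, DL).
    % DL = [1,2,3,4|T2]-T2.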
The performance gap between an average Prolog programmer and a pro is usually much bigger than what you will find in other languages.
Uncalled for. Yes, you can implement brute-force search very conveniently in Prolog, but there's obviously much more to it. You can dismiss ML in a similar way: it's fine as long as you know that it's just glorified curve fitting.
Prolog hits a sweet spot where you can start with built-in resolution and then use Prolog as implementation language for specific searches.
Prolog should also be very popular with developers who like pattern matching and destructuring; its pattern matching by unification is, er, unmatched.
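A small illustration of that: unification both builds and takes apart terms in one step, with no separate constructor or accessor syntax, and it works in either direction:

    % ?- point(X, Y) = point(3, 4).
    % X = 3, Y = 4.

    % ?- [Head|Tail] = [a, b, c].
    % Head = a, Tail = [b, c].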
Prolog is also excellent for DSLs, either using its built-in facility for defining new operators (unary or binary operators/keywords), or standard DCG recursive-descent parsers, or more complex grammars. Prolog was designed for NLP, used for prototyping Erlang, and a major inspiration for Haskell, after all.
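As a tiny taste of the DCG side (this grammar is made up for illustration):

    greeting --> [hello], name.
    name     --> [world].
    name     --> [prolog].

    % ?- phrase(greeting, [hello, prolog]).
    % true.

    % DCGs also run "in reverse", enumerating the sentences of the grammar:
    % ?- phrase(greeting, Words).
    % Words = [hello, world] ;
    % Words = [hello, prolog].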
Resolution isn't really brute force search, in that it doesn't search all possible solutions at random; the search is directed by the programmer's rules.
But yeah, it involves quite a lot of backtracking, which happens when the resolution engine needs to go back and try something different. When this is a problem, you can sometimes use the cut rule: http://www.cse.unsw.edu.au/~billw/dictionaries/prolog/cut.ht...
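A classic (and slightly dangerous) sketch of the cut:

    max(X, Y, X) :- X >= Y, !.   % commit: don't try the second clause
    max(_, Y, Y).

    % ?- max(3, 5, M).
    % M = 5.

    % Caveat: the cut makes the second clause unsafe when the third argument
    % is already bound, e.g. max(5, 3, 3) wrongly succeeds, which is why
    % cuts need care.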
Can't edit or delete at this point, so I'll note that I see this is against the "no snark" rule, and what I wanted to do was link to the c2.com wiki page "JustIsaDangerousWord" - a page that's now offline - about how a sentence gets better if you remove the word "just".
"Prolog is just a brute force search" -> "Prolog is a brute force search"
All libraries are "just" some code which does something so you don't have to. Even if Prolog is "just a brute force search", well in the same kind of way, regex is "just" a state machine brute-force matching against a string. It's still a useful tool.
Rules engines are about forward chaining (fan-out of all implications), whereas Prolog is backward chaining (querying an implication).
CHR (Constraint Handling Rules) is a forward-chaining system built into most Prologs that has Prolog-like syntax, and so is a replacement for business rules engines.
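For anyone curious what that looks like, here's the textbook CHR example (library name as in SWI-Prolog and SICStus, which ship CHR as a library):

    :- use_module(library(chr)).

    :- chr_constraint salt/0, water/0, salt_water/0.

    % A forward-chaining rule: whenever salt and water are both in the
    % constraint store, replace them with salt_water.
    salt, water <=> salt_water.

    % ?- salt, water.
    % salt_water.        (the remaining constraint store is printed)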
So the choice of what to use is based on that: if you want forward chaining, use a business rules engine or CHR in Prolog; if you want backward chaining (similar to SQL), use Prolog itself.
Prolog is more elegant than the business rules engines, and it also has extra features (constraint programming over finite domains, for example) that business rules engines don't have.
So in short, Prolog is more comprehensive and more elegant but requires more expertise; business rules engines are for "untrained" people to get something working relatively quickly.
Prolog is a standardized programming language for doing backward chaining, with multiple mature implementations; e.g. you can ask for solutions to discrete graph search and other predicate logic problems as described in The Art of Prolog (TAOP). Prolog is also used for implementing more advanced/complete logics than just resolution with negation-as-failure, such as various description logics.

"Business rule management systems" such as Drools do multiple things, but are understood to perform mostly forward chaining based on RETE, where you assert some facts and rules and the BRMS evaluates all logical consequences (unlike Prolog, where you ask for a particular goal). BRMSs aren't standardized; the closest to a standard would be IF ... THEN ... rules using OPS5-heritage syntax (OPS5 was an "expert system" language from the original AI spring, which was followed by the infamous "AI winter" around 1988). But the fact of the matter is that Drools and other BRMSs don't use interchangeable syntax and engines.

Drools in particular is a Java-/Maven-/XML-heavy solution sold to enterprises so that "business can define accounting/rating or other business rules" themselves. If that sounds too good to be true, you're right. The Drools-based apps I've encountered are usually written and maintained by developers rather than mythical "business users", and apart from being in need of maintenance because no one wants to develop using Drools, they also lack basic testing and other engineering discipline, so they become a PITA in any modern test-driven environment.
If you are actually interested in learning and understanding, rather than just blindly following Stack Overflow procedures to solve instances of templated problems, then I would highly recommend a simpler language like Prolog and a theory-oriented text like The Art of Prolog.
I learned the basics of Prolog, really liked the idea, but found it rather impractical for complex problems. IMO, there needs to be a stronger separation between rule descriptions and search algorithm description/manipulation. I want to be able to specify an infinitely recursive rule, but avoid falling down the rabbit hole while running queries. Cuts could also be moved into this "search algorithm description".
One approach to taking existing Prolog code and executing it with different semantics (like a different search algorithm, but also tracing, debugging, etc.) is through metainterpreters. These are Prolog programs that interpret Prolog programs; you can write one in a few dozen lines. You can find several examples at https://www.metalevel.at/acomip/, including one in the "Changing the search strategy" section that does what you want.
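For reference, the "vanilla" metainterpreter is only three clauses; it mirrors Prolog's own resolution for pure programs whose clauses clause/2 can inspect, and it's the starting point for the variants with different search strategies described on that page:

    solve(true).
    solve((A, B)) :- solve(A), solve(B).
    solve(Goal)   :- clause(Goal, Body), solve(Body).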
The drawback of metainterpretation is that it is, well, an interpreter layered on top of (on many systems) another interpreter. If you need your Prolog code to be really really "fast", then this approach may be "too slow". (Quotes because people love complaining about the speed of computation even in many situations where it really does not matter.)
A different approach is to use code generation. Prolog programs are collections of Prolog terms; once you have written the abstract computation you want, you can write a program that takes that code, mangles it into a different form, then compiles that new form just as if you had written it by hand.
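A hypothetical sketch of that idea (all the names here are made up, it handles only conjunctions, and it assumes the source predicate is visible to clause/2): read each clause of a predicate, wrap every body goal with a trace message, and assert the rewritten clauses under a new name.

    % Wrap each goal of a clause body in a printing conjunction.
    wrap_body(true, true) :- !.
    wrap_body((A, B), (WA, WB)) :- !, wrap_body(A, WA), wrap_body(B, WB).
    wrap_body(G, (format('calling ~w~n', [G]), G)).

    % Generate foo_traced/N clauses from the clauses of foo/N.
    generate_traced(Head) :-
        Head =.. [Name|Args],
        atom_concat(Name, '_traced', TracedName),
        forall(clause(Head, Body),
               ( wrap_body(Body, Wrapped),
                 TracedHead =.. [TracedName|Args],
                 assertz((TracedHead :- Wrapped)) )).

Calling generate_traced(ancestor(_, _)) on a clause-accessible ancestor/2 would then assert an ancestor_traced/2 that prints each goal before running it.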
So in summary, the separation you ask for is not there, but you have much more powerful tools than most other languages to implement it yourself quite easily.
You learned the basics.
You need to really understand the language to gain from it.
Learning only the basics is like writing non-Pythonic code or trying to write Lisp as if it were C. You are going to end up with major frustrations.
It's like driving a truck and then trying a sports car as cautiously as you'd drive an overloaded truck; you won't enjoy it!
Lispers love to talk about "code is data". In Prolog, "data is code", and until you grok that, it's really difficult to experience the Prolog enlightenment.
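Concretely: a term you built or read in as data is directly callable as a goal, roughly like this:

    % ?- Goal = member(X, [a, b, c]), call(Goal).
    % Goal = member(a, [a, b, c]),
    % X = a .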
Functional programmers work with infinitely recursive structures and data all the time and it doesn't really bother them, so this isn't automatically an absurd thing to say.
Just because structures, rules and trees are infinitely recursive doesn't mean no insight can be created from them in finite time.
Other people are explaining why you're wrong, but I'd like to point out that even if you were right, your comment would still be nearly worthless.
Merely pointing out that someone else isn't correct might even have negative net value. For one thing, it can be construed as simple disagreement, opening up a debate that shouldn't happen when there really is a definitive answer. For another, it's insulting (especially as you did it) and that's not a necessary part of correcting someone's understanding.
Invest a few more minutes next time to actually make a substantive comment. You'll be helping the person you correct, you'll be helping whoever else reads the thread, and you'll ultimately be helping yourself – by opening up your views to an actual constructive process.
Yes, your production database is a text file really.
You can encode your facts in prolog style syntax and save it in a file and that's your database. You load it, and then ask your questions at the REPL.
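For example, a file of facts and rules like this (file and predicate names are just for illustration) is the whole "database":

    % family.pl
    parent(tom, bob).
    parent(bob, ann).

    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    % Then at the REPL:
    % ?- [family].
    % ?- grandparent(tom, Who).
    % Who = ann.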
If you want data that multiple people can change, you can actually connect Prolog to an SQL database, then apply Prolog rules to the query results and ask questions about your data.
Also, I found that AllegroGraph is a production graph database that supports Prolog queries... I don't think you load it with Prolog syntax, though; probably RDF triples instead.
I had heard about Prolog for years, but it was not taught at the university I went to. But, as others have mentioned, I have always wondered about it, hearing whispers here and there. The recent resurgence of interest came from working with Erlang: Erlang's syntax is from Prolog (because it was prototyped in Prolog first). I am reading the basic intro text: https://www.amazon.ca/Programming-Prolog-Using-ISO-Standard/.... which is nice. All I can say is, it's very strange and interesting.
MIT Press published a book called "The Warren Abstract Machine: A Tutorial Reconstruction" by Hassan Ait-Kaci.
This short -- 114 pages -- book about the Prolog VM has been made available for free download by the author (it's out-of-print). [0]
A good description of what it's about is at MIT Press' page for the author [1].
(While out-of-print, apparently a paperback version can be had for $21 on Amazon [2]).
Chapter 4 of "The Art of Prolog" ("The Computation Model of Logic Programs") is about that very topic. It doesn't go into gory details on how to optimize an interpreter for fast execution, but it does go into enough detail that I was able to create a Prologish interpreter in ~30 lines of Haskell many years ago. Apparently "The Art of Prolog" is a free download now (there is a PDF link on the left side of the page).
I'd venture to say... Prolog and Shen. Specifically, I am implementing a modern Prolog in Rust that will encompass the results of some more recent (i.e. current-century) research within the logic programming/Prolog community. It's here:
New language constructs. The 'WAM' in the name refers to the Warren Abstract Machine, one of the more efficient ways to implement Prolog in a procedural language.
Also, using the SICStus interface and semantics for attributed variables should make for some very general constraint solvers, yes.
The Mercury compiler is written in Mercury, which is syntactically derived from Prolog. The original compiler was bootstrapped by stripping the non-Prolog parts from the code and executing it on NU-Prolog. The compiler first successfully compiled itself in 1995, after which NU-Prolog was no longer needed.
And just to tie this back to the original topic, Leon Sterling (first author of The Art of Prolog) was the head of the Department of Computer Science at the University of Melbourne when the development of Mercury began there.
The original Art of Programming I read when I was ~16, and I found it to be extremely influential in how I thought of programs and data and their interrelation.
Might be useful for the folks who want to learn how the language ticks but don't have the time to go through a whole book.
That being said, the book is pretty great! I recommend it.