Ok, I'll be the nudge and say I don't understand the point. I never "got" this type of example from the lispers using destructuring and recursion on simple examples either.
This (and others) has a fairly obvious and equally readable counterpart in C++ (or any other imperative language, really), but there's a reason why one doesn't implement it this way in those languages -- and a reason why (some of) the examples given have a certain bias towards iteration.
Wouldn't the same reason hold in Haskell? Or do we care about runtime efficiency at all?
It compiles to neither an iterative NOR a recursive version. It compiles to a LAZY version. This code doesn't loop at all: it forces the argument just deep enough to go three nodes into the list spine, then picks the branch based on the number of items in the list, forces the first element of the list, and sets up a thunk for appending everything on later.
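The actual code under discussion isn't quoted here, but a minimal sketch of the "thunk for appending everything on later" behaviour is how (++) works: it forces only the spine of its first argument, one cons cell at a time, and never touches the second argument until it's demanded.

```haskell
-- A hedged sketch: `append` is just the Prelude's (++), spelled out.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys   -- tail stays a thunk until demanded

-- Demanding only the front never forces the appended-on part,
-- so `undefined` here is harmless:
front :: [Int]
front = take 3 (append [1,2,3] undefined)
```

Under strict semantics the `undefined` would blow up immediately; lazily, `front` evaluates fine.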
This is a bit of a simplification, but I don't need to go into great detail to make my point.
Okay, but in doing so, it's not pushing an extra stack frame with each step, but rather just jmp-ing its way along, correct? That's what most people think of as iterative (or tail-recursive), rather than simply recursive, behaviour, and it can apply equally to a lazy algorithm.
No, you can't simply apply that analysis to lazy algorithms. foldr is not a tail-recursive function; in a strict language it would push O(n) frames on the stack. In a lazy language, however, foldr may not push O(n) frames at all.
The code you are looking at here is NOT tail-recursive. Haskell's implementation of ++, if run in a strict language, would always push O(n) frames on the stack.
But it IS true that this code is O(1) in stack consumption with Haskell's execution model.
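A small sketch of why foldr can run in O(1) stack under lazy evaluation: the combining function receives the "rest of the fold" as an unevaluated thunk, so it can return a result without ever touching it. (The function name here is made up for illustration.)

```haskell
-- foldr f z (x:xs) = f x (foldr f z xs); the recursive call is a thunk.
-- If f doesn't demand its second argument, the fold stops right there.
firstEven :: [Int] -> Maybe Int
firstEven = foldr (\x rest -> if even x then Just x else rest) Nothing

-- Works even when the tail of the list is undefined:
demo :: Maybe Int
demo = firstEven (1 : 3 : 4 : undefined)
```

A strict foldr would have to reach the end of the list (and crash on the `undefined`) before any call could return.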
Yes, it can apply. Developing an intuition for when Haskell produces strict code, and for when the compiler is not smart enough to remove the laziness, helps tremendously when writing bigger programs. I am still in the process of acquiring said intuition.
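The classic case of that intuition is foldl versus foldl': foldl defers every step as a chain of thunks, while foldl' forces the accumulator each time. With optimization GHC's strictness analysis often rescues foldl here, but relying on that is exactly the judgment call being described.

```haskell
import Data.List (foldl')

-- foldl' forces the running sum at every step, so the accumulator
-- is always a plain Int rather than a nested chain of (+) thunks.
total :: Int
total = foldl' (+) 0 [1..100000]
```

With plain foldl and no optimization, the same expression builds 100000 suspended additions before collapsing them.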
I think the point is, recursive solutions like the above are often easier to read.
In fact the example above looks tail-recursive to me (at least if you add in an accumulator parameter). That means a good compiler will make it efficient.
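The original example isn't reproduced in this thread, but the accumulator transformation mentioned here looks like this in general (function names are made up for illustration):

```haskell
-- Naive recursion: each call leaves a pending (+) behind it,
-- so this needs O(n) stack.
sumNaive :: [Int] -> Int
sumNaive []     = 0
sumNaive (x:xs) = x + sumNaive xs

-- Accumulator version: the recursive call is in tail position, and
-- `seq` keeps the accumulator evaluated so no thunk chain builds up.
sumTail :: Int -> [Int] -> Int
sumTail acc []     = acc
sumTail acc (x:xs) = let acc' = acc + x
                     in acc' `seq` sumTail acc' xs
```

GHC compiles the second form to a tight loop; note that in a lazy language the accumulator must also be forced, or you just trade stack for a thunk chain.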