
I thought people say learning Haskell as a first language is easy because

1) you don't have to do a lot of unlearning. See, you still don't understand why you want things to mutate. Experienced programmers (in other languages) mutate things in every single line of code they write. They have to unlearn mutating things to learn this.

2) It's closer to mathematics which many people know.

I think the problems you faced are actually problems because you read a book that was meant for programmers. Hell, you'll face these problems even if you read a C book that is meant for programmers.

I still think Haskell (as a language) is a good first language.



> 2) It's closer to mathematics which many people know.

No. They don't. Most people have the "how to" knowledge of basic math in their heads: the imperative parts, like the algorithms for dividing and multiplying numbers, that they run on their wetware computer. The part concerned with reasoning about "what is", the proofs part, the part mathematicians call "math", is not what most people's knowledge of math consists of.

Imperative languages really are closer to how our minds work every day. Yeah, on occasion we do some "deep thinking" about "what is", but most of the time our brain goes about things like "ok, what are the steps to do X? [...] and now I'll always label the result of the last step y [...] and when condition z happens I'll goto step s etc.".

...this is how laws, regulations, institutional processes etc. are formulated. And this is how you see nature work: that place on the tree branch where you saw a bird now holds nothing (... 'null bird pointer'?), or maybe you look around a few hours after the storm and you see that the branch is actually broken (... 'segfault'?).


> Imperative languages really are closer to how our minds work every day

Why do you believe this? I think it is a cognitive bias at play, where you've been trained to think this way until it comes naturally, and now it's hard to see it any other way. Through this lens, you see imperative instructions everywhere.

I randomly opened a wiki page: http://en.wikipedia.org/wiki/Sine

In it, you'll find that sine is not described as a series of imperative steps to compute it, but is defined by what it is. Most descriptions and models you'll find of most things are this way.


Indeed, this is how things are described in a textbook. And how they should be. But, for example, after you've learned how to draw an ellipse using a string, a pen and two pins (like https://www.youtube.com/watch?v=7UD8hOs-vaI) your intuition and unconscious perception of what an ellipse is will be tied to the process of producing it and a visual representation of this process, not to the arid definition of it like "a curve on a plane surrounding two focal points such that the sum of the distances to the two focal points is constant for every point on the curve".

This is why people find really learning mathematics so incredibly hard: because our intuitions are process oriented. And why you can take anyone who is not what I call a "mud-mind" (people that just can't do rigorous logical thought) and teach him/her how to write code in an imperative language until they can get something working, while making the same person understand (not memorize!) a mathematical proof, or envision such a proof, is just 10x harder.

I love functional programming and being able to do "what is" reasoning about programs. But I think this is just inherently hard for the human mind to do, we're not optimized for it. We're optimized for generating sequences of instructions that we send to our muscles to execute (and our muscles and bodies and minds are obviously stateful too, btw), and we're much better at generalizing this "dirty" way of thinking, even for abstract pursuits like software development...


> your intuition and unconscious perception of what an ellipse is will be tied to the process of producing it and a visual representation of this process

What? I don't think this way at all, and I'm surprised anyone does!

> because our intuitions are process oriented

That is very weird for me to hear. I wonder if some sort of survey could clarify if this is actually a common way to think.

> But I think this is just inherently hard for the human mind to do, we're not optimized for it

Again, I think you're extrapolating from your own training/way of thinking, and I doubt this is commonly true.


Interesting. It might be fun to try to look for "process oriented" vs. "existence/properties oriented" people around, and it would sure make for some entertaining lunchtime discussions...

Like the thing with algebra-oriented vs. analysis-oriented people eating corn: http://bentilly.blogspot.ro/2010/08/analysis-vs-algebra-pred... :)

I tend to be process oriented by default, I guess, even with your sine example... The sine function only "clicked" for me after I saw it as the y projection of the radius of the trigonometric circle and imagined that if you attached a pen to this point moving on the y axis, while a paper scrolled at constant speed behind it, you'd get that undulating line. Now, I understand that this way of thinking is quite biased (I tend to get very good intuitions about "how to build things" but I'm not very good at explaining why they work afterwards), and it can make me miss obvious shortcuts, like looking for what properties and relationships are conserved etc., and this is why I make an effort to do more "what is" type of thinking about stuff... but "how to" thinking just seems more natural to me. Maybe it's a "scientists" vs. "engineers" thing, and programming just happens to be a middle ground where both kinds of people are frequent :)


By the way, I relate to sine as the y projection of a unit circle too, far more than as the triangle ratio.

But I relate to it by thinking of the y coordinates of the points between x=-1..1 on a unit circle, without any drawing utensils/pens/etc.

I don't see why it makes anything easier to consider the process here, rather than just the set of y points. I need to ask some people at work to see how others see it.

Can more people chime in here, perhaps?


Just to make something clear before going away from this discussion: by "process" I don't necessarily mean thinking in terms of the drawing utensils/paper etc. I see it more as a "process of generating pixels or data points in a time-dependent fashion and visualizing the animation of the production instructions' results in my head". Or, even more abstractly, as an abstract ordered set of instructions that mutate some shared state; it doesn't have to be something visual.

Yeah, I guess less abstract-minded process-thinkers would be more fixated on physical tools and representations, as physical reality is where this way of thinking originated. I imagine drawing something in the sand, then erasing it and drawing something else as a storyteller moves on to illustrating some other stage of the story he is telling. Or using an abacus with beads on a string. Generalize and abstract from this and a von Neumann machine with a bunch of registers suddenly becomes a pretty intuitive way to think about computing things and writing programs... then generalize more and from registers you get to variables, even more and you can have pointers.

It's quite a simple conceptual route from the intuitions of cavemen drawing in the sand to programming in C (a good one if you really want to sound condescending to C programmers...). Whereas the conceptual route to lambda calculus and category theory and type systems and monads... that's a long, alien and tortuous one, even if it seems to lead to a wonderful castle.

...ok, now back to banging out more imperative OOP code for work together with my fellow coding cavemen :)


I don't see how you can separate the two. A point on a unit circle describes a triangle formed by the horizontal step along the X axis, and then a vertical step to get to the circle. The sine is the vertical step, for a unit circle. For a non-unit circle, this has the wrong scale, which is normalized by dividing by the circle radius. And that radius happens to be the third side of the triangle, the hypotenuse. You can ignore the triangle aspect when you assume the unit circle, which makes the hypotenuse 1.
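
In symbols, that normalization is just (nothing beyond what's said above):

    \sin\theta = \frac{\text{vertical step}}{\text{radius}} = \frac{y}{r}, \qquad r = 1 \;\Rightarrow\; \sin\theta = y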


Not that I'm going to claim the "default state of human cognition" is either sequential or "functional"... but I'll add anecdote to Peaker's anecdote and state that the way you're describing your mental processes feels very alien to me as well.


There's less fighting the student's ego involved, but it's overall more work because there's just so much they don't know if it's their first programming language. Experienced programmers repeating the trope that it's easier to teach/learn Haskell for a new programmer are:

1. Not speaking from experience

2. Taking a lot of knowledge for granted

Example - explain what a "side effect" is. Why is it a "side" effect? What's ()? Why is that "nothing"? Why does putStrLn return IO ()?
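
For anyone following along, here's a minimal sketch of what those questions are pointing at (standard GHC/Prelude, nothing book-specific):

    -- () is the unit type: a type with exactly one value, also written ().
    unit :: ()
    unit = ()

    -- putStrLn takes a String and produces an IO action. The action's
    -- result carries no information, so its type is IO ().
    greet :: IO ()
    greet = putStrLn "hello"

    main :: IO ()
    main = do
      result <- greet   -- result :: (), the "nothing" in question
      print result      -- prints ()

Explaining why each of those lines means what it does, to someone who has never programmed, is exactly the work being described.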

That said, it's been a pleasure learning how to teach Haskell and writing the book so far.


I suppose there are two ways to interpret that trope. First, that it's easier to teach someone with no prior background to program, to some goalpost level of skill, via Haskell than via other languages. Second, that it's easier to teach someone who has no knowledge of programming otherwise to understand Haskell than it is to teach someone with "imperative experience" to understand it.

I wonder what your thoughts are these days on each of these. My understanding is that you have the requisite experience to talk about it, like few others!

Finally, it's interesting to me to note that I think the best version of that trope, though one that is impractical and untestable, is that it's easier to teach a newbie to program in Haskell than an experienced Java programmer, ignoring some set of standard, non-linguistic training points. This is the most interesting version since it gets directly at there being some kind of "imperative mindset impedance" which is presumed to exist, but it's the least testable since I'm certain nobody will ever agree on what those "non-linguistic training points" are. It's probably folly to assume they even exist.

Anyway, I'd love to hear your thoughts.


Even newbie computer users understand mutation, because they have moved files (or other objects, like mailbox items) from one place to another, deleted or renamed them, or edited their contents.

Mutation is also something exhibited by everyday objects. This is modeled much better by object-oriented programming with mutable state. More importantly, the rigid typing of Haskell isn't found in the real world in which backpacks can hold dissimilar items, and donkeys can mate with horses to produce sterile offspring.

Haskell is close to mathematics, sure. Many people know mathematics. However, many people do not know that mathematics which Haskell is close to! So your (2) is a slight equivocation on the term "mathematics".


Mutation is just how you cut up time.

If I pick up a coin and place it 6 inches to the left, I've mentally cut the world into three epochs: the prior, the mutation, and the posterior. Thus I feel I mutated it.

If I drop a ball, I can do the same trick and cut the world into the before and after of my releasing the ball, but for the duration of it falling I think of physics just happening. This is smooth, continuous, and not a set of discrete mutations.

The math backs this up as well, in the sense that integration is exactly what lets us convert the world of mutation into the world of constant physics (FRP from the "continuous time denotational semantics" perspective of Conal Elliott, not the usual way the term is abused).
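
To spell out the integration point (plain kinematics, not anything specific to Conal Elliott's papers): for the falling ball, velocity and position are continuous functions of time rather than a sequence of discrete updates,

    v(t) = v(0) + \int_0^t a(\tau)\,d\tau, \qquad x(t) = x(0) + \int_0^t v(\tau)\,d\tau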

So to claim that "everyday objects" are beholden to mutation or immutability is, I think, a game of confusing the map with the territory.


Mutation is a function over time, for sure. Most people understand the change and not the continuous function behind the change, and many processes are too chaotic to be described by nice clean continuous functions (a physical simulation with instantaneous collisions...). I believe even FRP uses steppers for those kinds of things.

Most people think in terms of mutation, not in terms of the abstract functions that cause values to change over time.


I think this is just a vocabulary difference, not a genuine mental one.

When I see a ball falling I don't think of it moving forward and accelerating in an infinite number of infinitesimal mutations---I see it as being in the state of falling and recognize the space of actions I could perform in time with it. There's mutation in the sense that I cannot arbitrarily go back in time. There's immutability in the sense that the rules governing that motion are not "staged" in any way outside of the interference of my hand.

A better example is perhaps catching a fly ball as an outfielder in baseball. I absolutely cannot be processing that motion as a set of mutations in order to function---instead, I predict according to a "mathematical pattern" if not a formula, where it will land based on scant observation of segments of its space-time trajectory.

And that's all more or less instinctual, I believe. It must be in order for me as an outfielder to execute my task. That stuff needs to just be built in to my brain.

So while ultimately FRP and physics might be "implemented" (ha, ha, philosophers forgive me) in discrete steps, there's a vital model of each which is continuous.


Many behaviors are continuous; it is just that many are not. I am here today and gone tomorrow. Why? It doesn't matter, it just is! If all I had to worry about in a physics engine was F = ma, life would be a simple continuous function, but it turns out collisions just ruin that.

If you really want to get into the thick of it, take Conway's Game of Life (or NKS, if you can stomach Wolfram): you very quickly reach a point in such a process where you don't know what happens next without the previous state.
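
To make that concrete, here's a minimal Game of Life step (my own sketch, using Data.Set): the next generation is computed entirely from the previous board, and there's no way to shortcut past it.

    import qualified Data.Set as Set

    type Cell  = (Int, Int)
    type Board = Set.Set Cell          -- the set of live cells

    neighbors :: Cell -> [Cell]
    neighbors (x, y) =
      [ (x + dx, y + dy)
      | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0) ]

    -- One generation: a pure function from the previous board to the next.
    step :: Board -> Board
    step board = Set.filter aliveNext candidates
      where
        candidates   = board `Set.union`
                       Set.fromList (concatMap neighbors (Set.toList board))
        liveAround c = length (filter (`Set.member` board) (neighbors c))
        aliveNext c
          | c `Set.member` board = liveAround c == 2 || liveAround c == 3
          | otherwise            = liveAround c == 3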


To be clear, I'm not arguing that either mechanism describes all things—actually the exact opposite, that neither is sufficient.


The issue with arguments of the form "Mutation is..." is that all of these things are models of mutation: merely representations. Different models help us elucidate different properties of the same underlying phenomenon. This does not mean that either one is "correct" or the one true representation of mutation.


Most people using Haskell are using it to write imperative programs; giving up mutation specifically is less of a big deal than you'd think.
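
To illustrate (an ordinary sketch using IORef from base, not any particular library's style): Haskell written imperatively looks like this, mutation and all, just confined to IO:

    import Data.IORef (newIORef, readIORef, modifyIORef')

    -- A deliberately imperative-looking program: a mutable counter
    -- updated in a loop, all inside IO.
    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      mapM_ (\line -> do
               putStrLn ("read: " ++ line)
               modifyIORef' counter (+ 1))
            ["foo", "bar", "baz"]
      total <- readIORef counter
      putStrLn ("lines processed: " ++ show total)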


I'm pretty sure I can implement any imaginable algorithm without mutation.

Implementing it with any efficiency seems like a different question.

A common activity in many programs is scanning a list and changing a few items based on some criteria. So far I've heard that in Haskell you either duplicate the list or create a complicated data structure that somehow essentially makes this mutation OK. Either way seems silly - especially doing what you did before but with a complex dance around it.


>Implementing it with any efficiency seems like a different question.

Nah, same asymptotic limitations, just different constants and patterns in what's sensible.

Example: https://www.youtube.com/watch?v=6nh6LpcXGsI

Data.Map, Data.Sequence, Data.Vector all serve for most anything you'd want.
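
For example (Data.Map from the containers package; the names here are made up):

    import qualified Data.Map.Strict as M

    -- A persistent update: M.insert returns a new map that shares most
    -- of its structure with the old one, which stays valid and unchanged.
    prices, prices' :: M.Map String Int
    prices  = M.fromList [("apple", 3), ("pear", 4)]
    prices' = M.insert "apple" 5 prices

    -- M.lookup "apple" prices  == Just 3
    -- M.lookup "apple" prices' == Just 5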


> Either way seems silly - especially doing what you did before but with a complex dance around it.

Interesting you think that. I think the opposite. It seems silly to overwrite something that someone else may have been relying on to not change when I can do a persistent update with pretty much the same performance.
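
Concretely, the grandparent's "scan a list and change a few items" could look like this (a made-up example, plain Prelude):

    -- Bump any negative reading up to zero. The original list is left
    -- untouched, so anyone still holding it sees the old values.
    clampNegatives :: [Int] -> [Int]
    clampNegatives = map (\x -> if x < 0 then 0 else x)

    -- clampNegatives [3, -1, 4, -2]  ==  [3, 0, 4, 0]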


Well,

That makes immutability a nice technique if you happen to be dealing with a situation of synchronous access and otherwise an unneeded approach.

The topic was Haskell as a first language. Despite the hype, most people aren't going to be writing fully synchronous applications most of the time, and thus immutability all the time seems mostly like unneeded bondage-and-discipline programming.


Concurrency requires that someone else see the change. Persistent collections work by throwing out concurrency.


Haskell relies heavily on the compiler being intelligent. In most cases, it will use stream fusion to avoid duplicating the list.
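
For example (using the vector package, where the fusion framework is most visible; whether a given pipeline actually fuses depends on the optimization settings):

    import qualified Data.Vector.Unboxed as V

    -- With fusion this pipeline typically compiles to a single loop over
    -- the input; the intermediate vectors are never materialized.
    sumOfSquaredEvens :: V.Vector Int -> Int
    sumOfSquaredEvens = V.sum . V.map (^ 2) . V.filter even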


> I'm pretty sure I can implement any imaginable algorithm without mutation.

We know this from the equivalence of Universal Turing Machines (which mutate a tape) and partial recursive functions (application of operations to immutable values).

Any mutation is a functional mapping from the total previous state to the total new state.
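
In code, that mapping can be written out directly, or dressed up in imperative clothing with the State monad (a sketch assuming the mtl package; the queue here is a made-up stand-in for "the total state"):

    import Control.Monad.State (State, modify, execState)

    type Queue = [String]

    -- A "mutation" as a pure function from the old state to the new one.
    enqueue :: String -> Queue -> Queue
    enqueue pkt q = q ++ [pkt]

    -- The same thing in imperative style; underneath it is still just
    -- old-state-in, new-state-out.
    enqueueM :: String -> State Queue ()
    enqueueM pkt = modify (enqueue pkt)

    demo :: Queue
    demo = execState (mapM_ enqueueM ["a", "b", "c"]) []
    -- demo == ["a", "b", "c"]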

Of course, you generally don't want your ethernet driver to clone the entire OS just because it has another packet to add to a queue.

Mutation can be tremendously resource-saving, when you don't need the previous state. The problems occur when parts of the software develop nostalgia for the way things were before the mutation. :)


Even outside Haskell, a lazy list is an obvious possibility for that. It's not for no reason that many languages have implemented map, reduce, filter; laziness can still be pretty efficient while increasing composability and readability.

Even Java has finally been dragged into the 1960s!
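
A tiny Haskell illustration of that lazy-pipeline style (plain Prelude; the list [1..] is notionally infinite, but only as much of it as take demands is ever computed):

    firstFiveBigSquares :: [Integer]
    firstFiveBigSquares = take 5 (filter (> 1000) (map (^ 2) [1 ..]))
    -- == [1024, 1089, 1156, 1225, 1296]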




