Learning Haskell as a non-programmer (2014) (superginbaby.wordpress.com)
106 points by sridca on April 5, 2015 | 71 comments


> Honestly, I still don’t really understand imperative languages. Why would you want things to mutate?

Because in the computer, things need to mutate, so a language that doesn't mutate is far from the computer.

EDIT: Put another way: Haskell does support mutation, but inside of particular monads. So obviously mutation is useful. Learn why it's useful in Haskell and you'll see why it's useful elsewhere.
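For example, here's a minimal sketch of explicit mutation in IO, using Data.IORef from base (the counter is illustrative):

    import Data.IORef (newIORef, modifyIORef', readIORef)

    -- A mutable counter; the mutation is real, but it can only
    -- happen inside IO (or ST behind runST).
    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)
      modifyIORef' counter (+ 1)
      modifyIORef' counter (+ 1)
      readIORef counter >>= print  -- prints 2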


In my opinion you're seeing it wrong. The beauty of FP is that from your code's point of view there is no mutation. Everything is immutable, everything is clean and beautiful. But then the compiler is able to tell that your variable that's used as an argument to a tail-recursive function is actually being redefined on every recursive call and treats it as a mutable variable in the output assembly. Similar ideas are applied throughout those languages.
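To illustrate the tail-recursion point with a Haskell sketch (names are mine): acc below is never mutated in the source, but the compiled loop can reuse a single register for it on every iteration.

    -- Sum of 1..n, tail-recursively. Each call rebinds acc and i;
    -- the generated code can treat them as two mutable slots.
    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)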

For example, in OCaml you have the notion of "functional record updates". Basically you update structs by getting a whole copy back with the modified fields changed accordingly. Under the hood though, what you get is just the changes and the other fields point to the original struct.
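Haskell's record update syntax behaves the same way; a rough sketch (the Config type here is made up for illustration):

    data Config = Config
      { host    :: String
      , port    :: Int
      , timeout :: Int
      } deriving Show

    -- "Updating" returns a fresh record; the untouched fields
    -- (host, timeout) are shared with the original, not copied deeply.
    withPort :: Int -> Config -> Config
    withPort p cfg = cfg { port = p }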

Basically there is a difference between what your code says and what the compiler outputs, but in the end the result is the same; the compiler is just smart enough to rewrite it for you. You're right that it's further from the computer, but only in what you type in your editor. I highly recommend Okasaki's book Purely Functional Data Structures (it's on Amazon) if these sorts of brilliant compiler optimizations interest you.


> The beauty of FP is that from your code's point of view there is no mutation.

The beauty of pure FP is that from your code's point of view there is no mutation. The only functional language you mentioned in your post is OCaml, which is impure and supports ad-hoc mutation, like every other language in the ML family, and like every other functional language except for a select few like Haskell and Miranda and their derivatives.

Moreover Haskell supports other kinds of mutation besides fusion and tail-call elimination; it just does it behind a pure interface. See, for example, http://hackage.haskell.org/package/array-0.1.0.0/docs/Data-A... .
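For instance, a hedged sketch with the array package's ST interface: the writes below are genuine in-place mutation, but runSTUArray guarantees no caller can ever observe them mid-flight (prefixSums is an illustrative name):

    import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)

    -- prefixSums n ! i == 0 + 1 + ... + i, built by in-place writes.
    prefixSums :: Int -> UArray Int Int
    prefixSums n = runSTUArray $ do
      arr <- newArray (0, n) 0
      let fill i = do
            prev <- readArray arr (i - 1)
            writeArray arr i (prev + i)
      mapM_ fill [1 .. n]
      return arr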

So we're back at the same place: obviously mutation is useful since even Haskell goes out of its way to make sure you have access to it.


I think the pure interface is the useful part. If there's mutation, but I can firewall it behind a (non-leaky) pure interface, then I don't have to reason about that mutation if I'm looking at code outside the interface. That's a huge win.

Now, OO tries to do the same kind of thing - hiding things behind an abstraction that you don't have to think about from outside the abstraction. The problem is, all the abstractions leak. Does Haskell's hiding mutation behind a purely functional interface leak? I don't know enough to say, but I think it's a critical question.


> in the computer, things need to mutate, and so a language that doesn't mutate is therefore far from the computer

I hear this argument a lot but I don't buy it. You could also say

"In the computer, things are highly asynchronous quantum-level electrical fluctuations, and so a language that doesn't model highly asynchronous quantum-level electrical fluctuations is therefore far from the computer."

which is clearly invalid. Some explanation is needed of why the argument for mutation holds, but for quantum fluctuations does not hold.


The difference is abstraction.

When you code, you're building on top of assembly: CPU registers and instructions. The fact that x86-64 runs on a bunch of electrical fluctuations is actually inconsequential, an implementation detail.

Any x86-64 processor built of transistors naturally needs to take into account the behavior of the transistors it's made of. But once you've gone above the microcode/processor level, those details are fully abstracted.

Regardless of what programming language you use, that is abstracted away identically; it doesn't matter.

However, the fact that registers in the processor and memory locations exist and mutate is not abstracted away in every language. It's abstracted in some but not others, so it's valid to say that this difference is within the scope of the programming language.

That is why the argument differs. If the above isn't clear, I'd be happy to give another go at explaining it.


But surely you must see that this is just a choice which happens to lie favorably for the argument but is otherwise arbitrary.

You could write a language which expressed manipulations of the electrical fluctuations in the same way that DNA is a programming language which expresses manipulations of ambient signaling protein levels to operate our cellular computers.

Or, taken another way, GLSL is a language which shares a lot of similarity with "functional" languages (despite its C-like syntax), and it certainly doesn't run on an x86 architecture!


Yes, exactly this. Different tools are good for different abstraction levels. Nobody expects people to write applications entirely in assembly anymore, because we have higher levels of abstraction, but that doesn't mean assembly is good for nothing. Assembly is a great and necessary abstraction level. Similarly, technologies in other abstraction levels are useful too.

Too often I see people conflating "X is good" for "X is good for everything" when that is so rarely, if ever, true.


So it's just a matter of abstracting less or more of the implementation details. In Haskell, the mutation necessary to implement pure, lazy computation is an implementation detail, just like manual memory management or quantum fluctuations.


One of those implementation details is hidden away underneath the hardware interface and not visible to software.

There are plenty of better examples to make this point with- named variables and functions, control structures, etc. And these make it clear that the argument is really about the importance of this particular abstraction: where and when memory is used for which values.


Sure, I don't think it is very important where the abstraction occurs for this argument though.


In the computer, you also have no descriptive variable names and make excessive use of gotos. This is far from a good argument.

> Haskell does support mutation, but inside of particular monads.

You are mistaken. Monads do not involve mutation; that would defeat their entire purpose. Although it's true that the monadic code is being compiled into assembly that involves mutation, just as $LANGUAGE code is being compiled into assembly that includes a lot of goto.


This is said a lot, and it's technically valid in a sense, but I think it's over-pedantic.

Some monads (IO, ST, State) do support mutation in the internal language. Their implementation, their representation in the meta-language (Haskell), can then be "pure or whatever who cares".

    http://www.reddit.com/r/haskell/comments/30l46z/haskell_for_all_algebraic_side_effects/cptlduv
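To make that split concrete, a simplified sketch of a pure State (hedged: the real Control.Monad.State in transformers is built on StateT, but the idea is the same):

    newtype State s a = State { runState :: s -> (a, s) }

    -- The "mutating" primitives are ordinary pure functions:
    get :: State s s
    get = State (\s -> (s, s))

    put :: s -> State s ()
    put s' = State (\_ -> ((), s'))

    -- Sequencing (what >>= would be) threads the state along;
    -- nothing is ever overwritten:
    bind :: State s a -> (a -> State s b) -> State s b
    bind (State f) k = State (\s ->
      let (a, s') = f s
      in runState (k a) s')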


> You are mistaken. Monads do not involve mutation.

Indeed. IO and ST involve mutation. They just so happen to be instances of Monad, but that's tangential.


They don't involve mutation at the language-semantics level. The mutation happens through the runtime (via calls to the C FFI).

In reality they're implemented in a mutation-heavy way for performance reasons, but at the language level absolutely no mutation happens (it's the whole World -> (Result, World) analogy).
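i.e., a toy rendering of that analogy (hedged: GHC's real IO is a newtype over an unboxed State# RealWorld token, not a data structure like this; IO' and World are illustrative names):

    data World = World  -- purely illustrative stand-in

    newtype IO' a = IO' (World -> (a, World))

    -- Chaining threads the world value through, which is what
    -- forces actions into a definite order:
    bindIO' :: IO' a -> (a -> IO' b) -> IO' b
    bindIO' (IO' f) k = IO' (\w ->
      let (a, w') = f w
          IO' g   = k a
      in g w')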


The whole point about where the mutation happens and how the language remains pure in spite of it is not awfully relevant. The point is that Haskell does support real, bona-fide mutation in its standard library (see for example its support for stateful IO arrays). So obviously mutation is useful, regardless of whether you're doing that mutation behind a pure interface.


They're not done via the C FFI. It's all compiler-supported primops that get optimized and lowered to native assembly for the target.


To add to what rtpg says, look at an actual monadic example of IO:

  putStrLn "What is your name?"
    >> getLine >>= 
    \name -> putStrLn ("Hi, " ++ name ++ "!")
It's just a giant chain of instructions, each previous result feeding into the next. Nowhere do we assign anything, modify anything, etc. No mutation.
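And do-notation desugars to exactly that pipeline:

    -- The same program with do-notation sugar:
    main :: IO ()
    main = do
      putStrLn "What is your name?"
      name <- getLine
      putStrLn ("Hi, " ++ name ++ "!")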


It's an implementation of mutability in a non-mutable language implemented on a platform with mutability.


> To me, it’s a bit like saying that if you already know English, then learning a second language will be more difficult for you than if you didn’t know any language.

To master a first native language takes roughly 16 years for most people ... I think you can reasonably master a second language far faster by leveraging analogies with the first. I also have a very hard time believing that learning Haskell is easier if you lack programming experience in imperative languages.

> along with learning Emacs, but they are mere constituents in a continually growing pile of stuff I still need to learn.

At no point does she give a reason for learning Haskell in the first place. I mean, you only have so much time - why spend time learning Emacs out of nowhere?

> True Confession: I didn’t know Github existed until I started learning Haskell; I didn’t even know what version control was. So, as I was starting with Haskell, I had Linux, Git, an invitation to join an IRC channel (a wut?)

So where is her GitHub repo? Can't find the link. There is probably a lot of very smart Haskell code to be found, I guess.

> Honestly, I still don’t really understand why people like imperative and OO languages.

She writes that she has no experience with other language paradigms, but she knows why they're worse than the functional approach? Sure ...

The only thing missing in this post is a humblebrag comment about how she just got a dev position at Google ...


This feels needlessly judgmental.

She has a friend who likes Haskell and taught her. The things he taught make sense to her. Perhaps she doesn't have the experience to make a fuller judgement, but that doesn't stop her from talking about what she's personally discovered.

The amount of derision you've managed to read into that is remarkable.


I think you are right - my comment is definitely a bit harsh. Certainly I don't hold a grudge against her because of this. But this type of over-hyped blog post fits into a weird and annoying pattern on HN, as I see it. At the moment I can't say specifically why I react so allergically to it.


>over-hyped blog post

It's not over-hyped until there's a $1mm kickstarter ;)

Also you're not leaving my co-author a great impression of the tech industry.

She's been reading this thread and you people are being terrible.


Okay, I'm sorry! Didn't want to make her feel bad! :)

Then again - she's already making bold statements about correct software development paradigms ... I guess some healthy humility may be useful if you don't want to be judged yourself.


Stop making excuses. She didn't post this to HN. She can write whatever she wants.


Sure she can. But at least one of plongeur's criticisms was completely valid, even if stated a bit harshly. She does in fact write that she has no experience in other language paradigms, but then says, "Honestly, I still don’t really understand why people like imperative and OO languages." That is, she criticized people for choosing paradigms that she admittedly doesn't have the experience to judge.

Then she doubles down on defending that part against criticism, while complaining that that seems to be the part that HN is attacking. Um, maybe because that part is weak?

Look, the point isn't to tear her apart. The point is that it's valid to criticize that part of what she said, without it becoming a personal attack.

As for the rest of what she said, that she mentions that we're not criticizing: That's because (other than the "learning a first language is easiest" part), there's not much more to criticize. She wants to learn Haskell? Great. Go for it. She wants to teach it to others who don't know any programming language? Great. Go for it. She wants to document her trail to make it easier for others? Wonderful! There's nothing whatsoever to criticize with any of that.

And as for why it's getting criticized here on HN when she didn't post it here: Somebody did, and we're commenting on it. That's kind of how the comments section of HN works. The only way it could be different is if it wasn't posted here (so nobody read it here; probably not a net win for anybody), or if HN had a way to post something where no commenting was allowed.


> To master a first native language takes at least about 16 years for most people

What's your definition of "master"? That doesn't sound like the kids I think of when I think "learnt English", and I'm not sure a second language can be mastered to the same level as the first in a shorter timespan (though conversational fluency, to the point where one can understand and be understood, happens on a significantly shorter time-scale).


I thought people say learning Haskell as a first language is easy because

1) You don't have to do a lot of unlearning. See, you still don't understand why you would want things to mutate. Experienced programmers (in other languages) mutate things in every single line of code they write. They have to unlearn mutating things to learn this.

2) It's closer to mathematics which many people know.

I think the problems you faced are actually problems because you read a book that was meant for programmers. Hell, you'll face these problems even if you read a C book that is meant for programmers.

I still think Haskell (as a language) is a good first language.


> 2) It's closer to mathematics which many people know.

No, they don't. Most people have "how to" knowledge of basic math in their heads: the imperative parts, like the algorithms for dividing and multiplying numbers that they run on their wetware computer. The part concerned with reasoning about "what is", the proofs part, the part mathematicians call "math", is not what most people's knowledge of math consists of.

Imperative languages really are closer to how our minds work everyday. Yeah, on occasion we do some "deep thinking" about "what is" but most of the time our brain goes about things like "ok, what are the steps to do X? [...] and now I'll always label the result of the last step y [...] and when condition z happens I'll goto step s etc.".

...this is how laws, regulations, institutional processes etc. are formulated. And this is how you see nature work: that place on the tree branch where you saw a bird now holds nothing (... 'null bird pointer'?), or maybe you look around a few hours after the storm and see that the branch is actually broken (... 'segfault'?).


> Imperative languages really are closer to how our minds work everyday

Why do you believe this? I think it is a cognitive bias at play, where you've been trained to think this way until it comes naturally, and now it's hard to see it any other way. Through that lens, you see imperative instructions everywhere.

I randomly opened a wiki page: http://en.wikipedia.org/wiki/Sine

In it, you'll find a sine is not described as a series of imperative steps to compute it, but is defined as what it is. Most descriptions and models you'll find of most things are this way.
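Even in code you can stay close to the "what it is" definition; a naive Haskell sketch straight from the Taylor series (truncated at 20 terms, which is fine for small x; the names are mine):

    -- sin x = sum over n of (-1)^n * x^(2n+1) / (2n+1)!
    sine :: Double -> Double
    sine x = sum (take 20 terms)
      where
        terms = [ (-1) ^ n * x ^ (2 * n + 1) / fromIntegral (fact (2 * n + 1))
                | n <- [0 :: Int ..] ]
        fact k = product [1 .. toInteger k]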


Indeed, this is how things are described in a textbook. And how they should be. But, for example, after you've learned how to draw an ellipse using a string, a pen and two pins (like https://www.youtube.com/watch?v=7UD8hOs-vaI), your intuition and unconscious perception of what an ellipse is will be tied to the process of producing it and a visual representation of that process, not to an arid definition like "a curve on a plane surrounding two focal points such that the sum of the distances to the two focal points is constant for every point on the curve".

This is why people find really learning mathematics so incredibly hard: our intuitions are process oriented. And it's why you can take anyone who is not what I call a "mud-mind" (someone who just can't do rigorous logical thought) and teach him/her how to write code in an imperative language until they can get something working, while making the same person understand (not memorize!) a mathematical proof, or envision such a proof, is just 10x harder.

I love functional programming and being able to do "what is" reasoning about programs. But I think this is just inherently hard for the human mind to do, we're not optimized for it, we're optimized for generating sequences of instructions that we send to our muscles to execute (and our muscles and bodies and mind too are obviously stateful, btw) and we're much better at generalizing this "dirty" way of thinking even for abstract pursuits like software development...


> your intuition and unconscious perception of what an ellipse is will be tied to the process of producing it and a visual representation of this process

What? I don't think this way at all, and I'm surprised anyone does!

> because our intuitions are process oriented

That is very weird for me to hear. I wonder if some sort of survey could clarify if this is actually a common way to think.

> But I think this is just inherently hard for the human mind to do, we're not optimized for it

Again, I think you're extrapolating from your own training/way of thinking, and I doubt this is commonly true.


Interesting. It might be fun to try to spot "process oriented" vs. "existence/properties oriented" people around, and it will surely make for some entertaining lunchtime discussions...

Like the thing with algebra-oriented vs calculus-oriented people eating corn: http://bentilly.blogspot.ro/2010/08/analysis-vs-algebra-pred... :)

I tend to be process oriented by default, I guess, even with your sine example... The sine function only "clicked" for me after I saw it as the y projection of the radius of the trigonometric circle, and imagined that if you attached a pen to this point moving on the y axis while a paper scrolled at constant speed behind it, you'd get that undulating line. Now, I understand that this way of thinking is quite biased (I tend to get very good intuitions about "how to build things" but am not very good at explaining why they work afterwards), and it can make me miss obvious shortcuts, like looking for which properties and relationships are conserved, and this is why I make an effort to do more "what is" thinking about stuff... but "how to" thinking just seems more natural to me. Maybe it's a "scientists" vs. "engineers" thing, and programming just happens to be a middle ground where both kinds of people are frequent :)


By the way, I relate to the sine as y projection of a unit circle too, far more than the triangle ratio.

But I relate to it by thinking of the y coordinates of the points between x=-1..1 in a unit-circle, without any drawing utensils/pens/etc.

I don't see why it makes anything easier to consider the process here, rather than just the set of y points, I need to ask some people at work to see how others see it.

Can more people chime in here, perhaps?


Just to make something clear before going away from this discussion: by "process" I don't necessarily mean the drawing utensils/paper etc.; I see it more as a "process of generating pixels or data points in a time-dependent fashion and visualizing the animation of the production instructions' results in my head". Or, even more abstractly, as an ordered set of instructions that mutate some shared state; it doesn't have to be something visual.

Yeah, I guess less abstract-minded process-thinkers would be more fixated on physical tools and representations, as physical reality is where this way of thinking originated. I imagine drawing something in the sand, then erasing it and drawing something else, as a storyteller moved to illustrating some other stage of the story he was telling. Or using an abacus with beads on a string. Generalize and abstract from this, and a von Neumann machine with a bunch of registers suddenly becomes a pretty intuitive way to think about computing things and writing programs... then generalize more and from registers you get to variables, and more still and you can have pointers.

It's quite a simple conceptual route from the intuitions of cave men drawing in the sand to programming in C (a good one if you really want to sound condescending to C programmers....) Whereas the conceptual route to lambda calculus and category theory and type systems and monads... that's a long, alien and tortuous one, even if it seems to lead to a wonderful castle.

...ok, now back to banging more imperative oop code for work together with my fellow coding cavemen :)


I don't see how you can separate the two. A point on a unit circle describes a triangle formed by the horizontal step along the X axis, and then a vertical step to get to the circle. The sine is the vertical step, for a unit circle. For a non-unit circle, this has the wrong scale, which is normalized by dividing by the circle radius. And that radius happens to be the third side of the triangle, the hypotenuse. You can ignore the triangle aspect when you assume the unit circle, which makes the hypotenuse 1.


Not that I'm going to claim the "default state of human cognition" is either sequential or "functional"... but I'll add anecdote to Peaker's anecdote and state that the way you're describing your mental processes feels very alien to me as well.


There's less fighting the student's ego involved, but it's overall more work because there's just so much they don't know if it's their first programming language. Experienced programmers repeating the trope that it's easier to teach/learn Haskell for a new programmer are:

1. Not speaking from experience

2. Taking a lot of knowledge for granted

Example - explain what a "side effect" is. Why is it a "side" effect? What's ()? Why is that "nothing"? Why does putStrLn return IO ()?
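For the curious, the answers live in the types; these are the standard Prelude signatures, as GHCi reports them:

    ghci> :type putStrLn
    putStrLn :: String -> IO ()
    ghci> :type getLine
    getLine :: IO String

IO () is an action whose result carries no information, hence the single unit value (); but none of that is obvious to a beginner, which is exactly the point.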

That said, it's been a pleasure learning how to teach Haskell and writing the book so far.


I suppose there are two ways to interpret that trope. First, that it's easier to teach someone from scratch to program at some goalpost level of skill via Haskell than via other languages. Second, that it's easier to teach someone who otherwise has no knowledge of programming to understand Haskell than it is to teach someone with "imperative experience" to understand it.

I wonder what your thoughts are these days on each of these. My understanding is that you have the requisite experience to talk about it, like few others!

Finally, it's interesting to state that I think the best version of that trope, though one that is impractical and untestable, is that it's easier to teach a newbie to program Haskell than an experienced Java programmer, ignoring some set of standard, non-linguistic training points. This is the most interesting version since it gets directly at there being some kind of "imperative mindset impedance" which is presumed to exist, but it's the least testable since I'm certain nobody will ever agree on what those "non-linguistic training points" are. It's probably folly to assume they exist, even.

Anyway, I'd love to hear your thoughts.


Even newbie computer users understand mutation, because they have moved files (or other objects, like mail box items) from one place to another, deleted or renamed them or edited their contents.

Mutation is also something exhibited by everyday objects. This is modeled much better by object-oriented programming with mutable state. More importantly, the rigid typing of Haskell isn't found in the real world in which backpacks can hold dissimilar items, and donkeys can mate with horses to produce sterile offspring.

Haskell is close to mathematics, sure. Many people know mathematics. However, many people do not know the mathematics which Haskell is close to! So your (2) is a slight equivocation on the term "mathematics".


Mutation is just how you cut up time.

If I pick up a coin and place it 6 inches to the left, I've mentally cut the world into three epochs: the prior, the mutation, and the posterior. Thus I feel I mutated it.

If I drop a ball, I can do the same trick and cut the world into the before and after of my releasing the ball, but for the duration of it falling I think of physics just happening. This is smooth, continuous, and not a set of discrete mutations.

The math backs this up as well in the sense that integration is exactly what lets us convert the world of mutation into the world of constant physics (FRP from the "continuous time denotational semantics" perspective of Conal Elliott not the usual way the term is abused).

So to claim that "everyday objects" are beholden to mutation or immutability is, I think, a game of confusing the map with the terrain.


Mutation is a function over time, for sure. Most people understand the change and not the continuous function behind the change, and many processes are too chaotic to be described by nice clean continuous functions (a physical simulation with instantaneous collisions...). I believe even FRP uses steppers for those kinds of things.

Most people think in terms of mutation, not in terms of the abstract functions that cause values to change over time.


I think this is just a vocabulary difference, not a genuine mental one.

When I see a ball falling I don't think of it moving forward and accelerating in an infinite number of infinitesimal mutations; I see it as being in the state of falling and recognize the space of actions I could perform in time with it. There's mutation in the sense that I cannot arbitrarily go back in time. There's immutability in the sense that the rules governing that motion are not "staged" in any way outside of the interference of my hand.

A better example is perhaps catching a fly ball as an outfielder in baseball. I absolutely cannot be processing that motion as a set of mutations in order to function; instead, I predict, according to a "mathematical pattern" if not a formula, where it will land, based on scant observation of segments of its space-time trajectory.

And that's all more or less instinctual, I believe. It must be in order for me as an outfielder to execute my task. That stuff needs to just be built in to my brain.

So while ultimately FRP and physics might be "implemented" (ha, ha, philosophers forgive me) in discrete steps, there's a vital model of each which is continuous.


Many behaviors are continuous; it's just that many are not. I am here today and gone tomorrow. Why? It doesn't matter, it just is! If all I had to worry about in a physics engine was F = ma, life would be a simple continuous function, but it turns out collisions just ruin that.

If you really want to get into the thick of it, take Conway's Game of Life (or NKS, if you can stomach Wolfram): you quickly reach a point in a physical process where you don't know what happens next without the previous state.
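To make that concrete, here's one generation of Life as a pure Haskell sketch (assuming the standard rules and containers' Data.Set; names are mine). The step function is pure, but you still can't compute generation n+1 without generation n in hand:

    import qualified Data.Set as Set

    type Cell  = (Int, Int)
    type Board = Set.Set Cell  -- the set of live cells

    neighbors :: Cell -> [Cell]
    neighbors (x, y) = [ (x + dx, y + dy)
                       | dx <- [-1, 0, 1], dy <- [-1, 0, 1]
                       , (dx, dy) /= (0, 0) ]

    -- Standard rules: live cells with 2 or 3 live neighbors survive,
    -- dead cells with exactly 3 are born.
    step :: Board -> Board
    step board = Set.filter alive candidates
      where
        candidates  = Set.union board
                        (Set.fromList (concatMap neighbors (Set.toList board)))
        liveCount c = length (filter (`Set.member` board) (neighbors c))
        alive c
          | Set.member c board = liveCount c `elem` [2, 3]
          | otherwise          = liveCount c == 3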


To be clear, I'm not arguing that either mechanism describes all things—actually the exact opposite, that neither is sufficient.


The issue with arguments of the form "Mutation is..." is that all these things are models of mutation: merely representations. Different models help us elucidate different properties of the same underlying phenomena. This does not mean that either one is "correct" or the one true representation of mutation.


Most people using Haskell are using it to write imperative programs; giving up mutation specifically is less of a big deal than you'd think.


I'm pretty sure I can implement any imaginable algorithm without mutation.

Implementing it with any efficiency seems like a different question.

A common activity in many programs is scanning a list and changing a few items based on some criteria. So far I've heard that in Haskell you either duplicate the list or create a complicated data structure that somehow makes this essentially-mutation OK. Either way seems silly - especially doing what you did before but with a complex dance around it.


>Implementing it with any efficiency seems like a different question.

Nah, same asymptotic limitations, just different constants and patterns in what's sensible.

Example: https://www.youtube.com/watch?v=6nh6LpcXGsI

Data.Map, Data.Sequence, Data.Vector all serve for most anything you'd want.


> Either ways seems silly - especially do what you did before but with complex dance around it.

Interesting that you think that; I think the opposite. It seems silly to overwrite something that someone else may have been relying on not to change, when I can do a persistent update with pretty much the same performance.
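A hedged sketch with containers' Data.Map: after the "update", the old map is still intact, and the two maps share every untouched branch internally.

    import qualified Data.Map.Strict as Map

    old, new :: Map.Map String Int
    old = Map.fromList [("a", 1), ("b", 2), ("c", 3)]
    new = Map.adjust (+ 10) "b" old  -- O(log n); nothing is overwritten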


Well,

That makes immutability a nice technique if you happen to be dealing with synchronous access, and otherwise an unneeded approach.

The topic was Haskell as a first language. Despite the hype, most people aren't going to be writing fully synchronous applications most of the time, and thus immutability-all-the-time seems mostly like unneeded bondage-and-discipline programming.


Concurrency requires that someone else see the change. Persistent collections work by throwing out concurrency.


Haskell relies heavily on the compiler being intelligent. In most cases, it will use stream fusion to avoid duplicating the list.


> I'm pretty sure I can implement any imaginable algorithm without mutation.

We know this from the equivalence of Universal Turing Machines (which mutate a tape) and partial recursive functions (application of operations to immutable values).

Any mutation is a functional mapping from the total previous state to the total new state.

Of course, you generally don't want your ethernet driver to clone the entire OS just because it has another packet to add to a queue.

Mutation can be tremendously resource-saving, when you don't need the previous state. The problems occur when parts of the software develop nostalgia for the way things were before the mutation. :)


Even outside Haskell, a lazy list is an obvious possibility for that. It's not for no reason that many languages have implemented map, reduce, filter; laziness can still be pretty efficient while increasing composability and readability.
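e.g., a sketch of such a pipeline in Haskell (the name is illustrative): evaluation is demand-driven, and the intermediate lists are candidates for GHC's list fusion, so they often never materialize at all.

    -- Count the doubled values above 100, with no explicit loop state.
    countBig :: Int
    countBig = length (filter (> 100) (map (* 2) [1 .. 1000 :: Int]))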

Even Java has finally been dragged into the 1960s!



Dijkstra's comments on using Haskell as a first language for CS undergrads: http://www.cs.utexas.edu/users/EWD/OtherDocs/To%20the%20Budg...


Here's an audio interview with (I believe) the same person on the same topic:

http://www.functionalgeekery.com/episode-19-julie-moronuki-a...


Yep, that's Julie and I. Had a lot of fun talking to Proctor.

I don't think it was the case at the time of the interview, but Julie is my coauthor for http://haskellbook.com/ now.


As a comment on the post itself already says:

The Haskell Wikibook https://en.wikibooks.org/wiki/Haskell is the very best option for those with no programming background.


I'm going to disagree, particularly for people with no programming background, as it's often incomplete and lacks exercises.

Some (not all) of the material that is there is good though.


Um, the Wikibook has exercises, and you can add more, you know.

The incompleteness is not relevant here. The last quality someone needs when they have no experience is completeness. Sure, the Wikibook isn't complete (but you can add to it where it's missing!), and more exercises would be good. But for anyone coming from zero, the priority is being the best overall introduction to the concepts. Once you're well in the door, then you aren't a total beginner anymore.

The elementary track of the Wikibook is a superb overall introduction. I wasn't saying it stands alone and is all anyone ever needs.


I don't really agree with the approach enough to make the wikibook into what I think is needed without rewriting most of it, which is why I started writing a book instead.

I go over some of the problems with other resources here: http://bitemyapp.com/posts/2014-12-31-functional-education.h...

I didn't touch on the wikibook in part because people have this weird partisanship about it that I haven't yet fully understood.


Learn to program and do it with Haskell — In the back of my mind I thought this was covered, but apparently I was wrong. Could an intro to computation textbook using Haskell be a good alternative introduction to CS?


Hutton's Programming in Haskell covers that.

I'm not sure if it works as a good intro to modern Haskell, though.


The inspiration that brought me to Haskell is the similarity to doing ls . | wc -l on the console. Each monad is to be composed with the others, without knowing about them. Each monad is self-controlled and has its own world. Pureness!


That's not "monads composing" there...


Oh, I'm wrong with the terminology. I mean "chaining".
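For what it's worth, the chaining analogy does hold up; a rough Haskell sketch (listDirectory comes from the directory package):

    import System.Directory (listDirectory)

    -- Roughly `ls . | wc -l`: one action's result feeds the next.
    main :: IO ()
    main = listDirectory "." >>= print . length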


"So, as I was starting with Haskell, I had Linux, Git, an invitation to join an IRC channel (a wut?) and, er, “a text editor” thrown at me. "

If this doesn't strike you as very WRONG, I don't know what will.

How about Codecademy in a browser instead? Some folks shouldn't teach.


I don't agree with your comment, but I did want to point out that I think there's no Haskell Codecademy course.


There's a very good reason for that



