Hacker News
Please Don't Learn Category Theory (2013) (jozefg.bitbucket.org)
54 points by psibi on Jan 15, 2014 | 79 comments


The problem with being a programmer and learning category theory is two-fold.

First, category theory is very abstract. Indeed, mathematicians often jokingly refer to category theory as "abstract nonsense." The story goes that Norman Steenrod, an early and influential user of category theory, coined the term himself. So, category theory can be abstract even for mathematicians.

Second, the "stuff" that category theory was built to abstract is highly mathematical and foreign to virtually every programmer who doesn't have a math background. What's worse, the bits of category theory that Haskell uses most frequently are typically not the bits of category theory that mathematicians use most frequently.

When you put these together, it's very hard to connect the dots between "category theory," "category theory as Haskell uses it," and "Haskell as a typical Haskell programmer uses it." Building up to the bits of category theory that Haskell uses requires understanding things like categories, morphisms, functors, natural transformations, adjunctions, and commutative diagrams, to name six. Trying to understand these without understanding their (inherently mathematical) motivations would be a nightmare: they'd feel like a bunch of disjoint facts and diagrams that are supposed to mean who-knows-what.

And even if you get there, the categorical nature of monads in Haskell is not exactly apparent on the surface. Haskell exposes more programmer-friendly interfaces like "bind" that take some effort to translate into the language of category theory.
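
To make that translation concrete, here is a minimal sketch (assuming only base): category theory usually presents a monad via "join", which flattens a doubly nested structure, while Haskell's Monad class exposes "bind". The two formulations are interdefinable:

    import Control.Monad (join)

    -- bind, recovered from the categorical presentation (fmap, then join):
    bind :: Monad m => m a -> (a -> m b) -> m b
    bind m f = join (fmap f m)

    -- join, recovered from bind:
    join' :: Monad m => m (m a) -> m a
    join' mm = mm >>= id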

For example, a major motivation in the development of category theory was to provide a tool for exploring the relationship between topological spaces and groups, the field of mathematics called algebraic topology (http://en.wikipedia.org/wiki/Algebraic_topology).


I always feel like I'm at least seven or eight layers removed from anything I can relate to when I read about category theory.

I'll think, "Hey, let me check this out on Wikipedia..." and I'm six linked articles deep trying to understand the first paragraph of the original article, and I'm no closer to anything concrete.

It's obvious at that point that the "top-down" approach is the wrong way to go, and that really the only way to develop an understanding of this is to take a class, or buy a book that may or may not be more tractable than Wikipedia.

At that point, I'll consider that there are many other languages that are supposedly esoteric that I've never had a problem grasping, and that I know of few advantages that Haskell offers over, say, F#, for doing things I need to do. And even though I've written some parsers in Haskell and it was a generally nice experience, I just don't have time to learn category theory from the ground up.

I also realize I really, really need to understand how my tools work from cradle to grave or I'm deeply unsatisfied.

At that point I abandon Haskell as a curiosity and go on with the rest of my life.


If you're interested in more of a bottom-up approach, I have a series of articles on my blog explaining category theory with the programmer's perspective in mind (though I do assume you know a little bit of higher maths; sets and functions, mostly). See the section titled "Computational Category Theory" on this page: http://jeremykun.com/main-content/

I've let the series sort of fall by the wayside recently as I focused on other content, but there's enough to get you up to a categorical understanding of map, fold, and filter (and a mathematical justification for why fold is the "most" universal one of the three).
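
(That universality claim has a concrete reading. Here is a sketch in Haskell, though the articles themselves use ML: map and filter both fall out of foldr, while fold cannot be recovered from map and filter.)

    -- map and filter, each defined via foldr:
    map' :: (a -> b) -> [a] -> [b]
    map' f = foldr (\x acc -> f x : acc) []

    filter' :: (a -> Bool) -> [a] -> [a]
    filter' p = foldr (\x acc -> if p x then x : acc else acc) []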


Thanks, these articles are excellent. In particular, the first one is great: it explains ML syntax in terms I can relate to other languages before you launch into the concepts.

Almost every Haskell intro I've read tries to explain concepts in Haskell's syntax. If the reader is unfamiliar with both, this is baffling.


The real problem, if you ask me, is that we've all gone through elementary and secondary school working with all kinds of mathematical objects without learning their proper names. Every kid ought to be told what semigroups, monoids, groups and rings are shortly after learning addition and multiplication.
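
For the curious, here is a rough sketch of how those structures nest, phrased as Haskell classes. The primed names are hypothetical stand-ins (base provides Semigroup and Monoid, but no Group class):

    class Semigroup' a where             -- an associative binary operation
      (<+>) :: a -> a -> a

    class Semigroup' a => Monoid' a where
      identity :: a                      -- identity element for (<+>)

    class Monoid' a => Group' a where
      inverse :: a -> a                  -- every element has an inverse

    -- Integers under addition satisfy all three:
    instance Semigroup' Integer where (<+>) = (+)
    instance Monoid'    Integer where identity = 0
    instance Group'     Integer where inverse = negate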


As the article here states, though, you don't need to learn category theory to learn Haskell. Most math is useful in driving intuition and insight, and category theory is no different in that respect. But learning Haskell (in particular, enough to use it and write normal applications and libraries, not necessarily to create the neatest cutting-edge type-trickery libraries, some of which are amazing) requires much more of a concrete "here's how you use these things" than any knowledge of category theory.


> I always feel like I'm at least seven or eight layers removed from anything I can relate to when I read about category theory.

> I'll think, "Hey, let me check this out on Wikipedia..." and I'm six linked articles deep trying to understand the first paragraph of the original article, and I'm no closer to anything concrete.

I feel like that with most Wikipedia math articles.


While the origin of category theory can indeed be described that way, and while the formalisms are weighty for anyone not familiar with mathematics-style proofs, the notions should not be too hard for someone with a good FP background. Especially typed FP.

The real problem is more a question of why should someone learn it to begin with. Category theory will inform you that pairs are also called products and they're somehow the best kind of product-like thing. Big deal?

I don't have a really great answer to this question yet, besides that CT is like taking an X-ray of computer languages and really asking "what does this mean?" Getting to see how a CCC (cartesian closed category) develops into a programming language with sensible notions of computation, dynamics, and simplicity is incredible, and will end debates about language semantics.

That and dualization. Once that's in your head you don't want to go back.
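
As a sketch of what dualization looks like in Haskell terms (Prelude only): the product type (a, b) and the sum type Either a b are categorical duals. Reverse the arrows on the product's projections and you get the sum's injections:

    -- product: two arrows out
    fstP :: (a, b) -> a
    fstP = fst

    sndP :: (a, b) -> b
    sndP = snd

    -- coproduct, the dual: two arrows in
    inl :: a -> Either a b
    inl = Left

    inr :: b -> Either a b
    inr = Right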


What a ridiculous assertion. Category Theory and the Chomsky Hierarchy are two easy-to-understand, key theories of Computer Science. If you don't have any understanding of these, you're not a professional programmer. Maybe "even abstract for mathematicians" should be "too abstract for detail-oriented math nerds, but pretty easy to comprehend at a basic level".


I don't think the Chomsky hierarchy is very useful for Computer Science, except the distinction between regular and context-free languages (which is extremely useful).

"Context-sensitive language" in the Chomsky hierarchy is not the same as what a programmer means when they say a language is context-sensitive. In programming languages, "context-sensitive" nearly always refers to semantic context-sensitivity (for example, the set of symbols that are defined at this point and their types), not the syntactic context-sensitivity described by the Chomsky hierarchy. For example, none of the context-sensitivity described on this page has anything to do with Chomsky context-sensitivity: http://eli.thegreenplace.net/2007/11/24/the-context-sensitiv...

I can't think of a single case where Chomsky type-0 or type-1 grammars are relevant to any practical concept in computer science. And due to overloading of the term "context-sensitive" learning about the Chomsky hierarchy can be actively misleading unless you are aware of this distinction.


> I can't think of a single case where Chomsky type-0 or type-1 grammars are relevant to any practical concept in computer science.

Not the grammars per se, but the notion that there are things computer programs cannot do -- and that we have a good handle on what these things are -- is an important one.

As for Chomsky's Type 1, I've seen no evidence that the concept of a context-sensitive language has been of practical use to anyone, ever. It looks like an idea that turned out to be a dead end.

EDIT. To clarify: Certainly there are CSLs that have been of practical use. But the fact that a certain language is or is not a CSL, does not seem to be something worth knowing.


This is exactly what I'm talking about, but because people already have a basic understanding of context and so on, they don't appreciate the significance of Chomsky's work. I don't know how anyone can even argue with me about category theory; I just want to put it off on them being disingenuous instead of thinking about it further or trying to argue with them.

When I see replies calling the Chomsky Hierarchy "extremely useful" and "not very useful" in the same sentence, while I'm downvoted for suggesting otherwise... Obviously, other commentators need to get their opinions straight before trying to impress them upon mine.


Wow, I actually think you are trolling. I never thought that trolling HN about obscure technical concepts would be a worthwhile use of someone's time.

Just in case you're not, and in case anyone else is following along at home, here is my position.

The Chomsky Hierarchy draws three distinctions or boundaries between grammars:

1. type-0 (unrestricted, recursively enumerable) vs type-1 (context-sensitive)

2. type-1 (context-sensitive) vs type-2 (context-free)

3. type-2 (context-free) vs type-3 (regular)

Distinction 3 is a deep and practical concept that all programmers should know. Distinction 1 is practically useless. Distinction 2 is actively misleading.

Also, when Chomsky developed this hierarchy it was in the context of natural language. He was trying to generate English sentences, not programming languages (which were still rather primitive in 1956 when Chomsky wrote his paper). It is not surprising that most of Chomsky's hierarchy is of limited use to Computer Science, since it was never intended to be.


I'm not sure what the Chomsky Hierarchy has to do with what we're talking about, but re: category theory, there's a difference between "understanding" and "familiarity."

I agree that the definition of a category is not a hard thing to learn. However, can you give me, say, three non-trivial examples of categories you'd expect a "professional programmer" to know? How about three non-trivial examples of functors?

If understanding category theory is a prerequisite for being a professional programmer, then 95% of the people I know who get paid to program aren't "professional programmers."

You're just being silly. Or trolling. Probably trolling.


Others have explained why the Chomsky Hierarchy isn't particularly useful to people working on designing or implementing languages. What IS useful, and what developed around the same time as Chomsky's theories, is BNF notation: https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_Form


A bit of a link-baity title. It's not that you shouldn't learn category theory, it's that you don't have to. Even if you want to program in Haskell.

But is it worth learning on its own? Probably. At least the basics are. You'll get more insight into the design of languages like Haskell and ML and acquire a bunch of new abstractions which have a very good "power-to-weight" ratio: that is, abstractions which are surprisingly general and expressive but also simple. The compromise, of course, is that these abstractions are really abstract; they do not admit good concrete explanations. At first, thinking about this sort of abstraction is difficult, but they become extremely powerful and convenient once you get used to them.

Category theory also helps people design libraries. In a sense, category theory is focused on composition, which is obviously integral to writing good code. Additionally, some extremely useful constructs--like prisms from the lens library--were discovered largely thanks to category theory.

Finally, category theory gives you a new perspective on programming. I find this very valuable because it gives me more than one way to think about whatever I'm working on. It's useful in the same way Curry-Howard is useful; in fact, there's a very natural extension of Curry-Howard to include categories with certain structure (the Curry-Howard-Lambek correspondence).

So yeah: please don't feel you have to learn category theory, but consider learning it anyway.


But the alternative he proposes actually sounds more daunting than learning category theory: "In fact, try substituting

    Monad    -> FuzzyWuzzy
    Functor  -> Banana
    Category -> Cheerios"

Sure, uh, all the other languages I've learned involved using opaque labels whose meaning I had to infer over time. No: that approach is for something like aesthetic theory. All the languages I've learned used pretty concrete, intuitive terms for their constructs, even if the terms weren't really accurate. Having to manipulate a construct without the slightest hint of its qualities sounds just maddening.

Learning a programming language is hard enough that I think the learner needs terms that suggest they're dealing with something concrete even if they're not.


Then try

    Monad -> Sequenced
    Functor -> Mappable
    Category -> ... I don't know, I don't use it much (not that it doesn't have a nice understandable alternative name, I just don't know it)

Monad and Functor are really quite simple classes to understand, and Tony Morris' 20 intermediate exercises listed in the article helped me greatly when trying to understand the concepts.

I've been a Haskell programmer for 6 years, and feel I know about zero category theory. It's never seemed to hurt me (except when Edward Kmett pipes up in IRC and I don't understand a word after that =)


For Monad, try "Flattenable". A monad is any type which:

1. Supports map.
2. Can be instantiated given a single value.
3. Can be "flattened" when nested two layers deep.

For example, an ordered list is a monad, because:

1. You can map a function over a list.
2. Given a value x, you can construct the list [x].
3. Given a list like [[1,3], [2], [], [4,5]], you can flatten it to [1,3,2,4,5].

There are a couple of other technical constraints, mostly related to what happens when you (for example) take a list, stick it into another list using rule (2), and flatten it using rule (3), in which case you're supposed to get back the original list. That sort of thing.

Monads are useful, and they appear everywhere; quite a lot of generic types support "map" and "flatten". Haskell obscures this simple structure in two unfortunate ways: (a) the first monad that everybody sees is IO, which is a freakish outlier, and (b) Haskell uses an equivalent formulation of monads based on a function "bind", which is essentially a map (to a doubly nested type) followed by a flatten.
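
Here is that reading for lists, sketched with only Prelude functions: map is rule 1, wrapping in a singleton list is rule 2, concat is rule 3, and bind is map-then-flatten:

    -- 1. map:     map (+1) [1,2,3]             == [2,3,4]
    -- 2. wrap:    \x -> [x]                    -- return/pure for lists
    -- 3. flatten: concat [[1,3],[2],[],[4,5]]  == [1,3,2,4,5]

    bindList :: [a] -> (a -> [b]) -> [b]
    bindList xs f = concat (map f xs)

    -- bindList [1,2,3] (\x -> [x, x*10]) == [1,10,2,20,3,30]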

Another way to look at monads is as "generalized list comprehensions" where "list" can be replaced with hundreds of different clever things. It's a clever little pattern that turns up everywhere, and when it does turn up, it tends to be a sweet spot in the design space. A good recent example would be the Promises/A+ spec for JavaScript, which is basically Haskell's Error monad.


I think the fact that two people suggesting this idea chose two completely different names to replace "Monad" demonstrates pretty well why trying to use an "intuitive" name is a bad idea.

I can just about get behind "Mappable" for Functor, but there simply isn't a unifying notion that explains what a Monad is in one word. Some Monads are better described by their join, and others are better described by their bind. You call IO a "freakish outlier", but Reader, Writer and State all follow a similar notion of sequence.


> I think the fact that two people suggesting this idea chose two completely different names to replace "Monad" demonstrates pretty well why trying to use an "intuitive" name is a bad idea.

Uh so,

Because when you have a complex idea which can be approached multiple ways, the best way to help people approach it is ... no way at all, no helping hands or training wheels. See my other complaints about "we don't want to let you be any bit OK until you have it all".


I'm not saying we can't use these words to help people understand, just that we shouldn't rename core elements of the language just for the benefit of beginners. Especially if the new names don't fit with half of their common use cases.

Learning a new word is not what makes Monads hard.


Yes, that works much better.

But that isn't what the article said.

I'm a C++ programmer who consumes the occasional Haskell article on HN. By dint of complaining and hearing sensible explanations, I feel like I'm "getting it". But I don't think that takes away from the notable phenomenon of opaquely ridiculous explanations of Haskell ideas that seem to pop up regularly.

http://www.chrisstucchio.com/blog/2013/write_some_fucking_co...

Edit: I kind of feel like there's this phenomenon where someone is a "real math guy", and so they'd rather people wholly ignore the structures the language is built on (call functors Poptarts, etc.) than give a simplified explanation which would leave them open to other real math guys complaining the explanation wasn't exact/cool/powerful enough.


I like the abstractions that _do not leak_.


> the people who designed Haskell decided they weren’t going to pretend they didn’t use math

This is almost certainly the decision that has kept the most people from successfully learning Haskell.

If you write "Appendable", many programmers go "oh, okay, got it." If you write "Monoid", many programmers go "yeeeah, I don't know if I have time to learn all this."

This is either a perfectly acceptable cost of avoiding redefining the wheel, or a cautionary tale for future language developers, depending on your perspective.


But "Appendable" is a terrible way to describe what a monoid is. Some of the really cool tricks in the way abstractions in Haskell match mathematical notions come from extending the abstractions beyond what you would expect. For example, integers under addition are monoids; so are naturals under the maximum operation, and so is the "over" operator from computer graphics. All of these find applicability as general monoids in Haskell, but you'd have a hard time calling them "Appendable". All of them just need a binary operation that's associative and has an identity element.
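
A sketch of those first two examples as they appear in base (assuming a modern base where Data.Semigroup's Max exists and (<>) is exported by the Prelude):

    import Data.Monoid (Sum(..))
    import Data.Semigroup (Max(..))

    total :: Sum Integer
    total = Sum 3 <> Sum 5 <> mempty   -- Sum {getSum = 8}; the identity is Sum 0

    biggest :: Max Integer
    biggest = Max 3 <> Max 5           -- Max {getMax = 5}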

The same thing happens for functors and monads: their usefulness comes from the properties they respect. And if you accept that abstractions will be defined as a set, some operations, and some algebraic laws respected by those operations, well, then, you've arrived at abstract algebra, and you might as well use the right terms.


Jeesh,

I have an MA in mathematics, but without knowing category theory it was opaque to me until just now that the monoids discussed around Haskell are the same beast I briefly dealt with in abstract algebra. Indeed, only by looking up "monoid" and "appendable" did this become semi-evident.

The Haskellians don't even help you out enough to point you towards ordinary abstract algebra, but instead send you all the way to category theory with sign-posts along the way.

The abstract structure monoid may not really be fully described by "appendable", but it's a leg-up. The objects in object-orientation get extended for convenience, but the term "object" still helps the learner feel like they've got something concrete.


First hit on google for "Haskell Wiki Monoid" was this: http://www.haskell.org/haskellwiki/Monoid

Lots of examples there, including a post by Dan Piponi that says, right at the beginning, "In Haskell, a monoid is a type with a rule for how two elements of that type can be combined to make another element of the same type. To be a monoid there also needs to be an element that you can think of as representing 'nothing' in the sense that when it's combined with other elements it leaves the other element unchanged. "

I don't know how you can construe that as "The Haskellians don't even help you out enough to point towards ordinary abstract but instead send you all the way to category theory with sign-posts along the way."


There are fabulously useful monoids that aren't, in terms of intuition, appending anything at all: Sum Integer and Max Integer, for instance.


Nevertheless, if you're a programmer with no particular exposure to that sort of math (which is to say, the overwhelming majority of programmers), saying "this function takes a Monoid" is as useful as saying "this function takes a Foogblort."

Saying "this function takes an Appendable" conveys useful information.

It's a matter of priorities. If your priority is using the correct mathematical term, you necessarily sacrifice clarity to newcomers. Hence, fewer newcomers.

...of course, not that Haskell as a language has ever strived to attract newcomers!


I think Combinable is more appropriate as 'append' pertains to only one of many associative binary operations.


What about a group? Is that combinable too? Monoid is short, precise, and googlable. Which also happens to be the case with group, monad, functor, etc. Why invent new vocabulary?

Have you noticed how little sense the words "class", "struct", "union", "object" make to someone who hasn't programmed before? It's just that we got used to them.


Classes, unions and objects are all everyday words. I knew them from mathematics and everyday life before I ever used them in programming.


I've considered Reducible, but I'm not sure that makes much sense either (when using a -> a as a monoid, for instance, you're not really reducing anything). Combinable doesn't work because you're not necessarily combining the values (max, for example, doesn't really combine; it selects). So my conclusion is that Monoid is a perfectly good name, because that's exactly what a monoid is: something with an associative operator and a value which is the identity for that operator:

    (&&), True
    (||), False
    (+), 0 -- addition
    (*), 1 -- multiplication
    (++), [] -- concatenation of lists
    max, minBound -- maximum
    (.), id -- function composition and the identity function id x = x
    
these are all monoids; it's such a simple concept that high school students could understand it within 10 minutes, and yet people whinge about calling it what it is. Good programmers are not people who are turned away this easily by terms they don't know.
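
As an illustration, the last of those written out as an actual instance. This is essentially Data.Monoid's Endo, reconstructed here as a sketch (assumes a modern GHC where Semigroup is Monoid's superclass):

    newtype Endo' a = Endo' { appEndo' :: a -> a }

    instance Semigroup (Endo' a) where
      Endo' f <> Endo' g = Endo' (f . g)   -- (.) is associative

    instance Monoid (Endo' a) where
      mempty = Endo' id                    -- id is the identity for (.)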


Oh yeah, I completely agree. There's no reason to be afraid of the terminology. I was just suggesting that Combinable seems a closer fit than Appendable for the general case (although, as you and others have pointed out, even it falls short).


AssociativeCombinableWithIdentity

... but I would rather learn what a "monoid" is, than type that out every time.


I also think that there's a clash of naming conventions going on. Programmers are used to naming types for what their values are, as in "4 is an Int", or even abstractly, "4 is Serializable". Haskell conforms to this convention with type and class names like "Int", "String", or "Num". But classes with names like Monoid don't conform to that, as 4 is not "a Monoid". The set of integers with + is a monoid, but that's a property of the type itself, not of its values. (Edit: And don't get me started about "IO". What's "an IO" even supposed to be?)


And similarly, the list [1,2,3,4] is not a monad, and that's why "flatmappable" isn't a sensible alternative to "monad", as Bracha recently suggested.

Though some have argued that this confusion indicates a hole in our vocabulary:

http://blog.plover.com/prog/haskell/monad-terminology.html


Great link, that's about what I'm getting at. MJD states it more clearly than I could.


You noted a pattern in how programmers name types, and then complained that the naming of a type class doesn't conform to that pattern. Of course 4 isn't a Monoid; rather, Sum is a Monoid.

As for IO, the best way to read it would be as an "IO action", which is what much of the literature refers to it as. Yes, this isn't trivial to understand, but IO in Haskell isn't trivial, and you can avoid the formalism if needed until you understand things a bit better.


I'm fine with Haskell using a different convention if there actually is one.

And your other explanation proves my point: If an "IO" instance is best described as an "IO Action", then why is the type not called "IOAction"?


Because not everyone loves excessive verbosity.


Well, it's you who said that "IO action" is the best description, not me. I personally wouldn't consider 8 characters excessive for a type that's imported into your program's namespace by default. Especially considering that you would hardly ever have to explicitly write that in Haskell.

Then again, my code doesn't have to fit into 20-character research paper columns.


I was responding to your question, in which you were asking what "an IO" was. You seem to have some kind of a grudge against Haskell. I'm not going to argue with you about it. You're welcome to continue programming in Java for as long as you'd like.


I'm only arguing about naming conventions here -- I probably wasn't clear enough about that. I know what the IO type constructor does. My point was merely that if no one ever says "getChar is an IO", then maybe its type shouldn't be called IO.

For all of Haskell's beautiful concepts, it does have some flaws. I think that overly cute (and, as I explained, sometimes inconsistent) names are one of them, because they're often at odds with readability. Haskell's identifiers are not Perl-level bad, but they are not shining examples of clarity either.

I hope I didn't offend you with this. I don't actually think we'd disagree much if we had discussed this face-to-face.


Nope, no offense taken at all. I just disagree with the verbosity approach to writing software. You referred a couple of times to the name as a "description", but there's a big difference between names and descriptions. They serve different roles. IO is a good concise name, and if someone wants to know more about it, they can go read its description in the Haddocks or other supplementary texts.

People don't say "getChar is an IO", but they (when typing) would probably say that "getChar is an IO ()", or when speaking "getChar is of type IO unit". Reading it as "IO action" is useful pedagogically, to answer questions like "what is an IO?", but typing out IOAction every time is annoying and unnecessary. Especially because then MonadIO would have to become MonadIOAction, which is just silly.

One of the worst trends in contemporary software engineering is to conflate names and descriptions, so you end up with absurdities like SimpleBeanFactoryAwareAspectInstanceFactory or InternalFrameInternalFrameTitlePaneInternalFrameTitlePaneMaximizeButtonPainter* . Yes, I'm picking on Java and these are extreme examples, but it's just the exact same principle applied to a much greater extent.

*Found in Spring and JDK, respectively.


The problem with analogies is that they only work in one direction on the abstract<->concrete spectrum. It's perfectly fine to talk about lists under the append operation as an example of a monoid. It makes no sense whatsoever to try to reverse that relationship because not all monoids have anything remotely analogous to appending.

This principle applies to pretty much all ways of categorizing objects. We can say that dogs are an example of mammals but to turn around and call all mammals "doglikes" or "doggables" would be silly.


Kinda off-topic, but I also have to say "appendable" is a terrible choice. It would imply that those objects (or variables, or whatever) support an efficient append operation, which is usually not the case.

(Sorry, I couldn't get over the fact that appending a character to a string (or a list) is an O(N) operation in Haskell.)
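
A minimal illustration of that asymmetry, assuming ordinary cons lists:

    appendChar :: String -> Char -> String
    appendChar s c = s ++ [c]    -- O(n): rebuilds the whole spine

    prependChar :: Char -> String -> String
    prependChar = (:)            -- O(1): one new cons cell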

Now, if you named them "prependable", then maybe...


"Max 3 <> Max 5" isn't really appending or prepending, though.


And maybe it is a tale promoting more theoretical computer science in the education of programmers. For example, monoids should be familiar because the transition functions of a minimal DFA generate the syntactic monoid [1] of the accepted language, with composition of transitions as the monoid operation.

[1] https://en.wikipedia.org/wiki/Syntactic_monoid


It's not really Appendable though, because e.g. you can write a Monoid instance for various number-y things like Sum and Product.


And you could write an Appendable instance for them as well, it would just fly in the face of what the word "append" means to most people. (Nitpick, of course.)


Sure, but is that more understandable than using "monoid"?


Don't know enough to know whether the current name is optimal, but at least "Monoid" correctly says to the uninitiated "you don't understand this yet." "Appendable" lies to you and says "you understand this" even if you don't.


"Monoid" also has the advantage of being very searchable. First result on Google (from a private browser window) is the Wikipedia page with the snippet "In abstract algebra, a branch of mathematics, a monoid is an algebraic structure with a single associative binary operation and an identity element". DDG gives similar results, though the Doctor Who aliens are higher up...


Yes. If I've got used to the idea that lists are appendable and then someone tells me I can "append" two numbers by adding them and that will fit the interface... yeah, that makes sense. It might be a bit confusing on the way, but better to use a term that covers the most common case than a term that's completely alien.


Appendable is more akin to a semigroup than a monoid though, no?


Yes, "appendable" to me doesn't imply the existence of an identity.


Yes, but the point stands. :)


If this is all it takes to turn people away, then I would say that the problem is that those people have an irrational fear of mathematics. At that level, it's not about having a tool that is too abstract or difficult, or not being gifted at abstract thinking; it's about those people having a conditioned knee-jerk reaction to math-sounding things.

Mathematics is hard, at least for me, but I have accepted that I have to get over my preconceived notions and at least try to look beyond the jargon. And these are mere names, not "let's cook up an ad-hoc notation and heavily overload and abuse it in places because 'the meaning is obvious from the context'". This is only a question of choosing name A or name B. I can't imagine it will get any harder by choosing either of them.

At least Haskell clearly states, with its esoteric naming: "what you are about to get into is quite strange, and quite different from anything you've probably encountered in programming before." And if people can't look beyond scary-sounding words, maybe that kind of programming is not for them (in which case, it's good that they found out straight away), or maybe they need to work on their own phobias.


I call mega-exponential-factorial-to-the-n bullshit. I tried doing exactly that. You can't learn Haskell beyond superficial hello-world apps without learning what a monad or a functor is, because there aren't any substantial teaching materials that don't force you to understand them in order to continue learning, and all the common libraries require you to already know these concepts, because they are undocumented and have no examples.

Can you use Haskell without understanding how those things work? Sure...if by "use" you mean you are limited to copied and pasted code snippets without an ability to understand what your code is doing, with the slightest change causing unintelligible compile errors.


My impression of the article was not that you don't need to know what monads and functors are, but rather that you don't need to understand them on the level of category theory. I think that's a true statement, if your interest is in simply writing software in Haskell and not in breaking new ground in the Haskell community (like writing the lens library itself).


There's a significant difference between a wide understanding of category theory in general and understanding what a monad or a functor is used for in Haskell. They're just type classes, nothing special. There's not all that much more to know about them besides their definitions.
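
For reference, here is roughly what those definitions look like. This is a simplified sketch of the classes as the Report presents them (current base also routes Monad through an Applicative superclass and adds more methods); the Prelude import just avoids clashing with the real classes:

    import Prelude hiding (Functor, fmap, Monad, return, (>>=))

    class Functor f where
      fmap :: (a -> b) -> f a -> f b

    class Monad m where
      return :: a -> m a
      (>>=)  :: m a -> (a -> m b) -> m b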

I rather disagree that they are undocumented and have no examples. Section 6.3.6 of the Haskell 2010 Report defines the type class and provides examples: http://www.haskell.org/onlinereport/haskell2010/haskellch6.h...

See also, the typeclassopedia, which walks through a simple definition, many examples, and discusses general intuition: http://www.haskell.org/haskellwiki/Typeclassopedia#Monad

Perhaps I've misunderstood you, and maybe you mean that common libraries are undocumented and have no examples? That certainly hasn't been my experience; most libraries I've run into have plenty of documentation for me. Can you give me some examples?

Yes, I agree that trying to write Haskell without being able to understand simple type classes would be very difficult and confusing, but I disagree with what I understand the implied point to be; perhaps you could clarify it for me? I understand you to be saying that you expect to be able to use a language proficiently with negligible understanding of how data types work in the language, or of the common data types defined in the standard library. That sounds like nonsense to me, so I suspect I'm confused; can you be a bit more explicit in your criticisms?


You have to learn what Monad and Functor are, but you do not by any means need to learn what a monad is, or what a functor is. To put it another way, you need to learn the Haskell type classes that have those names, but you do not need to learn the category theoretical constructs from which those names were taken. I believe this was the entire point of the article. The Haskell type classes are actually not too hard to learn at all, if you give it a bit of effort.


This is a nice, short article which basically says, "You don't actually need to go learn a bunch of category theory if you want to learn Haskell, so please don't stress about it." Despite the title, it's not any kind of general argument that no programmer should learn category theory.

Category theory is a strange branch of math—it's almost entirely definitions, with only a handful of interesting theorems. As far as I've been able to understand, category theory is a very abstract analogy between very different branches of math: abstract algebra, topology, mathematical logic and (here's the interesting part) the lambda calculus, via Cartesian closed categories.

This analogy has interesting payoffs, especially for programming language designers and people trying to do seriously weird stuff. For example, let's imagine you have a value of type A, a function from type A to type B, and another function from B to C. But if you squint at these types right, it's the same as a logical system where A is given, A implies B, and B implies C. Now, it turns out we have automated theorem proving software that can take statements like this and figure out how to get from A to C (even in much more complicated situations). And it turns out the analogy between types and logic is sufficiently strong that you can actually use a theorem prover to write certain kinds of programs without any human intervention, given nothing but the type signatures of your functions.
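
Here is a minimal rendering of that example in Haskell terms (tools in this family, such as Djinn, really can derive the body from the type alone):

    -- Given A, a proof of A -> B, and a proof of B -> C,
    -- "proving" C is just composition:
    getC :: a -> (a -> b) -> (b -> c) -> c
    getC x f g = g (f x)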

And this is not the only clever analogy lurking here: My favorite is the way that probability distributions can be mapped onto the lambda calculus.
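
To give a taste of that mapping, here is a bare-bones sketch of a probability-distribution monad. The Dist type and bindD are made up for illustration, not taken from any particular library:

    newtype Dist a = Dist [(a, Double)]   -- outcomes paired with probabilities

    -- map a function over the outcomes, keeping probabilities intact
    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

    -- bind: draw from the first distribution, feed each outcome to the
    -- second, and multiply probabilities along every path
    bindD :: Dist a -> (a -> Dist b) -> Dist b
    bindD (Dist xs) f =
      Dist [ (y, p * q) | (x, p) <- xs, let Dist ys = f x, (y, q) <- ys ]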

So, no, you don't need to know all this math to write Haskell programs. (Although some library authors are a little nuts, generally in a good way.) And yes, category theory is a weird branch of scarily-abstract math without many theorems, and Haskell only uses certain corners of category theory. Still, there are times when category theory will allow you to see how far-flung branches of math can be mapped tidily onto the lambda calculus. And that should be of interest to Lisp developers, at least.


You need the words to be able to use Haskell. There are too many of them to learn otherwise. I have a similar experience with scalaz, where I tend to only spot useful things in the library after I've reimplemented them, because they don't have names that let me find them based on what they do.

(Possibly the only benefit of a previous company that wouldn't let me use scalaz was that, in reimplementing it myself, I got to give the typeclasses sensible names like "CanFlatMap")


Finally someone has the courage to tell the Truth about CT.

Next up: the idea that the notions of Monad, Functor, and Category (in whatever guise) are even needed or helpful to do strong FP.


I was able to be fairly effective (imho) in clojure/scala without knowing anything about fuzzyWuzzies, bananas, and cheerios. I felt learning about them becomes interesting though as you dig into the language more. Mostly as a curiosity thing... spend some time to "finally learn about monads" to see you've been using them the entire time.


I like how he wrote a post a few days later, Learn You Some Category Theory:

http://jozefg.bitbucket.org/posts/2013-10-22-category-theory...


Strictly speaking very little is needed. You can write everything in assembly if you'd like. Helpful, though? There's absolutely no question. Type classes like those you mentioned make programs more concise, more understandable* , and more general. All of those are good things.

* Specifically, understandable to people who know the language and concepts. Trying to be understandable to everyone is a terrible idea. For example, Cucumber.


I regularly miss Monad as an abstraction in my C code and Functor in my C#. They're just plain helpful, FP or not. Needed? Probably not, though a case could be made that monad is sufficiently helpful to deserve that title for dealing with sanely ordering IO in a lazy language.


Are they needed? No. Are they useful abstractions? Yes. Like most abstractions in Haskell the theory is always optional.


Makes sense if your goal is to use some particular language (in this case Haskell). But a lot of us don't necessarily need yet another programming language, whereas adding to our theoretical knowledge can be beneficial no matter what environment we're programming in.


Side note from this submission: I learned that you can host HTML on Bitbucket, like on GitHub

https://confluence.atlassian.com/display/BITBUCKET/Publishin...

Nifty!


Linkbait trash.

"Don't do this thing everybody says you should do. But I do and I like it."


That's a very dishonest summary of the article. What he's actually saying is this:

"Don't do this thing everybody says you should do, except if you're actually interested in the math."


Then he really should have titled it, "You don't need to learn category theory", not the imperative "don't...". As it is, this is a textbook case of linkbait: choosing a dramatic title to get clicks.


With such a great number of people around me dabbling in Category Theory, I'm not sure it's worthwhile anyway. I don't like crowded areas. Maybe learn something useful but less hot - cryptography perhaps?


Are you too cool for category theory?

Fashion is a terrible reason to learn math, or to decide against learning it. Don't be a hipster.



