I'm really curious why Haskell has seen so little adoption in industry. Is it just the difficulty? Or a chicken-and-egg effect with tooling and libraries?
One thing I've wondered is if it actually isn't ideal for a lot of cases. FP is beautiful for certain things. But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space. For these cases, the FP answer is usually "recompute and replace" (generally with immutable data structures that make this efficient). This can be syntactically clunky when it's a major part of your application, and not just a necessary evil to be swept out to the edge. The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).
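To make the "recompute and replace" style concrete, here's a minimal sketch using `Data.Map` (from the containers package that ships with GHC): an "update" builds a new map and leaves the old one intact, with the two versions sharing most of their internal structure.

```haskell
import qualified Data.Map as Map

scores :: Map.Map String Int
scores = Map.fromList [("alice", 1), ("bob", 2)]

-- "Update" by building a new map; `scores` itself is untouched, and the
-- two versions share most of their spine internally, so this is cheap.
scores' :: Map.Map String Int
scores' = Map.insert "bob" 99 scores

main :: IO ()
main = do
  print (Map.lookup "bob" scores)   -- Just 2
  print (Map.lookup "bob" scores')  -- Just 99
```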
It's 100% the difficulty. Anyone saying otherwise is lying because they want Haskell to be popular, adopted it very early on in their career when they could really invest in it, or is a natural at this type of stuff and simply doesn't know any better.
I've learned about 10 different languages now and Haskell easily had the highest learning curve. Most languages I was able to get semi-usable in within a couple days, at least to do some minor stuff that works. But Haskell was a real commitment that took months until I was comfortable doing real stuff.
You have to learn how to mutate data and manage state using monads and functors, and how to query complex objects using lenses; even parsing some complex data from a JSON feed into a usable form requires a good understanding of the type system.
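For a taste of why lenses exist at all, here's a small sketch (the types are made up) of updating a nested field with plain record syntax: every layer of the structure has to be rebuilt by hand, which is exactly the clunkiness lenses abstract away.

```haskell
data Address = Address { city :: String } deriving Show
data User    = User    { name :: String, address :: Address } deriving Show

-- Pure "update": rebuild the outer record around a rebuilt inner one.
-- With lenses this would be something like: user & addressL . cityL .~ c
moveTo :: String -> User -> User
moveTo c u = u { address = (address u) { city = c } }

main :: IO ()
main = print (moveTo "Oslo" (User "ada" (Address "London")))
```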
But ultimately besides when I first learned Clojure (my first exposure to FP), I don't think there's a language that has taught me as much about programming as Haskell. It was very rewarding and something I still continually dabble with and learn from on the side.
I've yet to pull the trigger and actually build a full side project with Haskell. Which is typically my biggest test. I've found Erlang/Elixir to be the far better middle ground from my day-to-day work in Ruby/JS when I want something modern, fast, and functional.
Once PureScript becomes stable I have a feeling I'll be diving harder into Haskell and may finally make that full commitment it requires for a real project.
I agree. You master monads and IO, but then you want to use a web framework and there a bunch of other category theoretical concepts and/or Haskell advanced features you need to understand to serve up "Hello World". Compare that to expressjs. I love functional programming, but given the free choice of what to use to knock up a side project, I chose JS at home. Nothing like grabbing some data from a server, and it being an object you can use immediately in your code and access using indexers, properties etc (rather than some strange lens operator like ~~!!#).
Typescript cures most of the JS ills. Sure, TS is more akin to Java/C# than to the much better Haskell type system. But it is good enough and catches 99% of the real-world problems of naked JS.
To me Haskell is training for your brain. Once trained you are a better JS/C#/Java/C++ programmer.
I could go on about the economics of getting a Haskell job: Haskell and Elm developers are taking a pay cut compared to what they would earn using almost any other language. Even if they are paid well, they could get paid more. Supply/demand at work there.
> I agree. You master monads and IO, but then you want to use a web framework and there a bunch of other category theoretical concepts and/or Haskell advanced features you need to understand to serve up "Hello World". Compare that to expressjs.
I started using haskell at my last job after having done a couple years of functional programming in scala, but I honestly never found it required that much knowledge of category theory. The FAM trio of classes (functor, applicative, monad) are the most common ideas used from category theory in an explicit way, and they're frankly so prevalent at this point in the industry at large that they're in basically every language in extremely common libraries. Most programmers are pretty familiar w/ map and bind (which is also sometimes called flatMap, then, and_then, or chain). Applicative functors might be a little more exotic, but they're easy to pick up if you understand the other two. And I don't think you really need to understand what a morphism is or how to read arrow diagrams to really use any of these things. If you can use arrays in javascript, you can use the IO monad.
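To make that map/bind correspondence concrete, a small sketch using Maybe (the `half` function is made up): `fmap` plays the role of JS's `map`, and `>>=` plays the role of `flatMap`/`then`.

```haskell
-- A partial function encoded as Maybe: halving only works on evens.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (fmap (+1) (Just 4))        -- Just 5   (like array.map)
  print (Just 8 >>= half >>= half)  -- Just 2   (like flatMap chaining)
  print (Just 3 >>= half)           -- Nothing  (failure short-circuits)
```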
Most of the challenge in haskell for me was figuring out stuff like how I should structure my code to enable mocking out effects for testing (there's multiple answers to this w/ different tradeoffs), or how to get good editor support without ghc-mod breaking. I haven't used haskell in industry in about the past two years, but I imagine these are still some of the biggest challenges w/ using it.
Do you think it would be dramatically improved if Haskell had a bigger community of people contributing user-friendly libraries and tutorials? That was something that made Ruby the perfect newbie language for me. And something I feel is downplayed in Haskell, given its more advanced user base, which is a little too obsessed with its power and with demonstrating their knowledge, rather than with making it accessible to others.
I can't count the number of times I came across a popular Haskell library with very little documentation beyond its type/function API and a general blurb about what it does. This is very different from most popular JS/Ruby/Python/etc libraries, which include quick-start guides, getting-started docs, usage examples, etc.
> Do you think it would be dramatically improved if Haskell had a bigger community of people contributing user-friendly libraries and tutorials? That was something that made Ruby the perfect newbie language for me. And something I feel is downplayed in Haskell given it's more advanced user base who is a little too obsessed with it's power and demonstrating their knowledge as such, rather than help make it accessible to others.
Absolutely. I think it's been getting better at that on the tutorial front. For a long time there were very few books I'd actually recommend for haskell, but Haskell Programming from First Principles [1] changed that for me (personally). And I think Stephen Diehl's excellent What I Wish I Knew When Learning Haskell [2] is a fantastic general resource for newbies. But I think the community could stand to have more of this, absolutely.
And I definitely agree about libraries. I'd like to see more haskell libraries with extended tutorials, and written w/ a non-expert audience in mind. I think the community has for so long been dominated by long-time haskell developers and people ensconced in functional programming, much of the documentation is written for those kinds of people.
Ergonomics is probably the number one thing that matters in the long run for computer languages.
Make the frequent things easy and safe. That means syntax should be easy to read and write, documentation should be plentiful and easy to follow, and libraries should be easy to start using even if you are not an expert in the area that particular library covers.
Rust also suffers a bit from this, as more and more libraries ship with just the generated docs. Here are these 30 structs and 100 impls, godspeed. Yeah, but what is this library, and when would I use it? What are the 100 most common use cases?
And 100 might seem like a lot, but people will choose whatever instantly works for them. And there are a lot of strange stacks out there. Sometimes people just want the low-level bits of your library. No docs/API for that? Damn. Sometimes people just want to use it as a one-liner: no config files, no import-server-deploy, no binary? Damn.
The more flexible a language is, the more documentation its libraries need.
One often-ignored benefit of constrained languages like Java where there's really only one way to do most things, is that when you get a library you immediately have a reasonable idea of how to use it (given the entity names and types). If a language has a more advanced type system (or none at all), you can't lean on that existing framework and the author needs to do more legwork to explicitly lay out exactly what it is they've made. This doesn't make more advanced languages bad, but library documentation is often neglected and that probably contributes a lot to the inaccessibility of those ecosystems.
How far can you go without monads and category theory? I was looking at Clean, which developed in parallel with Haskell and uses uniqueness typing instead of IO/mutation monads. Seems like it would be less offputting for a newcomer. Clean lacks a community and a package manager, I believe, which makes it less attractive. The language itself seems like a sweet spot for me.
If you want to be productive in Haskell, the Monad typeclass is an important tool to familiarize yourself with. That said, unless you are working on the internals of a few libraries, you don't really ever need to know serious category theory in order to be very productive. If you don't already have a background in abstract algebra or category theory, I think a better approach to learning these abstractions is to slowly work through the typeclassopedia[0] while solving problems the naive or clunky way, and then start to use the fancy-name abstractions (e.g. Functor, Applicative) as you see how they could be useful.
On that front, the Monad typeclass is far more general and useful than just for IO and State, so if you are thinking of it as primarily a hack to deal with those, you probably won't get the hype. In addition, it's really useful to work with a large number of examples of different Monad instances (IO, State, Maybe, List, STM[1] if you want to get a bit further into the deep end) instead of just staring at the methods in the typeclass and hoping it makes sense. It's a pretty broad abstraction, so it will only make sense if you are familiar with what it's abstracting.
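A sketch of that generality (`pairs` is a made-up name): one function written against nothing but the Monad interface, reused unchanged at Maybe and at lists, where it means two quite different things.

```haskell
-- Combine two computations in *any* monad, collecting their results.
pairs :: Monad m => m a -> m b -> m (a, b)
pairs ma mb = do
  a <- ma
  b <- mb
  return (a, b)

main :: IO ()
main = do
  print (pairs (Just 1) (Just 'x'))  -- Just (1,'x'): both succeeded
  print (pairs [1, 2] "ab")          -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]: all combinations
  print (pairs Nothing (Just 'x') :: Maybe (Int, Char))  -- Nothing: failure propagates
```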
The other partial reason for not wanting to know is that I can't then unknow. Willful ignorance it is. Can those who know say they're happy using languages on the daily without the features you miss and think in?
Thanks for this info. What I really want to know is the value prop for learning/using monads etc. The examples of IO, State, Maybe, List, STM given seem like they'd be dealt with just fine in Clean.
I think it's important to be precise about how the monad abstraction and type system features interact in order to combine pure and impure code. I wrote a comment elsewhere in the thread (https://news.ycombinator.com/item?id=20112333) where I conclude that while the monad abstraction is useful for making a usable interface and writing programs which are agnostic to how their state is implemented, the fundamental work of distinguishing pure and impure computations is accomplished with a combination of type system features and compiler magic.
These both implement IO as a function which takes the state of the world as its input and returns a new state of the world as its output. This is encoded using the state transformation that I mention in the other post (https://acm.wustl.edu/functional/state-monad.php). Both implementations also go on to define Monad instances for their new IO type. The major difference is that the Haskell standard library only exposes bindIO and returnIO and hides the internals of IO from the user, while Clean allows the implementation to be a normal library.
That difference is Clean's uniqueness types showing their strength. Clean can explicitly expose its predefined World type (https://cloogle.org/doc/#CleanRep.2.2_6.htm;jump=_Toc3117980...) to the user with the guarantee that you can't write a function of type IO a -> a, because it would violate the uniqueness properties and thus be a compilation error. Haskell instead uses the module system to keep State# RealWorld from being exposed to the user. This means that if you as a user want a different set of abstractions for impurity in Clean, you can build from the World level rather than needing to construct it out of what can be done with bindIO and returnIO. For details on what's going on with State#, see https://www.fpcomplete.com/blog/2015/02/primitive-haskell
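As a rough sketch of that World-passing encoding (a toy model for illustration, not GHC's or Clean's actual internals; `MyIO`, `World`, and `tick` are all made-up names):

```haskell
-- Stand-in for "the state of the universe"; real implementations make
-- this abstract (Haskell) or unique (Clean) so it can't be duplicated.
newtype World = World Int

-- IO as a function threading the world through: World -> (result, World).
newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

instance Functor MyIO where
  fmap f (MyIO g) = MyIO $ \w -> let (a, w') = g w in (f a, w')

instance Applicative MyIO where
  pure a = MyIO $ \w -> (a, w)
  MyIO f <*> MyIO g = MyIO $ \w ->
    let (h, w')  = f w
        (a, w'') = g w'
    in (h a, w'')

instance Monad MyIO where
  MyIO g >>= k = MyIO $ \w -> let (a, w') = g w in runMyIO (k a) w'

-- A fake effect: advance the world and report its old state.
tick :: MyIO Int
tick = MyIO $ \(World n) -> (n, World (n + 1))

main :: IO ()
main = print (fst (runMyIO (tick >> tick >> tick) (World 0)))  -- 2
```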
From the perspective of "the value prop for learning/using monads", this discussion leaves us in a worse place than where we started, because the conclusion is that uniqueness types aren't a get-out-of-monads-free card: Clean uses many of the same Monad-related abstractions as Haskell does and uses them for the same purposes. In order to not leave you out in the cold as to the value prop for the monad abstraction, you can see how it works for a number of different Monad instances in Tikhon's answer on Quora (https://www.quora.com/What-are-monads-in-functional-programm...), though he chooses to use join, fmap, and return as the fundamental parts of a Monad, rather than bind and return, as I have here. As he touches on in his discussion, it's common and straightforward to implement one definition in terms of the other, so anything you learn about that definition can be ported to the definition I use without much fuss; don't worry if it doesn't match at first. What this means is that if you have tools in the standard library that only depend on the features of Monad, you can use the same small collection of functions to solve a ton of problems.
This is what I needed to hear. I haven't used either Clean or Haskell beyond playing with them, and my introduction to Clean seemed more lightweight: there was some syntax that made uniqueness seem easier than the equivalent in Haskell. On further reading, seeing the u:[...] syntax rather than *, I can see they're really quite the same. The way the uniqueness attributes are described makes them seem like a separate axis from data types, whereas in Haskell there's just so much type. Also, the docs for Clean avoid category theory terms for the most part, getting you started by showing the syntax to do certain things.
I expect to be coming back to this comment and the references quite a few times until it clicks for me.
I finally think I get monads. And I don't believe it's hard to explain or hard to understand; it's just that almost all the explanations are bad, and you have to go through so many of them to put the pieces together.
> How far can you go without monads and category theory?
Without monads? Not very far, but they're far simpler than the wide web would have you believe.
Without category theory? Sky's the limit. I say this as an experienced Haskell programmer occasionally dabbling in category theory for funsies. The practical impact of category theory on "how easy is it to program Haskell" is basically zero.
I didn't mean how far in Haskell without monads. I meant how far in something else like Clean that uses uniqueness typing to handle some of the same aspects where monads would be used with Haskell.
What i'd love is an imperative language where you declare what effects a method can have and the compiler enforces them. E.g. if it promises not to mutate any parameters, it can only call functions that make the same promise etc.
I think it'd be possible to write similar effect systems in other dependently typed languages like ATS, which is a relatively imperative language (C+ML-like).
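In Haskell itself you can get a version of this with typeclass constraints: a function's type lists the effects it may use, and calls to anything outside that list won't typecheck. A minimal sketch (the `Teletype` class is made up for illustration; real code would use mtl's `MonadState`/`MonadIO` or an effect-system library):

```haskell
-- An "effect" declared as a typeclass: the ability to emit a line of text.
class Monad m => Teletype m where
  putLine :: String -> m ()

-- Pure helper, trivially testable.
greeting :: String -> String
greeting name = "hello, " ++ name

-- This function's signature promises it only uses the Teletype effect:
-- attempts to read files, fire network requests, or mutate anything else
-- simply won't compile, because those capabilities aren't in scope.
greet :: Teletype m => String -> m ()
greet = putLine . greeting

-- Interpret the effect in real IO.
instance Teletype IO where
  putLine = putStrLn

main :: IO ()
main = greet "world"  -- prints "hello, world"
```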
Over time, I don't think the amount of stuff you have to learn in Haskell is more than in other major languages. Haskell requires that you learn some math, and that math had some large humps to get over. But it's a small number and you're actually just learning math, which is general and useful outside of one language.
In C++, I had to learn a bunch of corner cases and committee decisions. In Java it's a huge library and stack of idioms to get anything done. In both cases, that information didn't make me any better a programmer in the general sense, just a disappointed one in each language. I'd argue that the sum amount of information I had to learn for either of those languages was more than Haskell, because I can leverage the abstractions much more, and only need a few of them.
Monads, applicative functors, lenses, and you've got your primary toolbox sorted out. That's three humps. C++ has few humps, but it's got miles of an uphill march. I have no polite way to describe the Java experience.
The Java experience is a vast vista of beautiful sci-fi-ish landscape, elegant industry laid out according to a master design. Big enterprises linked with glowing pipes and service buses, nice portals pulsating from the radiance of alien beans. Stylish OSGi towers reach to the sky in the background. And the Mavens offer everything that is good on the mvn-central plaza.
But as you get closer, as you try to set up your own factory, you find that you need to use that thing, that nobody uses anymore, from the dark side of the moon, and to interface with that you need to venture deep into the core, and it's factories all the way down. Layers and layers of boilerplate, and inventors screaming in horror at the banality of the gaping holes in the type system that holds the huge sphere of backward compatibility on its shoulders.
And then there's no way back. You are already versed in the dark arts, you have been to the MetaSpace, your heart is no longer yours, but it's a slightly patched G1GC and you dream with the lush murmur of cybernetic trees from Shenandoah.
The problem with that explanation is that Haskell is no more difficult than Rust. (Someone who knows Rust well must learn about the IO monad and then can start being productive in Haskell. Unlike what one comment on this page implies, there is no need to learn any category theory to be productive in Haskell.) Yet Rust has been used in about as many successful projects as Haskell despite Haskell's having a head start of about 30 years on Rust.
I agree Rust is similarly difficult to learn to the point of producing real-life software people use (rather than toys). But it's still very much in early-adopter territory. Even today on HN was the first stable version of a web framework that seems to be the best of the pack in Rust.
Haskell has been around for 29 years. It's really not a fair comparison.
I never said you have to learn category theory or any math with Haskell either. My learning curves I mentioned were strictly practical (mutation/effects, state, lenses, more than basic types etc).
The quality and quantity of books, tutorials, community, libraries, etc. plays a big role in any language's learning curve, no doubt. I believe this is something that could still be greatly improved in Haskell.
But at the same time, I don't think it's surprising that Rust was able to get all of those up to Haskell's level of quality in a short time. Rust also has far more analogies and similarities to what most C/C++/Go/Python/Ruby users have been exposed to, which makes the initial get-a-basic-script-working phase far easier.
So I'm not dismissing Haskell as destined for unpopularity just because it's hard. The larger the investment in docs, websites, libraries, and similar languages like PureScript/Elm, the more popular Haskell will become.
Rust had a bunch of smart marketing-friendly people join in over the last couple of years, which took it from a niche systems language into something far more mainstream. Other successful languages had similar growth patterns early on, while Haskell people were comfortable with its fringe academic position for a long period (which it seems to have finally grown out of). Haskell could still achieve a similar trajectory, but adoption by early-adopter non-academic developers will be critical for that growth. Even if that must include the trendy Ruby/JS crowd they tend to look down upon; that group knows how to sell a language to the public and make it practical.
Despite being functional, Elm is quite minimalist when it comes to type system features. For example, functions can have generic type parameters, but there's no good way to require that the type be able to support certain operations, e.g. being printable as a string. Haskell's solution to this is "typeclasses", which are the same thing as Rust "traits" and Swift "protocols", and somewhat similar to Java "interfaces". Elm has a handful of builtin typeclass-like things that work by compiler magic, but there's no way to define your own. For a somewhat colorful rant about this, see "Elm Is Wrong":
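To make the comparison concrete, here's the kind of user-defined typeclass Haskell allows and Elm doesn't (`Describable` is a made-up example; it plays the same role as a Rust trait or Swift protocol):

```haskell
-- A user-defined typeclass: any type can opt in by providing an instance.
class Describable a where
  describe :: a -> String

data Color = Red | Green deriving Show

instance Describable Color where
  describe c = "a color: " ++ show c

instance Describable Int where
  describe n = "an int: " ++ show n

-- Generic over *any* Describable type, like a Rust `T: Describable` bound.
-- Elm's compiler-magic classes (comparable, number) are closed: you can't
-- add your own like this.
announce :: Describable a => a -> String
announce x = "behold, " ++ describe x

main :: IO ()
main = do
  putStrLn (announce Red)
  putStrLn (announce (3 :: Int))
```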
The community is also fairly toxic; check out this passive-aggressive response by the /r/elm moderators to a (mildly worded) blog post complaining about the aforementioned restriction:
I'm currently using Elm at my day job, and I agree 100% with what you are saying.
Elm lacks extensibility, tooling, and documentation is not that great. The biggest pain point however is the people who run the Elm language. The design decisions they took hurt the language and the users a lot, breaking more and more with every version bump, restricting freedom and creating a walled garden that people are getting tired of.
What you say about JavaScript libraries is not 100% technically correct, though. You can still access any native JS library you like, but you have to use ports. You can't hook into native Elm functions bound to the global scope, but that's always been a very shady, undocumented, and terrible thing to do.
The following reasons are what, I believe, really ruined elm adoption:
1) You can't create so called effect modules (like the http module of the standard library, and so on) if your package is not within the `elm` namespace.
2) As a company, you can't have shared, common elm modules if they are not published in the Elm package public registry. You can't install a package from GitHub without resorting to ugly hacks like ostracized elm package managers written in Ruby.
3) No development timelines, no companies publicly endorsing or using Elm to develop open source libraries besides the one where the language founder is employed.
I've never tried anything purely functional and typed to do frontend programming, so I'd like to hear if Purescript, ReasonML, etc. share the same struggles as Elm.
I've worked in Elm a decent bit, and used PureScript a little.
Elm is a very opinionated language - it's very deliberately missing some abstraction power (typeclasses), and some functions that are the bread-and-butter of every functional programmer have steadily been getting removed from the base libraries, so if you're used to Haskell, you'll find yourself falling back to duplicating code by hand a lot. Elm also makes certain stylistic choices into parse errors - "where" clauses are strictly forbidden, and indentation preferences are strictly enforced. It's basically taken an awkward edge-case from Haskell's indentation rules and made it not only a requirement, but a prerequisite to seeing if there are any other errors in your program. The back-and-forth trying to get the compiler to accept things that would just work in Haskell, but don't because of someone's stylistic preferences, is absolutely maddening.
PureScript, from the bit I've used it, is like a strict version of Haskell with row polymorphism, a feature Haskellers have wanted for a while. I've chosen Elm over PureScript in the past because of PureScript's dependency on Bower (which I think has changed since then), but that's the only reason.
> indentation preferences are strictly enforced. It's basically taken an awkward edge-case from Haskell's indentation rules and made it not only a requirement, but a prerequisite to seeing if there are any other errors in your program.
Some parsing is less efficient than Haskell because Elm doesn't have 20+ years of PhDs working on it, but there is no such thing as compiler-enforced formatting. I can't think of compiler errors regarding format that are the expression of a choice, as you put it, rather than the expression of less manpower.
Likewise, `where` clauses aren't forbidden, they are simply not implemented, which, given that you can already use `let.. in`, is not especially shocking.
The omission of where clauses was explicitly a stylistic choice[1].
This is perfectly valid Haskell:
    #!/usr/bin/env stack
    {- stack script --resolver lts-12.19 -}
    data Test = Test {count :: Int}
    test = let
      something = Test {
        count = 5
      }
      in 5+(count something)
    main = putStrLn $ show test
To get the equivalent Elm to compile, it must be indented like this:
    test = let
               something = {
                 count = 5
                 }
           in 5+(something.count)
Note that `something` must be indented beyond the beginning of `let`, and the closing curly brace must be indented to the same level as `count`. These are both not a warning, but a parse error - you can confirm it with Ellie[2]. If that were due to a lack of resources, it would absolutely be understandable, but this also was an explicit choice[3] which developer time was spent implementing.
While there was nothing particularly wrong with Bower for the PureScript use case, it's always failed to attract people for that reason, sadly. However, you can now use Spago to great success.
Honestly, I've never given Elm enough of a run-through to say yet (ditto with OCaml). I remember two years ago I was evaluating it but decided to learn React/Vue.js for professional reasons. Additionally, I wasn't convinced that Elm would be an ideal long-term commitment; it seemed more like a compromise between JS, FP, and FRP (functional reactive programming), and more of a framework competitor than a full-blown language.
PureScript, on the other hand, I see as a full long-term language-level commitment to Haskell-style FP minus the laziness. But again, my exposure to it has been too limited to have a strong opinion.
ReasonML is a much more reasonable (pun intended) approach to FP from the front-end side that doesn't appear to be as dogmatic as Elm. By making some concessions over interoperability (namely supporting raw JS and npm libraries), Reason thinks it will be easier to win existing JS devs over. Check it out - https://reasonml.github.io/
PureScript has a (much) more advanced type system but that's about it. The tooling and general developer experience of Elm is probably as good as programming gets in 2019. (I've also quite enjoyed Rust.)
That may sound like hyperbole but the combination of elm-graphql and elm-ui is something else. I'm from a JS background and the whole React/TS/CSS-in-JS soup just seems like a bad dream now.
> PureScript has a (much) more advanced type system but that's about it.
There's a lot in "... but that's about it.". Elm has a very low upper bound on abstraction by choice.
As an additional note: like many other communities in programming it has a very cult-like feeling to it, and as others noted in these threads, the mere mention that maybe this low upper bound on abstraction could be bad usually draws a lot of fire.
In my experience most of the community in Elm is made up of people who don't really know what type classes can give you, for example, but they'll happily argue that it's too advanced or not needed. Most of that comes from parroting the popular in-community opinion instead of informing themselves.
This kind of inbred opinion is not unique to Elm: you can find it in Elixir, Clojure and pretty much every other community that relies too much on the benevolent dictator or the prominent founder/inventor paradigm.
In my opinion this is something that PureScript got right: Phil Freeman actually left the community to some extent and is not involved in the compiler anymore. He also does not flood the community with opinions that people give too much weight and so there is no cult of personality formed around him. The same cannot be said for the aforementioned languages.
I also find it interesting that a lot of these languages that rely on this paradigm have leaders that constantly complain that it's hard to run this kind of community. The reason it's so hard is because they've made themselves a benevolent dictator and they keep that status quo because presumably they like that they can sort of control opinion in the community that way as well.
I have absolutely zero sympathy for people who do that kind of thing because there is a very clear solution to it and they're just unwilling to commit to it. You can't have your cake and eat it too. If you enjoy this cult of personality you'll have to take the bad parts of it as well. I find it interesting that a lot of these people end up being babies about it as well, but I guess you have to be somewhat immature to end up in this position from the beginning.
I more or less agree with all of that, but at this stage Elm's BDFL has earned my trust, and he is well within his rights to do whatever he wants with his own creation.
Elm is a language + a framework whereas Purescript is just a language.
There are a number of different frameworks you can use with Purescript from copies of the Elm architecture to wrappers over React to Halogen which can be thought of as a componentized Elm with multiple update loops. Halogen is awesome, really hits the sweet spot for me.
I wrote some Purescript and would definitely recommend anyone try it for themselves.
What eventually turned me off was tooling/workflow things like no accepted code formatter, poor graphql support, too many competing ways of doing basic tasks, too many libraries that were just JS wrappers.
Also I got a vibe from functionalprogramming.slack.com that there was more interest in the latest FP whitepaper than beginner friendliness and what the realities of making an app in Purescript are like. Which is fine, but will limit the adoption of the language in the face of Typescript (Microsoft) and ReasonML (Facebook).
It's funny, Elm gets criticised as a 'DSL for building SPAs', even though that's exactly what it is, and that focus is the reason it is so productive for that task, and has the smallest asset sizes of any front end solution.
And for all practical purposes, no runtime errors.
I think that's why I like it so much. There are lots of ways to do things. I don't think I am using any significant libraries that are JS wrappers. Halogen is written in pure Purescript. But the advantage of PS over Elm is that it is easier to wrap JS, hence the many options.
For sure its focus is not as a beginner's language, and it will never reach Typescript levels of popularity. But it has found its niche, and it is in a good place.
I actually think it's seeing quite a bit of adoption in industry. What I'm seeing is a class of developer that won't learn it or thinks they can't learn it, of course "to each their own", but I truly think Haskell/PureScript/Idris/Agda are onto something remarkable: making the software industry more like an engineering discipline and less of a craft (i.e. like the difference between civil engineering and carpentry).
Your argument RE: immutability and purity hampering implementations: Haskell excels at allowing developers to start with impure IO-heavy code, then later refactor chunks of the implementation out to pure code. In fact, I talk about Haskell as the language you use to "move fast and _not_ break things". It also encourages a more principled approach to software design and reasoning than many other programming languages.
I've used Haskell (and seen it used) in production for many different problem domains for the last seven years of my career. I have yet to see something Haskell/GHC cannot handle well with a few niche exceptions.
In my time using Haskell I've come to think of "mutation" as a cardinal sin in software and you better have a good reason for committing it instead of letting the compiler do it for you (you can write mutable code in Haskell, too, btw - it just strongly discourages you from doing so and it makes some classes of mutable code impossible to write, which is a very, very good thing).
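A sketch of that "mutation behind a pure interface" point: with the ST monad (from base), you get genuinely mutable cells locally, and the type system guarantees none of the mutation leaks out, so the surrounding function stays pure.

```haskell
import Control.Monad.ST
import Data.STRef

-- Looks imperative inside, but sumTo is a pure function: runST's type
-- prevents the mutable STRef from escaping the computation.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0                        -- a genuinely mutable cell
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc                            -- the value escapes; the ref can't

main :: IO ()
main = print (sumTo 100)  -- 5050
```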
As a (very) fallible generalist, I find Haskell a godsend, and we use it extensively where I work for shell scripts, code generation, (performant) network packet parsing, HTTP web servers, gRPC micro-services, and even our "build bot".
I think you mean, by interactive, something with a user interface, like a GUI. First, see functional reactive programming.
The only project I used it for that was interactive in that sense was an interactive CLI tool. However, Oskar Wickström wrote a screencast editing tool so he could make his Haskell screencasts.
I don't think functional style prohibits interactivity (see: purescript which is strongly influenced by Haskell, we use it for all frontend web work now).
Real-time applications and really low-level systems software (however, there are some EDSLs that enable you to write real-time applications with Haskell's type-safety guarantees that can generate C-code: Haskell's Ivory library https://ivorylang.org/ivory-introduction.html).
Cross-compilation of GHC used to be a huge pain in the ass, but that's improved significantly these days; on a project a few years ago I had to choose another language/ecosystem due to that limitation, but I wouldn't have to now.
If you'll forgive the analogy, Haskell is kind of like the Mercedes of production-ready research-grade languages. You may not actually use haskell, but the features that haskell is pioneering are what you'll see trickling down into other languages, as language designers look over and borrow things. Some examples (though they're not all invented by haskell, but haskell has popularized them IMO):
- non-nullable types
- immutability as a default
- typeclasses + structs
- abstract data types
- errors-as-values
- monads
Haskell or any other ML language didn't come up with all these things of course, but Haskell is one of the best languages for combining them, and it has done the most to popularize them.
Take a look at rust -- it's basically got a near haskell-grade type system (and there are lots of ML languages with more flexible type systems than Haskell as well), with C++ like performance. It would have been a lot harder for rust's designers to incorporate such a nice type system without the exploratory work haskell did and continues to do.
All that said, I'm pretty sure Haskell is seeing little adoption because of the learning curve.
> The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).
Haskell almost certainly does this, this is actually one of the best features of haskell -- it lets you do functional things and stateful things separately and encourages you to keep them separate.
It's less that you specifically want an int/int case and more that you want consistent behaviour in a generic context - you might have some logic in a template that uses variant<int, T> and treats the int case specially (e.g. the int is an error code), but then you get a nasty surprise when it gets used in a case where T=int.
A common example is validation/result types, which often look like string (error message) or valid result. So e.g. you might have a username validator that returns Result<String, String> and then various other user creation validation things that return e.g. Result<String, EmailAddress> and in the end you compose them all together to get Result<String, User>. That's a very powerful style that has the advantages of exceptions (the "happy path" through the code is obvious and not obscured by all the failure handling) without their disadvantages ("magic" control flow, seemingly trivial refactors changing the behaviour). But it's less practical if you can't have Result<String, String> at the base level.
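In Haskell terms that style is just `Either` composed applicatively; here is a small sketch (the `User`, `validateName`, and `validateAge` names are made up for illustration), where the first failure short-circuits and the happy path reads straight through.

```haskell
data User = User { userName :: String, userAge :: Int }
  deriving (Show, Eq)

-- Each validator is Either <error message> <validated value>.
validateName :: String -> Either String String
validateName n
  | null n    = Left "name must not be empty"
  | otherwise = Right n

validateAge :: Int -> Either String Int
validateAge a
  | a < 0     = Left "age must be non-negative"
  | otherwise = Right a

-- Compose the validators applicatively; the first Left wins.
mkUser :: String -> Int -> Either String User
mkUser n a = User <$> validateName n <*> validateAge a

main :: IO ()
main = do
  print (mkUser "ada" 36)
  print (mkUser "" 36)
```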
Yes definitely, I meant ADTs and GADTs -- Algebraic Data Types
They're a super good idea, so I'm glad that other languages are adopting them, but this is the kind of thing that haskell has had for a long time and that is just second nature there.
The learning curve is one thing; dealing with purity and immutability is another. But the real stumbling block (besides the tooling issues) is the effect of lazy evaluation.
Lazy evaluation can make it quite difficult to reason about the time and memory resource requirements of Haskell programs, and debugging those isn't a whole lot of fun.
It is do-able, and like anything, gets better with experience, but it's hard to push things to production when you aren't sure they won't OOM or capriciously start doing some long thunk chain evaluation.
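The classic illustration of the thunk-chain problem is the lazy left fold: `foldl` builds a nest of unevaluated additions before forcing any of them, while the strict `foldl'` keeps the accumulator evaluated and runs in constant space. Same result, very different memory behaviour.

```haskell
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- accumulates (((0+1)+2)+3)+... as thunks
strictSum = foldl' (+) 0   -- forces the accumulator at each step

main :: IO ()
main = print (strictSum [1 .. 100000])
```

On small inputs both behave fine, which is part of why these leaks tend to surface only in production-sized data.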
There is unsafePerformIO, but it's generally considered a really bad idea unless you really know what you are doing, and even then it is probably a bad idea. It is useful for debugging though and putting trace statements in.
I have never noticed immutable data structures to be more difficult to deal with. For the most part, because of monads and effects and such, you can essentially write better imperative code in Haskell.
I'm not sure why you've decided that Haskell is incapable of the syntactic appearance of mutation.
To be fair I haven't really used it, only read the core of the guide, so I may be mistaken.
But I've done a bit of Clojure, and I've done a bit of Immutable.js, and particularly when you have a deeply-structured piece of data, "mutating" something a few levels down gets really ugly. Now, maybe something about Haskell Enlightenment obviates this case entirely in a way I'm not seeing. But I also remember Haskell seemingly forcing you to push all your imperative code up to the surface layer of your otherwise pure program, which sounds great unless you need to do really meaningful things that are by nature imperative.
"Now, maybe something about Haskell Enlightenment obviates this case entirely in a way I'm not seeing."
Many times the answer is that with a different structure you don't need deep mutation to be a critical part of your program. After all, "deep mutation" isn't considered a great idea in object-oriented languages either, where it constitutes a violation of the Law of Demeter [1], either in letter or in spirit (i.e., creating a chain of methods to set some deep value may in letter follow the Law but can still be a violation in principle).
But if you do need it, Haskell does have a rather nifty mechanism for making mutation patterns first-class elements themselves through "lenses", which capture as a first-class value some access pattern and mutation pattern on a given value. And while one of its original purposes is to allow Haskell code like
value & property1 . property2 .~ newValue
such that in a monadic context that will pretty much do what you'd expect as an imperative programmer, it also means (property1 . property2) is itself a value that can be used and passed around like any other, and allows for things like creating generic functions that take "a thing, and a thing that will extract a Name from that thing, and will return a new copy of the original thing with the name all uppercase" or something like that. And there's a whole bunch of other ways to make that stuff sing and dance too, if you're in the mood.
You can even do really melty stuff like have a lens that will expand an int into its bits, allow you to manipulate those bits as if you had an array of bool, and then will re-pack them into the int for you. Lenses can take any arbitrary slice out of an object as long as you can express the extraction and the creation of a new object putting the stuff back, and then they can be composed together as-needed. It can be powerful, but it can get pretty brain-melty too.
Let me add that lenses are not just a Haskell thing. It's a simple (and beautiful IMHO) concept from functional programming that can be introduced in many languages. Also, you can have lenses without the cryptic operators.
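To make that concrete, here is a minimal hand-rolled lens with no library and no operators (all names here are made up for the example): a lens is just a getter paired with a setter, composition nests them, and "mutating" deep inside an immutable structure is rebuilding along the path.

```haskell
-- A lens from a structure s to a focus a: how to read it and how to replace it.
data Lens s a = Lens { viewL :: s -> a, setL :: a -> s -> s }

-- Compose two lenses to reach a field two levels down.
compose :: Lens s a -> Lens a b -> Lens s b
compose outer inner = Lens
  { viewL = viewL inner . viewL outer
  , setL  = \b s -> setL outer (setL inner b (viewL outer s)) s
  }

-- Apply a function at the focus, rebuilding the structure around it.
overL :: Lens s a -> (a -> a) -> s -> s
overL l f s = setL l (f (viewL l s)) s

-- A toy nested structure.
data Address = Address { city :: String } deriving (Show, Eq)
data Person  = Person  { addr :: Address } deriving (Show, Eq)

addrL :: Lens Person Address
addrL = Lens addr (\a p -> p { addr = a })

cityL :: Lens Address String
cityL = Lens city (\c a -> a { city = c })

main :: IO ()
main = print (overL (compose addrL cityL) (++ "!") (Person (Address "Berlin")))
```

With the real lens library this whole example collapses to `person & addr . city %~ f`, but the mechanics are the same.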
The cryptic operators are by far the worst thing about learning the basics of Haskell.
The language is already hard to learn because of all the wonderful and mindblowing concepts but the syntax is super frustrating and takes the difficulty to another level.
Haskell obviates the need to do deep manipulations in an ad hoc 'ugly' way, usually via lenses. Personally, I think most imperative languages are sufficiently less sophisticated than a state monad plus lenses that it is substantially more difficult to use them.
> But I also remember Haskell seemingly forcing you to push all your imperative code up to the surface layer of your otherwise pure program
I have no idea what you're talking about. You seem to be confusing effectful code with imperative code. Haskell's do notation -- which lets you write with an imperative syntax -- can appear anywhere, including pure code. On its own imperative code does not necessarily mean effectful code and mutation does not require us to give up on purity. Moreover, if you do want machine level mutation for performance reasons or because an algorithm is more easily expressed in that way, you can always drop into the (again pure) ST monad.
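A small sketch of the ST point: the function below uses a genuinely mutable accumulator internally, yet `runST` guarantees the mutation cannot leak, so from the outside it is an ordinary pure function.

```haskell
import Control.Monad.ST
import Data.STRef

-- Imperative-looking sum with an in-place mutable reference;
-- the type says Int -> Int territory: no IO, fully pure to callers.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref

main :: IO ()
main = print (sumST [1 .. 10])
```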
Basically, I think you are criticizing haskell from a place of ignorance.
Frameworks like immutable.js are not comparable to haskell. These are immutable data structure libraries built for languages where immutable data is an afterthought at best, if it's even considered at all. Obviously these are going to be clunkier to use. Haskell is not that, though.
> But I've done a bit of Clojure, and I've done a bit of Immutable.js, and particularly when you have a deeply-structured piece of data, "mutating" something a few levels down gets really ugly.
You're lucky enough if you can find a developer who knows mutable data structures outside of SF. If you want immutable ones you need to add a zero to their wage.
> You're lucky enough if you can find a developer who knows mutable data structures outside of SF.
That's one of the most arrogant statements I've read on HN in quite a while.
Nearly all developers know mutable data structures; it's bread and butter. Immutable ones aren't some exotic life form, it's just that using them as effectively (read: efficiently) as mutable ones can be really tough, and leads to more complex code.
I am outside of SF. The number of times I've seen people start trying to parse XML with regular expressions tells me all I need to know about their ability to code.
I've had random interns enter a purely functional codebase with 0 FP background. With a normal amount of onboarding, they were using immutable data structures with ease.
In fighting games, characters are sometimes described in terms of their “skill cap” and “skill floor”. The “skill cap” is how well you can play, if you really invest in this character. The “skill floor” is about how bad things can get if you don’t play that well.
Some characters are very approachable and easy to play. If you don’t know what you are doing, it’s alright — you can muddle through. If you play them in a really exceptional way, it doesn’t make all that much difference.
Some characters are really tough to play at all; and once you figure it out, they aren’t particularly exceptional and don’t reward further investment.
A “high skill cap” character is one where you can keep learning and learning and your performance at the game will actually get better and better. Some of these characters are also approachable — they have a gentle learning curve. Some of these characters are basically unplayable until you can play them really well — there is an inflection far to the right where you go from dying all the time to actually winning a fair number of matches.
Haskell is like one of these characters. Until you’re really good and know a lot, you’re basically going to ship nothing. This effect is sort of invisible to senior programmers learning Haskell because they are already so skillful and are used to having to skim a CS paper or two, once in awhile, to be able to get their work done. Once you are good at Haskell, vistas really open for you in terms of the kind of programs you can design and build. Many years after setting it aside, I still rely on what I learned about Haskell API design and effects modeling to design reliable, transparent and modular distributed systems (it all starts with the types).
“High skill cap” characters tend to be admired, but not frequently played.
I don’t say “I failed high school maths” because I’m proud of it. I’m not special. I’m not particularly clever. I don’t know how else to drive the point home that you don’t need to be a genius to build software and enjoy doing it in Haskell.
> No one is arguing that you need to be a genius to use Haskell.
I’ve come across this sentiment so many times. Haskell definitely has a reputation for being an “ivory tower” language.
> How does my argument come across that way?
I’m not too familiar with Zelda, but it sounded like you were saying “some people are just able to do these things that most others can’t.”
If that isn’t what you were saying, then I am sorry for misinterpreting. I’m genuinely not trying to argue or take you out of context or anything like that.
I think anybody can write Haskell and anybody can play Zelda or similar characters in fighting games.
What the skill-cap / skill-floor thing is about, is how often do people bother. When the base level of skill required to play at all -- not necessarily play well -- is really high, often those characters don't get used as much. People find a character that demands less up front investment and play that character instead. It's not about ability, it's about time.
> But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space. For these cases, the FP answer is usually "recompute and replace" (generally with immutable data structures that make this efficient).
It can get worse than that. My problem space has mutable state that is shared between multiple threads. FP's initial answer is "shared mutable state is evil". And they're right! But if that's the nature of your problem, then you're kind of stuck with it.
But the problem with the "recompute and replace an immutable data structure" is that I now have to notify all the relevant threads that they need to replace their reference to the data structure (avoiding race conditions in the process), and that seems at least as nasty as the problems I have doing it the imperative way.
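For what it's worth, the usual Haskell answer to that notification problem is to share one mutable reference to the immutable structure, so "recompute and replace" is a single atomic swap and readers never see a half-updated value. A hedged sketch using `MVar` from base (the `bump` name and counter shape are made up for the example; real code might reach for STM instead):

```haskell
import Control.Concurrent.MVar
import qualified Data.Map.Strict as M

-- Threads share a reference to an immutable map.
type Shared = MVar (M.Map String Int)

-- Recompute-and-replace as one atomic operation on the shared reference;
-- other threads always observe either the old map or the new one.
bump :: Shared -> String -> IO ()
bump shared k = modifyMVar_ shared (pure . M.insertWith (+) k 1)

main :: IO ()
main = do
  shared <- newMVar M.empty
  mapM_ (bump shared) ["a", "b", "a"]
  readMVar shared >>= print
```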
But in my world (embedded systems), those I/O writes aren't just writes to a file or a network socket. They're writes to device hardware, which has to be in the written-to state the next time that another thread interacts with it. That is, the I/O operation has to be part of the transaction, not queued up to run after the transaction commits.
Still, this approach goes farther than I thought possible to solve the problem...
> That is, the I/O operation has to be part of the transaction, not queued up to run after the transaction commits.
That's the kind of thing Haskell excels at - explicitly sequencing things that need to happen before other things, so that you can have code that does stuff in the right order without getting into the trap of "can't ever refactor this in case I change the order something runs in".
(I can't believe that threads are actually part of the problem statement unless you're doing something strictly tied to C. Concurrency or even parallelism might be a requirement, but there are other ways to achieve it than OS-level shared-memory threads)
The Haskell philosophy is easily misunderstood. It's not really that shared mutable state is evil but that it's difficult and so should be treated seriously and explicitly. The language still supports it—quite well. GHC's green thread scheduler is top-shelf. The main author of the GHC runtime wrote an excellent O'Reilly book called "Parallel and Concurrent Programming in Haskell" (which you can read for free online).
Was going to say exactly this. Major common misunderstanding about even Haskell. Mutable state has to eventually be a part of basically every program that does something useful. The Haskell philosophy is more about having an explicit and predictable boundary between the pure and effectful parts of your code/system.
Though you'd be surprised at how much you can do while forgoing mutable state entirely.
> But in some domains (or pieces of domains), state and mutation aren't just unfortunate implementation details, but a core element of the problem space
True, but I would argue that it is far, far more common for people to writing stateful code to do things that are better done in functional style, than the other way around. This comes from my own experience of learning functional programming in JavaScript (before knowing about Haskell et al) and refactoring a project into a functional style that relies heavily on immutability.
I think Haskell is not more widely used in the industry because it has a very academic reputation. If you say you write Go, people think you are a practical programmer who turns coffee into solid business logic that powers some profitable website. You write Haskell? Then you are more likely to be perceived as some eccentric scholar or their like.
> I would argue that it is far, far more common for people to writing stateful code to do things that are better done in functional style, than the other way around.
For sure. But still, there exist cases. Many of them in JavaScript, actually. In my JavaScript UIs I relish every opportunity to write something as a pure function. But I also need to manage a lot of deeply structured, non-homogenous state that's genuinely meaningful to the application. Separating the two is crucial, but both exist. I've really enjoyed MobX, as it allows you to make the most of both types of programming and hook them up together in a cohesive way.
> I think Haskell is not more widely used in the industry because it has a very academic reputation...you are more likely to be perceived as some eccentric scholar or their like.
For me, the reason why I abandoned Haskell (for a couple of years I was writing a good share of my projects in it) is the complexity associated with laziness. FP and immutable data structures and monads are cool and mostly understandable once you grasp the concepts, but laziness is a double-edged sword. It enables a lot of nice things, but it makes me unable to easily reason about the space complexity of any nontrivial algorithm, and that repeatedly bites me when I have to write such code.
In imperative code, space complexity is obvious and explicit, time complexity is intuitive for me, and correctness is shaky and needs to be tested.
In Haskell, correctness tends to be obvious (if there are no typos causing compilation to fail, it almost always gets the exact result I intended on the first try), but the space and time taken by the algorithm may be, and often is, surprising to me; if I make tiny modifications to the code, the execution may suddenly explode from 0.001 seconds to an hour because suddenly processing n entries involves creating and disposing of n^2 thunks, taking all available memory and extreme amounts of time.
And despite trying a bunch to wrap my head about it, it keeps happening to me, it's just not intuitive to me - for me, eyeballing efficiency and exact time/space execution of a nontrivial Haskell function is just as hard as eyeballing whether random C code does everything correctly without a memory leak or overwriting being possible in an edge case. I also have trouble with Prolog for the same reason.
> The most successful languages let you be pure-functional where it makes sense and then stateful where it makes sense. Haskell doesn't, really (from my cursory reading about it).
This is absolutely wrong. Sorry to be so direct, but I want to make sure other people not familiar with Haskell don’t get the wrong impression.
Haskell is explicitly designed to separate purely functional and stateful code with a clear interface between them: monads. It does exactly what you are asking for.
That's what happens when you're working in a pervasive-mutability-by-default (or pervasive-IO-by-default) language as well - your whole codebase is full of hidden state mutations and hidden interactions with the outside world. You just don't have any idea where they're happening. You can write Haskell in the same style with monads everywhere and you're in essentially the same situation (just with more visibility into it) - but then you can actually start isolating the parts where mutation or outside interaction happen, and separating them from your core business logic. Which is the same thing you'd do in a high-quality codebase in any other language, but in Haskell you can do it in a way that's actually enforced and visible rather than just convention.
Sorry for the additional pedantry, but I think this important to be precise about given the target audience of your comment.
Monads aren't the separation between purely functional and stateful code. The Haskell type system maintains that separation. Anything that doesn't return IO a for some a appears to be a pure function from the perspective of the programmer. Once a function returns IO a, there aren't any* functions provided by the compiler that can make a function that uses those results not also return IO b for some b. For example, the type of getLine is IO String (because it impurely produces a String) and the type of putStr is String -> IO () (because it takes a String and mutates the world without returning anything).
If the compiler provided a function for computing on the a in an IO a, for instance bindIO :: IO a -> (a -> IO b) -> IO b, and a function to wrap the results of non-IO functions, such as returnIO :: a -> IO a, you could do arbitrary computation with these IO-wrapped data types while still knowing at a glance whether your functions were impure.
This approach doesn't require the Monad typeclass at all, just a magic type called IO that tags impure computations that are implemented with compiler and runtime magic. It happens to be the case that this is exactly how GHC implements the IO type. bindIO is implemented here[0] and returnIO is implemented here[1], and the compiler magic used to implement them isn't* exported, so all IO operations have to go through those functions. It is not a coincidence that these functions have the right types to form a Monad instance for IO and indeed, that is also present[2], but the IO type and the type system that ensures it can't be sneakily hidden are doing the heavy lifting, and the Monad instance (and accompanying syntactic sugar) are just there to make it nicer to work with and easier to abstract over.
If you have a passing familiarity with Haskell, the phrase "state monad" is the obvious place where my claims stop making sense. In fact, the State type only supports computations that are entirely pure. If you wanted to simulate global variables in a language that didn't have them, you could always pass all of your global variables to every function and get updated ones back from the function along with the nominal results of the computation. The State type is just a regular data type that wraps stateful functions constructed by such state passing. A type of the form State Int String is just a function that takes an Int and returns a String and an Int, no compiler or runtime magic needed.
You can play the same trick as in the IO case and provide functions bindState :: State s a -> (a -> State s b) -> State s b and returnState :: a -> State s a in order to compute on these "stateful" values while making sure the result state gets passed to the next function in the chain correctly. Like IO, these two functions can be used to create a Monad instance for State. Unlike IO, State is just a data type holding a regular Haskell function, so it's extremely reasonable to write a function of type State s a -> s -> a which runs the State s a computation with an initial value of type s. This is written by unwrapping the State type, passing the initial state value to the function inside, and returning the result while ignoring the returned new state. More details on how State is implemented are available here[3].
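A minimal, dependency-free version of the State type just described might look like this (function names follow the comment above; real libraries call these (>>=) and pure):

```haskell
-- State is just a wrapped function s -> (a, s); no magic anywhere.
newtype State s a = State { runState :: s -> (a, s) }

returnState :: a -> State s a
returnState a = State (\s -> (a, s))

-- bind threads the state through: run the first computation,
-- feed its result and its output state to the next one.
bindState :: State s a -> (a -> State s b) -> State s b
bindState (State m) f = State $ \s ->
  let (a, s') = m s
  in runState (f a) s'

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s = State (\_ -> ((), s))

-- Run a computation, keeping the result and discarding the final state.
evalState :: State s a -> s -> a
evalState m s = fst (runState m s)

-- A tiny "stateful" counter: returns the current count, then increments it.
tick :: State Int Int
tick = get `bindState` \n -> put (n + 1) `bindState` \_ -> returnState n

main :: IO ()
main = print (evalState (tick `bindState` \_ -> tick) 0)
```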
A complication to this is that if you want stateful mutation for performance reasons, the ST type[4] also exists, which looks identical to the State type from the programmer's perspective, but plays similar tricks to IO in order to actually mutate under the hood while not exposing the implementation details to the user, so it can be reasoned about exactly as if it was pure and using the same implementation as State.
These Monad instances for IO, State, and ST start to pull their weight when you write functions that only use features provided by the Monad typeclass and they work seamlessly with any implementation of stateful computation despite their very different internals. Monad is quite general, so if all you care about is abstracting over stateful computations, you can also use the methods from MonadState[5] which allow you to interact with the state along with the results of the computation independent of the implementation of stateful computation.
* In the name of not getting bogged down in details, there are a few parts of this discussion that are not entirely accurate, particularly around functions like unsafePerformIO[6].
Note: The approach of structuring the interactions with the IO type with the functions (bindIO :: IO a -> (a -> IO b) -> IO b) and (returnIO :: a -> IO a) is still using the abstract idea of monads to organize the impure code and make it ergonomic to work with, so "monadic I/O" or "monadic state" aren't entirely misnomers. The thing I wanted to emphasize is that you don't need to know the word "monad" or understand anything in particular about the design process for the Monad typeclass in order to use these libraries.
I think focusing on the "monad" part over the "IO" part of "monadic IO" is particularly confusing to new users because the abstract idea of a monad is very general, so if you assume all places where it shows up are basically like the case of IO, you will be very confused. Further, it makes the idea of a monad seem like a Haskell-specific hack, rather than a general abstraction that can be used in any programming language you want to.
This is particularly important to emphasize because the abstract idea of monads only makes the IO approach to impurity nice to use, it doesn't make it possible. Haskell had I/O (and other impure capabilities) before the monadic way of organizing impure code was introduced. The heavy lifting for IO is done by having a type system strong enough to prevent a function of type IO a -> a from being written by an end-user. If you have written a monad abstraction in a language without such a type system[0], it can still be a nice abstraction, but it doesn't guarantee that pure and impure computations can be distinguished on the type level.
Very few programmers are proficient in Haskell, and operating systems and language tooling are built around imperative C-style semantics.
Combine that with the fact that most of the software industry does not care about correctness or stable software and generally lacks professionalism. "Just ship this half-assed software as soon as possible" is the attitude at the majority of software companies.
Software engineering is all about trade-offs and making sure stakeholders are fully informed thereof. Pressure to deliver is one of the most challenging problems an engineer can face, because it stands in opposition to every ideal. Yet it's about as normal as death and taxes.
Haskell sounds amazing. I would be thrilled to learn it, and I hope the ecosystem flourishes. I hope there will eventually be millions of jobs to write code in the language. I'm a little bit envious of those who speak fluently about monads and set theory, and I've learned a lot from brushing shoulders with those people.
Meanwhile, I'll continue solving real-world, extremely stateful problems in an as-purely-functional-as-I-deem-convenient manner with the tools I already have under my belt. You can pry my precious semicolons from my cold, dead, carpal-tunnelled hands.
Haskell is useful whatever your priorities are (unless you have a really low quality requirement, like a script that fits on a single page and gets run only once). If you think of the project management triangle, switching to Haskell gets you a bonus that you can distribute between the points as you wish: you can produce higher-quality code for the same scope/cost/time, wider-scoped code at the same quality/cost/time, code at the same quality/scope/cost in less time, or so on.
IME a lot of Haskell advocates spend this windfall in a way that's poorly aligned to business requirements: we spend it all on increasing the code quality (and perhaps even overshoot, taking more time than users of another language to produce code of the same scope). But that's not an inevitability. (I would speculate that it tends to happen because most people in the software industry claim to value quality a lot more than they actually do, and a lot of Haskell programmers take them at their word).
One of them is VC funded. We have the stakeholders. We have the pressure to deliver.
Haskell is making this easier, not harder. We can maintain pace as the software grows because the language is generally well-principled, and the compiler keeps us in check rather than us having to rely on human discipline.
Honestly this is true. Most of the world doesn't give a shit if a page on their web store is broken. They get an exception email and then fix it, no real loss. While switching over to haskell may make your software more stable, at the end of the day the amount of extra time spent writing it in a more stable language is going to cost the business a lot more than a slightly buggy website will.
Completely false. Many “real world” businesses are shipping web apps in Haskell. Anecdotally, they take less time to write than the equivalent Rails app.
I don't know. I've been slinging Haskell on the side for the better part of a decade and do most of my day to day in RoR (trying to start moving clients over to Elixir/Phoenix.)
Haskell is an amazing language. I would totally buy that Haskell teams probably win in the medium to long-term as the wins you get in terms of support/maintenance/extensibility are pretty obvious.
However, anecdotally, Haskell forces me (and I imagine other programmers) to invest a lot more time up-front into getting your design in order. Haskell punishes an "oh I'll just hack that out" attitude pretty badly. Which, as I said above I would completely believe leads to wins in the medium to long-term. If I need to bang out an MVP over the weekend, I'm probably not choosing Haskell unless it's well-trodden Haskell territory.
Additionally, while the language itself is amazing, the ecosystem has issues (enumerated in the article.) Tooling sucks, there aren't enough examples of people doing normal things, there are frequently not libraries for basic things, obviously integrations with popular services are lacking, and the list goes on.
When Haskell has a decently mature and actively developed web framework that has reasonable docs, examples, and a not pathetic ecosystem (by the standard of modern web frameworks) I'll happily jump into using Haskell in production. Unfortunately, these aren't things enough of the community seems interested in to have significant movement on.
Servant looks very interesting with regards to what I'm looking for, but I'd be lying if I said I understood the types.
I spent 7 months trying to build a JSON API using Yesod with a friend. We had nothing but headaches. It's insanely hard to find example code or search issues when using Yesod. Hardly anything about it exists on Stack Overflow, so any time I had an issue I had to post it on SO and wait a day for someone to answer it, which meant I could only do about an hour of programming a day. I ended up giving up, used RoR, and replicated more than the 7 months' worth of work in a few weeks.
There is a very good reason RoR is far more popular than Haskell for web apps. It's just so easy to get started with Rails, and there is a near-infinite amount of information online.
Many web apps might be written in Haskell, but I assure you many, many more are written in Ruby. I very much doubt you will find a Haskell developer willing to build your web store in Haskell for $15/hour. With Rails you can just import Spree, make a few modifications, host it, and you are done.
I write Haskell web apps, and I used to write Ruby web apps. You don't need to assure me; I'm well aware.
If you're in the realm of "import Spree, make a few modifications and you are done", then sure, use Ruby.
The products I work on just aren't that generic/trivial.
> I very much doubt you will find a haskell developer willing to build your web store in haskell for $15/hour.
The market is larger than you think. I have hired people before at $15 per hour. I have people working for me now at $23 per hour. Not everything needs to be stupid SV money.
I think Haskell actually has a few qualities that can potentially turn people off at different points, all of which have been mentioned by others:
* Lazy Evaluation -- usually becomes an issue at some point if the project isn't trivial.
* Lack of dependent types -- while Haskell's type system is great, this is a missing piece whose absence is sorely felt by some. It's not very surprising that several dependently typed languages developed after Haskell were inspired by it (Idris, Agda, Coq), with Idris and Agda even implemented in Haskell.
* category theory -- While you can get by plenty well just knowing a few basic abstractions from category theory, it's true that many libraries rely on highly theoretical concepts, and that a solid portion of Haskell development efforts are still tinged with an academic flavor. Just take a look at some of Edward Kmett's coding videos--he typically has one (or several) mathematics/highly theoretical comp sci papers open in one window, while in the other he's implementing a hugely popular library like lens. This sort of academic atmosphere turns a lot of people off. It pegs the language as a theoretical exercise from the get-go, and plenty of people either don't believe in the value of theory or are far too busy to take the time to dive into how it applies to their situation.
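On the laziness point above, the canonical way it bites people in non-trivial projects is the space leak: a lazy left fold builds a huge chain of unevaluated thunks before anything forces them. A minimal sketch of the classic fix (switching to the strict foldl' from Data.List):

```haskell
import Data.List (foldl')

-- Lazy accumulator: the additions pile up as unevaluated thunks,
-- (((0+1)+2)+3)+..., which on a big enough list exhausts memory.
leakySum :: [Int] -> Int
leakySum = foldl (+) 0

-- Strict accumulator: foldl' forces each partial sum as it goes,
-- so this runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- prints 500000500000
```

Both functions compute the same answer; the difference only shows up in memory behavior, which is exactly why the problem tends to surface late.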
I think the academic veneer around Haskell is its biggest weakness when it comes to adoption, but that said, it's still had more success than a lot of other academic, purely functional languages thanks to the efforts of core researchers/users/contributors to break it out of the ivory tower. The ideas are good. Almost every other modern programming language has stolen concepts from Haskell at this point--so in some sense, even though it's still somewhat niche, its influence is practically ubiquitous (just take a look at the list of languages it's noted as having influenced on Wikipedia! https://en.wikipedia.org/wiki/Haskell_(programming_language)).
In a broader sense, the functional programming paradigm has proven not only that it's viable for industry applications, but that it's often superior to object-oriented or imperative techniques when it comes to fidelity of expression and preventing bugs. If nothing else, Haskell is great because it provides a fairly rigorous environment in which you can explore functional programming concepts, which you can bring with you to almost any problem and any language and manage to derive some benefit.
It's a lot to unlearn for most programmers, and refusal is a perfectly natural knee-jerk reaction. I mean, the more one already knows, the stronger the desire to reuse that knowledge. Well, at least I have been there.
> Haskell doesn't, really (from my cursory reading about it).
I can't speak for industry, so I'll speak from experience in academia, from when we were learning functional programming (FP).
The key difficulty there is the paradigm shift in thinking. For most people, our thinking matches imperative programming. Ask a person to do something like run an analysis of an accounting book using paper, pen, and a calculator, and you'll see them keep some tallies that they update as they go along. Even if there were a way to convert their work into a method that just involves repeatedly tapping the numbers into a calculator in a formulaic way, almost every time it's going to be easier to reason about the logic in terms of stored state and progressive steps.
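For what it's worth, the running-tally workflow does translate into FP fairly directly: the tallies become a fold's accumulator, threaded through the traversal instead of stored in a mutable cell. A toy sketch (the Entry type and function names here are made up for illustration):

```haskell
import Data.List (foldl')

-- Hypothetical ledger entries: positive = credit, negative = debit.
type Entry = Rational

-- Imperative thinking: "keep a running balance, update it per entry."
-- Functional spelling of the same idea: the balance is the accumulator.
balance :: [Entry] -> Rational
balance = foldl' (+) 0

-- A fold can carry richer "state" too, e.g. balance plus a debit count,
-- by making the accumulator a tuple.
balanceAndDebits :: [Entry] -> (Rational, Int)
balanceAndDebits = foldl' step (0, 0)
  where
    step (bal, debits) e = (bal + e, if e < 0 then debits + 1 else debits)
```

The mental leap isn't that the computation changes, it's that the "update" becomes an explicit function argument rather than an assignment.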
When you get hit with the paradigm of functional programming, which is beautiful when it works btw, you need to switch your thinking from step-by-step logic to formula-driven logic. That's an unnatural shift, and given how poorly early schooling moves our thinking toward that mindset when we do maths, it's a hell of a leap.
Anecdotally, we had a couple of pros at math - people who understood the fundamental beauty of math and proofs - and they took to FP like ducks to water. Even when we built a library management system in Haskell.
My theory is that the FP folks would have seen more success had they figured out ways to bring their features to mainstream languages, rather than asking people to adopt wholesale their weird languages (from an average programmer's point of view). For example, why can't I annotate functions as lazy? Swift takes one step in that direction, with lazy var, but why not a lazy func or lazy class? Even the lazy var can't be used for local variables.
People are generally more willing to accept incremental changes than an entirely new way of doing things. For good reasons: skepticism, availability of talent, tools, libraries, IDEs, help if they run into trouble...
An analogy is the spread of yoga in the West over the past decade or two, which happened because it was easy to adopt without changing your lifestyle. If Westerners had been asked to fly to the Himalayas for a month to learn yoga, how many would have learnt it?
> My theory is that the FP folks would have seen more success had they figured out ways to bring their features to mainstream languages, rather than asking people to adopt wholesale their weird languages (from an average programmer's point of view).
I'd argue this has been happening for years and years now. If you want a pithy saying, you could say that over time languages become closer and closer to Haskell.
Option types, pattern matching, pure functions (see: Vue/React with computed properties, or any other frameworks doing things close to functional reactive programming), type inference (even Java is getting a var keyword!), and more powerful type systems (of the sort the TypeScripts of the world have) are now becoming trendy, but Haskell had them years and years ago.
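To pick one concrete instance: the option-type-plus-pattern-matching combo that languages are now adopting has looked like this in Haskell all along. A toy sketch (parsePort and describe are just illustrative names):

```haskell
import Data.Char (isDigit)

-- Maybe is Haskell's option type: the lookup either produces a value
-- or it doesn't, and the type forces callers to handle both cases.
parsePort :: String -> Maybe Int
parsePort s
  | not (null s) && all isDigit s = Just (read s)
  | otherwise                     = Nothing

-- Pattern matching makes the two cases explicit; forgetting one
-- earns a compiler warning rather than a runtime null error.
describe :: Maybe Int -> String
describe (Just p) = "listening on " ++ show p
describe Nothing  = "bad port"
```

This is essentially what Optional/Option/nullable-with-narrowing look like in Java, Rust, and TypeScript today, minus a couple of decades.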
The above is definitely oversimplifying (a short HN comment is no place to get into the fundamentals behind the various kinds of type systems), but I've found all the things I love in Haskell and other FP languages seem to slowly drift on over to other languages, albeit often a little suckier.
(I'm still waiting on software transactional memory to become mainstream, though. Of course, in languages that allow mutable state and without some way to specify a function has side effects, you're never gonna get quite as nice as Haskell. Oh well.)
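For anyone curious what STM buys you, here's a minimal sketch using the stm package that ships with GHC. The whole transfer commits atomically or retries; there are no locks to acquire in the right order:

```haskell
import Control.Concurrent.STM

-- Move money between two balances atomically: other threads can never
-- observe a state where the amount has left `from` but not reached `to`.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  f <- readTVar from
  writeTVar from (f - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  (,) <$> readTVarIO a <*> readTVarIO b >>= print  -- prints (70,30)
```

The part that's hard to port to mainstream languages is the STM type itself: the compiler guarantees a transaction performs no irreversible side effects, which is what makes automatic retry safe.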
I'd bet a lot of this has resulted from improvements in computation power. It used to be you couldn't afford to abstract your hardware away in a lot of real-world cases. Whereas now we can afford the programmer benefits of immutability, laziness, dynamic types, and first-class functions. Static things like var can probably be attributed to the same advancements happening on developers' workstations.
This is a good point which I hadn't thought of, but I think there's a lot more opportunity to bring concepts from FP to mainstream languages. For example, why don't imperative languages have an immutable keyword to make a class immutable, given how error-prone it is otherwise? https://kartick-log.blogspot.com/2017/03/languages-should-le...
Python has tons of functional features [1] while not being a purist FP language by any means. This article liked to say quite often that "X would be impossible in Go or Java!" and I kept thinking, "I bet it's possible in Python, just not as academically wonderfully as you were hoping".
As it turns out, Python continues to be one of the top three languages in terms of popularity. It's not clear if this has helped feed a bigger userbase into purist FP languages though.