Haskell is fun, lots of fun, but it is not a practical language by virtue of being strongly typed.
By practical I mean that strongly typed PLs are unforgiving as your world, and your perceptual view of the world, change over time -- as they do for any business.
Your program must pass a proof to execute. For the most part you do not need to explicitly express your types; Haskell can infer them. Haskell enthusiasts often point at this as some kind of nicety. But it isn't, because (1) the types are still there; your program still has to cohere with the type system; and (2) strongly typed programs with explicit type annotations are much easier to read and manipulate, so you should use them anyway.
As your world and assumptions change over time you will find yourself doing a ton of work to make everything cohere for the type proof again. Haskell enthusiasts also like to say things like, with strong typing, "refactoring is easier". This is incorrect. What they are pointing at is that Haskell will of course catch certain (type mismatch) bugs during a refactor, but it will also flag many, many things that would not be bugs were you using a lighter or no static type system. So, ironically, in my experience this discourages refactoring, because you become exhausted by all the unnecessary labor required to make rather nominal changes to your domain model.
Dynamic languages, on the other hand, in the right hands, don't impose this unnecessary labor.
I feel there's an elephant in the room whenever I talk to someone who is enthusiastic about strong typing in a business or real-world context, and it's this: their enthusiasm has more to do with the pure fun of manipulating a type checker/prover and playing in those weeds than with actually getting real stuff done over time.
Another common retort from strong typing enthusiasts is to point back at a dynamically or loosely typed system they've worked on, point out what a mess it is, and then show how they no longer run into certain classes of bugs, like null pointer exceptions. What goes unanalyzed is the competence of the team that made the mess -- that it was the fault of the skills at play, not a necessary consequence of not having types. What also goes unanalyzed is whether the steep cost of playing the strong typing game is worth the occasional NPE (among the easiest and most fixable kinds of bugs) here or there -- in a standard business (i.e., not mission-critical) context.
I realize this is going to be hotly contested and that I'm stepping on toes here, but I think all of this is true.
Learn Haskell by all means, but be honest with yourself when comparing it to the successes of more pragmatically designed languages. This should also be no surprise; Haskell is a research language born in academia, NOT born out of long practitioner experience. Compare to Elm -- another strongly typed language, for the frontend -- which was born out of a doctorate with limited real-world experience. Then compare to, say, Clojure (another esoteric language), which was born from extensive pragmatic experience, and look at the choices it made and how in many cases they run against Haskell's.
> with strong typing "Refactoring is easier". This is incorrect.
I can't tell if you're just trolling, but have you ever done a large refactor in a dynamically typed language such as Python? It's a runtime minefield, taking a huge amount of testing effort to gain any level of confidence that you got it correct. In a language like Haskell or Rust, it's usually as simple as getting the thing to compile.
IME a code base with high test coverage lets me refactor far more securely than an expressive static type system. Static typing is useful, but if I had to pick one it would be tests all the way.
The cost of high test coverage can be an additional 3x lines of code for the tests. While you still need tests with an expressive type system, you can cut the coverage to a fraction and maintain the same level of confidence in refactoring.
> While you still need tests with an expressive type system you can basically cut the code coverage to a fraction.
Where on earth do you get the idea that static typing lets you cut the code coverage to a fraction? I am going to need specific examples to believe this claim.
> Where on earth do you get the idea that static typing lets you cut the code coverage to a fraction? I am going to need specific examples to believe this claim.
I don't know how I can give a "specific example" when the example is effectively my whole programming career (most of which was proprietary codebases). But it matches my experience: code with static types and a fraction as many tests ends up with fewer production bugs than code with substantial test coverage in a dynamic language.
Think about how much of your code contains actual business logic rather than just plumbing, and imagine never having to test the plumbing, only the actual logic. For most functions there's only one possible thing for that function to do; the type tells you it does that thing, so there's no need for a test. E.g., a (parametric) function that takes a set and returns a sorted list can only ever sort that set or return an empty list, so about the only thing you need to test is that the returned list is non-empty.
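A rough sketch of that argument in Python type-hint terms (`sorted_list` is a made-up example function, and Python cannot actually enforce parametricity the way Haskell does, so treat this purely as an illustration):

```python
from typing import TypeVar

T = TypeVar("T")

# A function of this shape knows nothing about T beyond comparison, so
# about all it can do is arrange the set's elements into some order.
def sorted_list(s: set[T]) -> list[T]:
    return sorted(s)

# The signature already rules out most misbehavior; a test only needs to
# pin down what remains: which elements come back, and in what order.
assert sorted_list({3, 1, 2}) == [1, 2, 3]
assert sorted_list(set()) == []
```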
A comprehensive, well-crafted type system limits the inputs you can supply to your functions. Drastically. If you use the type system right, it essentially forbids you from supplying faulty data to your functions at all.
With a large range of (faulty) inputs already forbidden by the type system, you're left writing tests that actually exercise the business logic. I don't see how this would not cut the required test coverage compared to, for example, dynamic type systems where virtually any input is allowed and has to be covered by tests.
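A small sketch of that "parse, don't validate" style in Python (`NonEmptyName` and `greet` are hypothetical names invented for this example; in Haskell the compiler, not a runtime check, would enforce the constraint):

```python
from dataclasses import dataclass

# Once constructed, a NonEmptyName is known-good, so downstream functions
# never need tests for the empty-string case.
@dataclass(frozen=True)
class NonEmptyName:
    value: str

    def __post_init__(self):
        if not self.value.strip():
            raise ValueError("name must be non-empty")

def greet(name: NonEmptyName) -> str:
    # No need to re-check for "" here: the type already rules it out.
    return f"Hello, {name.value}!"

assert greet(NonEmptyName("Ada")) == "Hello, Ada!"
```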
You can verify input data dynamically or statically. You don't have to use static type verification to verify data inputs, and dynamic validations are much more expressive and simpler. See the immensely expressive Clojure spec or Ruby validations and compare them to any type system.
Couldn't find anything on Ruby validations (drop me a link?) just something about the db ORM/engine.
However, Clojure's specs do indeed seem to be runtime checks. That seems very different from a type system; really it just packages functionality over the primitives rather than providing an expressive type system.
I may be understanding this wrong, but I fail to see the added benefit of moving checks to runtime, in contrast with the performance penalty it produces. Or maybe it's just an improvement to an otherwise dynamic language, like Erlang/Elixir has Dialyzer.
Seems to me it's more of a mechanism to handle the errors or produce code that handles inputs a static type system would prevent from being used in the first place.
Runtime checks are much, much more flexible (because they are ad hoc), more composable (they are written in the PL itself), _and_ more capable (you have the full breadth of the PL's semantics).
I could go on with concrete examples to demonstrate each of these merits but here is a big one:
In a statically typed architecture you typically see a typed domain object (ADT) -- e.g., Person -- and you let the type itself "validate" (e.g., Person.name and Person.age are non-null/required fields, etc.). Wherever Person flows you are obligated to adhere to these form/validation requirements, OR you must create a new type and map between the two. This is already obviously a bad idea.
In a dynamic language you can define specs universally (like you are obligated to in the static typing case) or you can define a spec for different scenarios and different functions.
Let's say I have a process (a sequence of function calls) that ultimately cleanses/normalizes a person's name (I'm making up a use case here). This code does not need age, but in a statically typed system you will be passing around a Person object, and all sources of data must "fill in" that age slot. There are a lot of responses from the statically typed people to this scenario [1], but if you follow them all down honestly, you will end up with a dynamic map to represent your entity, and you will be best served by a dynamic system like Clojure to code and compose it optimally.
[1] The only other response is to say "who cares if I have to fill in age", and to this person I say: I feel sorry for the people who, six months or a year from now, have to build on top of your code.
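The name-cleansing pipeline described above can be sketched over a plain map in Python (function names are invented for the example): with no Person type in the way, callers never have to fill in fields the pipeline doesn't use.

```python
# Each step takes and returns a plain dict; none of them mention "age".
def strip_name(person: dict) -> dict:
    return {**person, "name": person["name"].strip()}

def titlecase_name(person: dict) -> dict:
    return {**person, "name": person["name"].title()}

def normalize_name(person: dict) -> dict:
    return titlecase_name(strip_name(person))

# Works identically with or without an "age" slot.
assert normalize_name({"name": "  ada lovelace "}) == {"name": "Ada Lovelace"}
assert normalize_name({"name": " ada ", "age": 36}) == {"name": "Ada", "age": 36}
```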
Not an expert on spec, but I suspect that there are a lot of data that are not representable with combinations of predicates: GADTs, rank-n types, existential types, type families, data families, and other more advanced type system features.
Either way, the tradeoff is this: statically typed languages are guaranteed to be free of type errors, but no such guarantees can be made with runtime assertions. Whether or not you use a tool like spec to help you make those assertions, the fact that they happen at runtime prevents them from making any guarantees about type safety. Even if you use spec literally everywhere, as you suggest (and no one actually comes close to doing anyway), you are still not guaranteed of type safety. Dijkstra explains why: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/E...
I will say that immutability helps a lot. Clojure is dynamic but immutability-oriented, and refactoring with referentially transparent S-expressions is a delight -- and you don't need a type system for that.
It seems like you're arguing that dynamically typed languages are best because they don't get in the way of programmers who are good enough to pretty much never screw it up in the first place. Am I misunderstanding you?
But I also worry about the team that thinks a statically typed language is going to save them.
I think at the end of the day you have to do this job well or you don't get any guarantees. I've seen people make equal messes in strongly typed languages and dynamic ones; the PL is not the thing. But one thing that is true is that strongly typed languages tax you strongly; anyone who says otherwise (and enthusiasts often do) is lying, even if only to themselves.