
Yes, there are differences. Fundamentally, the Haskell type system is not just about correctness: it actually makes the language more expressive! You simply wouldn't be able to do some very awesome typeclass things with a system like this, for example. I've found things like QuickCheck--or even just using different monads--to be much more awkward without typeclasses.

The way this works is by allowing you to overload a function on its return type. It's very easy to imagine a function overloaded on its argument; for example, a hypothetical:

    to_string :: a -> String
This function can take an argument of some sort and returns its string representation. Many languages support this sort of overloading--it's very straightforward. But what about the symmetric from_string function?
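As a sketch of how a typeclass captures argument-side overloading (the standard library's version of this idea is Show/show; the ToString class and its instances here are made up for illustration):

```haskell
-- Hypothetical class for argument-side overloading:
-- the instance is picked by the type of the argument.
class ToString a where
  to_string :: a -> String

instance ToString Int where
  to_string = show

instance ToString Bool where
  to_string b = if b then "true" else "false"
```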

    from_string :: String -> a
Now this is trickier! Most languages would force you to be more explicit about which function you're using, so you would have to write something like Double.from_string or Integer.from_string and so on. This loses the symmetry inherent in the types and forces you to add a whole bunch of redundant noise to your code. Very reminiscent of Java's Foo i = new Foo()!

With typeclasses, you can actually write a from_string function like this. Moreover, it works using exactly the same type inference system you already know and love! In practice, this means that the overloading is very straightforward: it just chooses whatever type you need. And since the type system is pervasive, you know that it will always choose the right instance if possible: if you use from_string expecting an Int, it can't accidentally behave like Float.from_string because that wouldn't even typecheck.
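Here's a minimal sketch of what that looks like. The FromString class is hypothetical (Haskell's standard library spells this idea as the Read class and its read function), but the mechanism is the real one: the compiler selects the instance from the type the caller expects.

```haskell
-- Hypothetical class for return-type overloading:
class FromString a where
  from_string :: String -> a

instance FromString Int where
  from_string = read

instance FromString Double where
  from_string = read

-- The instance is chosen by the type demanded at the call site:
parsedInt :: Int
parsedInt = from_string "42"      -- Int instance

parsedDouble :: Double
parsedDouble = from_string "42"   -- Double instance, same source text
```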

This turns out to be useful in a whole bunch of different places. One of my favorites is with numeric literals: in Haskell, numeric literals are polymorphic. This means that if you write f 10, 10 will be the right type--integer, double, arbitrary precision integer, whatever--for the function. No more arbitrary sigils after your numbers (like 10L). More importantly, this makes new numeric types feel just like native ones[1]. So you can use normal literals with rational numbers or constructive real numbers or 8-bit words or symbolic bitvectors (most of which I've used in real code). These types no longer feel like second-class citizens!
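Concretely, a literal like 10 elaborates to fromInteger 10, and fromInteger is a method of the Num class, so the same source text inhabits whatever numeric type the context demands (this part is standard Haskell, not an assumption):

```haskell
import Data.Ratio ((%))
import Data.Word (Word8)

-- One literal, three types; no sigils like 10L anywhere.
ten_double :: Double
ten_double = 10

ten_rational :: Rational
ten_rational = 10        -- i.e. 10 % 1

ten_word :: Word8
ten_word = 10
```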

Haskell type inference is even more useful than that though. Not only does it reduce the burden of using the type system, it also actually enables new workflows. Instead of giving the compiler your types and waiting for it to show you where they're wrong, you can actually just ask the compiler what type it expects. You can use this to interactively grow your program, informed by your types. The newest version of GHC will make this more convenient by adding a feature called "type holes", which lets you leave holes in your code and will tell you the types of those holes. Once this gets good editor support, it'll be incredible.
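A rough sketch of that workflow: you write an underscore where you don't yet know what goes, and GHC reports the type it expects at that position (the pairUp function here is a made-up example).

```haskell
-- With a hole, GHC rejects the program but tells you what it needs:
--
--   pairUp :: [a] -> [(a, a)]
--   pairUp xs = zip xs _
--
--   error: Found hole: _ :: [a]
--
-- Informed by that report, you fill the hole in:
pairUp :: [a] -> [(a, a)]
pairUp xs = zip xs xs
```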

The Haskell type system--partly because it's pervasive and partly because it controls effects--also makes some high-level optimizations like stream fusion possible. The basic idea is that it can rewrite multiple logical traversals over an array into a single tight loop. The crazy thing is that this optimization is actually specified by a library, and not built into the compiler! So there are actually a whole bunch of rewrite optimizations like this, most of which are only guaranteed to be correct thanks to the type system. Since they're somewhat tricky to debug even with all the existing guarantees, I don't even want to imagine what it would be like with a weaker type system.
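A toy version of a library-specified rewrite, in the spirit of stream fusion: a RULES pragma (a real GHC feature, applied with -O) asking the compiler to collapse two list traversals into one. GHC's own libraries ship rules like this; the transformation is only valid because the functions involved are pure and can be fused without reordering effects.

```haskell
-- Library-level rewrite rule: fuse two map traversals into one.
-- GHC applies this during optimization; it changes no observable result.
{-# RULES
"map/map fuse" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

-- Written as two logical passes; eligible to compile to a single loop:
doubledThenInc :: [Int] -> [Int]
doubledThenInc xs = map (+ 1) (map (* 2) xs)
```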

Honestly, I think you would be missing out on most of the real advantages Haskell gives you. And for what? At least in my experience, you do not get much of a boost in productivity or expressiveness, even without invoking the optional type system.

So yeah: lots of deeper tradeoffs.

[1]: Take a look at "Growing a Language", a brilliant talk by Guy Steele that goes into why it's important to allow new additions to feel just like native parts of a language.

http://www.youtube.com/watch?v=_ahvzDzKdB0



All true, but at least Clojure (I don't know about other dynamic languages) provides some of this power with protocols (which are very efficient) and multi-methods. core.logic provides the missing pieces.

> Honestly, I think you would be missing out on most of the real advantages Haskell gives you. And for what?

Forget about a static type system in particular: every bit of power you get from a language has a price. Sometimes the cost is performance and sometimes – and this is often a much higher cost – language complexity.

When choosing a programming language, you should not aim for the most powerful language, but for the one that gives you the power you need for a price you're willing to pay. Clojure strikes a pretty remarkable balance: a lot of power at a very low price (it's very simple, very easy to learn, integrates extremely well with legacy code, and can give you great performance when you need it). It is arguable whether Haskell gives you more power, but even assuming it does, this power comes at a much higher cost. Haskell is not easy to learn, and it's not simple to integrate with legacy code. The right question, I think, is how much are you willing to pay for extra power? Is extra power worth it?


Haskell, fundamentally, is actually quite simple: it's just the lambda calculus. You could fit the typing rules on a single piece of paper, and the evaluation rules are even more trivial. It is very difficult to come up with a simpler model than the lambda calculus.

Sure, it might be difficult to learn, but that's very different from being complex. Haskell is, if anything, different--but this is also why it's so much more expressive! I think this is a great example of Rich Hickey's distinction between simple and easy. Also, learning a language to some level of proficiency is an O(1) cost, but the benefit is O(n) in how much you use it, so I really don't think it should weigh that heavily.

Avoiding a language because it seems difficult to learn is just shortsighted.

Now, Haskell does have some complexity that other languages don't. But this is mostly in the compiler and runtime system: things like the incredible parallelism and concurrency features. There are languages that have much simpler runtimes, but Clojure isn't one of them: the JVM is probably the most complex runtime in existence, full stop.

Legacy code, also, is more a matter of being on the JVM than language design. If you have a bunch of legacy code in C, it'll fit better with Haskell than Clojure. Calling out to native programs from the JVM is a real pain. The Haskell FFI, on the other hand, is relatively nice.

More generally, I think the idea that every bit of power has a price is not accurate. I agree that language design is a complex optimization problem along several axes. However, this does not mean every improvement along one axis carries a cost along another--not every language is on the Pareto frontier! Since all languages are imperfect, you can get strict wins.

Also, as I understand it, protocols do the easy part I talked about. Certainly useful, but not very interesting. I do not see how you would accomplish return type polymorphism in Clojure--something like the from_string function or polymorphic numeric literals--because the information needed is simply not there. I'm not saying it's impossible, but I've certainly never seen it done.



