Algebra Driven Design (algebradriven.design)
137 points by agentultra on Sept 1, 2020 | 144 comments


The top comment on yesterday's post on the Keli language (https://news.ycombinator.com/item?id=24331635) sums up FP's failure to become mainstream pretty clearly IMO:

> The other reason is that many FP users are too enthusiastic about creating abstractions. This is of course something that FP is exceptionally well suited for. An api that was written to simply process a list of Orders into a Report might be abstracted into a fold on some monad, which at first seems a great idea. But if you're not careful, readability suffers a lot. It's much easier to get to know an application when its code deals with business objects that you already understand well, than to read hundreds of lines of code that deal only with abstractions.

Industry programmers didn't sign up to learn about category theory. They do what they do because they like building things. There is a subset of those Industry Programmers that does indeed like learning about category theory and all of these other concepts necessary to grok languages such as Haskell - but it isn't a critical mass by any means.

That's why I think languages like Elm are a good example of "the sweet spot" of FP languages. Yes, Elm lacks the horsepower that other pure FP languages have, but at least it doesn't lead a developer to think "hey, I didn't sign up to learn about Arrows and Monads. I just want to write a TODO list".


I think this is a long term investment that goes beyond engineer / language /etc. preferences.

It's a matter of abstraction 'stability'. Every abstraction has a cost, so modeling something you don't yet fully understand, or something that keeps changing with respect to the abstraction, will yield abstractions that are incomplete, hard to read, etc.

Many things are being remodeled over and over (incompletely, with bugs, etc.) when in fact they don't change much. These are good candidates for good abstractions. I don't think you'd want to do 3d graphics without a system that understands the algebraic aspects (e.g. orthogonality) behind rotations, translations, etc. Categories generalize this all the way, so optimizations would be possible at a whole different level (makes me think about what Tensorflow enables).

I also think you only need one category (just like you only need one group / semigroup concept) for a model and it could be language agnostic.


> 95% of Go programs could be replaced with a single traverse

This is a rather abrasive way of putting the point. I would have found it difficult to even entertain back when I had a Gopher squishee on my desk and was very much in love with Go.

The point, though, is that for loops in Go are unstructured but very flexible. That lack of structure lets you use whatever organization you like, probably influenced by previous languages you've used.

Structured recursion schemes like folds and maps enforce a certain pattern on this; the same goes for traverse.

That idea isn't specific to traverse though, it generalizes to all of the Haskell abstractions. The reason people talk about composability so much is because composable code means re-usable, concise (as in KISS), and elegant code.
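
To make the structured-recursion point concrete, here is a minimal sketch (the `lookupOrder` function and its error messages are invented for illustration): a Go-style loop that validates each item, collects the results, and bails out on the first failure collapses into a single call to traverse.

    -- Hypothetical validation step: succeeds for positive ids, fails otherwise.
    lookupOrder :: Int -> Either String String
    lookupOrder n
      | n > 0     = Right ("order-" ++ show n)
      | otherwise = Left ("bad id: " ++ show n)

    -- traverse supplies the looping, the collection of results, and the early
    -- exit on the first Left, all implied by the one combinator.
    lookupAll :: [Int] -> Either String [String]
    lookupAll = traverse lookupOrder

    -- lookupAll [1, 2, 3]  == Right ["order-1","order-2","order-3"]
    -- lookupAll [1, -2, 3] == Left "bad id: -2"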


For me the best analogy is carpentry vs. architecture. I’m not saying anything about FP In particular. But there’s a cohort of people that just want to “build stuff” like you said. And then there’s people who want to architect a solution because they feel that what they’re building needs to last, or can’t fail.

I don’t think either mentality is wrong. You wouldn’t hire an architect to build you a closet, and you wouldn’t hire a carpenter to build you an office building. It’s about context.

I think each type of person bothers each other when they try and apply their thinking as dogma to all contexts. I can name a million times where a global variable was fine, or a long function was fine, or some duplication was absolutely fine. I can also name a million more where I really wanted exactly one location for a piece of knowledge, and a million more where I wanted to separate database queries from in-memory logic.

I think the problem is when you use the term “Industry Programmer.” Not all programmers want to work on the same thing. This theoretical “average” person who just wants to “get work done” doesn’t exist in my experience. It’s just one type of programmer who likes one type of context. That’s ok, but no one should hate on people for working on what they’re interested in.

It’s the same thing with math. Many people feel that math is not practical or relevant. Until you want to do something like build a skyscraper, or a house, or design a power grid. Then all of a sudden math isn’t an option. If we only listened to the “average” person, we probably wouldn’t have that many useful things.


> The other reason is that many FP users are too enthusiastic about creating abstractions.

Are they? I'd argue most real-world Haskell code doesn't abstract enough, perhaps out of over-reaction to this untrue assertion that seems prevalent these days.

> This is of course something that FP is exceptionally well suited for. An api that was written to simply process a list of Orders into a Report might be abstracted into a fold on some monad, which at first seems a great idea. But if you're not careful, readability suffers a lot. It's much easier to get to know an application when its code deals with business objects that you already understand well, than to read hundreds of lines of code that deal only with abstractions.

A lack of readability isn't the fault of re-using powerful abstractions at the base; it's a failure to make the concrete implementations more obvious.

It would be most constructive if we could work with examples, but I don't know what the original commenter had in mind since that hasn't been my experience with industrial Haskell.

> Industry programmers didn't sign up to learn about category theory. They do what they do because they like building things. There is a subset of those Industry Programmers that does indeed like learning about category theory and all of these other concepts necessary to grok languages such as Haskell - but it isn't a critical mass by any means.

Why all the conversation about category theory? I've read most of Algebra-Driven Design and I think you've mentioned it more times than the book!

> Yes, Elm lacks the horsepower that other pure FP languages have, but at least it doesn't lead a developer to think "hey, I didn't sign up to learn about Arrows and Monads. I just want to write a TODO list".

You can write your TODO list in Haskell without thinking about Arrows for sure, and very likely without thinking about Monads.
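
As a minimal sketch of what that looks like (all names here are invented), a TODO model can be plain data plus ordinary functions, with no Monad or Arrow vocabulary in sight:

    -- A TODO item and a few pure operations over a list of them.
    data Todo = Todo { title :: String, done :: Bool }

    addTodo :: String -> [Todo] -> [Todo]
    addTodo t todos = Todo t False : todos

    complete :: String -> [Todo] -> [Todo]
    complete t = map mark
      where mark todo | title todo == t = todo { done = True }
                      | otherwise       = todo

    pending :: [Todo] -> [Todo]
    pending = filter (not . done)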


Which industry are you talking about (maybe FAANG)? The kind of factories-making-factories code you see in industry Java is mind-boggling, and none of it has to do with business logic.

Asking OOP factory makers to learn category theory is going to be a difficult task, but there are a lot of FP languages that don't require learning category theory. OCaml has been around for ages, and even Microsoft has its own mainstream ML-flavoured language called F#. Abstraction doesn't seem like the reason to me; it might be inertia.


Just to be totally explicit here, there are no FP languages that require learning category theory. There is no reason to conflate [a small subset of] research on programming languages with the practice of using those languages day-to-day to Actually Get Shit Done.


The more I do stats in the workplace setting (esp. PCA on the one hand, and time series models on the other) the more I think that statistics are just a way for you to zoom in on areas of interest in your data.

The closer I could get to category theory in the workplace, the happier I would be, or would have been. My 2 cents on the matter: the more you work with data, rather than with sets that are placeholders for containing data, the more concrete your questions and answers become, and they eventually have very straightforward solutions, or at least straightforward deliverables. In terms of the programming view, this means that any abstract solution should have a counterpart concrete version or versions. The reason for abstraction should be to decrease the time involved in solving something.

Abstractions are meant to reason about things that you want to be broad enough to include suppositions. The whole of mathematics is studying which statements follow from which axiom statements. Once you settle on the data, it becomes less of a mathematics exercise and instead is a coding exercise to find the peculiarities of the concrete example, and then the need for abstraction becomes less.

Abstraction gives me a lot of joy; but you could instead do mathematics by enumerating all combinations of symbols and checking whether some are valid proofs of theorems. Programming will always have some context; often the context means peculiar data, and hence you can solve the problem without any abstractions at all. However, the larger the data, the more difficult this gets, and eventually concrete approaches become infeasible. Again, one can think about this as enumerating solutions. If you had all the time in the world, you could just enumerate all programs, especially more concrete ones, and pick the one with the right output for the input domain.

If you build a wooden table, you can get by with a small, ordinary toolbox. If you want to build a skyscraper, I think you could still get by with a small toolbox, but it may take you an interminable amount of time. And moreover, without the abstraction of a large crane-like device of sorts, the design of a skyscraper built with an ordinary workman's toolbox becomes much more difficult, if not impossible. The design of a crane is, to me, an example of the right depth of abstraction.

I am not sure whether my analogy is clear, but my point is simply that abstraction is increasing the plausible data space and concretisation is selecting from possibilities. Another way to think about it: {{car, bike, plane},{spoon, knife, fork}, ...} abstract to, but specialise from {{vehicles}, {cutlery}, ...}.

You want some kind of balance in this thought process that fits the code base you are building or maintaining, or the problems or questions that you are solving.


You don’t have to know category theory to use Haskell. You have to learn some basic concepts like monads, but complaining about that is like complaining “I don’t want to use Java because then I’d have to learn what an object is”.


I disagree that objects and monads are the same level of building block in the respective paradigms.

Objects <> Java and functions <> Haskell might be a better comparison.

Not to mention that there's already a person's intuition about what an "object" might be before they even learn programming. But there's no real-world analog to what a Monad is. So, yes, you do get into the situation where a programmer will say, "what? I didn't sign up for this" when being told to learn Monads to use Haskell.


There’s also no real-world analog for a For-loop, but it’s still not a credible complaint to avoid learning C.

Also, just because java objects are called “objects” and not some other word with less colloquial meaning doesn’t mean that they actually map well to real-world objects. Fully understanding the true semantics of Java objects is at least 100x as complicated as fully understanding the true semantics of monads. The only thing it’s easier to do with Java objects is to delude yourself into thinking you might understand them, because they have a familiar name.


That's not true at all, repetition is natural.

map is also quite natural, but isn't it telling that the way a human will do this on paper is by repetition of the same task in sequence... i.e. a for loop.

> The only thing it’s easier to do with Java objects is to delude yourself into thinking you might understand them, because they have a familiar name.

i'm not sure why you consider this 'delusion'? that familiarity of name and concept is precisely why its so easy to learn.


> map is also quite natural, but isn't it telling that the way a human will do this on paper is by repetition of the same task in sequence... i.e. a for loop.

Would they?


this is not true.

Objects and classes are fundamental concepts people are familiar with from real-life experience. 'Teaching' them is barely necessary at all; it's more a case of assigning labels and some specificity to already familiar concepts.

Monads are not like this at all. They are not a basic concept, and they are not involved in day-to-day life from a very early age.


> Objects and classes are fundamental concepts people are familiar with from real-life experience.

The idea that actions one can perform on real-world items are actually embedded in those items themselves (as opposed to being imposed on them from the external world) is neither universally accepted ontology nor required understanding for day-to-day interaction with them.


> 10x is often cited as the magic number beyond which technology is good enough to overcome network effects. I'm personally convinced that functional programming is 10x better than any other paradigm I've tried

Better for who? Bad code that ships is better than good code that doesn't. From a business perspective, I don't care if it's written in COBOL as long as it ships.

The critical missing piece in the functional argument is an example (or probably several) where a company using functional programming consistently outperformed one that wasn't. By default, the business will be opposed to using anything that isn't one of the big programming languages because haskell/scala programmers are expensive. Why would I pay a bunch of expensive haskell programmers to build a chat app? I could build something that got me 80% of the way there for 30% of the cost.

I think Haskell can flourish in places where correctness matters - fintech, factory control systems, and weapons come to mind. But in the vast majority of places that employ programmers, correctness doesn't actually matter. One argument for this is that requirements change so often that "correct" is impossible to define, so the overhead of being correct is actually a bug, not a feature.

With all that said, I do wish that we had better software, and that correctness mattered more. I like functional programming, and I think learning it can make any programmer better.


“Bad code that ships is better than good code that doesn't”

Maybe better for your bottom line (no criticism, we all have to eat), but not better for the world. Look around at our software universe: bad code that shipped, everywhere.

Bad code that ships gets us Windows.


> Bad code that ships gets us Windows.

As much of an enthusiast Linux user as I am, I'm pretty sure that the world would be a worse place without DOS & Windows.


How so? I mean, obviously it's impossible to make a decent educated guess since basically the entirety of most peoples' interactions with computers stems from the near-monopoly Microsoft had on personal computing in the 90s. But I'm still curious to know what you predict would be different if MS didn't strike their deal with IBM.


MS Windows' presence in the market made PCs true commodities and drove down the price of home and office computers in a way none of their contemporary competitors would have achieved.

Most of their competitors were tied to hardware, and received a lot of their revenue and profit from the combination of hardware and software sales. MS only needed to sell the software, and let HP, Dell, Gateway, and others compete on hardware prices. This probably put more computers out there (within that time span) than would have been achieved without them.


It's less about the deal, and more about MS following a pretty strict philosophy: if the software you bought runs on OS version X, the only reason it may not run on version X+1 is a dumb bug in that software.

They also kept at making computers accessible to everyone, and as much as they fought competition, I have to admit they did that.


> How so? I mean, obviously it's impossible to make a decent educated guess since basically the entirety of most peoples' interactions with computers stems from the near-monopoly Microsoft had on personal computing in the 90s. But I'm still curious to know what you predict would be different if MS didn't strike their deal with IBM.

I've occasionally seen speculation that in the absence of the de-facto DOS monoculture that deal created, CP/M may have been the most likely desktop platform winner, at least initially.


“Bad code that ships is better than good code that doesn't”

When you work on code long enough two things will eventually happen:

  Shippable Bad code will eventually be unshippable.

  Unshippable Good code will eventually be shippable.
Your quotation simplifies a concept by hiding the extra dimension of time. Bad or good, shippable code depends on a point in time.... It could be shippable now, or shippable in the future...


Pretty terrible example.

Bad code gets us the most used, most popular desktop (and maybe server, definitely server for certain industries) operating system in the world? Mac is great, I have one, but Apple made its cash stores on the back of the iPhone, not MacOS.


IMO MacOS is not an example of good code, or good engineering.

The engineering decisions made are often terrible, including recycling the legacy of NeXT for what I assume were ego-based reasons...

Nothing in the history of Windows compares to that scale of bad.


I wasn't clear, I didn't mean to imply that MacOS is technically better than Windows, just that Windows is by far the most popular OS and was trying to cut off any rebuttal boiling down to "but I use a Mac."


But good code that doesn't ship gets us nothing. Awful as it is (and especially as it was), Windows was good enough and cheap enough for a whole lot of people to be able to afford computers that they could use.


This assumes that bad code is, at worst, worth nothing, which isn't the case. Bad code that crashes a mission-critical system is absolutely worse than nothing. Or to take your Windows example, I could argue that the problem with Windows was the opportunity cost: if Windows hadn't taken over the world, maybe we would have more choices when it comes to desktop OSes.


No. Bad code that crashes a mission-critical system may in fact be worth less than nothing, but my argument doesn't assume that. It assumes that Windows, though bad code, was still worth more than nothing. What's more, it takes the massive success of Windows as market-based evidence that Windows was worth quite a bit more than nothing - or at least, computers running Windows were worth something.

Opportunity cost isn't usually considered in terms of a market, it's considered for an organization. If Microsoft hadn't done Windows, could they have done something more successful with the resources? That's pretty hard to imagine. Certainly you can say that Windows was bad for (the rest of) the desktop OS market. But the fact is, no mind control rays were involved. Microsoft created an OS that unlocked enough functionality for the average user, for little enough cost, that Microsoft took over the world. (Yeah, I know, lock-in and monopoly shenanigans. It's true. But none of it would have mattered if the OS were not good enough to mostly work for most people.)

I can't believe I'm being an apologist for Windows. Gack.


Yeah, fair enough. I guess at least from MSFTs perspective bad code was better than nothing.


> Bad code that ships is better than good code that doesn't

But for how long? At some point, the technical debt that comes along with rushing out bad code will come back to bite you. You might be able to get a workable prototype for cheap but if it's a project that you want to sell and support long term, you may just be setting yourself up for failure down the road.


I think you misunderstand what "correctness" is really about. To me at least, it's about creating a small set of powerful abstractions that can be applied generally to any problem. In order for those abstractions not to "leak", they need to be really rigorously defined. The alternative, I think, is a lot of ad-hoc abstractions for each application that break as soon as requirements change. I really, really disagree with the idea that bad code that ships is always better than code that doesn't. That is how we get software that is riddled with security issues. Even if the specific application isn't that important, it very well might be a major catastrophe if it has a severe security vulnerability that allows an attacker to get a root shell on a server in your internal network.


> I really, really disagree with the idea that bad code that ships is always better than code that doesn't. That is how we get software that is riddled with security issues. Even if the specific application isn't that important, it very well might be a major catastrophe if it has a severe security vulnerability that allows an attacker to get a root shell on a server in your internal network.

Your argument fails to be an argument at all.

Can you justify that first sentence? I do not see any convincing reason why you disagree... or even any reasons here tbh.


I assume by first sentence you mean the statement that "That is how we get software that is riddled with security issues"

The argument is that you assume that the minimum value of code is zero, so that any code that ships has to be better than code that doesn't ship because at worst it is worthless, which is the same value as code that doesn't ship.

My point is that bad code can be worth less than zero if it actively causes harm. So for example, code that has a critical security vulnerability which leaks personal information is worse than nothing. You would be better off not shipping anything at all than shipping something that causes actual harm.


You are completely jumping to the conclusion that when the author says "10x better" he means some dimension that you do not care about.

Programmers care about productivity too. And when somebody puts a number as in "10x better", that is usually the metric they are talking about. (Although this one seems so fuzzy that the statement is meaningless.)


I don't know many programmers who are jumping to use Haskell for their personal projects. What you use at work is usually determined by the surrounding infrastructure but at home you can use whatever you want for productivity and yet most people continue to use their favorite non-functional language. If it's really 10x better and can overcome network effects then shouldn't we see a lot of Haskell personal projects?


Did you read the article? This exact argument is made and an entire book was written to address it.


> I don't know many programmers who are jumping to use Haskell for their personal projects.

If you don't use Haskell or are suspicious of it, would you expect to know other programmers who use it?

We have data that can more strongly make the opposite claim that people do use haskell for personal projects:

> The functional language Haskell is the tag most visited outside of the workday;

https://stackoverflow.blog/2017/04/19/programming-languages-...


I think Haskell may not be useful for a chat app (but note that WhatsApp was built on top of Erlang, a somewhat functional language).

I think Haskell and other functional languages can really improve software in domains with complex business logic and rules which is most of the software that deals with the real world and is useful to the world.

Correctness does actually matter a lot in software; nobody would trust a calculator that doesn't give the right results. Of course people can tolerate edge cases, but it's far better to remove them up front.


As a professional Haskell programmer for a bit now, I see no problem with a chat app in Haskell.

In fact, something close is wire[0] whose server source code is here[1].

They are a secure alternative to slack and mattermost for the curious.

0: https://wire.com/en/explore/competitive-advantage/ (this gave the best explanation in the comparison section of what it actually is imo)

1: https://github.com/wireapp/wire-server


This notion that functional programming is about correctness kept me away from static types and FP for the better part of my career.

I mean - who cares about correctness when I'm trying to ship yet another cookie-cutter web application that I don't even know is going to see more than a thousand users in its life.

What I cared about was speed. A few bugs in production is a better trade-off than all the mathematical mumbo-jumbo and slow, meticulous programming that Typed FP seemed to demand.

But to my surprise, after getting started with ReScript/Reason/OCaml, I recognized that correctness is just a side-effect (!) of this mode of programming. (Note that unlike Haskell, OCaml is an imperative programming language with as much or as little mutation as we need. We can almost line-by-line translate a regular mutation-heavy piece of Python or JavaScript code to OCaml, if we wanted to.)

Typed FP, contrary to what I'd come to expect, is all about speed. Quoting from something I wrote a while ago:

"Refactoring a typed FP program is safe, but menial. When we say safe - it means no anxiety. There is going to be tons of mechanical work for every major type refactor - going thru all the compiler errors and fixing them one by one. That can't be avoided. We've been in multi-day refactoring sessions where we had to dredge thru page after page of compiler errors before we could even run the application. But - it is safe - we know that once it compiles, there won't be any mistakes.

That gives us the freedom to build fast and loose - and take stock periodically before we have to abstract things out and tidy up the place. We should compulsively rely on that safety. Typed FP forces us to go slow in places where a dynamic environment would've allowed us to blaze through. For all that trouble, we get unmitigated refactoring dexterity and we must exploit it to benefit from the paradigm."

There are many places where a dynamic, imperative approach is faster than a typed FP approach. But there are as many or more places where it is the other way round. I'm now faster with this way of programming than with anything else, and I wish that more literature around Typed FP communicated this rather than going the indirect route of correctness and expecting people to pick up that correctness-by-construction results in increased velocity. That's a difficult jump to make unless one has written a non-trivial amount of code in that style.

This book by Sandy - I've read the introduction and skimmed thru the Tile construction equations - is to me poetry. I'm having trouble fitting in the idealistic notion of the Escher tile to my imperfect real-world domain, but otherwise, what a book!


Over the past couple of years I've become proficient with haskell and can throw out an arbitrary program with it about as easily as I could with Python, which I have 10 years of experience with. I've built and worked on large real-world projects for both, and I've noticed something quite interesting at the threshold of when it becomes too complex to comprehend how all the logic interacts. With python I feel a mild panic, because I know that all I have are tests and human QA covering a rapidly shrinking fraction of possible use-cases. Adding features beyond this point can sometimes feel like groping in the dark, perhaps with a debugger serving as a candlelight.

On the other hand with haskell, I know that most functions I write are completely deterministic, and will never have runtime exceptions regardless of how complex the compositions are. I can engineer my application to contain perilous code in a well-defined minority of functions and use rigorous testing to make sure they're right. There are many other excellent safety features, but these in particular give me a huge peace of mind.

In summary, Python excels when you need to do simple high-level tasks like sending an email or running a small REST API, but the development rate falls sharply when the logic and state are deeply composed. In contrast, Haskell, with its static types, immutability, terseness, and emphasis on safety (in both the language and library ecosystem), lets you maintain development velocity far beyond that point.


> I mean - who cares about correctness when I'm trying to ship yet another cookie-cutter web application that I don't even know is going to see more than a thousand users in its life.

I feel this perspective, but I don't understand it. To me, correctness means that the program is actually doing what it's supposed to. If we don't care about correctness, isn't the particular program we're writing pretty arbitrary?

Correctness doesn't seem like some ivory tower ideal to me. I want to write software that meet a need for my users. Writing correct software means uncompromisingly meeting that need.


Correctness and robustness happen to all software, when exposed to end-users for a long enough time.

Internally the software would be a ball of mud, a fragile patchwork of incoherent code that nobody wants to touch. A lot of these programs become immutable logs - if you want to change a feature, you add more code because you're afraid to touch anything that already exists.

Yet they are robust - all the bugs in the commonly trafficked code paths have already been sussed out, and from outside the software looks like an impenetrable, rugged piece of craftsmanship. This has happened to me in the early years of my career (proud of the robustness, not proud of the code), and I've witnessed it so many times outside. So yeah - robustness and correctness are either a matter of correct-by-construction, or correct-over-time.


> Yet they are robust - all the bugs in the commonly trafficked code paths have already been sussed out

There are two ways this can be true: if the code paths users traffic are debugged, and if users only traffic code paths that are not buggy. In the end, all and only the code paths that are used are useful. Isn't that a truism?

Like I said, I feel this perspective (and thank you for responding!). But I'm kind of sick of working on codebases that are absolutely littered with dead code paths that are hopelessly interwoven with live code paths, to the point that when you know you're breaking something, you hope you're only breaking something that isn't used. To me, it's a really depressing, fatalistic worldview.

> A lot of these programs become immutable logs - if you want to change a feature, you add more code because you're afraid to touch anything that already exist.

Amen to that. Can we as an industry find ways to do this better in the first place, instead of resigning ourselves to "immutable log" as an inevitability?


I'm naive, but I think the adoption of Typed FP, and the resulting cultural education and change, is one route to solving this problem. But it is a social and economic problem more than anything else. Or I guess - since the spectrum of competence in other fields of human endeavor is distributed widely (a bell curve?), we'll have to expect it to happen to software as well. Not a very optimistic thought, but at least we can be content with what we have :)


> Yet they are robust - all the bugs in the commonly trafficked code paths have already been sussed out, and from outside the software looks like an impenetrable, rugged piece of craftsmanship.

I disagree. I think they only feel robust. More importantly, I think this mode of development is defensible and always gives one something concrete when asked "so what's the product of these 3 days of work?".


> Bad code that ships is better than good code that doesn't.

Sometimes, but not always. The bad code that ships however always feels better and gives the feeling that progress is being made.

That works fine until a wall is hit and things have to be rewritten or, worse, hacks have to be added. That ends up taking more time than the original solution, but guess what... no one is going to bring that up.

That means the correct solution never has a chance to earn the points it needs to be weighted more highly the next time a decision between correctness and speed is made.

There are pre-built frameworks for handling the common "ship the bad code, never say it's so bad it's a net loss, and just iterate on the crap on top of crap you have" approach, so that all levels of the organization feel more comfortable.

> With all that said, I do wish that we had better software, and that correctness mattered more.

It can't matter until technical debt is analyzed on a meaningful time scale and actual observations happen, rather than what amounts to making things seem like they're going smoothly.


> Bad code that ships is better than good code that doesn't.

Logical fallacy: false dichotomy.

What about good code that ships? Best of both worlds.


Note: Before I possibly get railed for not knowing functional programming or anything like that, I actually really like functional programming, for a limited set of applications (mostly compilers, and other algorithm heavy stuff).

> What about good code that ships? Best of both worlds.

Right, but what is good code?

The author of this page claims that code out of the "Algebra-Driven Design" paradigm is good code. But where's the proof?

It's not enough to pull out the math textbook and give us fancy words like monoids, lattices, whatever (this is coming from someone who loved algebra classes far more than analysis classes in my math degree).

I'd really like to see real-world code that uses this. And by real-world code, I mean code that isn't some contrived example, code that actually runs somewhere and provides real-world value: a semi-popular website (e.g. HN, Lobsters, Reddit), something that runs on a robot, or whatever. I don't know. The common theme of all of these Haskell/Idris/SomeOtherMathyLanguage posts is that almost none of them bring any real-world software to the table that has been tried and tested by the real world (consumers, customers, whatever), except perhaps for GHC, a handful of well-maintained Haskell libraries, the Idris compilers, and that Haskell spam filter at Facebook.

We keep saying that functional programming is much better than any other paradigm. But the share of functional programming languages is tiny in comparison with the imperative/OO family of languages, and there's a reason for that. Most functional languages are hard to use, have very little library support, and functional abstractions don't _really_ matter in the long run when it comes to bugs. Sure, C/C++/Java code is never bug free, but it's not tremendously difficult to write real-world applications using these languages, simply because of how well their memory models/object models/features map to the way most programmers think.

I just find it absurd that on these functional programming posts, everyone just keeps saying that the only good code is functional code. It's simply not the case.


> C/C++/Java code is never bug free, but it's not tremendously difficult to write real-world applications using these languages, simply because of how well their memory models/object models/features map to the way most programmers think.

My argument is that it has nothing to do with object orientation mapping to the way most programmers think, and is almost totally a function of familiarity.


Facebook rewrote Messenger in ReasonML. Before, they got hundreds of bug reports a year. Afterwards? Fewer than ten.


I worked on a 6 month rewrite of the core of a system. Critical bugs were drastically reduced. We rewrote in the same language.

I would much rather use ReasonML than js but the Facebook result unfortunately doesn't prove much.


Monoids: Map-Reduce and Hadoop

Monads: future chains, streams/sequences

Immutable data structures: the transaction log for distributed databases
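
As a rough sketch of the first of those (field names and the per-record mapping are invented for illustration): once the summary type is a Monoid, partial results computed on different workers can be merged in any order and grouping, which is exactly the property map-reduce leans on.

    import Data.Monoid (Sum(..))

    data Stats = Stats { count :: Sum Int, revenue :: Sum Double }

    instance Semigroup Stats where
      Stats c1 r1 <> Stats c2 r2 = Stats (c1 <> c2) (r1 <> r2)

    instance Monoid Stats where
      mempty = Stats (Sum 0) (Sum 0)

    -- "map" each record to a summary, then "reduce" with the monoid operation;
    -- associativity is what lets the reduce run on many machines at once.
    summarize :: [Double] -> Stats
    summarize = foldMap (\amount -> Stats (Sum 1) (Sum amount))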


Very good point. A lot of the bad code I have seen doesn't ship _because_ it is bad. It can't even get through the testing phase because it is full of crashes and that kind of thing. Whereas good code usually has a good design that results in _less_ code, and development speeds up as you go on. So the end of the project feels like putting cherries on top of a cake instead of trying to balance cherries on stilts.


There's more to coding than churning out business logic. Even if not used in production, learning a different paradigm can give an enlightening perspective on one's ability to solve a problem.


Can you help me understand this logic - what business can make money off of software that doesn’t work?


Bad code still works. It may not work well in some regard:

- Hard to maintain

- Slow

- Hard to extend (slightly different than maintain, but related)

- Crash-prone (but infrequent enough)

- Corrupts data (but infrequent enough)

- Just wrong enough to be annoying but not wrong enough to be useless

I had colleagues on a project (I really wanted to help them fix it, but was never brought on board) that was truly bad code. There was copy/pasta everywhere. The same function was written a half dozen or more times with slight differences (or none; no idea why it was copied in the first place), so bug fixes would only partially address an issue because not all copies were fixed. It was massive (hundreds of thousands of lines by this point, maybe over a million now) with a mostly new team (the original coders were contractors, and the original team on our side all moved on). The size and quality made it hard to extend in a sane way because there were so many unknowns. It had random concurrency issues which would only crop up reliably when fielded.

The fundamental design was flawed. They had to connect N different protocols (each different but communicating the same kind of information) to each other. Instead of coming up with an intermediate format they did 1-to-1 connections, which meant an explosion in complexity. Instead of N translation units to an internal format and N translation units out, it was N * (N - 1) translation units, every protocol needed a way to translate to every other. The sane way would mean adding a new protocol required 2 new translation units, the insane way required 2 * N (the new protocol to all existing protocols, all existing protocols to the new one).
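
A small sketch of the intermediate-format alternative described above, under invented names (`Internal`, `Protocol`): each protocol supplies exactly two translations, and any pair is bridged through the common format, so adding a protocol costs 2 translation units rather than 2 * N.

    -- Hypothetical common representation of the information every protocol carries.
    newtype Internal = Internal { payload :: String }

    class Protocol p where
      toInternal   :: p -> Internal
      fromInternal :: Internal -> p

    -- Bridging any protocol to any other goes through the common format,
    -- so the number of translators grows linearly, not quadratically.
    translate :: (Protocol a, Protocol b) => a -> b
    translate = fromInternal . toInternal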

It shipped, it made millions (in revenue) every year, but it was not good code.


You specifically said correctness. You brought up speed and maintainability. I specifically want to know how _incorrect_ code, which is code that doesn’t perform its intended function, can make a business money or have any value at all.


The project I described is arguably incorrect, it did not fully satisfy the customer needs for many reasons. It was unstable, it had errors in its outputs, but it was correct enough that they still bought it. It brings in millions a year in revenue, even though there is competition (though not much) in that niche market.

There is a wide span between totally 100% correct and totally 100% incorrect code. Most software sits between the two.


Well, eventually it has to work. But if we write a buggy Python program in most industries, and it breaks, the customers complain a bit and you fix it.

I suspect what the parent means is that there are some industries where you can't afford for it to break at all; everywhere else, you don't need to guarantee correctness up front.


I love Haskell and appreciate its abstraction power. But recently I started to think that one of the reasons that prevent it from becoming more popular is that the community focuses too much on "mathematical" abstractions. Almost every abstraction becomes a mathematical one. "Oh, this thing turns out to be a Monoid! Let me abstract it." "Oh, I can simplify this using Traversal", etc.

They usually make perfect sense and are useful abstractions, but at the same time they scare many programmers away. Especially non-math-savvy programmers will easily be turned away when they see code that is heavily abstracted in a mathematical manner.

I also come from Lisp and Lisp focuses on building a language using the terms/concepts from the given problem domain. And when you do it right, it makes the code very natural in that specific problem domain and any domain experts can understand the code easily.

Haskell communities do it very differently by making everything a mathematical problem, and it's only math experts who feel comfortable with the code.


I'm not sure the Haskell community really wants the language to become more popular as a primary goal. Sure, it would be great if more people appreciated its power and adopted it, but not if it requires compromising on its core strength: rigorous correctness. And that's fine! Not every language needs to focus on world domination. It's probably better that we have a lot of niche languages that are uncompromising and highly optimized for specific types of problems (or rather highly optimized for certain types of programmers). Also, you don't need to be a mathematician or really be grounded in math at all to understand the abstractions favored in Haskell. I think https://www.scalawithcats.com/dist/scala-with-cats.html is a really good example of explaining abstractions that are grounded in Category Theory without ever even trying to connect them to their mathematical origins. The abstractions can be explained and applied on their own terms.


> But recently I started to think that one of the reasons that prevent it from becoming more popular is that the community focuses too much on "mathematical" abstractions.

Many times being in the middle of any two extremes is a pretty bad experience.

If you shy away from math, though, you get closer to tapping into the average programmer's experience in other languages.

Because of that, it's easier and more familiar.

Those things are good for adoption, but not necessarily a Haskell experience that is so good it's clearly better.

I think the direction Sandy goes in does achieve something objectively better.

My advice is to try it out and don't let the "it's too mathy like Haskell always has been" comments bog you down.

Do be prepared to learn a new paradigm and solicit help from others to be able to learn it in a timely manner.

- A Haskell programmer who never made it past algebra


Mathematical abstractions are often at the wrong level of abstraction. Mathematicians mostly care about value semantics but programmers need to care about how it's realized in hardware too.

For example, sure, you can think of a list as a "monoid" where the add operator is concatenation of two lists. But adding two lists is not a fast operation, even using specialized functional data structures. So making concatenation a primitive for equational reasoning just results in really slow code.


> Mathematicians mostly care about value semantics but programmers need to care about how it's realized in hardware too.

Exactly! The thing is this book gives a method where you care about value semantics, keep the ideal representation, and then care later about how it's realized in the hardware!


I'm confused, with the right data structure concatenating lists is an O(1) operation.


You're probably thinking of storing it as some linked list but that doesn't work with mathematical abstractions (which is my point).

For example in mathematics if you write "C = A + B" and then you ask what A is, it's still always going to be the same.

To make this work in programming you basically have to make all your data structures persistent[1]. AFAIK you would need to store the list as a finger tree or something if you need fast concatenation, and that will still only be O(log(n)) [2].

[1] https://en.wikipedia.org/wiki/Persistent_data_structure

[2] http://hackage.haskell.org/package/fingertree-0.1.4.2/docs/D...

[3] https://stackoverflow.com/questions/28406512/why-does-concat...
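
For a feel of that trade-off, here is a small sketch using the persistent Seq type from the containers package (a finger tree underneath): both operands survive the concatenation, so asking "what is A" afterwards still gives the same answer, and the append stays cheap.

    import Data.Sequence (Seq, (><))
    import qualified Data.Sequence as Seq

    a, b :: Seq Int
    a = Seq.fromList [1, 2, 3]
    b = Seq.fromList [4, 5, 6]

    -- (><) runs in O(log(min(length a, length b))); a and b are unchanged.
    c :: Seq Int
    c = a >< b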


In math everything is persistent/immutable anyway, so persistent data structures are a given when working with purely mathematical abstractions.

There is a memory issue because of this, but this problem isn't exclusive to FP languages. Many languages solve this with something called garbage collection.


The very definition of pure FP (which is basically just programming founded on the principles of mathematics) rests foundationally on a program being immutable and having no side effects. Those are the only two requirements needed to do FP.

Immutability is therefore the axiomatic foundation of FP.


You don't need a linked list; you can just store C as 'A + B'.
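
One literal reading of that, as a sketch (the `Cat` type is invented for illustration): keep the '+' symbolic, so the append itself is O(1) and the cost is only paid if and when the result is flattened.

    -- A list that remembers its concatenations instead of performing them.
    data Cat a = Leaf [a] | Cat a :++: Cat a

    append :: Cat a -> Cat a -> Cat a
    append = (:++:)

    -- Flattening pays the concatenation cost once, at the end.
    flatten :: Cat a -> [a]
    flatten (Leaf xs)  = xs
    flatten (l :++: r) = flatten l ++ flatten r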


> "Oh, this thing turns out to be a Monoid! Let me abstract it."

You seem to have an example in mind where this leads to "making everything a mathematical problem"; can you share it?

> Haskell communities do it very differently by making everything a mathematical problem, and it's only math experts who feel comfortable with the code.

I program in Haskell every day and I have no idea what Haskell code you have in mind.

Also as I've said elsewhere, nearly everyone reading this has more math experience than this Haskell programmer without a degree or calculus class under their belt.


It's a common curse. Lisp has no value for mundane business applications that need to massage some data and bolt on a GUI to print reports.

The thing is when you start attacking odd problems you can find solid solutions in 100LoC instead of brittle sandcastles made with 27 classes.

Let's let haskellers go wild in their abstractitis, it's great. Sure it's not important for many cases but let them roll.


You might have been bitten by this before:

    1. Make it work
    2. Make it right
    3. Make it fast (uh-oh your nice design prevents performance optimizations, time to scrap it)
The reaction to that is usually to think more about performance ahead of time so you don't get trapped by your theoretically perfect design when the real world comes crashing down.

With your hard-won experience you do away with those childish notions of being able to make a perfect elegant design where you don't have to worry about performance until the end.

The most valuable thing you'll find in Sandy's book is that you can actually ignore performance until the end and nearly always (always maybe?) be able to make it fast.

You can regain your optimism from your younger years before the bad experiences and resume this path:

    1. Make it work
    2. Make it right
    3. Make it fast
I think it goes without saying that I highly recommend Sandy's book and think it could be the way forward for quickly writing correct, elegant (composable/re-usable!), and FAST programs.


IMO

1. Make it correct and not insane asymptotically.

2. Tune constant factors, if you need to.

In school I was confused about why they kept talking about asymptotics and ignoring the different costs of the building blocks. Now I see that, while it is true that asymptotic analysis doesn't explain performance well when "things are good":

1. There are a lot of stupid things people do write that it does catch.

2. Since it is scale free, it makes sense as a way to evaluate programs throughout their entire development.

3. Constant factor optimizations will always tempt one to ruin abstraction boundaries, but asymptotic analysis shouldn't -- or if it does, sorry, your abstraction sucks and needs to go. (Hi ORMs.)

The latter points are really powerful. Never make me work without efficient tree map/set union and intersection ever again.


>theoretically perfect design

We have no metric to use to quantitatively score one design as something unequivocally better than another design.

This is the first thing you will need before you can prove one "design" is "better" than another. Until then we are doomed to go in circles and have endless debates.

I believe even if we do find some quantitative measure on program "designs" it won't be rating something as "better" or "worse" it will likely be metrics on many dimensions measuring tradeoffs.


When you design software focusing only on the business logic and domain the end result will be better than if you have to do it while juggling implementation and performance concerns.

> We have no metric to use to quantitatively score one design as something unequivocally better than another design.

Given the current state of software development, it's not fair to place that burden of proof here when it's not used elsewhere.


>When you design software focusing only on the business logic and domain the end result will be better than if you have to do it while juggling implementation and performance concerns.

Do you have a foundational proof for this? What are your axioms and theorems? A qualitative description (which is what you have provided) is open for debate. I can say the opposite and supply you with an endless chain of examples while you come at me with counter-examples, and we go nowhere.

>Given the current state of software development, it's not fair to place that burden of proof here when it's not used elsewhere.

The entire mathematical community has this burden placed on them. The entire scientific community has lesser statistical validations placed on it. The programming community has endless design pattern debates online as its burden.


This is something I spend a lot of time thinking about myself - it seems like functional programming ought to be > object-oriented programming, which ought to be > procedural programming. However, I've almost never seen a functionally-designed application "in the wild" and honestly, it's pretty rare to even see an object-oriented one. It seems like every program I ever dig into is mostly written procedurally, even if it's written in a language that supports higher-level abstractions. And, for the most part, software works. As much as I love the concept of pure functional programming, how do we really know that "functional programming is better" and that it's not just wishful thinking on our part? Looking forward to digging into this - hopefully the author can make a better case than others have.


Try out the Parsec library in Haskell (http://book.realworldhaskell.org/read/using-parsec.html). If you are familiar with parsing in general you'll see the benefits immediately. Also, FP tends to promote composition over inheritance (see: https://en.wikipedia.org/wiki/Composition_over_inheritance).
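
As a hedged taste of what that composition looks like (the parser names are just for this example), here is a Parsec parser for comma-separated integers built from smaller parsers:

    import Text.Parsec
    import Text.Parsec.String (Parser)

    integer :: Parser Int
    integer = read <$> many1 digit

    integers :: Parser [Int]
    integers = integer `sepBy` (char ',' >> spaces)

    -- parse integers "<demo>" "1, 2, 3"  ==  Right [1,2,3]
    main :: IO ()
    main = print (parse integers "<demo>" "1, 2, 3")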


Neither paradigm solves the expression problem. This forces people to break the paradigm as the code grows. Look at the code in some Julia packages, and you will see good use of abstractions around a language design where the expression problem does not exist. This is practical stuff, being used in applied scientific research on a large scale.


Here's how you can solve the expression problem using object algebras: https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf


I'll make a stronger statement than "can": I have and do productively use object algebras in my day-to-day in Java. It's a really fantastic tool when you need an "interface" that governs multiple types or across more than one receiver. Even when you're not trying to solve the expression problem.

When you're defining an interface on a type that doesn't need to appear in its own method parameters or return types, and where the actual type doesn't matter (as in classic OOP), Java interfaces are fine. But not all problems are best framed this way.


Personally I prefer being on the other end of the expression problem - where extending underlying data (mainly sum types) is the costly operation. Like in Rust, because it actually makes you think about the data model instead of the implementation. Adding new functions is "free" in the sense that it's not going to break your model.

But I'm curious if you don't find object algebras in Java cumbersome and kind of fighting the type system? Because while I like this kind of thing in Scala, in Java it feels very much against the design of the language.


> Personally I prefer being on the other end of the expression problem

I agree, I prefer sum types in general. And you can still get dynamic dispatch from trait objects (in Rust), so you have both approaches at your disposal. (In Java, all you have is a hammer...)

> But I'm curious if you don't find object algebras in Java cumbersome and kind of fighting the type system?

Yes and no. I try to use them where they make more sense than OO-style interfaces, so somewhat by definition I don't find them any more cumbersome than OO-style interfaces would be. Java feels like it has a dearth of modeling techniques, so even a seemingly-niche one like object algebras gives me a lot more to work with.

Serializers/deserializers are a good fit for an object-algebra approach, since you don't want to define those methods directly on the data type you're serializing (if you even have control over that type).

Traditionally algebraic types, where you can combine two instances to get a third, are also a good fit. (From a Haskell perspective, you can think of the `x.add(a, b)` as explicitly naming the dictionary of typeclass methods you'd get from writing `a + b` -- kind of like `a +_x b`, if you will.)

And there's almost no choice if you want to define a signature over multiple types -- you need a mediator object anyway.

It's pretty ergonomic if you stay nearby these and similar patterns.
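
For readers coming from the Haskell side, a rough analogue of the object-algebra style is the tagless-final encoding sketched below (all names invented): the algebra is a typeclass, each interpretation is an instance, and adding a new interpretation never touches existing code.

    class ExprAlg repr where
      lit :: Int -> repr
      add :: repr -> repr -> repr

    -- One interpretation: evaluation.
    instance ExprAlg Int where
      lit = id
      add = (+)

    -- Another interpretation, added without modifying anything above.
    newtype Pretty = Pretty { render :: String }

    instance ExprAlg Pretty where
      lit n                     = Pretty (show n)
      add (Pretty a) (Pretty b) = Pretty ("(" ++ a ++ " + " ++ b ++ ")")

    -- One term, many meanings: (example :: Int) is 6, render example is "(1 + (2 + 3))".
    example :: ExprAlg repr => repr
    example = add (lit 1) (add (lit 2) (lit 3))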


Thank you, this was very illuminating!


From Wikipedia:

> The expression problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).

Why should I care about recompiling? That's an implementation detail.

Taking the compilation stuff out, it looks very similar to classy lenses.


> Why should I care about recompiling? That's an implementation detail.

Sometimes you don't have access to the third-party source code you call into, which is why you can't always modify and recompile it.


Does that mean Kotlin's Extension Functions solve it?


It's more about the modification than the recompilation.


But type classes solve the modification problem. (Even though no language that I know of solves the recompilation.)


I think it's more about not being able to recompile third-party library code than your own code.


I don’t know. Don’t ask me, ask Wikipedia.


You aren't going to see entirely functional applications "in the wild" because it's convenient to keep the top layer procedural. But, it's generally useful to use OO and functional design for components. Then you combine the components in a procedural manner. Here's a decent example of creating functional components and gluing them together imperatively:

https://www.destroyallsoftware.com/screencasts/catalog/funct...

OO and Functional are just good design goals for making an easy to use API for your components. Even if those components are written in C, I think you'll find that many components are designed to be:

Object Oriented:

- has an interface

- hides implementation details

Functional:

- the same input always gives the same output, i.e. there are no hidden state changes which you need to keep track of

You can think of it like concrete vs cement. The OO and functional components are the rocks and rebar in concrete, and imperative code is the cement. The rocks make the concrete stronger, but it's easier to shape with cement in between the rocks and rebar.


> it's pretty rare to even see an object-oriented one

That's not even vaguely true. OO is the dominant paradigm in many areas of commercial programming, including web backends, phone client code, desktop applications using GUI frameworks, written in Python, Java, C#, Objective C, Ruby, etc. Think about Java -- that basically mandates OO.


> Think about Java -- that basically mandates OO.

You would think, but in my experience most people write mostly procedural code in Java: data-only types that expose all their properties with getters and setters, and all methods (and most data) declared static. Most Java developers write as if they were working in COBOL.


Interesting; I admit I've been involved in very little java development. You must be exaggerating a tiny bit though? We all know that the kingdom of nouns gets a lot of stick, but surely it's not fair to characterise the community of java programmers as being numerically dominated by those who do not understand how to create distinct instances of their nouns?


Why is the comment being modded down? What he/she said is true.


You're completely right. There is no theory behind program design and structure. We have no way of measuring why one paradigm is better than another paradigm. This is the first thing that needs to be solved before we can unequivocally prove that one paradigm is better than the other.

The first question that needs to be answered is, what exactly is a well designed programming language? What exactly is a well structured program?

Until you are able to answer the questions above axiomatically we are doomed to run in circles forever.


This looks interesting and the "freebie" is quite extensive at 80 pages. It would be nice to have a full table of contents to see what more you can expect, rather than the "executive summary" of what you could hope to learn.


As both a mathematician and a programmer who often works on projects where high reliability is essential, I want to like the concept behind this book. However, based on the website and sample chapters, it’s not clear that there will be strong arguments made here about either scalability or performance, which are the usual practical problems with trying to employ formal methods and very “mathematical” abstractions. Sadly, the early repetition of the usual faux quicksort example and the casual dismissal of arguments by “purists” about the difference is going to crush the author’s credibility among a large group of programmers by the end of page 7, which is not a good omen.


> However, based on the website and sample chapters, it’s not clear that there will be strong arguments made here about either scalability or performance

I've read most of it; performance is addressed after correctness.

The trick is that your correct API never has to change to get better performance.

I don't want to give too much away though and I highly recommend this book.


> The trick is that your correct API never has to change to get better performance.

That would be a very good trick indeed.


Do you mind sharing what industry you work in and how you got there? I am mainly caught by "as both a mathematician and a programmer who often works on projects where high reliability is essential". I was a mathematician (or more strictly am a failed mathematician) who also does software, and so I am always curious what other domains people with this background work in.

In general, I am turned off as well from the abstractionauts in the software world, particularly those found in the Haskell realm, and highly prefer more pragmatic functional languages like F#, Racket, and Elixir/Erlang.


> Do you mind sharing what industry you work in and how you got there?

I’ve worked on software in a few different industries, but the ones that needed particularly good reliability were mostly related to hardware deployed in challenging environments. For example, it’s never good to discover a bug in production when your firmware update process involves helicopters. :-) I’ve also worked on software that used quite intricate data processing algorithms, where any bugs could manifest as subtly incorrect behaviour that might not be noticed until it was too late and some serious failure had been caused.

I didn’t specifically pursue these kinds of projects, they just happen to be common themes in the roles I’ve taken over the years. In time, I developed an interest in tools and processes for producing software that is significantly more reliable than much of what our profession produces, yet not necessarily on the level of say threat-to-life or running-a-satellite code where extreme measures might be justified almost regardless of cost. I firmly believe that even with the inevitable pressures driving commercial software development, the standards in our business could be much higher than they often are today, and that much of the problem is due to people — developers, managers, users — not realising what else is achievable or what it would really cost.


> I am turned off as well from the abstractionauts in the software world

Is there any way you can give examples of this in Haskell?

If I'm just blind to them by now, I'd like to know, but most "abstractionaut" criticisms I've heard seem to ignore the pragmatic reasons driving the abstraction, as well as the inherent complexity it handles.


I'm starting to think that every thread about FP is pretty much identical on HN, especially when Haskell is involved. If you still need a ton of push to try it out, you'll never get it. If decades upon decades of provably superior concepts haven't convinced you, just give up. Just go and get hyped when the next minor FP feature gets integrated into your favourite language, something that's been used in Haskell/Erlang/etc. for 20+ years.


Yeah, I've focused on building things with Haskell (using way less energy and time in the process, due to exactly what this book claims).

There's really no point arguing with people. Sometimes I can't help myself, but I barely ever get anything useful out of trying.


> I'm starting to think that every thread about FP is pretty much identical on HN

Spot on.

Where is Haskell actually used? No I meant apart from Facebook and Uber and Klarna.

> Just go and get hyped when the next minor FP feature gets integrated into your favourite language

We already have immutability in Java - why, the designers themselves made Strings immutable [1]

We said goodbye to nulls in Java [2] but in case you really don't like nulls you can also say goodbye to them by switching from Java to Kotlin, which doesn't have them [3] but also does [4].

If you want to cut down on some boilerplate (without moving to Kotlin), you can try type inference in Java [5]:

> For example, consider the following variable declaration:

    Map<String, List<String>> myMap = new HashMap<String, List<String>>();

> You can substitute the parameterized type of the constructor with an empty set of type parameters (<>):

    Map<String, List<String>> myMap = new HashMap<>();

Go got rid of Generics which made programming so much simpler. I can't wait for its new Generics feature though.

But if you really need that functional stuff - the best thing about writing Java code is that it runs on the JVM, so you can code in Scala instead. And if that argument doesn't make you want to code in Java I don't know what will.

[1] https://javarevisited.blogspot.com/2010/10/why-string-is-imm...

[2] https://programmer.help/blogs/finally-i-m-happy-to-say-goodb...

[3] https://medium.com/kayvan-kaseb/say-bye-to-nullpointerexcept...

[4] https://stackoverflow.com/questions/50427490/nullable-type-s...

[5] https://docs.oracle.com/javase/tutorial/java/generics/genTyp...


I use Haskell because it actually delivers on the promise of software reusability. In his book, John Ousterhout wrote a wonderful phrase: "increments of software development should be abstractions, not features." I completely agree.

Even though the JVM ecosystem is great, I dislike the languages that run on it, Clojure being an exception. To single out OOP: it fundamentally abstracts the wrong thing. Modules should abstract functionality, not data. And that's why I don't want anything to do with Java.


> Go got rid of Generics which made programming so much simpler. I can't wait for its new Generics feature though.

What? If getting rid of generics made programming so much simpler why can you not wait for its new Generics feature?


> I'm personally convinced that functional programming is 10x better than any other paradigm I've tried.

The failing is having a simple model that assumes that any one paradigm is ever best.

A multi-paradigm language will always have the capacity to be better, or equal, by any metric.

I also strongly disagree with this sentiment based on experience. There are some tiny niches where functional programming is particularly useful, but outside of them the code is classically difficult to read, write or debug... a problem made worse by the, very honestly, absolutely appalling quality of almost all example code: normalising one-letter variable names, squeezing things onto one line, and all kinds of other terrible practices.

These are deeply serious problems that FP fanboys and academia seem to gloss over, presumably from a lack of real-world experience, or of appreciation of how valuable these qualities are...


> There are some tiny niches where functional programming is particularly useful, but outside of them the code is classically difficult to read, write or debug

What niches are these where FP is "classically difficult to read, write or debug"?


I think they're saying the opposite, that FP languages are difficult in general and only useful in a few niches.


Someone wrote a comment in this thread saying that Haskell's strength is "rigorous correctness". Ok sure, I agree that generally your program is going to be more correct with Haskell than with, say, TypeScript. But that doesn't preclude a TypeScript program from ever being as correct as a Haskell one. One limitation might be the additional boilerplate due to the inability to abstract as much as Haskell. But if more boilerplate translates to greater readability (i.e. less abstraction & much closer relation to the problem domain) then I would argue that the added boilerplate is justified.

Hence I don't really see "rigorous correctness" as a good selling point for using Haskell, since company culture tends to affect correctness just as much as the language a team uses.


That's a pretty big "if", though. Boilerplate is generally noisy, and it can be challenging to encapsulate it well enough (e.g. behind function boundaries) that the remaining code is high level and business-logic oriented. Company culture can only go so far (unless you're willing to pay the full tax, and maintain your code in the costly, slow way that e.g. NASA and other life-critical systems developers would).

You can get halfway to Nirvana just by using a language that self-manages memory and has some kind of exception system (error handling at a distance); something like RAII (automatic lifecycle management, or automatic destructors in a pinch) can be a big help, too. Now, at least, all the systems-level boilerplate can be omitted from your mainline code.

But high-level, business-logic pitfalls are hard to encapsulate, and especially so in a dynamically typed program. Documentation becomes critically important (don't ever do X before you do Y), as rigorous unit tests for chains of actions can sometimes be difficult to write. A good, type-level model of your program can be a helpful set of safety rails when refactoring a large and complex code base, and a rich type system gives you more tools to build those rails to fit the solution you're building. As a (weak) example, if your type model never lets you read from a file that's already been closed, that's one less thing you have to cover in testing or documentation -- or more importantly, it's one less thing you have to remember to test.

At scale, having multiple ways to introduce structure into an otherwise chaotic (or arbitrarily structured) code base is a win. Building a good type-level model is one smart way to achieve that.
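
To sketch the file-handle example above (my own illustration, with made-up names): if the only way to obtain a handle is through a bracket, "read after close" becomes hard to even write down.

    import qualified System.IO as IO

    -- The constructor is kept abstract (not exported), so well-typed callers
    -- can only use the handle inside the bracket below. (Fully preventing the
    -- handle from escaping takes a rank-2 or linear-types trick; this is just
    -- the shape of the idea.)
    newtype ReadHandle = ReadHandle IO.Handle

    withReadFile :: FilePath -> (ReadHandle -> IO a) -> IO a
    withReadFile path action =
      IO.withFile path IO.ReadMode (action . ReadHandle)

    readLine :: ReadHandle -> IO String
    readLine (ReadHandle h) = IO.hGetLine h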


> But if more boilerplate translates to greater readability (i.e. less abstraction & much closer relation to the problem domain) then I would argue that the added boilerplate is justified.

The added boilerplate usually translates to less readability and distracts from the "essence" of the problem or the main idea a piece of code is trying to communicate.


I'd prefer a book on API design in Haskell to this theoretical book.


This "theoretic book" can help you design a very good api. If you mean REST api's by chance, remember those were theoretic once too!


I'd be more convinced if the author linked a bunch of software they'd written before writing yet another Haskell tutorial.

10x? We're way above 10x in the ratio of Haskell/functional/monad tutorials to running applications. If you're touting that yours is "the one" and we should skip over the tens of thousands of others out there, some supporting evidence might be useful.

The list of applications written in Haskell that you can run on your computer for doing something other than programming is still one you can count on your fingers. Please let me be wrong about that and list them if I am. I'll be delighted and wildly enthused.


Things that are strictly not programming-related, are user-facing product software, and are primarily written in Haskell, off the top of my head:

https://www.juspay.in/
https://wire.com
https://chordify.net/
https://circuithub.com/
https://channable.com
https://www.habito.com/
https://www.adjoint.io/
https://www.costarastrology.com/
https://www.kittyhawk.aero/
https://lumiguide.eu/
https://cardano.org

More than I can count on my hand :)


Are they all websites?

Is this the answer? You can use Haskell if a website fits the need?

How do we know they're Haskell? Or is it a small Haskell plugin being overhyped?

I WANT to see Haskell succeed. Real success, not marketing con-job boosterism.

Let's say all your examples are terrific examples, and maybe they are. Let's say you've only noticed 20% of them.

Still orders of magnitude fewer than monad tutorials.

We need to face that and understand it, or progress isn't going to happen.


No. Did you click on any of the links? None of the things I linked are websites as the main product; they're all very much product software. My point wasn't to point out websites that are written in Haskell, but _PRODUCTS_ written in Haskell, and to give you links so you can read about them :)

If you're not in the mood for clicking:

Juspay: Payment processor in india

Wire: End to end encrypted chat

Chordify: Automatic analysis of music to generate tabs and sheet music

Circuithub: Tool for part picking and placing for PCBs

Channable: Datafeed management tool

Habito: Mortgage broker

Adjoint: Treasury software

Costar: Astrology app

Kittyhawk: Airplanes

Lumiguide: Smart parking management system using Computer Vision

Cardano: A blockchain

> Let's say all your examples are terrific examples, and maybe they are. Let's say you've only noticed 20% of them.

You're moving goalposts again. You asked for more than 10 product-software success stories in Haskell. I gave more than 10, and now you're telling me that any number I give you needs to be divided by five before it is sufficient. I'm not sure what point you're trying to make here.

Many product software companies _outside_ of programming language technology use Haskell successfully and I gave some examples for you to look at.


I didn't get to 10 last time I did this. The above list is all closed source and marketing pages, but let's just accept it's all 100% Haskell and 100% successful and 100% brilliant.

10 is a measure of how bad the situation is. More than 10 now, GREAT! 15 is less bad than less than 10. Hey, let's say it's 50! Why not? Could be. Can't find 50 myself, but you'd imagine the Haskell website would boost the hell out of it if there were. But hey, let's say there are. Why are we not at 500? Seriously. Why not 10,000?

From the original post, the point I made was that "orders of magnitude more monad tutorials than Haskell applications" is a real and genuine data point. Not anything else. If we've got over 10 now, I'm thrilled about that. Really.

Apparently noting the problem this book seeks to address has hurt people's feelings.


As I said in my reply, more than "websites" https://news.ycombinator.com/item?id=24347370


Millions of people have received & participated in Starbucks Rewards offers powered by Haskell.

Millions of people have bought things from Target stores whose inventories were planned by Haskell.


Note that the book is not a Haskell tutorial; it's rather a design book that can be applied to any language. The author merely chose Haskell for its expressivity.

Haskell is used in a lot of places. As an example, Facebook's Sigma, which blocks spam, phishing attacks, and malware, was migrated to Haskell.


And yet he calls out Haskell by name, and only Haskell, and claims to seek to solve the problem:

"Functional programming hasn't taken market share because we collectively don't yet know how to write real applications with it."

Someone claiming to have individually solved that problem should probably point at some applications by way of evidence, no? But I do acknowledge the author's self-awareness that the problem /exists/ and is in need of a solution. Still, the actual lack of applications is a problem in need of a solution for anyone claiming Haskell is generally useful. Dash off a few decent ones and suddenly I am ALL EARS. Aren't you?


> The list of applications written in Haskell that you can run on your computer for doing something other than programming is still one you can count on your fingers.

Before you make such strong claims, you should at least try and validate they're true. Please stop spreading this lie.


You could counter that claim by actually providing a list. You will quickly find that there are far more applications written in Java, Go, NodeJs, Python, Ruby and C/C++ than in Haskell.

EDIT: Please see the TIOBE rankings just to see where Haskell is. I think it is currently at 43


Someone already did and the goalposts just got moved again: https://news.ycombinator.com/item?id=24352585


As I said, if it's outdated I'll be DELIGHTED.

Let's see the list again. It's always been very disappointing in the past but less so than pretence.


This actually is pretty ugly.

Accusing someone of lying. Hang your head in shame. I'll change my mind with evidence. You have provided precisely none.

Seems you'd rather abuse than show evidence?

Make a list. See if it convinces you. Do it without rancour and abuse.

Or just move on to a different thread.

This is the bit you cut from your edit of my claim:

"Please let me be wrong about that and list then if I am. I'll be delighted and wildly enthused."


> This actually is pretty ugly.

What precisely is "this"? From my point of view at best it's calling out FUD and an unsupported claim sold as obviously true.

It's not abuse to challenge unsupported claims and call them what they are.

That sort of attitude cultivates horrible horrible things... One example being science denial.

> I'll change my mind with evidence. You have provided precisely none.

Your initial claim came with no evidence in the first place.

Others provided evidence, and your responses gave me no confidence that spending energy changing your mind was/is worth it.


I am not a corporation marketing something. I am a person. I am raising a point.

There are more monad tutorials than applications by an order of magnitude.

This seems interesting to me. If it doesn't meet your marketing needs and you need to start calling people names, maybe HN isn't for you.

The evidence of many and varied Haskell applications is very, very, very thin. I wish it weren't. You clearly do too. Great, we agree on that.

Learn Haskell; it's good fun. It's interesting. Do not expect to write successful applications in Haskell, because the evidence points elsewhere.

But sure, link your repos of actual applications. Please. Not marketing pages. Actual applications. Code.

The goalposts are: why is Haskell so very much less successful in applications than any other language with as many people who have learnt it? Including me and you.

git-annex. There's another good one. There are some. Just not many. Where are the rest?

This author has stated clearly he seeks to address this. So it seems like something sensible people can discuss. This is apparently more difficult than it should be.

Calling people names suggests you know there's a real problem here that you wish weren't there. I'd suggest working on that problem. For a successful technology, reality must take precedence over public relations.

I will change my mind with evidence. Collate the list, see how compelling it is or isn't. Or just call people names, so we can be clear on what isn't.


The one I know about is Pandoc, which I use every day.


One? Ok we'll do the full list again:

xmonad. Some websites usable as applications using Yesod, maybe. (ShellCheck: great, but doesn't count; you use it for programming a computer.)

What else are we missing? The list is not to bash Haskell; it is instructive to be aware of a pretty significant data point and understand /why/.

edit: Please note that libraries are /not/ applications, but the two are conflated here (accidentally?), obscuring the point: https://wiki.haskell.org/Applications_and_libraries


BazQux is unfortunately not entirely open source, but it is my favorite RSS reader, is a profitable user-facing application, and is written partly in Haskell: https://bazqux.com/faq

https://github.com/bazqux/bazqux-urweb/tree/master/crawler

Off the top of my head I also know of Freckle (https://www.freckle.com/) and Co-Star (https://www.costarastrology.com/).

More importantly, this is not a Haskell book! I'm sure the author would be happy to see these ideas applied using JavaScript or Scala or Java 14 or anything else.

> Haskell is considered a difficult language to learn, which is certainly true if you come from traditional procedural languages. But rest assured, this is not a Haskell book. You won’t need more than a passing understanding of the language’s syntax and a few high-level idioms. Everything you’ll need to know will be shown later in this section. The ideas present in these pages are adaptable to any piece of technology you’d like, though they might require strictly more discipline to maintain than is necessary in Haskell.

- Chapter 1.3.1, page 6

I'm not going to try to list all the user-facing applications written in any functional language, but I'm sure we can agree it is a much longer list.


"they might require strictly more discipline"

If it's /hard/ to use the language you have chosen for your book, which is also the most prominent language in the space, to write useful stuff, then you need to address that right there on the landing page. Yeah? Is that not reasonable?

The Haskell list is very short. Why? Why is this a solution to that problem? Why is it not just yet another burrito-like analogy for monads?

Frozen Bubble was great, slick, and fun at least 15 years ago, and is written in functional Perl.

He's clearly trying to solve the Haskell problem of "We can't write apps" here. He says so.


I don't really know what to tell you, to me it's clear that this book is basically "lessons I've learned from Haskell that can be applied to any language". I think if it were trying to answer the questions you ask, it would say so.

Do you have the same issues with SICP using Scheme?

Finally, since this is Hacker News, I'll appeal to authority with Paul Graham: http://www.paulgraham.com/avg.html

> Another unusual thing about this software was that it was written primarily in a programming language called Lisp. It was one of the first big end-user applications to be written in Lisp, which up till then had been used mostly in universities and research labs.

At the time, Lisp was 40 years old.


And they were pushing Scheme as the future of everything. "We need a lambda key on the keyboard; maybe you people at HP can help us with that." Which wasn't entirely in jest.

I'm pretty sure Abelson and Sussman would have had an honest, well-reasoned and nuanced answer as to why Lisp had not replaced Fortran and whether it would or even should. Those guys didn't go for rank boosterism.

Why no Lisp? People didn't /own/ computers they could do what they liked with. You needed permission. Ask Bill Gates about that, he'll tell you... How could you hire Lisp hackers? Not an issue for Paul. Done.

There are tens of thousands of us who learnt Haskell (and then maybe wrote a monad tutorial). How many applications? This tells us something. What?

The author has a theory.

The author has written a tutorial.

Show me the application!


I'm pretty sure you're reading this line wrong (though I don't blame you; the wording could probably be improved here). But I believe what the author is saying is "you can use these techniques in any language, but if you aren't using Haskell they might be more annoying to implement or maintain".


> ShellCheck: great, but doesn't count; you use it for programming a computer

Why does that make it not count? Not every language needs to be used to write enterprise software.


For this metric, removing "things used to program a computer" is useful, as it cuts out the circularity of citing the Glasgow Haskell Compiler and its libraries: the "learn Haskell so you can program Haskell well enough to use a tool used for programming Haskell" loop...

This is a useful metric for someone considering a programming technology to get something done rather than to learn something. (The "learn something" case here is fine. If you're thinking of it, definitely do it. Learn FP. Learn Haskell. It's useful for your mind and fun!)



