You have to appreciate the audacity of some of the portfolio companies in Fund I for getting away with hilariously bad business models. They literally fleeced Chris Dixon into funding Pied Piper (dfinity.org) and CryptoKitties, and he actually bought it. There's definitely a massive sunk-cost effect going on with this new fund, on a scale we haven't seen before. Andreessen used to be the smart money in VC, and now they're doing this fund.
Showing how Yang-Mills and SU(3)×SU(2)×U(1) fall out would be a natural starting point for a proposed unification theory. This model doesn't even try to explain the existing particle hierarchy as a special case.
Their usage in Haskell, OCaml, etc. is as faithful to their category-theoretic definitions as is possible in a general-purpose language.
This debate about naming monads has been pretty tiresome for years now. If one called them "computation builders" it wouldn't change their structure, and it wouldn't convey any notion of the laws better than the term "monad" does. A monad, at its core, is a set of algebraic relations.
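For reference, here is the structure essentially as Haskell's base library defines it (simplified), with the laws written out; call the class whatever you like and not a line of this changes:

    import Prelude hiding (Monad, return, (>>=))

    -- The Monad class, essentially as Haskell's base library defines
    -- it (simplified). Prelude's versions are hidden so this stands
    -- alone.
    class Applicative m => Monad m where
      (>>=)  :: m a -> (a -> m b) -> m b
      return :: a -> m a
      return = pure

    -- The laws every instance must satisfy (the compiler does not
    -- check these; the programmer does):
    --   return a >>= f    ==  f a                      -- left identity
    --   m >>= return      ==  m                        -- right identity
    --   (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)  -- associativity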
But most people take comfort in having almost meaningless names. I can't blame them; when I first saw FP, I was lost in a sea of nonsense. In time you see that lots of things and names are just crutches, and that structures, shapes, patterns, and recursion relationships are where to look for answers.
> The way this problem manifests in Haskell is in how incredibly clever it makes you feel to get something done in it. Haskell is different enough from most languages that everything feels like an achievement when writing it. “Look, I used a monad! And defined my own type class for custom folding of data! Isn’t that amazing?” “What does it do?” “It’s a CRUD app.”
The fallacy here is that type classes, folds, and monads are not esoteric structures within the context of Haskell, and using them to structure regular business logic is pretty mundane. If someone feels a sense of achievement from doing this, they're probably very new to the language. The author of the article has baked in a value judgement about the "normal" way to structure a CRUD application, one that precludes alternative approaches to structuring application logic.
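To make that concrete, here's roughly what "using a monad" looks like in everyday CRUD code; every name below (Env, AppM, fetchUser, renameUser) is a hypothetical stand-in, not any real application's API:

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Trans.Reader (ReaderT, ask)

    newtype Env = Env { dbConnString :: String }

    -- The application monad: read-only config threaded over IO.
    type AppM = ReaderT Env IO

    data User = User { userId :: Int, userName :: String }

    -- Pretend this runs a database query.
    fetchUser :: Int -> AppM (Maybe User)
    fetchUser uid = do
      env <- ask
      liftIO (putStrLn ("querying " ++ dbConnString env))
      pure (Just (User uid "example"))

    -- Read-modify-write: fetch a user and rename them.
    renameUser :: Int -> String -> AppM (Maybe User)
    renameUser uid newName = do
      mUser <- fetchUser uid
      pure (fmap (\u -> u { userName = newName }) mUser)

Nothing here should feel like an achievement; the monad is just plumbing for configuration and IO.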
You could just as well argue that laying out object class hierarchies and using inheritance is "[solving problems] that have nothing to do with what you’re actually trying to achieve."
> You could just as well argue that laying out object class hierarchies and using inheritance is "[solving problems] that have nothing to do with what you’re actually trying to achieve."
Could, indeed semi-regularly do. I actively dislike solving problems with inheritance.
I also actively dislike people missing the point. The problem is not "using programming to solve problems". The problem is letting the novelty of the programming you're using to solve problems con you into thinking you're doing something more clever than you actually are.
I think you're equally missing the point: for serious Haskell programmers, using a monad to structure computation for a CRUD app is nothing to write home about. Having personally written quite a lot of CRUD code with Haskell, I have never thought "Gee, I wrote this with a monad. I deserve a back pat."
If we're talking about things that make me smug as a Haskell developer, I'd probably say that I feel most accomplished when I figure out how to encode more invariants into the type system so that the compiler helps me make fewer errors. I still stand by Haskell as the best language for that, among those that aren't just university research languages.
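A small example of the kind of thing I mean (the module and names here are illustrative, not from any library): hide a constructor behind a smart constructor, and the compiler guarantees every value of the type has already been validated.

    module Email (Email, mkEmail, emailText) where

    -- The constructor is not exported, so code outside this module
    -- can only obtain an Email via mkEmail.
    newtype Email = Email String

    -- Smart constructor: the sole way to build an Email. (Real
    -- validation would be stricter; the '@' check is a placeholder.)
    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)
      | otherwise    = Nothing

    emailText :: Email -> String
    emailText (Email s) = s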
> I think you're equally missing the point: for serious Haskell programmers, using a monad to structure computation for a CRUD app is nothing to write home about. Having personally written quite a lot of CRUD code with Haskell, I have never thought "Gee, I wrote this with a monad. I deserve a back pat."
Missingthepointception!
It's almost like I didn't explicitly point to this as a phase most serious Haskell programmers grow out of right in the goddamn article.
Yeah, sorry for not addressing that. I guess what I'm getting at is that I don't agree with the fundamental premise of the article. I don't really see "a large enough subset of the Haskell community never reaches that stage and just continues to delight in how clever they are for writing Haskell", and I've used Haskell for about 9 years now. Where are these people?
They're mostly the hobbyists. They're also the people you encounter in other languages (e.g. Scala) telling you how much better this all would be if you just understood Haskell. Unfortunately, due to the relative lack of success of Haskell in industry, this is the majority of the people who consider themselves Haskell programmers, and certainly the majority I encounter on the internet.
> I have never thought "Gee, I wrote this with a monad. I deserve a back pat."
There's another way of putting this that doesn't have anything to do with how clever the developer thinks they are (or how serious they are now that they no longer think they're clever)...
It's "now you have two problems" -- the original problem you wanted to solve, and the problem of working with the tool/framework/paradigm you've chosen to solve it. So, great, you're doing object/design-pattern oriented development! And then you find this starts to lead you to a place where a significant portion of your overhead is sunk into solving problems that don't seem to have much to do with the original problem domain.
I don't know Haskell well enough to pick on it. I'd love to believe it escapes that kind of problem, but I think that's only somewhat more likely than beating entropy.
> You could just as well argue that laying out object class hierarchies and using inheritance is "[solving problems] that have nothing to do with what you’re actually trying to achieve."
A lot of people do argue exactly that about big object-class hierarchies and inheritance, and their use has been declining since at least the mid-2000s, even in mainstream OO languages like Java and C#.
It's not a hot technology (read: overhyped), but if you're in the analytics space and not using Python/PyData tooling, you should definitely check it out. It's certainly as mature as R, if not more so.
The academic type theory mentioned in this post is not terribly important to implementing type systems for general-purpose languages. There's a growing divide between the engineering discipline of type systems in general languages and the pure-theory people, who seem interested only in theorem provers and constructive math.
This is really not true at all. Unless what you mean by "implementing type systems for general purpose languages" is a sort of weak, generally useless type system that punishes rather than helps.
This is ESPECIALLY false of Pfenning and co's work, which is aimed specifically at understanding how to apply type theoretic techniques to the design of PLs, so that you get a PL with exactly the sort of stuff you want.
I added a link to PFPL to the page; it's an entire book on how to implement programming languages using type-theoretic tools. It even has sections on OO programming, if you're into that sort of thing. It's all the same toolkit, in the end.
It really isn't. Take a good long look at Java generics and ask yourself if they really came straight out of type theory research. They didn't; that's why they're so botched, and why Odersky wanted a do-over with Scala :]
More seriously, Scala and Rust are the only things in your list that would actually claim to be influenced by academia. I'm sure Apple is not going for the type theorists with Swift, despite having some mildly interesting type structures like sum types, and C++'s "lambdas" obviously have very little to do with type theory, unless you want to make the very weak claim of "anything that has anything to do with the lambda calculus = type theory".
One of the major players behind generics in Java was Philip Wadler, a type theorist and one of the co-creators of Haskell. Generics come straight out of type theory research, where they're normally called parametric polymorphism, but mainstream programmers can't handle that funky terminology. Apple's work on Swift has openly acknowledged its debt to Haskell and contemporary work on type theory, and it shows. As for the rest, you'd have to ask the people who worked on them.
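For anyone who hasn't seen the term: parametric polymorphism is exactly the idea Java spells List<T>. A quick sketch in Haskell:

    -- One definition that works uniformly at every type; Java's
    -- generics package up the same concept as List<T>, Pair<A, B>, etc.
    data Pair a b = Pair a b

    swap :: Pair a b -> Pair b a
    swap (Pair x y) = Pair y x

    -- By parametricity, swap cannot inspect or invent values of a or
    -- b; the type alone pins down what the function can do.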
At any rate, C++, Scala, Rust, Java... these are not languages that take type theory very seriously, and probably couldn't. It's certainly true that the popular imperative languages don't take TT seriously.
But so what? The comment was about general purpose languages, and type theory is demonstrably of use in implementing them. Just because most mainstream languages don't use type theory doesn't make that not true. It just means most mainstream languages do not make use of everything they could.
Strictly speaking, Hindley-Milner is the type system that admits tractable inference using the usual unification techniques, via the Damas-Milner family of algorithms. Gradual typing uses type inference in the Damas-Milner family as well, although it diverges with its notion of consistency.
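For the curious, the unification step at the heart of all this fits on a page. Here's a minimal, illustrative sketch over simple types (variables, constructors, arrows); the names are mine, and this is nobody's production inference engine:

    import qualified Data.Map as Map

    -- Simple types: variables, constructors, and function arrows.
    data Type
      = TVar String
      | TCon String
      | TArr Type Type
      deriving (Eq, Show)

    -- A substitution maps type variables to types.
    type Subst = Map.Map String Type

    -- Apply a substitution to a type.
    apply :: Subst -> Type -> Type
    apply s t@(TVar v) = Map.findWithDefault t v s
    apply _ t@(TCon _) = t
    apply s (TArr a b) = TArr (apply s a) (apply s b)

    -- Compose substitutions: apply s1 over the range of s2, then union.
    compose :: Subst -> Subst -> Subst
    compose s1 s2 = Map.map (apply s1) s2 `Map.union` s1

    -- Occurs check: a variable may not appear in the type it binds to.
    occurs :: String -> Type -> Bool
    occurs v (TVar u)   = v == u
    occurs _ (TCon _)   = False
    occurs v (TArr a b) = occurs v a || occurs v b

    -- Compute the most general unifier of two types, if any.
    -- Failed guards fall through to the final catch-all error case.
    unify :: Type -> Type -> Either String Subst
    unify (TVar v) t = bind v t
    unify t (TVar v) = bind v t
    unify (TCon a) (TCon b)
      | a == b = Right Map.empty
    unify (TArr a1 b1) (TArr a2 b2) = do
      s1 <- unify a1 a2
      s2 <- unify (apply s1 b1) (apply s1 b2)
      Right (compose s2 s1)
    unify t1 t2 = Left ("cannot unify " ++ show t1 ++ " with " ++ show t2)

    bind :: String -> Type -> Either String Subst
    bind v t
      | t == TVar v = Right Map.empty
      | occurs v t  = Left ("occurs check failed for " ++ v)
      | otherwise   = Right (Map.singleton v t)

Gradual typing keeps this machinery but swaps strict equality for a consistency relation, in which the dynamic type is compatible with everything.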
In any language design, the total time spent discussing a feature in this list is proportional to two raised to the power of its position.
0. Semantics
1. Syntax
2. Lexical syntax
3. Lexical syntax of comments