>But there is equally no question that adding these features to Go would make it more complex. [...] I have no doubt that adding templated types to Go will make it a more complicated language,
I don't know if adding generics to Golang is the right thing to do, but I don't agree with how Cheney has framed it. I wish programmers would use a more precise mental framework for "simplicity/complexity".
It's not important if generics make the "language specification" more complex. What's really more important is if it makes the real-world language usage more complex.
What matters is the sum of total complexity: the language spec + adhoc idioms/patterns/conventions/workarounds used in actual codebases.
As an example, if you make a super simple language like Javascript in 1995 with hardly any features, what you eventually end up with is complexity in npm with dozens of modules for things like leftpad(). Or you can insist that "prototypes are simpler than OO" but you actually end up with a dozen OO-simulation libraries with different syntax out in the wild. Instead of the overall JS landscape being simpler, it is more complex.
This does not mean you must put every language feature including the kitchen sink into the language. The lesson learned from history is to talk about simplicity/complexity in a more holistic way.
Users don't often have the luxury of discarding essential complexity in the problem they're trying to solve. So, if you don't give them the right tools to tackle that in your language, that doesn't mean they no longer need those tools. It just means they have to build them themselves first.
> What matters is the sum of total complexity: the language spec + adhoc idioms/patterns/conventions/workarounds used in actual codebases.
This is a great one-sentence summary of the history of C++. (Would also do for a history of programming.) Any hoary Ruby veteran probably has a story about the misuse of method_missing. (Likely one in which they were the antagonist.) The way that the programming field has shaken out, it turns out that there is a group of programmers who are fine with application programming and using well-thought-out libraries. However, only a minority of these programmers have the ability to engage in the thinking about other points of view and 2nd/3rd order effects that is needed to make a good library or meta-level tool. (Not to mention the community management skills needed to keep people happy while not giving them absolutely everything they want.) As a relative noob, I was listening to hoary Smalltalk veterans talking about this in the early 2000's. This situation is pretty analogous to the one surrounding crypto, except that the consequences of hubris are somewhat milder.
> The lesson learned from history is to talk about simplicity/complexity in a more holistic way.
Also from the early 2000's -- The Design Patterns guys were often saying that design patterns and other kinds of common incantations were often symptoms of deficiencies in programming language design.
I've been musing about these sorts of issues of "what is really complex" lately, too. I started evaluating new languages for low-level application code again after spending a lot of time overly close to web browsers, and then ended up deciding that Free Pascal beat them on the stuff I cared about, relative to C compilers as a baseline:
* Best-in-class compile times
* Modules; preprocessor is relevant, but not dominant
* Generics, variant records, function and operator overloads, exceptions, safe string and dynamic array types
* Libraries for common things, with a reasonably conventional style: APIs, containers, bindings, etc.
* No GC; pointers are available, but language features discourage casual usage. RAII pattern is achievable.
* Strong + static types, nominal types and type aliasing.
* Good IDE support, mature tooling, compiler has very broad platform support.
The new languages do win on a lot of the language-feature aspects (particularly memory safety when performing deep optimizations), but they don't have as good a story on the tooling. The compile speed is a major feature and is likely to remain an advantage over time -- and the features already in place are quite effective at limiting source code size, boilerplate, and dangerous constructs. This despite "Pascal is verbose" being memetic.
And if I want more high-level power, I've found that writing a FSM is the most probable path to achieving it. A FSM that embraces the problem domain is a tiny compiler: input source constructs, FSM checks them, applies appropriate algorithms, then emits a list of actions. More power? Add a second FSM to make it a two-pass compiler. Add a stack, or attach it to a database. If it has to be fast, it can generate code in the last step, too. Those things add tons of power to model the problem and aren't heavily dependent on language-side support.
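The "tiny compiler" FSM idea above can be sketched in Go. This is an illustrative toy, not the parent's actual design: a machine that scans input, tracks a state, and emits a list of actions (here, just splitting words, standing in for "input source constructs"). All names are made up for the example.

```go
package main

import "fmt"

// state of the machine
type state int

const (
	outsideWord state = iota
	insideWord
)

// action is what the FSM emits: a tiny "instruction list"
// a later pass could consume.
type action struct {
	kind string
	text string
}

// run scans the input rune by rune, switching states and
// emitting an action each time a construct (a word) completes.
func run(input string) []action {
	var (
		st      = outsideWord
		current []rune
		out     []action
	)
	for _, r := range input {
		switch st {
		case outsideWord:
			if r != ' ' {
				current = append(current, r)
				st = insideWord
			}
		case insideWord:
			if r == ' ' {
				out = append(out, action{"word", string(current)})
				current = nil
				st = outsideWord
			} else {
				current = append(current, r)
			}
		}
	}
	if st == insideWord {
		out = append(out, action{"word", string(current)})
	}
	return out
}

func main() {
	for _, a := range run("two pass  compiler") {
		fmt.Println(a.kind, a.text)
	}
}
```

Feeding the action list into a second FSM is what makes it a "two-pass compiler" in the parent's sense; each pass stays small and independently testable.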
Yup, we already had all of that in Turbo Pascal for MS-DOS, hence why I never saw any value in C back in 1993.
In those days, outside UNIX, C was just yet another systems language, with software for most home micros being written mostly in assembly.
Also Turbo Pascal OOP features were adopted from Apple's Object Pascal, used to write the initial versions of Lisa, Mac OS and known software like Photoshop.
> The Design Patterns guys were often saying that design patterns and other kinds of common incantations were often symptoms of deficiencies in programming language design.
When I reread the GoF book a few years ago, I was struck by the number of patterns that are just built-in to modern languages.
Most of the behavioural patterns are eased by fp techniques like lambdas+currying and adt/pattern matching (visitors are well known losers).
The structural patterns are however much more a pain in fp while they feel natural in oo. Facades and decorators need type hiding that's only nice(r) with existential types.
The creational patterns are a mixed bag. Builder, singleton fit with fp, Abstract Factory fits with oo and factory methods with both.
What do you think? How are the OO patterns improved by other techniques? Though a lot of those patterns are probably for C-style OO.
I think the author was actually referring to the fact that it makes the written software using Go more complex to understand and maintain. Quoting:
"...efforts to add templated types and immutability to the language would unlock the ability to write more complex, less readable software. Indeed, the addition of these features would have a knock on effect that would profoundly alter the way error handling, collections, and concurrency are implemented."
Well, you've got two choices about how you want usage of user-implemented generic container types to look:
1) there are no parameters on the container types, every usage involves a cast to or from 'void*', 'interface{}', or the equivalent, and usage is effectively not type-checked.
2) there are parameters on the container types, subsequent usage isn't cluttered up with uninformative type-casts, and the types get checked by the compiler.
These lead to the same code at runtime (or can, in a Java-style implementation of generics); however, the Go maintainers seem to act as if the first is more maintainable and less error-prone. That seems ... odd, to some of us.
(There are other possible implementations of generics, of course. In fact, another post by the same author, [1], lists "everything is boxed" as a bad outcome that he'd like to avoid, in favor of specialized implementations for containers of, say, characters. But scenario 1, the Go status quo, effectively requires that everything be boxed when using a generic container type written the only way the language allows you to write them. Scenario 2 at least potentially gives the compiler other options. But the Go status quo doesn't avoid the "everything is boxed" bad outcome. Instead it mandates it -- with additional compile-time clutter.)
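The two scenarios can be made concrete with a small sketch. At the time this thread was written Go had no type parameters; the second half uses the syntax Go eventually adopted in 1.18, purely to illustrate scenario 2. The `Stack` types here are hypothetical examples, not anything from the post.

```go
package main

import "fmt"

// Scenario 1: an interface{} container. Every read needs a type
// assertion, and a wrong type surfaces only as a runtime panic.
type AnyStack struct{ items []interface{} }

func (s *AnyStack) Push(v interface{}) { s.items = append(s.items, v) }
func (s *AnyStack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

// Scenario 2: a parameterized container. No casts at the call site,
// and misuse fails to compile instead of failing at runtime.
type Stack[T any] struct{ items []T }

func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }
func (s *Stack[T]) Pop() T {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	a := &AnyStack{}
	a.Push(42)
	n := a.Pop().(int) // cast required; panics if the stored type is wrong

	g := &Stack[int]{}
	g.Push(42)
	m := g.Pop() // m is int, checked by the compiler; no cast

	fmt.Println(n, m)
}
```

The runtime representation can be identical in both cases; the difference is where the type checking happens and how much cast clutter the caller carries.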
I think it's perhaps a reference to the original intent of Go at Google. They want new joinees to be quickly productive and also be able to write problem free software (and maybe maintain a lot of old software in addition to writing new software). Having more advanced language features is perhaps not conducive to this. Features like templates do take a bit more effort to grok and also take a bit more effort to understand the flow of code when maintaining software.
As a primarily .NET developer, I think I can count on two hands the number of times that I've needed to actually write my own generic types. Generic functions are maybe slightly more common, but still pretty rare.
However, if I had a penny for every time I've taken advantage of the generic collections built into the standard library, instead of using the old shitty, type-unsafe ArrayList or HashTable collections, I wouldn't have to work for the rest of my life.
.NET generics are more limited than full C++-style templates (or maybe I've been lucky enough to never encounter code where people have gone overboard with them), but that seems to be a pretty sweet spot.
Anecdotally, there seems to be a giant rush to get everything covered in generics when people first start learning them. So, what could have been a short section of code that did its job well, is now a larger portion of code that doesn't quite do what anyone wants and you can find yourself just doing a dance with the generic types.
I do not know of any studies that explore this concept at large. I would be interested in them.
Templates have a very complex semantics, because they are checked at instantiation time and can be specialized anywhere. Fortunately, parametric polymorphism and templates have nothing to do with each other. :-)
But by pushing complexity into third parties, the vast majority can benefit from a smaller spec and less cognitive load. It's a trade-off for sure, but I think as a language designer you have to be careful not to let the 1% convince you that the 99% needs x feature. There are tons of existing Go projects that are working just fine as is. How many of them really need more than what exists already?
A formulation of this argument has been used since the 1970s and it keeps being proven wrong over and over as progressively people realize the ML and Common Lisp folks were right. Those features are not bullshit invented to make a mathematician happy, they're features that make code safer, more concise, easier to reason about, and easier for optimizers to turn into performant code.
Why is it that folks keep trying to formulate this as "the egghead 1% vs the sane 99%?" Stuff is constantly flowing out of academia and advanced production labs and our industry re-levels to that. It's the way a growing industry is supposed to work.
Go is not a good language in a technical sense. It's an expression of Google's business needs. Google's got 0 interest in making rank and file programmers better; they're tuned for revolving door recruiting on a massive scale. For them, their goal is to make programmers who only have domain knowledge and require no training. They're not even that interested in lowering shipped error rates; they have a ton of machinery and a whole legion of SRE folks to help manage deployments when testing is minimal and hand-written.
To be clear btw, I am not condemning Google for arriving at this conclusion. Their economics are totally different from everyone else's. Every Big Company(tm) has a unique angle of attack on the training problem. Ironically, google's SRE folks are aggressively in the opposite camp in terms of recruiting and practice because they're the ones who actually have to enable this model of shipping broken code safely.
Everything about Go is designed to maximize what Google wants out of mid-experience programmers. It's so weird how non-google people seem to find this state of affairs good, as they almost certainly cannot afford to do it this way.
> Everything about Go is designed to maximize what Google wants out of mid-range programmers. It's so weird how non-google people seem to find this state of affairs good.
If it lets them build products that serve billions of users in ways that were completely inconceivable even 30 years ago, then maybe it's not such a bad tradeoff. To be clear, I'm talking more about the philosophy that produced Go, not so much the language itself. Obviously Google is more than Go.
Given the features of Go, Google could have used Algol 68 instead, or Active Oberon just to refer something modern, and still make those wonderful products you mention.
Which are still a minority compared with the amount of Google products written in Python, Java, C++ or even Dart.
So, all the products were already conceivable in some 30 year old languages.
As I suggested, it's not inherently bad but it's functioning at a scale that simply isn't replicatable by most orgs.
If you think that approach can work for you in your job, and that job is not at one of the top 10 tech firms, write an experience report about how you made it work.
That's his point. It's not a bad tradeoff... for Google. It might be a bad tradeoff for you, if you don't (according to his thesis) have a revolving door of mid-level engineers lacking domain knowledge and/or a large fleet of SREs.
Just an aside, it's slightly opposite from what you said. Google doesn't want software engineering at a larger scale to enter into the world of domain specific folks at all. So they specialize in giving domain specific folks a very tight coupling to the computer without a care in the world for most anything other than a narrow band of parameters.
They very much have an org for generating software that treats domain specialists as pluggable domain optimizers. Architects and SRE all enable that.
How does this only benefit Google? Other businesses can surely benefit from this aspect of Go. Heck, if you're an aspiring startup that has to hire new people, you gain from this.
Cheney is also speaking to a camp who have argued against generics in Go using various definitions of simplicity and complexity. Rather than getting into the weeds and highlighting differences in opinion about precise definitions Cheney seems to be acknowledging the validity of the positions of the other side _on their own terms_. This is how compromises are built and IMO a good step for the Generics in Go camp.
> It's not important if generics make the "language specification" more complex. What's really more important is if it makes the real-world language usage more complex.
This is what he was talking about when he mentions "more complex, less readable software." On the other hand, he doesn't mention the language specification once.
>This is what he was talking about when he mentions "more complex, less readable software."
You're making a facile argument without realizing it. For example, when you allow the ability to parameterize a thing, it can make the source code a little "less readable" while simultaneously reducing overall cognitive load.
Generics is the ability to parameterize _types_. As comparison, the language feature we are all familiar with is functions which allow us to parameterize _values_. Illustration of abstraction over _values_:
Compare code written by copy & paste with the same logic factored into a parameterized function.
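The original comment's inline illustration appears to have been lost in formatting; here is a hypothetical reconstruction in Go (the names are invented for the example):

```go
package main

import "fmt"

// Copy & paste: the greeting logic is spelled out in full each time.
// There is no indirection to follow, but every change must be repeated
// in every copy.
func greetingsByCopyPaste() {
	fmt.Println("Hello, Alice!")
	fmt.Println("Hello, Bob!")
	fmt.Println("Hello, Carol!")
}

// Parameterizing the value: one extra layer of indirection, but the
// logic lives in exactly one place.
func greet(name string) string {
	return "Hello, " + name + "!"
}

func greetingsByFunction() {
	for _, name := range []string{"Alice", "Bob", "Carol"} {
		fmt.Println(greet(name))
	}
}

func main() {
	greetingsByCopyPaste()
	greetingsByFunction()
}
```

Both produce the same output; the copy-pasted version is arguably easier to read line by line, while the parameterized one is easier to maintain, which is the trade-off the comment is describing.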
Programmers may not think the version that passes parameters is less readable (because they are used to it), but superficially it is. An extra layer of indirection makes the source text somewhat harder to scan, but it's worth it for the net gain in comprehension, maintainability, and correctness.
That's what parameterizing values via functions allows: it's more complex at a superficial level but simpler at a deeper level. Programmers have embraced the additional "complexity" of "parameterized values" because it makes the language easier to use.
A similar analogy about "parameterizing types" can also be made.
>, he doesn't mention the language specification once.
Yes, if you do Ctrl+F looking for the exact phrase "language specification", you're not going to find it. However, if you understand the meaning of the quotes I pulled, that _is_ what Cheney is actually talking about.
I know Go gets a lot of heat for not having generics, but having done a few fairly complex things in Go now, I really haven't missed them, nor have I had to bail out to interface{} in ways that I wouldn't have had to in other languages.
The main reason for that is that golang lists and maps CAN be typed and while they aren't perfect, they are pretty darn great for most use cases. Certainly I understand some people have the need for specialized data structures, but I wonder how often a project needs that data structure to be totally generic, as opposed to just doing the work to implement it (yes even perhaps copy/pasting some other implementation) and moving on. That's ugly yes, but it just isn't that big a deal in my experience.
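The typed built-ins the comment is referring to look like this; the compiler rejects wrong element types for the built-in maps and slices even though (at the time of the thread) you couldn't define your own containers with that property:

```go
package main

import "fmt"

// Go's built-in maps and slices are parameterized by the language
// itself: element types are fixed at declaration and checked at
// compile time.
func main() {
	counts := map[string]int{"a": 1}
	counts["b"] = 2
	// counts["c"] = "three" // compile error: string is not int

	names := []string{"x", "y"}
	names = append(names, "z")
	// names = append(names, 3) // compile error: int is not string

	fmt.Println(counts["b"], len(names))
}
```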
In any case, the gains from having a great runtime that compiles to a binary, an excellent standard library and awesome concurrency primitives far outweigh the negatives for me. Really enjoy Go.
I find Go is quite similar to C, and C also lacks all of these features. It's rare to see C code using some library for generic containers. Most developers roll their own, as needed.
In practice, bugs and safety aside, this tends to work swimmingly. Look at Redis, or git, or postgresql, or the many other projects getting by just fine without generics.
While I agree, and believe generics offer unparalleled power, they often aren't remotely necessary. If anything, their presence creates a tendency to over-use them, resulting in even greater complexity.
No, it doesn't work swimmingly. All of those projects have ended up either using void* or macros, or both, in order to roll their own generic data structures.
"It works" is an encoding of, "I can write tests which perform correctly" and not "My code is stable in the face of aggressive input."
Go definitely follows along with C in this tradition. It's better than C, because you have slightly more safety interpreting memory as executable and in bounds checking. However, it does very little to stop destructive bugs or let programmers escape copy-paste code hell where regressions keep resurfacing because it's most expedient to copy-paste code.
I was able to write software since 1980's until 1994, without a single line of generics in whatever language I had to use, until I got the first Borland C++ compiler with support for the early templates draft.
It is still possible to travel by horse, yet people have moved on to better solutions.
Having lived through that time as well, maybe the most painful period of it was toward the end, when I was trying to use macro-based generic containers from the Rogue Wave Tools.h++ library.
Frankly, early templates based C++ code was not all that great either. It was the introduction of the STL that made a huge difference in the usefulness of templates.
That is what go generate reminds me, the macros that Borland had on Borland C++ 2.0 for MS-DOS, on their collection classes (BIDS), before they added templates support with 3.0.
Can you really set aside one of the core arguments for generics while discussing how it works?
How many exploits have been caused by the fact that C code tends to have its own data structures (including bounds checking errors)? I don't describe that as working "swimmingly", at least when discussing how things work in practice.
One thing that annoyed me when I looked at Go years ago was that it seemed like strings were basically the only data type of unlimited size that was supported as map keys.
I think that only types for which equality is defined in the language could be used as keys. Those types were numbers, characters and all other "small" fixed-size types, structs all of whose members had equality defined, arrays of a type with equality, and strings. The size of an array is part of the type, so in that list of types with equality the only source of unlimited sizedness is strings.
I guess that covers most cases where you want a type of unknown size as a key, but I found it pretty annoying. If you want to use, say, polynomials with integer coefficients as keys in a map, you'd have to either choose a fixed maximum size (degree, for example, or number of monomials) for the polynomials or convert back and forth between strings.
It's not the only language that forces you to convert things to strings and back for use as map keys (AWK and Lua come to mind), but I wish it didn't have that limitation.
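The distinction the comment draws can be shown in a few lines. A comparable fixed-size struct works directly as a map key, while variable-size data (like a coefficient slice) does not, so it gets encoded as a string first. The `key` helper and its encoding are hypothetical, just one possible workaround:

```go
package main

import "fmt"

// point is a fixed-size comparable struct, so it is a legal map key.
type point struct{ x, y int }

// key encodes a variable-size coefficient slice as a string, since
// slices themselves cannot be map keys. fmt.Sprint gives a stable,
// deterministic encoding for a slice of ints.
func key(coeffs []int) string {
	return fmt.Sprint(coeffs)
}

func main() {
	m := map[point]string{{1, 2}: "a point"}
	fmt.Println(m[point{1, 2}])

	// Polynomials keyed by their string-encoded coefficients.
	polys := map[string]string{}
	polys[key([]int{1, 0, 2})] = "1 + 2x^2"
	fmt.Println(polys[key([]int{1, 0, 2})])
}
```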
What price are you talking about? They're adding a map that offers a bit more leeway when working with it from several goroutines, is all I am seeing without thinking extensively about it.
So what's your complaint? That you want a synchronized map? Then write a two-line function that wraps your native Go typed map with sync.Mutex and be done with it.
If you are sharing a map in this way across many goroutines you are probably "doing it wrong" in go anyways.
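The wrapper the comment describes is a few lines of boilerplate per map type; this is one possible sketch, with invented names:

```go
package main

import (
	"fmt"
	"sync"
)

// SafeCounts is a typed map guarded by a mutex, the pattern the
// comment suggests instead of sync.Map.
type SafeCounts struct {
	mu sync.Mutex
	m  map[string]int
}

func NewSafeCounts() *SafeCounts {
	return &SafeCounts{m: make(map[string]int)}
}

func (s *SafeCounts) Add(key string, n int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] += n
}

func (s *SafeCounts) Get(key string) int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.m[key]
}

func main() {
	c := NewSafeCounts()
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Add("hits", 1) // safe from many goroutines
		}()
	}
	wg.Wait()
	fmt.Println(c.Get("hits")) // 10
}
```

The cost, of course, is writing this wrapper again for each key/value type, which is exactly the kind of repetition generics would remove.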
I can see your point but you're generalizing a bit. For many people they work, for many others they don't. The old and the new type systems both have their place and use-cases.
The new type system is a superset of the older ones.
You can still write your own specialized error-prone repetitive containers for each and every type by hand.
Yet, once a language has generics, people don't use the old way even though it's still possible to.
Perhaps that's because it has no place when there's a replacement which is less error-prone, less repetitive, more usable, and can be implemented once in a library and used forever.
After seeing Rich Hickey's excellent material on the matter^, I can no longer read anything talking about Simplicity and Complexity in programming without suspecting the author of being fast and loose with what those terms specifically mean.
As it stands, they are a recipe for different camps and sub-camps of programmers to talk past each other endlessly.
Simplicity is very easy to objectively measure. Write down a formal semantics for the programming language in question, and count how many pages you used.
But, of course, nobody will actually do this, because it would expose the inherent complexity of designs advertised as simple. Many people's feelings would be hurt in the process.
I would argue that is a good measure of the simplicity of the language itself, but not a measure of the simplicity of the use of the language. By that measure, Malbolge is a simpler language than C++ by a factor of ~1000. However, it is still much simpler to write code in C++ than in Malbolge.
Yes - evidently not as impressed. It's against the site guidelines to ask that btw.
For example, I cannot accept that having no means to define immutable structures makes for an overall "simpler" programming model. What could be simpler than allowing information to be information?
Whether having an additional concept makes Go more burdensome to learn and implement is another matter, and is on a different axis to Simplicity/Complexity (again, using Hickey's excellent deconstruction of simple/complex vs. easy/hard).
Not that you don't have a point, but I actually prefer fast and loose with most terms. Ironically, I find it leads to simpler conversations. :)
It can lead to some misunderstandings, but I think those are usually given more voice than they are worth.
Also ironically given everything I just wrote, I found that an odd mark for a footnote. I instinctively look up when I see the caret. Usually for a superscript, but not seeing one my eye kept going.
> I actually prefer fast and loose with most terms. Ironically, I find it leads to simpler conversations.
You mean simplistic conversations right?
The problem with being fast and loose with terminology is that it lacks precision; and with lack of precision comes ambiguity and misinterpretation, which defeats the whole point of good communication.
Sorta. I'm reminded of the point Feynman made about keeping everything as "layman" in explanation as possible. His point was basically not to hide behind jargon and highly specific terms in trying to explain something.
So, if communication is hinged on highly specific meanings of words, the odds go way up that someone will not actually hear what you think you are saying.
Instead, keep conversations high-level and do not rely on the specific meanings. It requires more thought from the listeners, in some ways, but it relies on less pre-existing knowledge from them.
It is tempting to think you have narrowed your audience down to non laymen. This is often an incorrect assumption, though.
And in writing, this can go the other way entirely. There is a place for highly specific and very precise language; it is usually best alongside the non-specific language.
I agree with the sentiments of this post (you can't "just" add generics; there are serious trade-offs), but I think his assertion that Python is "on the wrong side of history" is untrue. More people are using Python now than ever before, in more and more problem domains. It's great for certain things, just not very good at concurrency (alleviated by async) and particularly parallelism (though in practice you can often work around it with C extension modules and multiprocessing). But to say Python has "missed the boat" and is "on the wrong side of history" is not true.
Anecdotal evidence from the Pythonistas I've known in my career -- they stick to it because they like it, it's old (to them that means it's a proven tech) and because there's too much inertia in their organization to move on to something else.
Technologies aren't only popular because of their quality, or shall we bring PHP once again to prove this point?
Yup, this is what I hate about all of the we-must-be-right-look-at-our-adoption-rate arguments. From the blog post:
> There is a reason Go programmers choose to program in Go, and I believe that reason stems from our core tenets of simplicity and readability
Not the reason I choose it, I choose it for tight cross platform binaries and nice stdlib. Arguments made against generics should be technical, not political... complexity may be subjective, but the justification for the definition can still be technical.
As I mentioned in my more lengthy comment right here -- https://news.ycombinator.com/item?id=14563597 -- popularity is very loosely correlated to the quality of the popular thing. There's ease to pick up, there are pressing business needs fixed by a creative junior Python dev, there's the businessmen loving quick solutions (and future be damned of course), there's inertia, there's skepticism ("but our current system is working! why switch?"), there are tens of other such phenomena that can be attributed to the very buggy automatic brain processes.
So yeah, I hate the we-must-be-right-look-at-our-adoption-rate arguments as well. :)
Anecdotally, it seems to me that it's increasingly common to see Python as the language of choice in greenfield projects. I've seen this in organizations -- scientific, academic, industrial -- that aren't primarily software-oriented. This involves both internal applications/utilities and general use for statistical analysis, machine learning, and data processing.
It may be that choosing Python for a customer-facing web service would only be done now because of inertia or risk aversion, but there's tons of new users who don't care about the GIL and who like Python for its pleasant language design, good libraries, and ease of interfacing with C.
PHP got popular because it was easy to use, and then because of inertia and path dependence. I don't think anyone was really happy to "grow with" PHP, in terms of gaining in expertise and creating increasingly complex code. I don't see this problem with Python -- leaving aside issues with concurrency, the language largely doesn't get in the way of writing quality code.
(1) People are always inclined to one or the other language, even the non-technical people. I am sure most of the HN audience knows businessmen who read 3-4 short articles and felt like tech experts. Or a junior dev friend of theirs told them "man, language X is awesome". And it picks up from there.
(2) Python is a pretty good language, there's no arguing that. And if you have a team where even not-very-technical people can code an easy script with it, there's a subconscious peer-pressure to choose it for your next greenfield project. It's easy to pick up even if you're new to programming. That's a plus per se, but us the Homo Sapiens always prefer instant gratification over sensible long-term investments. It's our nature. That's why I picked Ruby -- after running away screaming from the Java EE world -- seven years ago. (Now I regret that decision, by the way.)
(3) Many projects don't care about concurrency. Languages with a GIL that are easy to pick up -- Python and JS being the prominent examples -- are bound to be popular. You're correct on that point, 101%.
(4) Python has good C-bindings with awesome libraries. Facts are facts.
(5) I've known people who ran away from Python. I am not making this an argument against the language itself, I am just giving you perspective on your statement that "one can grow with Python". Yes they can, some choose not to. That's not an argument in either direction -- not for, and not against the language. It's a natural phenomenon with humans.
---
I'm seeing popularity in general only loosely bound to quality. Many actors, software technologies, hardware pieces, TV shows etc. etc. became popular because there was a pressing need for something in the area and that entity arrived in time to satisfy the need. From then on the human nature of being creatures of habit kicks in and much better alternatives go unnoticed for years (or decades) and the old stuff gets replaced only when it becomes blatantly obvious that it's inadequate, many times in a row.
It seems our human systems hate gradual and constructive change. It always has to be semi-cataclysmic changes.
I digressed. Please note that I am in partial agreement with you, but felt compelled to give another take of your points.
If my experience with c# is anything to go by, you can't "just" add async either. The details of how it works are very complex, the syntax requires a lot of noise words all over the code, and there are many ways to get it wrong.
That's just a cost of retrofitting a paradigm-changing feature.
I would say, based on some experience with async capabilities in Python, Java, Go, and C#, that C#'s async/TPL implementation is among the more elegant. Its internals may be complex and it is not without some caveats that must be understood, but I find that it introduces the least amount of extraneous keywords and code constructs in the business logic of an application. And unlike some languages such as Java, C#'s async capabilities are well-supported throughout its core libraries. As someone who builds a lot of data processing applications in Java that benefit from both high concurrency and non-blocking IO, I find myself wishing Java's async capabilities were more like C#'s.
The facet that is missing is the impact on users' code complexity. It is all well and good if Go is simple, but if my code is horrendously complex to compensate, it isn't a net win. I exaggerate, but a lot of the complaints for Go already stem from this trade-off.
Focusing entirely on the language itself is a failure.
Keep an eye on the language's users, too.
No one knew about generics until languages like Pony and Crystal proved that they were important? This may be a pedantic point, but technically, generics or templated types were already being used in some obscure, academic languages like Java, C#, and C++.
As I read it, the author wasn't discounting those languages or saying nobody had heard of generics, he was saying that newer languages demonstrate that generics are now expected. Remember, both c++ and Java got generics added on (and c# followed Java's lead).
The introduction of languages that have generics designed in from day one is key. Even though Go doesn't (yet?) have them, the fact that its designers wrestled with the question from the start is an indicator that the concept has become central to programming language design.
> Remember, both c++ and Java got generics added on (and c# followed Java's lead).
C++ got templates 27 years ago. Java got generics 13 years ago. C# got them 12 years ago. Go was launched 10 years ago. I can't think of a single statically typed language with significant use when Go was developed that didn't have some form of type parametricity. Maybe C, but you could argue C users can simply opt into C++ if they want that.
The Go folks aren't risking being on the wrong side of history if they don't add generics. They were on the wrong side of history when the language launched.
I do understand that generics are very difficult in an ahead-of-time compiled language where fast modular compiles are a priority one goal. I'm not saying it's easy, at all. But there's no shortage of brilliant folks on the team. If they put their minds to it, they could solve this.
Maybe it was because they had the luxury of controlling the language and core libraries. They could make the types and functions they cared about -- channels, arrays, slices, maps, append(), etc. -- generic, since they could bake them right into the language. It's really easy to not empathize with a user's problem if you don't experience it yourself.
C++ got templates during the time it was being designed.
Between C++ ARM and ANSI C++98, C++ compilers experimented with them, and its design reflected on the standard work.
Borland was one of the first PC compiler vendors to support templates, still on their MS-DOS/Windows 3.1 compilers, with the release of Borland C++ 3.0.
Microsoft Research already had a .NET prototype with generics support, by the time v1.0 was about to get released, but they decided it was more relevant to ship v1.0 than waiting for the generics support to be fully done. Don Syme from the F# team has a few blog entries about this.
One of the first languages to get generics was CLU in 1975, shortly followed by ML and Ada in the early 80's.
C++ did not have templates when the first commercial compilers shipped and the first books were published. It got templates around 1990, five years after the first release of a commercial C++ compiler and the first publication of "The C++ Programming Language".
node.js isn't single-threaded. The JS code gets executed in a single thread, but I/O happens in multiple threads (that stuff is all abstracted away by libuv, but I still consider libuv to be an integral part of node.js).
> Python I/O also releases the GIL (and has for decades).
I didn't know this; thanks! I'm not familiar with Python internals at all (in fact, I had to look up what GIL was when I read the article). Seems strange, then, that the author would imply a contrast between Python and node.js (which I'm now realizing was your point to begin with). Thanks for being patient with me :).
Yep! There are bindings for quite a few languages now. The only reasons why I conflate libuv with node.js are because libuv was developed specifically for node (as a cross-platform libev implementation) and because libuv comes with node by default.
The point is not List<int>. The point is that the primary intention of Go is to push developers to write the most readable (and hence, maintainable) code possible. No one argues that List<int> will not make the code more readable (and in fact, Go has some very limited generics for lists, maps and channels for these usecases). The problem is that once you give people parametrized types, they're going to parametrize the shit out of them, leading to byzantine messes such as the C++ standard library or Boost.
And that's not to say that something like Boost isn't an accomplishment in itself. But it's beyond hard to read, and when your language allows you to write code like that, it's bound to appear in your codebase as well, and make maintaining it much harder than it needs to be.
> The point is that the primary intention of Go is to push developers to write the most readable (and hence, maintainable) code possible.
Lack of parameterized types leads directly to code-duplication. How is duplicate code more maintainable?
> The problem is that once you give people parametrized types, they're going to parametrize the shit out of them, leading to byzantine messes such as the C++ standard library or Boost.
An extraordinary claim with no backing what so ever.
Just because C++ turned out terrible doesn't mean everything else has to. Look to C# and Java. They're doing just fine.
Also: how is littering your code with "interface { }" any less Byzantine than simply using "T"?
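The "interface{} vs. T" point can be seen in a small sketch of the standard workaround: a container backed by `interface{}` (the `Stack` type here is illustrative, not a real library). Every value that comes back out needs a runtime type assertion at the call site, exactly the noise a type parameter `T` would eliminate:

```go
package main

import "fmt"

// Stack is the interface{}-based workaround for a generic container:
// it holds anything, so the compiler can no longer check element types.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

// Pop returns interface{}, forcing the caller to assert the type back.
func (s *Stack) Pop() interface{} {
	n := len(s.items)
	v := s.items[n-1]
	s.items = s.items[:n-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	n := s.Pop().(int) // caller must assert; a wrong type panics at runtime
	fmt.Println(n + 1) // prints 43
}
```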
I'm glad the article begins with mentioning the GIL, the elephant in the room of Python and Ruby. I often wonder why it isn't getting the attention I believe it deserves - either the problem is just too hard to solve and has been written off as not worthy of addressing, or perhaps developers do not understand how much (unnecessary) complexity goes into deploying, running and monitoring a production Python/Ruby app only to work around the GIL limitation. I devoted a section of my recent blog post to the GIL problem here [1] if you're interested in why I believe it is such a problem and how cool it is that Go does not have it. It's not the only thing I like about Go, but it is one of the most important ones in my opinion.
It can be argued that the GIL doesn't matter, because Python is too slow if you have a performance sensitive problem anyway. I'll give numpy as an example.
>By the time this decade rolled around, Node.js and Go had arrived on the scene, highlighting the need for concurrency as a first class concept. Various async contortions papered over the single threaded cracks of Python programs, but it was too late. Other languages had shown that concurrency must be a built-in facility, and Python had missed the boat.
While concurrency is nice, some citation is needed here. For one, Python is not going anywhere, and remains spectacularly more popular than Golang.
>But, no matter how important and useful templated types and immutability would be, integrating them into a hypothetical Go 2 would decrease its readability and increase compilation times—two things which Go was designed to address.
Well, they sacrificed compilation times to get a native compiler (which has no real utility to the users of the language, on the contrary it makes bootstrapping a little more messy).
Sacrificing it for Generics seems like a no brainer, but it seems they're not yet ready to bite the bullet.
As for "sacrificing readability", I don't think that having 30 lines of code just because you need to make it work with different types is any easier than having 10 lines of the same code with parametric types. Not to mention NOT having to write any code at all, because the standard library could finally have code (e.g. for math, collections, etc.) that works across all relevant types. With generics, people could just reuse a few generic channel implementations to handle 90% of their concurrency needs.
And it's also much better and easier to parse than using dreadful string templating kludges or interface{} to get the same behaviour.
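The duplication being described looks like this in practice: without type parameters, the identical function body is written once per element type (the `SumInts`/`SumFloats` names are illustrative):

```go
package main

import "fmt"

// Without parametric types, the same loop is duplicated per type.
func SumInts(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// SumFloats is character-for-character the same logic, re-typed.
func SumFloats(xs []float64) float64 {
	total := 0.0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(SumInts([]int{1, 2, 3}), SumFloats([]float64{1.5, 2.5})) // prints 6 4
}
```

Multiply this by every numeric type and every algorithm, and the "10 lines vs. 30 lines" comparison above starts to look conservative.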
I mean, the author mentions all those languages with generics ("Rust, Nim, Pony, Crystal, and Swift showed that basic templated types are a useful, and increasingly, expected feature of any language—just like concurrency"). Does he see the users of those languages complaining about generics being a burden?
Why do people always have to lament the terrible burden Generics will bring to Go in advance, when millions of us (literally, just add Java and C# programmers) just use generics in other languages without thinking twice about it?
There's a proverb in my country, that "He who'd rather not knead the dough (to make bread), keeps sifting the flour for days".
In programming terms it would translate roughly to: "He who is too lazy to build a bikeshed, or doesn't like the prospect of building one, will keep on discussing the color it should be painted".
Which I think is the case whenever anybody from the core Go team discusses Generics.
>As it stands now, generics or immutability can’t just be added to Go and still call it simple.
It's 2017. Neither is particularly difficult to grasp for a modern programmer. Even enterprise Java programmers have been using the former for a decade already, and they're not known for their wild, experimental nature...
> For one, Python remains immensely more popular than Golang.
And it's worth looking at the reasons for that. Firstly, age is a factor - Python is much older.
However, most Python growth has come from data science and machine learning fields. There is no reason Go can not grow the pie and make such fields more accessible to those with an interest in computing concurrently. And surely, if there is one field where concurrency might be a boom it might be data science and machine learning.
Now, Go has some problems in that area: principally it does not have data structures as "friendly" as Python lists or NumPy arrays, and therefore people will have to jump through hoops. Could it get them? Yes. Would that be more powerful than generics? Absolutely. Will it get them? It's not obvious to me it will, other than through a third-party library which is fine.
In fact Python's successes might not be Python's. They might be NumPy's and Pandas' successes. Go can have the same thing, and do it better.
I think you underestimate just how important Python's support for various numeric data types is for NumPy's success. There is no possible equivalent in Go that would not look like a horrendous mishmash of various different functions to declare all the different numeric types and various forms of matrix slicing and traversal operators.
Go does not support multi-dimensional slices, and that is a problem.
I am not sure the numeric data types thing is such an issue.
Python supports int, float, long and complex types. So does Go, with the slight annoyance that you have to care a little more about the size of your number, so you can't just label something 'long' and walk away safe in the knowledge it's kinda big. NumPy essentially makes Go-style fixed-size numeric data types available in Python. The clever thing it does that Go does not is around arrays and the operations on them - something we need to consider in Go land, carefully.
The other stuff Numpy and Pandas makes available could be made available in Go, too and could abstract away the horrible stuff under the hood where you're resizing slices and creating multidimensional versions.
I think the moment multi-dimensional slices are available, you can do everything you want, and with concurrency and therefore able to trivially exploit multi-core systems - that could be the killer feature. Maybe.
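Since Go has no multi-dimensional slices, numeric code today typically emulates a matrix with a flat backing slice plus index arithmetic, which is exactly the "horrible stuff under the hood" a NumPy-like library would have to hide. A minimal sketch (the `Matrix` type and its methods are hypothetical, not from any real Go library):

```go
package main

import "fmt"

// Matrix emulates a 2-D array with a flat backing slice;
// element (r, c) lives at index r*cols + c.
type Matrix struct {
	rows, cols int
	data       []float64
}

func NewMatrix(rows, cols int) *Matrix {
	return &Matrix{rows, cols, make([]float64, rows*cols)}
}

func (m *Matrix) At(r, c int) float64     { return m.data[r*m.cols+c] }
func (m *Matrix) Set(r, c int, v float64) { m.data[r*m.cols+c] = v }

func main() {
	m := NewMatrix(2, 3)
	m.Set(1, 2, 7.0)
	fmt.Println(m.At(1, 2)) // prints 7
}
```

Every library would reinvent this wrapper (and its slicing, transposing, and broadcasting logic) per numeric type, which is where the lack of both multi-dimensional slices and generics compounds.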
>And it's worth looking at the reasons for that. Firstly, age is a factor - Python is much older.
Those things are not very good indicators. Cobol and Lisp are even older, but not as popular. Perl is the same age, but has fallen sharply. Smalltalk is a decade plus older, but fell from grace in the mid-nineties. And so on...
COBOL and Lisp are not _trendy_ but they are _hugely_ popular in certain fields.
Interestingly the reason they have not blossomed into old age is because they have exactly the opposite reputations: Lisp is hard to create but elegant when you have it right, COBOL is hard to replace, and inelegant wherever it lives on.
Perl was trendy but unmaintainable and often replaceable, so it was replaced.
Smalltalk lives on mostly through its ideas manifesting themselves in Ruby. It is still popular in certain fields, particularly those where "minicomputers" once reigned: business critical 9-figure turnover manufacturing sort of businesses.
Python has had longer to build traction, and so far has kept it - I'm arguing it might be about to pass.
I don't understand how Python's popularity in relation to Go has anything to do with whether or not Python has good concurrency support.
As I understand it, Python's Global Interpreter Lock means that no matter how many threads and processors there are, only one thread is going to be executed at one time.
Certainly it can be hugely popular in spite of that shortcoming, just like Go can be popular in spite of the lack of generics.
>Certainly it can be hugely popular in spite of that shortcoming, just like Go can be popular in spite of the lack of generics.
Well, the author of TFA disputes that "certainly". He says that nowadays a language kinda MUST have good concurrency support, or it will lose users.
Which I don't necessarily agree with, but it's a totally understandable position. So I don't see how one can say they don't understand it -- at worse, they don't agree with it.
The good part of Go is having a programming language, which DevOps are adopting in place of C, allowing for safer userspace applications.
The sad part is Google supporting progress on Java, Kotlin, TypeScript, Dart and C++ on one side, all with lots of people knowledgeable in language design, and then there is Go.
While the author's main idea is good (it can be more complex to work around missing features than to have them designed in from the beginning), starting from an invalid premise really weakens the article.
Node.js illustrates the need for concurrency, but is even more single-threaded than Python.
Somehow, in spite of the GIL, Python has continued to increase in popularity, probably because the workarounds are pretty easy (use multiple processes or c extensions that give up the GIL or jython or...).
Threads are not the only route to parallel execution, just as templates are not the only route to generic data structures.
Wrt/ to generics I think one interesting exercise would be to look at list and see what would need to happen such that it behaves more like the built in map. You'd want to be able to declare:
- list[type]
- delete should work.
- len and cap should work. (rather than .Len() )...
- range should work.
- What do we do for iterators?
I think it would make code that uses lists more readable/grokable. So you do get something in return for the debt you possibly took on to be able to implement list. In a sense that's also the way C++ trades things off, the library code is more difficult but the code using the library is simpler/safer.
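To make the gap concrete: here is roughly what a library-level list looks like today, with methods standing in for the built-in forms (`len`, `delete`, `range`) that map enjoys. The `List` type below is an illustrative sketch, not a real package:

```go
package main

import "fmt"

// List is what a library can offer today: methods instead of the
// built-in len/cap/delete/range forms that map and slice get for free.
type List struct{ items []interface{} }

func (l *List) Append(v interface{}) { l.items = append(l.items, v) }
func (l *List) Len() int             { return len(l.items) } // vs. len(l)
func (l *List) Delete(i int) {
	l.items = append(l.items[:i], l.items[i+1:]...) // vs. delete(l, i)
}

func main() {
	var l List
	l.Append("a")
	l.Append("b")
	l.Delete(0)
	// Iteration has to reach into the internal slice; there is no way
	// to make `range l` itself work for a user-defined type.
	for _, v := range l.items {
		fmt.Println(v) // prints b
	}
}
```

Making `list[type]`, `len`, `delete`, and `range` work as they do for map is exactly the language-level support the exercise above asks about.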
This is all about trade-offs right? Choose to have a simpler language when it comes to concurrency and multi-threaded programming and the outcome is a harder time troubleshooting at runtime.
Or, make the compiler more powerful and annotate the code in such a way that the compiler does way more upfront on your behalf...now this means you don't have to work so hard to troubleshoot code in production.
Personally, I'd rather work harder up front--even if it means a higher learning curve.
I'm finding it difficult to accept the premise that generics and immutability are needed in Go.
In my case,
var foo map[<type>]interface{}
solves 99% of issues where I would normally need generics.
The author also doesn't really specify what kind of generics, there's a huge difference between parametric polymorphism (i.e. Haskell type classes) and Java/C++/C# generics.
The sheer rapid pace of development and simplicity has made Go our primary stack.
> solves 99% of issues where I would normally need generics.
But it also opens you up to runtime conversion errors, spurious if-checks for type information that compilers with generics already have, or slow reflection to find the type.
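The failure mode is easy to demonstrate with the `map[string]interface{}` pattern from the comment above (the `getInt` helper is illustrative): a wrong type assertion compiles cleanly and only fails at runtime.

```go
package main

import "fmt"

// getInt shows the boilerplate every read requires: a type assertion
// with an ok-check, since the compiler no longer knows the value type.
func getInt(m map[string]interface{}, key string) (int, bool) {
	v, ok := m[key].(int)
	return v, ok
}

func main() {
	foo := map[string]interface{}{"count": 1}

	if n, ok := getInt(foo, "count"); ok {
		fmt.Println(n) // prints 1
	}

	// A wrong assertion compiles fine and is only caught at runtime:
	_, ok := foo["count"].(string)
	fmt.Println(ok) // prints false; a bare .(string) here would panic
}
```

With a parameterized `map[string]T`, both the ok-checks and the panic risk disappear at compile time.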
> The author also doesn't really specify what kind of generics, there's a huge difference between parametric polymorphism (i.e. Haskell type classes) and Java/C++/C# generics.
Java/C++/C#/Haskell generics all provide parametric polymorphism though.
Haskell type classes provide ad hoc polymorphism, though I guess maybe they provide parametric polymorphism as well.
Per Wikipedia:
> In programming languages and type theory, parametric polymorphism is a way to make a language more expressive, while still maintaining full static type-safety. Using parametric polymorphism, a function or a data type can be written generically so that it can handle values identically without depending on their type.
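That definition can be illustrated in Go's own eventual syntax (type parameters landed in Go 1.18, well after this discussion): a function that handles values identically without depending on their type.

```go
package main

import "fmt"

// Reverse is parametrically polymorphic: it rearranges elements
// without ever depending on what the element type T actually is.
func Reverse[T any](s []T) []T {
	out := make([]T, len(s))
	for i, v := range s {
		out[len(s)-1-i] = v
	}
	return out
}

func main() {
	fmt.Println(Reverse([]int{1, 2, 3}))      // prints [3 2 1]
	fmt.Println(Reverse([]string{"a", "b"})) // prints [b a]
}
```

Ad hoc polymorphism (Haskell type classes, or interface method sets) is different: there the behavior does depend on the type, via per-type implementations.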
> I'm finding it difficult to accept the premise that generics and immutability are needed in Go.
They aren't strictly needed, but that is not much of an argument.
Having Maybe and Either available is a huge help. Those are examples of generic types I've really benefited from in Haskell, Scala, Rust, Kotlin, Swift... But of course I can code without them if I have to.
I don't think Cheney properly understands the concept of "simplicity" fully. A language is not simple because it meets some arbitrary measure of smallness.
A language is simple if it decomplects unrelated concerns. Which go does admirably in certain areas, but its absence of generics certainly adds to cognitive overhead and unnecessary complexity elsewhere.