Context is important here. Note that the article is on a hardware design website. The software development they are talking about is embedded software.
In the government contracting world in the '90s, Ada was very common. The DoD mandated Ada in some development contracts. So I can see how someone from this world might make the mistake of calling Ada mainstream.
In this domain a clear view of the hardware is important for maintenance. The hardware/software interface is where a lot of trouble is found, and a problem at that interface takes longer to troubleshoot through a VM.
Back to language selection in this industrial segment. There were contract requirements imposing a particular language, and upper management might impose a language choice as well. Allowing the actual software engineers to select the programming language is a recent and not yet widespread change. I read the sentence about selecting a language as saying that the engineers doing the work should make the choice, since they know the needs best.
I'm not a big fan of TIOBE [1], but Ada is in the top 20 and ranks above Lua, Go, Haskell, Scala, Clojure, Scheme, OCaml, Erlang, and a bunch of others that get more mention here. If an article called Haskell or Go "mainstream", would you have called them out on it? Ada is still popular in a lot of fields.
I don't get the hate on Ada. It has built-in tasking/threading, strong typing, low-level access when you need it, generics, native compilation, a decent toolchain, an open standard, ...
I think it was sort of a perfect storm of a couple of things. General dislike of mandates, especially from the DoD.
Its Pascal-like syntax. In the '80s there was a very real Pascal vs. C war, literally based upon "BEGIN" and "END" being too verbose compared to "{" and "}", argued by guys on 300 bps modems and crap like that. Pascal got an unfair bad rap, and Ada looks an awful lot like it.
Then there's C: old-school C can almost be converted into assembly by yacc, without a real compiler. While it's very hard to make a good or great C compiler, it's not hard to just make a C compiler. Ada came with a much larger library, and then, if I'm not mistaken, there was a certification process to make sure your Ada was really "Ada", and that was fairly pricey.
It's worth a look if you're doing some kinds of green-field development; the GNU Ada compiler is pretty good. I think there was a plan for an Eclipse plugin too.
Pascal's bad rap was entirely justified. You could not write a useful program in Wirth's Pascal - a lot of language extensions were required, and of course every Pascal vendor implemented their own such extensions.
Ada suffered for years for being an unimplementable language. It was a long time before compiler technology caught up with it, but Ada found it hard to shake that stigma.
C came along at the right time, did not have those problems, and so took off. On the PC, for example, in the early days C was the only possibility other than assembler.
> Ada suffered for years for being an unimplementable language. It was a long time before compiler technology caught up with it
Could you explain what features of the language made it that hard to implement?
I would really like to see your general opinion on Ada from a language designer standpoint.
Considering you're the main mind behind D - did you draw any inspiration from Ada while fleshing out the language?
I first got and read the spec for Ada in 1982, intending to write a compiler for it. I found it to be overwhelmingly complicated, but I had little experience with compilers at the time. I haven't paid much attention to it since. Many other early compiler devs shared my opinion of it, and indeed it was many years before it was successfully implemented.
I don't recall just what it was that made it hard, other than being so complicated. I doubt I'd feel the same way looking at it now, as I have a lot of practice implementing complex features from writing a C++ compiler :-)
What I find attractive about Ada is the emphasis on writing programs that are checkably correct. I believe that, with the ever increasing complexity of software, better and much more mechanically checkable encapsulation, etc., is the way of the future, as opposed to what I call "faith based programming" where you overly rely on the programmer not making mistakes.
A large focus of D has been and continues to be on being able to write mechanically checkable code.
I've heard complaints about how tough it was to get Ada code to compile. I'm not sure if it was the language itself or unfriendly tools and compiler warnings...
Do you realize that this report was written in "Electronic Design", an embedded systems journal? It's not talking about the PC desktop here.
I don't know the exact numbers, but Ada still plays a big role in security-critical systems (airplanes etc.).
> Managed languages lead to "applications that are difficult to maintain over time"?
I think this is true for embedded systems because they evolve very fast. Microsoft would have to put in a lot of effort to stay on par across many different kinds of embedded systems. The second point is that Microsoft's support for their operating systems is not unlimited. So if you have a long-lasting embedded system technology, you simply cannot afford to use .NET.
Ada did. As of new projects around 2002, it didn't - people had moved to cheaper bundled platforms rather than the traditional compiler vendors, i.e. a MIPS/PPC FPGA core plus the vendor's C compiler. Older kit still uses it AFAIK, but you won't see much if any recent software shipping in it.
I quit embedded in about 2005. I dealt with avionics and guidance systems. They don't evolve fast. When I quit, people were still using RCA1802 cores and ASM for some tasks.
.NET on Windows CE is still out there in force and is well supported. There are literally millions of devices out there with 20-year support contracts on them.
Is there a renaissance of C++ and Ada in those domains? That is to say, did they ever leave? I don't remember hearing about a big push to have embedded software for airplane controls written in C#.
Yes, you were right to have stopped reading because it continues to spout nonsense. It's full of memes, misconceptions and plain false information.
It's a shame, because there really is something to the C++ renaissance (I wouldn't know about Ada), and the shortcomings of Java and other managed languages in a few fields. It would have made an interesting article.
As a very long term C++ user, I find it much easier these days to write safe, efficient and maintainable code than it used to be. Much of this is due to the wealth of documented good practice and my own personal experience, but also due to language improvements, especially in C++11. C++11 is how all language updates should be: reduced verbosity, increased safety, common practice made standard, and even improved efficiency.
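To make that concrete, here's a minimal sketch (the Widget type and names are mine, purely illustrative) of the kind of cleanup C++11 enables: unique_ptr ownership, auto, and range-for replacing raw owning pointers and iterator boilerplate.

    #include <iostream>
    #include <memory>
    #include <vector>

    struct Widget { int id; };

    int main() {
        // Pre-C++11: raw owning pointers; every exit path had to
        // remember to delete, and loops were iterator boilerplate.

        // C++11: ownership is explicit and cleanup is automatic.
        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(std::unique_ptr<Widget>(new Widget{1}));
        widgets.push_back(std::unique_ptr<Widget>(new Widget{2}));

        for (const auto& w : widgets)   // range-for: reduced verbosity
            std::cout << w->id << '\n';
    }   // unique_ptr frees every Widget here: safety without verbosity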
Don't forget "Requiring automatic dynamic memory management ("garbage collection") is several orders of magnitude less efficient than letting developers use stack and manual memory management"
This is kind of like saying "Writing in FORTRAN is several orders of magnitude less efficient than letting developers use assembly." Yes, a smart enough engineer can write assembly that is, at worst, no slower than FORTRAN (proof by the fact that FORTRAN compiles to assembly). But this doesn't mean that on real-world projects FORTRAN won't win.
Ada actually has sane memory management by default, compared to both C++ and Java.
Pointers are, by default, verboten. Pointer arithmetic will generate a compiler warning.
People should genuinely look into this stuff more rather than just dismissing it offhand. Ada got almost everything right for a procedural-OOP language. It was just too far ahead of its time.
The article was claiming that C++ failed in the 90s with the large migration from C++ to Java; it 'failed' at being the general-purpose industry language of choice. This is a reasonable statement.
And Java is often 1-2 orders of magnitude less efficient at memory usage than C++, so while 'several' is perhaps hyperbolic, the general point still stands (execution speed wasn't being discussed).
Java is not the only runtime with automatic dynamic memory management. I could run comparisons against "tcc" and conclude that Java is an order of magnitude faster than C.
C++ has always been popular. Ada has been and will remain a niche language for niche jobs. Language selection is probably the least consequential thing in software project success, as long as the language fits the job and your developers are comfortable with it.
Concerning concurrency, Ada is 30 years ahead of C++. The concurrency system in Ada83 was already more sophisticated than mutexes and threads. Since Ada95, the concurrency system of Ada is really marvellous.
The single problem with Ada is that there are so few Ada programmers on the market.
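For contrast, here is the mutex-and-thread baseline in C++ that Ada83 already went beyond (a minimal sketch with illustrative names): in Ada, tasks, rendezvous, and (since Ada95) protected objects build the synchronization into the language, whereas here every call site has to remember the lock by hand.

    #include <iostream>
    #include <mutex>
    #include <thread>

    int main() {
        int counter = 0;
        std::mutex m;

        auto work = [&] {
            for (int i = 0; i < 1000; ++i) {
                std::lock_guard<std::mutex> lock(m); // every access must
                ++counter;                           // remember the lock
            }
        };

        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        std::cout << counter << '\n';   // 2000 -- but only because no
    }                                   // call site forgot the mutex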
Programs are rarely CPU-bound from an instructions-per-second point of view. 80% of the time I've profiled high-performance applications, it's been memory allocations and deallocations that have been the issue.
Garbage collectors still aren't (and I doubt they ever will be for all use cases) smart enough to work out things like how to organise struct alignment so data sits on L1/L2 cache boundaries, exact pre-allocation sizes (for things like slab allocators handling loads of small allocations), or thread contention while allocating memory concurrently. And that's ignoring things like being able to allocate on the stack, which Java can't do for anything other than base primitive types.
A garbage collector might be able to pick up some of these things to a primitive degree, but certainly not on the first run of the code, which means the first run will be slow.
Games and embedded systems often allocate a fixed size of memory up front and NEVER free it, re-using it instead.
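To illustrate that pattern, a hedged sketch (the pool type, names, and sizes are my own illustration, not from any particular engine): all slots are acquired once at startup and recycled forever, so the OS allocator is never touched again.

    #include <array>
    #include <cstddef>
    #include <type_traits>

    // Grab all the memory once, recycle slots forever, never call the
    // OS allocator again after startup.
    template <typename T, std::size_t N>
    class FixedPool {
        static_assert(std::is_trivial<T>::value,
                      "sketch: trivial types only");
        union Slot { T value; Slot* next; };
        std::array<Slot, N> slots_;
        Slot* free_ = nullptr;
    public:
        FixedPool() {
            for (std::size_t i = 0; i < N; ++i) {  // thread the free list
                slots_[i].next = free_;
                free_ = &slots_[i];
            }
        }
        T* acquire() {                   // O(1), no system call, no GC
            if (!free_) return nullptr;  // pool exhausted: a hard budget
            Slot* s = free_;
            free_ = s->next;
            return &s->value;
        }
        void release(T* p) {             // "freeing" is just recycling
            Slot* s = reinterpret_cast<Slot*>(p);
            s->next = free_;
            free_ = s;
        }
    };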
Memory allocation being a bottleneck in high-performance applications? I'm not sure what your "high-performance" means (HPC? Games?), but profile an application without garbage collection and, 80% of the time, a significant part of the run time will be within malloc and free.
A garbage collector might even be more efficient, since it can delay or avoid the management. When will a GC actually do a collection run? At the moment when it cannot satisfy an allocation request from its pool. A program which needs little memory might never hit this barrier. This is like never calling 'free'. Of course, whether this can be done can only be decided at runtime.
There is nothing in a garbage collector which prevents alignment to cache boundaries, preallocation, or stack allocation. In the case of Java, the HotSpot JVM can allocate objects on the stack. However, this is a compiler optimization and cannot be directly controlled by the programmer. The upside is that the programmer cannot introduce memory bugs.
If you write a game with Java you can allocate a fixed size of memory up front as well. I do not see a problem with garbage collection here.
> Memory allocation being a bottleneck in high-performance applications? I'm not sure what your "high-performance" means (HPC? Games?)
Games, 3D raytracers, particle systems, fluid simulations. That's my experience, and in every one, GC would be a complete no-no for at least the main core algorithms. Games often use Lua or Python as the gameplay language (scripting events), but the number of times I know of those parts being re-written in C++ due to issues with memory allocation in the language is significant.
Alan Kay once mentioned in passing (in "The Early History of Smalltalk", I think) that newer processors tend to be optimized for languages like C. Of course it would be difficult to make a garbage collector perform well on such platforms.
Really, we have it backward. The question shouldn't be which languages run faster on current platforms, but which languages are easier to use (it depends on the problem, of course). Once you know which programming patterns humans deal with best, you can optimize the implementation stack all the way down to the NAND gates. It's a pity, a shame, that we currently have to stop before touching the silicon.
The sad fact is that most current computers are built to run Windows, meaning, yes, their processors are optimized to run code written in C. This is not entirely bad, because it helps make unixes run efficiently too, but we cannot expect anything very revolutionary at the ISA level unless we are willing to burn a couple of billions. Azul Systems built processors designed to run Java, and, in particular, to run garbage collection efficiently, but building silicon is so costly that they left the hardware business.
> Azul Systems built processors designed to run Java, and, in particular, to run garbage collection efficiently
That sounds really interesting. Do we know what kind of features made these processors more suitable to Java bytecode and garbage collection? How did they fare compared to the then-current mainstream processors? Can we speculate on how they would have fared if Intel or AMD had built this kind of processor instead?
Not at all. You only need to look at even simple CPU specs like number and type of registers to see this kind of thinking in action, and this has been going on for a long time.
Not really. Allocation and deallocation for a system that does not support object movement (and hence compaction) is inherently non-trivial. In most modern GCs allocation is bump-the-pointer with TLABs; you can't get cheaper than that. If the first-generation GC is a copying GC, you pay absolutely nothing for collecting a dead object. Compare that to explicit allocation/deallocation schemes.
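To illustrate (my own C++ sketch of the idea, not the JVM's actual code): the fast path of bump-the-pointer allocation is one bounds check and one addition, and a copying collector reclaims the whole region at once by resetting the offset.

    #include <cstddef>
    #include <cstdint>

    // Bump-the-pointer allocation, as used for GC young generations.
    // (Alignment handling omitted for brevity.)
    class BumpArena {
        std::uint8_t* base_;
        std::size_t   size_;
        std::size_t   used_ = 0;
    public:
        BumpArena(void* base, std::size_t size)
            : base_(static_cast<std::uint8_t*>(base)), size_(size) {}

        void* allocate(std::size_t n) {
            if (used_ + n > size_) return nullptr; // would trigger a GC
            void* p = base_ + used_;
            used_ += n;                            // bump the pointer
            return p;
        }

        void reset() { used_ = 0; } // "collect" everything at once
    };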
The biggest problem with GC, however, is not the cost of allocation/deallocation but the fact that it's pretty hard to make GCs incremental at a very fine-grained level.
Java, btw, at least with the HotSpot JVM, does escape analysis, which will allocate non-escaping objects on the stack. Heck, these objects may even just end up in a register (with unused data members discarded).
The garbage collector has nothing to do with things like memory struct alignment. And garbage collector makes concurrent allocation easier, not harder. And garbage collection doesn't prevent stack allocation. You seem to have no idea what you're talking about.
> The garbage collector has nothing to do with things like memory struct alignment.
Exactly - that's my point - with garbage collectors you can't do things like align to 16-byte boundaries for SSE, or make structures fit in cache lines...
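For example (a minimal C++ sketch, type names mine), the layout control in question is a one-keyword affair, and typical managed heaps give the programmer no equivalent knob:

    #include <cstdint>

    // 16-byte alignment so SSE loads (e.g. MOVAPS) can hit it directly.
    struct alignas(16) Vec4 {
        float x, y, z, w;
    };

    // One 64-byte cache line per counter, so two threads updating
    // different counters never false-share.
    struct alignas(64) PaddedCounter {
        std::uint64_t value;
    };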
> And garbage collector makes concurrent allocation easier, not harder.
If it pre-allocates a huge amount of memory (i.e. a memory arena), yes. If it has to allocate the memory from the OS (i.e. it hasn't got any more available to the application), then it's a further painfully slow allocation. Allocating that extra memory is an extra step for the GC language (the GC allocates an amount at startup, but when the program turns out to need more it has to go back for another allocation), whereas the C/C++ version can do it all in one allocation.
> And garbage collection doesn't prevent stack allocation.
I didn't say it didn't, I said Java did.
> You seem to have no idea what you're talking about.
I don't really understand why you think GCs are so limited, when you are comparing them not to the runtime heap provided by a C or C++ vendor, but instead some custom scheme. To level the playing field, you should be considering a GC specialized to the purpose. There's no reason why a GC can't allocate certain things specially, or have a specified initial heap size. And GC actually makes it easier to make better use of L1/L2 cache without needing specific optimization, because caches are nice sizes to use for generations in generational GC - they're extremely quick to collect.
Saying that games and embedded systems "allocate a fixed size of memory up front and NEVER free it, re-using it instead" is almost meaningless. They are necessarily implementing their own GC or memory allocator; pretty much the definition of an allocator is something which controls how memory is reused. The only thing they've insulated themselves from is the vendor or OS's memory allocator, by writing their own allocator. This can make lots of sense when your application is all written by a single relatively small team, and has nicely understandable lifetimes for various bits of memory - games are a good example, because memory can typically be classed as existing for the entire game, a level, a frame, or a call stack. When you have such a specialized use case, it makes sense to take advantage of it. But not all, or even most, applications are like that.
I'm not saying it's not theoretically possible for GC'd languages to do that; I'm just saying that in my experience with some of them (Python, Lua, Java), limitations in how/where/when the language lets you allocate memory have limited their use significantly for the domain I work in.
> Saying that games and embedded systems "allocate a fixed size of memory up front and NEVER free it, re-using it instead" is almost meaningless.
I was using that as an example of being in complete control of the memory...
> But not all, or even most, applications are like that.
Well again, it must be the domains I've worked in, because at least when doing embedded and desktop software, it's often been a big concern.
Java has a lot of memory management problems, and as the flagship for garbage collection for so many years, many people confuse its issues with GC generally. So you're right, but in the applications most people work on, he's right too.
I found it impossible to read the rest of this article given the nonsense in the intro. So I skipped to the conclusion and saw this gem:
I thought the most successful software projects were those that met the needs of their users.