Hacker News
Why C++ Sails When the Vasa Sank [pdf] (meetup.com)
112 points by dirtyaura on June 23, 2014 | hide | past | favorite | 212 comments


To me, the key line was "Most of the complexity of C++ is hidden from most of the users most of the time."

The complexity is there. Trying to understand the whole language at a language-lawyer level is mind-boggling. Trying to write libraries that make proper use of the goodies is very hard.

But using C++ isn't so bad, as long as you don't try to use every aspect of it. Just use the parts you need to be able to do what you're really trying to do. You don't have to pay for what you don't use. That's good enough for a huge number of programmers in a wide variety of contexts.

There's a lot of haters that can point out a huge number of flaws. But here's the thing: Most of them, I never run into. I don't care if the move syntax is confusing; I don't use it. Maybe I never will. If I have to, then I'll care about how good or bad the syntax is. In the meantime, the parts I actually use let me get my work done better than any alternatives.


> The complexity is there. Trying to understand the whole language at a language-lawyer level is mind-boggling.

It makes me uncomfortable not to have that level of understanding for a language that's so low-level and doesn't have the "Do What I Mean" bent of, say, Python. I'm "meh" on C++ going forward. I invested a lot of time into my copy of "The C++ Programming Language" (3rd Ed.), for C++98, but I can't motivate myself to get the new edition and learn the ins and outs of C++11. It doesn't seem worth it to me. I'm really hoping Rust catches on. It's a complex language in the same vein as C++, but the bang-for-buck in terms of expressiveness relative to complexity is better, I think.


I'm a former C++ programmer (and a current C programmer) and my go-to language is currently Golang; here's why I'd virtually always choose Golang before C++:

* Streamlined language with carefully-chosen syntax and keywords (there aren't 10 different meanings for "static", for instance); entire concepts have been eliminated to no apparent ill effect in building real systems. Fewer cycles allocated to the language, more cycles allocated to problem domain.

* The 95% case for associative and sequential containers is built into the language, with (for me) enormous benefits to programming efficiency.

* A streamlined and carefully orthogonalized standard library optimized for systems programming. Golang has the best-designed standard library I've ever used.

* A ruthlessly pragmatic packaging and modularity system; unlike C++, packaging is not left up to the build system.

* Build-world compilation times measured in 100s of milliseconds rather than 10s of minutes.

If I was writing, say, a generic trie library that I was hoping to get hundreds of different programmers to use, C++ would be a tempting target because of generics. But it turns out, most of the time I build projects like that, I'm really just wanking. Another benefit of Golang is that it's so stripped down and pragmatic that it subtly discourages wanking.

A lot of people would add Golang's concurrency model (share by communicating instead of communicating by sharing) and lightweight threads to this list. I like Golang concurrency too but my sense of it is that among languages designed from the jump for concurrency, this isn't that much of a unique benefit.


> Streamlined language with carefully-chosen syntax and keywords (there aren't 10 different meanings for "static", for instance);

I'll concede this point. I can't imagine learning modern C++ as a complete beginner, even though I know why the Committee overloaded keywords like this (or, worse to me, overloaded symbols): the general assumption is that any good new keyword has already been used as a function or variable name in somebody's code, so changing the meaning of that word would break their code.

We'll have to see whether the Golang designers follow in the C++ Committee's footsteps on this. I remember when Java was described as a clean and simple language. I would laugh at anyone who claimed modern Java was clean or simple: adding features to the language without breaking old code is a very difficult dance.


Absolutely agree on golang.

I started out as a C programmer, mostly coded embedded stuff, and have a decent knowledge of other programming languages (Java, Python, Perl, ...), but always avoided C++ for this reason.

When I made the move to C++, it was very painful. In C, I understood every aspect of the language; going to C++, it felt like 'magic' was happening all the time, and trying to understand every single aspect of the language was daunting. Every weird error was another tiring expedition into the language's core, the what and why. Fun is the last word I'd use to describe it, and I always ended up limiting myself to a very narrow subset of C++ because of this. At a certain point, however, you will be confronted with the hairy, ugly details of the language. Telling me that 'protected abstract virtual base pure virtual private destructor' is perfectly clear is living in your own little world.

I enjoy coding in C; it feels like I'm in control, although you really have to do everything yourself, so it's a bit tedious. But C++? No, I absolutely don't enjoy 10-page error messages involving templates that have absolutely nothing to do with the actual error (although you have to give clang some credit here for providing much more helpful and compact error messages). It feels like you spend a lot of the time fighting the language and the compiler. Figuring out what a compiler error message actually means can take a lot of time. And don't say this is the compiler's problem: it's the overly complicated language's problem. Parsing C++ is absolute hell, which is the root of those amazingly bad error messages.

I said goodbye to C++ about a year ago and never looked back. My fairly recent discovery of golang was a breath of fresh air. Yes, it's not suitable for embedded systems; yes, it produces gigantic binaries due to debug symbols always being included and full static compilation (which also has advantages); yes, there are no generics (...). It's not perfect, but the language is simple, using it is plain fun, you get a big standard library that even includes a real-world-hardened web server (Google uses it for its download servers), and there is an amazing amount of 3rd-party libraries available for such a young language. It feels a bit like a mix between a scripting and a compiled language, which is nice.


Is go really suitable for system programming as defined in TFA?


Go can link to assembler and C routines so yes, I would think Go could be a good language for mid/top-level stuff.


How will learning a new language help you? I guess it depends on what you're trying to achieve in the end, but when that something is building software, learning how to use a new kind of hammer isn't the breakthrough one needs most of the time.

In fact I find the premise of all the Go/Rust/D advocates a bit strange. It makes sense to learn a tool that does something you can't do with your existing ones, but learning 5 tools that are mostly the same is a waste of time, unless one is into collecting tools as a hobby.


Well, it depends on how you weigh the alternatives. Rust's advantage over C++ is its memory safety. Go is similar (I'm not sure if its safety guarantees are as strong as Rust's; I should read up on that...), and its advantage over Rust is that its type system is easier to get started with, but its disadvantage is requiring garbage collection. D is closer to C++ than Rust is, but has amazing metaprogramming, and defaults to GC, though it's not mandatory.

If none of these things matter to you, then they do seem similar. But if they do matter, then the difference is much larger.


I wish language advocates would break themselves of the urge to sum up languages they don't use.


I did use D a lot, though it was very long ago. I don't like Go, but I respect quite a bit of it.

I consider myself more of a PLT enthusiast than a language advocate. Is there anything about my summation that's incorrect?


Yes, I think so. Let's not debate it though.


Yes, let's not. I just highly respect your opinion, and know you have a lot to say about Go.


Likewise, hence avoiding the subject. :)


Well, if one is frustrated with the limitations of a particular tool, learning other tools that are like it may in fact help - if the other tools work around some of the frustrations. If not, then it's a waste of time.


The value proposition of Rust is unique, at least when compared to the languages that you've brought up (and C++).

And how are Rust and Go "mostly the same"?


>"for C++ 98, but I can't motivate myself to get the new edition and learn in the ins-and-outs of C++ 11"

I can understand that, but keep in mind that C++11 is a completely different language from C++98.


This is a bug, not a feature.

There is a lot of good stuff in the new versions of C++, but the language really could've used a bifurcation years ago.

That said, it's still the best tool in the toolbox for certain problems.


Which problems? That's a genuine question: the way I see it, C++'s niche is shrinking by the year, as new languages and more powerful computers arise. So, what's left?


How so?

It is still the dominant language for large-scale low-level systems. Critical software driving submarines, power plants, financial exchanges and medical devices is likely written in C++. The code on most of SpaceX's spacecraft is in C++. Most of Google's infrastructure is written in C++.

With the constant increase of low-powered computing devices, we are likely to see that "niche" increase.

I think it is rather embarrassing that most code written in modern languages performs worse on today's computers than native code running on 1980s machines.

Performance is what makes computing magical.

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering" - Donald Knuth

Sadly, people only cite his other quote about premature optimization being evil, leading to a generation of computer programs written with no consideration of speed and user experience.


Funny how "critical" software often uses wildly unsafe languages. They should take advantage of paranoid type systems instead, which would prevent the occasional crash upon integer overflow, or SI/imperial incompatibility…

In established engineering disciplines, specification is a relatively small part of the work. In software, specification is everything (compilers and interpreters do the heavy lifting for you). In established engineering disciplines, a 12% improvement in a meaningful dimension (such as construction costs, or interior volume, or…) is easily worth doubling the specification effort. In software, a 12% cut in specification effort is easily worth a 50% worsening of another dimension, such as runtime speed or compilation speed (though of course we must not overdo it: http://xkcd.com/303/ ).


This very much depends on what you care about. If you are running a 100,000 machine datacenter, a 12% efficiency improvement may save you millions of dollars in computing and energy costs a year. That is certainly worth a doubling of specification effort.


Of course. But cases like this take up a tiny, tiny fraction of all programming effort. Very few programmers get to work under such tight constraints.


I don't think so. In established fields of engineering, optimization takes up as much time as design, if not more.

Computing is a new enough field that so far that hasn't been the case, but I think with the increase of media processing and large-scale data processing, a 10,000-machine computer will be the new norm. And just like in most other fields of engineering, optimization will be a major concern, if not the primary concern.

http://www.cs.berkeley.edu/~rxin/db-papers/WarehouseScaleCom...


You are glossing over a fundamental difference between computing and other fields of engineering: copy & paste.

You don't get to just copy a bridge. You have to build another. With software, it is harder to justify building something anew, even if it's a bit different: you could re-purpose the old thing at very little cost (provided the old thing was well written and close to your mark to begin with, which is often not the case…).

Then there's the systematic automation of everything we understand sufficiently deeply. Garbage collection. Compiler optimizations. Libraries, some of which are freakishly fast. Databases.

When faced with a new project, you will increasingly not care about performance, because you will just reuse the incredibly fast infrastructure the system folks wrote for you. Don't get me wrong, performance is likely to become more and more important over time (as you said). It will just take less and less programming effort, as everything will increasingly be put in common —well, as long as we have the internet and Free Software.

More and more, the path to freakishly good performance will be simple, elegant, and obvious code. Algorithms will still matter, but you will hide most of them behind libraries. And you certainly won't do micro-optimizations.

If the compiler is not enough, you can have a team rewrite your hot spots using computer-assisted, semantics-preserving transformations, in a tiny fraction of the effort it took you to write elegant code in the first place. Should you need to modify this code, no problem: just re-run the special set of optimizations they devised in the first place. Worst case, the compiler tells you it can no longer run the specialized optimization on your new code, leaving you with slow code until the optimization team amends its optimizations.


Sure, but none of this points to high-performance languages being a shrinking niche. They will continue to dominate low-level code and library code.

Less low-level code will be written each year than high-level code, but it'll still be tremendously important. And as machines change, the underlying libraries will keep changing.


Careful with the metrics.

On the one hand, we have required effort. On the other hand, we have impact. To take an extreme example, Web browsers are a tiny niche in terms of development effort. Their impact however is something else entirely.

This is what I predict with low-level stuff. It will grow in terms of impact, but shrink in terms of development effort (at least in relative terms).

> as machines change, the underlying libraries will keep changing.

Increasingly, no they won't. What will change will be the optimization technique that we will need to operate on otherwise clean, elegant, obvious, and slow code.

Take a C library for instance. If it is written in a portable fashion, you only need to write a new C back-end to port that library to another machine. The same goes for semi-manually optimized code. You don't need to change the specification (I mean, the source code) to port the thing to another platform. You only need to change the optimization strategy. It's still an effort, but that's much less effort than a complete rewrite.


> It is still the dominant language for large-scale low-level systems.

He didn't say it wasn't dominant. He asked why it was to be considered the best. Dominance is a matter of legacy, and you have to take into account a bunch of reasons that have nothing to do with how good a language is to assess why that's so.

Being the best tool is something completely different. I think the commenter has a good point and I think it's clear that a lot of the areas C++ has been used for have been taken over by better languages that offer more without conceding much. In that regard, he is spot on in saying that C++'s niche is indeed shrinking.

C++, like most other languages, is the best tool for some problems and areas, but those are growing smaller and smaller as we identify exactly what we need and create new languages to escape the rest of C++.


> I think it is rather embarrassing that most code written in modern languages performs worse on today's computers than native code running on 1980s machines.

Ok, let's play ball.

> After the Cray-1, the Cray Corporation developed another giant named Cray-2. It remained the world’s fastest Supercomputer between the years 1985 to 1989, capable of performing 1.9 gigaflops. ( Source: http://infotology.blogspot.be/2012/04/super-computer-timelin... )

So that's the fastest of the fast in the '80s. According to Intel, an i7-3770K consumer CPU gets 112 GFLOPS, and a Q6600 quad core, released in Q1 2007, gets 38.4 GFLOPS. That's a factor of 58+ on the i7, or a factor of 20 for the Q6600.

Now let's take the Q6600 to make it easy to compare languages in a very bad way: http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...

I picked the bench config x86 quad core because this had the highest peak. Also, if you take these languages, which are more commonly used:

* C#/Mono: worst 9x, median 2x
* Go: worst 7x, median 3x
* Haskell: worst 3x, median 2x
* Java: worst 3x, median 2x
* PHP: worst 109x, median 36x
* Python 3: worst 129x, median 37x
* Ruby: worst 239x, median 53x
* JRuby: worst 115x, median 34x

But keep in mind that:

* Current CPU integer performance, cache speeds, memory bandwidth and general I/O would completely destroy the Cray-2.
* GFLOPS are a really bad indicator of how fast a computer is. Current computers are even faster. Current GPUs are floating-point optimized and run circles around any general-purpose CPU when it comes to GFLOPS.
* These are CPU-bound, artificial language-comparison benchmarks.
* "Native code" means nothing. Bad slow code will always be slow, whatever language you use. Some of the languages included do compile to native code (Go and Haskell), and a lot of the interpreted/bytecode-compiled languages use a JIT to generate native code too.
* We don't take into account the Cray-2's bottlenecks (mainly I/O).
* All GFLOPS numbers are according to the manufacturer.
* Most applications hit I/O limits before hitting CPU limits (yes, there are exceptions).
* This is comparing with the fastest of the fast you could get at the end of the '80s.

So in worst theoretical cases, sure. In the real world? Not even close.


> The code on most of SpaceX's spacecrafts is in C++.

A few weeks ago I found out that the B-2 Spirit now flies on olde C.

http://en.wikipedia.org/wiki/Northrop_Grumman_B-2_Spirit#Fur...

> I think it is rather embarrassing that most code written in modern languages performs worse on today's computers than native code running on 1980s machines.

Are there any sources to support this statement, or are you drawing on personal experience?


'Easily obtained'

I don't consider it easy to obtain an improvement by switching to C++ from (or avoiding) a high-level language that gives me useful abstractions and spares me the bookkeeping.

Yes, if the thing is just -slow-, optimize it. But never start out "We can't have this thing be slow; let's write it in C++!" unless you already have some metrics showing that it's -too- slow in a safer language.


I very much hope submarines' software uses more Ada and less C++.


Scientific computing. For my work (in particle physics) I need something that is fast, has interoperability with decades' worth of C, C++ and Fortran code, runs preferably close to the metal, and has minimal overhead. Also, it has to be something physicists (not computer scientists) can and want to work with.

There are a lot of great libraries for Python, and it's much nicer and more productive. I use it whenever possible, esp. for making graphics, or when I can use optimized stuff like numpy. However, most of the programs I write are of the type "loop over 10 million entries, perform a simple calculation, return a number". These are mostly IO-bound, and one would think that this is a case where you could use a dynamic or interpreted language. But whenever I tried, the overhead for the large loop and the variable access in tight loops was too high, and the problem became CPU-bound. So, in my experience, it has to be a compiled language (or at least something more performant than Python).

I'd love to be able to use something like Haskell, as a lot of my problems are functional in nature, and it'd be able to catch a lot of stupid bugs. But it is too esoteric to be realistically adopted by the community.

The ideal language for me would be "C+". A C++ lite, where you would ban most of the complications. You would still be able to write C++ libraries, but then use them from simple C+. A C+ user, as opposed to a library writer, would not have to know about the "safe bool idiom", traits, private destructors, defaulted methods (which are utter madness in C++11) and template metaprogramming.

I'd then augment this simplified C+ with a powerful static invariant checker. You could say something like "guarantee x < 10; x = f()" and it could complain "you demanded x be < 10, but f() only guarantees x < 100".


"I'd love to be able to use something like Haskell, as a lot of my problems are functional in nature, and it'd be able to catch a lot of stupid bugs. But it is too esoteric to be realistically adopted by the community."

The esoteric, and what you need to do what you said, are nearly entirely partitioned from each other. Reading in a list of things, running some code over it, and spewing the results out is not that hard and will not involve anything you can't understand.

That said, even after some relatively extensive use of Haskell I still have poor intuition about how well it performs on such tasks... no, not because of "laziness", but because "naively" written Haskell still involves things getting boxed and other such performance killers that can take you halfway back to Python. I can say writing Haskell to do your described tasks isn't hard... I have a harder time promising you can write performant Haskell just by doing the obvious things. There's some promising work on doing fast numerical calculations, but it still seems like it's very easy to be doing something very fast, then change a bit of code, which silently breaks optimizations and suddenly it's slow for no obvious reason. (Note this is NOT unique to Haskell... all compiler optimizations can have this problem. It's just particularly acute in Haskell, but I've encountered it elsewhere. I've often wished for more transparent optimization feedback in many languages, or the ability to assert that I'm triggering certain optimizations so that if I do break them in the future, I find out at compile time. Alas, I've never seen such a thing.)


With C++11 you don't need the safe bool idiom anymore; you can simply define an "explicit operator bool() const". Since the conversion needs to be explicit, you won't have nasty surprises, but it can still be used in an if (or while) normally, as in this case the explicit conversion will be used implicitly. In other words, it works exactly like the safe bool idiom but is much easier to write.


> I need something that […] has interoperability with decades worth of C, C++ and Fortran code.

Assuming you meant more than having a good FFI…

When I judge the suitability of a language for a task in the abstract, I systematically ignore two very important factors: legacy code and skill availability. I act as if everyone knows every language and we start from a mostly blank slate.

If I didn't ignore those factors, I wouldn't be judging the languages, I would be judging whole ecosystems, introducing a huge status-quo bias in the process.

---

The C+ you look for is probably possible. I bet it could be implemented à la CFront.


I write music-making software, which is one domain where C++ is utterly dominant and likely to remain so for a long time. In this world you need a language that's fast, gives you fine-grained control of memory, and allows you to operate without ever allocating or taking a lock of any kind. You could also use C for this, and some people do, but I find that the more powerful abstractions C++ provides are very very helpful here.

I think it's unlikely we'll see anything like Chrome, Ableton Live, or Photoshop written in anything but C++ any time soon.


> I think it's unlikely we'll see anything like Chrome, Ableton Live, or Photoshop written in anything but C++ any time soon.

We are working to change that in Servo :)


I'm rooting heavily for you guys. I use C++ not because I love it but because it's my only reasonable option. If Rust becomes viable for the work I do I'll be on it in a flash.


There was a guy at the last Rust Seattle meetup that was doing procedural music generation in Rust and said he was having a blast.


Well, Lightroom is not exactly like Photoshop, but it is mostly written in Lua. Parts are still in C or C++, it is true.


It's not about specific problems, but rather when you can't compromise on performance and still want to raise the abstraction level from C.

There are up and coming languages that can claim to compete in that area, but they're still rather immature and/or lack developer support.


I'm afraid I must insist on specificity: no specific problem means no justification. I understand the abstract reasoning if we had to plan that we might need C++ in the future. But when you're choosing a language now, you always work on a specific problem.

So, which specific problems require you to both optimize the hell out of your CPU and memory, and mostly work at a higher level of abstraction than C?

I'll go even further: C + <high-level language of choice> already competes on that front (abstraction + performance). On which specific problems are they not enough, and C++ is required?

I'll go even further: there have been a number of C pre-processors sprouting up recently. One of them does Lisp-like macros. Another does lambdas. Yet another might address templates. I predict a modern version of "C with classes", one which learned from the mistakes of the past (like trying to be compatible at the syntax level).

At that point, do you think there will be any specific problem where C++ is still the best choice?


> So, which specific problems require you to both optimize the hell out of your CPU and memory, and mostly work at a higher level of abstraction than C?

My sibling here already mentioned AAA game development. Finance is another area where top performance is often required.

Performance is the requirement. Being able to work at an abstraction level higher than C is highly desirable. C++ gives you both of those.

> I'll go even further: C + <high-level language of choice> already competes on that front (abstraction + performance). On which specific problems are they not enough, and C++ is required?

You get a boundary between the "high level" and "performance oriented" parts of the code that is arbitrary and only exists because the high level language performs poorly. There are definite costs to that approach.

> I'll go even further: there have been a number of C pre-processors sprouting up recently.

Cute hacks that are probably not suitable for production use.

Let's be honest here. You're asking for problems that require the use of C++ now. The situation is surely going to be different in, say, five years from now - though I don't think it'll be a "C with classes".


I believe Jane Street does high-frequency trading with OCaml. The other examples sound valid, though I have my doubts about batch processing (compression, crypto…).

I intend to work on a serious C pre-processor this summer, after I'm finished with my Earley parser. This won't remotely resemble "C with classes". I won't aim for any syntactic compatibility; that was a silly requirement. Heck, I'll probably have Python-like mandatory indentation, reworked operator precedence, an overhauled type syntax, switches that don't fall through… But I will keep semantic compatibility, and a relatively obvious mapping to underlying C code. Coffee-C, not Clojure-C.

But even that won't satisfy me. Eventually, I want to make a source-to-source transformation framework that can turn any language into any other. With that, languages become less of a programming interface, and more of an implementation concern.


"Eventually, I want to make a source-to-source transformation framework that can turn any language into any other."

Beware. That's a pretty common programmer tarpit. I've witnessed two people take this on myself. If you never hear about it online, it's because it makes "general-purpose visual programming language" look easy; people at least get to the "show something off" stage with that.

It may help to observe that we technically already have an intermediate language that works with all languages, which we call machine language. If it sounds infeasible to read all machine language and convert between various programming representations especially in light of things like VMs and such... well... yeah. That's a fairly direct and accurate reflection of the difficulty in question.

In some sense, this is the absolute purest "inner platform effect" project you could ever hope to undertake. http://en.wikipedia.org/wiki/Inner-platform_effect


Jane Street is not the only HFT firm out there, though you'd think so from reading the comments here. Most use C++ and Java.

C++ is everywhere, from embedded to application software to games to the server side (Google has a gigantic C++ codebase).

I worked on a camera that had its logic written in C++ and it wasn't some unusual thing.


My point was, HFT is possible to do with something other than C++. Jane Street does OCaml, and you have just cited Java (garbage collection and a virtual machine!). Since C++ is such a Cthulhoid horror, the mere fact that it is possible to use other languages, let alone garbage-collected languages, is extremely strong evidence that C++ simply isn't the best choice. You haven't refuted my point, you have supported it.

(Of course, I don't take legacy code nor talent availability into account. They are crucial in any real-world decision, but they don't influence the virtues of the language itself.)


> Since C++ is such a Cthulhoid horror, the mere fact that it is possible to use other languages, let alone garbage-collected languages, is extremely strong evidence that C++ simply isn't the best choice.

Yeah, well, that's just, like, your opinion, man.

Seriously though, don't mistake your personal opinion for some objective truth. There are plenty of programmers out there who enjoy C++ and don't see it as a Cthulhoid horror. What you have isn't strong evidence of anything, really.

> (Of course, I don't take legacy code nor talent availability into account. They are crucial in any real-world decision, but they don't influence the virtues of the language itself.)

And there are even more "external" factors to consider, but if you allow yourself to cherry-pick criteria then you can make any language be the optimal choice. Languages don't exist in a void.


It's not just my opinion. Do I have to resort to the nuclear option? http://www.yosefk.com/c++fqa/ Well, there you have it. C++11 and 14 made some things better, but they also have their problems.

My opinions have different strengths. This one has accumulated enough evidence that I no longer consider the possibility of its falsehood. C++ sucks, and that's the end of it. I know of the gazillion eminently reasonable reasons why C++ is what it is, but it still sucks. I know that if Stroustrup had made cleaner choices for the language, it wouldn't be so popular, but it still sucks. I know that a better language that nobody uses is… well… useless, but C++ still sucks. In my opinion, C++ is one of our greatest shames as a community.

> If you allow yourself to cherry-pick criteria then you can make any language be the optimal choice.

I don't cherry-pick. I'm just comparing languages. Not external tools. Not backward compatibility. Not programmer availability. Just languages. The other criteria are short-term considerations, and greatly increase status-quo bias. They're only interesting when I must do a small project now, with whoever happens to be working with me at the moment.

For bigger projects however, the language is increasingly important. The bigger the project, the more reasonable it is to switch to a better language, instead of a better-known language.


Guys who write HFT systems in Java are basically programming them like one would in C or C++. They pre-allocate byte buffers for everything and never run the GC. If you write C-style code in Java, it's going to run pretty quickly. They aren't doing architecture astronaut AbstractMetaClassFactory stuff or using much of the provided libraries for latency-sensitive code.

FWIW, I don't think Jane Street is a competitor in the "ultra HFT" space where every nanosecond counts. AFAIK they are more of a statistical and quantitative trading group, so they may have less need for things like talking directly to hardware, keeping tight control over memory layout, deterministic latency w/o GC pauses, etc.

Almost everything that is popular has evolved over time, and making these changes once you have established users results in compromises that can be sort of ugly. C++ has patina.


> Almost everything that is popular has evolved over time, and making these changes once you have established users results in compromises that can be sort of ugly. C++ has patina.

You will note that C++ made that sort of ugly compromise from the start. Keeping a C-like syntax really wasn't necessary. (Keeping C-like semantics is a different story.)


But isn't that because C++ itself evolved from C? The original "C With Classes" wasn't much more than convenient syntactic sugar for using structs and function pointers as a poor man's object system, something that was and is common in large, abstract C programs (e.g. Linux's VFS layer, Gtk+).

I think you may be confusing is with ought here. Everybody sane knows C++ has flaws; I think Meyers even feels this way in his presentation. C++ is popular because it evolved and thus contains a lot of compromises. To draw an analogy, the Mormon Church is immensely popular in the US for a similar reason: they basically said, "Hey, the New and Old Testaments you spent your formative years learning are all good, but we've got some swell new stuff here too!" I'm sure Zen Buddhism is more theologically pure, but people like what's familiar to them.

Programming languages are a network effect problem first and foremost. Just like it's hard to unseat Craigslist despite its crappy UI, it's difficult to get users for a new language that may be only marginally better in terms of features, productivity, safety or convenience. Even if it's a significant improvement, existing code bases, library availability, and programmer availability with domain knowledge matter much more for serious projects. There's a reason why Facebook took the time to write a PHP VM instead of rewriting their code.


> Programming languages are a network effect problem first and foremost.

If only reality wasn't like this. Yes, network effects are ridiculously important. And so is status-quo bias in general. I know C++ wouldn't be nearly as popular if it had adopted a different syntax. I know that the ability to compile most existing C code out of the box was a big help in the adoption of the language, even though it technically doesn't matter. This is why I said earlier¹ that C++ is one of the greatest shames of our community. Stroustrup was well aware of our insanities, and he exploited them on purpose, supposedly for our own good. We as a community were simply incapable of accepting anything better. It would have been too different.

[1]: https://news.ycombinator.com/item?id=7938469

> But isn't that because C++ itself evolved from C?

It didn't. C++ did not evolve from C, it was implemented on top of C. Stroustrup didn't start by modifying the front-end of GCC, he started by writing CFront. From there, it would have been really easy to fix some of the most glaring flaws of C: switches that fall through, overly general (and verbose) for loops, the insane syntax of type declarations (ML existed at the time, and could have been an inspiration), the priority of the operators (15 levels are too many, and some priorities are backwards), and maybe some operators themselves (the star, for instance, serves two unrelated purposes, which may be confusing). While we're at it, Stroustrup could have thrown the headers away, replacing them with a proper module system. Nothing fancy, just a nice way to package compilation units.

That was the "fixing C" part. These flaws were known at the time, it would have been easy to fix them. (Except maybe the module system. You still want an easy way to talk to existing C code.)

Then we can move on to building C++: a language with a stellar C FFI, semantics that are identical to C (except for advanced syntax sugar such as classes), but with a much cleaner, easier, and more flexible syntax. This language would have been much better than the current C++ on every possible measure, except one.

Adoption.

It seems we just can't change our syntax. No, scrap that, we just can't change, period. People love "change", but they hate change. There are few circumstances where people are willing to accept change (like this anecdote about introducing OpenOffice as "the new version of Word"). One of C++'s greatest strengths (and our community's greatest weaknesses) was its apparent similarity to C. It's the same, except with more features! A better syntax for C however, while more useful than a mere C-with-classes², would never have caught on.

[2]: Templates are C++'s killer feature, not classes. (OO as done with C++ and Java classes is mostly a mistake³. But that's another debate.)

[3]: http://loup-vaillant.fr/articles/classes-suck

(I'm not kidding about C's syntax being terrible. Experiments have shown that beginners fare no better with C than they do with a randomly generated syntax. It is that bad.)


I think C++ is a fine choice right now for a whole lot of things, though in most cases I think C is also a reasonable choice and there are tradeoffs between the two depending on the goals of the specific project (rather than the type of project).

So here's my list: A compression library, a crypto library, a numerical library, a video game, high frequency trading system, a high performance application server, a webserver, a database, a storage or caching system, an operating system, device drivers, a web browser, a VM, a programming language.


Game programming. AAA-level games require a level of performance that interpreted languages typically can't deliver. They are also complex enough that trying to program them in C would create more problems than it would solve.

Granted, many indie games can work by using an embedded interpreter for much of the game logic, but it doesn't take much to run out of computing power at that level and still maintain reasonable (and consistent) frame rates.


In C++ I can write a generic vector math routine that operates on any scalar type and still simd specialize it for particular scalar types.


You can do that in many languages...


Statically typed, i.e. without an inner-loop dispatch penalty? Which others? I've considered an implementation in Fortran and OCaml as well. OCaml is attractive but doesn't cross-compile to ARM officially. With C++ we can target x64, ARM and TI DSP with the same core processing module.


Have you tried Haskell?


You're not describing a problem. You're describing a solution, that C++ can implement.


Cross platform applications? I know that C# apparently has cross platform support but I met with limited success on Mac OSX (perhaps I didn't try hard enough). It is fairly easy to get a cross-platform library and use C++ to write for the main OSes (Mac OSX, Windows, Linux, in no particular order) and then you can also take the same code and compile and run it on your Raspberry Pi and ARM systems.

I know you could probably do the same with Python but I think small systems would suffer a speed penalty (I even find desktop Python applications slow, so I dread to think of GUIs on an ARM written in Python). In reality I have never come across a piece of paid software that included a runtime of Python or some scripting language other than Perl, and that clearly wasn't for a GUI app.

What else would you write for all three platforms in? Java? I am curious.

(I suppose if you are targeting one platform only, then it might make sense to write in languages typically tied to that platform, eg. Obj-C / Swift / VB / C# (as is vastly popular in Windows land)).


Writing certain windows apps?

The paper states that C++ must have done something right, as it is still consistently popular. If MS hadn't gone all in with it, though, I think it would be a footnote in history.


> The paper states that C++ must have done something right, as it is still consistently popular.

Oh, yes it did. Mainly preying on human irrationality. Most notably status-quo bias, exploited with its embrace and extend strategy.


It's different in the sense that it adds a whole lot of features, which makes it even more complex. It's not just the new features, but how they interact with the old ones. C++98 is still there, just underneath.


The addition of a whole host of features in C++11 makes the language more consistent and easier to comprehend. The talk gives examples of this (auto variables, range for loops, etc.)


Of course, as soon as you have to use any external libraries written for C++98 you are right back to having to understand both.

I, thankfully, haven't used C++ much in the past few years after using it nearly exclusively for about a decade, but the one thing about working in it that will always stick with me is how virtually every project would eventually have like 8 different string classes contained within it for much the same reason (different libraries and/or OS level APIs would use different string classes, often their own), and this is well after std::string/wstring were a thing.

That's the real danger with a sprawling language like C++, IMO. Not that you have to use every bit of it in your own code (in your own code you can get by with a very small subset), but the fact that because there's little true consensus on which bits of it to use, projects with external dependencies (that are also in C++) generally grow to use not only the entire language and standard library in different places, but also a bunch of badly hacked-up reimplementations of the standard library that come along with some of those dependencies.


The problem with string classes, including std::string, is that they are object-oriented. The things you do with a string should be orthogonal to how the string works. When you think about it, std::string is really a string buffer. And if we had good string algorithms, it wouldn't be much more useful than vector<char>.

In various circumstances you may want: string_views, copy_on_write_buffers, small_buffers (for small-string optimization), native arrays of chars, std::arrays of chars, std::vectors of chars, tries, and so on.

In all of the above cases, you should be able to write:

    auto firstSpace = std::find(begin(myCharContainer), end(myCharContainer), ' ');

...to get an iterator to the first space in your container.

And on top of all that, you could make versions of begin() and end() that return iterators that respect various character encodings.


Initially when you start with C++11 you get the impression that the old heap of idiosyncrasies was fixed, but then you realize there's a whole bunch of new ones. My most recent peeves:

1) You can create range for loops over containers:

  for (auto x: vec) {
    foo(x);
  }
but then if you want to do the same over a C array, you have to define your own wrapper with begin() and end() functions.

2. You can create shared pointers using make_shared, but you can't create unique pointers using make_unique. What's worse, if you define your own make_unique function, your code will eventually break because it's already decided that C++14 will have its own make_unique.

I'm sure all of this will get fixed eventually, but frankly I don't have any more patience for this. The root cause is that the C++ process is broken and is led by incompetent, clueless people.


1. That's not true, you can use range based for loops over C array types. http://stackoverflow.com/questions/7939399/how-does-the-rang...

2. As you mentioned, make_unique is a part of C++14 (although it will still be missing make_unique<T[]> ). I don't see what the big deal is, though, since libc++, libstdc++, and VS2013 already have a compliant make_unique implementation.


> although it will still be missing make_unique<T[]>

This is not true -- see §20.9.1.4 of the C++14 draft [1]. make_unique is defined for array types with unknown bound (i.e., T[]), but it is deleted for array types of known bound (e.g., T[5]). So to make an array of 100 value-initialized `int`s,

    auto array = std::make_unique<int[]>(100);
[1] http://isocpp.org/files/papers/N3690.pdf


That's not how it felt to me. It felt more as if they added the new features underneath C++98. Exaggerating wildly: One new general rule turns three old special rules into simple corollaries.


C++11 does feel like a new language. If you just buy Stroustrup's new book and read through the "tour of C++" towards the beginning, you'll find lots of new bits without having to read the entire book (although I am still going to try and do that).

For me, I am constrained by the compilers that are used by my employers (VC10) so I can't use half of the new features (although I could use them in Xcode and macro them out I suppose).


Looking forward to v1.0 of Rust to have some expectation that the constructs that I learn will endure over time.


I love how C++ is now considered by many to be "so low level". It's a high level language as far as my work goes. Sure, it's not Python-high-level, but Python isn't an option.


What do you program in? Assembly? Binary? C++ is C with "all the things" tacked on. If you only use the C subset it's pretty bare-bones because, well, it's basically C at that point, and C is little more than a macro assembler.


C++ is indeed a high level language, comparable to, say, C#. Let's enumerate some features that we take for granted in most modern high level languages

1. Type deduction (C++ has it)

2. Lambda functions (C++ has it)

3. Support for OOP (C++ has it)

4. Generics (C++ has it)

5. Static typing (C++ has it)

6. Support for functional-style programming (C++ has it)

7. Support for garbage collection* (C++11 has it)

8. A powerful library of data structures and algorithms (C++ has it, and the library is growing very rapidly as well.)

C++ supports pretty much every feature other high level languages support, but the features are opt-in, that is, if you don't use it, you don't pay for it.

*As far as I know, however, no compilers are currently implementing garbage collection. But the language support is there already.


Being high-level is about more than having features - you also need to lack some features. More is not always more, if you get me. High level languages don't get segfaults. I'm not saying that dangerous behavior should be impossible, just that it should require a conscious decision where the developer is aware of the danger. In contrast C++ almost actively seeks to lead the programmer astray, hampering every effort to create good code. It's not quite the sirens singing to Odysseus - that would be PHP - but it's close.


If you look at ANSI Common Lisp, there is no GC support. It is not even mentioned in the standard. But the whole language is designed to use it by default. C++ is designed to not use GC by default.

Lambdas/closures in C++ are a total hack.

I'd say C++ is a mid-level language with lots of emulations of high-level features. The integration of closures into C++ is an example of that. Just by integrating a high-level feature poorly into C++ does not make C++ high-level.


How are they a hack? They are pretty much implemented the way a programmer would write them manually.


I disagree with 5 being a requirement for an HLL (otherwise Lisp is low-level) and will contend that C++ lacks 7, since in my mind GC has to be non-optional and enabled by default. In addition, 8 is hardly satisfied by C++, since it has neither an XML parser nor an HTTP library by default (at least last time I checked). I do admit that it has a standard library that is decently designed and much bigger than C's.


C and C++ most often, prototyping in MATLAB, Python, and even C# at times. There is a whole slew of languages in between assembly and C++ in terms of abstraction, as well as a whole slew of software applications outside the web and client apps.


    def doappend_dwim(arg=[]):
        arg.append('a')
        return arg

    assert(doappend_dwim([]) == ['a'])
    assert(doappend_dwim() == ['a'])
    assert(doappend_dwim() == ['a','a'])


The problem is that in any existing codebase that is large enough, you'll have such a plethora of different things in use that it doesn't really matter that I avoid things I don't understand or know yet. I still encounter them when reading code and have to understand them then. For me at least, having stumbled into a semi-ancient C++ codebase two years ago (knowing almost nothing about the language before), that was a major impediment in picking up how the language should or might be used properly.


Good point; this was seen most recently in the PHP community with the introduction of 'goto' about 5 years ago. "If you don't like it, don't use it" is pretty naive, because more often than not, I'm not using my own code, but someone else's (via inherited code, libraries, etc). Code is written once and maintained for years. I've noticed a trend over the years whereby certain types of developers (ab)use new features in a language when first introduced, then move on to other projects, leaving their "my first time with feature X" code behind for someone else to deal with for years to come. Sometimes it's fine, sometimes it's a nightmare.

The GOTO thing in PHP has almost been a non-issue all around, but highlighted the "don't use it if you don't like it" argument's flaws.


Stumbling into ancient codebases is always an impediment to understanding the language properly. Like reading C++ from before dynamic_cast was available, and spotting macros to do the same thing littered around MFC code.

I also find that the ancient developer's design style and implementation left a lot to be desired, and if someone came across that as an introduction to a language, I think they would have hated the language too. That's why we should always write beautiful code and occasionally throw in obscure language features to force them to learn them :-)


One does not always have the luxury of sticking to the good parts. Code is communication, and you often have no control over who is trying to communicate with you.


The difference between engineering and tinkering is discipline.

Not to say tinkering is bad, it's perfectly fine when trying to create the latest viral app or when learning how to do something. When engineering something, discipline is required.

And nothing stops any institution seriously committed to engineering from adopting practices that restrict language use to a subset (mandatory code review, static analysis tools that highlight deviant code and disallow it from being pushed, etc).


Sure, but unless you're going to keep falling into the "not invented here" trap, you're going to encounter code that comes from outside your organizational boundaries.


I think this makes it a great language for solo projects. You can write almost however you like and largely ignore features you don't need. The problems arise when dealing with a bunch of programmers with different experience levels and preferences. That's where simpler, more conservative languages like Java shine. C++ gurus used to their massive arsenal of options must cringe when forced to work with simpler languages.


Another quote:

    C++ most suited for demanding systems applications. [...] You choose C++ when the situation is already complicated.
... and then you have two problems </snark>. Yes, C++ has a lot of features, and those features are often individually useful to solve individual issues, but I remain unconvinced that this plethora of features and complexity pull their own weight.

You can write useful, efficient software in languages that expose much less complexity to their users.

Re: complexity. Not having to understand C++ most of the time, or subsetting the language to have a manageable code base only works until you run into somebody else's code, or some 3rd party library. It's not a fix, it's a symptom of something broken.


The point is that sometimes you need the added complexity to solve complex problems. Torque wrenches are more complicated than a socket wrench, which is itself more complicated than a simple $5 wrench.

But when you need to be able to precisely measure running torque or torque a fitting to a known value, you can't avoid picking up the complexity of a torque wrench.


"C-with-STL-containers" is an incredibly powerful subset of C++.


Which is why I'm seriously tempted to add a simple template system on top of C (it has already been done, by the way), then implement containers on top of that.

With that, I get the power without the rabbit hole.


Wouldn't it just be simpler to fuse off the part of your linker that handles vtables? IIRC that'll kill every C++ feature that's objectionable to someone who likes the idea of 'C with templates'.

Unless you want a truly simple C template system, in which case aren't the C11 type-sensitive macros sufficient?


> C11 type sensitive macros

Hey, that sounds cool. Do you have a link? A quick search doesn't seem to work.

The vtables, on the other hand, I will steer clear of. Closures are way more useful than class-based polymorphism. Combine them with parametric polymorphism (templates), and you won't need to subclass anything ever again —okay, I'm exaggerating a little.


Look for the "_Generic" keyword.

On that topic, it is possible to do cool things with macros with GCC extensions. A simple example of generic macros was discussed about a week ago: https://news.ycombinator.com/item?id=7896280


Thanks. Doesn't look like that's enough for my purposes, though.


It's called _Generic (which should be easier to Google), but the fact that you have to modify the macro wherever you want to use it with a new type severely limits its usefulness.


Type-erased closures are strictly less useful than subtype polymorphism + virtual functions (another form of type erasure), as the latter allows you to share data among an entire interface.


Closures are simpler, and therefore less cumbersome for 90% of the use cases. I prefer this compromise over the other.

Also, I'm not sure I get this "type erasure" business. In ML languages, there is no such erasure. I guess you need this erasure because the language doesn't support parametric polymorphism to begin with? I'm thinking about Java, which hacked generics after the fact, and maintained backward compatibility through type erasure.

In C++, there are few type erasures. Generally, when we want a function to receive a closure, we write it like this (minus dark corners I may have missed):

  template <typename T, typename E>
  void for_each(T functor, vector<E> v);
No base class in sight there, just lots of duplicated object code. (Which is better is left as an exercise to the reader.)


Aren't std::functions effectively using something like vtables though?


std::function looks like a regular template class (like any container), and it doesn't look like we should subclass it. So, unless I'm missing something, there is no vtable.


std::function is usually implemented using subtype polymorphism internally.


Okay…

By the way, why do we need std::function at all? Every time I saw it, it was to work around some dark corner of C++ type system or template dark magic.


Because function pointers in C++ carry no state. Given:

    struct A { void foo(); };
    A a;
There is no such construct that lets you store a call to foo specialized for the instance 'a':

    ??? thing = a.foo; // doesn't exist
The only option in C++ is to take the address of the member function:

    &A::foo
But even this is not sufficient as it is equivalent to:

    void (*)(A * __this)
That is, the "this" pointer is not stored. Along comes mem_fun:

    struct mem_fun{ A *a; void(A::*ptr)(); };
Now you can store a call to foo for the instance a:

    mem_fun fun;
    fun.a = &a;
    fun.ptr = &A::foo;
Adding an operator() to mem_fun lets you treat a mem_fun instance as a function:

    void operator()(){ (a->*ptr)(); }

    ...
    fun()
Now, lets say you need to store this as a generic "callback". Well, how would you do that? Sure, you can store mem_fun instances but what about just plain function pointers? Now you can't store them. In comes std::function. With a little bit of VERY simple to understand magic, you can do:

    std::function<void()> foo = fun; // mem_fun
    std::function<void()> foo2 = global_fun; // plain function pointer


Most programming languages implicitly type erase closures, C++ does not. Type erasure can come at the cost of heap allocation and indirect function calls. If you need type erasure in C++ you can put any callable type inside std::function.


Is it because everyone was thrown by the syntax of function pointers (for no good reason, other than they never encountered them)?


This is how C++ started. See cfront. http://en.m.wikipedia.org/wiki/Cfront


"Just use the parts you need". (sounds prosaic, but actually well put).

I've used C++ over the years off-and-on (depending on what the clients required). You don't really need to use every 'exotic' feature in order to be productive. If you need something low-level with great compilers and a lot of great lib support, it's actually a nice language. All these new languages are nice, but Go and Rust just chisel off the 'ugly' in favor of some different 'ugly', IMO.


The problem is that other people's code might use the 'exotic' features. Unless you're the only one working on the code, or you have control over what subset of the language is used (including libraries), you have to deal with the whole language.


Is that a problem though? Wouldn't you be happier knowing that portion of the language, despite the difficulty in learning and understanding it to begin with?


My point is that this weakens the argument that having a complicated language isn't such a big deal since you can just use a subset of the language.


Ah yes good point. I suppose C++ is one of those languages that is pretty massive. You can probably get by with only knowing a small bit but it does seem like a never-ending journey with knowing it fully in and out.

I wonder if it was designed to accommodate the way some real clever boffins think so that they could properly express their thoughts and designs in a language?


Book needed: "C++ the Good Parts"


Get "Tour of C++", a new book from Bjarne only with modern C++11.


Isn't that "Modern C++ Design" by Alexandrescu?


Not quite. I mean, it is an interesting book - functional programming idioms applied to C++ templates, which gives you interesting [generative] techniques applied to several GoF design patterns, but it is not the everyday C++ code you write. I'd say, for general C++ good style, Bjarne's works are more useful. At least, to start with.


It was released years ago: http://www.amazon.com/dp/0131103628


Need a "C++: The Good Parts" compiler (or static analysis front end) that enforces the good parts but your code is a compilable by a C++11 compiler. I've heard some Rosy developers joke that Rust is C++ The Good Parts. :)


The problem is that "The Good Parts" vary from team to team and from domain to domain. One of the strengths of C++ is that it can fill all those needs. This is important because often the problem you start out solving is different than the one you need to solve.



When in Stockholm, go visit the Vasa museum. It's absolutely breathtaking.

Make sure to get a guided tour. You can read all the stuff on the displays and in books, but you probably won't. So you'd lose all those nice stories about how the German ambassador protested the ship, because the figurines on the front symbolized the Roman emperors, with the Swedish king at the very front. Preposterous! The emperor of the Holy Roman Empire of the German Nation is the rightful successor to the Roman emperors, after all!

Pay attention to the fact that the Swedes weren't as stupid as people tend to assume. "Oh, they couldn't even build a ship that floats!"

Yeah well, it was a design no-one had attempted before (bigger, three masts, two rows of cannons), the math to calculate the behaviour wasn't available, yet. And best of all -- contemporary analysis shows that they almost made it. Had the ship been just a little bit wider (I don't know, something like 20 centimeters or so?), it would have been fine.

Be amazed how the Swedes never just found a scapegoat to punish for the loss of their flagship.

Or just admire the architecture of the museum.

Whenever I'm in Stockholm I go there. It's inevitably a high point of my visits.


Several people working on the design of the ship knew it never would sail. But since voicing their criticism about the king's unrealistic spec carried the risk of getting beheaded they kept their mouths mostly shut. The master shipbuilder who was in charge of the project fled back to Holland when Vasa sank. It was a classic case of hoping someone else would get the blame when the shit hits the fan.


There is also the matter of the incorrect measurements contributing to the sinking of the Vasa: http://www.pri.org/stories/2012-02-23/new-clues-emerge-centu...


I'd completely forgotten about that. Yes, it's a great bit of tourism - very interesting building.

On the over-arming of ships going spectacularly wrong, see also the Mary Rose[1].

[1] http://en.wikipedia.org/wiki/Mary_Rose


The second row of cannons was added at the insistence of the king, the shipbuilder knew it would doom the ship. Also, I've yet to meet anyone who thought the Vasa sank because "Swedes were stupid."


The way people like to complain about the bloat and arbitrariness of C++, but then still quietly wind up using it, reminds me a lot of the common complaints about the arbitrary pronunciation and other quirks of the English language. There is, of course, the trivial commonality that "everyone understands it and a lot of useful things have already been written in it". But at least part of the reason why languages like Esperanto or even Lojban don't catch on is that while many like to extol the virtues of a (natural or machine) language being simple and unambiguous on paper, in practice people more often than not arrive at a point where they perceive the very un-Pythonic benefit of being able to express the same thing in more than one way (and, conversely, being able to convey more than what the words/code say at face value).

The natural language example of course being metaphor and allusion, I am thinking of practices such as indenting glVertex calls inside a glBegin/glEnd block, or overloading () on an object to convey the idea that "you should think of this as something like a function" when it is really not on a technical level.


> The way people like to complain about the bloat and arbitrariness of C++, but then still quietly wind up using it

Most such people are probably in the same position that I am in, and this is that they're using C++ primarily because they're involved in projects with a C++ legacy codebase. Language choice really isn't something that's up to individual developers.

But for personal projects, I've pretty much abandoned C++, other than when I need something to bootstrap on platforms that I can't make a whole lot of assumptions on, or when I have to operate close to the metal and C for some reason or another doesn't fit the bill.

The thing is that I don't really have a lot of beef with the language features of C++ (the way they're used in some libraries is a different issue). The features it has are generally there for a reason, and have good rationales. And C++ does pretty well in its chosen domain (system programming). It's not perfect (what programming language is?), but hardly terminally flawed, either.

But C++ simply doesn't meet my needs. It has a very strong bias for optimizing CPU cycles, even at the cost of developer cycles, but I'm far more likely to be short on the latter than the former. Those extra few percent of speed that I might gain (or might not -- some C++ features are not all that efficient, after all) are rarely worth adding even a few hours of development time.


Regarding English at least, and for the most part C++, being able to express things in more than one way is not what we're complaining about; we're not looking for Pythonicity. What's annoying about English is more the erratic spelling, grammar, conjugations, and stuff like that.


>"everyone understands it and a lot of useful things have already been written in it"

That's such an overwhelmingly good reason to express yourself in English, instead of Esperanto, that you really don't need to postulate any additional reasons.

Most things like programming language choice follow heavy-tailed distributions, which means that there are a few that are overwhelmingly popular, as well as many that are extremely unpopular. The reasons that C++ turned out to be overwhelmingly popular may be mainly historical.

You could probably also blame popularity for making C++ so ornate. It jumped on every language trend over the years. Fortran didn't, but if Fortran had been popular...


I like C++ because I only pay for what I use in the language. Very little overhead or additional baggage to slow or weigh things down, unless I elect to include the extra luggage.

The recent updates - C++11, C++14 and the drafts for C++17 are also very welcome.

When you need performant, deterministic, portable and testable code, you go for C++.


The new Rust language by Mozilla makes similar promises,

* only pay for what you use

* memory safety

* testability

* portability

(They even got a standard testing module.)

EDIT: How do you make lists with Hacker News markup?


Conceptually they are similar, though it's always important to note that C++ has several man-centuries worth of compiler work. Rust may become stable and efficient enough to replace some existing C++, but at the moment it's most certainly not as mature as the existing C++ ecosystem.


Rust uses LLVM, so it benefits from the same optimizer as clang. The front-end is less mature of course, but also Rust is less complex, so may not need man-centuries of work to be parsed properly ;)

Currently Rust doesn't outperform C++ partly because LLVM is optimized for the C subset (can't take advantage of extra aliasing/immutability information Rust has), but given that Apple is now betting on (quite similar) Swift I expect LLVM to rapidly improve in this area.


Also remember most of the work on Rust at the moment is focused on stabilizing the language — performance is only really required to be acceptable for now, provided it is possible to improve.


Yes, there are several places where, for example, we generate sub-optimal LLVM IR, but time is better spent getting the interfaces and language spec correct for 1.0 than squeezing out performance.

I should mention that that's different than 'who cares about performance, let's toss this in.' Performance aspects are absolutely taken into account when changing the language. But that's different than the implementation itself.


> How do you make lists with Hacker News markup?

You just made one.


Kinda. It would be nice to have a richer markup support. We always cite one another, provide links, and enumerate things. Supporting a subset of Markdown (or anything like it) would be very welcome.



I've never seen footage of Meyers before... He kind of reminds me of the Tall Man from the Phantasm movies.

I guess C++ will do that to you. :)


I always think his prog-rock 70s haircut makes him look like one of Robin Hood's merry men or a medieval archer.

Not that this is a bad thing.


Thanks for the link to the video. Always been a fan of Meyers but have never been to a talk of his.


I think its "excessive" features make it an appealing language so long as I'm not dealing with not-so-well-written and/or expertly "clever" code.

I find straightforward, well-written C++ more comprehensible than Java and even Python (only due to Python's dynamic typing).

Of course, the definition behind straightforward, well-written C++ is very subjective.


  auto y = std::move(x);
Is this good code? How do you review it? What's the type of y? Usually I would ask for at least one type statement per line. For example this seems ok to me, because the type is on the left.

  auto* x = new Foo();
But this doesn't seem ok:

  auto y = std::move(x);


Do you allow this to pass code review (ignoring the bad name 'foo')?

  foo (A*x);
'A * x' creates a temporary with no documented type, but the conceit here is that we don't care. They could be ints, floats, matrices - any data type that implements '*'. By and large that is fine.

Here you are just moving x to y. What is the type of y? The same type as x, which supposedly didn't cause you heartache throughout the rest of the function.

Stroustrup gave an interesting talk this year about how, every time he introduced a new feature, people complained that it was not clear and asked for very verbose syntax. Instead of foo<T>, we get template<typename T> foo<T>(T, well, you get the idea. Now everyone is familiar with templates and loathes the verboseness.

Back to auto, it allows for 'generic' programming. Do you object to templated functions?

  template<class T>
  T add (T a, T b)
  {
     return a + b;
  }

That seems really clear to me, as is the auto version:

   auto add (auto a, auto b)
   {
      return a + b;
   }
If that is clear, why is

   auto x = a + b;
unclear? I can certainly come up with counterexamples where it isn't clear, but by and large I love auto, and use it all the time. (Counterexample: we care about the type of x because we need to cast to a 16-bit int, because it is going to be used to communicate with hardware.)


> we care about the type of x because we need to cast to a 16-bit int because it is going to be used to communicate with hardware

IMO you should still use auto there and use a static_cast<int16_t> on the RHS to make it clear to the reader that you're deliberately converting it.


You're right.


The problem arises when we find "y" deep in a function, search for its definition, and encounter "auto y = std::move(x);". Now we have to know the type of x, and if x is defined similarly, on up the chain. Not fun.

foo(A * x) doesn't pose similar problem. On the other hand, if I write

    auto bar = foo(A*x);
Then we can have the same problem (esp. if foo is a templated function).


To be honest, I just hover my mouse over 'bar' to get the type, and I name 'bar' something more meaningful than bar. "ball_covariance", "movie_recommendations", or whatever that matrix multiplication is computing for me.

I wrung my hands when I started using 'auto' but none of the worries came to pass.

But yes, if the code is unreadable without a type, add a type. No biggie, and no one is suggesting inflexible application of rules (always use 'auto' if it is possible). The same way when I might call

   foo(boo(x));
and it is not clear, I'll explicitly name the output of boo in a temporary variable:

   auto robot_velocity   = boo(x);
   auto robot_covariance = foo(robot_velocity);
And if that ain't enough, then sure, do this:

   robot::vector<float> robot_velocity         = boo(x);
   robot::matrix<float,float> robot_covariance = foo(robot_velocity);
I have to say, I find the last the most unreadable. I almost never care deeply about the type, and care deeply about the meaning.

Interestingly, no one worries about typedef. typedefs wrapped around intricate collections (map of lists of dictionary of arrays) can effectively obscure what the underlying types are just as much as auto. But again, mouse hover, CTRL+I, or whatever your IDE supplies pretty much makes that a non-issue as well.

edit: the problem you are describing is due to too big a function, not 'auto'.


> The problem arises when we find "y" deep in a function, search for its definition, and encounter "auto y = std::move(x);".

This is gun control applied to computer languages.

The solution is not to ban a useful feature that would ordinarily aid the understandability of code, the solution is to not abuse such features to write gibberish.

People can write spaghetti code with any syntax you provide them, even Python.


when we find "y" deep in a function

There should be no such thing as deep in a function, or at least not as deep as you mean (i.e. so deep that you can't figure out what y is): that would in all likelihood mean your function is too long and has too many responsibilities. Good functions are short and composed of other short functions; they should be short enough that you can read them through without ever having to wonder 'wtf is this?' I know this might sound like textbook stuff without practical use, but it's simply the truth, as I learned through the years. In those years my functions only became shorter, and hence better named, and more reusable, and all code simpler to read. So auto was a godsend that didn't hurt once.


While C++ programmers are still getting used to type inference, it's not really a new concept. It's been in other languages for decades.

Personally, I avoided type inference in C# for a long time. But I never ran into a bug caused by type inference. While C#'s (and C++'s) type system isn't as advanced as Haskell's, if you screw up the types you generally get a compiler error. If "auto y = std::move(x)" compiles, then why do you care what type y is? You know it's the same type as x, and you can do anything with y that you could have done with x.

There's really nothing to fear.


Consider this simple code to swap the first two items of a vector:

    vector<T> foo = ...
    auto tmp = foo[0];
    foo[0] = foo[1];
    foo[1] = tmp;
When T is int, it swaps as expected. But when T is bool, instead of swapping, it copies the second item over the first, with nary a warning. And I'll wager you could stare at that code all day without spotting the bug.

So yes, be afraid.


Actually, you did remind me that I did run into a type inference bug: the original ScopeGuard implementation ( http://www.drdobbs.com/cpp/generic-change-the-way-you-write-... ) relies on binding a temporary object to a const reference to guarantee that the destructor doesn't get elided. So changing ScopeGuard foo = MakeGuard(...) to auto foo = MakeGuard(...) does change the meaning of the code, and you may have the cleanup code optimized away.

An updated version of ScopeGuard doesn't have this problem ( https://github.com/facebook/folly/blob/master/folly/ScopeGua... ).

So, yes, I'll concede that code that expects you to cast some kind of proxy type to a different type (e.g., the vector<bool> example, or the original ScopeGuard, or perhaps valarray) isn't ready for type inference. But that kind of code is pretty rare, so the list of exceptions to the rule should be short.

Besides, if you want to swap two elements, use std::swap.


Correction: given a vector<bool> foo, "std::swap(foo[0], foo[1])" won't compile because std::swap takes its parameters as non-const references, i.e., not temporaries (given a vector<int> bar, "std::swap(bar[0], bar[1])" won't compile either); so you have to use "std::iter_swap(foo.begin(), foo.begin() + 1)" to (correctly) swap the first two elements.


And another correction: given a vector<int> bar, "std::swap(bar[0], bar[1])" does compile fine. All vectors other than vector<bool> return modifiable references when elements are accessed with square brackets.

I'll stop it now.


I object to calling vector<bool> a vector. I'm afraid of it, not of auto.


This is a contrived example. Not only is vector<bool> the real problem, but this is a standout terrible way to swap elements, especially in generic code.


You have to admit, it's hilarious that the most natural swap implementation you could possibly write (and the one used in C++03) is now "standout terrible." It's not wrong to point that out, but it is wrong to assign blame to the hapless programmer instead of the language.

And even if we make it not-generic, and add some C++11:

    std::vector<bool> foo = ...;
    auto tmp = std::move(foo[0]);
    foo[0] = std::move(foo[1]);
    foo[1] = std::move(tmp);
 
We've fixed nothing. The problem remains!

(It works if you use std::swap, I think because whatever type tmp resolves to happens to overload std::swap to do the right thing.)

So why is vector<bool> the "real problem?" It's because its operator[] doesn't return a bool&, but instead some type "convertible to bool," that may do lots of other stuff too. But the standard is chock-full of language like that. For example, with iterators: `str.begin() != str.end()` Is that a bool? `str.begin()[2]`. Is that a char? The standard doesn't require either. Care to roll the dice by assigning one to auto?

Type inference works well in other languages, but it is more dangerous in C++ due to the risk of implicit conversions. C++ does so much stuff for you under the hood that it can be quite hard to figure out what is really going on, and auto only makes that problem worse. Use with caution.


> You have to admit, it's hilarious that the most natural swap implementation you could possibly write (and the one used in C++03) is now "standout terrible." It's not wrong to point that out, but it is wrong to assign blame to the hapless programmer instead of the language.

The point wasn't that your swap implementation was bad, but that you would never write swap yourself. It was made even worse for generic code because types are expected to be able to provide their own swap implementation.

> So why is vector<bool> the "real problem?" It's because its operator[] doesn't return a bool&, but instead some type "convertible to bool," that may do lots of other stuff too. But the standard is chock-full of language like that. For example, with iterators: `str.begin() != str.end()` Is that a bool? `str.begin()[2]`. Is that a char? The standard doesn't require either. Care to roll the dice by assigning one to auto?

But the standard says that sequence containers with operator [] must return a reference, not a type convertible to T. If you wrote your swap implementation against the requirements of a container, it is fine, the problem is vector<bool> is not a container. As for other parts of the standard that do permit the type to only be [contextually] convertible to some type T, there is no rolling of the dice, your code is correct or it is not.


Like anything else in a language in which the syntax and other features permit writing correct code that is difficult to maintain or decipher, your objection has to be handled by a human being (or well-designed analysis tool) at the "meta" level (e.g. by code style guidelines/policies).

I agree "auto y = std::move(x)" is (likely) poor coding practice. I only use "auto" basically as a shorthand (e.g. instead of writing "hand_crampingly_long_container_iterator_type v = std::fn(c.begin(), ....)).

I have mixed feelings about "polluting" the namespace with "useless" typedefs and similar aliases versus using "auto". The former leads to very explicit code, but lots of extra "overhead." On the other hand, "auto" is much more powerful than a convenient in-place alias as I've described, and I've rarely found myself in the position of looking at code and having to truly ponder over the type of an "auto" variable. On the other other hand, seeing a lot of either may indicate something else about the code and whether it ought to have a design review. It's (somewhat) subjective.


Scott Meyers (the author of this presentation) makes a pretty good case in the following presentation: http://vimeo.com/97318797

Unfortunately the slides are not online. But his presentation starts with a case of why 'auto' should often be preferred to explicit type declarations.


"For example this seems ok to me, because the type is on the left"

    auto* x = new Foo();
Are you only referring to the fact that the type is noted at least once on that line? I ask because that would not pass my code review at all 99% of the time (manually allocating memory that way.)

As for something like

    auto x = std::move(y);
I couldn't say it is bad without seeing the context around it. Is it a small function? Is it obvious what y is? Looks fine to me in most cases.


do you NEVER allow dynamically allocated memory? or am I misunderstanding you?


There are very few instances in which manually managing memory in C++ is justified. When you have `unique_ptr`, `shared_ptr`, and all sorts of container classes, it simply doesn't come up in most cases. If you see operator new being used you should be suspicious.


In modern c++ you can get away without it by using a combination of RAII and the various shared pointer and container features.

It's an odd mental leap for this old C hack, but it works quite well when you get into it. Bonus - no 'free' or 'delete' necessary.


Please correct me if I am wrong, but RAII isn't flexible enough to allow for lazy initialization. The various STL pointer types (std::shared_ptr etc.) are great, but they do incur overhead, and sometimes that is not acceptable in e.g. embedded systems.


I think he means it should read something like:

    auto x = std::make_unique<Foo>();
Unless you're implementing an allocation policy, you shouldn't be calling new and delete directly.


I think it depends on context, I like using auto when the type is clear from the surrounding 5-7 lines of code or if it's for some nested-container-iterator stuff. The rule of thumb is, if the context doesn't provide enough information about types to deduce what the inferred type for `auto' will be after a quick glance, then don't use auto.


You should only really use "auto" when the type is clear (like in your example #2). If it's not clear, you should probably specify the type so that it's more obvious what's going on. #1 and 3 are probably fine if x is defined right above y, but maybe not if they're at the bottom of a long function or similar.


The types are immaterial to the operation, the important thing is the move, not the type line noise.


what about in the case of something like:

    auto it = std::find(vec.begin(), vec.end(), value);


That doesn't bother me because std::find returns an iterator, and nobody wants to look at long STL iterator type names. But if the next thing were this:

  auto foo = *it;
... I would be sad.


Why? You know foo has the type of "whatever 'it' dereferences to." Why would spelling it out be an improvement? Would your opinion change in a templated function where the type is, itself, a placeholder (e.g., "typename T")?

What if "it" were originally an unsigned int* but a refactoring changed that to a long*? Would you prefer the programmer hunt down all cases where "it" is dereferenced to change the type of the result, or would you prefer the programmer use "auto" to begin with?


The types of dereferenced iterators can be nasty. See for example std::map::iterator. Using an explicit type allows you to keep less in your head at once. It's good to give stuff names, and say what they are.

In a template, you're somewhat better off using Container::value_type. Though not by much.

The unsigned int* -> long* example is a good point. But auto doesn't fully solve that problem - consider something like iter = x or some_func(iter). The auto also makes it more work to figure out what the underlying type is. You're probably better off using a typedef.


You're certainly always allowed to use a typedef. I can just tell you from my experience that auto is more than "a nice thing to have in very specific circumstances." For me, at least, it's "a nice thing to have in almost all circumstances, with a handful of exceptions." When in doubt, I type "auto."


Are you sad because of the auto, or the terrible variable names?

  auto gps_position = *saved_position;
I can think of cases where I'd want to see the type of auto, but not often.


There are a ton of times in C++ or any language where you want the type. If having the type makes the code more readable, you should have it.


This is what everyone is saying. The only question is whether to ever use auto in non-generic code. I think it is unarguable that if you can write foo(goo()) in a clean way you can equally use auto. And if you can't do the former, you probably can't do the latter.

Balance, proportion, judgment, and code reviews are how you get great code, not 'never' and 'always' rules (which your comment makes clear you agree with).


One of the nice things about explicit types is that you can make sense of the code even when the person who wrote it wasn't the best at variable naming. I tend to distrust relying on convention.

That said, presumably one solution for such an issue is that have IDEs that can easily tell you what the type of an auto variable is.


Scott did an amazing talk in Moscow two weeks ago. The slides look the same. Here is the link to the video:

http://tech.yandex.ru/events/cpp-party/june-msk/talks/1954/


I've been using C++ for many years, and, sure, it has a ton of various features, not all of which I use all of the time. But, being comfortable with it, I don't find it any harder to program in than Java or Python. I am not sure if it's fair to say that the learning curve is higher to get to that point. But then again, if you do silly things in Java, won't your code be slow as well?


The line that stood out to me was the quote from Stepanov, about C++ being the only language in which it was possible to implement the STL. To most people, that would have been a hint that perhaps the design of the STL was flawed...

I think the STL may have been the worst thing to have happened to C++. It could have been so much nicer if something like Qt (the core part) was adopted instead.


Which bit about Qt were you hoping was part of the STL? I think the design goal of the STL was to make it portable, something Qt is not, despite being multi-platform. We mustn't forget that C++ is used on very low-level devices.

If Qt is anything like wxWidgets, wxWidgets reuses the STL for its container types, basically providing a thin wrapper over the STL (or it can do with a build switch).

I myself find the STL very useful, so I am genuinely interested in which bits of the STL you wish were more Qt-like? Do you mean the GUI bits of it or something?


How do you define portable? If you mean the same code compiling and running out-of-the-box on Windows, Mac and Linux, then actually my experience with Qt has been very positive in that regard. Those are the only platforms I develop for right now, so I can't comment about portability to e.g. iOS or Android (although I believe Qt ports are available for them).

Qt doesn't reuse the STL container types, it provides its own - which I find much more pleasant to work with than their STL counterparts. The STL puts a lot of effort into providing types of flexibility that are irrelevant or useless to me (e.g. allocators) and doesn't provide lots of simple things that I need every day (e.g. an indexOf method for std::vector, a contains method for std::set, or a toUpper/toLower method for std::string).


Ah I see what you mean. I should dig out Qt again really - I haven't used it for years. I have managed to make do with wxWidgets.

Regarding iOS and Android, the attempts at getting C++ on them (particularly Android) are brave but I think for normal apps (not OpenGL or games), developers are probably better using Java. Like you, I too am writing for the Big 3 normal desktop OSes, with sporadic Android development and iOS tinkerings, necessitating learning new languages.


"Portable" for C++ means I can run it, for example, on the computer that controls a car's anti-lock brakes. And, in fact, I can use C++ and the STL on such a computer, even if it has no OS.


I think the STL may have been the worst thing to have happened to C++

I think opinions on that greatly vary. I for one find a majority of what is in STL very well-designed and very usable.


Sure, I was just giving my opinion. Take it for what it's worth... :-)

But if you'll indulge me for a moment, consider how you'd convert a string to uppercase using the STL vs using Qt. The STL code (taken from http://en.cppreference.com/w/cpp/algorithm/transform) looks like this:

   std::string tmp = s;
   std::transform(tmp.begin(), tmp.end(), tmp.begin(), std::ptr_fun<int, int>(std::toupper));
   return tmp;
Note the use of a temporary variable, because std::transform does a destructive update.

The equivalent Qt code looks like:

   return s.toUpper();
I'll let you draw your own conclusions.

By the way if there's a simpler way to do that using the STL, I'd love to hear about it.


It should be (eliding namespaces):

    transform(begin(tmp), end(tmp), begin(tmp), [] (char c) { return toupper(c); });
...the only noisy parts are the calls to begin() and end(). On the other hand, it will work for any collection type with good iterators (vectors, lists, strings, deques, ropes, etc.).

It is also trivial to write a mylib::transform_in_place() that cuts down on the noise with a one-line function.


Thanks, but I was just looking for a better way to convert a string to upper case, not an arbitrary collection of chars. I've never yet needed to upper-case anything other than a string and unless I get the urge to write a text editor (unlikely!) I doubt I ever will.

And yes it's trivial to wrap this in a function, but in my opinion it really shouldn't be necessary. It's such a simple and common-place operation, why isn't it in the standard library?


a better way to convert a string to upper case, not an arbitrary collection of chars

just a sidenote - this is exactly the opposite of the way of thinking the STL promotes: in STL terms, you do want a generic operation on an arbitrary collection of anything that happens to be compatible with toupper. It shouldn't even have to be a char; let the compiler figure that out. And as it turns out, std::string happens to fit in nicely.


I think that sums up nicely why I don't particularly like the STL: it's that mindset. It's all well and good having a generic operation over an arbitrary collection of anything, but sometimes you just want to get stuff done. All that generality adds distance between your code and the problem you're trying to solve with it. It's obfuscation.

Edit: I hope that doesn't sound too combative - I'm just trying to explain myself, not saying that you're wrong.


I'll let you draw your own conclusions.

from this single example I could say 'STL isn't bloated and contains just the things necessary while leaving plenty of room for extending. That is just the way it was designed.' (And extending it is something I do regularly, resulting in a header-only library with tons of small helper functions, containing, as noted in the other comments, things like 'transform_inplace' and 'to_upper', the latter implemented using the former. The reusability of most of these shouldn't be underestimated.) - but for this case I sort of agree that toUpper could have been a member of string, because it's pretty basic, and if you work with strings regularly QString is nicer from the start as you don't need extra functions. But of course that opens the road to others saying that if toUpper is there, then why shouldn't x and y and z be there.. Never-ending :P. Which is also probably why the STL is the way it is.


I admit I'm guilty of cherry picking my examples. :-) I wouldn't say it's bloat though, if it's something that most people end up having to re-implement. For what it's worth, I have my own little library of STL helpers too...


Why does the title slide show a ship flying a US flag? He obviously has correct pictures, because they are in the later slides...


Because it is a pretty picture and the ship with the US flag represents the sailing part in the title of the presentation.


What, like C++ comes from the US?


It was created here, although it has long since transcended its origins.


I suppose the ship on the title slide is "the one that sails" (it's not the Vasa).


There are still things I wish were possible in C++. I'm not a language expert, but I still wish for them.

* non-templated tuples instead of structs: for example, 2 struct types are the same if the data types they hold are the same. Not sure if it's possible for a statically typed language.

* tighter STL container integration with the core language syntax

* some way to do modules

* some new thing to make compiling much faster, so as to avoid relying on precompiled headers. Go and Rust are so much faster at this; C++ is not. I still wonder if there could be some gcc or clang extension for this; it would be really appreciated. C++ is awesome to work with, but the language implementations suffer from those little details.


> * non templated tuples instead of struct: for example, 2 struct types are the same if the data types they hold is the same. Not sure if it's possible for statically typed language.

C++ has std::tuple, but most people (myself included) find nominal typing easier to understand. I would only use std::tuple in situations where I needed to store values of the types of a variadic argument pack.

> * tighter STL container integration with the core language syntax

C++ actually does pretty well here; it has operator overloading as well as std::initializer_list.

> * some way to do modules

Hopefully a good modules proposal makes it for C++17

> * some new thing to make compiling much faster, so to avoid relying on precompiled headers. go and rust are so much faster for this, C++ is not. I still wonder if there could be some gcc or clang extension for this, it would be really appreciated. C++ is awesone to work with, but language implementations suffers from those little details.

Modules would help with compile speed. IMO, Go doesn't offer very many abstractions to the programmer that the compiler must eliminate, so it's easier to compile quickly. My one-day experience with Rust was that it actually compiled quite slowly; in your experience with the language, did you find it compiled quickly?


ML and Haskell are statically typed, and have tuples as you describe them. For instance, the tuple

  (42, "foo", fun x -> x+3)
has type

  int * string * (int -> int)
in ML. It would probably be a serious extension to C++'s type system, though, and for limited benefit: you're asking for yet another way to do Cartesian product. Structs already do that. What's sorely lacking is sum types:

  type 'a option = Some of 'a
                 | None

  type 'a list = Cons of 'a * 'a list
               | Empty

---

Integrating the STL at the syntax level gives a special status to the standard library, preventing users from writing effective replacements or analogues. Also, C++ has precious little orthogonality left; let's not destroy it.

---

What kind of modules are you talking about? C++ has compilation units, classes, and namespaces. What capability do you want that they don't already provide?

---

Good luck on the fast compilation front. That one will likely require removing things from the language, or making some major compatibility-breaking changes in the syntax (to make it easier and faster to parse), and possibly a rework of the template system. No. Way.


If I understand correctly, Go is the "removing things from the language" approach to fixing the fast compilation issue.


Yes. Namely templating.


clang has implemented modules as a replacement for monolithic precompiled headers, but they are marked experimental for C++. They are tracking the C++ modules proposal.

See http://clang.llvm.org/docs/Modules.html and http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n207....


Why can't we simply have C + RAII + generics? That would be a baller language.


throw in first-class functions and lambdas as well, replace generics with templates the way they are now, and I'm all for that!


I really enjoyed this talk. The Q&A session was great as well, Scott is an entertaining story teller.


Why do people insist on linking to slideshare? http://files.meetup.com/1455470/Why%20C%2B%2B%20Sails%20When...




