
> but 3 seconds is not really something I would worry about

Except it's 3 seconds per file.

The project I'm currently working on has 1.5 million lines of code spanning ~5,000 C++ source files.

At 3 seconds per file it would take over 4 hours to do a clean compile.

Luckily for this project it's not 3 seconds per file, and a full rebuild only takes about 2 hours - which is still a major pain.

The concerns raised by the OP are completely valid, and cause issues for any medium-to-large C++ project.

I'd love to have faster C++ compile times.



My project is around 600k lines of C++ code, with around 1000 source files.

Not even half as big as yours, but for what it's worth my incremental build time is a bit under 3 seconds (and is almost entirely taken up by linking).

A full rebuild for me (on a Ryzen 5950x, doing the build in parallel across all 32 logical cores) takes about 80 seconds.

I feel that for the creative projects I work on, having an iteration time under five seconds is the most important thing for me. Once I've written the code, if that code isn't compiled, linked, launched, and visibly running on screen within five seconds of when I finished typing the code, then I'm going to be tempted to check email or otherwise context switch while I wait for it, and it might as well have taken half an hour.

I spend a lot of time early in development trying to figure out optimal approaches to getting that iteration time down, optimising for incremental build times, link times, and debug build launch times.

I find it pays huge dividends, and quickly, for me.


> I'd love to have faster C++ compile times.

It's not an inherent property of the language. With some care, it is possible to write C++ code with compilation speed comparable to good old C, even in large projects.

The worst offender is usually templates, especially third-party libraries which use them heavily, like Boost. Ideally, don't use these dependencies. The second-best option is to include these libraries only in the *.cpp files which actually use them, and keep that number to a minimum. When absolutely necessary, note that C++ allows splitting templates across header/cpp files; just because the standard library is header-only doesn't mean non-standard templates need to follow that convention.
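As a minimal sketch of that header/cpp split (the function and file names here are invented for illustration): the header carries only the declaration, while the definition and explicit instantiations for the types the project actually uses live in a single .cpp file, so only that one translation unit pays the instantiation cost.

```cpp
// clamp.h (sketch) -- declaration only; no template body here, so files
// that include it don't instantiate anything.
template <typename T>
T clamp_to_range(T value, T lo, T hi);

// clamp.cpp (sketch) -- the definition lives here, together with
// explicit instantiations for the types the project actually needs.
template <typename T>
T clamp_to_range(T value, T lo, T hi) {
    return value < lo ? lo : (value > hi ? hi : value);
}

// Explicit instantiations: compiled once here, linked everywhere else.
template int    clamp_to_range<int>(int, int, int);
template double clamp_to_range<double>(double, double, double);
```

The trade-off is that callers who only see clamp.h must stick to the explicitly instantiated types, or they'll get a linker error; for project-internal templates that's usually acceptable.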

Another typical reason is insufficient modularity of the code. It can cause some source files (especially higher-level ones, like the one containing the program's main function) to include ~all headers in the project. The fix is better API design between the different components of the software. A good pattern for complicated data structures is pure abstract interfaces: this way the implementation stays private, and the consuming code only needs to include the (presumably tiny) interface definition. Another good pattern is FP-style. Regardless of the style, I sometimes write components with thousands of lines of code split across dozens of source/header files, with the complete API of that component being a header with 1-2 pages of code and no dependencies.
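A minimal sketch of the pure-abstract-interface pattern (Logger, ConsoleLogger, and make_console_logger are invented names): the public header carries only the interface and a factory declaration, so consumers never include the implementation's (potentially heavy) headers.

```cpp
// logger.h (sketch) -- the entire public API: a pure abstract
// interface and a factory function, with near-zero dependencies.
#include <memory>
#include <string>

class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& message) = 0;
};

// Declared here, defined in the .cpp file, so the concrete type
// stays invisible to consumers.
std::unique_ptr<Logger> make_console_logger();

// logger.cpp (sketch) -- free to include heavy headers; only this
// translation unit is recompiled when the implementation changes.
#include <iostream>

class ConsoleLogger final : public Logger {
public:
    void log(const std::string& message) override {
        std::cout << message << '\n';
    }
};

std::unique_ptr<Logger> make_console_logger() {
    return std::make_unique<ConsoleLogger>();
}
```

Consumers depend only on the small, stable interface header, so edits to the implementation don't ripple into rebuilds of the rest of the project.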

And of course you want all the help from the toolset you can get: precompiled headers, incremental builds, incremental linker, parallel compilation, etc. Most of these are disabled by default, but can be enabled in the build system and/or IDE.
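As one hedged example of the precompiled-header part: a PCH is typically a single header collecting large, widely used, rarely edited includes (the file name pch.h and the particular includes below are illustrative). With GCC/Clang it can be pre-built via `g++ -x c++-header pch.h`; CMake 3.16+ exposes the same idea through `target_precompile_headers`.

```cpp
// pch.h (illustrative name) -- collect only headers that are large,
// included almost everywhere, and rarely change. Editing this file
// forces a near-full rebuild, so keep volatile project headers out.
#pragma once

#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>
```

The win comes from parsing these headers once per build instead of once per translation unit.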


Appreciate the suggestions; these are all things I aim for in my code. However, this is a legacy project that has not generally taken them into account, and sorting them out has not previously been a priority.

Luckily there is buy-in not just from developers but also from senior management to fix things, but it's also in an extremely risk-averse industry, so changes need to be slow and careful.


> and a full rebuild only takes about 2 hours

The fact that there is an "only" in that sentence tells me everything that is wrong with C++ compile times, and makes me happy I work with Golang.


Do you have a Golang project with 5000+ source files? If not, your project is not on the same level at all.


No I don't, and part of the reason why is that it's much easier to organise Golang code (no header files).

And even if a project was that size, it would compile faster than the C++ equivalent.


I would suggest using parallel builds and dropping your Pentium 4 for a modern multi core system.


Thanks. I’m sure that will fix the problem.


Not my fault your calculation for a 4-hour recompile depends on having zero parallelism, in a year where 16-core processors are easily available and blocking IO is still a thing. So yes, it will probably turn several hours of waiting into a short coffee break, the few times you need a full recompile.


The math for my 4-hour compile didn't take parallelism into account; the 2-hour+ builds, however, do, and were built on a 16-core CPU with parallel builds enabled. It's not turning into a short coffee break.


Anybody complaining about C++ compile time but not using ccache, mold, or ninja has lost all griping rights. Building a 5000 file project using only a single core is beyond silly.

Splitting the project into libraries that don't need to be rebuilt for normal development changes eliminates 90-99% of your build time.

We can get into separate dwarf files after you get the basics down.


> Splitting the project into libraries that don't need to be rebuilt for normal development changes eliminates 90-99% of your build time.

..for your particular use case*

* your particular use case may not be representative



