
He also recommends disabling exceptions, which is a valid approach for size-constrained applications. The STL throws, so it can never be used safely with exceptions disabled. When you do this you're committing to writing C-with-classes rather than modern C++.


I haven't read the article yet, but it's standard operating procedure for embedded.

* No exceptions, ever

* No RTTI, ever

* Compile with -Os

* Limit STL to the non-allocating, non-throwing parts

* Don't allow heap usage

* Link against the C standard library rather than the C++ one, so that accidental heap allocations (e.g. from lambdas with large capture specs) and static construction and destruction fail at link time

* Use alternatives to std::function (which, while convenient, is very large and very slow)

* Limit the use of virtual functions (some people suggest eliminating them altogether but I feel like that's a bridge too far).

* Avoid inlining functions if you're not sure that they reduce down to something trivial. I've saved a kilobyte or two by moving some particularly large, commonly used functions into separate translation units so that the compiler couldn't inline them.

* Look at and track your space utilization commit by commit. If the data segment goes up unexpectedly, revisit what you're doing. You may have unwittingly allocated a big block for something. Similar for the code segment.

You get used to the constraints. Type safety, templates, and compile time programming make it 10 times better than C for the purpose in my experience. The only reason to use C is if your C++ compiler is garbage (which is often true for the tiny low powered processors). But if it's ARM you probably have access to a modern compiler and the only reason to stick with C is inertia.

I've been programming with embedded C++ for ten years and every time I have to poke into the C part of the codebase I end up hating life.


> so you don't get accidental heap allocations from lambdas with large capture specs

Lambdas themselves never require heap allocation. I guess you meant std::function?


It depends on what you're using for your function type erasure, but yes.


This is pretty uncontroversial in an embedded system context. As others have said in this thread, nothing spectacular happens if the STL throws; it just boils down to a std::terminate. You can mitigate this by being careful with what you do with the STL.

Also, in a real-time system context, exceptions can be undesirable since they might cause non-deterministic behaviour.

Catching an exception can be surprisingly costly. Did some benchmarking a while ago on the embedded, real-time system I work on and saw that throwing and catching a std::runtime_error had about the same execution time as a rather slow CRC32 calculation (no pre-calculated tables, no special instructions) of a 256-byte input array. (Of course, this depends a lot on the CPU architecture, compiler, etc.)


I agree with the sentiment but in practice I’ve found that most C++ STL exceptions throw in a “fatal error” type of scenario like a bad allocation and generally not an “expected error”. For example, basic_ifstream::open() sets a fail bit on error, and doesn’t throw an exception.

This is in contrast to Python or Swift, for example; their standard libraries are more "throw-prone". Building off the previous example, Swift's String.init(contentsOf:encoding:) throws on failure.

So in practice, IMO it is usually safe to disable exceptions in C++. Though, I have run into tricky ABI breaks when you link multiple libraries in a chain of exceptions->noexcept->exceptions and so on! You’re of course at the mercy of nonstandard behavior so buyer-beware. I definitely wouldn’t advocate for turning them off -just- for a binary size reduction.


You can recover from failed allocations without catastrophic failure. It is a fundamentally lazy programming practice to pretend that error handling has to be an out of band operation that can't be dealt with locally and bubble up progressively.


You can try, but realistically you shouldn't bother in the overwhelming majority of software. Depending on whether you're on a platform that allows for overcommit, you won't necessarily know that an allocation has failed until you attempt to make use of it and the OS tries to back the pages, by which point you could be far from the source of the allocation.

You're just going to end up with an insane amount of error handling only to discover that in the real world, there's likely nothing you can really do anyway.


On platforms that allow overcommitment, you can guarantee your commit charge is physically backed by writing to each page in a memory pool at allocation time (probably at application startup, or at the end of the main loop), then allocating out of that pool.

Using memory that's been allocated but not committed seems like a recipe for disaster.


> Using memory that's been allocated but not committed seems like a recipe for disaster.

It can greatly accelerate sparse data structures.


There have been implementations of most of the STL without exceptions here and there for embedded work. You can do modern-ish C++ without RTTI or the STL; you just have to do some extra work.


> STL throws, so it can never be used safely with exceptions disabled.

With exceptions disabled you still get a panic (i.e. the program immediately terminates) in places where an exception would be thrown; this should be at least as safe as having exceptions enabled.



