Not trying to be rude, but I don't believe this is a sensible choice if you care about performance (it sounds like you were more concerned about missing familiar features and such, so this comment may be irrelevant). Just curious: were you able to measure the performance degradation after this change was made? In other words, what is the price paid for obtaining familiar constructs? It certainly won't be zero.
GP was overly broad IMHO, but there definitely can be performance costs to C++ over C. Some are really subtle while others are blatant. I got bitten in the real world by try/catch exceptions. In a non-performance-intensive app like Transmission I would guess any such things would be irrelevant, but I don't think GP deserved the downvotes for the speculation.
Exceptions are faster than error codes in the happy case where there isn't an error. The deadly sin with exceptions is throwing them as part of the normal control flow. Don't do that for any code where performance matters.
I'm guessing the author measured the overall performance and assumed that using error codes, rather than exceptions, is what made the sample code he provided slower. It turns out the reason it is slower is that, in the error-code case, a copy of a std::string is made. In the exceptions case, where the string is returned rather than passed as a reference parameter, the copy can be elided because the compiler performs NRVO (named return value optimization). It has nothing to do with exceptions.
Maybe that author should use a profiler before making these types of claims? check the generated assembly? etc?
That it's far slower to return a value in RAX than to generate a ton of extra code (blowing the icache, etc.) is quite an interesting statement. I think it deserves a closer look, though.
The only case where a string is returned in the exception-based code is Xml::getAttribute(), where both implementations make exactly one copy (copy assignment in the error-code case and copy construction in the exception case).
Being able to just return the value in the good case is, however, one of the main advantages of using exceptions, so it might even be valid to count such optimizations becoming possible in more code as a plus for exceptions.
> Maybe that author should use a profiler before making these types of claims? check the generated assembly? etc?
Have you used a profiler before making your claims?
> That it's far slower to return a value in RAX than to generate a ton of extra code (blowing the icache, etc.) is quite an interesting statement. I think it deserves a closer look, though.
The cost of error codes is the branching at every point in the call stack, vs. only at the error source with exceptions. And the generated exception-handling code is marked as cold and not stored interspersed with the non-exception code, so it will not affect performance unless exceptions are thrown, which, if you are using exceptions correctly, should only happen in exceptional cases.
YO. I measured it myself just to be sure, and since you were being such a dickhead. (Copying and pasting the output from the console.)
With exceptions: Parsing took on average 135 us
With error codes: Parsing took on average 0 us
Maybe the dude forgot to enable optimizations altogether (they're not present in the command-line options in the comment at the top of that file). I added /O2, since it is a PERFORMANCE test, remember? Hilarious.
Exceptions can be avoided with the use of `noexcept` wherever we can do without them, especially in sensitive areas where we cannot risk an exception being thrown.
> but I don't think GP deserved the downvotes for the speculation.
That's why I asked @squid_demon for a real example where they actually got bitten by it; otherwise it's simply an emotional reaction favoring one tool over another.
If aria2 [1], which is implemented in C++, is extremely fast, then I can almost guarantee that Transmission's C++ refactoring will get there too, sooner or later.
I was attempting, perhaps poorly, to ask the authors whether they measured the performance degradation with this change. That way we would have the data you are asking _me_, for some reason, to provide. The onus is not on me to do their work for them and prove to everyone on HN that, yes, I have seen massive performance issues when people "port" C code to C++ (in many, many projects over many, many years throughout my professional career). If the authors choose not to answer, or not to care at all (if they even see my post), that is perfectly acceptable as well. I'm just interested in the facts, so all of us can better judge whether this decision was a good one!
Actually yes, in an open source ecosystem the onus is on you to show that a change is making things worse for you. You can't expect the developers to test on all systems.
I'm not asking them to test on any system EXCEPT THEIR OWN. Did they test the performance or not? If not, fine. That's all I was asking. It's not a huge ask either, unless they don't give a rat's ass about performance, which, if they want to use C++, they most likely don't.