Can someone please explain the purpose of a logging library (woof, log4j, etc)? There must be something beyond writing text to an external file, possibly with different error levels, that I'm missing. This is a serious question. I truly do not understand what you gain by depending on an external library for this (seemingly) simple operation and would appreciate some insight. Thanks!
Logging libraries often let you change the logging level at runtime, without restarting or recompiling the application, and turn logging on or off at runtime for different parts of the application. They also let you organize loggers hierarchically, so that when you set the level to INFO in one part of the application, all of the connected code gets its level set to INFO as well, and you can define which parts of the application should change their log levels in sync. There are also performance considerations: logging libraries often claim to implement tricks so that logging is faster than naively writing strings to a file.
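For a sense of what that buys you, here's a minimal sketch (all names invented, not any particular library's API) of just the "per-component level, changeable at runtime" part:

```cpp
#include <cstdio>
#include <map>
#include <mutex>
#include <string>

// Minimal sketch of runtime-adjustable, per-component log levels.
// Every name here is invented; this is not any real library's API.
enum class Level { Debug, Info, Warn, Error };

class LogRegistry {
public:
    // Change the level of a component (e.g. "net" or "net.http") while the app runs.
    void setLevel(const std::string& component, Level level) {
        std::lock_guard<std::mutex> lock(mutex_);
        levels_[component] = level;
    }

    // Should a message at `level` from `component` be emitted?
    bool enabled(const std::string& component, Level level) {
        std::lock_guard<std::mutex> lock(mutex_);
        // Walk up the dotted hierarchy: "net.http" falls back to "net".
        std::string name = component;
        for (;;) {
            auto it = levels_.find(name);
            if (it != levels_.end()) return level >= it->second;
            auto dot = name.rfind('.');
            if (dot == std::string::npos) break;
            name.erase(dot);
        }
        return level >= Level::Warn;  // default when nothing is configured
    }

private:
    std::mutex mutex_;
    std::map<std::string, Level> levels_;
};

int main() {
    LogRegistry registry;
    registry.setLevel("net", Level::Debug);  // turn one subsystem up at runtime
    std::printf("net.http debug enabled: %d\n",
                registry.enabled("net.http", Level::Debug));  // inherits from "net"
}
```

A real library layers the outputs, formatting, rotation, and the claimed performance tricks on top of a registry like this.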
I was wondering the same, but a recent explanation made sense to me. The idea is to have the ability to control logging of third-party libraries. You might want to see some log messages but you don't want everything spamming stuff into stdout.
If it's just writing to a file or the screen you don't need much.
But then you need interpolation of arguments. And other appenders: what about logging to the network, for aggregation elsewhere? And logging levels. And a couple more features and corner cases.
And before you know it, you're writing a lot of code which could be packaged into a library.
But you don't need to; like with most library code, you could write it in-house yourself.
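A minimal in-house sketch of those pieces (all names invented, not modeled on any particular library): a level gate, printf-style argument interpolation, and pluggable appenders so the same call can go to a file today and to the network tomorrow.

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Minimal homegrown logger sketch; every name here is invented.
enum class Level { Debug, Info, Warn, Error };

// An "appender" decides where a formatted line ends up (file, network, ...).
struct Appender {
    virtual ~Appender() = default;
    virtual void write(const std::string& line) = 0;
};

struct FileAppender : Appender {
    explicit FileAppender(const char* path) : file_(std::fopen(path, "a")) {}
    ~FileAppender() override { if (file_) std::fclose(file_); }
    void write(const std::string& line) override {
        if (file_) std::fprintf(file_, "%s\n", line.c_str());
    }
    std::FILE* file_;
};

class Logger {
public:
    void addAppender(std::unique_ptr<Appender> a) { appenders_.push_back(std::move(a)); }
    void setLevel(Level l) { level_ = l; }

    template <typename... Args>
    void log(Level l, const char* fmt, Args... args) {
        if (l < level_) return;                         // cheap early-out
        char buf[512];
        std::snprintf(buf, sizeof(buf), fmt, args...);  // argument interpolation
        for (auto& a : appenders_) a->write(buf);
    }

private:
    Level level_ = Level::Info;
    std::vector<std::unique_ptr<Appender>> appenders_;
};

int main() {
    Logger logger;
    logger.addAppender(std::make_unique<FileAppender>("app.log"));
    logger.log(Level::Info, "connected to %s:%d", "example.org", 443);
    logger.log(Level::Debug, "filtered out at the current level");
}
```

Once you also want rotation, timestamps, thread IDs, and async writes, the in-house version starts to look an awful lot like the libraries.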
> It was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. This could allows attackers with control over Thread Context Map (MDC) input data when the logging configuration uses a non-default Pattern Layout with either a Context Lookup (for example, $${ctx:loginId}) or a Thread Context Map pattern (%X, %mdc, or %MDC) to craft malicious input data using a JNDI Lookup pattern resulting in a denial of service (DOS) attack. Log4j 2.15.0 restricts JNDI LDAP lookups to localhost by default. Note that previous mitigations involving configuration such as to set the system property log4j2.noFormatMsgLookup to true do NOT mitigate this specific vulnerability.
(Slightly off-topic, as the link is a collection of research papers rather than a book.) A Visual Survey of Text Visualization Techniques: https://textvis.lnu.se/
Immediate action! Some person actually told me that Florida would be underwater less than ten years from now (I guess it kind of already is, with all the hurricanes). Eagerly awaiting the excuses when that time passes us all by.
Does this document apply to the US (which is a mere tiny fraction of a percent of overall pollution)? Not sure how acting now would have any impact.
Not trying to be rude, but I don't believe this is a sensible choice if you care about performance (it sounds like you were more concerned about missing familiar features and such, so this comment may be irrelevant). Just curious whether you were able to measure the performance degradation after this change was made? In other words, what is the price paid for obtaining familiar constructs (it certainly won't be zero)?
GP was overly broad IMHO, but there definitely can be performance costs to C++ over C. Some are really subtle while others are blatant. I got bitten in the real world by try/catch exceptions. In a non-performance-intensive app like Transmission I would guess any such things would be irrelevant, but I don't think GP deserved the downvotes for the speculation.
Exceptions are faster than error codes in the happy case where there isn't an error. The deadly sin with exceptions is throwing them as part of the normal control flow. Don't do that for any code where performance matters.
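A minimal sketch of the difference (hypothetical functions, C++17): when a failure is an *expected* outcome, throwing puts the exception machinery on the normal path; reporting it in the return value keeps the happy path a plain return.

```cpp
#include <charconv>
#include <optional>
#include <stdexcept>
#include <string_view>

// Anti-pattern: a failed parse is expected input, so throwing here makes
// exceptions part of normal control flow and pays the throw/unwind cost
// every time the input isn't a number.
int parseOrThrow(std::string_view s) {
    int value = 0;
    auto res = std::from_chars(s.data(), s.data() + s.size(), value);
    if (res.ec != std::errc{})
        throw std::invalid_argument("not a number");
    return value;
}

// Preferred: expected failures travel in the return value; exceptions stay
// reserved for genuinely exceptional situations, so the common case is cheap.
std::optional<int> parseIfNumber(std::string_view s) {
    int value = 0;
    auto res = std::from_chars(s.data(), s.data() + s.size(), value);
    if (res.ec != std::errc{})
        return std::nullopt;
    return value;
}

int main() {
    auto v = parseIfNumber("not-a-number");  // no exception machinery involved
    return v ? *v : 0;
}
```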
I'm guessing the author measured the overall performance and assumed that using error codes, rather than exceptions, is what made the sample code he provided slower. Turns out, the reason it is slower is that, in the error-code version, a copy of a std::string is made. In the exceptions version, where the string is returned rather than passed as a parameter by reference, the copy can be elided because the compiler applies NRVO. It has nothing to do with exceptions.
Maybe that author should use a profiler before making these types of claims? check the generated assembly? etc?
The claim that it's far slower to return a value in RAX than to generate a ton of extra code (blowing the icache, etc.) is quite an interesting one. I think it deserves a closer look, though.
The only case where a string is returned in the exception-based code is Xml::getAttribute(), where in both implementations there is exactly one copy (copy assignment in the error-code case and copy construction in the exception case).
Being able to just return the value in the good case is, however, one of the main advantages of using exceptions, so it might even be fair to count the fact that such optimizations become possible in more code as a plus for exceptions.
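To illustrate the point being argued, here is a hypothetical reconstruction of the two styles (not the article's actual Xml::getAttribute()):

```cpp
#include <stdexcept>
#include <string>

// Error-code style: the value comes back through an out-parameter, so the
// caller's string is copy-assigned from the internally held value.
bool getAttribute(const std::string& name, std::string& out) {
    static const std::string stored = "value";  // stand-in for the parsed attribute
    if (name.empty())
        return false;   // failure reported via the return code
    out = stored;       // copy assignment: one copy, no elision possible
    return true;
}

// Exception style: the value is returned by value, so the compiler can build
// the result directly in the caller's storage (NRVO / copy elision); failure
// is reported by throwing instead of through the return channel.
std::string getAttributeOrThrow(const std::string& name) {
    if (name.empty())
        throw std::runtime_error("missing attribute");
    std::string result = "value";  // stand-in for the parsed attribute
    return result;                 // eligible for NRVO
}

int main() {
    std::string viaOutParam;
    getAttribute("id", viaOutParam);                    // one copy assignment
    std::string viaReturn = getAttributeOrThrow("id");  // construction can be elided
    return viaOutParam == viaReturn ? 0 : 1;
}
```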
> Maybe that author should use a profiler before making these types of claims? check the generated assembly? etc?
Have you used a profiler before making your claims?
> The claim that it's far slower to return a value in RAX than to generate a ton of extra code (blowing the icache, etc.) is quite an interesting one. I think it deserves a closer look, though.
The cost of error codes is the branching at every point in the call stack, vs. only at the error source with exceptions. And the generated exception code is marked as cold and not stored interspersed with the non-exception code, so it will not affect performance unless exceptions are thrown, which, if you are using exceptions correctly, should be exceptional cases.
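A hedged sketch of that cost model (hypothetical call chain, invented names): with error codes every frame branches on the result even when nothing ever fails, while with exceptions the intermediate frames carry no checks and the handler code sits out of line.

```cpp
#include <system_error>

std::error_code readHeader() { return {}; }  // stand-in leaf that never fails here

// Error-code style: every level of the call stack branches on the result,
// even on the happy path.
std::error_code parseDocumentEc() {
    if (auto ec = readHeader(); ec) return ec;  // check on every call
    // ... more steps, each followed by another check ...
    return {};
}

void readHeaderOrThrow() {}  // stand-in leaf that never throws here

// Exception style: this frame contains no checks; the unwind machinery and
// the compiler's cold, out-of-line handler code only run if something throws.
void parseDocument() {
    readHeaderOrThrow();  // no branch on the happy path
    // ... more steps, still no checks ...
}

int main() {
    (void)parseDocumentEc();
    parseDocument();
}
```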
YO. I measured it myself just to be sure and since you were such a dickhead. (Copying and pasting the output from console)
With exceptions: Parsing took on average 135 us
With error codes: Parsing took on average 0 us
Maybe the dude forgot to enable optimizations altogether (they're not present in the command-line options in the comment at the top of that file). I added /O2 since it is a PERFORMANCE test, remember? Hilarious.
Exception overhead can also be kept out of sensitive areas by marking functions `noexcept` wherever we can guarantee they won't throw, especially in code paths where we cannot risk a throw.
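A tiny sketch of that (hypothetical function): `noexcept` doesn't make throwing cheaper, it promises that exceptions won't escape, which lets callers and the compiler skip exception-handling scaffolding around the call; if one does escape, std::terminate is called.

```cpp
#include <vector>

// Hot-path helper marked noexcept: it reports its degenerate case without
// throwing, so callers can rely on it never unwinding. If an exception did
// escape a noexcept function, std::terminate would be called.
int sumFirstTwo(const std::vector<int>& v) noexcept {
    if (v.size() < 2) return 0;  // no throw for the "too short" case
    return v[0] + v[1];          // operator[] is unchecked and never throws
}

int main() {
    std::vector<int> v{1, 2, 3};
    return sumFirstTwo(v) == 3 ? 0 : 1;
}
```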
> but I don't think GP deserved the downvotes for the speculation.
That's why I asked @squid_demon for a real example where they actually got bitten by it; otherwise, it's simply an emotional reaction favoring one tool over another.
If aria2 [1], which is implemented in C++, is extremely fast, then I can almost guarantee that Transmission's refactoring into C++ will get there too, sooner or later.
I was attempting, perhaps poorly, to ask the authors whether they measured the performance degradation with this change. That way we would have the data you are asking _me_, for some reason, to provide. The onus is not on me to do their work for them and prove to everyone on HN that, yes, I have seen massive performance issues when people "port" C code to C++ (in many, many projects over many, many years throughout my professional career). If the authors choose not to answer, or not to care at all (if they even see my post), that is perfectly acceptable as well. I'm just interested in the facts so all of us can better judge whether this decision was a good one or not!
Actually yes, in an open source ecosystem the onus is on you to show that a change is making things worse for you. You can't expect the developers to test on all systems.
I'm not asking them to test on any system EXCEPT THEIR OWN. Did they test the performance or not? If not, fine. That's all I was asking. It's not a huge ask either, unless they don't give a rat's ass about performance. Which, if they want to use C++, they most likely do not.
Homotopy type theory is a mathematical foundation, like set theory or Martin-Löf's type theory. This is different from a proof assistant like Coq or Lean. In fact, Lean has a library which implements part of HoTT (see https://github.com/leanprover/lean2/blob/master/hott/hott.md). In short, there is no real comparison between Lean and HoTT.
I think it is pretty obvious that Mr. Meyers never actually programmed / used C++ himself for any real projects. That he would abandon the language after only 2.5 years of nonuse makes sense to me.
I would love to know, too! Oh. A search engine. What's this?
"Eighty participants, ages 21 to 65, who meet Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria for major depressive disorder (MDD) will be stratified by study site and randomized with a 1-to-1 allocation under double-blind conditions to receive a single 25 mg oral dose of psilocybin or a single 100 mg oral dose of niacin. Niacin will serve as an active placebo."
Is this the "new study" the article is referring to with "a single dose of psilocybin given to mice"? Oh. Some text. What's this?
> Eighty participants, ages 21 to 65, who meet Diagnostic and Statistical Manual of Mental Disorders (DSM-5) criteria for major depressive disorder (MDD) will be stratified by study site and randomized with a 1-to-1 allocation under double-blind conditions to receive a single 25 mg oral dose of psilocybin or a single 100 mg oral dose of niacin. Niacin will serve as an active placebo.
I'm not convinced these participants are mice.
(As an aside, I love the way they talk about double-blind conditions as if you're not going to be able to tell whether that pill was 25 mg of psilocybin.)
It's possible to have a blind participant if they've not had psilocybin before and they are going to receive either psilocybin or one of a multitude of other psychoactive substances (at least that's how Johns Hopkins did it).
You go to all the trouble of typing up a smart-ass response to show how clever you are with a search engine, yet you totally fail to provide the link you found. This quote is meaningless without the context of which study you found.