> It has been claimed that Euler's identity appears in his monumental work of mathematical analysis published in 1748, Introductio in analysin infinitorum. However, it is questionable whether this particular concept can be attributed to Euler himself, as he may never have expressed it.
https://en.m.wikipedia.org/wiki/Euler's_identity
Intel and AMD should provide GCC maintainers with hardware. It could easily come out of the marketing budget: benchmarks compiled using gcc would be affected, and they use benchmark numbers in advertising.
Maybe it's my recent time living in the Erlang/Elixir community, but writing code that is impossible to crash doesn't make sense to me. Servers die, machines get unplugged, network partitions happen, etc. I don't discount that the type system allows you to specify segments of your program that are provably impossible to crash, but doing that for entire nontrivial programs seems like an impossible ask to me.
The general aim of error handling in languages like Elm is for the types not to lie. If a function can return either a result or an error, the type represents that. The more places that can return an error, the more complex the types and the more complicated the error handling. You can always pass the error value up through the layers, but as people like to say, it's just easier to make illegal states unrepresentable.
The aim is not to pretend that failures don't happen, especially when making remote calls; the types are transparent about showing where failure can occur.
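To make the "illegal states unrepresentable" point concrete, here is a minimal sketch in Java (this thread has no single language, so sealed types and Java 21 pattern switches stand in for Elm's custom types; all names are made up for illustration):

    // A field-bag model permits illegal combinations, e.g. loading == true
    // while error != null at the same time:
    //   class Page { boolean loading; Weather data; String error; }
    //
    // A sealed union makes those mixes impossible to construct.
    sealed interface RemoteData permits NotAsked, Loading, Loaded, Failed {}

    record NotAsked() implements RemoteData {}
    record Loading() implements RemoteData {}
    record Loaded(double temperature) implements RemoteData {}
    record Failed(String message) implements RemoteData {}

    class View {
        // The switch must cover every state or the compiler rejects it,
        // which is the sense in which the types "do not lie".
        static String render(RemoteData state) {
            return switch (state) {
                case NotAsked n -> "Enter a location";
                case Loading l -> "Loading...";
                case Loaded ok -> "Temperature: " + ok.temperature();
                case Failed err -> "Error: " + err.message();
            };
        }
    }

This is roughly the Elm community's RemoteData pattern, just transliterated.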
> Servers die, machines get unplugged, network partitions happen, etc
These things happen in backend / distributed systems. They don't really happen in the browser, which is basically a self-contained sandbox that can make network requests. The browser code needs to handle those network requests maybe failing, but that's about it in terms of dealing with unpredictability. So, it's totally reasonable to think that a web-app should never be able to crash.
So say I make two API requests and get back some nonsensical data (say v1 from the first request, and v2 from the second because of some bug in the backend). The program can certainly validate that data and reject the v2 response if it's not prepared for it, but then what do you do? To me, "crashing" that entire page is a reasonable way out for some programs. As a browser you can also lose Internet access, in which case you need to be prepared for whatever happened on the other end depending on the circumstances of the disconnection (did my request go out before the connection died? should I retry?). The problem is certainly more constrained, to your point, by being in a browser, but it's trivial to get your program into a nonsensical state where you cannot render as soon as you introduce external APIs. Perhaps Elm gets around this somehow, but I'm failing to see how.
> The program can certainly validate that data and reject the v2 response if it's not prepared for it, but then what do you do? To me, "crashing" that entire page is a reasonable way out for some programs.
Such cases are mostly handled via types like `Maybe APIResponse` or `Either ErrorMsg APIResponse`, so the type system can guarantee that you handle both cases. And handling the error case (even if it's just a simple message to the user like "Error: Unable to fetch new weather data for location X/Y") is better than just crashing.
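For readers unfamiliar with Maybe, here is a minimal sketch of the same idea using java.util.Optional, Java's closest standard-library analogue (Java chosen only because this thread has no single language; the weather wording just reuses the example above):

    import java.util.Optional;

    class WeatherView {
        // The caller cannot reach the wrapped value without also naming
        // the empty case; "just use it and hope" does not type-check.
        static String render(Optional<String> report) {
            return report
                    .map(body -> "Weather: " + body)
                    .orElse("Error: Unable to fetch new weather data for location X/Y");
        }
    }

The analogy is imperfect: Optional.get() can still throw at runtime, whereas Elm's Maybe has no such escape hatch, which is exactly the guarantee being described.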
I mean it depends on your definition of crashing, but I consider it similar to undefined behaviour. So I greatly prefer
$ ./fetch_weather $location
panic: unable to fetch new weather data for location X/Y
to
$ ./fetch_weather $location
Segmentation fault
and I guess most people do (a segfault being comparable to a 500 Internal Server Error, which is equally bad).
Browser code often hangs forever and then the user has to reload the page. There might not be an error message, but this is still a bad user experience.
You’re describing errors. Errors do occur in Elm, but they rarely lead to crashing.
The type system forces the developer to deal with the error, so that, hopefully, the user of the app has a better experience than a non-responsive web page.
The crashes we cannot avoid through the type system are things like stack overflows or running out of memory.
I could not agree more, and in fact I have no recourse when a developer tells me with a straight face that their code "cannot crash". The world's best type system doesn't prevent timeouts, running out of disk space, running out of memory, etc. The attitude I typically receive when pushing pragmatism instead of idealism is a "you just don't understand". Throwing an exception is _not_ the worst thing a program can do, by a hell of a long shot!
«Cannot crash» != «cannot fail». There are a bunch of things in Elm that can fail, but the language forces you, «through types», to deal with those situations.
That doesn’t mean developers make apps that deal with errors reasonably, but the developers should be aware of every part of the application that can fail.
The exception, of course, being stack overflows, as we haven't solved the halting problem yet.
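The stack-overflow caveat is easy to demonstrate. The following sketch is perfectly well typed, and no type checker will flag it (Java here; a language with tail-call elimination would turn this particular example into an infinite loop instead, which is hardly better for the user):

    class Blowup {
        // Compiles cleanly, yet crashes at runtime with StackOverflowError:
        // proving this recursion never terminates is the halting problem
        // in miniature, so the type system cannot rule it out.
        static int countUp(int n) {
            return countUp(n + 1);
        }

        public static void main(String[] args) {
            System.out.println(countUp(0));
        }
    }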
You're boned. Then again, most systems, including Ethereum, are based on the assumption that the miners aren't majority-controlled by an adversary. That may or may not be a sound assumption.
The whole idea is predicated on the attacker caring more about keeping the information from getting out than about getting to the person holding the dead man's switch. If they care more about getting to that person than about whatever information that person has threatened to publish, then no level of security on the switch matters; the switch just becomes part of the cost of getting to its owner.
We have had our entire kube cluster go down because the image size was huge, the master node only had 20 GB of disk, and Docker wasn't cleaning up old named images. Smaller images just push the problem into the future, but that buys us time to rebuild our cluster on a new kube version.
Most of the horrible security bugs in Java show up in the sandbox, where attackers can supply arbitrary code for you to run.
In contrast, Java as a server language has an excellent security record IME. The last public patch panic I can remember was in 2011, with the denial-of-service bug in floating-point parsing. There have been other security bugs regarding cryptography etc., I'm sure, but in general you can feel very secure running Java on your servers.
It is a shame that security bugs for both are bundled together, making every sandbox compromise a "Remote exploitable" vulnerability. The "applet" use case should probably just die, there is no indication that Java sandboxing will ever be secure, the design is unsound.
Java as a server language has a record of nasty serialization-related RCE vulnerabilities. Of course, they're in popular Java libraries used on the server rather than in the language itself, just like this bug was in a popular C library rather than the language itself, but Java makes it very easy to accidentally write that kind of vulnerability. In fact, just loading two unrelated libraries that are individually safe sometimes creates an exploitable RCE condition in Java. That's worse than even C.
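For reference, the dangerous pattern is tiny. A sketch of the classic anti-pattern (illustrative, not any particular library's code):

    import java.io.InputStream;
    import java.io.ObjectInputStream;

    class Endpoint {
        // Deserializing bytes from an untrusted source: readObject() will
        // instantiate any serializable class on the classpath, so "gadget"
        // classes from unrelated libraries (the Apache Commons Collections
        // chain is the famous example) can be combined into code execution
        // before this method even returns.
        static Object handle(InputStream untrusted) throws Exception {
            try (ObjectInputStream in = new ObjectInputStream(untrusted)) {
                return in.readObject();
            }
        }
    }

Since Java 9, ObjectInputStream.setObjectInputFilter (JEP 290) lets you restrict which classes may be deserialized, but the unsafe version above remains the path of least resistance.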
No disputing that bugs can be written in any language. But by avoiding C/C++ you're excluding a specific class of bugs which have historically proved harmful.
You can write exploitable code in Java. But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.
Essentially any bug that can be written in Java/Go/Rust/etc can be written in C/C++. But some C/C++ bugs are extremely uncommon in other languages, or you have to actually TRY to introduce them.
> But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.
Depends on your definition of arbitrary. Higher-level languages have higher-level exploits. While injecting x86 shellcode into a Java process is probably hard, many Java applications have been vulnerable to serialization bugs which result in the execution of arbitrary bytecode.
It also needs to be said that this is generally not a reasonable reason to pick C over Rust. Memory-safe languages are effective defenses against these flaws.
>Bugs can be found in code written in all languages.
But not all languages frequently produce security vulnerabilities as a result of common types of bugs, bugs that are due to error-prone humans having to do things that should be done for us automatically in the year of our Lord 2016.
Java applets have security issues today. That's a situation where you are allowing random websites to execute arbitrary code on your computer. Flash has the same issues. So don't do that.
Don't confuse Java applets (and the lack of security thereof) with the JVM as a development platform. I'd bet on the security of a Java application over that of a C/C++ application any day.
To be clear, are you referring to security bugs in the Java standard library (written almost completely in Java), or those in the JVM itself or the browser plugins (written almost entirely in C++), or in Java code bases?
The vast majority of the high profile Java security bugs have been in the second, which would be more of a ding against C++ than Java the language, wouldn't it?
I think it would count against Java, in the sense that Java does not support writing high-performance code like the Java runtime or security code itself. The runtime may not have as many errors as OpenSSL, but that argument is about implementation quality, not against C/C++.
To be clear, I am not a security researcher, and I haven't verified the severity of these issues. But in 2016 alone there have already been 16 CVEs, which works out to about four per month so far.