robert-wallis's comments (Hacker News)

I remember Google being more popular with tech-savvy users than Yahoo or AltaVista. IMO this is the writing on the wall.


How about COO - Close Object Obligation? And you could sound like a pigeon saying "coo" during talks explaining it.


Hah I like this one!

PS. Congrats! https://verdagon.dev/blog/easter-egg-notes


I'd really like to see an LLM use Wolfram Alpha APIs like the new Toolformer paper does https://paperswithcode.com/paper/toolformer-language-models-...



I recently learned:

> It has been claimed that Euler's identity appears in his monumental work of mathematical analysis published in 1748, Introductio in analysin infinitorum. However, it is questionable whether this particular concept can be attributed to Euler himself, as he may never have expressed it. https://en.m.wikipedia.org/wiki/Euler's_identity
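For reference, the identity in question is the special case of Euler's formula at theta = pi:

    e^{i\theta} = \cos\theta + i\sin\theta
    \implies e^{i\pi} = \cos\pi + i\sin\pi = -1
    \implies e^{i\pi} + 1 = 0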

But it is still cool.

https://youtu.be/yPl64xi_ZZA


Intel and AMD should provide GCC maintainers hardware. It could easily come out of the marketing budget, because benchmarks compiled using gcc would be affected, and they use benchmark numbers in advertising.


One of Elm’s main focuses is code that is not possible to crash. Once I grokked that, native modules going away in favor of ports made much more sense.


Maybe it's from my recent time in the Erlang/Elixir community, but writing code that is not possible to crash doesn't make sense to me. Servers die, machines get unplugged, network partitions happen, etc. I don't discount that the type system allows you to specify segments of your program that are provably impossible to crash, but doing that for entire nontrivial programs seems like an impossible ask to me.


The general aim of error handling in languages like Elm is for types to not lie. If a function can either return a result or an error, the type represents that. The more places that can return an error, the more complex the types, and the more complicated the error handling. You can always pass the error value up through layers, but as people would say, it's just easier to make illegal states unrepresentable.

It's not an aim to pretend that failures don't happen, especially when doing remote calls, and the types will be transparent in showing where this occurs.
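Elm itself compiles to JavaScript, but the "types that don't lie" idea can be sketched in Java with a sealed result type and an exhaustive switch (the names below are illustrative, not from any real library):

```java
// A rough Java analogue of Elm's Result type. The return type admits
// failure, and the exhaustive switch over the sealed hierarchy means
// "forgetting" the error branch is a compile error, not a runtime crash.
sealed interface Result<T> permits Ok, Err {}
record Ok<T>(T value) implements Result<T> {}
record Err<T>(String message) implements Result<T> {}

public class ResultDemo {
    // Hypothetical parser: the signature is honest about failure.
    static Result<Integer> parsePort(String s) {
        try {
            int p = Integer.parseInt(s);
            return (p >= 0 && p <= 65535)
                    ? new Ok<>(p)
                    : new Err<>("port out of range: " + p);
        } catch (NumberFormatException e) {
            return new Err<>("not a number: " + s);
        }
    }

    public static void main(String[] args) {
        // The compiler checks that both cases are covered (Java 21+).
        String msg = switch (parsePort("8080")) {
            case Ok<Integer> ok -> "listening on " + ok.value();
            case Err<Integer> err -> "error: " + err.message();
        };
        System.out.println(msg);
    }
}
```

The point isn't that failure can't happen, it's that the caller can't compile code that pretends failure can't happen.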


> Servers die, machines get unplugged, network partitions happen, etc

These things happen in backend / distributed systems. They don't really happen in the browser, which is basically a self-contained sandbox that can make network requests. The browser code needs to handle those network requests maybe failing, but that's about it in terms of dealing with unpredictability. So, it's totally reasonable to think that a web-app should never be able to crash.


So say I make two API requests and get back some nonsensical data (v1 from the first request, and v2 from the second because of some bug in the backend). The program can certainly prepare itself for validating that data, and reject the v2 if it's not prepared for it, but then what do you do? To me "crashing" that entire page is a reasonable way to get around that for some programs. You as a browser can also lose Internet, in which case you also need to be prepared for things that happen on the other end depending on the circumstances of disconnection (did my request go out before it died? should I retry?). The problem is certainly more constrained, to your point, by being in a browser, but it's trivial to get your program into a nonsensical state where you cannot render as soon as you introduce external APIs. Perhaps Elm gets around this somehow, but I'm failing to see how.

Edit: redirection --> disconnection


> The program can certainly prepare itself for validating that data, and reject the v2 if it's not prepared for it, but then what do you do? To me "crashing" that entire page is a reasonable way to get around that for some programs.

Such stuff is mostly handled via types like `Maybe APIResponse` or `Either ErrorMsg APIResponse`, so the type system can guarantee that you handle both cases. And handling the error case (even if it's just a simple message to the user like "Error: Unable to fetch new weather data for location X/Y") is better than just crashing.

I mean it depends on your definition of crashing, but I consider it similar to undefined behaviour. So I greatly prefer

    $ ./fetch_weather $location
    panic: unable to fetch new weather data for location X/Y
to

    $ ./fetch_weather $location
    Segmentation fault
and I guess most people do (where segfault is similar to 500 Internal Server Error which is equally bad).


Browser code often hangs forever and then the user has to reload the page. There might not be an error message, but this is still a bad user experience.

Personally, I'd rather have the error message.


You’re describing errors. Errors do occur in Elm, but they rarely lead to crashing.

The type system forces the developer to deal with the error, so that, hopefully, the user of the app has a better experience than a non-responsive web page.

Crashes we cannot avoid through the type system are things like stack overflow or running out of memory.


I could not agree more - and in fact, have no recourse when a developer tells me with a straight face that their code "cannot crash". The world's best type system doesn't prevent timeouts, running out of disk space, memory, etc. The attitude I typically receive when trying to push pragmatism instead of idealism is usually a "you just don't understand". Throwing an exception is _not_ the worst thing a program can do, by a hell of a long shot!


«Cannot crash» != «cannot fail». There are a bunch of things in Elm that can fail, but the language forces you, «through types», to deal with those situations.

That doesn’t mean developers make apps that deal with errors reasonably, but the developers should be aware of every part of the application that can fail.

The exception, of course, being stack overflows, as we haven’t solved the halting problem yet.


> One of Elm’s main focuses is code that is not possible to crash

Unless you have an infinite loop...


What if the miners deny check-in transactions to force the killcord to execute?


You're boned. Then again, most systems, including Ethereum, are based on the assumption that the miners aren't majority-controlled by an adversary. That may or may not be a sound assumption.


The whole idea is kind of predicated on whoever you're worried about attacking you not wanting the information to get out more than they care about getting to the person holding the dead man's switch. If they are more concerned with getting to that person than with whatever information the person has threatened to publish, no level of security on the switch matters; it just becomes part of the cost of getting to the owner.


We have had our entire kube cluster go down because the image size was huge, the master node only had 20 GB of disk, and Docker wasn’t cleaning up old named images. Smaller images just push the problem into the future, but it gives us time to rebuild our cluster with a new kube version.


It seems like using Google Sheets or the online Excel is the answer. Single source of truth, collaboration, and keeping the same fundamental tools.


This sounds like a witch hunt. Java has horrible security bugs even today.

Bugs can be found in code written in all languages.


Most of the horrible security bugs in Java show up in the sandbox, where attackers can supply arbitrary code for you to run.

In contrast, Java as a server language has an excellent security record IME. The last public patch panic I can remember was in 2011, with the denial-of-service bug regarding parsing of floating points. There have been other security bugs regarding cryptography etc., I'm sure, but in general you can feel very secure running Java on your servers.

It is a shame that security bugs for both are bundled together, making every sandbox compromise a "remote exploitable" vulnerability. The "applet" use case should probably just die; there is no indication that Java sandboxing will ever be secure, as the design is unsound.


Oracle is deprecating the Java browser plugin in JDK 9, ie. the applet use case. There will still be support for Java Web Start though.


I've always wondered why nobody sandboxes Java applets in a LXC/Docker container or in a chrome sandbox the same way flash is contained.


Java as a server language has a record of nasty serialization-related RCE vulnerabilities. Of course, they're in popular Java libraries used on the server rather than the language itself, just like this bug was in a popular C library rather than the language itself - but Java makes it very easy to accidentally write that kind of vulnerability. In fact, just loading two unrelated libraries that are individually safe sometimes creates an exploitable RCE condition in Java. That's worse than even C.
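For what it's worth, the standard mitigation for this class of bug is a deserialization filter (JEP 290, Java 9+): an allow-list so the stream can only instantiate the classes you actually expect. A rough sketch (the method name `readTrusted` is made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Gadget-chain exploits work by making the stream instantiate classes
// you never expected; an allow-list filter rejects anything else before
// its deserialization logic can run.
public class SafeDeserialize {
    static Object readTrusted(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes));
        // Cap object-graph depth, allow only java.lang.String, and
        // reject every other class ("!*").
        in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                "maxdepth=5;java.lang.String;!*"));
        return in.readObject();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("hello");
        }
        // A plain String passes the filter; any other class in the
        // stream fails with InvalidClassException instead of being
        // instantiated.
        System.out.println(readTrusted(buf.toByteArray()));
    }
}
```

This doesn't fix the underlying design issue the parent describes, but it does shrink "any class on the classpath" down to an explicit list.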


No disputing that bugs can be written in any language. But by avoiding C/C++ you're excluding a specific class of bugs which have historically proved harmful.

You can write exploitable code in Java. But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.

Essentially any bug that can be written in Java/Go/Rust/etc can be written in C/C++. But some C/C++ bugs are extremely uncommon in other languages, or you have to actually TRY to introduce them.


> But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.

Depends on your definition of arbitrary. Higher level languages have higher level exploits. While injecting x86 shellcode into a java process is probably hard, many java applications have been vulnerable to serialization bugs which result in the execution of arbitrary bytecode.

Source: http://www.darkreading.com/informationweek-home/why-the-java...


Nobody is saying RCE is impossible in memory safe languages, just much less likely.


And this needs to be said more: "RCE is possible in Rust", because Rust is sometimes portrayed in almost unassailable terms.


It also needs to be said that this is generally not a reasonable reason to pick C over Rust. Memory-safe languages are effective defenses against these flaws.


Intent doesn't matter; results do. This is a heatmap problem - there's simply more 'C'/C++ code out there.


I'm pretty sure there is more (and a greater variety of) net facing Java code than C/C++ code.


You may be right. I'm just thinking that everything has an O/S, and it's probably written largely in 'C'. So much will depend on how you measure it.

I tend to ignore the web as much as possible.


>Bugs can be found in code written in all languages.

But not all languages frequently produce security vulnerabilities as a result of common types of bugs that are due to error-prone humans having to do things that should be done for us automatically in the year of our Lord 2016


Java applets have security issues today. That's a situation where you are allowing random websites to execute arbitrary code on your computer. Flash has the same issues. So don't do that.

Don't confuse Java applets (and the lack of security thereof) with the JVM as a development platform. I'd bet on the security of a Java application over that of a C/C++ application any day.


To be clear, are you referring to security bugs in the Java standard library (written almost completely in Java), or those in the JVM itself or the browser plugins (written almost entirely in C++), or in Java code bases?

The vast majority of the high profile Java security bugs have been in the second, which would be more of a ding against C++ than Java the language, wouldn't it?


I think it would count against Java, in the sense that Java does not support writing high-performance code like the Java runtime / security code itself. It may not have as many errors as OpenSSL, but that argument is about implementation quality, not against C/C++.


> Bugs can be found in code written in all languages.

This is like saying "there's no point building bridges because sometimes they collapse".

Memory safety bugs are found far, far less often in memory-safe languages.


Java removed certain classes of errors (memory management). It introduced others (providing an insecure sandbox for applets).


I think code execution by insecure deserialization is the big Java security problem now, though I'm neither a security guy nor a Java guy.


It's not like C applets are safer.


If we consider NaCl to be "C applets"... yes, it actually is safer.


Not sure what 'horrible security bugs' in Java you are referring to.

If those you refer to, and there are many, are exploits in browser plugins, sandboxes, or the JVM, these are written in C(++).


Right. And certain languages remove entire classes of bugs.


> Java has horrible security bugs even today.

Example?

> Bugs can be found in code written in all languages.

And there's no difference between a bug every week and a bug every 10 years?


Here's a list of Java CVEs: https://www.cvedetails.com/vulnerability-list/vendor_id-93/p...

To be clear, I am not a security researcher, and I haven't verified the severity of these issues. But in 2016 alone there are already 16 CVEs, which works out to several per month.


Those are vulnerabilities in the JVM itself - and I'd bet a fair bit the majority will be in the C/C++ parts, not the Java parts.

