
We really need to stop writing network code in unsafe languages, period.


But even 'safe' languages have unsafe VMs. There is no question that over the years these VMs (or the runtimes) have had equally severe vulnerabilities.

The thing that really sets heartbleed apart is not the details of the bug, it's the scale of the 'infection'. OpenSSL is a core dependency of so many distributions and of so many pieces of software.

I think we could argue for lots of different solutions (diversity of implementations, safe languages, more tests) and all of them might be good in one way or another but none of them are a silver bullet for any possible bug either.

I think the takeaway is that you need to be in a position to upgrade any part of your software stack at a moment's notice, not just the obvious top level (e.g. Rails/Django/Jetty).


> these VMs (or the runtimes) have had equally severe vulnerabilities

I just looked at all the CVEs for .NET (62 of them). I did not find any related to reading outside memory bounds or running arbitrary code. All the executable vulnerabilities were related to loading code or escaping sandboxing: irrelevant unless you're running untrusted code in the first place.

A handful of them were due to calling out to an unsafe native library, like to render fonts.

The other serious ones were logic errors, for instance, ASP.NET returning file contents when it should not.

So while technically the VMs/runtimes have bugs, they aren't remotely of the same severity.


.NET isn't something I work with but that's good to hear.

Maybe you could tell me why this one doesn't count though? http://technet.microsoft.com/en-us/security/bulletin/ms10-06...

This is just the first one I found. Sorry, I'm not trying to be awkward; I just don't work with CLR/Silverlight. What in your mind prevents this remote execution exploit from being serious? The CVE rates it a 9.3, and Microsoft claims it allows remote execution on a server too (under some circumstances).


> The vulnerabilities could also allow remote code execution on a server system running IIS, if that server allows processing ASP.NET pages and an attacker succeeds in uploading a specially crafted ASP.NET page to that server and executing the page

Like he said, this matters if you're running untrusted code from potentially malicious people. It's not a serious bug if you're running well-intentioned but potentially buggy code, like openssl.


>A remote code execution vulnerability exists in the Microsoft .NET Framework that can allow a specially crafted Microsoft .NET application

An attacker has to get the user to run their application. If you can get the user to run arbitrary executables, usually you've already won. It's only news in this case because .NET, Silverlight, Flash, Browser JS, Java Applets, etc. offered a sandbox.

It would not have any impact on applications a user is running.


Rust allows you to write safe code without a VM. It is statically checked to be memory and concurrency safe.


1. I think a lot of people believe Rust will just type-check any old program and tell you when it has faults. So you can start with a bit of Ruby/C/Python, translate it to Rust and presto, all your bugs are exposed for the world to see.

In practice Rust's type checker accepts only a _very_ small subset of correct programs. I've been in a position to write some decent sized Rust code recently and it takes a shift in your mindset to start writing decent Rust code.

Even now there are patterns I'm unsure how to model in Rust. Arena allocation is a good example, because it was partly the cause of Heartbleed too. Arena allocation in Rust seems to require unsafe pointers and unsafe code blocks; you can look at Rust's standard library and see this.

2. The point being that the Rust language exposes unsafe code blocks and pointers. At some point you're going to hit those blocks (if nothing else in 3rd party code) and you're back to square one: You need to trust unsafe code that it is correct. It doesn't matter if that code is a VM or unsafe code.
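To make that last point concrete, here is roughly what one of those trusted islands looks like in Rust (a toy sketch of my own, not from any real codebase):

```rust
/// Reads element `i` without the usual bounds check.
/// The `unsafe` block is the only region where the compiler's
/// guarantees are suspended, so it is the only region an auditor
/// has to stare at; the assert re-establishes the invariant that
/// the unchecked access relies on.
fn get_fast(xs: &[u32], i: usize) -> u32 {
    assert!(i < xs.len()); // the proof obligation, paid once
    unsafe { *xs.get_unchecked(i) } // trusted: no second check
}
```

Whether the trusted island is a VM written in C++ or ten lines like this, you are trusting unchecked code either way; the difference is only how much of it there is to audit.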

*edited for some legibility.


The argument Rust devs make is that most of the time you would not need to use unsafe code and when you do, being explicit about it would make you more careful and think twice about it.

To me it makes sense. And the example you give here is very relevant. First you'd try to do it within the standard language bounds, and only when you realize you can't do it that way would you resort to unsafe code. But now you're very aware that this part of the code needs to be treated with extra care. So, to me, you're not completely back to square one.

Nicholas Matsakis makes this very point near the end of this talk: https://www.youtube.com/watch?v=9wOzjbgRoNU

I would even add: if care is taken to make that unsafe code really small, it can even be generated by Coq, for instance, as stated in some comments here.

That said, Rust might not be the best out there for the job, but IMHO it shouldn't be dismissed too fast either. It is similar enough to C++ to allow a less painful transition for devs with the domain knowledge.


Would you not assume that the entire OpenSSL library would count as being in need of extra scrutiny? The point is that any time you let people directly access memory, they can, and often will, screw it up.


Ok, maybe we should make a distinction between, let's say, the plumbing code and the algorithms. Rust could help with the former. According to some comments I've read here, it seems OpenSSL is using its own abstraction of malloc/free (not that I have actually read the code). I suppose that this part of the code would be a suitable candidate for unsafe code with special extra care taken; then the rest of the algorithm does not need to be unsafe code. If you watch the video you might understand better what I mean: the Arc is built on unsafe code but provides you with a safe abstraction to use in the checked part of the language.

Of course such a project must require extra scrutiny on all levels, and Rust does not resolve all the problems. I'd say pick your battles. Rust provides some interesting middle ground between C/C++ and a completely different language like Ada.


Oh, I'm not dismissing Rust. I think my overall point is that nothing is ever going to perfect. Plan for imperfection.


> It doesn't matter if that code is a VM or unsafe code.

What does matter is the amount of unsafe code to trust. It's much easier to check that a small area of clearly-marked unsafe blocks does "the right thing" than if your entire program is a gigantic unsafe block.
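For illustration, this is roughly how the standard library's `split_at_mut` is built (sketched from memory, details may differ): a few lines of unsafe code carry a disjointness argument the borrow checker can't express, and everything above them stays in the checked language.

```rust
use std::slice;

// Hand out two non-overlapping mutable views of one slice.
// The borrow checker cannot prove the halves are disjoint, so
// the unsafe block carries that proof; callers never see it.
fn split_mut<T>(xs: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = xs.len();
    assert!(mid <= len); // the invariant the raw-pointer code needs
    let ptr = xs.as_mut_ptr();
    unsafe {
        (
            slice::from_raw_parts_mut(ptr, mid),
            slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
```

Auditing those six unsafe lines once buys memory safety for every caller, which is exactly the "small clearly-marked area" argument.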


I haven't used it, but Rust provides a safe Arena abstraction for you: http://static.rust-lang.org/doc/master/arena/index.html
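For what it's worth, you can also get arena-style allocation with no unsafe code at all by handing out indices instead of references. This is my own toy sketch, not the linked library's API:

```rust
// A fully safe arena: objects live as long as the arena, and a
// "pointer" is just an index into its backing Vec. A stale index
// is a logic error (wrong value), never a memory-safety error.
struct Arena<T> {
    items: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> usize {
        self.items.push(value);
        self.items.len() - 1 // the new object's handle
    }

    fn get(&self, id: usize) -> &T {
        &self.items[id] // bounds-checked by the language
    }
}
```

The trade-off is an extra indirection and a bounds check per access, but nothing here can read outside the arena's memory the way OpenSSL's freelist could.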


Those aren't the only "dangerous" kinds of mistakes a programmer can make though


Just because you can't design the perfect car, it doesn't mean that you shouldn't improve the brakes.


No, but it eliminates a large chunk of the more common cases.


ATS doesn't have a VM. It compiles directly to C. The example given in the article doesn't use a runtime and the resulting C code has no more overhead than the original C.


The VM is a much smaller attack surface than all applications and all their libraries.


That is true, but I don't think this invalidates my central point that the rational response to Heartbleed is to update your contingency plans. The irrational response is to plan for the perfect world.

After 20 years of Java we still don't have our perfect VM. It still sees critical security vulnerabilities. I don't think I'm picking on Java unfairly here. Java is a well written code base, it has plenty of unit tests, a proper code review process, a sound architecture. Pretty much all of these have been put forward as ideas that would 'cure' the OpenSSL project. Yet it doesn't seem to be a perfect cure. At some point no matter what your language, VM, OS is you are going to experience something similar.


> After 20 years of Java we still don't have our perfect VM. It still sees critical security vulnerabilities.

Sure, but how many, and how often? The last advisory for Java's SSL I can find is from 2009, and that was quite a limited flaw (allowed an attacker to inject a prefix into SSL data). Indeed the kind of exposure we see with heartbleed - leaking all of the process's memory including the private key - is more or less impossible by design. At this point maybe using Java for your internet-facing service might do more to improve your security than shaving a day off your response time.


Sorry, I'm not limiting the discussion to SSL vulnerabilities.

A remote code exploit is at least as bad as a memory leak.

I posted these two: CVE-2013-1493 and CVE-2013-0809 in another reply. These 2 were memorable to me just because visiting a page (or a compromised page) would allow the exploit to proceed without any password/prompt/warning.


CVE-2013-1493 is more an argument for java - the vulnerability exists because that part of the standard library is implemented as native code rather than in java itself.


As others have pointed out above (https://news.ycombinator.com/item?id=7572092), the 'perfect JVM' is missing the point. The JVM aims to provide two things:

1: A high-level development environment which allows well-intentioned developers to avoid, say, buffer-overflow bugs

2: A sandbox, in which untrusted code can be safely run

Java has a truly awful track-record on point 2 (running untrusted applets by default? awful idea), but a much better one on point 1, which is what's actually relevant here.

> At some point no matter what your language, VM, OS is you are going to experience something similar.

No. If all/nearly all of your OS is written in a safe language, it's going to be much safer from, say, buffer-overflow attacks. Unfortunately there aren't any such languages in major production use, so it's hard to point to concrete numbers.


s/languages/OSs/


So we offload the safety to the people that write the runtimes.

I understand that that centralises the potential problem area and may make it easier to address... but it still means that someone has to do the 'hard' bits, and if they get it wrong then everyone using the runtime is screwed. Just like what happened here (too many people depending on a single implementation).

I don't know about you, but I'm not 100% comfortable with the idea that other, cleverer people will take care of all that for me, so I don't have to worry my pretty little head about the details of what's really going on with the machine.

And look at all the JVM vulnerabilities we've seen recently...


Speed or security?... Age old question.



Actually, even "modern" C++ is a step up from plain C. That being said, I am really looking forward for Rust.


LOL "* In theory. Rust is a work-in-progress and may do anything it likes up to and including eating your laundry."


It's a shame to see that comment has been downvoted. That's a quote directly from the bottom-right corner of the Rust website itself!

Rust is promising, without a doubt. But it's not yet truly usable in the same sense that C, C++, Java, Python, Haskell, Go and so many other languages are.

Maybe it'll start to get to that point once 1.0 is released, once we see at least some language and library stability, and then perhaps some adoption. But that just hasn't happened yet.


I downvoted it because "LOL" is not the kind of comment I'd like to see here. The point could have been made in a more substantial way. Like you just did.


Wow, sorry I can't laugh at something, jeez. So, you have never in your life just felt like re-posting a quote off something and adding a little something to it to show the spirit in which it was meant? Now you are just being nitpicky and, to be honest, rude in a sense. I have just joined this community, I am trying to fit in, and you just come along and see the comment and you "don't like it" because it's short, sweet and to the point. I am laughing at the comment of the programmer of Rust for the quote he put on his site, and now you have just totally bashed me because you felt it necessary to not like my simplistic comment. Wow.


> you have never in your life just felt like re-posting a quote off something ...

I do, but I do that on Twitter, because, as you've found out, HN will downvote you into oblivion.

> I have just joined this community I am trying to fit in

Ah ha! Sorry, I didn't see that: usually, new users are in green. (also, your account is 163 days old?) If you haven't checked it out, you should check out the community guidelines: http://ycombinator.com/newsguidelines.html

For what it's worth, I am not trying to 'bash [you]'... but I don't think this was a great comment. Try to keep them more substantial here. Different forums are appropriate for different kinds of discourse, and short little comments are generally not taken very well here.

The same happens to "+1", "thanks", and "interesting!" comments. If you can't write more than two sentences, you probably shouldn't post.


What someone - I think kibwen - has brought up is that early adopters can benefit from the fact that the language design is still in flux. These early adopters can uncover weaknesses in the design before it gets to the stage where backwards compatibility has to be considered.

So although early adopters might not get any useful software out of learning Rust at this stage, they might indirectly improve their future Rust code by having a small influence on the direction of the language.


If you are speaking about execution speed, you got the idea wrong. From a quote in the article:

> If you use the high level typing stuff coding is a lot more work and requires more thinking, [...] (but) you can even hope for better performance than C by elision of run time checks otherwise considered mandatory, due to proof of correctness from the type system. Expect over 50% of your code to be such proofs in critical software and probably 90% of your brain power to go into constructing them rather than just implementing the algorithm. It's a paradigm shift.

The idea is to formally prove that the code is not doing unexpected things. The process is relatively simple to understand:

First you define the assumptions you make about the program, its execution environment, and the acceptable/expected results of your program. This is known as "formal specification" of the program. It is a critical part. If your specification is wrong, then the whole approach breaks down. However, this part should be much smaller than your whole codebase, and hence you can be extra careful on it.

Next, using this specification, you write proofs showing that the code cannot do anything unintended (such as accessing a buffer outside its valid range). The compiler goes through these proofs and checks that everything is provably correct (according to the specification). Then it can generate code without the runtime checks that you would otherwise probably implement, because it is sure that certain things cannot happen. As a result, the code may end up being actually faster.
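A miniature version of the check-elision idea exists in ordinary languages too (an illustrative Rust sketch, not ATS): state the precondition once up front, and the per-element work needs no further checks. In ATS, that up-front check becomes a static proof, so even the single runtime cost disappears.

```rust
// The assert states the precondition once; the iterator-based
// loop then encodes "index in bounds" structurally, so no
// per-element bounds check is needed or emitted.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len()); // one runtime check, up front
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```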

Although a bit involved, the idea should be pretty intuitive. It is exactly what you are doing in your mind when programming. The main differences are:

1. We humans are pretty comfortable working with inexact and/or incomplete specifications. Then some undefined behavior happens, and our programs bug out. For instance, it is very easy for us to think about the division operator as something that always yields a value, ignoring the "division by zero" edge case. Computers are not, and force you to specify what exactly should happen when you encounter such edge cases.

2. We are also pretty bad at exhaustively checking every possibility, whereas computers excel at it. With the help of human-written proofs, obviously (otherwise verifying a program would involve checking every possible input for it, which is obviously intractable).
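The division example from point 1 is easy to make concrete even without dependent types (a Rust illustration, nothing to do with the article's ATS code): the edge case is pushed into the return type, and the compiler forces you to pick a policy for it.

```rust
// checked_div returns Option<i32>: None for division by zero
// (or overflow), so "what happens when b == 0" must be written
// down explicitly instead of being left undefined.
fn safe_ratio(a: i32, b: i32) -> i32 {
    a.checked_div(b).unwrap_or(0) // our explicit b == 0 policy
}
```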

TL;DR: The tradeoff here is between development and compilation speed versus correctness, which implies improved security and execution speed.


The type safety shown in the article doesn't come at a speed cost. The times are erased during code generation. The generated C code is much like hand crafted C but with the safety confirmed via the types.


no. read the article.


I read the article; it does not mention speed or performance once, so it had nothing to do with what I stated. I was simply stating that a higher-level language will make the library slower and also less easy to use from other high-level languages like Python.


The author has several articles about ATS (http://bluishcoder.co.nz/tags/ats/) and from what I've read, it outputs C code that is proven (for the parts that are in ATS) to be correct. There's a bit more detail in "Safer handling of C memory in ATS" (http://bluishcoder.co.nz/2012/08/30/safer-handling-of-c-memo...) and the end of the article contains some generated C code.


I do not understand why people keep pushing unsafe code when computers keep getting faster and we have more and more headroom (cpu, memory, bandwidth). There is no excuse to keep running unprovable crypto.


Maybe because most of the code currently out there being used by the biggest companies in the world is still written in these "unsafe" languages, and tons of the job market is still in these "unsafe" languages.


I think you should read the article again. The language in question isn't higher level really it just has compile time type checking, which has no overhead.


That's still not my point. At the time OpenSSL was started, I don't believe ATS was around. In any case, my point is that back then C was the best choice for performance, and it is still revered as the "fastest", since nearly all other languages are built on top of it either directly or indirectly. In any case, I would love to see someone tell the whole OpenSSL community to just drop C and switch to a different language.


C being loved in the UNIX community is one of the primary reasons that these libraries are in C.

Ada has been around since the beginning of the eighties, has and had performance that is near that of C, does not use a garbage collector, provides C linkage, and is far more safe than C.

If you do allow garbage collection, there were many performant and safe alternatives in the '90s, such as ML.

It's culture as much as performance.


Ada is a higher-level language whether or not it has linkage to C. The UNIX community cares about performance, performance, performance.


'Higher level' doesn't necessarily say anything. Rust and ATS are both higher level than C, but they can both do everything that C does.

Is Ada less performant than C? I know it has bounds checking, but that can be turned off for "shipped" software. Does it have some features that incur a runtime cost and that can't be disabled?


Ada with all the runtime features left on is slower than comparable-quality C. It's faster than most languages though.

With all the runtime checks turned off, GNAT can/should produce code within a percent or two of GCC's (they share the same backend).

And Ada has a thing called SPARK which is a set of compiler checks to formally verify your code so you can provably turn off those runtime features safely. https://en.wikipedia.org/wiki/RavenSPARK


So Rust & ATS & Ada & higher-level languages can modify memory space? As far as I am aware, most higher-level languages stray from being able to modify memory directly, on purpose, as it's dangerous; but someone has to do it for the operating system, is all I am saying about low level, now that we are completely off topic here.


Don't know about Ada, but Rust and ATS can. The Rust code would need to be written in an unsafe block in order to modify memory as freely as C, but regular Rust can still do a lot, safely, without requiring automatic memory management.

As you can see from the article (not that you seem to have read any of it), ATS can express C, and optionally prove low-level stuff about it.


No. I think that the article clearly shows:

execution speed, safety, programmer productivity ← pick any two


This isn't a fundamental dilemma like the consistency/scalability dilemma of databases. This is (or was) just a limitation of languages and compilers. The arguments for using C are many, but in this case the most common involve the need for low-level access (for performance and timing).

C is certainly very much suited for some parts of an SSL implementation e.g. when you need absolute deterministic performance to avoid timing attacks etc. (Although performance should certainly be good enough with modern compilers for most languages, and avoiding side-channel attacks by having deterministic execution time is also possible without resorting to C).

Using the execution speed as an argument for writing the whole thing in C is just wrong. I haven't heard any good arguments as to why a library such as OpenSSL shouldn't be written in Haskell (or say 98% Haskell and 2% C).

Did someone at some point say

"There is 2% of the code that is performance critical and/or needs low-level code for cryptographic reasons so I'll write everything including the network code, command line argument parser, world, dog and kitchen sink in C" ?


> This isn't a fundamental dilemma like the consistency/scalability dilemma of databases. This is (or was) just a limitation of languages and compilers.

You're right. Null pointers are a nuisance in some languages, but other languages have shown that you can remove them and still have just as much of an expressive language (and the compiler can still translate pointers that might be "null" to actual null enabled pointers, so no performance cost). Rust might show that a stronger type system can remove certain raw pointer flaws from the language while still retaining both execution speed and programmer productivity.
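The "no performance cost" part is directly checkable in Rust today (a one-liner, assuming nothing beyond the standard library): `Option<&T>` occupies exactly one machine word, because the compiler uses the impossible null bit-pattern to represent `None`.

```rust
use std::mem::size_of;

// A reference can never be null, so the compiler reuses the null
// bit-pattern for None: the "maybe absent" case costs no space.
fn same_size() -> bool {
    size_of::<Option<&u64>>() == size_of::<&u64>()
}
```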

Dependent types might mature to the point that you can use them and gain both execution speed, productivity and safety - time will tell.

> Using the execution speed as an argument for writing the whole thing in C is just wrong. I haven't heard any good arguments as to why a library such as OpenSSL shouldn't be written in Haskell (or say 98% Haskell and 2% C).

> Did someone at some point say

> "There is 2% of the code that is performance critical and/or needs low-level code for cryptographic reasons so I'll write everything including the network code, command line argument parser, world, dog and kitchen sink in C" ?

...

The article makes the argument that, assuming that the whole program needs to be incredibly performant, you can write say 2% of it in verified ATS code, the rest in C-ish ATS code (ie. without proofs).

I guess you can also choose to write 2% verified low-level code, and the rest in a more high level ATS - ATS is a functional language with garbage collection and I presume other high level goodies that functional programmers are used to.


I think that C++ is so useful in practice because it gives you most of each of those, rather than just some of two (or even just one) of them.

Programs written in C++ aren't necessarily the fastest out there, but they're usually pretty close, and almost always better than what you'd get when using most other languages.

And the same goes for safety. It may not allow you to write bulletproof code, but using modern C++ techniques can go a very long way toward avoiding many common problems with relative ease.

C++ may not be the most productive language for some developers, but it still does quite a good job of offering a wide variety of functionality, reasonably high-level constructs, good library support, and decent tooling.


C++ has just as much capacity to be unsafe as C, precisely because it accepts nearly all C code, not to mention that it has bare pointers, null references, and any number of other unsafe objects as first-class citizens. Of course, you're unlikely to use most of those if you're following best practices, but the same can be said of C.

You might argue that C++ makes it easier to be safe because of its plethora of features and classes, but the massive size of C++ makes it a very hard language to master, and it is nearly impossible to guarantee that everyone will conform to best practices. C++ is quite possibly the largest language out there in terms of features, and has about the most gotchas (things that don't work the way you'd intuitively expect them to) that it's possible to put in a language. It's a step above C in terms of safety, but it is hardly a safe language, and it's only slightly more productive, because the benefits of its class system and standard library are so much at odds with its huge mental overhead.


Maybe I should add another attribute to my original three - language complexity. :)



