Hacker News

> I offer that perhaps the issue here is an ignorance of the history of humanity, at least with respect to computing?

Perhaps, but as no one has offered a counterexample, I have no way of knowing.

> Dijkstra did do what you claim is impossible several times in his career

No, he did not. The meaning of a technique not being scalable is that while it does work for small instances it does not for large ones.

We have learned a great deal since Dijkstra, and can apply even more powerful end-to-end verification/synthesis techniques than he had at his disposal, and to larger programs, but we still can't scale them, just as he couldn't, and, indeed, didn't.

> You have heard of decomposing a large program into smaller programs, right? That's how it is done and has been done.

No, it has not been done, but unlike 50 years ago, after decades of studying the "hardness" of program analysis/synthesis we now have an idea as to why. There are now complexity-theory results showing that decomposition cannot always work. In addition, we now know, as we didn't 50 years ago, that there are essential barriers to scaling: things that are easy when small can be absolutely impossible when large.
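To make the "easy when small, impossible when large" point concrete, here is a minimal sketch (the function name and toy setup are mine, not from the thread): a naive checker that exhaustively enumerates the states of a program with n independent boolean flags must visit 2**n states, so every added flag doubles the work.

```python
from itertools import product

def count_states(n_flags):
    """Count the states a naive exhaustive checker must visit for a
    program whose state is n_flags independent boolean flags."""
    visited = 0
    for _state in product([False, True], repeat=n_flags):
        visited += 1  # a real checker would test a property of _state here
    return visited

print(count_states(10))  # 1024
```

At 10 flags that is about a thousand states; at 60 it is more states than a machine can touch in a lifetime, which is the scaling wall in miniature.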

> This is a foolish self-imposed constraint.

It's not a constraint so much as the observation that the claim that you have a technique that can more effectively yield programs we already know how to get more-or-less right, and the claim that you have a technique that can effectively, feasibly and scalably produce programs that are otherwise very hard to get right, are two very different claims. It's like the difference between improving a polynomial algorithm by a constant and finding a polynomial algorithm for NP-complete problems. The first might get you published; the second will make you world-famous.
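The NP-completeness analogy can be made concrete with a toy brute-force SAT solver (a standard textbook construction, not anything claimed by the thread): it is perfectly correct, and a constant-factor speedup does not change which instances are feasible, because the search space is 2**n.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """clauses: list of clauses, each a list of ints; literal +i means
    variable i is true, -i means variable i is false (1-indexed).
    Returns a satisfying assignment, or None if unsatisfiable."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) is satisfied by x2 = True
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # → (False, True)
```

Halving the constant per assignment lets you handle roughly one more variable; a polynomial algorithm would change the game entirely, which is the gap the analogy is pointing at.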

> It's still irrelevant to my claim about it being easier for a programmer to derive programs that meet an arbitrary specification than to test that an arbitrary program text satisfies it.

The proof is relevant because it precisely proves that your claim, as stated, is, in fact, false. Your claim might be made about "common cases" etc., but that would require a much more careful wording, as well as a careful empirical study rather than insistence.



> No, he did not. The meaning of a technique not being scalable is that while it does work for small instances it does not for large ones.

I see, so a complete and correct multiprocessing operating system running on unreliable hardware is a "small instance" according to you. This is the sophistry I promised to dismiss. Consider it done.

> There are now complexity theory results that show that decomposition cannot always work.

Obviously decomposition can't always work. Primitives can't be decomposed by definition. Yet another trivial truth.

> It's not a constraint so much as the observation that the claim that you have a technique that can more effectively yield programs we already know how to get more-or-less right, and the claim that you have a technique that can effectively, feasibly and scalably produce programs that are otherwise very hard to get right, are two very different claims. It's like the difference between improving a polynomial algorithm by a constant and finding a polynomial algorithm for NP-complete problems. The first might get you published; the second will make you world-famous.

This word-salad has nothing to do with anything I've written.

> The proof is relevant because it precisely proves that your claim, as stated, is, in fact, false. Your claim might be made about "common cases" etc., but that would require a much more careful wording, as well as a careful empirical study rather than insistence.

This is more interesting, but also misleading. Obviously every single program ever written falls within the class of "common cases," since, asymptotically speaking, very nearly all program texts have never been written, let alone studied. Look ma, I can do sophistry too!


You are talking about high-assurance code, which is roughly a statistical classification regarding failure rate. Pron is talking about formal verification, which proves something to be correct according to a mathematical model.

Formal verification is not required to produce code with a very low failure rate. I have personally known of unpatched vulnerabilities worth a small fortune that were never exploited, because most people who choose to become programmers aren't assholes, or at least not criminally so.

Formal verification is not a magical certification that software is bug-free: there can still be an error in the specification itself, or in how the specification relates to the hardware. Such errors are astonishingly rare, however; I think Coq has had a single major error in its TCB.

The space shuttle, various nuclear accidents, and plain old manufacturing are examples of the limits of engineering's ability to prevent failure: we just can't shove the real world into statistical models.

You two are in agreement, but seem to be talking past each other :(



