
This is an absurd statement to make about a DoD program. Secure communication is communication in the most adversarial environment imaginable. There are thousands of problems that only emerge once cryptographic authentication and authorization are enabled. This isn't a B2B SaaS app where you can just add an "auth layer" once the API is built out. It's passing mission-critical messages through signal jamming and unimaginably hostile imitation scenarios. Many DoD programs are built as prototypes that never factor security into the architecture, and then face massive problems and delays trying to implement it later. DoD cyber acquisition is run by the most incompetent clowns on earth. The only reason they don't know how terrible their software is, is that they're not capable of detecting how fundamentally compromised their systems are, and China and Russia are not exactly white hats looking for a bounty.


I have first-hand experience with the problems of designing systems for adversarial denied environments. These are largely orthogonal to the problem of access controls. Low-level communication security and channel capacity are handled almost exclusively by external trusted modules; systems built on top of them only have to concern themselves with the behavior of those modules.

There is a separate concern around denied data environments in the software realm but that is not on many people's radar. Most software devs would not know where to even start to protect systems from this.

A tension with access controls is that if you implement them at the level of granularity the most demanding parts of DoD say they want, they never actually get used, because they are too complicated for users to reason about and manage. Or worse, users make mistakes and leak data because it is complicated. A simpler model is always implemented on top. At the same time, fine-grained and scalable access controls impose a high operational cost in the software even when they are not being used, and some parts of DoD care a lot about software performance. Many parts of DoD are realistic enough to not want to pay for access controls they'll never actually use.
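To make the "you pay even when you don't use it" point concrete, here is a toy sketch (names and model entirely hypothetical, not any real DoD system). With per-record labels, the access check runs on every row of every read, even when the caller's clearances happen to cover the whole store; a coarse role check makes one decision per query and then does a straight scan.

```python
# Hypothetical sketch: per-record label checks vs. a coarse role check.
# Illustrative only; not a description of any fielded access control system.

from dataclasses import dataclass

@dataclass
class Record:
    payload: str
    labels: frozenset  # compartments required to read this record

@dataclass
class Caller:
    clearances: frozenset
    role: str

def read_fine_grained(records, caller):
    # Per-record check: a set comparison on EVERY row, even if the
    # caller's clearances cover everything in the store.
    return [r.payload for r in records if r.labels <= caller.clearances]

def read_role_based(records, caller):
    # Coarse check: one decision per query, then a plain scan.
    if caller.role != "analyst":
        return []
    return [r.payload for r in records]

records = [Record(f"msg-{i}", frozenset({"SECRET"})) for i in range(5)]
alice = Caller(clearances=frozenset({"SECRET"}), role="analyst")

print(read_fine_grained(records, alice))
print(read_role_based(records, alice))
```

Both calls return the same rows here; the difference is that the fine-grained path does per-row work that scales with the record count whether or not anyone is exercising the granularity.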

On top of this, the security architecture is designed to be somewhat pluggable, because different users will have mutually exclusive design requirements for it. It would be nice if this weren't the case, but it is what it is.


> There is a separate concern around denied data environments in the software realm but that is not on many people's radar. Most software devs would not know where to even start to protect systems from this.

The concept of a denied environment is pretty clear to me when it comes to physical space, or radio communications - but could you clarify what you mean by a "denied data environment"? I have some notion of what you _might_ mean, but I can't find a clear definition of the idea anywhere.


It is a specific type of sophisticated denial-of-service attack on data infrastructure. Theoretically it is a really interesting attack and difficult to defend against. Essentially all open source and commercial systems are vulnerable to it.

Most systems, including military systems, use data from many exogenous sources. Critical systems may use data diodes and formally verified software interfaces to consume this data, which makes them extremely hardened against outright exploitation. However, these systems are vulnerable in another way: they use data structures and algorithms to serve their purpose, often with a lot of architecture for scalability, like multithreading.

You can subtly generate, or dynamically edit, data in the exogenous data streams of target systems to produce pathological computer-science outcomes, while the data itself passes all human inspection and formal verification as legitimate. Nonetheless, it is engineered to trigger cascading pathological scenarios in common implementations of data structures and algorithms in popular systems. The attacks usually target lock graphs and subtle quadratic corner cases in algorithm implementations. Many years ago I engineered a prototype of this for fun, targeting a well-known commercial database, and it completely locked up the system for more than 10 minutes. For many purposes, that is almost as good as a system kill. The obvious way to recover your system is to disconnect it from data sources and users.
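A toy illustration of the class of attack (my own sketch, not the prototype described above): a hash table with a weak, non-keyed hash function. Every record is individually legitimate, but keys crafted to collide pile into one bucket, turning near-constant-time inserts into an O(n^2) aggregate workload.

```python
# Toy algorithmic-complexity attack: crafted keys collide in a weak hash,
# so insert cost degrades from ~O(1) to O(n) each, O(n^2) in aggregate.
# Illustrative only; real attacks target lock graphs and subtler corner cases.

class ToyHashTable:
    """Separate-chaining hash table with a deliberately weak hash."""
    def __init__(self, nbuckets=1024):
        self.buckets = [[] for _ in range(nbuckets)]
        self.probes = 0  # key comparisons performed; exposes the blow-up

    def _hash(self, key: str) -> int:
        # Weak and predictable: only the first character matters.
        return ord(key[0]) % len(self.buckets)

    def insert(self, key, value):
        chain = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(chain):
            self.probes += 1
            if k == key:
                chain[i] = (key, value)
                return
        chain.append((key, value))

def benign_keys(n):
    # First characters spread over 90 printable values -> short chains.
    return [f"{chr(33 + i % 90)}{i}" for i in range(n)]

def colliding_keys(n):
    # Every key shares a first character -> one long chain.
    return [f"a{i}" for i in range(n)]

n = 2000
benign, hostile = ToyHashTable(), ToyHashTable()
for k in benign_keys(n):
    benign.insert(k, None)
for k in colliding_keys(n):
    hostile.insert(k, None)

print(benign.probes, hostile.probes)
```

With 2000 keys the hostile stream forces roughly two million comparisons against about twenty thousand for the benign one, and the gap grows quadratically with the stream length. Keyed or randomized hashing closes this particular hole, which is why a realistic version of the attack goes after harder-to-randomize targets like lock ordering.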

There is a requirement in DoD for systems that are designed to automatically detect these types of attacks and to preserve operational performance in these types of adversarial data environments. I think it isn't on anyone's radar because it hasn't been used in any real-world attacks against commercial systems. It definitely requires a sophisticated adversary; random hackers aren't going to pull it off.

The "data denied environments" are like the above, where your adversary is injecting these kinds of system attacks in all of your exogenous data feeds. If you have to shut those sources off to keep your systems up, you are running blind.


It is not absurd; this is the standard for rapid prototyping in the DoD. Palantir already has a strong track record for authn/z inside its systems used across the DoD, LEO, and intelligence agencies, so it's not an untrodden, uncharted path for these organizations.



