
As someone with a libertarian bent, meaningful labels appeal to me as a decent way to address problems without overriding the judgement of the market. An informed market avoids lemons. So this proposal sounds OK in principle, but here are some questions. Please be aware that I'm not a US citizen, so my views don't really matter here; I'm just looking over the garden fence and asking questions.

1. Your argument for why it's under the FCC's jurisdiction doesn't seem all that strong. In your linked speech, you argue it's important because the FCC has the ability to regulate signals interference, and insecure devices could be turned into jammers. Has this ever actually happened? If not, is this not rather a large stretch of the FCC's mandate? Perhaps this sort of effort belongs in a different part of the government, or in an international standards agreement (possibly non-governmental).

2. What's the definition of security you're using? Security problems always exist in the context of a threat model, so having a label would imply standardizing a threat model. For example, smartphone security systems were originally designed to block malware, but over time they have been stretched to try to meet often vaguely specified privacy goals aimed at non-malicious software too. If someone commits to supporting security updates for five or ten years at risk of government censure, then the definition of security is going to become a battlefield, because whoever wins gets to control all the software that carries this label.

3. Modern security is layered via defense-in-depth strategies. If there's a bug in an inner layer but it's not exploitable because of mitigations or sandboxes (software firewalls) in outer layers, is patching it a mandatory security update or not? It could be argued either way: the device still isn't actually exploitable, its armor has simply become thinner (see the sketch at the end of this comment). Today this is left to the best judgement of engineers, who must balance patching theoretical vulns in old devices against work to e.g. build new defenses for newer devices. If it becomes mandatory then, paradoxically, new devices may end up less secure than they otherwise could have been, because all the effort goes into patching old devices.

4. Imagine a company commits to security updates for all devices for 10 years, but after 5 gets into financial difficulties, perhaps because of competitors who didn't make that expensive commitment. One quick way to dig itself out of this hole is to push a 'security update' that drastically restricts the device's functionality, e.g. prevents it from installing new apps released after a certain date. This can indeed be argued to make the device more secure, and you can argue that there was never an expectation that the device would always be able to install new apps, so no end-user expectations or promises have been violated. How would you stop this kind of perverse incentive?
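
To make question 3 concrete, here is a minimal, purely illustrative Python sketch (all names and numbers are hypothetical, not taken from any real device): the inner parse_packet has a latent flaw, but an outer validation layer keeps attacker input from ever reaching it, so whether fixing the inner layer counts as a mandatory security update is exactly the judgement call I mean.

    MAX_PACKET = 512  # hypothetical size limit enforced by the outer layer

    def parse_packet(data: bytes) -> str:
        """Inner layer: has a latent flaw (it does no length check of its own)."""
        # Imagine this were C code writing into a fixed 1024-byte buffer:
        # oversized input would overflow it, but only if it ever gets here.
        return data.decode("ascii", errors="replace")

    def handle_request(data: bytes) -> str:
        """Outer layer: the mitigation that keeps the inner flaw unreachable."""
        if len(data) > MAX_PACKET:
            raise ValueError("packet too large")  # outer defence blocks the path
        return parse_packet(data)

    # The inner bug is real, yet the device is not exploitable while the outer
    # check holds; only the armor is thinner. Is fixing parse_packet mandatory?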



> An informed market avoids lemons.

Has that been true in practice? I can think of plenty of horrible IT products on the market. In terms of security (including privacy), the market has done nothing for IT consumers. And what about the people who already bought the lemons before the market learned of them? Also, what if the lemon doesn't affect me but affects others (such as through DDoS)?

I prefer to keep it as simple as possible, but no simpler.


Nothing? Hasn't Apple built a part of their brand upon security and privacy?


You're right, that was hyperbole and I make a point of not using it. It's BS.

Security is awful, however, and I don't think that's hyperbole. Almost every consumer I know, including people in IT, has given up on privacy and security (i.e., confidentiality and integrity).



