
> The fact is that some security people are stupid

Bold of you to assume that these companies have any security staff at all.

> Taken to the limit, let's call it "User Side Security" (USS), we build interfaces so that the user gets to decide their chosen security solutions (obviously compartmentalised so as not to affect any other users' assets or choices).

> (I feel a tremor in the Force, as if a million security people suddenly cried out in horror then suddenly fell silent.)

And rightly so. There are a lot of things broken in the security industry, but letting users pick: "Hmm, I want AES 256 ECB instead of AES 128 GCM, because 256 > 128" is not the answer.
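To make the "256 > 128" trap concrete: a minimal sketch of why the mode matters more than the key size. This uses a toy keyed function from the standard library standing in for AES (it is not real crypto); the point is that ECB encrypts each block independently, so identical plaintext blocks leak as identical ciphertext blocks no matter how big the key is, while a CTR-style keystream (what GCM uses internally) hides the repetition:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 16-byte "block cipher": NOT real AES, just a keyed
    # pseudorandom function for illustration.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block is encrypted independently, so identical
    # plaintext blocks produce identical ciphertext blocks.
    return b"".join(toy_block_encrypt(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # CTR-style keystream (the core of GCM): each position gets a
    # different keystream block, so repeats in the plaintext are hidden.
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        keystream = toy_block_encrypt(key, nonce + i.to_bytes(8, "big"))
        out += bytes(a ^ b for a, b in zip(plaintext[i:i + 16], keystream))
    return bytes(out)

key = b"0" * 32                   # a "bigger key" does nothing for ECB
msg = b"ATTACK AT DAWN!!" * 2     # two identical 16-byte blocks

ecb = ecb_encrypt(key, msg)
ctr = ctr_encrypt(key, b"nonce123", msg)

print(ecb[:16] == ecb[16:32])     # True: ECB leaks the repetition
print(ctr[:16] == ctr[16:32])     # False: the repetition is hidden
```

This is the structure behind the famous "ECB penguin" image: the pattern of the plaintext survives encryption under ECB, which is exactly the kind of failure a key-size comparison never surfaces.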



> And rightly so. There are a lot of things broken in the security industry, but letting users pick: "Hmm, I want AES 256 ECB instead of AES 128 GCM, because 256 > 128" is not the answer.

Not quite the granularity I had in mind. But please say more. What I'm interested in specifically is whether or not you believe the owners of data have no stake in its protection and no say in how that's done?


I wouldn't say they should have no stake, just that it's impractical to ask them.

First of all, security is context dependent. Even a security expert will have trouble making good choices if they don't have the full picture of how the business operates. A non-expert has basically no chance. Just look at how many B2B security companies are basically preying on ignorance to sell useless security solutions. They sell them to businesses which should in principle be able to rationally evaluate the offering, and yet still manage to swindle them. What hope does the average person have?

Second, if you give users real choice, that means you have to implement all the choices, which means you have to spread your focus. Complexity is the enemy of security. The more complexity the more likely you will miss some unintended interaction.

Then there are the other trade-offs. Some security controls can have very real productivity and business trade-offs. For example, if one of your controls is that all staff have to get manager sign-off before accessing any machine with user data on it, that is going to slow down work. Often that is worth it, but the productivity loss can be significant depending on how the business is set up. I'm not sure it makes sense for users to control something like that, except in the sense that they should be informed of the protections in place and can freely decide if they want to continue doing business. Not to mention, how could you apply something like that for only half your users?

My general view is that companies should be more transparent about what they do so that people can vote with their feet. Companies should also be liable for breaches, especially ones that would have been prevented by best practices. This punishes companies who play fast and loose, and might in theory also apply pressure via insurance requirements. A big part of the problem right now is that it is generally more profitable not to invest in security. Breaches have very minor impacts; even major ones usually just mean a small temporary dip in the stock price. Companies aren't going to care about security unless it affects the bottom line.


Thanks for this thoughtful response bawolff. I need to digest it, but you make good points that all tally with my experience. Yet I remain convinced that a regulatory approach needs to include the end user as a first-class stakeholder. How to do this without making the life of security professionals an untenable misery is where I want to focus. After all, people look after their own money, their own homes and their own health. Why do we carve out an exception for their data?

Respects.



