> A legitimate researcher should be notifying the company that they are going to be looking for vulnerabilities in the first place. That is part of the distinction in behavior that I am encouraging. This way if someone is caught poking around for things to abuse unsolicited, at least there's a little more merit to holding them accountable. We are able to treat it more like the threat it is.
The issue is this. You have some amateur, some hobbyist, who knows enough to spot a vulnerability, but isn't a professional security researcher and isn't a lawyer. They say "that's weird, there's no way...," so they attempt the exploit on a lark, and it works.
This person is not a dangerous felon and should not be facing felony charges. They deserve a slap on the wrist. More importantly, they shouldn't look up the penalty for what they've already done after the fact, find that their best course of action is to shut up and hope nobody noticed, and then not report the vulnerability.
The concern that we will have trouble distinguishing this person from a nefarious evildoer is kind of quaint. First, because this kind of poking around is not rare. As soon as you connect a server to the internet, there are immediately attempts to exploit it, continuously, forever.
But the malicious attacks are predominantly from outside of the United States. This is not a field where deterring the offenders through criminal penalties is an effective strategy. They're not in your jurisdiction. So we can safely err on the side of not punishing people who aren't committing some kind of overt independent crime, because we can't be relying on the penalty's deterrent regardless. We need the systems to be secure.
Conversely, if one of the baddies gets in and they are in your jurisdiction, you're not going to have trouble finding some other law to charge them with. Your server will be hosting somebody's dark web casino, or fraudulent charges will show up on your customers' credit cards, and the perpetrators can be charged with that even if "unauthorized computer trespass" were a minor misdemeanor.
You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.
I think the subject has enough depth and complexity to it that we need to promote cooperation with companies. We can build protections against companies being dicks much more easily than we can codify the difference between malicious and innocent intent behind actions that are more or less identical up until damages happen.
I don't think I'm proposing anything that assertive. I'm suggesting we just put it all in the open and down on paper in a way that addresses most of the concerns and involves the company.
Documented evidence that a company was notified of security issues by people who declared themselves researchers, and whom the company approved to do that research, is a great thing to have in the fight against ignorant companies.
I completely agree that a degree of this is quaint with respect to a lot of the trouble coming from outside your jurisdiction. I just really don't see an issue with creating protected avenues for people to do research.
Opening someone's front door "on a lark" can get you shot in some states. I get that innocent people sometimes take technically illegal actions, but that doesn't change whether an action is perceived as threatening.
So I recommend we start writing down the actions that need to be protected and at the very least give someone acting in good faith a bulletproof way to both conduct research and preserve innocence.
If you happen to uncover something accidentally and are concerned, you can make the request afterwards, reproduce your finding, and report it. So there's no need to stay silent.
> You can't give them a slap on the wrist if you assert what they are doing isn't criminal. Having an issue with the punishment model is no reason to throw out the law.
The law is too broad in addition to being too punitive.
But here's an argument for throwing it out entirely.
There are two kinds of people who are going to spot a vulnerability in someone else's service: Amateurs and professionals.
Professionals expect to be paid. But if you go up to a company and tell them their website might be vulnerable (you don't know, because you're not going any further without their permission), and you send them a fee schedule, they're going to take it as a sales pitch and blow you off most of the time. Even if there's something there. To get them to take it seriously you would need to be able to prove it, which you're not allowed to do without entering into time-consuming negotiations with a bureaucracy, which you're not willing to do without getting paid, which they're not willing to do before you can prove it. So if you impose any penalty on what you have to do to prove it, professionals are just going to send them a generic sales pitch which most companies will ignore, and then they stay vulnerable.
Which leaves the amateurs. But amateurs don't even know what the rules are. If they find something, anybody's first instinct is "this is probably nothing, let me just make sure before I bother them." Which they're not really supposed to do, but in real life that's going to happen, and so what do you want to do after it has? Anything that discourages them from coming forth and reporting what they found is worse than having less of a deterrent to that sort of thing.
But subjecting them to anything more than a small fine is clearly inappropriate.
> We can build protections against companies being dicks much more easily than we can codify the difference between malicious and innocent intent behind actions that are more or less identical up until damages happen.
The point is that we don't need to distinguish them. We can safely ignore anyone whose malicious intent is not unambiguous, because we're already ignoring the majority of them regardless -- even the ones who are clearly malicious -- when they're outside of the jurisdiction.
> Opening someone's front door "on a lark" can get you shot in some states.
The equivalent action for an internet service is to ban them from the service. Which is quite possibly the most appropriate penalty for that sort of thing.
I think you're getting way ahead of the conversation. There's no way to know now what the implementation would look like or how communication between researchers and companies would go; if we can think of a communication problem today, we can build a solution for it into the implementation tomorrow.
At the end of the day, I am arguing for encouraging people to work with companies, and for publishing a process that makes that effort effective.
I feel like we agree but our solutions are opposite. The current laws are insufficient, so we need adjustments to the laws.
You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?
I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor; the person using it is. If a person chooses to enter a protected space they do not have permission to be in, then they are subject to the consequences of that. The fact that it's easy to do from your bedroom doesn't change it. Much like how virtual bullying is still bullying, virtual breaking and entering is still breaking and entering.
If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.
An uncontrolled internet apparently has one outcome: malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it."
I think we can actually do something about it, and I think we ought to. But before all of that, the first place to start is to establish a clear legal relationship between security researchers and the private sector, and to debate the laws that should be in place to facilitate that in a fair way.
> I think you're getting way ahead of the conversation. There's no way to know now what the implementation would look like or how communication between researchers and companies would go; if we can think of a communication problem today, we can build a solution for it into the implementation tomorrow.
A major problem is that communicating with a large bureaucracy, even to just find a way to contact someone inside of it who will know what you're talking about, is a significant time commitment. So you're not going to do it just because you think you might see something, and as soon as you add that requirement it's already over.
You might try to require corporations to have a published security contact, but large conglomerates, especially the incompetent ones, are going to implement this badly. In many cases the only effective way to get their attention is to embarrass them in public by publishing the vulnerability.
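For what it's worth, a standard for that published contact already exists: RFC 9116 defines a machine-readable security.txt file served from the /.well-known/ path. A minimal example (the addresses here are invented):

```text
# https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
```

Of course, the file only helps if someone competent actually reads the mailbox, which is exactly the failure mode above.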
> You (and others) propose we make hacking into systems fully legal, presumably because we can target malicious activity based on what they do with that access instead of the access itself. Is that correct?
So one of the existing problems is that it's not always even obvious what is and isn't authorized. Clearly if you forget the password to your own PC but you can find a way to hack into it, it should be legal for you to do this and recover your data. What if the same thing happens, but it's your own VM on AWS? What if it's your webmail account, and all you use it for is to recover your own account? Say you make an API call that exploits a vulnerability allowing you to change your password without providing the old one; you are still only changing a password you are authorized to change.
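As a concrete illustration of that last case, here is a minimal sketch of a password-change handler with exactly this flaw. The framework choice, route, and names are all hypothetical, purely for illustration:

```python
# Hypothetical sketch: a password-change endpoint that never verifies
# the old password. Everything here (route, data store) is invented.
from flask import Flask, request, jsonify

app = Flask(__name__)
users = {"alice": {"password": "hunter2"}}  # toy in-memory store

@app.route("/api/change_password", methods=["POST"])
def change_password():
    body = request.get_json()
    user = users.get(body["username"])
    if user is None:
        return jsonify(error="no such user"), 404
    # BUG: the old password is never checked, so anyone who can reach
    # this endpoint can reset any account's password. But a caller who
    # resets their *own* password is doing something they're authorized
    # to do -- which is exactly the ambiguity at issue.
    user["password"] = body["new_password"]
    return jsonify(status="password updated"), 200

if __name__ == "__main__":
    app.run()
```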
There are many vulnerabilities that result from wrong permissions. You go to the service and ask for some other customer's account page, and instead of prompting for a login or coming back with "401 UNAUTHORIZED", their server says "200 OK" and gives you the data. Is that "unauthorized access"? What do you even use to determine whether you're supposed to have access, if their server says that you do?
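To make that concrete, here's a hedged client-side sketch (the URL and account ids are invented): the HTTP status code is the only authorization signal an outside visitor ever gets.

```python
# Hypothetical probe: request our own account page, then a neighboring id.
# On a correct server the second request gets 401/403; on the vulnerable
# server described above, both come back 200 OK, and nothing in the
# exchange tells the visitor which answer the operator intended.
import requests

MY_ACCOUNT = 1
for account_id in (MY_ACCOUNT, MY_ACCOUNT + 1):  # the second id isn't ours
    r = requests.get(f"https://example.com/accounts/{account_id}")
    print(account_id, r.status_code)
```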
This kind of ambiguity is poisonous in a law, so the best way to resolve it is to remove it. Punish malicious activity rather than trying to subjectively evaluate ambiguous authorization. It doesn't matter whether their server said "200 OK" if you're using the data to commit identity theft, because identity theft is independently illegal. Whereas if you don't actually do anything bad (i.e. violation of some other law), what need is there to punish it?
> I also disagree that a ban is equivalent to shooting an intruder. The connection is not the actor; the person using it is.
The justification for being able to shoot an intruder is not to punish them, it's self-defense. Guess what happens if you tie them up first and then shoot them.
You don't need to physically destroy someone to defend yourself when all they're doing is transferring data. All you have to do is block their connections.
> If we formally adopt this attitude then we also enable ourselves to pressure other jurisdictions to raise their standards to match.
The reason other jurisdictions don't punish this isn't that no one is setting a positive example. It's that their governments have no resources for enforcement or are corrupt and themselves profiting from the criminal activity whose victims are outside of their constituency.
Or if you're talking about the jurisdictions who do the same thing as the US does now, it's because their corporations don't like to be embarrassed either, and we could just as well set the example that the best way to avoid being humbled is to improve your security practices.
> The first place to start is to establish a clear legal relationship between security researchers and the private sector, and to debate the laws that should be in place to facilitate that in a fair way.
Companies will want to try to retain the ability to threaten researchers who embarrass them so they can maintain control over the narrative. But that isn't a legitimate interest and impairs their own security in order to save face. So they should lose.
The embarrassment itself is a valuable incentive for companies to get it right from the start and avoid the PR hit. Nothing should allow them to be less embarrassed by poor security practices and if anything cocksure nerds attempting to break into public systems for the sole purpose of humiliating major organizations should be promoted and subsidized in the interest of national security. (It's funny because it's true.)
> An uncontrolled internet apparently has one outcome: malicious spam. That is what everyone in this thread seems to agree on, and the arguments against what I suggest all seem to start with the assumption "there is nothing we can do about it" and the corollary "there is nothing we need to do about it."
It's not that there is nothing we can do about it. It's that imposing criminal penalties on the spammers isn't going to work if they're on another continent, and correspondingly isn't a productive thing to do whenever it has countervailing costs of any significance at all.
You can still use technical measures. Email from an old domain with a long history of not sending spam and all the right DNS records, probably isn't spam. Copies of near-identical but never before seen messages to a thousand email addresses from a new domain, probably spam.
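As a toy sketch of how those heuristics might combine (the weights and thresholds are invented for illustration, not values from any real filter):

```python
# Invented scoring sketch of the heuristics above: a new domain, missing
# DNS authentication records, and mass near-duplicate sends all add risk.
def spam_score(domain_age_days: int, has_spf_dkim_dmarc: bool,
               near_duplicate_recipients: int) -> float:
    score = 0.0
    if domain_age_days < 30:              # brand-new domain: suspicious
        score += 2.0
    if not has_spf_dkim_dmarc:            # missing the "right DNS records"
        score += 1.5
    if near_duplicate_recipients > 1000:  # same message blasted widely
        score += 3.0
    return score

print(spam_score(3650, True, 1))   # old, well-configured, one copy -> 0.0
print(spam_score(7, False, 5000))  # week-old domain, mass blast -> 6.5
```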
You can also retaliate in various ways, like stealing back the cryptocurrency they scammed out of people by using your own exploits.
What you can't do is prevent Nigerians from running scams from Nigeria by punishing innocuous impudence in the United States.
And one of the best things we can do is improve the security of our own systems, so they can't be exploited by malicious actors we have no effective means to punish. Which the existing laws are misaligned with, because improving security is more important than imposing penalties.
I'm much reminded of the NTSB approach to plane crashes: It's more important to have the full cooperation of everyone involved so you can identify the cause and prevent it from happening again, than to cause everyone to shut up and lawyer up so they can avoid potential liability.