
Note that this concerns the subset of security research that involves actively talking to computer systems owned by other people, presumably in production, on the public Internet.

Most security research does not in fact work this way. Consider, for instance, virtually any memory corruption vulnerability; while it was once straightforward (in the 90s) to work out an exploit "blind", today, researchers virtually always have their targets "up on blocks", connected to specialized debugging tools.

I am a little surprised that we are only now hearing about high-profile researchers getting dinged for actively scanning for actual vulnerabilities in other people's deployed systems. It has pretty much always been unlawful to do that.†

(These are descriptive comments, not normative ones. My take on unauthorized testing of systems in production is complicated, but does not mirror that of the CFAA).

It's for this reason that you should be especially appreciative of firms, like Google and Facebook, that post public bug bounties and research pages --- those firms are essentially granting permission for anonymous researchers to test their systems. They don't have to do that. Without those notices, they have the force of law available to prevent people from conducting those tests.

(Background, for what it's worth: full time vulnerability researcher, started in '94.)

Caveat: it does depend on the vulnerability you're testing for. There are a number of flaws you could test for that would be very difficult to make a case out of. But testing deployed systems without authorization is always risky.



Google, Facebook, and now over 70 other companies grant permission for anyone to test their systems, and will pay researchers for a disclosure, as long as they follow the program rules, which are usually quite reasonable. I'm serious when I say that few people are more thankful than I am for the existence of security bug bounties.

However, one thing has always crossed my mind: since the legal definition of authorization is still very fuzzy, what stops a third party from going after a researcher, even though the company who owns the server which was technically hacked has no interest in filing any complaint against the researcher?

To clarify my question: the recent Brazilian law on computer hacking establishes that only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.? My understanding of American law is very weak, but I know that, for some crimes, the victim does not have a say, i.e. the state will prosecute regardless of the victim's will.


> only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.?

After a US law enforcement agency has been notified of a complaint by a victim of a crime, it forwards the case to a prosecutor. At that point the victim can no longer drop the charges; the only person who can drop the case is the prosecutor. Prosecutors do occasionally drop cases that no longer make sense. But they don't get 'cybercrime' cases very often, and those cases often make headlines, especially these days, so I doubt many would voluntarily pass up that opportunity for their résumés and work the usual murder or drug trials instead.


There is something really disturbing about a system that allows personal ambition to play such an important role in how the institution of justice operates in practice, at least in specialised matters like this.

Expensive attorneys and ambitious prosecutors, each trying to sell half-truths to more or less ignorant judges and juries. It makes me wonder if some of these servants of justice forget that, their specific roles aside, their common goal is to reach an honest conclusion about whether someone actually did something wrong, which implies an effort by everyone to understand in what ways the actions in question are harmful and how that harm balances against fundamental freedoms.


Judges and juries aren't ignorant. They just don't give a shit about the things that are important to you. To your typical juror off the street, "hacking" into a computer for ostensibly "white hat" reasons is no different from breaking into a store to "test the alarm system." The reaction is not "oh yes, we have to make sure our legal system is flexible enough to accommodate this sort of 'security research'" but rather, first, "I don't believe you" or, at best, "didn't your mother ever tell you it is wrong to mess with other people's things without permission?"


> To your typical juror off the street, "hacking" into a computer for ostensibly "white hat" reasons is no different than breaking into a store to "test the alarm system."

That sounds a lot like ignorance.


There is an obvious difference between "understanding the facts of a case" and "the value system used to evaluate those facts", so the ignorance here might be in your assertion.


You don't think the supposed difference in values is the result of ignorance of how the internet works? Or what a security researcher does? Or that security researchers exist as a hobby and profession? Or that the security of the internet at large depends on people who do this? That the every-other-month theft of giant numbers of credit cards or passwords can be prevented if white hat hackers find the security hole first? I would expect most people don't know that big companies like Facebook or Google offer bounties to people who find exploits, or that bugs that threaten the entire internet are routinely found by people who donate their time in order to protect people they don't even know, and who don't know they exist.

The facts of the case: someone broke into a computer system without permission.

The inability to interpret those facts in the light of what a security researcher does isn't a result of different values, but a lack of knowledge of the context. People who don't know how computers or the internet work are open to being told whatever story the prosecution decides to spin.

Edit: I think the ignorance is actually made clear by the example in the GP. Imagine some good Samaritan is walking past a jewelry store after closing time. They notice that the front door is ajar, and upon testing they find that the alarm doesn't go off when they enter the store. So they call the owners and wait in the store until the owner can get there and make sure the store is secure.

Do you think it's likely that this person would be prosecuted? Or, if they were, that the prosecutors and judge would throw the book at them to "make an example"? People understand that scenario and are likely to treat it with leniency in a way that they don't understand the equivalent scenario in computing.

P.S., Always a pleasure to be slapped down by tptacek :)


In your jewellery store example I think it may be reasonable to prosecute the person.

In increasing levels of seriousness:

1. The person is walking by the store and, in the course of their everyday activity, sees that the door is ajar; they then contact the owner. This seems fine to me.

2. The person is walking by the store, sees the door ajar, and then, altering their normal activities, decides to actively test the door to see if they can break into the store; they can, and then they contact the owner. This seems dodgy to me.

3. The person chooses to visit each jewellery store in town to see if any have a door ajar. This definitely seems inappropriate.

The reason I come down opposed to the person in the second example is two-fold.

Firstly, ignoring intent, where do you draw the line on an acceptable level of 'break the security' activity?

- Thinking that the door is ajar and pushing on it?

- Seeing that the lock is vulnerable and picking it?

- Finding a ground floor window and breaking through it with a brick?

The resolution I choose is that if you have gone out of your way to subvert the security of my stuff without my consent then you have crossed the line. Gray is black.

Second, I don't care about your intent. Every security system will break at some point, and so I view the existence of doors and locks as mainly being about roughly outlining the boundaries that I expect to be respected. If I want to improve my security then I'll hire someone to advise me on how to do it. If I come home tonight to find a stranger who has broken into my house in order to prove that it's possible then (1) I already know, and (2) they have just caused the harm which they are nominally trying to protect me against.


> They notice that the front door is ajar

But most likely a security researcher will fire off some multiple of a thousand probes to see if the door is open. Collateral damage is likely. That is not what happens in your jewelry-store case.

> That the every-other-month theft of giant numbers of credit cards or passwords can be prevented

These things can be prevented by the folks in charge paying attention to the alarms going off in the back.
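To make the "multiple of a thousand probes" point concrete, here's a minimal sketch of the kind of TCP connect sweep a scanner performs (hostnames and port ranges are placeholders; this is illustrative only, and should only ever be run against machines you own):

```python
import socket

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a plain TCP connect; True means something answered."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(host: str, ports) -> list:
    """Probe each port in turn and return the ones that answered.
    Even this naive serial loop fires one connection attempt per port;
    real scanners parallelize this into thousands of probes per minute."""
    return [p for p in ports if probe(host, p)]

# e.g. sweep("127.0.0.1", range(1, 1025)) touches 1024 ports on your own machine
```

Every one of those `probe` calls is the "jiggling the door handle" the thread is arguing about, which is why the single-ajar-door analogy breaks down at scale.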


But they should give a shit before they can pass judgement, because these things aren't important just to me, and because there are actual victims involved (who might be different from the accusers).

What if no private data were actually accessed, say, because the researcher only compromised his own account?

Or the case where he hacked a device that he bought, violating the producer's Acceptable Use Policy?

Or the case where someone automated the retrieval of data he already had legal access to, like, if I recall correctly, Aaron Swartz?

All these examples are unique and would defeat any physical-world analogy, so they should be examined and judged differently, by people who do give a shit, are willing to make the effort to understand their unique aspects, and are actually able to. I'm not sure that's the case.

My general point is about how we found ourselves in a system where servants of justice, like prosecutors, appear to treat their job "just like any job" (at least in cases they might consider abstract: "hacking" has less clear and direct effects than "murder"), where they can put their careers first and ignore any consequences to others. Or where someone has to bear enormous defense costs to stand a chance, or be coerced into pleading guilty or into abstaining from exercising what should be his right, out of fear of finding himself in such a situation.


The whole point of juries is for them to judge you against the norms of society at large. The fact that a small group of people might be operating under different norms is irrelevant. They don't have to understand your values in order to judge you. All they need to understand are the facts and the law.

The prevailing norm is that property rights are sacrosanct; any invasion of those rights is considered suspicious, and explanations of benevolent intent are disbelieved. There is no general right to "tinker" with other people's property without permission, for fun, for research, or for any other reason. We are not a society that requires security measures to be effective in order to serve as a signal to keep out. A velvet rope is as effective as a steel door for the purposes of signaling that access is not allowed.

This is not a matter of prosecutors putting their careers ahead of the spirit of the law. It's about hackers not understanding that we're a society that requires you to keep your hands to yourself.

NB: I have a beef with the CFAA, but it's not with the spirit of the law, but rather the fact that criminal penalties under the CFAA are totally out of line with those in analogous physical scenarios. The standards for trespass on digital networks shouldn't be higher than the standards for trespass in the physical world. But juries can't do anything about this problem, and judges really can't either. It's Congress's problem for putting the felony escalation provision in there.


Researchers who trespass on a digital network aren't the only ones affected, though. Take this quote from the OP article:

"Lanier said that after finding severe vulnerabilities in an unnamed “embedded device marketed towards children” and reporting them to the manufacturer, he received calls from lawyers threatening him with action. [...] As is often the case with CFAA things when they go to court, the lawyers and even sometimes the technical people or business people don't understand what it is you actually did. There were claims that we were 'hacking into their systems'.

The threat of a CFAA prosecution forced Lanier and his team to walk away from the research."


There's nothing to that anecdote other than a company getting mad about exposing defects in a product and their lawyer making a nasty phone call.

The CFAA is vague and over-broad, you won't get any disagreement from me on that. Applying it in a case involving a device you bought and own is totally inconsistent with traditional norms of private property. But those are edge cases. The actual prosecutions people get up in arms about aren't edge cases. They pertain to conduct that clearly violates the norms of trespassing on private property, and hackers justify their actions by saying that those norms shouldn't apply to digital networks. Juries, unsurprisingly, don't buy that. So hackers and the broader tech community call them "ignorant."


You've also got the Sony vs. Hotz lawsuit, where Hotz was forced to back off. Edge cases, maybe, but they demonstrate that not everybody draws the line in the same place.

For you, someone finding a vulnerability in the software that provides a network service, hosted on a server he doesn't own, is clearly trespassing on private property (even if he only accesses his own account's data), but finding a vulnerability in the software that comes bundled on a device he bought is not.

For Sony, let's say, both constitute violations of its property: it's its software, it owns it, and it doesn't care whether the carrier is its server or the device it just sold you. In both cases it only gives you permission to use its software in a certain way, which excludes any sort of hacking.

Maybe the reason many draw the line at the medium is that it is easier to compare a computer network to a physical property than a device you have bought (but which holds data you don't own)?

But is it the physical ownership of the medium that carries the data that matters, or the ownership of the actual data being accessed? If it's the medium, why, when the really important thing the owner cares to protect is, in almost all cases, the data?

Not trying to argue, just raising some questions that I think are tricky and deserve more thought than they get. In any case, I think physical and digital property analogies can only take us so far, so I try to steer clear of them.


Sony vs. George Hotz was a civil case in which the CFAA played a small role compared to the numerous other statutes invoked, and that case ended in a settlement.

What we are talking about in this thread is the supposed criminalization of security research. If you're trying to get someone to take the other side of the argument that security research is needlessly legally risky, you're probably not going to find many takers. There is a world of difference, however, between being sued and being imprisoned.


Apologies for drifting the thread out of the CFAA scope; I was never specifically referring to the CFAA, to be honest. Sorry if it seemed that I was.


>They don't have to understand your values in order to judge you. All they need to understand are the facts and the law.

That could be said of racist laws just as well (e.g. Jim Crow laws).

Even if they don't have to "understand his values", they should be made to, and the law is bad in this regard.

Hence, I don't see the point in neutrally pointing out the status quo and what privileges they have. It seems like apologism to me.


You sound like you're drawing a normative conclusion from positive facts (i.e. the is/ought problem).

The fact that juries judge things from a certain perspective says nothing about whether they ought to do so or not.


and more to this...

Usually these cases go to special DA investigation units who invest huge sums of money to determine who was involved and to what extent (think full-blown computer forensic investigations to gather evidence). Even when the cases make absolutely no sense to pursue, they will persist for the sole purpose of recovering some of those costs... I know first-hand; it's borderline extortion.


"...testing deployed systems without authorization is always risky..."

While what you describe may be the sad reality, it makes zero sense. If a legit researcher, especially one who's being transparent about it, researches any domestic system, then that's got to be better than the Iranians, Russians, or Chinese doing it (which they do anyway).

But hey, what do we know anyway. There's probably some benefit that makes it preferable for a foreign party to uncover our vulnerabilities without our knowledge.


I disagree that it makes zero sense. There are reasonable concerns at play here:

* Security testing is extremely disruptive to production systems, most especially if those systems haven't been hardened in any way. Security testers are not as a rule good at predicting how their tests can screw up a production system.

* No matter how much effort you put into a security program (Google and Facebook put a lot of effort into it; more than most people on HN can imagine), attackers still find stuff. So there's not a lot of intellectual coherence to the idea that open, no-holds-barred research applies a meaningful selective pressure.

* It can be very difficult to distinguish genuinely malicious attacks from "security research", and malicious attacks are already extraordinarily difficult to prosecute.

I'm not saying that the rules as they exist under the CFAA today make perfect sense.


As someone who has been on both sides, doing security research and systems administration: generally, this kind of "pro bono" work isn't telling us anything we don't know, and since it's not coordinated with the target, it will potentially get system/network/security administrators up at 3am to respond to your probes, and it will drain company resources. You're also demanding that the company address whatever you find, in short order, when it may not actually be the most important thing to the business, particularly when you announce it to the world rather than follow responsible disclosure.

When "researchers" then flip around and talk to the press instead of following responsible disclosure, what you're dealing with really is a hacking attempt. You're walking up to the doors and windows of a business and jiggling them to see if they're open, and taking notes on what kind of locks they're using and how they could be bypassed, without any kind of approval from the business owner. Then you're turning around and damaging the business by talking to the press about it.

Back when I was more interested in computer security (roughly '94 just like tptacek), I knew that scanning systems that I didn't own without permission would get me in trouble. We seem to have devolved a bit in our collective maturity where we think we can just fly the flag of "security researcher" and that this gives us permission to initiate what look just like attacks on systems.

If you don't own a system and don't have permission for it then don't attack it, and don't put the government in the position of trying to discriminate between a foreign government launching attacks and a "security researcher" with pure motives... And don't be too shocked if the government and legal institutions have issues in distinguishing between those two cases and throw you in jail for 15+ years. The way to avoid that outcome is not to do it. Only attack and probe systems that you own or have permissions to attack and probe. Just because you're a "security researcher" who is egotistical enough to think you can save the internet from itself, that doesn't mean you're going to get treated differently from a foreign national with less pure motives. Stay away from shit that isn't yours (and the security of the entire internet is not your sole responsibility).


For what it's worth: I can't think of anyone who has done "15 years" for CFAA violations. Is there someone who fits that description?

(Don't get me wrong; any prison time for good-faith vulnerability research, no matter how negligent or ill-advised the research is, seems like a travesty).


That was badly phrased. The point I wanted to make was that if you don't want to get into a situation where you're being threatened with 15+ years of incarceration (because prosecutors decide to try to blindly throw the book at you), then don't give them any reason to. Don't put a judge in the position of having to determine if you're an ethical grey-hat hacker, or a north korean spy, because you might lose the judge lottery (which may be shortened on appeal, but you'll be rotting in prison in the meantime).

Justice isn't the machine language of a computer, with deterministic outcomes given its inputs. You're asking humans to determine your motives, which will necessarily be subjective. And I'm not willing to put my freedom at the risk of someone else's subjective determination. And when "security researchers" do grey-hat hacking, they shouldn't be too shocked if they're arrested and charged with those kinds of crimes, because they're asking too much of the legal system.

And that doesn't mean it's 'right'; I'm totally against that kind of penalty. Even though I think it's wrong to test vulnerabilities and then spin around and go to the press, I see a huge difference, and I think the penalties should be closer to a slap on the wrist (a fine and 30 days in jail or community service, not 15+ years in prison).

But I'm not going to put myself into the position of making a judge and the legal system make those kinds of distinctions. What is so important about being able to do that kind of grey hat hacking that you're willing to put your own freedom into that level of jeopardy?


"...this kind of "pro bono" work isn't telling us anything we don't know..."

That's scary right there. If you're deploying something you know has vulnerabilities you have bigger problems than losing sleep at 3am. Same for operating something you know is vulnerable. You (collective, not you, personally) totally deserve to get up at 3am. It's grossly irresponsible, because what you probably don't already know is how that harmless XSS vuln you know about is really a leaf in a 7-level deep threat tree that results in information disclosure. I can just imagine that such a cavalier attitude is how the Sony PSN network got owned.

My point stands. Attack from Iran or probe from a researcher (your points in your following paragraph noted and notwithstanding)?

"...If you don't own a system and don't have permission for it then don't attack it."

That's loud and clear, for sure.


> If you're deploying something you know has vulnerabilities ...

Everything has vulnerabilities.


Would "something with known vulnerabilities" be better?

Does everything have known vulnerabilities that are not actively being worked on?


Lots of things are vulnerable to DoS attacks in a multitude of ways. Depending on the business, it's not uncommon to just say "we'll deal with it when it happens."

But, someone asks, what if the business is really really important? Then that's all the more reason to not mess with it.


What's scary is how self-centered you are. How do you know that the vulnerabilities haven't already been found by an internal security audit, and that they're in the process of being patched, but by your disclosure to the media you are petulantly demanding that the company patch your vulnerability right this instant so you can get the fame and ego gratification from it?

All large companies have vulnerabilities; there's always work that needs to get done, which gets triaged according to impact; and then people who ideally should have 40-hour work weeks have to start patching code, which then needs to be QA-tested to prove that rolling it out won't break everything else. All of that takes time.

And I have worked for companies that took security seriously and for companies that had laughable security practices. In either situation, having 'help' from external 'security researchers' was not useful. In the case where companies were run competently, it just meant that people scrambled and pushed solutions before they were ready. In the case where companies were not run competently, it just made people scramble and did nothing to change the underlying shittiness of the company. You are not going to be able to fix shitty companies. It's not your job to stop future Sony PSN networks from getting hacked, you can't do that, and you should stop thinking you can, and stop using that as justification for your own actions.


TLDR: I strongly disagree.

"...especially if those systems haven't been hardened..."

Well that's just it, isn't it? If the system hasn't been hardened then it wouldn't hold anything of interest and therefore wouldn't be targeted by either friendly researchers or malicious adversaries.

If a system holds value it should be appropriately secured. That must include dealing with attacks as part of business as usual.

As for meaningful selective pressure: well, then why bother with bug bounties? Even Microsoft, the only organisation at that level with a published SDL [edit: security development lifecycle], offers them now.

SDL ref. http://msdn.microsoft.com/en-us/library/windows/desktop/cc30...

I've had my rant. Will shut up now.


Microsoft has been throwing huge amounts of money at this problem for over a decade, and Microsoft's systems are not perfectly or even (if we take a hard, dispassionate look at it) acceptably secured against serious attackers. And I think Microsoft does a better job on this than almost anyone else.


How would you feel if I broke into your place of business, made a list of all of the things you were doing that were out of compliance with federal and state laws and regulations, then left you my card and offered to let you hire me to do legal compliance work for you?


"Broke in" rather presupposes the point.

If we're analogizing, an exterminator seeing rat droppings in your restaurant and offering to solve your problem rather than letting the department of health deal with it, is a slightly more realistic example.


Taking that a step further: it's like an exterminator going around to different restaurants and crawling under customers' tables while they are eating, saying, "Don't mind me, just looking for rat droppings."

A more legit exterminator would agree to come by while the customers were not there.


Legally, pushing open a closed door is "breaking in." It's precisely how a lawyer would describe someone who opened the door to your business to come in and look around for stuff.


That's not quite a fair analogy. It'd be more like if you go into a bank and see a giant hole in their vault. You tell them about it and they sue you for breaking it. Meanwhile actual criminals come and go as they please anonymously. The bank's clients are the actual victims of course, it's not like this just affects the bankers.


No, you've misread the story. Nobody is being threatened simply for observing vulnerabilities. Instead, people are discovering vulnerabilities in popular software, and then exploiting them across thousands of machines to "prove" something everyone already knows: lots of people are vulnerable.


If it were well known that large, highly funded subsets of foreign militaries were roaming around breaking into everyone's businesses and stealing things / exploiting the lack of legal compliance, then yes, I'd be very pleased that someone took the time both to find the mistake and to give me the chance to fix it / get it fixed by them before it was used against me with genuinely malicious intent.


That's the kind of thing that's easy to say on a message board and colossally unlikely to match your revealed preferences in reality.


There actually are real-life burglars. That isn't hypothetical. Would you really grant people permission to burglarize your business to demonstrate its susceptibility to burglary?


So should someone find a remote exploit in OpenWhatever that gives them remote root access and they publicly disclose that (without having tested it on the Internet... just in their lab) then they are not subject to the CFAA?


Correct.

The CFAA requires access without authorization or exceeding authorized access. Presumably you are an authorized user of your own systems.

It is possible that some vendors may try to use end-user license agreements to further restrict what actions can be taken with their software (even in cases where you've purchased it and installed it on your own system).

I believe (and would love to be corrected by a lawyer) that even those cases would be pursued civilly, and still not under the CFAA.

This is one of the reasons why, when providing penetration-testing/application-testing training, we always took great pains to drill into trainees' heads never to use any of those techniques on systems they do not own. No poking around on your bank's website, etc.

If you knowingly access a system that you do not have authorization for, the owner of the system might not care (or might not notice), but under the CFAA, they can file charges against you.

Reasonable people may disagree what constitutes "exceeding authorized access" (where reasonable people might be your attorney and a prosecutor).


I have no problem with punishing unauthorized access, although the punishment is stupidly severe.

I mean, once you've been sentenced under the CFAA, you might as well have a shootout with the police or kill some people; it makes no difference. Hell, the extra charges won't make much of a difference; you're still facing life.

Does that make sense to anybody?

What we do need, though, is an exception for researchers, and you can define a researcher as anybody who discloses the vulnerability to the owner of the vulnerable system before publishing it publicly. A security researcher should be required to disclose the results of his research publicly in order to be considered a researcher.

A regular hacker cannot claim to be a security researcher, since hackers never disclose the vulnerabilities they find to the owner of the system, even if they do sometimes share them publicly with other hackers. It is not in their interest to let the owner of the vulnerable system know they have a problem.


Correct. They are also probably (depending on circumstances) exempt from some provisions of the DMCA that might otherwise allow the research target to employ copyright law to stop them from conducting the research. US Federal Law has provisions that explicitly protect vulnerability research in some cases.


So what happens when the NSA does it without permission and keeps discovered vulnerabilities secret?

Is this setting up a precedent?



