
It would be an extremely totalitarian dynamic to be persecuted with the CFAA for modifying a device you own based on part of it having been (nonconsensually!) programmed by a third party to upload data to their own server. You own the device, so anything you do within that device is authorized. And the code that uploads the data is authorized to do so because it was put there by the same company that owns [controls] the servers themselves.

I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law - so it's best to steer clear. And this goes double with the current overtly pay-to-play regime. But just saying.

(Awesome description btw! I really wish I could find a buying guide covering many makes/models of cars that details how well they can be unshackled from digital authoritarianism. A Miata is not the type of vehicle I am in the market for (which is unfortunate, for several reasons))



If you can be prosecuted for guessing urls you can be prosecuted for sending garbage data in a way you know will be uploaded to a remote system.


The DoJ lost the case where they went after someone for guessing URLs.


They lost it because they charged in the wrong jurisdiction.

Also come on, you can't reasonably describe that case as being about "guessing urls". It's the associated chat logs that really make the case.


link me


You think criminalizing guessing URLs is unreasonable.

What about guessing passwords? Should someone be prosecuted for just trying to bruteforce them until one works?


Guessing passwords is an attempt to access privileged information you have no right to access, and could not otherwise access without bypassing security measures.

Guessing a URL is an attempt to access (potentially) privileged information which was not secured or authenticated to begin with.

A password is a lock you have to break. An unlisted URL is a sticky note that says "private" on the front of a 40" screen. It's literally impossible for that information to stay private. Someone will see it eventually.


Guessing URLs is equivalent to ordering an item not on the menu in a restaurant. The request may or may not be granted.


This same logic is easily extended to SQL injection, or just about any other software vulnerability.

How do you propose the line should be drawn?


>How do you propose the line should be drawn?

there is a line drawn for such things. a fuzzy line. see:

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

same as this famous case, in which a supreme court justice is asked "what is and is not pornography" - of course he realizes if he defines "what is not" people are going to make all kinds of porn right on the boundary (see: japanese pornography, where they do the filthiest imaginable things yet censor the sensitive bits, making it SFW in the eyes of their law). this judge avoided that.

Anyways, parallel to the fact that filthy pornography can be made a gorillion different ways, a "hack" may also be manifested in a gorillion different ways. Itemizing such ways would be pointless. And in the same vein, strictly defining a black and white line "this is legal, this is not" would cause hackers to freely exploit and cheese the legal aspect as hard as possible.. businesses and data miners and all these people would also freely exploit it, at massive scale and with massive funding, since it would be officially legal. Thusly it must be kept an ambiguous definition, as with pornography, as with many things


Do you think the current line, where it's based on you "knowingly" exceeding your access or deliberately damaging the operation of a computer system, is excessively vague?


The question can be easily inverted for the other side: if any user accidentally damages a service's functionality in any way, can they always be criminally liable? Can this be used by companies with no security or thought put into them whatsoever, where they just sue anyone who sees their unsecured data? Where should the line be drawn?

To me, this is subjective, but the URL situation has a different feel than something like SQL injection. URLs are just references to certain resources - if it's left unsecured, the default assumption should be that any URL is public, can be seen by anyone, and can be manipulated in any way. The exception is websites that put keys and passwords into their URL parameters, but if we're talking solely about the address part, it seems "public" to me. On the other hand, something like wedging your way into an SQL database looks like an intrusion on something private, that wasn't meant to be seen. It's like picking up a $100 bill off the street vs. picking even the flimsiest, most symbolic of locks to get to a $100 bill you can see in a box.


>The question can be easily inverted for the other side: if any user accidentally damages a service's functionality in any way, can they always be criminally liable? Can this be used by companies with no security or thought put into them whatsoever, where they just sue anyone who sees their unsecured data? Where should the line be drawn?

I don't think the question can be inverted like that, not meaningfully anyway. The CFAA specifically requires one to act knowingly. Accidentally navigating to a page you're not supposed to access isn't criminal.

>To me, this is subjective, but the URL situation has a different feel than something like SQL injection.

I don't think the url below is necessarily that different.

> GET wordpress/wp-content/plugins/demo_vul/endpoint.php?user=-1+union+select+1,2,3,4,5,6,7,8,9,(SELECT+user_pass+FROM+wp_users+WHERE+ID=1)
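
To make the comparison concrete, here's roughly the kind of handler such a URL exploits - from the client's side it's still just a GET request with a funny query string. (A minimal sketch; the table and column names are made up, not any real plugin's code.)

    import sqlite3

    conn = sqlite3.connect("demo.db")

    def lookup_user_vulnerable(user_param):
        # Interpolating the query string straight into SQL: a "user" value like
        # "-1 union select ... (SELECT user_pass FROM wp_users WHERE ID=1)"
        # becomes part of the statement itself
        return conn.execute(f"SELECT id, name FROM users WHERE id = {user_param}").fetchall()

    def lookup_user_parameterized(user_param):
        # Binding the value as a parameter: the payload is treated as data, never as SQL
        return conn.execute("SELECT id, name FROM users WHERE id = ?", (user_param,)).fetchall()

Either way the person on the other end just typed a URL; the difference is entirely in what the server does with it.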

> if it's left unsecured, the default assumption should be that any URL is public, can be seen by anyone, and can be manipulated in any way

It can be, but not lawfully so. It's not possible to accidentally commit a crime here; for example, in the IRC logs related to the AT&T case the "hackers" clearly understood that what they were doing wasn't something that AT&T would be happy with and that they would likely end up in court. They explicitly knew that what they were doing was exceeding authorized access.

> On the other hand, something like wedging your way into an SQL database looks like an intrusion on something private, that wasn't meant to be seen

I think you've reached the essence of it. Now, let's say you just accidentally find an open folder on a bank's website exposing deeply personal KYC information of their customers. Or even better, medical records in the case of a clinic.

Lets say those files are discoverable by guessing some URL in your browser, but not accessible to normal users just clicking around the website. If you start scraping the files, I think it's pretty obvious that you're intruding on something private that wasn't meant to be seen. Any reasonable person would realize that, right?


> GET wordpress/wp-content/plugins/demo_vul/endpoint.php?user=-1+union+select+1,2,3,4,5,6,7,8,9,(SELECT+user_pass+FROM+wp_users+WHERE+ID=1)

This is why I tried to make the clarification that I was referring to the address part of the URLs only, not the parametrized part. In my mind, something like /users?key=00726fca8123a710d78bb7781a11927e is quite different from /logins-and-passwords.txt. Although, parameters can also be baked into the URL body, so there's some vagueness to this.
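
Put differently, a random key like that is effectively a credential: it's generated to be unguessable, so stumbling into it isn't realistic the way stumbling into /logins-and-passwords.txt is. A tiny, purely hypothetical illustration of the difference in guessability:

    import secrets

    # Capability-style URL: 128 bits of randomness in the key,
    # so "guessing" it is about as hard as guessing a strong password
    key = secrets.token_hex(16)
    print(f"/users?key={key}")

    # A plain, human-readable path carries no secret at all
    print("/logins-and-passwords.txt")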

> I think you've reached the essence of it. Now, let's say you just accidentally find an open folder on a bank's website exposing deeply personal KYC information of their customers. Or even better, medical records in the case of a clinic.

I guess if I try to distill my thoughts down, what I really mean is that there should be a minimum standard of care for private data. At some point, if being able to read restricted data is so frictionless, the fault should lie with the entity that has no regard for its information, rather than the person who found out about it. If a hospital leaves a box full of sensitive patient data in the director's office, and getting to it requires even a minimal amount of trespassing, the fault is on whoever did so. But if they leave that box tucked away in the corner of a parking lot, can you really fault some curious passer-by who looked around the corner, saw it and picked it up? Of course, there's a lot of fuzziness between the two, but in my mind, stumbling into private data by finding an undocumented address doesn't clear the same bar as bruteforcing or using a security vulnerability to gain access to something that's normally inaccessible.


Probably somewhere short of incarcerating someone for what they typed in a browser's URL bar.


So if I deliberately exploit a bug on your website and download your customer database by typing things in my browser's URL bar, I should not be prosecuted?


No, and I would support a law explicitly making it illegal for a prosecutor to prosecute you for this.


I'd be totally down for that, but I reckon it would be kind of shitty for the vast majority of the people who are not CTF enthusiasts.


Cyber attacks are consensual; digital engineering is the only discipline where we have complete mastery of the medium. If you make a system (or authorize it), what someone does with it is your fault.


Closer to trying the handle on random car doors.


Passwords are different from URLs because URLs are basically public, whereas passwords aren't supposed to be. Furthermore, this is not 1995. Everyone who is in the industry providing IT services is supposed to know that basic security measures are necessary. The physical analogy would be, walking through an unlocked and unmarked door that faces the street in a busy city, versus picking a lock on that door and then walking through it.


> Everyone who is in the industry providing IT services is supposed to know that basic security measures are necessary.

And everyone who doesn't have wool for brains knows to not carry large rolls of cash around in a bad part of town, but we can still hold the mugger at fault.


Nevertheless, URLs are as public as door knobs. If someone is merely observing that a door is unlocked and they have not stolen anything, they have done nothing wrong. People should never be prosecuted over discovery and disclosure of horrible design flaws based on URLs. If they use the information to actually cause damage, we can be in agreement that they are responsible for the damage.


>People should never be prosecuted over discovery and disclosure of horrible design flaws based on URLs. If they use the information to actually cause damage, we can be in agreement that they are responsible for the damage.

That's literally the current state of things.


Are you sure? I seem to remember people getting burned for publicly disclosing security vulnerabilities after stubborn agencies refused to fix them for years. Stuff like, exposing thousands of SSNs through a public gateway... We are literally having this discussion on URLs because of famous cases where people DID face unfair treatment. I don't recall any actual fix for this legal chicanery either. If you do, I would be very interested.


> I seem to remember people getting burned for publicly disclosing security vulnerabilities after stubborn agencies refused to fix them for years. Stuff like, exposing thousands of SSNs through a public gateway..

This has never happened in the US on the federal level. Unless your definition of "getting burned" is a nasty email from a clueless non-LE government worker.

> We are literally having this discussion on URLs because of famous cases where people DID face unfair treatment

I don't think any reasonable person can read through the court filings in those (Auernheimer, Swartz) cases and agree with the claim that there was unfair treatment wrt the application of the CFAA, or that the CFAA was unfair because it covers those cases.

I totally understand how someone who has not spent time familiarizing themselves with the actual details of the cases might be under the opposite impression; they are frequently misrepresented by people with agendas and nerds who mistakenly understand judicial process as a "Captain Kirk vs Computer" scenario.

There's a trend in communities like HN to claim that the CFAA is bad because Swartz deliberately broke the law while he engaged in some pretty cool civil disobedience. That's not reasonable. Two things can be true at once: what Swartz did was in fact cool and laudable, it still shouldn't be legal. Similarly, a reasonable person might consider it cool and laudable to punch a nazi, doesn't mean it should be legal.

In any case, there's also a trend of misrepresenting the potential penalties involved. On HN, you'll see people posting about how Swartz was facing 30 years in prison, which is an outright lie. Swartz had, in fact, behaved as described in the indictment; he had two plea deals on the table. One for 6 months with the opportunity to argue for further leniency from the judge, and another for 4 months outright. Lawyers familiar with the case have stated that it was very likely that he wouldn't have gone to prison at all.

The narrative goes: Swartz killed himself, so the CFAA must be bad. But it's probably realistic to assume that Swartz did not kill himself because he was scared of spending a few months in prison. He was likely seriously mentally ill, and a victim of the poor state of the US healthcare system, not of the CFAA or the DOJ.


Have a look at this: https://m.slashdot.org/story/180969

Clearly there is precedent in the US and elsewhere for prosecution based on accessing URLs. Politicians have even argued that you should not be able to tamper with website source code in your browser, or otherwise use websites in any way not anticipated by the owners.

>Two things can be true at once: what Swartz did was in fact cool and laudable, it still shouldn't be legal. Similarly, a reasonable person might consider it cool and laudable to punch a nazi, doesn't mean it should be legal.

We live in an age where people call other people Nazis as if uttering the accusation gives them a free pass to infringe on the rights of those people. Even if it could be proven to be true, the facts would not grant them any such right.

Theft and unprovoked assault are neither cool nor legal. I don't care if we're talking about absolute assholes, either. For each and every one of us, there is probably someone in the world who thinks that they should have our property and have the right to attack us.

>Swartz killed himself, so the CFAA must be bad, but it's probably realistic to assume that Swartz did not kill himself because he was scared of spending a few months in prison. He was likely seriously mentally ill, and a victim of the poor state of the US healthcare system, not of the CFAA or the DOJ.

Idk that much about this case. It seems to me that Swartz was in a hell of a lot of trouble, that could have gotten him a prison term along with financial and career ruination. He clearly should not have killed himself but that kind of stress can make people lose sight of the future.


> Have a look at this: https://m.slashdot.org/story/180969 Clearly there is precedent in the US and elsewhere for prosecution based on accessing URLs

"prosecution based on accessing URLs" is a dishonest way of describing this case. It's a prosecution based on accessing URLs with malicious intent, while the persons responsible knew they were not intended to access said URLs.

That's like saying "prosecution based on walking through a doorway". Well yeah, except it was the middle of the night and the door to someone else's house had been accidentally left unlocked.

>Idk that much about this case. It seems to me that Swartz was in a hell of a lot of trouble, that could have gotten him a prison term along with financial and career ruination. He clearly should not have killed himself but that kind of stress can make people lose sight of the future.

Swartz had good lawyers; he was certainly aware that he wasn't in big trouble. He was facing neither financial nor career ruination. The damages he had caused were far from enough to result in financial ruination. The charges had made him even more of a celebrity and would've been a boost to his career.

https://volokh.com/2013/01/14/aaron-swartz-charges/

https://volokh.com/2013/01/16/the-criminal-charges-against-a...

Orin Kerr, a top subject matter expert, addressed this extensively. In the second part he also offers the best criticism of the CFAA, which is that it's almost entirely redundant given the existence of the very broad wire fraud statute.

And for what it's worth, Swartz had been spending a lot of time thinking about suicide for years before the whole JSTOR debacle http://www.aaronsw.com/weblog/dying


How do I know which URLs of a website are legal to visit and which are illegal?


I can't say I've ever struggled to make this determination, but I don't make a habit of trying random ports, endpoints, car doors, or brute-force guessing URLs.


But it was very tempting when I saw that my national exam results were sent to us in an email as nationalexam.com/results/2024/my-roll-number. Why would I not try different values in the last part?


Try it once to see if it works, you'll probably be fine.

Find out that it works, and then proceed to look up various other people? Whether you're fine depends entirely on whether or not you genuinely believe that you're supposed to be accessing that stuff.


It depends on stuff.

Sometimes a URL can have a password in it.

But when it's just a sequential-ish ID number, you have to accept that people will change the ID number. If you want security, do something else. No prosecuting.
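
"Do something else" meaning: have the server check that the logged-in user actually owns the record, rather than hoping nobody edits the number. A minimal sketch of that idea (Flask, with a hypothetical route and toy data, assuming some login flow already put the roll number in the session):

    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "replace-me"

    RESULTS = {1234: "pass", 1235: "fail"}  # toy data keyed by roll number

    @app.route("/results/2024/<int:roll_number>")
    def results(roll_number):
        # The roll number is sequential and guessable, so it can't be the only gate.
        # Refuse the request unless it matches the authenticated user's own record.
        if session.get("roll_number") != roll_number:
            abort(403)
        return RESULTS.get(roll_number, "no result")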


I think criminalising both is unreasonable; what you do with the URL you accessed or the password you guessed, however, is a different matter.


As a strictly logical assertion, I do not agree. Guessing URLs is crafting new types of interactions with a server. The built-in surveillance uploader is still only accessing the server in the way it has already been explicitly authorized. Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.

As a pragmatic matter, I do completely understand where you're coming from (my second paragraph). In a sense, if one can get to the point of being convicted they have been kind of fortunate - it means they didn't kill themselves under the crushing pressure of a team of federal persecutors whose day job is making your life miserable.


>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;

If your goal is to deliberately "poison" their data as suggested before, it's kind of obvious that you are knowingly causing the transmission of information in an effort to intentionally cause damage to a protected computer without authorization to cause such damage.

>Trying to tie some nebulous TOS to a situation that the manufacturer has deliberately created reeks of the same type of website-TOS shenanigans courts have (actually!) struck down.

This has very little to do with the TOS though, unless the TOS specifically states that you are in fact allowed to deliberately damage their systems.

And no, causing damage to a computer does not refer to hackers turning computers into bombs, but rather specifically to situations like this.


A computer being supplied with false data which it then stores is not damaging the computer - hence there being a provision about fraud. But for this case it's not fraud either, as the person supplying the data is not obtaining anything of value from the false data.


>the term “damage” means any impairment to the integrity or availability of data, a program, a system, or information;

Deliberately inserting bad data to mess with their analytics does in fact fit that definition.


You are construing "integrity" to mean lining up with their overarching desires for the whole setup of interconnected systems regardless of who owns each one. By that measure, stopping the collection of data is impairing its availability on their system.

I would read that definition as applying only to their computer system - the one you aren't authorized to access. This means the integrity of data on their system has not been affected, even if the source of that data isn't what they'd hoped.

As I said, the law contemplates a different call out for fraud. This would not be needed if data integrity was meant to be construed the way you're claiming.

(For reference I do realize the law is quite unjust and I'll say we'd be better off if the entire law were straight up scrapped along with the DMCA anti-circumvention provisions)


Why do you think the CFAA is unjust?

What specific activities does it unjustly criminalize?


I had assumed you were coming from a similar position, and your argument was more of a reductio-ad-absurdum.

But if you're not - the fact it's putting a chilling effect on this activity right here is a problem.

Another big problem is the complete inequity. It takes the digital equivalent of hopping over a fence and turns it into a serious federal felony with persecutors looking to make an example of the witch who can do scary things (from the perspective of suits).

Another glaring problem is that if the types of boundaries it creates are noble, then why does it leave individuals powerless to enforce such boundaries against corpos, being easily destroyed by clickwrap licenses and unequal enforcement? Any surveillance bugs/backdoors on a car I own are fundamentally unauthorized access, and yet I/we are powerless to use this law to press the issue.


>I had assumed you were coming from a similar position, and your argument was more of a reductio-ad-absurdum.

>But if you're not - the fact it's putting a chilling effect on this activity right here is a problem.

I've personally had my own CFAA-related criminal troubles in the distant past, but I still have a hard time seeing the big problems with CFAA so often touted on HN.

The activity of childish vandalism by flooding Mazda servers with garbage data? There's no chilling effect on simply not sending any data to Mazda.

The activity which was proposed earlier was explicitly malicious in intent, why shouldn't there be a chilling effect put on it? Do you not think the government should generally protect you from people taking explicitly malicious actions aimed at causing you harm?

In this context it is the motivation that makes the crime. You could absolutely modify your car in a way where the data sent to Mazda is replaced with zeroes or random data, but you would need to do so in good faith.

Of course, when the activity is explicitly malicious as stated above ("poison their databases and statistics with fake data") it's not surprising that you'd be in violation of the law.

>Another big problem is the complete inequity. It takes the digital equivalent of hopping over a fence and turns it into a serious federal felony with persecutors looking to make an example of the witch who can do scary things (from the perspective of suits).

I just don't think this is actually happening. The cases often spoken of here are Auernheimer and Swartz.

I have a hard time believing that anyone can read the court files in the Auernheimer case and argue in good faith that such behavior should be legal. Among other things, the court papers contain a chat log of the co-conspirators discussing how they should use the data they'd scraped from the buggy AT&T site to spam AT&T customers with malware. In the end that was too complicated, so they settled on trying to leak the data in the most damaging way possible to hurt AT&T share prices.

Swartz performed an admirable act of civil disobedience and faced up to 6 months in prison for that (realistically, he'd most likely never have spent a day in prison). I think what Swartz did is admirable, but that doesn't mean what he did shouldn't have been illegal. Just as what Snowden did was admirable, but legalizing such activities would have catastrophic consequences.

>Another glaring problem is that if the types of boundaries it creates are noble, then why does it leave individuals powerless to enforce such boundaries against corpos, being easily destroyed by clickwrap licenses and unequal enforcement?

I feel like this is conflating the problems that CFAA seeks to address with a completely different set of problems.

Corporations are bound by the CFAA just as much as you are, it's just that companies are rarely in the business of doing this sort of crime. Just as companies are rarely in the business of selling heroin.

> Any surveillance bugs/backdoors on a car I own are fundamentally unauthorized access, and yet I/we are powerless to use this law to press the issue.

The fact that CFAA mostly does not address these particular issues is not a problem with the CFAA, people (or companies!) buying devices with software they don't like was never something CFAA was intended to address.

There are reasonable, effective legal solutions to surveillance like this, like the GDPR.


I'm not really looking to litigate the larger point with you. I had really thought you were coming from a place of not liking the CFAA but interpreting it as harshly as possible (especially with that username!)

In general, in this argument and our previous one, you're focused solely on intent to the exclusion of analyzing actual actions. You're attributing malevolence to the intent of the individuals acting, while giving a pass to the company (in this case Mazda) that is also operating with malicious/adversarial intent. You're missing that criminality also revolves around specific actions - in this case unauthorized access.

> people (or companies!) buying devices with software they don't like was never something CFAA was intended to address.

It most certainly addresses this. If I loaded up a PC with a remote access trojan, sold it on the used market, and then spied on the buyer, I would be looking at a CFAA prosecution. This is exactly what companies are doing with embedded spyware, yet it's not prosecuted.


Any reasonable programmer (a peer) would say an unencrypted system that doesn't validate data is an unprotected system.


It's a legal term, has nothing to do with technical protections.

Practically any device connected to the internet is a "protected computer". The only case I can think of where the defendant prevailed on their argument that the computer in question was not a "protected computer" was US v Kane. In that case the court held that an offline Las Vegas video poker machine was not sufficiently connected to interstate commerce to qualify as a "protected computer".


It might be interesting for an enterprising lawyer to try to flip this around. Suppose you send a letter to your car manufacturer saying that, as the owner of the car, you are prohibiting them from accessing the location of the car or performing unauthorized software updates and that any attempt to circumvent this will result in criminal prosecution for unauthorized access to your computer.


Prosecuting someone for deliberately injecting garbage data into another person's system hardly seems totalitarian.

> You own the device, so anything you do within that device is authorized

You're very clearly describing a situation where at least some of the things you're doing aren't happening on your own device.

>I do know that the CFAA essentially gets interpreted to mean whatever the corpos want it to mean - it's basically an anti-witch law

FWIW this is simply not true. The essence of the CFAA is "do not deliberately do anything bad to computers that belong to other people".

The supreme court even recently tightened the definition of "unauthorized access" to ensure that you can't play silly games with terms of service and the CFAA. https://www.supremecourt.gov/opinions/20pdf/19-783_k53l.pdf


My device. I generate whatever the fuck data I want. If you log it, kiss my ass.


Sure, I have the same attitude when it comes to the government telling me that I'm not allowed to use drugs. Doesn't mean I'm in the clear from a legal point of view.

However, it's worth clarifying that the important detail isn't generating the data, but sending it. Particularly the clearly stated malicious intent of "poisoning" their data.

This seems like exactly what the lawmakers writing CFAA sought to criminalize, and is frankly much better justified than perhaps the bulk of things they tend to come up with.

>(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;

Doesn't seem exactly unfair to me, even if facing federal charges over silly vandalism is perhaps a bit much. Of course, you'd realistically be facing a fine.


Could you argue the computer was unprotected? No encryption is wild.


No, "protected computer" refers to computers protected by the CFAA.

>(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or

>(B) which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States.


Paying for a device doesn't mean there are no rules on how you can operate it. I'm sure there is an EULA you agreed to.

As an anecdote, while buying a new car I signed a statement that I'm not going to resell it to Russia.


And you think it is all fine and dandy?


No it does in fact seem totalitarian. I support repealing the CFAA.


I would absolutely love to hear the arguments behind this.


If you were to purposefully try to poison/damage their dataset and admitted as much, you probably wouldn't win without spending an unreasonable amount of money on lawyer fees. Without admitting anything, though, and claiming ignorance, it would probably be pretty easy to get it dismissed, provided you are able to spend at least some money on a lawyer.



