Hacker News

I'm wondering if there is a more elegant way to solve sybil attacks here. For example: many CPUs are provisioned with key pairs that are unique to the processor and can be verified with the CA root cert of the issuer (Intel, AMD, etc.) You could tie PoW to successive signing and allow it to be verified in parallel. Then the operation couldn't be parallelized to a botnet as all PoWs would be unique to a CPU.
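To make the idea concrete, here is a minimal sketch of that chained signing, using an HMAC key as a stand-in for the CPU's fused private key (the real scheme would use the hardware signing primitive plus the issuer's cert chain; every name and parameter here is hypothetical):

```python
import hashlib
import hmac

# Hypothetical stand-in for the CPU's fused private key; in the real
# scheme only this one processor could produce these signatures.
CPU_KEY = b"per-cpu-secret"

def chained_pow(challenge: bytes, steps: int) -> list:
    """Each step signs the previous digest, so the work is inherently
    sequential and can't be farmed out across a botnet."""
    sigs = []
    digest = hashlib.sha256(challenge).digest()
    for _ in range(steps):
        sig = hmac.new(CPU_KEY, digest, hashlib.sha256).digest()
        sigs.append(sig)
        digest = hashlib.sha256(sig).digest()
    return sigs

def verify_chain(challenge: bytes, sigs: list) -> bool:
    """Given the full list, every link is checkable independently,
    so verification parallelizes even though solving doesn't."""
    digest = hashlib.sha256(challenge).digest()
    for sig in sigs:
        expected = hmac.new(CPU_KEY, digest, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, sig):
            return False
        digest = hashlib.sha256(sig).digest()
    return True
```

Since sig N is an input to sig N+1, the solver can't split the work across machines, but the verifier holding the whole list can check all links at once.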

It seems that they're targeting memory as a way to make it more costly for botnets. I think there are many other ways to help minimize this attack scenario, too. The same logic could also be applied to mobile phones using eSIM. Later authentication with the mobile network uses public-key crypto, so I feel like you could also do unique proofs there.

This is just a throwaway comment, though. I am probably missing obvious problems with this scheme.



If you are suggesting solutions based on immutable hardware keys and a certified chain of custody from the manufacturer, I have to ask if you understand what TOR is.


In OP's defence, there might be a fully anonymous way to achieve attestation. Related: https://privacypass.github.io/


That method seems closer to PoW (as is being done by TOR) than attestation.

The strongest privacy guarantee I've heard for attestation is that it would require two parties to collude to break it. If Google attests to a Cloudflare-protected site, the two of them can determine who you are by cooperating.


There are ways to do the verification with cryptography that would preserve anonymity and wouldn't allow messages to be tied to public keys. I find condescending, ignorant responses like yours highly annoying. One way to respond to people in the future is to start with the assumption that the person isn't a fucking idiot.


Proving your identity to an onion service in a way that can be tied to your use of other onion services feels like it might have bad outcomes?


DDoS has nothing to do with Sybil attacks. DoS happens because a limited resource (connection initiation) is provided for free.

They chose a memory-hungry algorithm because that prevents the use of specialized hardware (ASICs).


Attackers can still outsource the PoWs. The Sybil assumption is that one PoW == one PC, but you can at least enforce this assumption with provisioning keys.


And why do we need this assumption exactly?


Proof-of-work uses resources like memory, CPU, and disk space for its challenges, which just means that the person with the most resources has a disproportionate impact on the system. A botnet owner has more total resources than anyone else, so any PoW challenge a server issues can easily be outsourced to the botnet.

Overall, they will get more leverage from these resources than from the number of systems they have access to. But you could at least restrict this to the number of systems with provisioning keys. The idea behind memory-bound hash functions is that you're trying to make it hard to parallelize the challenge across a farm. But many systems in the farm are still going to have multiple cores and gigabytes of RAM (so they can work on multiple challenges simultaneously.) The underlying problem to solve here is an identity problem: allowing an individual machine to act as a single identity, which various proof-of-work schemes have tried to achieve.
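For reference, a toy memory-bound challenge can be built from scrypt, whose cost parameter pins per-attempt memory at 128 * r * n bytes (~1 MiB with the illustrative parameters below; the actual Tor proposal uses a different puzzle, so treat this purely as a sketch of the idea):

```python
import hashlib

def mem_hard_hash(data: bytes, n: int = 2**10) -> bytes:
    # scrypt's cost parameter fixes memory use at 128 * r * n bytes,
    # so each attempt here touches ~1 MiB (illustrative parameters).
    return hashlib.scrypt(data, salt=b"pow-demo", n=n, r=8, p=1, dklen=32)

def solve(challenge: bytes, difficulty_bits: int = 4) -> int:
    # Brute-force a nonce until the hash has `difficulty_bits` leading zeros.
    mask = (0xFF << (8 - difficulty_bits)) & 0xFF
    nonce = 0
    while mem_hard_hash(challenge + nonce.to_bytes(8, "big"))[0] & mask:
        nonce += 1
    return nonce

def check(challenge: bytes, nonce: int, difficulty_bits: int = 4) -> bool:
    # Verification costs a single hash, regardless of the search effort.
    mask = (0xFF << (8 - difficulty_bits)) & 0xFF
    return mem_hard_hash(challenge + nonce.to_bytes(8, "big"))[0] & mask == 0
```

The point of the memory bound is that each in-flight attempt holds its own ~1 MiB working set, so a farm node with N GiB of RAM caps out at far fewer parallel attempts than its core count alone would suggest.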

The ideal solution would also limit connections made by the same actor, but that is probably not something you can achieve with something like TOR. This is a Sybil problem, by the way.


You're trying to solve a straightforward engineering problem with an unfit solution to an ill-defined problem. Solving the Sybil problem would not cover the case of a coordinated attack by multiple nefarious agents; you could call this a meat botnet owned by a master coordinator. Your solution would distinguish this from a normal botnet, but in the end your service goes down in the very same manner, and clients gave up most of their privacy for nothing.

Imagine instead the following trivial scheme: instead of burning resources, the client pays to be served, in reverse order of payment value. Let's say a client is willing to pay 1 cent to be served in the next 10 seconds. The attacker would have to pay more, since he has to occupy the whole head of this queue all the time to be successful. Let's say the server can process 100 rps: now it's making over a dollar per second, which it can use to scale its serving capacity.
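The bidding queue described above is a few lines with a max-heap (all names here are made up; payment settlement is out of scope for the sketch):

```python
import heapq
import itertools

class PaidQueue:
    """Serve requests in descending order of payment. To starve the
    queue, an attacker must continuously outbid every honest client,
    which turns his flood into revenue for the operator."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tiebreak among equal bids

    def submit(self, request, bid_cents: int):
        # heapq is a min-heap, so negate the bid to pop the highest first.
        heapq.heappush(self._heap, (-bid_cents, next(self._seq), request))

    def serve(self):
        neg_bid, _, request = heapq.heappop(self._heap)
        return request, -neg_bid
```

The sequence counter guarantees that equal bids are served first-come-first-served instead of raising a comparison error on unorderable request objects.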


Introducing the requirement to spend money to use the service would drastically reduce its value. It wouldn't be Tor anymore. Payments would make it easier to link identities and filter access to it. It would also mean not everyone could afford to pay for the service.

>and clients gave up most of their privacy for nothing.

Also, I'm not really sure how giving up privacy comes into this. Depending on how the scheme is implemented, you can still preserve all the privacy of using Tor with provisioning keys. E.g. you might keep verification hidden inside enclaves (so hosts cannot see the challenge protocol) or use zero-knowledge proofs to hide everything.

There may even be simpler constructions, since the certificate chain would be using something like RSA with SHA-256 (which allows some neat math tricks for transforming signatures, compared to other algorithms.)
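One such trick is RSA blinding: the client can obtain a valid signature on a message the signer never sees, so the signature can't be linked back to the signing session. A textbook-RSA toy (deliberately tiny, completely insecure parameters, purely illustrative):

```python
# Textbook RSA with toy parameters (completely insecure; demo only)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def blind(m: int, r: int) -> int:
    # Client multiplies the message by r^e; r must be coprime to n.
    return (m * pow(r, e, n)) % n

def sign_blinded(blinded: int) -> int:
    # Signer applies d without ever learning m.
    return pow(blinded, d, n)

def unblind(sig_blinded: int, r: int) -> int:
    # Dividing out r leaves an ordinary signature on m.
    return (sig_blinded * pow(r, -1, n)) % n

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m
```

This is the same primitive behind unlinkable-token schemes like Privacy Pass: the unblinded signature verifies normally, yet the signer can't match it to any blinded value it signed.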


Of course, this wouldn't work if you don't trust Intel's, AMD's, etc.'s certificates, and I don't see why you would in this application.


"Waiting for pair client connection". That'd be something. Interesting thought but I can imagine a range of issues.

In the same vein, how about the server holds a pool of IPs, and the client has to return a proof of port knocking? E.g.: here is a token; send it to this IP:port and wait for a unique response I can verify. Call this proof of latency. It would be low-CPU and would spread the load across various machines and ports. On the downside, of course, you need multiple IPs and potentially multiple servers. It could be implemented on the same machine, but that would shift the CPU load to port connections.
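The token/knock bookkeeping might look like the sketch below, with the actual sockets left out (in a real deployment the knock listener would sit behind a recv loop on its own IP:port; the HMAC construction and names are assumptions):

```python
import hashlib
import hmac
import os
import time

# Shared by the front server and all knock listeners (hypothetical).
SERVER_SECRET = os.urandom(32)

def issue_token(knock_port: int) -> bytes:
    # Token binds a timestamp and the assigned knock port under a MAC.
    body = int(time.time()).to_bytes(8, "big") + knock_port.to_bytes(2, "big")
    mac = hmac.new(SERVER_SECRET, body, hashlib.sha256).digest()
    return body + mac

def knock_response(token: bytes):
    # Run by the listener at the designated IP:port on receiving a knock.
    body, mac = token[:10], token[10:]
    if not hmac.compare_digest(
            hmac.new(SERVER_SECRET, body, hashlib.sha256).digest(), mac):
        return None  # forged or corrupted token
    return hmac.new(SERVER_SECRET, b"knock-ack" + token, hashlib.sha256).digest()

def verify_proof(token: bytes, resp: bytes) -> bool:
    # The front server checks the client actually completed the knock.
    expected = hmac.new(SERVER_SECRET, b"knock-ack" + token, hashlib.sha256).digest()
    return hmac.compare_digest(expected, resp)
```

Because listeners only need the shared secret, they stay stateless; the front server never has to track which tokens it handed out, only re-derive the expected acknowledgement.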


What does this prove about the client? Just that they have a reasonably fast connection (which in TOR-world can be painful to achieve), not that they aren't part of a botnet.

It allows you to scale your workload, but "just pay for more servers and outscale the attacker" isn't generally an acceptable way to deal with DDOS.



