Looks cool! A bit of crypto hygiene, though: I'd recommend passing the ECDH output through a KDF before using it with xsalsa20-poly1305.
Also, if you're using xsalsa20 anyway, why go through the trouble of incrementing nonces? One of the main benefits of an extended-nonce construction is that it greatly simplifies nonce handling. Why not randomly generate the 24-byte nonce and forget about the tricky, error-prone project of ensuring nonce uniqueness?
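To make the first suggestion concrete, here's a minimal sketch of both points: derive the symmetric key from the raw ECDH output with an HKDF, and draw the extended nonce at random. The HKDF is hand-rolled from the stdlib purely for illustration (in practice you'd use a vetted library, and feed the result into something like PyNaCl's SecretBox for the xsalsa20-poly1305 part); the `info` string and the placeholder shared secret are my assumptions, not anything from the original code.

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes = b"", length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Hypothetical ECDH shared secret -- in a real system this comes from X25519 etc.
shared_secret = os.urandom(32)

# Never use the raw DH output directly as a cipher key; run it through the KDF.
key = hkdf_sha256(shared_secret, info=b"myapp v1 xsalsa20poly1305", length=32)

# Extended nonce: 24 random bytes, no counter bookkeeping needed.
nonce = os.urandom(24)
```

With a 192-bit nonce the birthday bound makes random collisions a non-issue for any realistic message count, which is exactly why the extended-nonce construction lets you drop the incremental-nonce machinery.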
If anyone's ever wondering how carefully choreographed our HN presence is, please refer them directly to this subthread, which, for posterity's sake, I will note is occurring on the morning of July 4.
Sorry, didn’t intend for throwaway to come across as not standing behind the product.
I wrote the community forum announcement and was one of the people who worked on implementing our PAYG plan. I just wanted to clarify that we didn't deprecate it silently, so I quickly created an account to do so.
For those of you (like me) wondering where this apparently spooky constant comes from: it is a bitstring of the coefficients of the lexically first irreducible polynomial of degree b with the minimum possible number of non-zero terms, where b is the block size (in bits) of the underlying block cipher with which CMAC is instantiated. So, nothing up the sleeve here.
My natural follow-up question was "why can't you just have K1 = L?" Obviously it's inherited from CMAC, but why does CMAC do it?
Investigating further, general-case CMAC involves generating a K1 and a K2, which afaict just need to be arbitrarily different from each other. So why not something even simpler, like "xor with 1"?
The multiplication in CMAC is there to distinguish between full and partial final input blocks. It can't simply be a xor with a constant, because that would be easy to cancel in the input, and it wouldn't satisfy the xor-universal-like properties the security proof requires.
The input here is highly restricted so there's no point to it.
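For the curious, the doubling step that derives K1 and K2 from L is small enough to sketch. This assumes a 128-bit block cipher, so the reduction constant is 0x87 (the spooky constant from above); the placeholder L here is just zeros standing in for the real AES encryption of the all-zero block under the MAC key.

```python
# Sketch of CMAC subkey derivation (as in NIST SP 800-38B), 128-bit block.
R128 = 0x87  # low byte of the lexically-first minimal-weight irreducible poly

def dbl(block: bytes) -> bytes:
    """Multiply by x in GF(2^128): shift left one bit, then reduce by
    xoring in R128 if a bit was carried out of the top."""
    n = int.from_bytes(block, "big")
    carry = n >> 127
    n = (n << 1) & ((1 << 128) - 1)
    if carry:
        n ^= R128
    return n.to_bytes(16, "big")

L = bytes(16)   # placeholder for AES_K(0^128)
K1 = dbl(L)     # masks the last block when it arrives complete
K2 = dbl(K1)    # masks the last block when it was partial and got padded
```

Because K1 and K2 are obtained by this multiplication rather than a fixed xor, an attacker can't compensate for the mask by tweaking the message bits, which is what the distinguishing trick relies on.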
I should perhaps have said "works round" rather than "fixes" - yes, it has to run over UDP, but the QUIC team had an explicit goal of not allowing the network to interfere further: see sec 3.3 and 7.5 of https://dl.acm.org/doi/pdf/10.1145/3098822.3098842
This now means that innovations based on QUIC can occur.
Since both QUIC and UDP are transport layers, does this mean that once switches and ISPs start supporting QUIC alongside UDP and TCP, a new version of QUIC can be released which doesn't require UDP?
Switches don't need to change anything to support QUIC, precisely because it uses UDP. So, the deployment of QUIC doesn't involve any technical changes that would enable a version of QUIC that worked directly over IP. It's possible that the defeat of middleboxes would change the motivation of ISPs to filter out alternate protocol numbers, meaning that there might be less resistance to getting them to allow that in the future - but doing so is still a big exercise.
Also, the overhead of a UDP packet is 8 bytes total, in four 16-bit fields:
- source port
- dest port
- length
- checksum
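That's the entire header, which you can see by packing it by hand; a quick sketch (the port and length values here are made up for illustration):

```python
import struct

# The whole UDP header: four unsigned 16-bit fields, network byte order.
def pack_udp_header(src_port: int, dst_port: int, length: int, checksum: int) -> bytes:
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# length covers the header itself plus the payload
hdr = pack_udp_header(53124, 443, 8 + 1200, 0)
print(len(hdr))  # 8 -- the full per-packet overhead being discussed
```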
So, we can save at most 8 bytes per packet - how many of these can we practically save? The initial connection requires source and dest ports, but it also sets up connection IDs, so theoretically you could drop the ports on subsequent packets (but that's pure speculation, with zero due diligence on my part). It would require operational changes and would make NAT traversal tricky (so practically IPv6-only).
Length and checksum you could maybe save - QUIC carries multiple frames per UDP datagram, so there must be another length field in there anyway. (The IETF is busily inventing a UDP options format on the basis that the UDP length field is redundant with the IP packet length, so it can be reused to point at an options trailer.) QUIC has integrity protection, but it doesn't seem to apply to all packet types - though I guess a checksum could be retained only for the packet types that lack it.
So in sum, maybe you could save up to 8 bytes per packet, but it would still be a lot of work to do so (removing port numbers especially).
"security" is a term that has to be defined in relation to a threat model. If your threat model is an attacker with a static IP hammering your server, fail2ban does provide some security against that sort of attacker.
No, it does not. If the packet is at your door, it is already too late. Then either it does not matter, in which case you do nothing, or it matters (DoS), and then you have other problems.
You are right that security works in the context of a threat model. There are however useless tools that give a false sense of "security" that do not fit in any reasonable model.
I have cases where I block whole ranges of IPs for "legal" reasons - it does not make sense, but there you are: the ones who write the rules are not the ones who actually know the stuff.
> No it does not. If the packet is at your door it is too late already.
Too late for what? Again, it only makes sense to talk about "security" in the context of a threat model. You can debate the reasonableness of that threat model, but that's another discussion.
My threat model (for the sake of argument :^)) is an attacker with a static public IP address trying to brute-force access to my service via repeated login attempts.
I'll maintain (for now) that fail2ban can be an effective tool that provides some security against an attacker of this kind.
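For that narrow threat model, the stock sshd jail is all it takes; a minimal `jail.local` sketch (the thresholds here are illustrative values, not recommendations):

```ini
[sshd]
# ban an IP after 5 failed logins within 10 minutes, for 1 hour
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

A static-IP brute-forcer trips `maxretry` quickly and stays locked out for the whole `bantime`, which is exactly the attacker this config is meant to slow down.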
You wrote that someone is hammering your IP. That, for me, is the definition of a DoS. Nothing on your side will mitigate it.
But it does not really matter anyway. Your threat model is a single IP attacking you. What are you concerned about? That they will find services that are exposed and attack them? You should be securing these.
You will never be attacked by one IP. The exact same attack will be done from many, many IPs, and you do not want to defend against the IPs attacking you, but against them exploiting a vulnerability on your side.
Of course, there is the "why not an extra layer of protection" argument. That is great when you want to obscure something (moving a port, for instance), because it has no effect on your system. Now imagine what happens when fail2ban goes south and blocks all addresses, or half of them, or yours because you tried too many times. It is a moving part that is actually dangerous.
If your server is on the internet with a public ssh server, then it is probably providing some sort of internet service. That internet service is almost always easier to DoS than your OpenSSH server. If you are not providing an internet service, then why is your SSH open to the internet?
What is the networking difference between a service for yourself that you want to access from "various places" and a public service with auth checks for your key?
Sure, but are L7 attacks easier than L4 against those servers? Adding more layers/software has a cost in configuration, maintenance, attack-surface, etc.
In context, I believe interpretation number 1 stands on firmer ground than yours.
In the previous paragraph:
> Unlike his wife, Hemingway never went ashore at Normandy. On June 6, all he could do was watch from a landing craft as American soldiers fought their way onto Omaha Beach.
> Even though Gellhorn scooped Hemingway, his story ran first. “Voyage to Victory,” proclaimed the cover of Collier’s July 22, 1944, issue. The article identified Hemingway as “Collier’s famed war correspondent” and included a photo of the whiskered writer with Allied soldiers.
Only then does the section conclude with the line in question:
> No mention was made of the fact that she was the only female journalist on the ground at Omaha Beach.
You can argue that the author just finds these and other facts interesting, and nothing more. I think that is ignoring the clear subtext present in the writing. The author is certainly making the case that she was not given the recognition she deserved; either because of her sex or her proximity to Hemingway. Because the author himself invoked her sex in the final line, I am inclined to think the former.