We're actually seeing this play out right now with server-based age assurance systems, which are already widely deployed and mandated under the UK Online Safety Act and laws in about 25 US states. In many cases, the sites just comply, presumably because they are worried that the regulators have a way to reach them even if they aren't hosted in the relevant jurisdiction. In some cases, however, the sites just ignore the regulations or tell the regulators to pound sand, as 4chan is doing with the UK's Ofcom: https://www.bbc.com/news/articles/c624330lg1ko
> 1) The parental responsibility is given to the wrong people. You're basically being forced by law to give all apps and websites your child's age on request, and then trusting those online platforms to serve the right content (lol). It should be the other way around. The apps and websites should broadcast the age rating of their content, and the OS fetches that age rating, and decides whether the content is appropriate by comparing the age rating to the user's age. The user's age, or age bracket, or any information about the user at all, should not leave the user's computer.
FWIW, this is not quite an accurate description of AB1043, in at least three respects:
1. Apps don't get your exact age, just an age range.
2. Websites don't get your age at all.
3. AB1043 itself doesn't mandate any content restrictions; it just says that the app now has "actual knowledge" of the user's age. That's not to say that there aren't other laws which require age-specific behaviors, but this particular one is pretty thin on this point.
In addition, I certainly understand the position that the age range shouldn't leave the computer, but I'm not sure how well that works technically, assuming you want age-based content restrictions. First, a number of the behaviors that age assurance laws want to restrict are hard to implement client side. For example, the NY SAFE For Kids Act forbids algorithmic feeds, and for obvious reasons that's a lot easier to do on the server. Second, even if you do have device-side filtering, it's hard to prevent the site/app from learning what age brackets are in place, because they can experimentally provide content with different age markings and see what's accepted and what's blocked. Cooper, Arnao, and I discuss this in some more detail on pp. 39-42 of our report on Age Assurance: https://kgi.georgetown.edu/research-and-commentary/age-assur...
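A toy version of that probing attack, with hypothetical labels and a simulated OS filter standing in for whatever the device actually does:

    # The app never asks for the age bracket; it offers content tagged
    # with different minimum ages and watches what the filter lets through.
    BRACKETS = ["<13", "13-15", "16-17", "18+"]

    DEVICE_BRACKET = "13-15"  # hidden on-device setting the app wants to learn

    def os_filter_allows(content_label: str) -> bool:
        # Simulated OS-side filter: allow content at or below the bracket.
        return BRACKETS.index(content_label) <= BRACKETS.index(DEVICE_BRACKET)

    def infer_bracket() -> str:
        # The most "adult" label that still renders reveals the bracket.
        return max((b for b in BRACKETS if os_filter_allows(b)), key=BRACKETS.index)

    print(infer_bracket())  # prints "13-15" without any age API being exposed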
I'm not saying that this makes a material difference in how you should feel about AB 1043, just trying to clarify the technical situation.
Regarding what to do with algorithmic feeds: instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook and prevent it from being installed by the child. We could implement a password lock for app installation/updates at the OS level, enabled in the phone's settings, that works like Linux's sudo. Every time you install/uninstall/update an app, it asks for a password. Then parents would be able to choose which apps can run on their child's device.
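A minimal sketch of what such a gate could look like (everything here is illustrative: pkg_install stands in for the real package manager, and the stored hash would be set once by the parent in the OS settings):

    import getpass
    import hashlib
    import hmac

    # Demo value only: in a real OS the parent would set this once in
    # settings, with proper salting and key stretching.
    STORED_HASH = hashlib.sha256(b"hunter2").hexdigest()

    def parental_gate() -> bool:
        entered = hashlib.sha256(getpass.getpass("Parental password: ").encode()).hexdigest()
        return hmac.compare_digest(entered, STORED_HASH)

    def pkg_install(app: str) -> None:
        print(f"installing {app}...")  # stand-in for the real package manager

    def install(app: str) -> None:
        if parental_gate():
            pkg_install(app)
        else:
            print("install blocked: wrong password")

    install("some-social-app")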
Notice their strategy: these companies make it hard or impossible for you to uninstall preloaded apps, they make it hard to develop competing apps and OSes, and they degrade the non-preloaded software UX on purpose, which creates the artificial need to filter the feeds on the existing platforms that these companies control. They also monopolize the app store and gatekeep which apps can be listed on it, and which OS APIs non-affiliated apps can use. Instead of accepting that and settling for just filtering those existing platforms' feeds, we should have the option to abandon them entirely.
We need the phone hardware companies to open-source their device firmware and drivers, and to let the device owner lock/unlock the bootloader with a password, so that we never again have a situation like the current one, where OSes come preinstalled with bloat like TikTok or Facebook, the bootloader is locked so you can't even install a different OS, and your phone becomes a brick when they stop providing updates. If we allowed software competition, child protection would never have been a problem in the first place, because people would be able to make child-friendly toy apps and toy OSes, and control what apps and OS can run on the hardware they purchased. Parents would have lots of child-friendly choices. This digital parenting problem was manufactured by the same companies now trying to sell us a "solution" like this Cali bill, or in other cases ID verification, which coincidentally makes it easier for them to track people online.
> instead of forcing platforms like Facebook to be less evil, we should give parents the ability to simply uninstall Facebook, and prevent it from being installed by the child.
Isn't that how parental controls already work?
There are problems, though:
1. The kids want to use Facebook. If parent A refuses to let their kid use Facebook, then kids B, C, D, E, F... all use Facebook and kid A becomes a social outcast. This actually happens. (Well, now it's other apps; kids don't use Facebook anymore.) This is similar to the mobile-phones-in-schools problem: if a parent doesn't let their kid bring a phone to school, and all the other parents do, that creates social isolation. When the school district bans the phones, it solves the problem for everyone. (So it's a collective action problem, really.)
2. Web browsers. Unless the parent is going to uninstall and disallow web browser use, the kid can still sign into whatever service they want using the web browser. I don't think parental controls block specific sites, and even if they do, there are ways around that, certainly.
I am very often the person who says that parents should actually parent their kids and not rely on the government to nanny them. But in this case I think there actually is value to the government making laws that make Facebook (etc.) less evil. And as a bonus, maybe they'll be forced to be less evil to adults too!
It's possible to mandate effective parental controls, then say "it's illegal to give your child access to Facebook," and just see what happens. You don't have to jump straight to making it technologically guaranteed by construction; maybe it's enough to just give parents the tools and an excuse to say no.
We don't need DNA testing locks on cans of beer that won't let you drink from them unless you're an adult, do we? It's perfectly possible for a parent to buy their child all the beer they want, and there's nothing stopping the children from trying to peer pressure them into it, and in many countries it's not even generally illegal to let your child drink beer! And yet almost all parents are able to almost completely enforce a reasonable level of restricted access, simply because society frowns upon it.
1. The current norm of social siloing apps was created by these tech companies in the first place. What regulators can do is discourage anti-competitive practices that lock users into specific software and hardware platforms. If there's plenty of competition for every kind of social app, and competition for OSes, and users could freely choose and move between them, then not having a particular app would not result in social isolation. This affects adults as well.
2. The OS has a firewall. But it's currently not user-controllable on your phone. Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites. We can even make it possible to import/export site lists as a txt file so that you can download/share a curated block list that you or other parents made, to block many things at once. You could also do this for your entire home WiFi network in your WiFi router's settings, if your router's firmware has that feature.
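For illustration, the import half could be as simple as this (assuming a plain newline-delimited domain list; hosts-style null routes stand in for whatever blocking mechanism the phone firewall actually uses):

    # Turn a shared, newline-delimited blocklist into /etc/hosts-style
    # null-route entries. A firewall UI could consume the same .txt file.
    from pathlib import Path

    def load_blocklist(path: str) -> list[str]:
        lines = Path(path).read_text().splitlines()
        # Skip blanks and comment lines so lists can be annotated and shared.
        return [l.strip() for l in lines if l.strip() and not l.startswith("#")]

    def to_hosts_entries(domains: list[str]) -> str:
        return "\n".join(f"0.0.0.0 {d}" for d in domains)

    if __name__ == "__main__":
        print(to_hosts_entries(load_blocklist("blocklist.txt")))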
And yeah, I agree that we should make the platforms less evil in general. But I think the way to do that is to give people the ability to easily ditch bad platforms and build new ones. Let the platforms actually compete, and the best will prevail. Right now, they don't prevail because of layers and layers of anti-competitive barriers. It would take great technical effort to regulate all the tricks these tech companies use, which is why I propose dealing with it at the root: make it so that all computer/phone hardware manufacturers must open-source their device drivers and firmware, and let the user lock/unlock the bootloader and install alternative OSes. If we do this, then the entire software ecosystem will fix itself over time, along with all the downstream problems.
> Phone companies have decided you don't need that feature. But actually, they can easily implement a nice UI in the settings for the firewall and lock it behind a password, then parents would be able to use it to block individual websites.
iOS: Settings > Screen Time > Content & Privacy Restrictions > Toggle on
Then same area:
- App Installations & Purchases: disallow all
- App Store, Media, Web & Games > Web Content > Limit Adult Websites > Fill in allowlist and/or denylist, or Only Approved Websites and fill in allowlist
Apple is indeed better than most other companies on #2. But that's because it's the worst offender on #1. Its strategy is to appear to be the model company that cares about user rights and privacy, in hopes of capturing everyone in their closed-source walled garden that's already surveilling you at the OS level.
They're a part of the corp-gov surveillance complex [0]. This is the real threat behind the age verification push. The feds already have mass surveillance capabilities in iOS and macOS, and even Windows and most Android distros, but not on most open-source Linux distros, so they're starting to force it legally in the open. They're desperate because Linux is about to outcompete the enshittified Windows on desktops.
If I may nitpick, the conventional term for systems which attempt to determine the user's age is "age assurance". This covers a variety of techniques, which are typically broken down into:
* Age estimation, which is based on statistical models of some physical characteristic (e.g., facial age estimation).
* Age verification, which uses identity documents such as driver's licenses.
* Age inference, which tries to determine the user's age range from some identifier, e.g., by using your email address to see how old your account is.
These distinctions aren't perfect by any means, and it's not uncommon to see "age verification" used for all three of these together, but more typically people are using "age assurance".
It actually is more like a flag in most cases. Specifically, in the case of AB1043, you enter your age or your DOB but then the OS provides an age range (<13, 13-15, 16-17, 18+).
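In code terms, the OS-side mapping is roughly this (bracket cutoffs follow the ranges above; treat them as my reading of the bill, not statutory text):

    from datetime import date

    def age_range(dob: date, today: date | None = None) -> str:
        # Map a self-reported DOB to the age bracket that apps can query.
        today = today or date.today()
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        if age < 13:
            return "<13"
        if age <= 15:
            return "13-15"
        if age <= 17:
            return "16-17"
        return "18+"

    # A user born 2011-06-01 is 13 on 2025-01-01, so apps see only "13-15".
    print(age_range(date(2011, 6, 1), today=date(2025, 1, 1)))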
Also, while some bills do seem to require browsers to promulgate age data to websites (e.g., NY SB102A [0]), AB1043 does not. Rather, it requires the browser to read the age range just like any other app, but doesn't say anything about providing it to sites.
OP is certainly right that a lot of this legislation is written in ways that are hard to interpret and that often seem like they would have undesirable side effects even under the assumption that the basic idea is good (whether that's actually true is a whole different question).
In the specific case of CA AB1043: (1) Systems are required to ask the user for their age and just trust whatever they say (2) Applications are required to query the system for the user's age range. Other enacted and proposed device-based age assurance mandates have different properties.
I think this legislation is as dumb as everyone else does, but it also seems like the cheapest way for everyone to agree that we did something about the moral panic without actually giving up anything. It doesn’t do anything with ID or privacy or even actual verification. There’s no complicated auth dance to do with government services to verify our age tokens or whatever the latest Rube Goldberg machine “zero knowledge” age check proposal is.
I’ve been shocked at how many HN comments come out in favor of age-related legislation and heavy government regulation when the topic comes up. The pro-regulation commenters always seem to assume the age checks would never apply to them because they don’t use TikTok or Facebook or other services, yet few realize that the laws aren’t going to be written in a way that applies only to a couple of named companies you don’t use anyway. If we get age verification laws, then they’re going to be everywhere.
I personally hope this legislation dies and we can be done with this silly exercise, but if we’re stuck with the age verification moral panic, then a simple OS-level switch that we set once and then forget about seems like the least intrusive form of “age verification” we can get away with.
I think the law is written with both intentions: to push companies to comply and to keep the public from objecting. If it weren't, there wouldn't be a guy on TV saying that there are 5,000 possible pedo cases that aren't being investigated and that's why they need it.
Anyone with more than 2 brain cells can put it together.
> I personally hope this legislation dies and we can be done with this silly exercise, but if we’re stuck with the age verification moral panic, then a simple OS-level switch that we set once and then forget about seems like the least intrusive form of “age verification” we can get away with.
Just for clarification: CA AB1043 was signed back in 2025 and takes effect January 1, 2027.
> I’ve been shocked at how many HN comments always come out in favor of age related legislation and heavy government regulation when the topic comes up.
Where do you see that? HN is overwhelmingly critical of age sniffing.
I disagree with your overall sentiment that this is benign because it's ineffectual in its current state. If anything, this is going to warm people up to the idea of government-mandated prompts gathering personal information in their OS, and legislators in 2030 (or whenever) are going to say: "this isn't working, let's build on top of that prompt we already have and make it verify IDs".
In other words, I think this first bit of legislation had to be watered down to not receive too much backlash. This is the government's first plunge into mandating things on the frontend.
You're on the right path, but the "something" politicians want to do is specifically "regulate Facebook's patent harms to children". Facebook's counter-argument is: "we don't have a legally ironclad way to check user age, it should be Apple and Google's job". So the politicians want to write a law to make it Apple and Google's job to check age.
In other words, all of these age verification laws are here predominantly to indemnify Facebook from a growing wave of child endangerment lawsuits in a way that will ensure Facebook doesn't have to kick off even a single teen from their platforms. That's why the "verification" is just a date and an age range bucket.
My personal opinion is that these laws are stupid, but not harmful to Linux users, and that everyone angry at systemd for complying is shooting the wrong guy. Your real target is Facebook and you should be yelling at your local representative to make this bill not target Linux distros.
No, we can also be mad at the systemd guys for their very mid attempt at complying with an idiotic and unenforceable law, when the default of doing nothing was objectively the best option for them AND their end users.
> Systems are required to ask the user for their age and just trust whatever they say
If you're going to do anything like this, this is the thing they actually get right. It removes the inconvenience, privacy invasion, forced use of corporate verifiers with perverse incentives, etc. Meanwhile if the user is actually a child then their age is set by their parent.
> Applications are required to query the system for the user's age range.
This is classic legislative stupidity. Applications are required to query the user's age range even if they contain no age-restricted content? Brilliant.
>> Systems are required to ask the user for their age and just trust whatever they say
>
> This is the thing they actually get right. It removes the inconvenience, privacy invasion, forced use of corporate verifiers with perverse incentives, etc. Meanwhile if the user is actually a child then their age is set by their parent.
Well, maybe. For instance, if a child buys their own device they could set the age to whatever they want.
>> Applications are required to query the system for the user's age range.
>
> This is classic legislative stupidity. Applications are required to query the user's age range even if they contain no age-restricted content? Brilliant.
Note that AB1043 doesn't actually impose much in the way of requirements about age restricted content. Rather, the way it works is that the developer is then assumed to have "actual knowledge" of the user's age (See 1798.501(b)(2)(A)) and then has to behave accordingly in other age-restricted contexts.
It requires the device/computer have a way to set the age. If you don't want to set your real age, that's fine. If you are a kid, your parent will probably have set it for you (it's really a feature for the parent, and they don't have to use it).
It then establishes that apps can know your age group, sufficient to comply with existing (and I suppose future) content age-restriction laws (where today they can dodge and say they did not know).
It's a pretty incremental step, and fairly minimal (in the range of all options proposed around the world). We can try it and see how it goes.
> For instance, if a child buys their own device they could set the age to whatever they want.
If a child has the money to buy a device without the parent knowing about it then they could just buy a used device that has already been configured with an account or pay a high school senior to set one up on their new device.
> Rather, the way it works is that the developer is then assumed to have "actual knowledge" of the user's age (See 1798.501(b)(2)(A)) and then has to behave accordingly in other age-restricted contexts.
How is mkdir or python3 supposed to "behave accordingly in other age-restricted contexts"? And if the answer is that its behavior is entirely unmodified, why is it required to do something without effect?
Also, who is the "developer" of a thirty year old project with thousands of contributors and multiple forks? All of them? None of them? The last one to make a commit, even if they're outside the jurisdiction?
> > For instance, if a child buys their own device they could set the age to whatever they want.
> If a child has the money to buy a device without the parent knowing about it then they could just buy a used device that has already been configured with an account or pay a high school senior to set one up on their new device.
Yes, agreed. I'm just describing how it works.
> > Rather, the way it works is that the developer is then assumed to have "actual knowledge" of the user's age (See 1798.501(b)(2)(A)) and then has to behave accordingly in other age-restricted contexts.
> How is mkdir or python3 supposed to "behave accordingly in other age-restricted contexts"? And if the answer is that its behavior is entirely unmodified, why is it required to do something without effect?
> Also, who is the "developer" of a thirty year old project with thousands of contributors and multiple forks? All of them? None of them? The last one to make a commit, even if they're outside the jurisdiction?
Then the law can make it illegal to sell smartphones or computers to 12-year-olds, or we could just ask the parents to do a bit of work and ensure their children aren't buying devices behind their backs.
The idea is to make it easy for responsible parents to give a device to their children and make it easy for legal websites to block minors from adult content. We can't get perfect results, but good enough could shut up the complainers, and maybe we get them to do things like educating parents on how to proceed when they gift a device to a child.
> This is classic legislative stupidity. Applications are required to query the user's age range even if they contain no age-restricted content? Brilliant.
This is classic programmer stupidity: attempting to read the law in the stupidest possible way. No - if the application needs to know the user's age because of a content restriction, it shall query the system for that instead of getting it some other way. Unlike computer code, laws are understood by humans in context.
> DNSSEC can be trivially used with DANE to protect the entire session. The browser vendors quite consciously decided to NOT do that.
100%. The reasons why are explained in some detail here: https://educatedguesswork.org/posts/dns-security-dane/. The TL;DR is that by the time DANE was created the WebPKI already existed and was universal and so adding DANE didn't buy you anything because you still were going to have to have a WebPKI certificate more or less in perpetuity.
> This is the outcome of browser vendors not caring at all about privacy and security.
This is false. The browser vendors care a great deal about privacy and security. Source: it was my job at Mozilla to care about this, amongst other things. It may be the case that they have different priorities than you.
> You're saying that to provide service for anything over the Web, you have to publish all your DNS names in a globally distributed immutable log that will be preserved for all eternity?
Well, back when people were taking DNSSEC and DANE more seriously, there was a lot of talk of doing DNSSEC Transparency.
> And that you can't even have a purely static website anymore because you need to update the TLS cert every 7 days? This is just some crazy talk!
This is hyperbole, because nobody is forcing you to update the TLS cert every 7 days. It's true that the lifetimes are going to go down to 47 days eventually and LE offers 6-day certificates, but those are both optional and non-default.
Moreover, the same basic situation applies to DNSSEC, because your zone also needs to be signed frequently, for the same underlying reason: disabling compromised or misissued credentials.
> The TL;DR is that by the time DANE was created the WebPKI already existed and was universal and so adding DANE didn't buy you anything because you still were going to have to have a WebPKI certificate more or less in perpetuity.
Yet somehow they managed to wrangle hundreds of CAs to use the CT logs and to change the mandated set of algorithms.
> Well, back when people were taking DNSSEC and DANE more seriously, there was a lot of talk of doing DNSSEC Transparency.
And this would have been great. But it only needs to make transparent the changes in delegation (actually, only DS records) from the TLD to my zone. Not anything _within_ my zone.
And tellingly, the efforts to enable delegation in the WebPKI are going nowhere, even though X.509 has supported it from the beginning (via name constraints, a critical extension).
> This is hyperbole, because nobody is forcing you to update the TLS cert every 7 days.
The eventual plan is to have shorter certs. 47 days will be mandated by 2029.
It also doesn't really change my point: I can't have a purely static server anymore and expect it to be accessible.
> Moreover, the same basic situation applies to DNSSEC, because your zone also needs to be signed frequently, for the same underlying reason: disabling compromised or misissued credentials.
That's incorrect. I've been using the same key (inside my HSM) since 2016. And I don't have to update the zone if it's unchanged. DNSSEC is actually _more_ secure than TLS, because zone signing can be done fully offline. With TLS, the key material is often a buggy memcpy() away from the corrosive anonymous Internet environment.
So you can rotate the DNSSEC keys, but it's neither mandated nor necessary. The need for short-lived certs for TLS is because there's no way to check their validity online during the request (OCSP is dead and CRLs are too bulky). But with DNSSEC if at any point my signing key is compromised, I can just change the DS records in the registrar to point to my updated key.
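For concreteness, the DANE binding being discussed is tiny: a common "3 1 1" TLSA record is just a SHA-256 digest over the server's SubjectPublicKeyInfo, computable fully offline. A sketch using the Python cryptography package (file path and names illustrative):

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Compute the digest for a "3 1 1" TLSA record (DANE-EE, SPKI, SHA-256).
    cert = x509.load_pem_x509_certificate(open("server.pem", "rb").read())
    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    print("_443._tcp.example.com. TLSA 3 1 1", hashlib.sha256(spki).hexdigest())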
> > The TL;DR is that by the time DANE was created the WebPKI already existed and was universal and so adding DANE didn't buy you anything because you still were going to have to have a WebPKI certificate more or less in perpetuity.
>
> Yet somehow they managed to wrangle hundreds of CAs to use the CT logs and to change the mandated set of algorithms.
I'm not sure I see the connection here. What I'm saying is that the benefit for sites to adopt DANE is very low because as long as there are a lot of non-DANE-using clients out there they still need to have a WebPKI cert. This has nothing to do with CT and not much to do with the SHA-1 transition.
Re: your broader point about static sites, I don't think you're correct about the security requirements. Suppose for the sake of argument that your signing key is compromised: sure you can change the DS records but the attacker already has a valid DNSSEC record and that's sufficient to impersonate you for the lifetime of the record (recall that the Internet Threat Model is that the attacker controls the network so they can just send whatever DNS responses they want). What prevents this is that the records expire, so the duration of compromise is the duration of those records, just like with the WebPKI without revocation [0]. The same thing is true for the TLSA records signed by your ZSK.
In the DNSSEC/DANE paradigm, then, there are two signatures that have to happen regularly:
- The signature of the parent over the DS records, attesting to your ZSK.
- The signature of your ZSK over the TLSA records.
In the WebPKI paradigm, the server has to regularly contact the CA to get a new certificate. [1]
I agree with you that one advantage of DNSSEC is that that signing can all be done offline and then the data pushed up to the DNS servers, but it's still the case that something has to happen regularly. You've just pushed that off the TLS server and into the DNS infrastructure.
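To make the expiry math concrete, a toy sketch of the compromise window (all times invented):

    from datetime import datetime, timedelta

    # The usable lifetime of a signed record is bounded by the RRSIG
    # expiration, so a stolen key keeps working until the last signature
    # it produced expires, no matter how quickly you swap the DS records.
    inception = datetime(2025, 1, 1)
    expiration = inception + timedelta(days=14)  # a typical re-signing interval
    compromise = datetime(2025, 1, 5)

    window = expiration - compromise
    print(f"attacker window after key rotation: {window.days} days")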
More generally, I'm not sure what you mean by a "purely static server". TLS servers are inherently non-static because they need to do the TLS handshake, and I think the available evidence is that the ACME exchange isn't that big a deal.
[0] As an aside, all the major browsers now have some compressed online revocation system, but that's not necessarily a generalizable solution.
[1] When we first were designing LE and ACME, I advocated for the CA to proactively issue new certificates over the old key, but things didn't end up that way, and of course you'd still need to download it.
> I'm not sure I see the connection here. What I'm saying is that the benefit for sites to adopt DANE is very low because as long as there are a lot of non-DANE-using clients out there they still need to have a WebPKI cert.
I find this argument laughable. Adding support in just 4 browsers and to iOS/Android would have moved something like 99% of traffic to DANE. The long tail could have been tackled incrementally. A lot of it doesn't even care about validation anyway.
> Re: your broader point about static sites, I don't think you're correct about the security requirements. Suppose for the sake of argument that your signing key is compromised: sure you can change the DS records but the attacker already has a valid DNSSEC record and that's sufficient to impersonate you for the lifetime of the record
I said it before, and let me repeat it: long TTLs for DNS records are operational malpractice at this point, even disregarding DNSSEC. Having a TTL of more than 15 minutes provides no practical advantage outside the root zone.
> I agree with you that one advantage of DNSSEC is that that signing can all be done offline and then the data pushed up to the DNS servers, but it's still the case that something has to happen regularly. You've just pushed that off the TLS server and into the DNS infrastructure.
Why? What additional security do I gain from periodic ZSK/KSK rotations? Especially if I keep the private key material offline (in an HSM), which is not possible for TLS, btw.
> In the WebPKI paradigm, the server has to regularly contact the CA to get a new certificate. [1]
Except that ACME does not enforce private key rotation. I think most infrastructure now rotates them, but the old key will still be valid for the duration of the compromised cert. And unlike typical 1-2 hour DNS TTLs, WebPKI certs will be valid for days or weeks.
So yeah, I don't see any reason why WebPKI is _technically_ superior. I can see it being superior because of the browser vendors' support.
> I'd argue that the only difference is that browser vendors care about protecting against MITM on the client side. They're fine with MITM on the server side or with (potentially state-sponsored) BGP prefix hijacks. And I'm not fine with that personally.
Speaking as someone who was formerly responsible for deciding what a browser vendor cared about in this area, I don't think this is quite accurate. What browser vendors care about is that the traffic is securely conveyed to and from the server that the origin wanted it to be conveyed to. So yes, they definitely do care about active attack between the client and the server, but that's not the only thing.
To take the two examples you cite, they do care about BGP prefix hijacks. It's not generally the browser's job to do something about it directly, but in general misissuance of all stripes is one of the motivations for Certificate Transparency, and of course the BRs now require multi-perspective validation.
I'm not sure precisely what you mean by "MITM on the server side". Perhaps you're referring to CDNs which TLS terminate and then connect to the origin? If so, you're right that browser vendors aren't trying to stop this, because it's not the business of the browser how the origin organizes its infrastructure. I would note that DNSSEC does nothing to stop this either because the whole concept is the origin wants it.
> I'm not sure precisely what you mean by "MITM on the server side".
For the vast majority of Let's Encrypt certs, you only need to transiently MITM the plain HTTP traffic between the server and the rest of the net to obtain the certificate for its domain. There will be nothing wrong in the CT logs, just another routine certificate issuance.
It is possible to limit this with, yes, DNS. But then we're back to square one with DNS-based security. Without DNSSEC the attacker can just MITM the DNS traffic along with HTTP.
Google, other browser makers, and large services like Facebook don't really care about this scenario. They police their networks proactively, and it's hard to hijack them invisibly. They also have enough ops to properly publish the CAA records, which will likely be visible to at least one of Let's Encrypt's validation perspectives.
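Checking what a domain has authorized is straightforward; here's a sketch with dnspython (the domain is a placeholder):

    import dns.resolver

    # CAA records name the CAs allowed to issue for this domain.
    try:
        for rr in dns.resolver.resolve("example.com", "CAA"):
            print(rr.flags, rr.tag, rr.value)
    except dns.resolver.NoAnswer:
        print("no CAA records: any CA may issue")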
To detect the misissuance you would run something that compares the certs requested by the server with the certs actually issued and included in the log. If you don't care (and most people don't) then you don't detect it.
With DNSSEC, the public key is communicated to the top-level domain registry through out-of-band means. Presumably over a secure TLS link that can't be MITM-ed. The hash of the public key ("DS record") is, in turn, signed by the TLD's key. Which in turn is signed by the well-known root zone key.
So the adversary won't be able to fake the DNSSEC signatures, even if they control the full network path. They need to compromise your registry, at the very least.
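dnspython can reproduce that DS step directly, which makes the chain easy to see (sketch; the zone name is a placeholder):

    import dns.dnssec
    import dns.name
    import dns.resolver

    # Fetch the zone's DNSKEYs and compute the SHA-256 DS digest that the
    # parent zone signs; only key-signing keys (SEP bit set) get DS records.
    name = dns.name.from_text("example.com.")
    for key in dns.resolver.resolve(name, "DNSKEY"):
        if key.flags & 0x0001:  # SEP bit
            print(dns.dnssec.make_ds(name, key, "SHA256"))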
DNS underlies domain authority, and the validity of every connection to every domain name ultimately traces back to DNS records. The amount of infrastructure needed to stand up something like HTTPS's PKI is huge, and thus SSH and other protocols rely on trust-on-first-use (unless you manually hard-code public keys yourself, which doesn't happen). DNS offers a standard, delegable PKI that is available to all clients regardless of the transport protocol.
With DNSSEC, a host with control over a domain's DNS records could use that to issue verifiable public keys without having to contact a third party.
I ran into this while working on decentralized web technologies, and building a parallel to the WebPKI just wasn't feasible. We could totally have fed clients DNSSEC-validated certs, but it wasn't supported.
Thanks for the explanation. It seems like there are two cases here:
1. Things that use TLS and hence the WebPKI
2. Other things.
None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.
That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
> None of what you've written here applies to the TLS and WebPKI case, so I'm going to take it that you're not arguing that DNSSEC validation by clients provides a security improvement in that case.
It would benefit the likes of Wikileaks. You could do all the crypto in your basement with an HSM without involving anyone else.
> That leaves us with the non-WebPKI cases like SSH. I think you've got a somewhat stronger case there, but not much of one, because those cases can also basically go back to the WebPKI, either directly, by using WebPKI-based certificates, or indirectly, by hosting fingerprints on a Web server.
But do they? That requires adding support for another protocol.
I would like to live in a world where I don't have to copy/paste SSH keys from an AWS console just to have the peace of mind that my SSH connection hasn't been hijacked.
In practice, fleet operators run their own PKIs for SSH, so tying them to the DNSSEC PKI is a strict step backwards for SSH security.
There may be other applications where a global public PKI makes sense; presumably those applications will be characterized by the need to make frequent introductions between unrelated parties, which is distinctly not an attribute of the SSH problem.
And for everyone else that just wants to connect to an SSH session without having to set up a PKI themselves? Tying that to the records used to find the domain seems like the obvious place to put that information to me!
DNSSEC lets you delegate a subtree in the namespace to a given public key. You can hardcode your DNSSEC signing key for clients too.
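That's essentially what SSHFP records encode: a hash of the host key blob, published in the (signed) zone. A sketch of the digest computation, with synthetic demo key material:

    import base64
    import hashlib
    import struct

    def sshfp_digest(pubkey_line: str) -> str:
        # The SSHFP digest is SHA-256 over the raw key blob, i.e. the
        # base64 payload of an OpenSSH "ssh-ed25519 AAAA..." line.
        blob = base64.b64decode(pubkey_line.split()[1])
        return hashlib.sha256(blob).hexdigest()

    # Build a syntactically valid ed25519 blob purely for demonstration.
    demo_blob = struct.pack(">I", 11) + b"ssh-ed25519" + struct.pack(">I", 32) + bytes(32)
    demo_line = "ssh-ed25519 " + base64.b64encode(demo_blob).decode()

    # Algorithm 4 = Ed25519, fptype 2 = SHA-256 (RFC 7479).
    print("host.example.com. SSHFP 4 2", sshfp_digest(demo_line))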
Don't get me started on how badly VPN PKI is handled....
Yes, modern fleetwide SSH PKIs all do this; what you're describing is table stakes and doesn't involve anybody delegating any part of their security to a global PKI run by other organizations.
The WebPKI and DNSSEC run global PKIs because they routinely introduce untrusting strangers to each other. That's precisely not the SSH problem. Anything you do to bring up a new physical (or virtual) machine involves installing trust anchors on it; if you're in that position already, it actually harms security to have it trust a global public PKI.
The arguments for things like SSHFP and SSH-via-DNSSEC are really telling. It's like arguing that code signing certificates should be in the DNS PKI.
No, we run a fleet with thousands of physicals and hundreds of thousands of virtuals, of course we don't hardcode keys in our SSH configuration. Like presumably every other large fleet operator, we solve this problem with an internal SSH CA.
Further, I haven't "moved on to another argument". Can you answer the question I just asked? If I have an existing internal PKI for my fleet, what security value is a trust relationship with DNSSEC adding? Please try to be specific, because I'm having trouble coming up with any value at all.
We also have thousands of devices accessible over SSH and we maintain our own PKI for this purpose as well. We also use mTLS with a private CA and chain of trust, for what it's worth.
PEM actually gets used? People depend on it? It hasn't been a market success, but if the root keys for DNSSEC ended up on Pastebin this evening, almost nobody would need to be paged, and you can't say that about PEM.
Multicast gets used (I think unwisely) in campus/datacenter scenarios. Interdomain multicast was a total failure, but interdomain multicast is more recent than DNSSEC.
Fair enough on Multicast and HIP. I'm less sure about the case for PEM.
S-HTTP was a bigger failure in absolute terms (I should know!) but it was eventually published as Experimental and the IETF never really pushed it, so I don't think you could argue it was a bigger failure overall.
There really has been a 30+ year full-court press to make DNSSEC happen, including high-effort coordination with both operators and developers. I think the only comparable effort might be IPv6. But IPv6 is succeeding (slowly), and DNSSEC seems to have finally failed.
(I hate to IETFsplain anything to you so think of this as me baiting you into correcting me.)
To really nerd out about it, it seems to me there are two metrics.
1. How much it failed (i.e., how low adoption was).
2. How much effort the IETF and others put into selling it.
From that perspective, I think DNSSEC is the clear winner. There are other IETF protocols that have less usage, but none that have had anywhere near the amount of thrust applied as DNSSEC.
It's actually not safe for clients to perform local validation because a quite significant fraction of middleboxes strip out RRSIGs or otherwise tamper with the records in such a way that the signatures don't validate.
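You can check whether a given path preserves signatures with a DO-bit query; a dnspython sketch (resolver address illustrative):

    import dns.message
    import dns.query
    import dns.rdatatype

    # Ask for A records with the DO bit set and see whether RRSIGs survive
    # the path; if they don't, local validation fails even for signed zones.
    q = dns.message.make_query("example.com", "A", want_dnssec=True)
    resp = dns.query.udp(q, "8.8.8.8", timeout=3)
    has_rrsig = any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in resp.answer)
    print("RRSIGs survived the path:", has_rrsig)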