I haven't read the proposal in depth. But skimming, this stands out:
> With the web environment integrity API, websites will be able to request a token that attests key facts about the environment their client code is running in. For example, this API will show that a user is operating a web client on a secure Android device. Tampering with the attestation will be prevented by signing the tokens cryptographically.
I don't see what else this could be referring to besides bringing TPM "remote attestation" up through the software stack to the level of a web browser. By a "secure" Android device it must mean one running a corporate Android distribution (see: SafetyNet), a lockdown dynamic Google has already been pushing for a few years at least. Without tying it into the TPM, there would be literally no point to this specification as it could always be faked.
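To make the mechanics concrete, here's a rough sketch of the flow the passage describes. Every name below (the navigator method, the header, the verdict fields) is a hypothetical placeholder for illustration, not the proposal's actual API surface:

```typescript
// Hypothetical client-side flow: the page asks the browser for a signed token
// about its own environment and attaches it to a request. Method and header
// names are invented for illustration.
async function fetchWithAttestation(url: string): Promise<Response> {
  // The browser would obtain the token from a platform attester, ultimately
  // rooted in the TPM / secure hardware, which is what makes it hard to fake.
  const token: string = await (navigator as any).getEnvironmentIntegrity(
    "nonce-from-server" // binds the token to this session to prevent replay
  );
  return fetch(url, { headers: { "X-Env-Integrity": token } });
}

// Hypothetical server-side gate: verify the attester's signature, then decide
// whether to serve the client based on the attested "key facts".
interface AttestationVerdict {
  signatureValid: boolean; // checked against the attester's published key
  environment: {
    browser: string; // e.g. an "approved" browser build
    osIntegrity: "verified" | "unverified";
  };
}

function allowRequest(verdict: AttestationVerdict): boolean {
  // This is the lever at issue: the server, not the user, decides which
  // software stacks are acceptable.
  return verdict.signatureValid && verdict.environment.osIntegrity === "verified";
}
```

The important part is the last function: once a token like this exists, the decision about which clients get served moves from the user's machine to the website.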
The insidious thing about this spec is that it's not an immediate prescriptive lockdown the way corporate "secure" boot is. Rather, if it turns on tomorrow, Firefox, extensions, and community Linux distributions will all still work fine. But the long-term dynamic is that each of these nonstandard things will be stamped out in the name of "security" - look at how the SafetyNet requirements on Android are getting incrementally harder to "pass".
Fundamentally, this is entirely about whether interactions remain consensual. Right now, the demarcation point between user interests and website/server/company interests is the communications protocol itself. Your computer represents your interests, my computer represents my interests, and they may communicate with each other while still representing each of our interests. Once the remote parties you're communicating with can verify what code you are running, they are able to dictate what code you must run, even when it undermines your interests. Your only recourse becomes not communicating at all, which doesn't work in our world of imbalanced power relationships. Computing's revolutionary spark of personal autonomy gets shoved back in the bottle as far as the Web is concerned.
> centralized gate keepers determining what we can do with our systems. Or at least, that will be the default unless you can get grandpa's old TPM-free linux laptop working again
There's some nuance here. Likely you will still be able to "jailbreak" new devices, or even root them in a supported way as with Google's current Android devices. But doing so will make the device useless for accessing any website that insists on performing the verification. So sure, you can keep on using your nonstandard development environments just fine - most of the Web will be unavailable to them, though.
You will just need a second WebTV-like device for accessing banking websites, then shopping websites, then news websites. As I said, anywhere that currently pops up CAPTCHAs when you browse from less-surveillable IPs is a good indicator of the eventual adoption path. Said device will implement every restriction the website publishers can dream of - mandatory ads, no copy/paste, no screenshots, no access over VNC, no browser extensions, no protection from corporate surveillance, etc.
> And even if you do, you won't be able to connect it to the future internet to do anything real
That's a long way off and doesn't have any technical connection to this proposal. But one can imagine this proposal being one step in a chain of developments/legislation that brings us to that point.
>there would be literally no point to this specification as it could always be faked.
I disagree. There is a point to making something more difficult but not impossible: you alter behavior at a statistically significant scale, AND you get to point to the remaining alternative as a reason why the change isn't "coercive". In practice, 99% of users won't know to download an altered Chrome - they have a shaky understanding of "browser" versus "OS" as it is. In fact, I can imagine Googlers rationalizing this as a kind of shibboleth that keeps hacker culture alive.
Sure, I see where you're coming from, and much corporate software has traditionally worked in this quasi-consensual, hostile-by-default kind of way. But the specific terms used in that passage are highly indicative of this being intended as an implementation of remote attestation for the web.
Furthermore, even if the "key facts" it reports don't initially include the results of hardware remote attestation, it's entirely foreseeable that they will be added over time.
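Purely as an illustration of that ratchet (every field name here is invented), the web-facing API wouldn't even need to change for the token's claims to grow hardware-backed over time:

```typescript
// Invented field names, for illustration only.
interface IntegrityClaimsV1 {
  browser: string;       // "an approved Chrome build", say
  requestNonce: string;  // anti-replay binding
}

// A later revision could fold in SafetyNet/Play-Integrity-style hardware
// verdicts without websites changing a single line of code.
interface IntegrityClaimsLater extends IntegrityClaimsV1 {
  bootState: "verified" | "unlocked"; // measured boot via TPM / secure element
  osPatchLevel: string;
}
```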