For one, Do Not Track lives on the client side and you just hope and pray that the server honors it, whereas a cookie consent modal is something built into and served by the website itself.
I think you can reasonably assume that if a website went through the trouble of making such a modal (for legal compliance reasons), the functionality works (also for legal compliance reasons). And, you as the client can verify whether it works, and can choose not to store them regardless.
I would assume most websites still set cookies even if you reject consent, because the consent only covers cookies that aren't technically necessary. Just because the website sets cookies doesn't tell you whether it respects your selection. Only if it sets no cookies at all can you be sure, and I would assume that's a small minority of websites.
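Verifying this client-side comes down to inspecting the Set-Cookie headers a site sends back after you reject consent. A minimal sketch, parsing a made-up example header offline with the Python stdlib (the cookie name and values are placeholders, not from any real site):

```python
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header a site might send even after you
# rejected consent. In practice you'd read this from the HTTP response.
header = "tracking_id=abc123; Path=/; Max-Age=31536000"

jar = SimpleCookie()
jar.load(header)
for name, morsel in jar.items():
    # A year-long Max-Age on a "tracking_id" would be a red flag here.
    print(name, morsel.value, morsel["max-age"])
```

Whether a given cookie is "technically necessary" is of course a judgment call; this only shows you what was set, not why.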
In a 64 bit register, e.g., RAX, AL refers to the lowest 8 bits [0-7] and AH refers to the next 8 bits [8-15].
Together, AX refers to bits [0-15]. EAX refers to [0-31].
It's counterintuitive (or at least inconsistent) that we have a name for bits [8-15] but not for [16-31] or [32-63]. My fuzzy understanding is that this is a legacy artifact: AH/AL date back to the 16-bit 8086, whose registers were split into high/low halves to ease porting from the 8-bit 8080, and no new names were added when the registers later widened to 32 and 64 bits.
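The aliasing above can be sketched with plain bit masks (Python here, with an arbitrary example value standing in for RAX):

```python
# RAX is the full 64-bit register; the named sub-registers are just
# overlapping views of its low bits. The value is an arbitrary example.
rax = 0x1122334455667788

al  = rax & 0xFF             # AL:  bits [0-7]
ah  = (rax >> 8) & 0xFF      # AH:  bits [8-15]
ax  = rax & 0xFFFF           # AX:  bits [0-15]
eax = rax & 0xFFFF_FFFF      # EAX: bits [0-31]

print(hex(al), hex(ah), hex(ax), hex(eax))
# 0x88 0x77 0x7788 0x55667788
```

Note there's no named register for bits [16-31] or [32-63]; you'd have to shift and mask yourself, exactly as above.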
Often when you're calling something "manual" you're taking something off the automated happy path to tediously and slowly wind the thing through its complex process by hand. The difference between manual coding and human coding is tedium and laboriousness. It's not laborious to program, but the phrase "manual coding" evokes that image.
Maybe that’s what they’ve been doing? No one using Vim, Emacs, or Unix as an IDE would say they do manual coding with the amount of automation that usually goes there.
Has anyone tried running a generic User-Agent on a standard modern browser? (By "generic", I mean one that does away with this whole compatibility dance.) I'm curious how much would break or degrade.
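For anyone who wants to try, here is a minimal sketch of sending a deliberately generic User-Agent with stdlib Python. Both the UA string and the URL are made-up placeholders:

```python
import urllib.request

# A generic UA with none of the "Mozilla/5.0 (compatible; ...)"
# compatibility baggage. String and URL are placeholders.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "GenericBrowser/1.0"},
)
# urllib normalizes header keys via .capitalize(), hence "User-agent".
print(req.get_header("User-agent"))
```

Most browsers let you do the equivalent via an about:config setting or an extension; the interesting part is seeing which sites then break or degrade.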
I once tried having my web browser claim to be the Google spider. It worked well and I completely forgot about it until one website soft banned me a year or two later for impersonating the Google spider. I contacted them to complain. They lifted the ban and told me the ban reason, which made me remember the experiment I never terminated.
All of the privacy stuff straight up breaks the internet these days. Try using a VPN: half the time CDNs refuse to serve you the JS files, so sites like supermarkets or flight booking websites load broken or show some generic forbidden error.
Blanking out the user agent would have to be pushed by Apple, Google, or Microsoft. And of those, I feel like only Apple would do it. iCloud Private Relay doesn't end up breaking websites because companies care about not degrading the experience for Apple users and make sure the site still works.
I tried doing that in the early 2010s. Even back then it didn't work (github broke for example). If you did it today, you'd likely be blocked by a lot of major websites for "lying" about your user agent. Cloudflare turnstiles will stop working, you'll get captcha'd to death, and so on.
Even tor-browser doesn't dare to modify the user agent string in any major way. It's almost impossible to lie about because trackers don't actually care about your user agent. They're identifying your device/OS through side channels (canvas, webgl, fonts, etc).
Wrt the Tor browser, it's not that they don't dare to; it's that they don't want to. One of the goals of that browser is to not stick out too much, and changing the user agent would do just that, so they don't do it.
Then the ideal would be to normalize the user agent string to look identical on every platform. My point is: they can't do that. E.g., a Linux machine identifying itself as Windows would be spotted immediately. Instead, they have to reduce entropy by bucketing you according to your device/OS/arch.
I don't think there is a point there. In case of the Tor browser, they use the user agent to blend in, so they are not a good candidate to do anything about how stupid the user agent is.
It's the current heavyweights who could change it for the better: Google and Apple. If either introduced a major change in how they present the user agent, websites would be very quick to adapt (if they need to in the first place...), or else. Otherwise, no change will happen, and I think that's what we'll get, same as with the HTTP "Referer" header (a misspelling of "referrer").
Fun fact: non-browsers actually have much nicer user agent strings. I run an internet radio station, and there are a lot of clients like
Linux UPnP/1.0 Sonos/85.0-64200 (ZPS1) Nullsoft Winamp3 version 3.0 (compatible)
> In case of the Tor browser, they use the user agent to blend in, so they are not a good candidate to do anything about how stupid the user agent is.
No. They don't use it to blend in. If they wanted to blend in they would be modifying every platform's user agent string to look like Windows x86_64 or something. They don't do that because there's no way they could possibly get away with it.
Instead, they're resigned to simply censoring the minor version number of the browser to reduce entropy.
> Fun fact: non-browsers actually have much nicer user agent strings. I run an internet radio station, and there are a lot of clients like
And those tools will get blocked by various CDNs for not having a browser user agent string, not having a browser-like TLS handshake, etc. This is why projects like curl-impersonate and golang's utls had to be created.
I will also plug the Keyboard.io Model 100 here. It has a ton of mounting options including tripod screws for custom solutions. The halves are connected with a standard ethernet cable so you can have them any distance apart.
Governments should not operate fiscally like corporations. A financial institution will budget around fees because it's in their benefit for their customers to incur fees. A government should not budget around fines because they want the behavior which was fined to not occur at all.
I think one way to prevent bad incentives is to ensure that the organizational units that create and enforce policies are not the ones that benefit from any fines collected.
Maybe fines could fund a uniform tax credit/refund for each citizen covered by that level of government. We the citizens can then decide whether to fix the issue or continue to generate fines, but at least the budget isn't counting on revenue that could disappear (like the drop in traffic tickets at the beginning of COVID).
How about fines go into a sovereign wealth fund (not counted as a major source for the fund, more of a bonus), so there is no short-term budget planning based on fine revenue?
If CSS is disabled, or you're using a browser that does not implement CSS, that might also be an issue. (A mode that disables CSS should ideally still handle ARIA attributes (unless the user disables those too), but not all implementations will do this; actually, I don't know of any that does, and mine doesn't seem to. That's especially true of implementations written before ARIA attributes were invented.)
atuin is a record of past commands, which can serve as training data for generating commands in the future. If I'm asking an AI to essentially generate commands, my previous inputs ideally would be part of the basis.