But when you can use them, cookies are demonstrably better. XSS is the main argument against localStorage. Even this article[0], which pillories cookies, starts off with:
> ...if your website is vulnerable to XSS attacks, where a third party can run arbitrary scripts, your users’ tokens can be easily stolen [when stored in localStorage].
The reasons to avoid cookies:
* APIs might require an authorization header in the browser fetch call.
* APIs might live on a different domain, rendering cookies useless.
CSRF is a danger, that's true, but it can be worked around. My understanding is that XSS has a wider scope, and many modern frameworks come with CSRF protection built in[1], whereas XSS is a risk any time you (or anyone in the future) include any JS code on your website.
> * APIs might live on a different domain, rendering cookies useless.
That's when you implement a BFF which manages your tokens and shares a session cookie with your frontend while proxying all requests to your APIs. And as said, you "just" have to set up a way for your BFF to share CSRF tokens with your frontend.
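A minimal sketch of that token-exchange step (illustrative names, no particular framework assumed): the browser holds only an opaque session cookie, and the BFF swaps it for the real access token when proxying the request upstream.

```python
import secrets

class SessionStore:
    """Server-side store mapping opaque session ids to access tokens.
    Sketch only: a real BFF would add expiry, refresh, and persistence."""

    def __init__(self):
        self._sessions = {}

    def create(self, access_token: str) -> str:
        # The opaque session id is all the browser ever sees.
        session_id = secrets.token_urlsafe(32)
        self._sessions[session_id] = access_token
        return session_id

    def upstream_headers(self, session_id: str) -> dict:
        # Translate the session cookie into the Authorization header for
        # the proxied API call; the token itself never reaches the browser.
        token = self._sessions.get(session_id)
        if token is None:
            raise PermissionError("unknown or expired session")
        return {"Authorization": f"Bearer {token}"}
```

Because the token only lives server-side, an XSS payload running in the page can ride the session (make requests) but can never exfiltrate the token itself.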
Yup, big fan of the BFF. Philippe de Ryck did a presentation on the fundamental insecurity of token storage on the client that he allowed us to share: https://www.youtube.com/watch?v=2nVYLruX76M
If you can't use cookies (which, as mentioned above, have limits) and you can't use a solution like DPoP (which binds tokens to clients but is not widely deployed), then use the BFF. This obviously has other non-security related impacts and is still vulnerable to session riding, but the tokens can't be stolen.
CSRF is not as big of an issue as it used to be, and when it is an issue it can be solved more easily and comprehensively than XSS:
1. The default value of the SameSite attribute is now "Lax" in most browsers. This means that unless you explicitly set your authentication cookies to SameSite=None (and why would you?), you are generally not vulnerable to cookie-based CSRF (other forms of CSRF are still possible, but not relevant to the issue of storing tokens in local storage or cookies).
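As a sketch, a session cookie with the attributes discussed here might be assembled like this (hypothetical helper, Python purely for illustration):

```python
def session_cookie(name: str, value: str) -> str:
    # Build a Set-Cookie header value with defensive attributes:
    # - HttpOnly: scripts can't read the cookie, so XSS can't steal it
    # - Secure: only sent over HTTPS
    # - SameSite=Lax: not attached to cross-site POSTs (the modern
    #   browser default), which blocks classic cookie-based CSRF
    return f"{name}={value}; HttpOnly; Secure; SameSite=Lax; Path=/"
```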
2. Most modern SSR and hybrid frameworks have built-in CSRF protection for forms and you have to explicitly disable that protection in order to be vulnerable to CSRF.
3. APIs which support cookie authentication for SPAs can be deployed on another domain and use CORS headers to prevent CSRF, even with SameSite=None cookies.
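A sketch of point 3 (hypothetical origin list; one common implementation combines Origin-header validation on state-changing requests with echoing only known origins in the CORS response):

```python
ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical SPA origin

def is_allowed(method: str, origin) -> bool:
    # Safe methods may pass (they must not have side effects anyway);
    # state-changing requests must carry an Origin we recognize, which
    # a cross-site attacker page cannot forge.
    if method in ("GET", "HEAD", "OPTIONS"):
        return True
    return origin in ALLOWED_ORIGINS

def cors_headers(origin) -> dict:
    # Only echo back an explicitly allowed Origin; never combine a
    # wildcard with credentials. Unknown origins get no CORS headers,
    # so their preflights fail and responses stay unreadable.
    if origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
            "Vary": "Origin",
        }
    return {}
```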
On the other hand, there are no mechanisms which offer comprehensive protection from XSS. A single buggy JavaScript dependency is enough, and it's game over.
For this reason, OAuth 2.0 for Browser-Based Applications (draft)[1] strongly recommends using an HttpOnly cookie to store the access token:
"This architecture (using a BFF with HttpOnly cookies) is strongly recommended for business applications, sensitive applications, and applications that handle personal data."
With regards to storing access tokens and refresh tokens on local storage without any protection it says:
"To summarize, the architecture of a browser-based OAuth client application is straightforward, but results in a significant increase in the attack surface of the application. The attacker is not only able to hijack the client, but also to extract a full-featured set of tokens from the browser-based application. This architecture is not recommended for business applications, sensitive applications, and applications that handle personal data."
And this is what it has to say about storing the refresh token in a cookie, while keeping the access token accessible to JavaScript:
"When considering a token-mediating backend architecture (= storing only access token in local storage), it is strongly recommended to evaluate if adopting a full BFF (storing all tokens in a cookie) as discussed in Section 6.1 is a viable alternative. Only when the use cases or system requirements would prevent the use of a proxying BFF should the token-mediating backend be considered over a full BFF."
In short, the official OAuth WG stance is very clear:
1. HttpOnly cookies ARE better in terms of security.
2. Storing Refresh Tokens in local storage is only recommended for low-security use cases (no personal data, no enterprise compliance requirements).
3. Storing short-lived Access Tokens in local storage should only be considered if there are technical complexities that prevent you from using only cookies.
> Whilst Crowdstrike are going to cop a potentially existential-threatening amount of blame, an application shouldn't be able to do this kind of damage to an operating system.
It doesn't operate in user space; they install a kernel driver.
It's a design decision. People want the antivirus to protect them even if an attacker exploits a local privilege escalation vulnerability, or if an attacker who compromised an admin account (which happens all the time in Windows environments) wants to load malicious software. That's kind of the point of these things: somebody exploits a memory vulnerability in one of the hundreds of services on a system, and the antivirus is supposed to prevent that; to their credit, Crowdstrike is very good at this. If it didn't run in the kernel, an attacker with root could deactivate the antivirus. Since it's a kernel module, the attacker needs to load a signed kernel module, which is much harder to achieve.
Presumably Crowdstrike's driver also has the ELAM flag, which guarantees it will be loaded before any other third-party drivers, so even if a malicious driver is already installed they have the opportunity to preempt it at boot.
If we are being pedantic then an ELAM driver can't be guaranteed to load before another ELAM driver of course, but only a small list of vetted vendors are able to sign ELAM drivers so it is very unlikely that malware would be able to gain that privilege. That's the whole point.
Yep. We can't migrate our workstations to Ubuntu 24.04 because Crowdstrike's Falcon kernel modules don't support the kernel version yet. Presumably they wanted to move to eBPF, but I'm guessing that hasn't happened yet. Also: I can't find the source code of those kernel modules - they likely use GPL-only symbols; wouldn't that be a GPL violation?
I was given to understand that Crowdstrike provided some protection from unvetted export of data. I'm not sure that data would be useful without the rare domain expertise to use it, but I wasn't shown the risk analysis. And then someone else demands and gets ssh access to GitHub. Sigh.
I think "compliance" would be a better word than "safety" when it comes to a lot of "security" software on computers.
And I bring up the distinction because while compliance is "sometimes" about safety, it's also very often about the KPIs of particular individuals, or about imaginary liability for not having researched every possible "compliance" checkbox conceivable and made sure it's been checked.
Some computer security software is completely out of hand because its primary purpose is to have the appearance of effectiveness for the exec whose job is to tick off as many safety checkboxes as they can find, as opposed to being actually pragmatically effective.
If the same methodologies were applied to car safety, cars would be so weighed down by safety features that they wouldn't be able to go faster than 40km/h.
They mean distributing Linux + the module together. Like e.g. shipping the Nvidia kernel module alone is fine, but shipping a Linux distro with that module preinstalled is not fine.
Two different "it". As an analogy: selling pizza Hawaii is dicey, but you can sell pineapple slices and customers can add those to their pizza themselves.
Last time I dealt with HP, I had to use their fakeraid proprietary kernel module which "tainted" the kernel. Of course they never open-sourced it. I guess it's not necessary.
GPL exported symbols are the ones that are thought to be so tightly coupled to the kernel implementation that if you are using them, you are writing a derivative work of the kernel.
Yeah, that was also my understanding, and I can't imagine an AV module able to intercept filesystem operations and syscalls using only non-GPL-exported symbols. But of course you never know without decompiling the module.
Are they? Apple has pretty much banned kernel drivers (kexts) in macOS on Apple Silicon. When they were still used, they were a common cause of crashes and instability, not to mention potential gaping security holes.
Most things that third-party kernel drivers used to do (device drivers, file systems, etc) are now done just as well, and much more safely, in userspace. I'd be surprised if Microsoft isn't heading in this direction too.
Presumably, Crowdstrike runs on macOS without a kernel extension?
> Presumably, Crowdstrike runs on macOS without a kernel extension?
That's correct: CrowdStrike now only installs an "Endpoint Security" system extension and a "Network" system extension on macOS, but no kernel extension anymore.
One would hope that Crowdstrike does a similar thing on Linux and relies on fanotify and/or eBPF instead of using a kernel module. The other upside to this would be not having to wait for Crowdstrike to constantly update their code for newer kernels.
I believe so but would like better details. We used to use another provider that depended on exact kernel versions whereas the falcon-sensor seems quite happy with kernel updates.
Whatever protection is implemented in user-land can be removed from user-land too. This is why most EDR vendors are now gradually relying on kernel based mechanisms rather than doing stuff like injecting their DLL in a process, hooking syscalls, etc...
First, we were talking about EDR in Windows usermode.
Second, still, that doesn't change anything. You can make your malware jmp to anywhere so that the syscall actually comes from an authorized page.
In fact, in Windows environments, this is actively done ("indirect syscalls"), because indeed, having a random executable directly issue syscalls is a clear indicator that something is malicious. So they take a detour and have a legitimate piece of code (in ntdll) make the syscall for them.
The original Windows NT had a microkernel architecture, where a driver/server could not crash the OS. So no, Crowdstrike didn't really have an option, but Microsoft did.
As PC got faster, Microsoft could have returned to the microkernel architecture, or at least focused on isolating drivers better.
They've done it to a degree but only for graphics drivers, Windows is (AFAIK) unique amongst the major OSes in that it can nearly always recover from a GPU driver or hardware crash without having to reboot. It makes sense that they would focus on that since graphics drivers are by far the most complex ones on most systems and there are only 3 vendors to coordinate API changes with, but it would be nice if they broadened it to other drivers over time.
NT was never a true microkernel; most drivers are loaded into the kernel. Display drivers were a huge pain point, subsequently rolled back to user space in 2000, and printer drivers were the next pain point, primarily for security reasons -- hence moving to a Microsoft-supplied universal print driver, finally in Windows 11.
There's a grey area between "kernel drivers are required for crowdstrike" and "windows is not modular enough to expose necessary functionality to userspace". It could be solved differently given enough motivation.
There are other paths to the attack he mentioned. E.g. you find an API that accepts ciphertext, or part of one. Or a cloud backup/restore flow. Likely you need another vulnerability, but it does happen.
Useful because you can support existing passwords without requiring everyone to log in or reset their password. It still has flaws, though, like password shucking.
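A minimal sketch of the wrap-the-legacy-hash migration being described (hypothetical helper names; scrypt substituted for bcrypt since it ships in the Python stdlib, and MD5 standing in for whatever weak legacy scheme was in use):

```python
import hashlib
import hmac

def wrap_legacy_hash(legacy_md5_hex: str, salt: bytes) -> bytes:
    # Upgrade-in-place: feed the *stored hash* (not the password) into a
    # modern KDF, so every account is hardened immediately instead of
    # waiting for each user's next login.
    return hashlib.scrypt(legacy_md5_hex.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26)

def verify(password: str, salt: bytes, wrapped: bytes) -> bool:
    # Recompute the legacy layer, then the wrapper, and compare in
    # constant time.
    candidate = hashlib.md5(password.encode()).hexdigest()
    return hmac.compare_digest(wrap_legacy_hash(candidate, salt), wrapped)
```

The shucking flaw mentioned above: because the inner legacy layer survives, an attacker who matches these users against an old leaked dump of the inner hashes can strip the outer KDF entirely and attack the cheap legacy hash instead, which is why re-hashing the raw password on each user's next successful login is still the end goal.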