> Inside the office, the WiFi was connected to the VPN (i.e., VPN client was only required off-site) … and the office WiFi was WPA2-PSK. And of course the PSK a. was not a good PSK and b. did not rotate when employees left the company.
Yeah, I've never found network topology to be a good way to manage trust. Even ignoring employees sticking Raspberry Pis under desks or reconnecting using non-rotated credentials, it's ridiculously easy to trick apps into making internal requests that they believe are external ones. Slack had a big security incident when its URL unfurler, which runs on its internal network, started making requests to other internal services, thinking they were external websites. The problem is that internal addresses were implicitly trusted, and people have endpoints like /quitquitquit hanging around, and that combination ends up being game over.
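The unfurler failure mode has a well-known partial mitigation: resolve the target before fetching it and refuse anything that lands in non-public address space. A minimal sketch (the function name is mine, and a real deployment would also have to pin the resolved IP for the actual request, or DNS rebinding defeats the check):

```python
# Hypothetical SSRF guard for something like an unfurler: resolve the
# host and reject private, loopback, link-local, or reserved addresses.
import ipaddress
import socket

def is_internal(host: str) -> bool:
    """True if any address the host resolves to is non-public."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # fail closed: if we can't resolve it, don't fetch it
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False
```

Note that this is exactly the kind of bolted-on network-topology check the comment is arguing against; it patches one hole rather than removing the implicit trust.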
Ultimately, every request depends on at least two pieces of information: what user is making this request, and what downstream application is making this request. Most people only take the first into account, and thus these problems recur. (Because operators get mad when they "kubectl port-forward" in and the application rejects their debug requests because "random unauthenticated HTTP request" does not meet the security requirements. This, of course, is a good thing. For security, anyway.)
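To make the two-identity point concrete, here's a sketch of an authorization check that requires both the user and the calling service before allowing a sensitive endpoint. The policy table and names are made up for illustration:

```python
# Sketch: authorize on (user role, calling service), not on network
# position. Both identities must be present and explicitly allowed.
SENSITIVE_POLICY = {
    # endpoint -> set of (user_role, calling_service) pairs allowed
    "/quitquitquit": {("sre", "deploy-controller")},
}

def authorize(endpoint, user_role, calling_service):
    allowed = SENSITIVE_POLICY.get(endpoint)
    if allowed is None:
        return False  # default deny for endpoints with no policy
    if user_role is None or calling_service is None:
        return False  # a missing identity is an automatic rejection
    return (user_role, calling_service) in allowed
```

Under this model the port-forwarded curl fails not because it came from the "wrong" network, but because it carries neither identity, which is the rejection the parenthetical describes.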
> Getting working SSO, even with just a decent MFA experience & then getting a service to authz with it has been considerably complex.
Yeah, the industry seems to have settled on OIDC, which is significantly more complicated than it needs to be for both the operator and the application developer. I really like the way Google's managed auth proxy works, and I'm surprised it's not more popular. If the request goes through the proxy, it injects a signed header with the user information in it. The application uses a few lines of code (ok, JWKS is involved, so a fair number of lines to keep the list of trusted public keys up to date) to verify the signature and extract the username, and then can make an authorization decision. No cookies, no redirects.
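The shape of that signed-header pattern fits in a few lines. Google's proxy uses asymmetric JWTs verified against a published JWKS; to keep this sketch self-contained I've substituted an HMAC shared between proxy and app, and the header format and function names are my own:

```python
# Minimal sketch of the proxy-injected signed header pattern, with an
# HMAC standing in for the JWT + JWKS machinery of the real thing.
import hashlib
import hmac

def sign_user_header(user: str, key: bytes) -> str:
    """What the proxy would inject, e.g. in an X-Authenticated-User header."""
    mac = hmac.new(key, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{mac}"

def verify_user_header(header: str, key: bytes):
    """Returns the username, or None if the signature doesn't check out."""
    user, _, mac = header.rpartition(".")
    expected = hmac.new(key, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(mac, expected) else None
```

The application never sees a login flow; it just refuses any request whose header fails verification, which is why no cookies or redirects are needed on its side.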
I ended up writing my own proxy that uses username + WebAuthn to authenticate and pass this information on to applications behind the proxy, and it's nicer than any auth solution I've paid 100000x more for. I can FaceID into internal status pages when I'm out drinking, impressing everyone! OK, not very many people are impressed, but they can at least see the thing I want to show them. I'm surprised there's no maintained OSS thing that works like this.