ClickHouse has a pretty good github_events dataset on its playground that folks can use to do some research. Some info on the dataset: https://ghe.clickhouse.tech/
Yeah. It would be interesting to see who adopted the compromised versions and how quickly, compared to how quickly they normally adopt new versions (not bots pulling upgrades, but how quickly maintainers approve and merge them)
If there were a bunch of people who adopted it abnormally fast compared to usual, that might point to there being more "bad actors" in this operation (said at the risk of sounding paranoid if this turns out to be a state-run thing)
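If anyone wants to poke at that, here's a rough sketch of a starting query from Python over the playground's HTTP interface. The endpoint, the play user, and the column names are my assumptions from skimming ghe.clickhouse.tech, so adjust as needed:

    import requests

    # Hypothetical starter: repos with "xz" in the name, ranked by PR
    # activity, plus when they first appear in the dataset.
    query = """
    SELECT repo_name, count() AS events, min(created_at) AS first_seen
    FROM github_events
    WHERE event_type = 'PullRequestEvent' AND repo_name ILIKE '%xz%'
    GROUP BY repo_name
    ORDER BY events DESC
    LIMIT 20
    FORMAT TSV
    """

    resp = requests.post("https://play.clickhouse.com/",
                         params={"user": "play"}, data=query, timeout=30)
    resp.raise_for_status()
    print(resp.text)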
> If the code is complex, there could be many more exploits hiding.
Then the code should not be complex. Low-level hacks and tricks (like pointer juggling) should not be allowed, and simplicity and readability should be preferred.
Yes, but my point was that at the level of performance tools like this are expected to operate at, it’s highly probable that you’ll need to get into incredibly esoteric code. Look at ffmpeg – tons of hand-written Assembly, because they need it.
To be clear, I have no idea how to solve this problem; I just don’t think saying that all code must be non-hacky is the right approach.
Performance can be bought with better hardware. It gets cheaper and cheaper every year. Trustworthiness cannot be purchased in the same way. I do not understand why performance would ever trump clean code, especially for code that processes user-provided input.
This attitude is how we get streaming music players that consume in excess of 1 GiB of RAM.
Performant code needn’t be unclean; it’s just often using deeper parts of the language.
I have a small project that became absolute spaghetti. I rewrote it to be modular, using lots of classes, inheritance, etc. It was then slower, but eminently more maintainable and extensible. I’m addressing that by using more advanced features of the language (Python), like memoryview for IPC between the C libraries it calls. I don’t consider this unclean, but it’s certainly not something you’re likely to find in a Medium article or Twitter take.
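For anyone curious, the core of the trick is just that a memoryview exposes a buffer without copying it; a toy illustration (not my actual project code):

    # Slices of a memoryview share the underlying buffer, so regions can be
    # handed to C extensions expecting a buffer without any copies.
    buf = bytearray(1024)
    view = memoryview(buf)
    header, payload = view[:16], view[16:]   # zero-copy slices
    payload[0] = 0xFF
    assert buf[16] == 0xFF                   # the write landed in buf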
I value performant code above nearly everything else. I’m doing this for me, there are no other maintainers, and it’s what I enjoy. You’re welcome to prioritize something else in your projects, but it doesn’t make other viewpoints objectively worse.
Performant code does not need to be unclean, exactly! My original point was just to not put performance on a pedestal. Sure, prioritize it, but correct and clean should come first - at least for foundational libraries that others are supposed to build upon.
I maintain ML libraries that run on microcontrollers with kilobytes of memory, performance is a friend of mine ;)
I suggest you run your browser's JavaScript engine in interpreter mode to understand how crippling the simple and straightforward solution is to performance.
I guess because at server-farm level, performance/efficiency translates to real millions of USD in savings. In general, at both ends of the scale (the cloud and the embedded), this matters a lot. In resource-limited environments like the Raspberry Pi, this design philosophy wins over many users, from DIY hobbyists to private industry.
I hate this argument. If current hardware promises you a theoretical throughput of 100 MB/s for an operation, someone will try to hit that limit. Your program that has no hard-to-understand code but gives me 5 MB/s will lose in the face of a faster one, even if that means writing harder-to-understand code.
No, but often it is far worse than 95%. A good example is random.randint() vs math.ceil(random.random() * N) in Python. The former is approximately 5x slower than the latter, but they produce effectively the same result with large enough values of N. This isn’t immediately apparent from using them or reading docs, and it’s only really an issue in hot loops.
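Easy to verify with timeit if you're skeptical; exact ratios vary by Python version and machine:

    import timeit

    # Micro-benchmark of the two approaches discussed above.
    n = 1_000_000
    t1 = timeit.timeit("random.randint(1, 1000)",
                       setup="import random", number=n)
    t2 = timeit.timeit("math.ceil(random.random() * 1000)",
                       setup="import random, math", number=n)
    print(f"randint: {t1:.2f}s  ceil(random()*N): {t2:.2f}s")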
Another favorite of mine is bitshifting / bitwise operators. Clear and obvious? Depends on your background. Fast as hell? Yes, always. It isn’t always needed, but when it is, it will blow anything else out of the water.
Bitwise is highly context dependent. There are simple usages, like shifts to divide/multiply by 2; idiomatic patterns that are clean when wrapped in good, reusable, restricted macros, like common register manipulation on microcontrollers; and other uses that range from involuntary obfuscation to competition-grade obfuscation.
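Agreed, and the clean version usually just means naming the bits. A tiny illustration with a made-up register layout:

    # Named flags instead of bare magic shifts; the layout is hypothetical.
    STATUS_READY = 1 << 0
    STATUS_BUSY  = 1 << 1
    STATUS_ERROR = 1 << 3

    def has_flag(register: int, flag: int) -> bool:
        return register & flag != 0

    def set_flag(register: int, flag: int) -> int:
        return register | flag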
> There are simple usages like shifts to divide/multiply by 2.
Clean code should not do that; the compiler will do it for you.
Clean code should just say what it wants to do, not replace it with low-level performance optimizations. (Also, wasn't performance supposed to be obtained from newer hardware?)
Faster and more complex hardware can also have bugs or back doors, as can cheaper hardware. That said, I'm not happy with buggy and untrustworthy code either.
If this is a conspiracy or a state-sponsored attack, they might have gone specifically for embedded devices and the linux kernel. Here archived from tukaani.org:
> XZ Embedded is a relatively small decompressor for the XZ format. It was developed with the Linux kernel in mind, but is easily usable in other projects too.
> *Features*
> * Compiled code 8-20 KiB
> [...]
> * All the required memory is allocated at initialization time.
This is targeted at embedded and real-time stuff. It could even be part of boot loaders in things like buildroot or RTEMS. And this means potentially millions of devices, from smart toasters or toothbrushes to satellites and missiles, most of which can't be updated with security fixes.
One scenario for malicious code in embedded devices would be a kind of killswitch that listens for a specific byte sequence and crashes when encountering it. For a state actor, having such an exploit would be gold.
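The scary part is how small that concept is. A purely illustrative sketch, with made-up trigger bytes:

    TRIGGER = b"\xde\xad\xbe\xef"  # hypothetical magic sequence

    def decompress(stream: bytes) -> bytes:
        if TRIGGER in stream:
            raise SystemExit(1)  # "crash" on the killswitch sequence
        # ... normal decompression would continue here ...
        return stream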
One of my complaints about so many SciFi stories is the use of seemingly conventional weapons. I always thought that with so much advanced technology that weapons would be much more sophisticated. However if the next "great war" is won not by the side with the most destructive weapons but by the side with the best kill switch, subsequent conflicts might be fought with weapons that did not rely on any kind of computer assistance.
This is eerily similar to Einstein's (purported) statement that if World War III was fought with nuclear weapons, World War IV would be fought with sticks and stones. Similar, but for entirely different reasons.
I'm trying to understand why the characters in Dune fought with swords, pikes and knives.
> I'm trying to understand why the characters in Dune fought with swords, pikes and knives.
At least part of the reason is that the interaction between a lasgun and a shield would cause a powerful explosion that would kill the shooter too. No one wants that and no one will give up their shield, so they had to go back to melee weapons.
No, there is an in-world reason, at least for the absence of drones. Wikipedia:
> However, a great reaction against computers has resulted in a ban on any "thinking machine", with the creation or possession of such punishable by immediate death.
tl;dr - Machine intelligences existed in Dune history, were discovered to be secretly controlling humanity (through abortion under false pretenses, forced sterilization, emotional/social control, and other ways), then were purged and replaced with a religious commandment: "Thou shalt not make a machine in the likeness of a human mind"
No, and there is a (piloted) drone attack in the first book -- Paul is attacked by a hunter-seeker.
The reason nobody tries to use the lasgun-shield interaction as a weapon is because the resulting explosion is indistinguishable from a nuclear weapon, and the Great Convention prohibits the use of nukes on human targets.
Just the perception of having used a nuclear device would result in the House which did so becoming public enemy #1 and being eradicated by the Landsraad and Sardaukar combined.
@Potro: If you liked the movie, read the books. I don't read a lot anymore, but during sick leave I started with the first book. Didn't stop until I finished the main story, including the sequels by Frank Herbert's son about a month later. That's like... uh... nine books?
In the book Paul is attacked by an insect-like drone while in his room. The drone was controlled by a Harkonnen agent placed weeks in advance inside a structure of the palace, so it was also a suicide mission: the agent had no chance to escape and would die of hunger/thirst if not found.
It's related to excessive coupling between modules and low cohesion.
There is a way for programs to implement the systemd readiness notification protocol without using libsystemd, and thus without pulling in liblzma, which is coupled to libsystemd even though the readiness notification protocol does not require any form of compression. libsystemd provides a wide range of things which have only weak relationships to each other.
There are in fact two ways, as two people independently wrote their own client code for the systemd readiness notification protocol, which really does not require the whole of libsystemd and its dependencies to achieve. (It might be more than 2 people nowadays.)
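For reference, the whole readiness protocol amounts to sending a datagram to the unix socket named in $NOTIFY_SOCKET. A minimal client, going from the sd_notify documentation:

    import os
    import socket

    def sd_notify(state: str = "READY=1") -> None:
        addr = os.environ.get("NOTIFY_SOCKET")
        if not addr:
            return  # not running under systemd
        if addr.startswith("@"):
            addr = "\0" + addr[1:]  # abstract socket namespace
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
            s.connect(addr)
            s.send(state.encode())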
This is only evidence that libsystemd is popular. If you want to 0wn a bunch of systems, or even one particular system but make it non-obvious, you choose a popular package to mess with.
BeOS isn't getting a lot of CVEs attached to it these days. That doesn't mean it's good or secure, though.
It's easy to have your existing biases validated if you already dislike systemd. The reality is that systemd is much more coherently designed than its predecessors from an 'end user interface' point of view, hence why its units are largely portable, etc., which was not the case for sysvinit.
The reality is that it is not systemd specifically but our modern approach to software design, where we tend to rely on too much third-party code and delight in designing extremely flexible, yet ultimately extremely complex, pieces of software.
I mean, this is even true of the various CPU attack vectors of recent years: yes, speculative execution is a neat and 'clever' optimization that we rely on for speed, but maybe that was just too clever a path to go down, and we should have stuck with simpler designs that would have meant slower speedups but a more solid foundation to build future CPU generations on.
Let's be real, sshd loading random libraries it doesn't actually need, because distros patched in a kitchen-sink library, is inexcusable. That kitchen-sink library is libsystemd, and it follows the same kitchen-sink design principle that systemd opponents have been criticising all along. But it's easier to accuse them of being biased than to consider that maybe they have a point.
People hate systemd from an ethical, philosophical, and ideological standpoint.
People love systemd for the efficiency, economics, etc.
It's like ideal vs production.
That is just technical disagreements and sour grapes by someone involved in a competing format (Lzip).
There’s no evidence Lasse did anything “wrong” beyond looking for / accepting co-maintainers, something package authors are taken to task for not doing every time they have life catching up or get fed up and can’t / won’t spend as much time on the thing.
Yes, nothing points to the inventor of the format, who has maintained it for decades, having done anything with the format to make it suspect. If he had, the recent backdoor wouldn't have been needed.
It's good to be skeptical, but don't drag people through the mud without anything to back it up.
If a project targets a high-profile, very security-sensitive project like the Linux kernel from the start, as the archived tukaani website linked above shows, it is justified to ask questions.
Also, the exploit shows high effort, a high level of competence, and a very obvious willingness to play a long game. These are not circumstances for applying Hanlon's razor.
Are you raising the same concerns and targeting individuals behind all other sensitive projects? No, because that would be insane.
It's weird to have one set of standards for a maintainer active since 2009 or so, and different standards for others. This witch hunt is just post-hoc smartassery.
Yes, I think if a project has backdoors and its old maintainers are unable to review them, I am more critical than with normal projects. As said, compression is used everywhere and in embedded systems, it touches a lot of critical stuff. And the project went straight for that since the beginning.
And this is in part because I cannot even tell for sure that he exists. If I had met him a few times in a bar, I would be more inclined to believe he is not involved.
> As said, compression is used everywhere and in embedded systems, it touches a lot of critical stuff. And the project went straight for that since the beginning.
> You appeal to trust people and give them the benefit of doubt, which is normally a good thing. But is this appropriate here?
Yes.
Without evidence to the contrary there is no reason to believe Lasse has been anything other than genuine so all you're doing is insulting and slandering them out of personal satisfaction.
And conspiratorial witch hunts are actively counter-productive, through that mode of thinking it doesn't take much imagination to figure out you are part of the conspiracy for instance.
1. An important project has an overburdened / burnt out maintainer, and that project is taken over by a persona who appears to help kindly, but is part of a campaign of a state actor.
2. A state actor is involved in setting up such a project from the start.
The first possibility not only means being an asshole to the original maintainer, it is also more risky: the original maintainer surely feels responsible for his creation and could ring alarm bells. This is not unlikely, because he knows the code. And alarm bells are something that state actors do not like.
The second possibility has the risk of the project not being successful, which would mean a serious investment in resources to fail. But that could be countered by having competent people working on it. And in that case, you don't have any real persons, just account names.
I don't think state actors would care one bit about being assholes. Organized crime black hats probably wouldn't either.
The original maintainer has said in the past, before Jia Tan's increased involvement and stepping up as a maintainer, that he couldn't put as much into the project due to mental health and other reasons [1]. Seems to fit possibility number one rather well.
If you suspect that Lasse Collin was somehow in it from the start, that'd mean the actor orchestrated the whole thing about mental health and not being able to keep up with sole maintainership. Why would they even do that if they had the project under their control already?
Of course we don't know what's really been happening with the project recently, or who's behind the backdoor and how. But IMO creating suspicions about the original maintainer's motives based entirely on speculation is also a bit assholey.
More layers of obfuscation. For example in order to be able to attribute the backdoor to a different party.
It is of course also possible that Lasse Collin is a nice real person who just has not been able to review this. Maybe he is too ill, or has to care for an ill spouse, or perhaps he is not even alive any more. Who knows him as a person (not just an account name) and knows how he is doing?
That is kinda crazy - state actors don't need to care about that level of obfuscation. From a state's perspective the situation here would be simple - hire a smart & patriotic programmer to spend ~1+ years maintaining an important package, then they slip a backdoor in. There isn't any point in making it more complicated than that.
They don't even need plausible deniability; groups like the NSA have been caught spying on everyone and it doesn't hurt them all that much. The publicity isn't ideal, but it only confirms what we already knew: turns out the spies are spying on people! Who knew.
There are probably dozens if not hundreds of this sort of attempt going on right now. I'd assume most don't get caught, or go undetected for many years, which is good enough. If you have government money in the budget, it makes sense to go with large-volume, low-effort attempts rather than try some sort of complex good-cop-bad-cop routine.
You're correct about a great many things.
State actors do things in broad-daylight, get exposed, and it's no fuss to them at all.
But that depends on which "sphere of influence" you live in.
Russia and China have made major changes to key parts of their critical infrastructure based on revelations that might only result in a sub-committee in US Congress.
But to establish a significant contributor to a key piece of software, not unlike xz, is an ideal position for a state actor.
The developer doesn't even need to know who/why, but they could be financially/ideologically aligned.
This is what intelligence officers do. They manage real human assets who exist naturally.
But to have someone long-established as an author of a project is the exact type of asset they want. Even if they push the code, people immediately start considering how it could have been done by someone else.
Yes, it's conspiratorial/paranoid thinking but there's nothing more paranoid than state intelligence trade craft.
It makes me wonder. Is it possible to develop a robust Open Source ecosystem without destroying the mental health of the contributors? Reading his posting really made me feel for him. There are exceedingly few people who are willing to dedicate themselves to developing critical systems in the first place. Now there is the burden of extensively vetting every volunteer contributor who helps out. This does not seem sustainable. Perhaps users of open source need to contribute more resources/money to the software that makes their products possible.
False dichotomy much? It doesn't have to be a motivated state actor pulling the strings from the beginning. It could also just be some guy, who decided he didn't care anymore and either wanted to burn something or got paid by someone (possibly a state actor) to do this.
Recall that the original maintainer had mental health issues and other things that likely led to the perceived need to bring on someone to help maintain xz.
This brings up some integrity questions about you and other people bringing forth accusations in order to make the original maintainer feel pressure to bring on someone else to replace the one that inserted a backdoor after several years of ostensibly legitimate commits.
Hopefully this helps you see that these sorts of accusations are a slippery slope and unproductive. Heck, you could then turnaround and accuse me of doing something nefarious by accusing you.
I don’t stalk all of your social media posts, so from my perspective I don’t see any of the solutions you’ve posted elsewhere — which brings up a good point to keep in mind: none of us see the complete picture (or can read minds to know what someone else really thinks).
The possibility can be kept in mind and considered even if it isn’t being actively discussed. I think in this case, most people think he is not malicious — and feel that unless new compelling evidence to show otherwise appears, potentially starting a harmful rumor based on speculation is counterproductive.
You might not be trying to start a rumor, but other people could when they try to answer the questions from a place of ignorance — if you take a look at the comments on a gist summarizing the backdoor, there are quite a few comments by z-nonymous that seem to be insinuating that other specific GitHub users are complicit in things by looking at their commits in various non-xz repositories.
No one is running cover, just that most information so far points to the original maintainer not knowing that the person brought on to help out had ulterior motives, and likely wasn’t even who they purported to be. If you were running an open source project and facing burnout as the sole maintainer, I’d imagine you’d exercise perfect judgement and do a full background check on the person offering to help? I think many of us would like to believe we’d do better, but the reality is, most of us would have fallen for the same trick. So now imagine having to deal with the fallout not just on the technical side, but also the never-ending questions surrounding your professional reputation that people just keep bringing up. Sounds like a recipe for depression, possibly even suicidal thoughts.
I am running an open source project. Yes, if someone was eager to help and was making changes to things that involved security, I would make them doxx themselves and submit to a background check.
Well, good for you being one of the few exceptions who would make everyone submit themselves to a proper background check (presumably also covering the cost) before giving any write/commit access to the repo. That’s more than even most large open source projects do before giving access.
Thanks, but you assume too much. I outlined the circumstances under which I would require a background check, so you might want to reread. Any other questions?
As I understand it, Jia was contributing things like tests, not making changes that involve “security”. They just turned commit access, and the eventual ability to make releases on the xz GitHub after “earning” more trust (plus access to the GitHub pages hosted under the tukaani domain), into something they could use to insert a backdoor.
No questions. Anyone can become a victim to social engineering — I believe the short answer to your question about all the downvotes is that a lot of people recognize how they could have fallen for something similar, and empathize that Lasse is likely now going through a rather difficult time.
I have no question about the downvotes, bud. You're very verbose. Still not sure why you revived an account you haven't commented with in 6 years just to run cover. I find you to be a highly suspicious individual and I really have nothing more to say to you.
I suppose I think verbose-ness will help people see the other side of things. I think I was also trying to convince myself that you aren’t just into conspiracy theories, but given that you’re now accusing me of being suspicious… :shrug: it did come full circle where in my first comment I said you would start accusing me. I guess neither of us have anything more to say to each other because we are both too locked into our own beliefs.
This person revived an account they haven't touched since 2018 in an attempt to convince ME SPECIFICALLY that there is nothing wrong with the original repo maintainer. They gloss over my arguments, use logical fallacies and are generally antagonistic in a way that is not immediately obvious. You be the judge, dear readers.
At any rate, this person has failed their cause and has actually made me double down on the conspiracy theory :)
It's possible that he was intentionally pressured, and his mental health harmed or made worse by the adversary to increase stress. The adversary would then propose to help reduce the stress.
It argues the topic pretty well: xz is unsuitable for long-term archival. The arguments are in-depth and well worded. Do you have any argument to the contrary beyond "sour grapes"?
I can understand wanting your project to succeed, it's pretty natural and human, but it's flagrant that Antonio had a lot of feels about the uptake of xz compared to lzip, as both are container formats around raw lzma data streams and lzip predates xz by 6 months. His complaint article about xz is literally one of the "Introductory links" of lzip.
Neither is lzip, since it doesn't contain error correction codes. You can add those with an additional file (to any archive), e.g. via par2, but then most of the points in the linked rant become irrelevant.
Collateral damage yes, but it seems like he is currently away from the internet for an extended time. So it could be that Github needed to suspend his account in order to bypass things that he would otherwise have to do/approve? Or to preempt the possibility that his account was also compromised and we don't know yet.
No. I mean that the link you shared is an opinion piece about the xz file format, and those opinions are fully unrelated to today's news and only serve to further discredit Lasse Collin, who for all we know has been duped and tricked by a nation state, banned by github, and is having a generally shitty time.
There may be some suboptimal things about security of the XZ file format, I don't know.
I bet you there are less than optimal security choices in your most cherished piece of software as well.
This thread is about an exploit that does not rely on any potential security problems in the DESIGN of the xz FORMAT. Therefore your point, even if valid as a general one, is not really relevant to the exploit we're discussing.
Further, some proof is needed that any potentially suboptimal aspects of the security design of the xz FORMAT were designed so that they could be exploited later, rather than simply because no programmer is an expert on every aspect of security. I mean, you could be the most security-conscious programmer and your chain could still be compromised.
Security today is such a vast field and it takes so little to get you compromised that proclaiming anything 'secure design' these days is practically impossible.
I bet you an audit of lzip would find plenty of security issues, would those be intentional?
1) Are there no legit code reviews from contributors like this? How did this get accepted into main repos while flying under the radar? When I do a code review, I try to understand the actual code I'm reviewing. Call me crazy I guess!
2) Is there no legal recourse to this? We're talking about someone who managed to root any linux server that stays up-to-date.
> 2) Is there no legal recourse to this? We're talking about someone who managed to root any linux server that stays up-to-date.
Any government which uses GNU/Linux in their infrastructure can pitch this as an attempt to backdoor their servers.
The real question is: will we ever even know who was behind this? If it was some mercenary hacker intending to resell the backdoor, maybe. But if it was someone working with an intelligence agency in US/China/Israel/Russia/etc, I doubt they'll ever be exposed.
Reflecting on the idea of introducing a validation structure for software contributions, akin to what RPKI does for BGP routing, I see significant potential to enhance security and accountability in software development.
Such a system could theoretically bring greater transparency and responsibility, particularly in an ecosystem where contributions come from all corners.
Implementing verifiable identity proofs for contributors might be challenging, but it also presents an opportunity to bolster security without compromising privacy and the freedom to contribute under pseudonyms.
The accountability of those accepting pull requests would also become clearer, potentially reducing the risk of malicious code being incorporated.
Of course, establishing a robust validation chain for software would require the commitment of everyone in the development ecosystem, including platforms like GitHub. However, I view this not as a barrier but as an essential step towards evolving our approach to security and collaboration in software development.
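As a very rough sketch of what the enforcement end could look like, leaning on git's existing signature verification (the trusted-key registry here is entirely hypothetical):

    import subprocess

    TRUSTED_FINGERPRINTS = {"0123ABCD..."}  # hypothetical contributor registry

    def commit_is_trusted(commit: str) -> bool:
        # git verify-commit --raw emits gpg status lines on stderr;
        # VALIDSIG lines carry the signing key's fingerprint.
        result = subprocess.run(["git", "verify-commit", "--raw", commit],
                                capture_output=True, text=True)
        if result.returncode != 0:
            return False
        return any(line.split()[2] in TRUSTED_FINGERPRINTS
                   for line in result.stderr.splitlines()
                   if " VALIDSIG " in line)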
The actual inclusion code was never in the repo. The blobs were hidden as lzma test files.
So your review would need to guess, from two new test files, that they decompress into a backdoor, injected by build code that was never in the git history.
Ok, go ahead and scrutinize those files without looking at the injection code that was never in the repo. Can you find anything malicious? Probably not; it looks like random garbage, which is what it was claimed to be.
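And to put numbers on why eyeballing fails, a quick sketch comparing the byte entropy of a genuine compressed blob with random bytes standing in for a hidden payload; both come out looking the same:

    import collections
    import lzma
    import math
    import os

    def entropy_bits_per_byte(data: bytes) -> float:
        counts = collections.Counter(data)
        return -sum((c / len(data)) * math.log2(c / len(data))
                    for c in counts.values())

    genuine = lzma.compress(os.urandom(1 << 16))  # legitimate-looking test blob
    hidden = os.urandom(1 << 16)                  # stand-in for a smuggled payload
    print(entropy_bits_per_byte(genuine))  # ~8.0 bits/byte
    print(entropy_bits_per_byte(hidden))   # ~8.0 bits/byte: looks identical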
"Jia Tan" was not a contributor, but a maintainer of the project. The key point here is that this is a multi-year project of infiltrating the xz project and gaining commit access.
In a large tech company (including ones I have worked at), sometimes you have policy where every change has to be code reviewed by another person. This kind of stuff isn't possible when the whole project only has 1-2 maintainers. Who's going to review your change other than yourself? This is the whole problem of OSS right now that a lot of people are bringing up.
I maintain a widely used open source project myself. I would love it if I could get high quality code review for my commits similar to my last workplace lol, but very very few people are willing to roll up their sleeves like that and work for free. Most people would just go to the Releases page and download your software instead.
>How did this get accepted into main repos while flying under the radar? When I do a code review, I try to understand the actual code I'm reviewing. Call me crazy I guess!
And? You never make any mistakes? Google "underhanded C contest".
It's hardly surprising, given that parsing is generally considered to be a tricky problem. Plus, it's a 15-year-old project that's widely used, and 750 commits is nothing to sneeze at. No wonder the original maintainer got burned out.
For the duration of a major release, up until ~x.4, pretty much everything from upstream gets backported with a delay of 6-12 months, depending on how conservative about change the RHEL engineer maintaining that part of the kernel is.
After ~x.4 things slow down and only "important" fixes get backported but no new features.
After ~x.7 or so different processes and approvals come into play and virtually nothing except high severity bugs or something that "important customer" needs will be backported.
Sadly, the 8.6 and 9.2 kernels are the exception to this, mainly because they are OpenShift Container Platform and FedRAMP requirements.
The goal is that 8.6, 9.2 and 9.4 will have releases at least every two weeks.
Maybe soon all Z-streams will have a similar release cadence to keep up with the security expectations, while keeping expectations very similar to those you outlined above.
I imagine it might be easier to just compromise a weakly protected account than to actually put in a two-year effort with real contributions. If we mandated MFA for all contributors to these really important projects, then we could know with greater certainty whether it was really a long con vs. a recently compromised account.
For some random server, sure. For a state sponsored attack? Having an embedded exploit you can use when convenient, or better yet an unknown exploit affecting every linux-based system connected to the internet that you can use when war breaks out - that's invaluable.
Having one or two people on payroll to occasionally add commits to a project isn't exactly that expensive if it pays off. There are ~29,000,000 US government employees (federal, state and local). Other countries like China and India have tens of millions of government employees.
Even if they contract it out at $350/hr (which is not a price that would raise any flags), a full-time year (~2,080 hours) comes to less than $750k. Even with a fancy office, a couple of laptops and 5' monitors, this is less than a day at the bombing range or a few minutes keeping an aircraft carrier operational.
Even a team of 10 people working on this - the code and social aspect - would be a drop in the bucket for any nation-state.
I find it funny how MFA is treated as if it would make account takeover suddenly impossible. It's just a bit more work, isn't it? And a big loss in convenience.
I'd much rather see passwords entirely replaced by key-based authentication. That would improve security. Adding 2FA to my password is just patching a fundamentally broken system.
Customer service at one of my banks has an official policy of sending me a verification code via email that I then read to them over the phone, and that's not even close to the most "wrong" 2FA implementation I've ever seen. Somehow that institution knows what a YubiKey is, but several major banks don't.
I'm security consultant in the financial industry. I've literally been involved in the decision making on this at a bank. Banks are very conservative, and behave like insecure teenagers. They won't do anything bold, they all just copy each other.
I pushed YubiKey as a solution and explained in detail why SMS was an awful choice, but they went with SMS anyway.
It mostly came down to cost. SMS was the cheapest option. YubiKeys would involve buying and sending the keys to customers, plus the pain/cost of supporting them. There was also the feeling that YubiKeys were too confusing for customers. The nail in the coffin was "SMS is the standard solution in the industry" plus "If it's good enough for VISA it's good enough for us".
Interesting. I assumed a lot of client software for small banks was vendored - I know that's the case for brokerages. Makes it all the weirder that they all imitate each other.
Here's the thing about SMS: your great aunt who doesn't know what a JPEG is, knows what a text is. Ok, she might not fully "get it" but she knows where to find a text message in her phone. My tech-literate fiancée struggles to get her YubiKey to work with her phone, and I've tried it with no more luck than she's had. YubiKeys should be supported but they're miles away from being usable enough to totally supplant other 2FA flows.
I'd guess part of the reason is that customers would blame the bank when their YubiKey doesn't work, which would become a nuisance for them as much as the YubiKey's usability issues are a nuisance for the customer.
I mean your employer wasn’t wrong. YubiKeys ARE way too confusing for the average user, way too easy to lose, etc. Maybe have them as an option for power users, but they were right that it would be a disastrous default.
Financial institutions are very slow to adopt new tech. Especially tech that will inevitably cost $$$ in support hours when users start locking themselves out of their accounts. There is little to no advantage to being the first bank to implement YubiKey 2FA. To a risk-averse org, the non-zero chance of a botched rollout or displeased customers outweighs any potential benefit.
A friend's bank, hopefully not the one I use, only allows a password of 6 digits. Yes, you read that right: 6 fucking digits to log in. I gave him the advice to run away from that shitty bank.
Did this bank start out as a "telephone bank"? One of the largest German consumer banks still does this because they were the first "direct bank" without locations and typing in digits on the telephone pad was the most secure way of authenticating without telling the "bank teller" your password. So it was actually a good security measure but it is apparently too complicated to update their backend to modern standards.
Nope, I read The Register (UK based) and they've had scandals from celebrities having their confidential SMS messages leaked, SMS spoofing, and I think even SIM cloning going on every now and then in the UK and some European countries. (Since The Register is a tech site, my recollection is some carriers took technical measures to prevent these issues while quite a few didn't.)
I don't think it's a thing that happens that often in the UK etc.; but it doesn't happen that frequently in the US either. It's just a thing that can potentially happen.
It's also been a problem in Australia: Optus (the 2nd biggest telco) used to allow number porting or activating a SIM against an existing account with a bare minimum of detail, like a name, address and date of birth. If you had those details for a target, you could clone their SIM and crack any SMS-based MFA.
I don’t know about other parts, but here in France SMS is a shitshow. I regularly fail to receive them even though I know I have good reception.
This happened the other day while I was on a conference call with perfect audio and video using my phone’s mobile data.
A few weeks back, a shop that sends out an SMS to inform you the job's done told me this is usually hit and miss, when I complained about not hearing from them.
Many single-radio phones can either receive SMS/calls or transmit data, but not both at once.
My relative owns such a device and cannot use the internet during calls, or receive/make calls during streaming, like YT video playback.
In my case this is an iPhone 14 pro. I'm pretty sure I can receive calls while using data, since I often look things up on the internet while talking to my parents.
And, by the way, the SMS in question never arrived. I don't know if there's some kind of timeout happening, and the network gives up after a while. Some 15 years ago I remember getting texts after an hour or two if I only had spotty reception. This may of course have changed in the meantime, plus this is a different provider.
SMS is not E2E encrypted, so for all intents and purposes it's just a plain text message that can be, and has been, snooped. Might as well just send plaintext emails.
I recently had an issue with a SIM card and went to a phone store that gave me a new one and disabled the old. They're supposed to ask for ID, but often don't bother. This is true in pretty much every country. Phone 2FA is simply completely insecure.
Banks are in a tough spot. Remember, banks have you as a customer, they also have a 100 year old person who still wants to come to the branch in person as a customer. Not everyone can grapple with the idea of a Yubikey, or why their bank shouldn't be protecting their money like it did in the past.
The problem is that the bank will automatically enable online access and SMS-confirmed transfers for that 100 year old person who doesn't even know how to use Internet.
Not really. Even if you enabled passkeys, you can still log in to their phone app via SMS, so it is not more secure. People who know how to do SMS attacks certainly know how to install a mobile app. And BofA gave their customers a false assurance.
yeah someone replied to one of my comments about adding MFA that an attacker can get around all that simply by buying the account from the author. I was way too narrowly focused on the technical aspects and was completely blind to other avenues like social engineering, etc.
>I'd much rather see passwords entirely replaced by key-based authentication
I've never understood how key-based systems are considered better. I understand the encryption angle; nobody is compromising that. But now I have a key I need to personally shepherd? Where do I keep it, and my backups, and what is the protection on those places? How many local copies, how many offsite? And I still need a password to access/use it, but with no recourse should I lose or forget it. How am I supposed to remember that? It's all just kicking the same cans down the same roads.
Passkeys are being introduced right now in browsers and popular sites as an MFA option, but I think the intention is that they will grow and become the main factor in the future.
I liked the username, password and TOTP combination. I could choose my own password manager, and TOTP generator app, based on my preferences.
I have a feeling this won't hold true forever. Microsoft has their own authenticator now, Steam has another one, Google has their "was this you?" built into the OS.
Monetization comes next? "View this ad before you login! Pay 50c to stay logged in for longer?"
MS Entra ID (formerly Azure AD) only allows a select list of FIDO2 vendors. You need a certification from FIDO ($,$$$), you need an account that can upload to the MDS metadata service, and you need to talk to MS to see if they'll consider adding you to the list
It's not completely closed, but in practice no one on that list is a small independent open source project, those are all the kind of entrenched corporate security companies you'd expect
But the way it is designed, you can require a certain provider, and you can bet at least some sites will start requiring attestation from Google and/or Apple.
Do they do attestation by default? I thought for Apple at least that was only a feature for enterprise managed devices (MDM). Attestation is also a registration-time check, so doesn’t necessarily constrain where the passkey is synced to later on.
I couldn’t imagine trying to train the general public to use mTLS and deploy that system.
I’m not even sure it is difficult. Most people I’ve talked to in tech don’t even realize it is a possibility. Certificates are “complicated” as they put it.
> Google has their "was this you?" built into the OS.
Not only that, but it's completely impossible to disable or remove that functionality or even make TOTP the primary option. Every single time I try to sign in, Google prompts my phone first, giving me a useless notification for later, and I have to manually click a couple of buttons to say "no I am not getting up to grab my phone and unlock it for this bullshit, let me enter my TOTP code". Every single time.
Don't passkeys give the service a signature to prove what type of hardware device you're using? E.g. they provide a way for the server to check whether you are using a software implementation. It's not really open if it essentially has a type of DRM built in.
You're thinking of hardware-backed attestation, which provides a hardware root of trust. I believe passkeys are just challenge-response (using public key cryptography). You could probably add some sort of root of trust (for example, have the public key signed by the HSM that generated it) but that would be entirely additional to the passkey itself.
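Right, the core is plain challenge-response. A conceptual sketch using Ed25519 via the cryptography package; real WebAuthn layers origin binding, counters, and CBOR attestation on top of this:

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Registration: client keeps the private key, server stores the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Login: server sends a random challenge, client signs it.
    challenge = os.urandom(32)
    signature = private_key.sign(challenge)

    # Server verifies against the stored public key (raises on failure).
    public_key.verify(signature, challenge)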
Passkeys do have the option of attestation, but the way Apple at least do them means Apple users won't have attestation, so most services won't require attestation.
KeepassXC is working on supporting them natively in software, so you would not need to trust big tech companies, unless you are logging into a service that requires attestation to be enabled.
Password managers are adding support (as in they control the keys) and I've used my yubikeys as "passkeys" (with the difference that I can't autofill the username).
It's a good spec. I wish more people who spread FUD about it being a "tech-giant" only thing would instead focus on the productive things like demanding proper import/export between providers.
You realise that the second your password manager has it, it's no longer MFA, but just one-factor authentication with extra steps, right?
A password manager turns something you know into something you own. If the something you own is also in the password manager itself… it's the same as requiring extra-long passwords.
This is a state sponsored event.
Pretty poorly executed though, as they were tweaking and modifying things in their own and other tools after the fact.
As a state-sponsored project, what makes you think this is their only project and that this is a big setback?
I am paranoid enough myself to think yesterday's meeting went like:
"team #25 has failed/been found out. Reallocate resources to the other 49 teams."
As I said recently in a talk I gave, 2FA as implemented by pypi or github is meaningless when, in fact, all actions are performed via tokens that never expire and are saved inside a .txt file on disk.
In pypi to obtain a token that is limited in scope you must first generate an unlimited token.
True story.
In gh you can generate a limited one, but it's not really clear what the permissions actually mean, so it's trial and error… which means most people will get tired and grant random stuff just to have things working.
I didn’t know that about pypi but github has seemed ok to me. I’ve also implemented my own scoped authentication systems so even if they’re not perfect I know it can be done
They might not have been playing the long con. Maybe they were approached by actors willing to pay a lot of money to try to slip in a back door. I'm sure a deep dive into code contributions would clear that up for anyone familiar with the code base and with some free time.
They did fuck up quite a bit though.
They injected their payload before they checked whether oss-fuzz or valgrind or ... would notice something wrong.
That is sloppy and should have been anticipated and addressed BEFORE activating the code.
Anyway. This team got caught. What are the odds that this was the only project/team/library this state actor decided to attack?
Doesn't it mandate it for everyone? I don't use it anymore and haven't logged in since forever, but I think I got a series of e-mails that it was being made mandatory.
It will soon. I think I have to sort it out before April 4. My passwords are already >20 random characters, so I wasn't going to do it until they told me to.
If you are using pass to store those, check out pass-otp and browserpass, since GitHub still allows TOTP for MFA. pass-otp is based on oathtool, so you can do it more manually too if you don't use pass.
Most people will have to sync their passwords (generally strong and unique, given that it's for github) to the same device where their MFA token is stored, rendering it almost completely moot, but at a significantly higher risk of permanent access loss (depending on what they do with the reset codes, which, if compromised, would also make MFA moot; a cookie theft makes it all moot as well).
The worst part is that people think they're more protected, when they're really not.
Bringing everyone up to the level of "strong and unique password" sounds like a huge benefit. Even if your "generally" is true, which I doubt, that leaves a lot of gaps.
Doesn't help that a lot of companies still just allow anyone with access to the phone number to gain access to the account (via customer support or automated SMS-based account recovery).
SMS 2FA is the devil. It’s the reason I haven’t swapped phone numbers even though I get 10-15 spam texts a day. The spam blocking apps don’t help and intrude on my privacy.
For one, browsers don't save the TOTP seed and autofill it for you, making it much less user-friendly than a password in practice.
The main problem I have with MFA is that it gets used too frequently for things that don't need that much protection, which from my perspective is basically anything other than making a transfer or trade in my bank/brokerage. Just user-hostile requiring of manual action, including finding my phone that I don't always keep on me.
It's also often used as a way to justify collecting a phone number, which I wouldn't even have if not for MFA.
You know google authenticator doesn't matter, right? You could always copy your totp seeds, since day one, regardless of which auth app or its features or limits. And a broken device does not matter at all, because you have other copies of your seeds, just like your passwords.
When I said they are just another password, I was neither lying nor in error. I presume you can think of all the infinite ways that you would keep copies of a password so that when your phone or laptop with keepassxc on it breaks, you still have other copies you can use. Well when I say just like a password, that's what I mean. It's just another secret you can keep anywhere, copy 50 times in different password managers or encrypted files, print on paper and stick in a safe, whatever.
Even if some particular auth app does not provide any sort of manual export function (I think google auth did have an export function even before the recent cloud backup, but let's assume it didn't), you can still just save the original secret the first time you get it from a qr code or a link. You just had to know that that's what those qr codes are doing. They aren't single-use; they are nothing more than a random secret which you can keep and copy and re-use forever, exactly the same as a password. You can copy that secret into any password manager or plain file or whatever you want, just like a password, and then use it to set up the same totp on 20 different apps on 20 different devices, all working at the same time, all generating valid totp codes at the same time. Destroy them all, buy a new phone, retrieve any one of your backup keepass files or printouts, enter the secret into a fresh app on a fresh phone, and all your totp is fully working again. You are no more locked out than by having to reinstall a password manager app and access some copy of your password db to regain the ordinary passwords.
The only difference from a password is, the secret is not sent over the wire when you use it, something derived from it is.
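To make "something derived from it" concrete, here is a minimal totp generator per RFC 6238 (assuming the usual 30-second step and 6 digits); any app holding the same seed produces the same codes:

    import base64
    import hmac
    import struct
    import time

    def totp(seed_b32: str, digits: int = 6, step: int = 30) -> str:
        # The seed is just a shared secret; only an HMAC of the current
        # time window ever goes over the wire, never the seed itself.
        key = base64.b32decode(seed_b32.upper() + "=" * (-len(seed_b32) % 8))
        counter = struct.pack(">Q", int(time.time()) // step)
        digest = hmac.new(key, counter, "sha1").digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)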
Google authenticator's particular built-in cloud copy, or lack thereof, doesn't matter at all, and frankly I would not actually use that particular feature or that particular app. There are lots of totp apps on all platforms and they all work the same way: you enter the secret, give it a name like your bank or whatever, select the algorithm (it's always the default; you never have to select anything), and instantly the app starts generating valid totp codes for that account, the same as your lost device.
Aside from saving the actual seed, let's say you don't have the original qr code any more (you didn't print it or screenshot it or right-click save the image?). There is yet another emergency recovery: the 10 or 12 recovery passwords that every site gives you when you first set up totp. You were told to keep those. They are special single-use passwords that get you in without totp, but each one can only be used once. So, you are a complete space case and somehow don't have any other copies of your seeds in any form, not even simple printouts or screenshots of the original qr code? STILL no problem. You just burn one of your 12 single-use emergency codes, log in, disable and re-enable totp on that site, and get a new qr code and a new set of emergency codes. Your old totp seed and old emergency codes no longer work, so throw those out. This time, not only keep the emergency codes, also keep the qr code, or more practically, just keep the seed value in the qr code. It's right there in the url in the qr code. Sometimes they even display the seed value itself in plain text so that you can cut & paste it somewhere, like into a field in keepass etc.
In fact keepass apps on all platforms will not only store the seed value but also display the current totp for it, just like a totp app does. But a totp app is more convenient.
And for proper security, you technically shouldn't store both the password and the totp seed for an account in the same place, so that if someone gains access to one, they don't gain access to both. That's inconvenient but has to be said just for full correctness.
I think most sites do a completely terrible job of conveying just what totp is when you're setting it up. They tell you to scan a qr code but they kind of hide what that actually is. They DO all explain about the emergency codes, but really those emergency codes are kind of stupid. If you can preserve a copy of the emergency codes, then you can just as easily preserve a copy of the seed value itself exactly the same way, and then what's the point of a handful of single-use emergency passwords when you can just have your normal, fully functional totp seed?
Maybe one use for the emergency passwords is you could give them to different loved ones instead of your actual seed value?
Anyway if they just explained how totp basically works, and told you to keep your seed value instead of some weird emergency passwords, you wouldn't be screwed when a device breaks, and you would know it and not be worried about it.
Now, if, because of that crappy way sites obscure the process, you currently don't have your seeds in any re-usable form, and also don't have your emergency codes, well then you will be F'd when your phone breaks.
But that is fixable. Right now while it works you can log in to each totp-enabled account, and disable & reenable totp to generate new seeds, and take copies of them this time. Set them up on some other device just to see that they work. Then you will no longer have to worry about that.
My original, correct, message was perfectly short.
You don't like long, full, detailed explanations, and you ignore short explanations. Pick a lane!
A friend of mine a long time ago used to have a humorous classification system, that people fell into 3 groups: The clued, The clue-able, The clue-proof.
Some people already understand a thing. Some people do not understand a thing, but CAN understand it. Some people exist in a force bubble of their own intention that actively repels understanding.
I see that in your classification system an important entry is missing. The ones who disagree.
In your quest to convince me you forgot to even stop to ponder if you're right at all. And in my view, you aren't.
Perhaps the problem isn't that I don't understand you. Perhaps I understand you perfectly well but I understand even more, to realise that you're wrong :)
This is a silly thing to argue about but hey I'm silly so let's unpack your critique of the classification system
There is no 4th classification. It only speaks of understanding not agreeing.
Things that are matters of opinion may still be understood or not understood.
Whether a thing is a matter of opinion or a matter of fact, both sides of a disagreement still slot into one of those classes.
If a thing is a matter of opinion, then one of the possible states is simply that both sides of a disagreement understand the thing.
In this case, it is not a matter of opinion, and if you want to claim that I am the one who does not understand, that is certainly possible, so by all means, show how. What fact did I state that was not true?
Keep trying soldier. You never know. (I mean _I_ know, but you don't. As far as you know, until you go find out, I might be wrong.)
Whatever you do, don't actually go find out how it works.
Instead, continue avoiding finding out how it works, because holy cow after you've gone this far... it's one thing to just be wrong about something, everyone always has to start out not understanding something, that's no failing, but to have no idea what you're talking about yet try to argue about it, in error the whole time..., I mean they (me) were such an insufferable ass already trying to lecture YOU, but for them (me) to turn out to have been simply correct in every single fact they spoke, without even some technicality or anything to save a little face on? Absolutely unthinkable.
Definitely better to save yourself from that by just never investigating.
My original statement was only that this is not a 2fa problem, which was and still is true.
The fact that you did not know this does not change this fact.
I acknowledged that web sites don't explain this well, even actively hide it. So it's understandable not to know this.
But I also reminded that this doesn't actually matter because you WERE also given emergency recovery passwords, and told to keep them, and told why, and how important they were.
You were never at risk of being locked out from a broken device EVEN THOUGH you didn't know about saving the seed values, UNLESS you also discarded the emergency codes, which is not a 2fa problem, it's an "I didn't follow directions" problem.
And even if all of that happened, you can still, right now, go retroactively fix it all, and get all new seed values and save them this time, as long as your one special device happens to be working right now. It doesn't matter what features google authenticator has today, or had a year ago. It's completely and utterly irrelevant.
My premise remains correct and applicable. Your statement that 2FA places you at risk was incorrect. You may possibly be at risk, but if so you did that to yourself; 2FA did not do that to you.
> But I also reminded that this doesn't actually matter because you WERE also given emergency recovery passwords, and told to keep them, and told why, and how important they were.
Ah yes those… the codes I must go to a public library to print, on a public computer, public network and public printer. I can't really see any problem with the security of this.
And then I must never forget where I put that very important piece of paper. Not in 10 years and after moving 3 times…
You can save a few bits of text any way you want. You can write them in pencil if you want just as a backup against google killing your google drive or something. Or just keep them in a few copies of a password manager db in a few different places. It's trivial.
What in the world is this library drama?
No one is this obtuse, so your arguments are most likely disingenuous.
But if they are sincere, then find a nephew or someone to teach you how your computer works.
Libraries? Remembering something for 10 years? Moving? Oh the humanity!
Yes, you can keep them on the same device if you choose to.
Or not. You decide how much effort you want and where you want to place the convenience vs security slider.
Yes, if you keep both factors not only on the same device but in the same password manager, then both factors essentially combine into nothing but a longer password.
I did say from the very first, that the seeds are nothing other than another password.
Except there is still at least one difference which I will say for at least the 3rd time... the totp secret is not transmitted over the wire when it is used, the password is. That is actually a significant improvement all by itself even if you do everything else the easy less secure way.
And you do not have to store the seeds the convenient, less secure way. You can have them in a different password app with a different master password on the same device, or on separate devices, or in separate physical forms. You can store them any way you want, securely, or less securely.
The point is that even while opting to do things all the very secure way, you are still not locked out of anything when a single special device breaks, because you are not limited to only keeping a single copy of the seeds or the emergency passwords in a single place like on a single device or a single piece of paper.
You are free to address any "but what about" questions you decide you care about in any way you feel like.
The only way you were ever screwed is by the fact that the first time you set up 2fa for any site, most sites don't explain the actual mechanics but just walk you through a sequence of actions to perform, without telling you what they actually did, and so at the end of following those directions you ARE left with the seeds stored in only a single place. And in the particular case of Google Authenticator, stored in a more or less inaccessible place, in some android sqlite file you probably can't even get to manually without rooting your phone. And you were never even told about the seed value at all. You were given those emergency passwords instead.
That does leave you with a single precious device that must not break or be lost. But the problem is only a combination of those bad directions given by websites, and the limitations of one particular totp app that didn't happen to display or export or cloud-backup the seeds until recently.
Even now, Google's answer is a crap answer, because Google can see the codes unencrypted on their server, and Google can kill your entire Google account at any time and you lose everything (email, drive, everything) instantly, with no human to argue with. That is why I said even today I still would not use Google Authenticator for totp.
Except even in that one worst case, you still had the emergency passwords, which you were always free to keep in whatever way works for you. There is no single thing you must or must not do, there is only what kinds of problems are the worst problems for you.
Example: if what you are most concerned about is someone else getting ahold of a copy of those emergency passwords, then you want to have very few copies of them and they should be off-line and inconvenient to access. IE a printed hard copy in a safe deposit box in switzerland.
If what you are most concerned about is accidentally destroying your life savings by losing the password, and the investment site has no further way to let you prove your ownership, then keep 10 copies in 10 different physical forms and places so that no matter what happens, you will always be able to access at least one of them. One on Google Drive, one on someone else's Google Drive in case yours is killed, one on OneDrive, one on paper at home, one on paper in your wallet, one on your previous phone that you don't use but still works, etc etc.
You pick whichever is your biggest priority, and address that need however you want, from pure convenience to pure security and all possible points in between. The convenient way has security downsides. The secure way has convenience downsides. But you are not forced to live with the downsides of either the convenient way or the secure way.
> Why opposed to MFA? Source code is one of the most important assets in our realm.
Because if you don't use weak passwords MFA doesn't add value. I do recommend MFA for most people because for most people their password is the name of their dog (which I can look up on social media) followed by "1!" to satisfy the silly number and special character rules. So yes please use MFA.
But if your passwords (like mine) are 128+ bits out of /dev/random, MFA isn't adding value.
If you have a keylogger, they can also just take your session cookie/auth tokens or run arbitrary commands while you're logged in. MFA does nothing if you're logging into a service on a compromised device.
Keyloggers can be physically attached to your keyboard. There could also be a vulnerability in the encryption of wireless keyboards. Certificate-based MFA is also phishing resistant, unlike long, random, unique passwords.
There are plenty of scenarios where MFA is more secure than just a strong password.
These scenarios are getting into some Mission Impossible level threats.
Most people use their phones most of the time now, meaning the MFA device is the same device they're using.
Of the people who aren't using a phone, how many are using a laptop with a built in keyboard? It's pretty obvious if you have a USB dongle hanging off your laptop.
If you're using a desktop, it's going to be in a relatively secure environment. Bluetooth probably doesn't even reach outside. No one's breaking into my house to plant a keylogger. And a wireless keyboard seems kind of niche for a desktop. It's not going to move, so you're just introducing latency, dropouts, and batteries into a place where they're not needed.
Long, random, unique passwords are phishing resistant. I don't know my passwords to most sites. My web browser generates and stores them, and only uses them if it's on the right site. This has been built-in functionality for years, and ironically it's sites like banks that are most likely to disable autofill and require weak, manual passwords.
I mean, both can be true at the same time. I have to admit that I only use MFA when I'm forced to, because I also believe my strong passwords are good enough. Yet I can still acknowledge that MFA improves security further and in particular I can see why certain services make it a requirement, because they don't control how their users choose and use their passwords and any user compromise is associated with a real cost, either for them like in the case of credit card companies or banks, or a cost for society, like PyPI, Github, etc.
I don't think phishing is such an obscure scenario.
The point is also that you as an individual can make choices and assess risk. As a large service provider, you will always have people who reuse passwords, store them unencrypted, fall for phishing, etc. There is a percentage of users that will get their account compromised because of bad password handling which will cost you, and by enforcing MFA you can decrease that percentage, and if you mandate yubikeys or something similar the percentage will go to zero.
> I don't think phishing is such an obscure scenario.
For a typical person, maybe, but for a tech-minded individual who understands security, data entropy and what /dev/random is?
And I don't see how MFA stops phishing - it can get you to enter a token like it can get you to enter a password.
I'm also looking at this from the perspective of an individual, not a service provider, so the activities of the greater percentage of users is of little interest to me.
> That's why I qualified it with "certificate-based". The private key never leaves the device
Except that phishing doesn't require the private key - it just needs to echo back the generated token. And even if that isn't possible, what stops it obtaining the session token that's sent back?
From my understanding, FIDO isn't MFA though (the authenticator may present its own local challenge, but I don't think the remote party can mandate it).
There's also the issue of how many sites actually use it, as well as how it handles the loss of or inability to access private keys etc. I generally see stuff like 'recovery keys' being a solution, but now you're just back to a password, just with extra steps.
The phisher can just pass on whatever you sign, and capture the token the server sends back.
Sure, you can probably come up with some non-HTTPS scheme that can address this, but I don't see any site actually doing this, so you're back to the unrealistic scenario.
No, because the phisher will get a token that is designated for, say, mircos0ft.com which microsoft.com will not accept. It is signed with the user's private key and the attacker cannot forge a signature without it.
A password manager is also not going to fill in the password on mircos0ft.com so is perfectly safe in this scenario. You need a MitM-style attack or a full on client compromise in both cases, which are vulnerable to session cookie exfiltration or just remote control of your session no matter the authentication method.
If I were trying to phish someone, I wouldn't attack the public key crypto part, so how domains come into play during authentication doesn't matter. I'd just grab the "unencrypted" session token at the end of the exchange.
Even if you somehow protected the session token (sounds dubious), there's still plenty a phisher could do, since it has full MITM capability.
Session keys expire and can be scoped to do anything except reset password, export data, etc…that’s why you’ll sometimes be asked to login again on some websites.
If you're on a service on a compromised device, you have effectively logged into a phishing site. They can pop-up that same re-login page on you to authorize whatever action they're doing behind the scenes whenever they need to. They can pretend to be acting wonky with a "your session expired log in again" page, etc.
This is part of why MFA just to log in is a bad idea. It's much more sensible if you use it only for sensitive actions (e.g. changing password, authorizing a large transaction, etc.) that the user almost never does. But you need everyone to treat it that way, or users will think it's just normal to be asked to approve all the time.
Some USB keys have an LCD screen on them to prevent that. You can compromise the computer the key is inserted into, but you cannot compromise the key. If the message shown on your computer screen differs from the message on the key, you reject the auth request.
The slogan is "something you know and something you have", right?
I don't have strong opinions about making it mandatory, but I turned on 2FA for all accounts of importance years ago. I use a password manager, which means everything I "know" could conceivably get popped with one exploit.
It's not that much friction to pull out (or find) my phone and authenticate. It only gets annoying when I switch phones, but I have a habit of only doing that every four years or so.
You sound like you know what you're doing, that's fine, but I don't think it's true that MFA doesn't add security on average.
Right. I don't ever want to tie login to a phone because phones are pretty disposable.
> I don't think it's true that MFA doesn't add security on average
You're right! On average it's better, because most people have bad passwords and/or reuse them in more than one place. So yes, MFA is better.
But if your password is already impossible to guess (as 128+ random bits are) then tacking on a few more bytes of entropy (the TOTP seed) doesn't do much.
Those few bits are the difference between a keylogged password holder waltzing in and an automated monitor noticing that someone is failing the token check and locking the account before any damage occurs.
I think you're missing the parent's point: both are just preshared keys. One has some additional fuzz around it so that the user, in theory, isn't typing the same second key in all the time, but much of that security lies in keeping the second secret in a little keychain device that cannot itself leak the secret. Once people put the seeds in their password managers/phones/etc., it's just more data to steal.
Plus, the server/provider side remains a huge weak point too. And the enrollment process, getting the initial seed to the user, is suspect.
This is why FIDO/hardware passkeys/etc. are so much better: it's basically hardware-enforced two-way public key auth, and done correctly there isn't any way to leak the private keys and it's hard as hell to MITM. Which is why loss of the hardware is so catastrophic. Most every other MFA scheme is just a bit of extra theater.
Exactly, that's it. Two parties have a shared secret of, say 16 bytes total, upon which authentication depends.
They could have a one-byte password but a 15-byte shared secret used to compute the MFA code. The password is useless but the MFA seed is unguessable. Maybe have no password at all (zero length) and a 16-byte seed. Or go the other way: a 16-byte password and zero seed. In terms of an attacker brute-forcing the keyspace, it's always the same 16 bytes.
We're basically saying (and as a generalization, this is true) that the password part is useless since people will just keep using their pet's name, so let's put the strength on the seed side. Fair enough, that's true.
But if you're willing to use a strong unique password then there's no real need.
(As to keyloggers, that's true, but not very interesting. If my machine is already compromised to the level that it has malicious code running logging all my input, it can steal both the passwords and the TOTP seeds and all the website content and filesystem content and so on. Game's over already.)
> This is why the FIDO/hardware passkeys/etc are so much better
Technically that's true. But in practice, we now have a few megacorporations trying to own your authentication flow in a way that introduces denial of service possibilities. I must control my authentication access, not cede control of it to a faceless corporation with no reachable support. I'd rather go back to using password123 everywhere.
Your password is useless when it comes to hardware keyloggers.
We run yearly tests to see if people check for "extra hardware". Needless to say, we have a very high failure rate.
It's hard to get a software keylogger installed on a corporate machine. It's easy to get physical access to the office, or even people's homes, and install keyloggers all over the place and download the data via BT.
> Your password is useless when it comes to hardware keyloggers.
You are of course correct.
This is where threat modeling comes in. To really say if something is more secure or less secure or a wash, threat modeling needs to be done, carefully considering which threats you want to cover and not cover.
In this thread I'm talking from the perspective of an average individual with a personal machine, who is not interesting enough to be targeted by corporate espionage or worse.
Thus, the threat of operatives breaking into my house and installing hardware keyloggers on my machines is not part of my threat model. I don't care about that at all, for my personal use.
For sensitive company machines or known CxOs and such, yes, but that's a whole different discussion and threat model exercise.
Which helps with some kinds of threats, but not all. It keeps someone from pretending to be the maintainer -- but if an actual maintainer is compromised, coerced, or just bad from the start and biding their time, they can still do whatever they want with full access rights.
Not MFA, but git commit signing. I don't get why such core low-level projects don't mandate it. MFA doesn't help if a GitHub access token is stolen, and I bet most of us use such a token for pushing from an IDE.
Even if an access token to GitHub is stolen, the sudden lack of signed commits should raise red flags. GitHub should allow projects to force commit signing (if not already possible).
Then the access token plus the signing key would need to be stolen.
But of course all that doesn't help in the (here more likely) scenario of a long con by a state-sponsored hacker, or in cases of duress (which in certain countries seems pretty likely to happen).
This seems like a great way to invest in supporting open source projects in the meantime, if these projects are being used by these actors. You'd just have to maintain an internal fork without the backdoors.
Maybe someone can disrupt the open source funding problem by brokering exploit bounties /s
Which, like, also wouldn't be totally weird if I found out that the xz (or whatever) library maintainer worked for the DoE as a researcher. I kind of expect governments to be funding this stuff.
From what I read on Masto, the original maintainer had a personal life breakdown, etc. Their interest in staying on as primary maintainer is gone.
This is a very strong argument for FOSS to pick up the good habit of ditching/un-mainlining projects that are just sitting around for state actors to volunteer commits to, and stripping this cruft out of active projects' dependency trees.
Who wants to keep maintaining a shitty compression format? Someone who is dep-hunting, it turns out.
Okay, so your pirate-torrent person needs liblzma.so. Offer it in the scary/oldware section of the package library that you need to hunt down the instructions to turn on. Let the users see that it's marked as obsolete; enterprises will see that it should go on the banlist.
Collin worked on XZ and its predecessor for ~15 years. It seems that he did that for free, at least in recent times. Anyone would lose motivation to work for free over that period of time.
At the same time, XZ became a cornerstone of major Linux distributions, being a systemd dependency and loaded, in particular, as part of sshd. What could go wrong?
In hindsight, the commercial idea of Red Hat, utilizing the free work of thousands of developers working "just for fun", turned out to be not so brilliant.
On the contrary, this is a good example for why 'vulnerable' OSS projects that have become critical components, for which the original developer has abandoned or lost interest, should be turned over to an entity like RedHat who can assign a paid developer. It's important to do this before some cloak and dagger rando steps out of the shadows to offer friendly help, who oh by the way happens to be a cryptography and compression expert.
A lot of comments in this thread seem to be missing the forest for the trees: this was a multiyear long operation that targeted a vulnerable developer of a heavily-used project.
This was not the work of some lone wolf. The amount of expertise needed and the amount of research and coordination needed to execute this required hundreds of man-hours. The culprits likely had a project manager....
Someone had to stalk OSS developers to find out who was vulnerable (the xz maintainer had publicly disclosed burnout/mental health issues); then the elaborate trap was set.
The few usernames visible on GitHub are like pulling a stubborn weed that pops up in the yard... until you start pulling on it you don't realize the extensive reality lying beneath the surface.
The implied goal here was to add a backdoor into production Debian and Red Hat EL. Something that would take years to execute. This was NOT the work of one person.
Um, what? This incident is turning into such a big deal because xz is deeply ingrained as a core dependency in the software ecosystem. It's not an obscure tool for "pirates."
Warning: drunk brain talking. But an LLM-driven, email-based "collaborator" could play a very long game, adding basic features to a codebase while earning trust, backed by a generated online presence. My money is on a resurgence of the Web of Trust.
The web of trust is a really nice idea, but it works badly against that kind of attack. Just consider that in the real world, most living people (all eight billion) are linked by only six degrees of separation. It really works, for code and for trusted social relations (like "I lend you 100 bucks and you pay me back when you get your salary"), mostly when you know the code author in person.
This is also not a new insight. At the beginning of the noughties, there was a web site named kuro5hin.org, which experimented with user ratings and trust networks. It turned out to be impossible to prevent takeovers.
IIRC, kuro5hin and others all left out a crucial step in the web-of-trust approach: There were absolutely no repercussions when you extended trust to somebody who later turned out to be a bad actor.
It considers trust to be an individual metric instead of leaning more into the graph.
(There are other issues, e.g. the fact that "trust" isn't a universal metric either, but context dependent. There are folks whom you'd absolutely trust to e.g. do great & reliable work in a security context, but you'd still not hand them the keys to your car)
At least kuro5hin modeled a degradation of trust over time, which most models still skip.
It'd be a useful thing, but we have a long way to go before there's a working version.
Once you add punishment for handing out trust to bad actors, even in good faith (which you can't prove or disprove anyway), then you also need to somehow provide significant rewards for handing out trust to good actors; otherwise everyone is going to play it safe and not vouch for anyone, and your system becomes useless.
There were experiments back in the day. Slashdot had one system based on randomly assigned moderation duty which worked pretty great actually, except that for the longest time you couldn't sort by it.
Kuro5hin had a system which didn't work at all, as you mentioned.
But the best was probably Raph Levien's Advogato. That had a web of trust system which actually worked. But had a pretty limited scope (open source devs).
Now everyone just slaps an upvote/downvote button on and calls it a day.
You're likely being downvoted because the GitHub profile looking East Asian isn't evidence of where the attacker or attackers are from.
Nation states will go to great lengths to disguise their identity: writing broken English with Russian-style mistakes when they are not Russian, putting comments in the code in another language, and all sorts of other things to create misdirection.
That's certainly true. At the very least it "seems" Asian, but it could very well be from any nation. If they were patient enough to work up to this point, they would likely not be dumb enough to leak such information.