> The developers note, though, that if an attacker can make changes to an encrypted filesystem that is subsequently mounted by the user, all bets are off.
Did this line stand out to anyone else?
I would expect modification of an encrypted directory to cause the loss of any modified files, but I'm surprised at the specific wording of that statement. From that, I'm reading that it could be possible to modify an encrypted directory so as to cause key or plaintext content leakage if the directory is accessed again or the volume is re-mounted.
Does Full Disk Encryption have this problem as well (mounting a maliciously modified full disk encrypted filesystem could result in disclosure)?
This has been a problem with disk encryption since the dawn of time: there is (at least AFAIK) no scheme for storing a MAC alongside each encrypted block that meets the usual efficiency and robustness requirements.
Emulating an unencrypted disk means allowing updates at (traditionally) 512-byte sector granularity. Assuming we store a SHA-256 MAC for each sector's ciphertext, for a 1 TiB drive that means storing 64 GiB of hashes.
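The arithmetic behind that figure, as a quick sketch (assuming a 32-byte SHA-256 MAC per 512-byte sector):

    # Back-of-the-envelope: per-sector MAC overhead for a 1 TiB drive,
    # assuming 512-byte sectors and a 32-byte (SHA-256) MAC per sector.
    DRIVE_SIZE  = 1 << 40                      # 1 TiB
    SECTOR_SIZE = 512
    MAC_SIZE    = 32                           # SHA-256 digest, in bytes

    sectors   = DRIVE_SIZE // SECTOR_SIZE      # 2**31 sectors
    mac_bytes = sectors * MAC_SIZE             # 2**36 bytes = 64 GiB

    print(f"sectors: {sectors:,}")
    print(f"MAC storage: {mac_bytes // (1 << 30)} GiB "
          f"({100 * MAC_SIZE / SECTOR_SIZE:.2f}% overhead)")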
In order to preserve alignment (required by various layers of the hardware and software stack), it's not possible to locate these hashes inline with the sectors they represent. In traditional media that meant adding an extra seek to every read or flush, or in other words at least halving the throughput of your drive.
This situation doesn't magically disappear with SSDs; the same performance problem just shows up in a different way.
There are a bunch of other reasons full disk encryption isn't authenticated, but the above is a good starting point.
edit: I have no idea how hardware disk encryption (e.g. TCG OPAL) works; I guess it's at least vendor-dependent. With traditional spinning rust there is at least the possibility of interleaving authentication data more efficiently. In any case, at that point the attacker would need to physically disassemble the drive to perform an attack, so OPAL is probably a step up from plain software encryption if you're paranoid enough, yet somehow still trust your drive firmware.
But this isn't disk encryption! Filesystems have metadata, which is where a MAC can go: right next to the filename.
There's just no way this should be unauthenticated XTS; it's simply the wrong mode to use, and you should use an AEAD. ChaCha20-Poly1305 (RFC 7539: https://datatracker.ietf.org/doc/rfc7539/) makes much more sense. It's faster, too, on the target mobile devices, which have poor crypto performance.
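For illustration, here's roughly what an AEAD buys you over plain XTS, sketched in userspace with the ChaCha20-Poly1305 construction from Python's cryptography package (obviously not the kernel API, just the concept):

    # Sketch: authenticated encryption with ChaCha20-Poly1305 (RFC 7539).
    # Flipping a single ciphertext bit makes decryption fail loudly instead
    # of silently handing back garbage plaintext, which is the whole point.
    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key   = ChaCha20Poly1305.generate_key()
    aead  = ChaCha20Poly1305(key)
    nonce = os.urandom(12)                     # 96-bit nonce, must never repeat per key

    ct = aead.encrypt(nonce, b"file block contents", b"block metadata as AAD")

    tampered = bytearray(ct)
    tampered[0] ^= 0x01                        # attacker flips one bit
    try:
        aead.decrypt(nonce, bytes(tampered), b"block metadata as AAD")
    except InvalidTag:
        print("tampering detected")            # the AEAD refuses to decrypt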
I don't think this should be merged early or as-is. Unauthenticated encryption is against good advice and a bad idea. Standardising that is an even worse idea.
Edit: I do see that they're intending this as a minimum viable patch for Android 'M' and they intend to add GCM support later (again, I'd point to CC20 as a newer, better alternative), they're just waiting on a transactional update first. I do question whether shipping it broadly with this design as-is should happen in the vanilla kernel, however: Android uses its own patches anyway. Maybe let this one mature a bit first before merging upstream?
> Filesystems have metadata, which is where a MAC can go: right next to the filename.
File-level MACs would require reading/writing the entire file in one go, and would preclude efficient in-place modification of files. The alternative, lots of per-block MACs stored in extended attributes, wouldn't work for ext4 because of the 4 KiB extended attribute size limit, and would also have the problems the grandparent listed (lots of seeks).
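To put rough numbers on that limitation (assuming 4 KiB filesystem blocks, 16-byte tags, and ignoring xattr header overhead):

    # How far does a single 4 KiB extended attribute get you for per-block MACs?
    # Assumes 4 KiB filesystem blocks and 16-byte tags (e.g. GCM or Poly1305),
    # ignoring xattr name/header overhead.
    XATTR_LIMIT = 4096
    TAG_SIZE    = 16
    BLOCK_SIZE  = 4096

    tags_per_xattr = XATTR_LIMIT // TAG_SIZE           # 256 tags
    max_file_size  = tags_per_xattr * BLOCK_SIZE       # 1 MiB of data, tops

    print(f"{tags_per_xattr} block MACs per xattr -> "
          f"covers at most {max_file_size // (1 << 20)} MiB per file")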
A fair point. btrfs would have a much easier time of it. We then have the interesting consequence that we can observe times and sizes of COW updates, but in truth SSDs have that anyway.
ChaCha20 may be better in the abstract, but as a practical matter AES-GCM can take advantage of AES-NI. I don't think a patch adding support for ChaCha20 would be particularly controversial anyway.
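If you want to see the AES-NI effect for yourself, a crude userspace comparison looks something like this; the numbers depend entirely on your CPU and the library's backend, so treat it as illustrative only:

    # Crude throughput comparison of the two AEADs. On a CPU with AES-NI the
    # AES-GCM numbers usually win; on older or low-end ARM, ChaCha20 often does.
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    data  = os.urandom(16 * 1024 * 1024)       # 16 MiB test buffer
    nonce = os.urandom(12)

    ciphers = [("AES-256-GCM",       AESGCM(AESGCM.generate_key(bit_length=256))),
               ("ChaCha20-Poly1305", ChaCha20Poly1305(ChaCha20Poly1305.generate_key()))]

    for name, aead in ciphers:
        start = time.perf_counter()
        aead.encrypt(nonce, data, None)
        elapsed = time.perf_counter() - start
        print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s")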
_wmd explains pretty well why FDE with authentication is technically challenging.
To be clear, the reason unauthenticated encryption is considered reasonable here is because the threat model is: Someone steals your laptop, you don't expect you'll ever get it back, but you don't want them to be able to extract any data.
On the other hand, if your threat model were "Someone secretly grabs your laptop when you're not looking, pulls out the hard drive, flips some bits, then puts it back, without you ever noticing.", then FDE can't protect you.
For a laptop or phone, the latter case seems reasonably implausible and so FDE is worthwhile. OTOH, for cloud storage use cases where you don't necessarily trust the storage machines, FDE as-is is basically worthless.
It might be an oblique reference to the fact that the filesystem code hasn't been specifically hardened with respect to a maliciously-crafted filesystem image.
Traditionally this was not seen as an attack vector, because only root could mount filesystems anyway (and without filesystem encryption, someone who could modify your filesystem image could just modify any binary to add a backdoor).
This is why only specifically whitelisted filesystem types can be mounted in unprivileged containers.
I think a better use case for this might be to encrypt the full partition with dm-crypt and then have applications or users encrypt their data directories with ext4 directory encryption on top, though I could be wrong.
That's not totally a given anymore, with so many UEFI laptops capable of Secure Boot. They additionally mentioned Android as a use case for this, which also generally comes with a locked bootloader among other boot-chain security features.
You can build the kernel as an EFI application (using the EFI stub) and have the firmware verify its signature. Most new laptops do support this, and an increasing number of desktops and servers do too. In this case, some users may be surprised to find that even though they used ext4's encryption on /, someone can still modify the inode table, since it remains unencrypted (though I could be reading the article wrong).
According to the comments, the primary benefits of filesystem-level encryption over full-disk (block-level) encryption are claimed to be:
1) The filesystem can be configured to expose different parts of the tree to different users, whereas FDE is all-or-nothing.
2) The filesystem can make room for more metadata than a block device, so it would be easier to implement authenticated encryption.
3) More fine-tuned control over caching.
But are these really significant advantages?
1) Most devices nowadays, including Chromebooks and Android phones, are only ever used by one person.
2) We could modify dm-crypt to set aside a certain amount of space for MACs, and just expose a smaller block device to the next layer. Besides, the proposed changes to ext4 don't use authenticated encryption, either.
For point 2, you can't just expose a smaller block device to the next layer - everything is designed to work with 512-byte logical blocks. You could possibly do it if you were willing to sacrifice half your disk space by pairing every 512-byte data block with a 512-byte authenticator block, but...
...the other issue is that you need a way to atomically update the data block and authenticator together. This is a similar problem to the "RAID write hole", and doing this at the block layer requires even more overhead - something equivalent to RAID write intent bitmaps.
The filesystem, on the other hand, already tends to have a journal, log structure, or COW rules that allow some form of atomic transactional update. The encryption authenticator can piggy-back on this.
SSDs have fairly large page sizes and erase block sizes, so you're often reading and writing a lot more than necessary, anyway. A clever implementation might be able to take advantage of that to scatter the MAC throughout the drive without too much impact on performance or longevity.
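To make the "set aside some space and remap" idea concrete, here's a toy sketch of the address math for interleaving one 512-byte authenticator sector with every 15 data sectors (15 × 32-byte MACs fit in one sector); it deliberately ignores the atomic-update problem discussed above:

    # Toy remapping: the device exposes only data sectors to the layer above,
    # and physically stores one authenticator sector after every 15 data
    # sectors (15 x 32-byte MACs fit in 512 bytes, ~6% space overhead).
    DATA_PER_GROUP = 15
    GROUP_SIZE     = DATA_PER_GROUP + 1        # plus one MAC sector per group

    def to_physical(logical_sector: int) -> tuple[int, int]:
        """Map a logical data sector to (physical data sector, physical MAC sector)."""
        group, offset = divmod(logical_sector, DATA_PER_GROUP)
        data_sector = group * GROUP_SIZE + offset
        mac_sector  = group * GROUP_SIZE + DATA_PER_GROUP   # last sector of the group
        return data_sector, mac_sector

    print(to_physical(0))    # (0, 15)  - first group
    print(to_physical(31))   # (33, 47) - logical sector 31 lives in group 2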
But you know that servers host data shared by many persons, right?
Also, Android can have multiple users, and I imagine Chromebooks can too.
A related question: what's the best practice for encrypting the FS of a server while still allowing unattended reboots? One doesn't want to have to type in a password at each reboot (no unattended reboots, and impractical for a large number of servers), but the password can't be left stored on the server either. So how is it done, if it is done? Thanks.
Mandos[0] does this, and allows you to configure a policy for how long the server is allowed to be offline without requiring an administrator to authorize boot. You could also cook up something using Debian/Ubuntu's support for embedding dropbear in the initramfs to supply a key.
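The general shape of that, stripped down to a toy (this is NOT Mandos's actual protocol, which uses TLS with client certificates and an approval window; the hostname and port below are made up):

    # Toy illustration only: at boot, the initramfs asks a key server on the
    # LAN for the disk passphrase; if nothing answers, fall back to prompting
    # an admin at the console as usual. Hostname/port are hypothetical.
    import socket

    KEY_SERVER = ("keyserver.example.internal", 9999)

    def fetch_passphrase(timeout: float = 5.0) -> bytes | None:
        try:
            with socket.create_connection(KEY_SERVER, timeout=timeout) as sock:
                return sock.recv(4096)         # passphrase, to be fed to cryptsetup
        except OSError:
            return None                        # server unreachable: manual entry

    passphrase = fetch_passphrase()
    print("got key from network" if passphrase else "prompting admin at console")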
1. The web page is not the primary entry point for the program; the Debian package is. So I don’t think the “bounce rate” is that large of a problem.
2. CAcert was chosen when the system was used for different purposes, in a different environment, by a different audience, and at a time when Debian shipped browsers with CACert’s root cert included. After that, it’s just been inertia.
3. I quote from the StartSSL F.A.Q.¹: “The Terms and Conditions of StartCom and the StartCom Certification Policy requires subscribers to provide the correct and complete personal details during registration.”. I generally don’t create accounts with external services, and as a sysadmin, I can and do run everything myself.
The only way to do this is to have the server only operate on encrypted data to begin with. Any scheme where the server can read the unencrypted data by itself can be subverted.
As with DRM, it is impractical to store both a lock and its key in the same place and expect it to be secure.
Exactly, that's why the password shouldn't be on the server, but having to type it in every time is inconvenient. Nothing I've been able to think of is secure, so I was asking.
I also understand that any attacker gaining access to the server while it's running (the usual scenario) gets access to the unencrypted filesystem, so maybe it doesn't make much sense to encrypt servers. Still, I'm curious.
> Exactly, that's why the password shouldn't be on the server, but having to type it in every time is inconvenient. Nothing I've been able to think of is secure
That’s what I said to a friend of mine when we had numerous servers with encrypted disks and frequent power outages requiring us to type in passwords. He then suggested many different schemes to fix this, each of which I shot down as being insecure. Then he came up with something which I couldn’t shoot down, and he made a first proof-of-concept implementation, which is how Mandos¹ got started. This was many years ago; Mandos has since become available in Debian and Ubuntu; just do "apt-get install mandos".
I'm not sure how this is supposed to work in the mobile device (Android / Chrome OS) scenario.
These devices are typically on (but suspended) when they're lost. If this means the master key is still in memory, then the attacker has it.
If, on the other hand, the master key is wiped when the device suspends and is reloaded when you unlock the device, how does your SMS application write your messages (that come in while the device is suspended) to disk? How does the phone look up your contacts to display the incoming caller name if the device was suspended when the call came in?
You can wipe the master key when you suspend. However, it just means that there needs to be some unencrypted storage to write down things like incoming text messages. Caller ID will also not work (or will be stored in the unencrypted storage).
There are ways to do it, but you will lose security/convenience. It all depends on your threat model and how inconvenient you're willing to make your phone.
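A rough sketch of that spool-and-migrate pattern (all paths and names here are made up; real Android does its own thing):

    # While the device is locked and the master key is wiped, incoming data
    # lands in a small plaintext spool; on unlock it is moved into the
    # encrypted store. Paths and names are invented for illustration.
    import os, shutil

    PLAINTEXT_SPOOL = "/data/spool-unencrypted"
    ENCRYPTED_STORE = "/data/user/0/messages"      # only usable after unlock

    def store_incoming(msg_id: str, payload: bytes, unlocked: bool) -> None:
        target = ENCRYPTED_STORE if unlocked else PLAINTEXT_SPOOL
        with open(os.path.join(target, msg_id), "wb") as f:
            f.write(payload)

    def drain_spool_on_unlock() -> None:
        for name in os.listdir(PLAINTEXT_SPOOL):
            shutil.move(os.path.join(PLAINTEXT_SPOOL, name),
                        os.path.join(ENCRYPTED_STORE, name))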
I noticed that ecryptfs + ext4 doesn't support sparse files efficiently (it takes a long time to write to a large sparse file, as if it actually writes/encrypts all the zeroes).
Having encryption directly in ext4 should solve that, but I haven't tested the patchset yet.
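If anyone wants to reproduce it, a minimal test is something like this; a filesystem with real sparse-file support keeps st_blocks tiny, whereas ecryptfs behaves as if the hole were written (and encrypted) out:

    # Create a ~1 GiB sparse file and compare apparent size with allocated space.
    import os

    path = "sparse-test.bin"
    with open(path, "wb") as f:
        f.seek(1 << 30)            # seek 1 GiB into the file
        f.write(b"\0")             # write a single byte at the end

    st = os.stat(path)
    print(f"apparent size: {st.st_size} bytes")
    print(f"allocated:     {st.st_blocks * 512} bytes")   # st_blocks is in 512-byte units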