I made something like this! Except I have it plugged into an outlet in the kitchen, so no battery to deal with. It's a little hacky but it works for me.
Honestly: MC and MT are classic bloated Arduino projects, with a single giant loop plus an interrupt coming from the LoRa modem.
Both projects are hitting their limits because of this. Every new feature and every bug fix causes endless amounts of pain and breakage.
I was involved in both - and gave up because of this.
IMHO both projects need some kind of thin-client architecture that delegates most of the functionality to the client. Only keep some basic LoRa/message buffer/routing/battery saving functionality in the hardware itself.
I hope TRMNL will start supporting 6 color e-paper displays more widely. I have one of these 6 color panels running with my own custom firmware on their BYOD license and I'm really happy with it, just wish it could take advantage of the color capabilities.
Yes! I'm in the same boat! My guess is that there's something in the backend (image cache/storage size) that presents as a cost/logistical problem instead of as a technical one.
Putting these people into chains was, as the linked article puts it: "Imperial Tyranny", and a completely unnecessary humiliation of the workers. Regardless of their visa status.
Correct, KEMs should be replaced ASAP since they are currently vulnerable to store-now-decrypt-later attacks. Digital signature algorithms are less urgent, but considering how long it takes to roll out new cryptography standards, they should be preferred for any new designs. That said, the new PQC signatures are much larger than the current 64-byte ed25519 signatures that are most common, and that could end up being very difficult to integrate into embedded systems with low bandwidth or limited memory, e.g. CAN bus secure diagnostics, meshtastic nodes, etc.
I found this article helpful when I had that same question. Basically, BER has some less rigid specifications for how to encode the data that can be convenient for the implementation, such as terminator-delimited sequences instead of having to know their length ahead of time. But this means that there are many equivalent serializations of the same underlying object, which is problematic for cryptography. DER is an unambiguous subset of BER that allows only one correct serialization for a given object.
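To make the ambiguity concrete, here's a minimal sketch in plain Python (no ASN.1 library needed). BER allows length octets in either short form or long form even for small lengths, so the same INTEGER has more than one legal encoding; DER mandates the minimal (short) form:

```python
# Two BER-legal encodings of the same ASN.1 INTEGER, versus the single DER form.
# Tag 0x02 = INTEGER. BER permits the length to be written in short form
# (one octet) or long form (0x81 prefix + one length octet) even when the
# length is under 128; DER requires the shortest possible form.

value = b"\x05"  # INTEGER with value 5, one content octet

ber_short = b"\x02" + b"\x01" + value      # short-form length: 0x01
ber_long = b"\x02" + b"\x81\x01" + value   # long-form length: 0x81 0x01

der = ber_short  # DER accepts only the minimal encoding

# Same underlying object, two distinct BER byte strings:
assert ber_short != ber_long
print(ber_short.hex(), ber_long.hex())  # 020105 02810105
```

If you hash or sign the serialized bytes, those two encodings produce different digests for the same logical object, which is exactly why cryptographic protocols pin down DER.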
I'm assuming that when they say this improves user experience, it implies the use case is primarily TLS. In that case, store-now-decrypt-later attacks are already considered an urgent threat with regard to post-quantum crypto. With FIPS 203 released and Chrome already using an implementation based on the draft standard, this algorithm (at least for TLS) seems to be on its way out.
Thanks, I forgot about that. So if I understand it right, the idea is to provide some insurance in case these relatively young algorithms are broken as they get exposed to more and more cryptanalysis.
No one other than NIST is recommending phasing out pre-quantum crypto. Everyone else is using a combination of pre-quantum and post-quantum because trust in the security and robustness of the post-quantum ecosystem is fairly low.
Most people would consider streaming music to be a negligible amount of bandwidth. Seems to me like SSH in a "high security mode" could just send X Kbps of bi-directional pad at a layer directly above encryption but below application. Then just use that channel for all the normal SSH traffic. You could either treat this as a bandwidth limited channel or do some slow time-constant ramping up and down at the risk of leaking information about file downloads or large command outputs.
This seems like a reasonable solution to me, and I often stream music through my ssh sessions via port forwards so I may well already be getting the benefit in some places
This and a similar suggestion in another thread may sound nice and easy, "just add a constant stream of noise", but it assumes you can generate enough constant noise and intersperse it with valid commands without the two being distinguishable. The problem is not necessarily that you want to hide (from a network adversary) that you've been typing. It's that you do not want to reveal, through some side channel, what the exact contents were.
On the openssh-unix-dev mailing list, someone recently pointed out[0] that just periodically (without jitter) sending out packets may be problematic due to subtle differences in clock timing. As an aside, they also link to a presentation[1] [PDF] that shows the influence of temperature on clock skew (especially page 18) and that this opens up a possibility for fingerprinting.
Then there's the challenge of keeping SSH interactive enough that people do not experience too much input lag while typing. What if the user typed a character, but due to such a timing side-channel preventive measure, that character needs to be sent in the next packet, adding latency to the user experience? Surely it improves security, but it may add too much frustration for regular usage.
But I don't think the conversation here is about anonymity, it's about side channels that reveal the actual content of the SSH session. The OP is looking at determining the command typed based on keystroke timing. The attacks you link would work for any traffic that could be intercepted, SSH or otherwise, and they wouldn't give any info about the content of the stream.
If we're just focused on removing all traces of keystroke timing from the channel, then I think a decoupled SSH transport layer providing, say, 1 kB of zero-padded capacity to the shell every 20 ms, along with a FIFO to smooth the traffic into it, and maybe some logic to ramp the channel bandwidth up and down based on queue length, would go a long way toward mitigating this specific attack.
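A minimal sketch of that idea in Python. This is not how any real SSH implementation works; the class name, frame size, and 2-byte length header are all illustrative. The point is just that the wire only ever sees identical fixed-size frames at a fixed cadence, regardless of what (or whether) the user typed:

```python
import struct
import threading
import time

FRAME_BYTES = 1024   # payload capacity per frame (1 kB, as suggested above)
INTERVAL_S = 0.020   # one frame every 20 ms -> constant ~400 kbps on the wire

class PaddedChannel:
    """Constant-rate transport sketch: the shell writes whenever it likes,
    but the wire only ever sees fixed-size frames at a fixed cadence."""

    def __init__(self, transmit):
        self._buf = bytearray()        # FIFO between shell and transport
        self._lock = threading.Lock()
        self._transmit = transmit      # e.g. hands frames to the encrypted layer

    def write(self, data: bytes) -> None:
        """Called by the shell/application side; never touches the wire directly."""
        with self._lock:
            self._buf += data

    def next_frame(self) -> bytes:
        """Build one fixed-size frame: 2-byte length header + payload + zero pad.
        The header lets the receiver strip the padding; an empty buffer
        yields a pure padding frame, indistinguishable in size and timing."""
        with self._lock:
            payload = bytes(self._buf[:FRAME_BYTES])
            del self._buf[:FRAME_BYTES]
        frame = struct.pack(">H", len(payload)) + payload
        return frame + b"\x00" * (FRAME_BYTES + 2 - len(frame))

    def run(self, stop: threading.Event) -> None:
        """Pacer loop: emit a frame every tick, data or not."""
        while not stop.is_set():
            self._transmit(self.next_frame())
            time.sleep(INTERVAL_S)
```

The queue-length-based ramping would go in `run()`, adjusting `INTERVAL_S` or `FRAME_BYTES` slowly enough that the adaptation itself doesn't leak per-keystroke timing.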
https://github.com/jonmon6691/arduino_busstop