There have been horrible implementation flaws in RSA too. For all we know, we're a survey away from finding tens of thousands of factor collisions in RSA keys. I really don't think it matters much whether you use RSA or DSA.
I think the intuition was more that RSA is a simpler algorithm to understand and implement, so the "goof surface" is lower. But yeah, someone somewhere will screw up everything.
It's actually the opposite. RSA is more complex and has more failure modes; the discrete log problem that DSA/DH/ElG use is about as simple as it gets.
Again: I don't think it matters. The Debian fiasco was a devastating fuckup in the core of the most important Unix crypto library, did not just affect DSA, and didn't itself have anything to do with DSA.
Not sure I follow. DSA requires a modular inverse algorithm which RSA doesn't (beyond that, both rely on modular exponentiation and prime generation as the only "hard parts"). DSA has more algorithm parameters. It's just a more complicated scheme to implement any way you slice it. DSA requires more "units of mistake" and is more likely to be screwed up by the programmer.
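For context, the extra moving part is the per-signature nonce inverse. Sketched in the standard notation (p, q, g are the domain parameters, x the private key, k the per-signature nonce):

```
r = (g^k mod p) mod q
s = k^(-1) * (H(m) + x*r) mod q    # k^(-1) is the modular inverse of k mod q
```

RSA signing, by contrast, is a single modular exponentiation with the private exponent.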
Ah, I see where you're coming from. I have a mental shorthand that basically says "DSA is the one that works like DH", and you're right, you need the inverse for DSA.
But regarding failure modes, I'm thinking of (as a starting point) things like:
http://www.ams.org/notices/199902/boneh.pdf
There are a lot of implementation errors that happen with RSA. What's a comparable list for DSA? Failure to generate good nonces, and then...?
> don't use agent forwarding ever. If you know better, be damn sure you are right
I am curious. Why do you say this? My use case for agent forwarding is during automated deployment. Without agent forwarding, when pulling from SVN, I either need to enter password every time or have my private keys uploaded to the server where I want to deploy.
You probably need to rethink the security characteristics of your deployment architecture. You shouldn't be authenticating from a production host (which is presumably internet-facing, and thus more likely to be compromised), you should be pushing to it from a more protected source like a build machine behind a firewall.
Using agent forwarding in this case is precisely the problem being discussed. If someone roots your web server (or whatever it is), they can authenticate as your ssh account to any host that accepts it. So at a minimum your subversion server will be compromised if any of the production hosts are. Bad.
> You probably need to rethink the security characteristics of your deployment architecture.
I'm wiser now :). Although until I move to a push model, I think I will explore the restricted keys suggested by ryan-c and use them.
Unfortunately, none of the one-shot deployment systems point this out. Most of them do a git pull on production. When I built mine, I wanted to avoid putting my private keys up on the production server, so I went with agent forwarding.
Anyone with access to your account (this would include root) on a machine which you have forwarded your agent to can use it to authenticate as you. For your use case, the least evil option would be to set up an SSH key that is restricted on the SVN server to only be able to do an svn export.
Sorry if I am missing the sarcasm here :) but I did mean private keys, since I am trying to authenticate from the prod machine to the source control server.
Then you're a fool! Generate new private keys on the PROD server and upload them.
I don't know if you can do it with your SCM, but a better solution would be for the source control server to push to PROD rather than the other way round, as it prevents attacks from a compromised PROD server.
> Generate new private keys on the PROD server and upload them.
How is that any more secure than agent forwarding? The vulnerability with agent forwarding needs some work and the right timing to be exploited after the prod server is rooted. Having a set of private keys lying around is offering access on a plate.
> I don't know if you can do it with your SCM, but a better solution would be the source control server to push to PROD rather than the other way round, as it prevents attacks from a compromised PROD server.
Yes, I am wiser (or less foolish if you prefer ;-) ) now. One can always fall back to rsync and friends if the SCM lacks push support.
Having specific keys for specific purposes is more secure because when you add the public half to your SVN server, you add extra options along with the key that limit which host it can connect from and which command it can execute. This means that your seemingly-scary private key can now do one thing and one thing only: pull from svn.
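A sketch of what such an entry in the SVN server's ~/.ssh/authorized_keys might look like (the address, command, and key are placeholders):

```
from="10.0.0.5",command="svn export svn://localhost/repo /tmp/export",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAA... deploy@prod
```

Even if the key leaks, it can only run that one command, and only from that one address.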
Now you can do deployments without any constraints like "abhaga needs to be awake and have his computer on and be SSHed to the right places" :)
"Don’t Use a Blank Passphrase on Your Key This is basic security, plus allows you to “safely” move your keys between hosts..."
Then:
"Don’t Copy Your Private Key Around Remember this is your identity... Its never a good idea to copy it from system to system."
So put a passphrase on your private key so you can move it between systems, but also don't ever move it between systems.
Side note: I get annoyed by advice about how one should always always put a passphrase on private keys. It makes unchecked assumptions. The private keys on my laptop are stored on a fully encrypted drive that locks every time the computer sleeps. This laptop has far more sensitive data on it than the remote hosts I access (github and a VPS), which serve virtually all their unique data to the public via the web. I'm fine with a naked private key on this machine.
Do you recognize the threat of malware (such as a random script you download) just copying the private key and shipping it off?
A passphrase defeats that threat. And integrating the ssh agent with something like the gnome keyring means you never even have to remember your passphrase.
I confess, I hadn't thought of that threat. It's an interesting thing to think about.
My initial reaction is this: Couldn't malware on my laptop also monitor my keystrokes when I unlock the key? Or when I log in to my VPS web interface? I mean, if the goal is to have a malware infested computer that is no threat to external systems, it seems like there are tons of other files/apps/system you'd also want to password protect, to the point of making the computer almost impossible to use.
Still, it's an interesting point. SSH keys are more sensitive. I do keep an extra password on my password manager.
> I get annoyed by advice about how one should always always put a passphrase on private keys. It makes unchecked assumptions. The private keys on my laptop are stored on a fully encrypted drive that locks every time the computer sleeps.
You're probably not the target audience. If you're savvy enough to make full disk encryption work, then you should already know how to use ssh-agent.
Your setup sounds like something I could want for myself. I have a truecrypt "virtual disk" that I would like to unmount on suspend and mount on resume. Can you elaborate on the details of your encryption setup?
I should revise my phrasing: The computer locks behind an OS X password, the drive, while fully encrypted, remains available as far as the OS is concerned. Something like you describe would be much better.
(An attacker in possession of a sleeping machine could theoretically get the ram, cool it with liquid nitrogen or similar, and try and extract the PGP key. I do at least have FireWire DMA turned off.)
Don't blindly accept key fingerprints and then type your password, especially when the public key should already be in your known_hosts file! ssh-keygen -l -f file is your friend.
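For example, to see what a fingerprint check looks like (this generates a throwaway key purely for the demo; normally you'd point ssh-keygen at an existing public key or your known_hosts file):

```shell
# Throwaway key just for demonstration
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
# Print the fingerprint of the public half
ssh-keygen -l -f /tmp/demo_key.pub
```

Compare that output to the fingerprint ssh shows you before typing "yes".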
Extra Dos:
1) Lie to your PHB that telnet is no longer included and that everyone must now use SSH and that there is absolutely no option to not have account passwords anymore. (This is a true story.)
2) Use it! When you're at a coffee shop, tunnel through port 53 and redirect to port 22 down the line instead of clicking the "I Agree" button on the web portal and having everything you do data mined. (One TOS I saw granted perpetual rights to access your FB account - until you change your password, of course. You had the option of purchasing access with a CC or by logging into FB.) Most of these coffee shop/airport portals don't block traffic on port 53 because of DNS. Most Starbucks block TCP on 53 now, though.
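A sketch of the client side, assuming you control a server whose sshd also listens on port 53 (hostname and port are placeholders - 443 works where 53 is blocked):

```
# ~/.ssh/config
Host coffeeshop-tunnel
    HostName myserver.example.com
    Port 53
    DynamicForward 1080    # local SOCKS proxy; point the browser at localhost:1080
```

Then `ssh coffeeshop-tunnel` gets you an encrypted path out past the portal.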
The article contains a mistake - where it says "Don’t Copy Your Public Key Around" it really means "Don’t Copy Your Private Key Around".
The advice to use ForwardAgent is also dubious, at least without fully describing the implications - which is that if you log into a compromised host, that host can use your credentials to access other hosts.
Agreed, I personally counter this by having different private keys for different networks and levels of security clearance. My work key gets me onto the work networks, once inside, I have a network specific key to use inside the network itself. This way I don't need to use ForwardAgent and my personal boxes will not be compromised if my work or any of my friends are.
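A sketch of that setup in ~/.ssh/config (hostnames and key names are made up for illustration):

```
# Agent forwarding off everywhere, and only offer the key named per host
Host *
    ForwardAgent no
    IdentitiesOnly yes

# Key that gets me onto the work network from outside
Host bastion.work.example.com
    IdentityFile ~/.ssh/id_work

# Separate key used only inside the work network
Host *.internal.work.example.com
    IdentityFile ~/.ssh/id_work_internal
```

If one key is compromised, the blast radius stays inside that one network.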
If you don't run ssh on port 22, it receives far fewer outside login attempts, which among other things stops the logs filling up with login failures.
Two reasons:
1. Logs filling up with login failures from drive-bys masks legitimate/focused hack attempts.
2. If there's a security vulnerability found for sshd, non-standard port choice reduces the risk of drive-by scanners.
Non-standard ports don't stop dedicated attacks, but they do reduce noise that can obfuscate a dedicated attack and can reduce your exposure to uncommitted attackers.
The risk reduction is negligible if someone is doing a portscan on your host. Connection attempts to non-standard ports will eventually occur.
The better solution is to use single packet authorization.[1]
I wanted to stay away from server side settings. But I will say I have mixed feelings about both of these.
If you're using a firewall, the default port matters less. My practice is to restrict SSH to VPN connections only, or from a single bastion host. Changing the port starts losing its charm once you keep finding networks that block odd outbound ports, and several years ago it was a pain to get some mobile ssh clients to use alternate ports.
Root login I generally believe should be turned off, and it certainly should not be allowed with passwords. I tend to think a well-configured set of keys (one for each user who needs root) poses the same risk as users with sudo *, or sharing the root password via su. As much as I hate to admit it, there are occasions where remote root access has saved the day.
On top of that, I disable all password logins on every server I run.
If for some reason I must get in and don't have access to my private key, I use a virtual console from my VPS provider to temporarily allow password logins and then immediately disable them when done.
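The relevant /etc/ssh/sshd_config lines (restart sshd after editing; flip PasswordAuthentication to yes temporarily from the virtual console, then back):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```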
I had a server compromised once because of a default password on the mysql account.
It's good practice to first log in as another user and then gain root privileges. This is auditable, and if your sshd won't allow root logins, they can't be brute-forced directly.
Your private key isn't that valuable. It's not like, say, a GPG key which you've had widely signed (there's no SSH "web of trust"). Such a key can be useful as it ages (though there are trade-offs in security -- risk of it being compromised also rises with time). You also don't lose access to previously encrypted data by losing an old SSH key, as you would with a GPG key.
So long as you can get new public keys loaded onto the systems you need to access, you're just fine generating a new keypair.
The problem is that I have 20-30 servers (I don't even know how many or which) that I have access to, and many times I'm the only one who can log in to them. If I lose my SSH key, I have to go around all these boxes changing keys, and many times there's nobody to let me in.
That's a case where having multiple keys (for multiple point-of-origin systems) which can access those hosts would be a Really Good Thing.
Distributed shell (dsh) and git can be helpful in keeping at least portions of things synched up. You can also mimic dsh with your own hostlist(s) and iterated tasks (for host in $( cat hostlist ); do echo "$host"; ssh "$host" 'do stuff'; done)
... or the like.
If you're managing these boxes as part of a team, having a secondary or tertiary person with access (or more) can also help.
There's the use of password safes to keep track of access credentials as well (another hassle to keep synchronized, of course).
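A sketch of that iterated-task approach applied to key rotation (hostnames and paths are invented; it assumes your old key still authenticates to each host):

```shell
#!/bin/sh
# Append a new public key to authorized_keys on every host in a list.
# $1 = path to the new public key, $2 = file with one hostname per line.
push_key() {
    pubkey=$1
    hostlist=$2
    while read -r host; do
        echo "updating $host"
        ssh "$host" 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < "$pubkey"
    done < "$hostlist"
}
```

Once the new public key is everywhere and verified, remove the old one the same way.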
Ahhhhhhhhhhhh! Use multiple public keys. For each accessing device generate a separate private key and upload your new public keys. I don't know anywhere you can't have multiple public keys.
PS Do's should be Dos, but it's quite hard to read.
You're running ssh-agent on a desktop (or laptop) system which you leave either suspended or locked (you hope) via a screensaver (see the recent xscreensaver hotkey exploit/bug).
You're running ssh-agent directly via a shell on remote hosts (bad idea).
You can set a timeout for ssh-agent keys with the '-t life' option (default: seconds). Or when adding an identity. However there's no way to specify this in a config file (for sane defaults), and most mechanisms for launching ssh-agent don't allow the user to interact with the initiation in any sane way (e.g.: /etc/X11/Xsession*).
Specifying, say, 43200 - 86400 seconds (for a desktop), or some low multiple of 3600 seconds (for remote sessions) might be reasonably sane.
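What that looks like in practice (the key here is a throwaway generated for the demo; normally you'd add your real key):

```shell
# Start an agent whose keys expire after 12 hours by default
eval "$(ssh-agent -s -t 43200)" > /dev/null
# A lifetime given to ssh-add overrides the agent default for that key
rm -f /tmp/short_key /tmp/short_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/short_key
ssh-add -t 3600 /tmp/short_key 2> /dev/null
ssh-add -l    # the key is listed until its hour is up
```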
I'd pick agent forwarding over remote agents myself.
* don't use agent forwarding ever. If you know better, be damn sure you are right
* don't back up your private key. If you lose it, generate a new one and have someone add it
* do use passwordless keys, but only if they are command (and preferably IP) locked to trigger specific jobs
* use RSA keys, not DSA keys (see Debian random number fiasco for why)
I covered some of this in http://www.tenshu.net/2012/02/sysadmin-talks-openssh-tips-an...