“Open Source” does not imply “less secure”

Sometimes programmers hesitate to make their software open source because they fear that revealing the source code would allow attackers to ‘hack it’.

There are certainly specific cases where this is true, but it does not hold as a general rule.

In my opinion, if inspection of the source code allows an attacker to ‘hack it’, then the programmer has done it wrong. Security comes primarily from using secure algorithms, regardless of whether the source is public.

OpenSSL is a case in point: it is open source, but it nevertheless powers HTTPS all over the internet. “But,” you say, “it is only secure because its code is kinda obscure.” Well, no: cryptographically secure algorithms exhibit astonishing properties. For example, the one-time pad encryption technique is extremely simple and exhibits “perfect secrecy”, which Wikipedia defines as follows:

One-time pads are “information-theoretically secure” in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon about the same time. His result was published in the Bell Labs Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.

Claude Shannon proved, using information theory considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because, given a truly random key which is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely.

Take for example the following simple implementation of the One-time pad (via XOR) in Ruby (which took me just a couple of minutes to write):
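
A minimal sketch along those lines (SecureRandom stands in here for a source of randomness; a genuine one-time pad requires truly random key material):

    require 'securerandom'

    # XOR two equal-length byte strings; the same operation both
    # encrypts and decrypts.
    def xor(data, key)
      data.bytes.zip(key.bytes).map { |d, k| d ^ k }.pack('C*')
    end

    message = 'Attack at dawn'

    # The key must be as long as the message and must be used only once.
    key = SecureRandom.random_bytes(message.bytesize)

    ciphertext = xor(message, key)
    plaintext  = xor(ciphertext, key)  # => "Attack at dawn"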

This code is open source, but it nevertheless exhibits the property of perfect (i.e. 100%) secrecy “even against adversaries with infinite computational power”, provided that the key is never transmitted over insecure channels.

Sure, the one-time pad is not practical, and one could probably exploit weaknesses in Ruby or the underlying operating system. But that is not the point. The point is that properly implemented software can be made open source without compromising its security.

To contrast this with an example of (bad) source code which should not be made public because it only creates a false sense of security:
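
A sketch of what such code could look like, built around the expression discussed next:

    # BAD: the transformation itself is the secret; there is no key.
    def obscure(message)
      msgraw = message.bytes
      (0...msgraw.length).map { |n| (msgraw[n] + 7) ^ 99 }.pack('C*')
    end

    # Once the algorithm is known, reversing it is trivial:
    def deobscure(obscured)
      obscured.bytes.map { |b| ((b ^ 99) - 7) % 256 }.pack('C*')
    end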

Here, ((msgraw[n] + 7) ^ 99) is equivalent to a hard-coded secret. Sure, the obscured message, when transmitted over a public network, may look random. But the algorithm could easily be reverse-engineered by cryptanalysis. And if the source code were revealed, it would be trivial to decode past and future messages.

Conclusion

“Open Source” does not imply “insecure”. Security comes from secure — not secret — algorithms (which of course includes freedom from bugs). What counts as “secure” is defined mathematically, and “mathematics (and, by extension, physics) can’t be bribed.” It is not easy to come up with such algorithms, but it is possible, and there are many successful examples.

Needless to say, not every little piece of code should be made open source — ideally programmers will only publish generally useful and readable software which they intend to maintain, but that is a subject for another blog post.

Reasonably secure unattended SSH logins from untrusted machines

There are certain cases where you want to operate a not completely trusted networked machine, and write scripts to automate some task which involves an unattended SSH login to a server.

By “not completely trusted machine” I mean a computer which is reasonably secured against unauthorized logins, but is physically unattended, which means that unknown persons could have physical access to it.

An established SSH connection has a number of security implications. As I argued in a previous blog post, “Unprivileged Unix Users vs. Untrusted Unix Users”, having access to a shell on a server is problematic if the user is untrusted (as is always the case when the user originates from an untrusted machine), even if he is unprivileged on the server. In that post I presented a method to confine an SSH user to a jail directory (via a PAM module using the Linux kernel’s chroot system call) to prevent reading of all world-readable files on the server. However, such a jail directory still doesn’t prevent SSH port forwarding (which I illustrated in this blog post).

In short, any kind of SSH access allows access to at least all of the server’s open TCP ports, even if they are behind its firewall.

Does this mean that giving any kind of SSH access to an untrusted machine should not be done in principle? It does seem so, but there are ways to reduce the attack surface and make the setup reasonably secure.

Remember that SSH uses some form of authentication: either a plain password, or a public/private keypair. In both cases there are secrets which should not be stored on the untrusted machine in a way that allows them to be revealed.

So the question becomes: How to supply the secrets to SSH without making it too easy to reveal them?

A private SSH key is permanent and must be stored on a permanent medium of the untrusted machine. To mitigate the possibility that the medium (e.g. the hard drive) is extracted and the private key revealed, the private key should be encrypted with a long passphrase. An SSH passphrase needn’t be typed manually every time an SSH connection is made: ssh connects to ssh-agent (if it is running) to use private keys which have previously been decrypted via a passphrase. ssh-agent holds this information in RAM.

I said “RAM”: for the solution to our present problem, this is as good as it gets. The method presented below could only be defeated by reading out the RAM of the running machine with hardware probes, which requires extremely specialized skills and equipment. In this blog post, this is the meaning of the term “reasonably secure”.

On desktop machines, ssh-agent is usually started together with the graphical user interface. Keys and their passphrases can be “added” to it with the command ssh-add. The ssh program connects to ssh-agent if the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are present. This means that any kind of shell script (even an unattended one called from cron) can benefit from it: passphrases won’t be asked for if the corresponding key has already been decrypted in memory. The main advantage is that this has to be done only once after a reboot of the machine (because a reboot clears the RAM).
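
For example, to decrypt a key into the agent for the current session (the key path is illustrative):

    # Decrypt the key once; ssh-agent keeps it in RAM for later use.
    ssh-add ~/.ssh/id_rsa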

On a headless client without a graphical interface, ssh-agent may not even be running, so we have to start it ourselves. There is an excellent program called keychain which makes this very easy. The sequence of our method looks like this:

  1. The machine is rebooted.
  2. An authorized administrator logs into the machine and uses the keychain command to enter the passphrase which is now stored in RAM by ssh-agent.
  3. The administrator now can log out. The authentication data will remain in the RAM and will be available to unattended shell scripts.
  4. Every login to the machine clears the authentication information. This ensures that even a successful login by an attacker renders the private key useless. This implies a minor inconvenience for the administrator: he has to re-enter the passphrase at every login, too.

Keychain is available in the repositories of all major distributions:
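
    # On Debian/Ubuntu, for example:
    sudo apt-get install keychain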

Add the following line to either ~/.bashrc or to the system-wide /etc/bash.bashrc:
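
A line along these lines should work (the key path is an example; adapt it to your key):

    eval `keychain --eval --clear ~/.ssh/id_rsa`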

This line will be executed at each login to the machine. What does this command do?

  1. keychain will read the private key from the specified path.
  2. keychain will prompt for the passphrase belonging to this key (if there is one).
  3. keychain will look for a running instance of ssh-agent. If there is none, it will start it. If there is one, it will re-use it.
  4. Due to the --clear switch, keychain will clear all keys from ssh-agent. This renders the private key useless even if an attacker manages to successfully log in.
  5. keychain adds the private key plus the entered passphrase to ssh-agent, which stores it in RAM.
  6. keychain outputs a short shell script (to stdout) which exports the two environment variables mentioned above, pointing to the running instance of ssh-agent, for consumption by ssh.
  7. The eval command executes that short shell script, which does nothing more than set the two environment variables.

Environment variables are not global; they always belong to a running process. Thus, every unattended script which uses ssh needs to set these environment variables itself, by evaluating the output of:
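
    # --eval prints the export statements; --quiet suppresses messages.
    keychain --eval --quiet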

for example, in a Bash script:
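
    #!/bin/bash
    # Hypothetical unattended job; user, host and paths are examples.
    # Import SSH_AUTH_SOCK and SSH_AGENT_PID from the running ssh-agent:
    eval `keychain --eval --quiet`
    rsync -a /var/data/ backupuser@server.example.com:/backup/data/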

It makes sense to gracefully catch SSH connection problems in your scripts. If you don’t, a script may hang indefinitely, prompting for a passphrase that has not been added properly. To avoid this, do a ‘preflight’ ssh connection which simply returns an error:
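
    # BatchMode=yes makes ssh fail instead of prompting for a secret;
    # user and host are examples.
    if ! ssh -o BatchMode=yes -o ConnectTimeout=10 backupuser@server.example.com true; then
        echo "SSH preflight failed, aborting." >&2
        exit 1
    fi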


Conclusion

In everyday practice, security is never perfect. This method is just one way to protect, within reasonable limits, an SSH connection from an unattended/untrusted machine “in the field” to a protected server. As always when dealing with the question of ‘security’, any kind of solution needs to be carefully vetted before deployment in production!

Never type plain passwords for SSH authentication

It could be said that SSH (Secure Shell) is an administrator’s most important and most frequently used tool. SSH uses public-key cryptography to establish a secure communication channel. The public/private keypair is either

  1. generated automatically, where the (typed or copy-pasted) plaintext password is transmitted over the encrypted channel to authenticate the user, or
  2. generated manually once, where the private key is permanently stored on the client and the public key is permanently stored on the server. This method also authenticates the user without transmitting a password.

Even if it may have been secure in the 2000s, the first method (typing or copy-pasting the plaintext password) has really become insecure due to the following possible side-channel attacks, all belonging to the category of keystroke logging:

  1. Security Vulnerabilities in Wireless Keyboards
  2. Keystroke Recognition from Wi-Fi Distortion
  3. Snooping on Text by Listening to the Keyboard
  4. Sniffing Keyboard Keystrokes with a Laser
  5. Hacking Your Computer Monitor
  6. Guessing Smart Phone PINs by Monitoring the Accelerometer
  7. more to be discovered!

Using the clipboard for copy-pasting is not really an option either, because the clipboard is effectively public storage. In short, using passwords, even ‘complicated’ ones, is really a bad idea.

The second method (a manually generated public/private keypair) is much more secure:

  1. The private key (the secret) on the client is never transmitted. (I know, public-key cryptography sounds like black magic, but it isn’t.)
  2. You can still use a “passphrase” to additionally encrypt the private key. This protects the key in case it is stolen. The passphrase doesn’t have to be stored anywhere; it can simply be remembered like a conventional password.
  3. “Mathematics can’t be bribed”: If every hydrogen atom in the universe were a CPU able to enumerate 1000 RSA moduli per second, it would still take approx. 6 × 10^211 years to enumerate all moduli to brute-force a 1024-bit RSA key.[^1]
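
A rough back-of-the-envelope sketch of where a figure of this magnitude comes from (the count of ~10^80 hydrogen atoms and the prime-counting approximation are standard estimates, not taken from the source above):

    \pi(2^{512}) \approx \frac{2^{512}}{\ln 2^{512}} \approx 2^{503.5}
    \quad\Rightarrow\quad
    \text{moduli} \approx \tfrac{1}{2}\left(2^{503.5}\right)^2 = 2^{1006} \approx 10^{303}

    \frac{10^{303}\ \text{moduli}}{\underbrace{10^{80}}_{\text{CPUs}} \cdot \underbrace{10^{3}}_{\text{moduli/s}} \cdot \underbrace{3 \times 10^{7}}_{\text{s/year}}} \approx 10^{212}\ \text{years}

which lands within a factor of a few of the 6 × 10^211 years cited above.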

There is no earthly agency which can “hack” strong and properly implemented cryptography, even if they claim that they can. There is a theoretical lower limit on the energy consumption of computation: see Landauer’s principle for classical computing and the Margolus–Levitin theorem for quantum computing.

Nothing in the world of cryptography is ‘cut and dried’, but there are certain best practices we as administrators can adopt. Using SSH keys properly is certainly one of these practices.

[^1]: https://crypto.stackexchange.com/questions/3043/how-much-computing-resource-is-required-to-brute-force-rsa


Encrypt backups at an untrusted remote location

In a previous blog post I argued that a good backup solution includes backups at different geographical locations to compensate for local disasters. If you don’t fully trust the remote location, the only solution is to keep an encrypted backup there.

In this tutorial we’re going to set up an encrypted, mountable backup image which allows us to use regular file system operations like rsync.

First, on any kind of permanent medium available, create a file large enough to hold the encrypted file system. You can later grow the file system (with dd and resize2fs) if needed. We will use dd to create this file and fill it with zeros. This may take a couple of minutes, depending on the write speed of the hard drive. Here, we create a 500 GB file:
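
    # The path is an example; this writes roughly 500 GB of zeros.
    dd if=/dev/zero of=/backup/backup.img bs=1M count=500000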

We will use LUKS (via cryptsetup) to set up a virtual mapping device node for us.

First, we generate a key/secret which will be used to derive the longer symmetric encryption key that in turn protects the actual data. We tap into the entropy pool of the Linux kernel and convert 32 bytes of random data into Base64 format (this may take a long time; consider installing haveged as an additional entropy source):
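
    # Read 32 bytes from the kernel entropy pool and encode them as Base64:
    head -c 32 /dev/random | base64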

Store the Base64-encoded key in a secure location and make backup copies of it! If this key/secret is lost, you will lose the backup. You have been warned!

Next, we will write the LUKS header into the backup image:
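
    # BASE64KEY is a placeholder for the key generated above;
    # -q skips the interactive confirmation.
    echo "BASE64KEY" | base64 -d | cryptsetup -q luksFormat /backup/backup.img --key-file=-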

Next, we “open” the encrypted drive with the label “backup_crypt”:
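
    # Again, BASE64KEY is a placeholder for the key generated above.
    echo "BASE64KEY" | base64 -d | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-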

This will create a device node /dev/mapper/backup_crypt which can be mounted like any other hard drive. Next, create an Ext4 file system on this raw device (“formatting”):
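
    mkfs.ext4 /dev/mapper/backup_crypt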

Now, the formatted device can be mounted like any other file system:
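
    # The mount point is an example:
    mkdir -p /mnt/backup
    mount /dev/mapper/backup_crypt /mnt/backup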

You can inspect the mount status by typing mount. If data is written to this mount point, it will be transparently encrypted to the underlying physical device.

If you are done writing data to it, you can unmount it as follows:
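
    umount /mnt/backup
    cryptsetup luksClose backup_crypt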

To re-mount it:
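
    echo "BASE64KEY" | base64 -d | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-
    mount /dev/mapper/backup_crypt /mnt/backup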

Note that we always specify the Base64-encoded key on the command line and pipe it into cryptsetup. This is better than creating a key file somewhere on the hard drive, because the key then only ever resides in RAM. If the machine is powered off, the decrypted mount point is lost and only the encrypted image remains.

If you are really security-conscious, you need to read the manual of cryptsetup to optimize the parameters. You may want to use a key/secret longer than the 32 bytes used here.

Before data loss: How to make correct backups

Why should you regularly make backups? Because if you don’t, then this mistake will bite you, sooner or later. Why? Because of Murphy’s Law:

Anything that can go wrong, will go wrong.

And a variation of it, Finagle’s law, even says:

Anything that can go wrong, will—at the worst possible moment.

So, let’s prepare right now and look at ways to back up data correctly.

RAID data mirroring is not enough

Realtime data mirroring (no matter whether it is software or hardware RAID) is good, but not enough. What if your location is hit by lightning, fire or water? What if your entire system gets stolen? Then RAID would have been of no use at all.

Threats to local backups on external media

Say you have an external USB hard drive for your backups. This is good, but as long as it is connected to your computer, it may still be destroyed by lightning.

In addition, if you leave your external USB hard drive mounted in your host OS, and if you make a mistake as an administrator, or have faulty software or malware, you may fully erase your main hard drive and the backup at the same time. This is not too unlikely!

It happened to me once. A simple mistyped rm -rf ./ as root user somewhere deep in the file system did exactly that (I had accidentally typed a space between the dot and the slash). Yes, I erased my main hard drive and the backup (mounted under /mnt) at the same time. The data loss was disastrous.

Independent of the above, local backups are still susceptible to fire, water, or theft.

The dangers of deleted or changed files

rsync is especially good if you transfer the data to your backup location via public networks, because it only transfers changes. It also supports the --delete flag, which deletes remote files when they are no longer present locally. This is generally a good idea if you want your backup to be an exact copy; otherwise your backup becomes messy by accumulating many deleted files, which will make restoration not much fun.
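
For example (user, host and paths are illustrative):

    # Mirror a local directory to the backup host; --delete removes
    # remote files that no longer exist locally:
    rsync -av --delete /home/user/data/ backupuser@backupserver:/backups/data/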

But the --delete flag is also a danger. Say you delete an important file locally. Two days later you discover this fact, and decide to restore it from your backup. But guess what, it will be gone there too if you have synced in the meantime.

This problem also exists for changed files. The only solution is to keep rolling backups (backups of your backups) at regular or increasing intervals (weekly, monthly, yearly). This will multiply the storage requirements, but you really cannot get around it.

Restoration is as important as backing up

Let’s say you have 10 perfectly made backups. But if you can’t access them any more, or not quickly enough (e.g. due to low bandwidth), they will be useless for your purposes. You need to put as much thought and effort into an effective restoration method as you put into the backups in the first place.

What works?

In general, a good backup solution depends on the specific circumstances and needs. Backups can never be perfect (100.0% reliable), there will always be a small but nevertheless existing possibility of total data loss. But you can make that possibility very, very small. As a rough guideline, the following principles seem to minimize the risk:

  • You have more than one backup.
  • You have backups of your backups (“rolling backups”).
  • You do not leave local backup media connected or mounted.
  • Your backups are at geographically different locations to compensate for local disasters.
  • If your backup is at a remote location, you fully trust the location, or use proper encryption.
  • Restoration is effective.
  • Backup and restoration are automated and tested.
  • After each backup cycle, the backups are verified. If there was a failure, the administrator is notified.

It is your responsibility!

If you should lose data, don’t blame it on ‘evil’ external circumstances, because:

Never attribute to malice that which is adequately explained by stupidity.

What if all data are still lost? Well, in this case I only can say:

Every misfortune is a blessing in disguise.

Start working on your backup solution now!

Simple test if TCP port is open

There are other, more complicated tools that achieve the same thing (like nmap, whose manpage makes your head spin), but here is a very simple solution using netcat:
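
    # Is TCP port 443 open on example.com? (Host and port are examples.)
    # -z: probe without sending data; -w 5: give up after 5 seconds.
    nc -z -w 5 example.com 443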

To programmatically evaluate the result, use the standard Bash $? variable. It will be set to 0 if the port was open, or 1 if the port was closed.
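
For example:

    nc -z -w 5 example.com 443
    if [ $? -eq 0 ]; then
        echo "port open"
    else
        echo "port closed"
    fi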

How to set up password-less SSH login for a Dropbear client

Dropbear is a replacement for standard OpenSSH for environments with low memory and processor resources. With OpenSSH, you can use the well-known ssh-keygen command to create a private/public keypair for the client. With Dropbear, it is a bit different. Here are the commands on the client:
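
    # Generate an RSA keypair for the Dropbear client:
    mkdir -p ~/.ssh
    dropbearkey -t rsa -f ~/.ssh/id_dropbear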

The private key will be in ~/.ssh/id_dropbear. The public key is output to stdout.

On a Dropbear as well as on an OpenSSH server, you can put the client’s public key as usual into the authorized_keys file to allow the client a password-less login:
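
    # On the server; the quoted string is a placeholder for the public
    # key printed by dropbearkey above:
    echo "ssh-rsa AAAA...key-material... user@client" >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

From the client, dbclient (Dropbear’s SSH client) can then log in with the private key:

    dbclient -i ~/.ssh/id_dropbear user@server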

Let me know in comments if you know of a better method!

How to install yubikey-manager on Debian

yubikey-manager is a Python application which requires a few dependencies so that it can be installed from the Python package repositories, because it is not yet in the official Debian package repository. Here is how:
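
    # Build dependencies for the PC/SC bindings (the exact list may
    # vary by Debian release):
    sudo apt-get install python3-pip python3-setuptools swig libpcsclite-dev pcscd
    # Install yubikey-manager from PyPI:
    sudo pip3 install yubikey-manager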

Here is the main commandline utility:
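
    # Show available subcommands:
    ykman --help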