“Open Source” does not imply “less secure”

Sometimes programmers hesitate to make their software open source because they think that revelation of the source code would allow attackers to ‘hack it’.

Certainly there are specific cases where this is true, but not as a general rule.

In my opinion, if inspection of the source code allows an attacker to ‘hack it’, then the programmer has done it wrong. Security primarily comes from writing secure algorithms, independent of their open source nature.

OpenSSL is a case in point: It is open source, but nevertheless it powers HTTPS all over the internet. “But,” you say, “it is only secure because its code is kinda obscure.” Well, no: Cryptographically secure algorithms exhibit astonishing properties. For example, the One-time pad encryption technique is extremely simple and exhibits “Perfect secrecy”, which is defined by Wikipedia as follows:

One-time pads are “information-theoretically secure” in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon about the same time. His result was published in the Bell Labs Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.

Claude Shannon proved, using information theory considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because, given a truly random key which is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely.

Take for example the following simple implementation of the One-time pad (via XOR) in Ruby (which took me just a couple of minutes to write):
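A minimal sketch of such an implementation could look like this (key handling is simplified; the key file path is just an example, and the key must be truly random, at least as long as the message, and used only once):

```ruby
# One-time pad via XOR. The key must be truly random, at least as long
# as the message, shared over a secure channel, and never reused.
def otp(data, key)
  raise "key too short" if key.bytesize < data.bytesize
  data.bytes.zip(key.bytes).map { |d, k| d ^ k }.pack("C*")
end

key        = File.binread("key.bin")      # pre-shared random key (example path)
ciphertext = otp("attack at dawn", key)   # encrypt
plaintext  = otp(ciphertext, key)         # the same operation decrypts
```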

This code is open source, but it nevertheless exhibits the property of perfect (i.e. 100%) security “even against adversaries with infinite computational power”, provided that the key is never transmitted over insecure channels.

Sure, the One-time pad is not practical, and one could probably exploit weaknesses in Ruby or the underlying operating system. But that is not the point. The point is that, given proper implementation of software, it can be made open source without compromising its security.

To contrast this with an example of (bad) source code which should not be made public because it only creates a false sense of security:
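For illustration, a toy “obfuscator” along these lines (again in Ruby; the function and variable names are made up):

```ruby
# BAD example: the hard-coded transformation below is the only "secret".
def obscure(message)
  msgraw = message.bytes
  msgraw.each_index.map { |n| (msgraw[n] + 7) ^ 99 }.pack("C*")
end
```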

Here, ((msgraw[n] + 7) ^ 99) is equivalent to a hard-coded secret. Sure, the obscured message, when transmitted over a public network, may look random. But the algorithm could easily be reverse-engineered by cryptanalysis. Also, if the source code were revealed, it would be trivial to decode past and future messages.

Conclusion

“Open Source” does not imply “insecure”. Security comes from secure — not secret — algorithms (which of course includes freedom from bugs). What counts as “secure” is defined mathematically, and “mathematics (and by extension, physics) can’t be bribed.” It is not easy to come up with such algorithms, but it is possible, and there are many successful examples.

Needless to say, not every little piece of code should be made open source — ideally programmers will only publish generally useful and readable software which they intend to maintain, but that is a subject for another blog post.

Reasonably secure unattended SSH logins from untrusted machines

There are certain cases where you want to operate a not completely trusted networked machine, and write scripts to automate some task which involves an unattended SSH login to a server.

By “not completely trusted machine” I mean a computer which is reasonably secured against unauthorized logins, but is physically unattended, which means that unknown persons can have physical access to it.

An established SSH connection has a number of security implications. As I have argued in a previous blog post, “Unprivileged Unix Users vs. Untrusted Unix Users”, having access to a shell on a server is problematic if the user is untrusted (as is always the case when the user originates from an untrusted machine), even if he is unprivileged on the server. In that blog post I presented a method to confine an SSH user to a jail directory (via a PAM module using the Linux kernel’s chroot system call) to prevent reading of world-readable files elsewhere on the server. However, such a jail directory still doesn’t prevent SSH port forwarding (which I illustrated in this blog post).

In short, any kind of SSH access allows access to at least all of the server’s open TCP ports, even if they are behind its firewall.

Does this mean that giving any kind of SSH access to an untrusted machine should not be done in principle? It does seem so, but there are ways to make the attack surface smaller and make the setup reasonably secure.

Remember that SSH uses some form of authentication: either a plain password, or a public/private keypair. In both cases there are secrets which should not be stored on the untrusted machine in a way that allows them to be revealed.

So the question becomes: How to supply the secrets to SSH without making it too easy to reveal them?

A private SSH key is permanent and must be stored on a permanent medium of the untrusted machine. To mitigate the possibility that the medium (e.g. the hard drive) is extracted and the private key revealed, the private key should be encrypted with a long passphrase. An SSH passphrase needn’t be typed manually every time an SSH connection is made: ssh connects to ssh-agent (if running) to use private keys which have previously been decrypted via a passphrase. ssh-agent holds this information in RAM.

I said “RAM”: For the solution to our present problem, this is as good as it gets. With the method presented below, an attacker would have to read out the RAM of the running machine with hardware probes, which requires extremely specialized skills. In this blog post, this is the meaning of the term “reasonably secure”.

On desktop machines, ssh-agent is usually started together with the graphical user interface. Keys and their passphrases can be “added” to it with the command ssh-add. The actual ssh program connects to ssh-agent if the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are present. This means that any kind of shell script (even an unattended one called from cron) can benefit from this: no passphrase will be asked for if the corresponding key has already been decrypted in memory. The main advantage is that this has to be done only once after a reboot of the machine (because a reboot clears the RAM).

On a headless client without a graphical interface, ssh-agent may not even be started automatically, so we have to start it in a custom way. There is an excellent program called keychain which makes this very easy. Our method will look like this:

  1. The machine is rebooted.
  2. An authorized administrator logs into the machine and uses the keychain command to enter the passphrase which is now stored in RAM by ssh-agent.
  3. The administrator now can log out. The authentication data will remain in the RAM and will be available to unattended shell scripts.
  4. Every login to the machine will clear the authentication information. This ensures that even a successful login by an attacker renders the decrypted key in the agent unusable. This implies a minor inconvenience for the administrator: he has to enter the passphrase at every login, too.

Keychain is available in the repositories of all major distros:
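For example, on Debian or Ubuntu:

```bash
sudo apt-get install keychain
```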

Add the following line to either ~/.bashrc or to the system-wide /etc/bash.bashrc:
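It will look something like this (assuming the private key lives at ~/.ssh/id_rsa; adjust the path to your key):

```bash
eval "$(keychain --quiet --clear --eval ~/.ssh/id_rsa)"
```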

This line will be executed at each login to the machine. What does this command do?

  1. keychain will read the private key from the specified path.
  2. keychain will prompt for the passphrase belonging to this key (if there is one).
  3. keychain will look for a running instance of ssh-agent. If there is none, it will start it. If there is one, it will re-use it.
  4. Due to the --clear switch, keychain will clear all keys from ssh-agent. This renders the private key useless even if an attacker manages to successfully log in.
  5. keychain adds the private key plus entered passphrase to ssh-agent which stores it in the RAM.
  6. keychain outputs a short shell script (to stdout) which exports two environment variables (mentioned above) which point to the running instance of ssh-agent for consumption by ssh.
  7. The eval command executes the shell script from keychain which does nothing more but set the two environment variables.

Environment variables are not fully global; they always belong to a running process. Thus, every unattended script which uses ssh needs to set these two environment variables itself by evaluating the output of keychain again, for example in a Bash script:
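The key path and remote host below are placeholders:

```bash
#!/bin/bash
# Pick up SSH_AUTH_SOCK and SSH_AGENT_PID from the already-running ssh-agent.
eval "$(keychain --quiet --eval ~/.ssh/id_rsa)"

# Example of an unattended task using the agent's key:
ssh backupuser@server.example.org 'run-some-remote-task'
```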

It makes sense to gracefully catch SSH connection problems in your scripts. If you don’t do that, a script may hang indefinitely at a passphrase prompt if the key has not been added properly. To avoid this, do a ‘preflight’ ssh connection which simply returns an error instead of prompting:
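Something along these lines should work (the remote host is a placeholder):

```bash
# BatchMode makes ssh fail instead of prompting for a passphrase or password.
if ! ssh -o BatchMode=yes -o ConnectTimeout=10 backupuser@server.example.org true; then
    echo "SSH preflight check failed, aborting" >&2
    exit 1
fi
```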

 

Conclusion

In everyday practice, security is never perfect. This method is just one way to protect — within reasonable limits — an SSH connection from an unattended/untrusted machine “in the field” to a protected server. As always when dealing with the question of ‘security’, any kind of solution needs to be carefully vetted before deployment in production!

Never type plain passwords for SSH authentication

It could be said that SSH (Secure Shell) is an administrator’s most important and most frequently used tool. SSH uses public-key cryptography to establish a secure communication channel. The public/private keypair is either

  1. generated automatically for the session, in which case a (typed or copy-pasted) plaintext password is transmitted over the encrypted channel to authenticate the user, or
  2. generated manually once, where the private key is permanently stored on the client and the public key is permanently stored on the server. This method authenticates the user at the same time, without submitting any password.

Even if it may have been acceptable in the 2000s, the first method (typing or copy-pasting a plaintext password) has really become insecure due to possible side-channel attacks belonging to the category of keystroke logging:

  1. Security Vulnerabilities in Wireless Keyboards
  2. Keystroke Recognition from Wi-Fi Distortion
  3. Snooping on Text by Listening to the Keyboard
  4. Sniffing Keyboard Keystrokes with a Laser
  5. Hacking Your Computer Monitor
  6. Guessing Smart Phone PINs by Monitoring the Accelerometer
  7. more to be discovered!

Using the clipboard for copy-pasting is not really an option either, because the clipboard is effectively public storage which any application can read. In short, using passwords, even ‘complicated’ ones, is really a bad idea.

The second method (a manually generated public/private keypair) is much more secure:

  1. The private key (the secret) on the client is never transmitted. (I know, public-key cryptography sounds like black magic, but it isn’t.)
  2. You still can use a “passphrase” to additionally encrypt the private key. This would protect the key in case it is stolen. This “passphrase” doesn’t have to be stored anywhere, it can be simply remembered like a conventional password.
  3. “Mathematics can’t be bribed”: If every hydrogen atom in the universe were a CPU able to enumerate 1000 RSA moduli per second, it would still take approx. 6 × 10^211 years to enumerate all moduli to bruteforce a 1024-bit RSA key.[^1]

There is no earthly agency which can “hack” strong and proper cryptography, even if they claim that they can. There is a theoretical lower limit on the energy consumption of computation: see Landauer’s principle for classical computing and the Margolus–Levitin theorem for quantum computing.

Nothing in the world of cryptography is ‘cut and dried’, but there are certain best practices we as administrators can adopt. Using SSH keys properly is certainly one of these practices.

[^1]: https://crypto.stackexchange.com/questions/3043/how-much-computing-resource-is-required-to-brute-force-rsa

 

 

Encrypt backups at an untrusted remote location

In a previous blog post I argued that a good backup solution includes backups at different geographical locations to compensate for local disasters. If you don’t fully trust the remote location, the only solution is to keep an encrypted backup there.

In this tutorial we’re going to set up an encrypted, mountable backup image which allows us to use regular file system operations like rsync.

First, on any kind of permanent medium available, create a large enough file which will hold the encrypted file system. You can later grow the file system (with dd and resize2fs) if needed. We will use dd to create this file and fill it with zeros. This may take a couple of minutes, depending on the write speed of the hard drive. Here, we create a 500 GB file:
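For example (the image path is just an example; adjust it to your backup medium):

```bash
# roughly 500 GB of zeros
dd if=/dev/zero of=/backup/backup.img bs=1M count=500000 status=progress
```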

We will use LUKS to set up a virtual mapping device node for us:
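If cryptsetup is not installed yet, it is available in the usual repositories; on Debian/Ubuntu, for example:

```bash
sudo apt-get install cryptsetup
```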

First, we generate a key/secret which will be used to unlock the longer symmetric encryption key (the LUKS master key) which in turn protects the actual data. We tap into the entropy pool of the Linux kernel and convert 32 bytes of random data into Base64 format (this may take a long time; consider installing haveged as an additional entropy source):
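For example:

```bash
head -c 32 /dev/random | base64
```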

Store the Base64-encoded key in a secure location and create backups! If this key/secret is lost, you will lose the backup. You have been warned!

Next, we will write the LUKS header into the backup image:
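One way to do it, piping the key in via stdin (replace YOUR_BASE64_KEY with the secret generated above, and adjust the image path):

```bash
printf '%s' 'YOUR_BASE64_KEY' | sudo cryptsetup luksFormat --batch-mode --key-file=- /backup/backup.img
```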

Next, we “open” the encrypted drive with the label “backup_crypt”:
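Again with the key piped in via stdin:

```bash
printf '%s' 'YOUR_BASE64_KEY' | sudo cryptsetup luksOpen --key-file=- /backup/backup.img backup_crypt
```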

This will create a device node /dev/mapper/backup_crypt which can be mounted like any other hard drive. Next, create an Ext4 file system on this raw device (“formatting”):
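For example:

```bash
sudo mkfs.ext4 /dev/mapper/backup_crypt
```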

Now, the formatted device can be mounted like any other file system:
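For example, using /mnt/backup_crypt as the mount point (any empty directory will do):

```bash
sudo mkdir -p /mnt/backup_crypt
sudo mount /dev/mapper/backup_crypt /mnt/backup_crypt
```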

You can inspect the mount status by typing mount. If data is written to this mount point, it will be transparently encrypted to the underlying physical device.

If you are done writing data to it, you can unmount it as follows:
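Something like this:

```bash
sudo umount /mnt/backup_crypt
sudo cryptsetup luksClose backup_crypt
```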

To re-mount it:
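The same placeholders as above apply:

```bash
printf '%s' 'YOUR_BASE64_KEY' | sudo cryptsetup luksOpen --key-file=- /backup/backup.img backup_crypt
sudo mount /dev/mapper/backup_crypt /mnt/backup_crypt
```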

Note that we always pipe the Base64-encoded key into cryptsetup rather than storing it in a file somewhere on the hard drive; this way it only resides in RAM. If the machine is powered off, the decrypted mount point is lost and only the encrypted image remains.

If you are really security-conscious, read the cryptsetup manual to optimize its parameters. You may also want to use a key/secret longer than the 32 bytes used here.

How to set up password-less SSH login for a Dropbear client

Dropbear is a replacement for standard OpenSSH for environments with low memory and processor resources. With OpenSSH, you can use the well-known ssh-keygen command to create a private/public keypair for the client. With Dropbear, it is a bit different. Here are the commands on the client:
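Something along these lines:

```bash
mkdir -p ~/.ssh
# Generate an RSA keypair with Dropbear's own tool; the public key is printed to stdout
dropbearkey -t rsa -f ~/.ssh/id_dropbear
```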

The private key will be in ~/.ssh/id_dropbear. The public key is output to stdout.

On a Dropbear as well as on an OpenSSH server, you can put the client’s public key as usual into the authorized_keys file to allow the client a password-less login:
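For example, on the server (the key line is a placeholder for the one printed by dropbearkey):

```bash
echo 'ssh-rsa AAAA...rest-of-public-key... user@client' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```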

Let me know in comments if you know of a better method!

How to install yubikey-manager on Debian

yubikey-manager is a Python application which is not yet in the official Debian package repository, so it has to be installed from the Python Package Index together with a few dependencies. Here is how:
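One way to do it (the exact dependency package names may vary between Debian releases):

```bash
# build/runtime dependencies, then the application itself from PyPI
sudo apt-get install python3-pip swig libpcsclite-dev pcscd
pip3 install --user yubikey-manager
```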

Here is the main commandline utility:
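Assuming ~/.local/bin (where pip puts user installs) is in your PATH:

```bash
ykman --help
```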

Hardening WordPress against hacking attempts

The WordPress Codex states:

Security in WordPress is taken very seriously

This may be the case, but in reality, you yourself have to take some additional measures so that you won’t have a false sense of security.

With the default settings of WordPress and PHP, the minute you host WordPress and give access to one non-security-conscious administrative user, your entire hosting environment should be considered compromised.

The general problem with WordPress and PHP is that rather than thinking about which few essential features to turn on (whitelisting), you have to think about dozens of insecure features to turn off (blacklisting).

This excellent article (“Common WordPress Malware Infections”) gives you an overview of what you’re up against when it comes to protecting WordPress from malware.

Below are a couple of suggestions that should be undertaken, starting with the most important ones.

Disable WordPress File Editing

WordPress comes with the PHP file editor enabled by default. One of the most important rules of server security is that you never, ever, allow users to execute arbitrary program code. This is just inviting disaster. All it takes is for the admin password to be stolen/sniffed/guessed to allow the WordPress PHP code to be injected with PHP malware. Then, if you haven’t taken other restricting measures in PHP.ini (see section below), PHP may now

  • Read all readable files on your entire server
    • Include /etc/passwd and expose the names of all user accounts publicly
    • Read database passwords from wp-config.php of all other WordPress installations and modify or even delete database records
    • Read source code of other web applications
    • etc.
  • Modify writable files
    • Inject more malware
    • etc.
  • Use PHP’s curl functions to make remote requests
    • Turns your server into part of a botnet

So, amongst the first things to do when hosting WordPress is to disable file editing capabilities:
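The relevant constant goes into wp-config.php:

```php
// in wp-config.php: remove the plugin/theme editor from the admin interface
define( 'DISALLOW_FILE_EDIT', true );
```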

But that measure assumes that WordPress plus third-party plugins are secure enough to enforce their own restrictions, which one cannot assume, so it is better to…

The “Big Stick”: Remove Write File Permissions

I’ll posit here something that I believe to be self-evident:

It is safer to make WordPress files read-only and thus disallow frequent WordPress (and third-party plugin) upgrades than it is to allow WordPress (and third-party plugins) to self-modify.

Until I learn that this postulate is incorrect, I’ll propose that you make all WordPress files (with the obvious exception of the uploads directory) owned by root, writable only by root, and interpreted by a non-root user. This leverages the security inherent in the Linux kernel:
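For example, assuming the site lives in /var/www/example.com and PHP runs as www-data (both are placeholders):

```bash
sudo chown -R root:root /var/www/example.com
sudo find /var/www/example.com -type d -exec chmod 755 {} \;
sudo find /var/www/example.com -type f -exec chmod 644 {} \;
# keep only the uploads directory writable for the web server user
sudo chown -R www-data:www-data /var/www/example.com/wp-content/uploads
```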

Note that you still can upgrade WordPress from time to time manually. You could even write a shell script for it.

Restrict serving of files

Disable direct access to wp-config.php, which contains very sensitive information that would be revealed should PHP ever fail to process the file. In Nginx:
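A location block like this does it:

```nginx
location = /wp-config.php {
    deny all;
}
```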

Disable PHP execution in the uploads directory. In Nginx:
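One way to do it:

```nginx
location ~* ^/wp-content/uploads/.*\.php$ {
    deny all;
}
```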

Restrict PHP

I’ll refer the reader to two excellent external articles which have already been written; please do implement the suggestions therein:

Hardening PHP from PHP.ini

25 PHP Security Best Practices

Host WordPress in a Virtualization Environment

In addition to all of the above, any kind of publicly exposed web application (not just WordPress) should really be hosted in an isolated environment. Docker seems promising for this purpose. I found the following great external tutorial (in two parts) about generating a LAMP Docker image:

WordPress Developer’s Intro To Docker

WordPress Developer’s Intro To Docker, Part Two

 

Hashing passwords: SHA-512 can be stronger than bcrypt (by doing more rounds)

‘Hashed’ brown potatoes. Hashing is important on more than just one level (picture by Jamie Davids, CC-BY-2.0)

On a server, user passwords are usually stored in a cryptographically secure way, by running the plain passwords through a one-way hashing function and storing its output instead. A good hash function is irreversible. Leaving dictionary attacks aside, and assuming that salts are used, the only way to find the original input/password which generated a stolen hash is to simply try all possible inputs and compare the outputs with the stored hash. This is called bruteforcing.

Speed Hashing

With the advent of GPU, FPGA and ASIC computing, it is possible to make bruteforcing very fast – this is what Bitcoin mining is all about, and there’s even a computer hardware industry revolving around it. Therefore, it is time to ask yourself if your hashes are strong enough for our modern times.

bcrypt was designed to be expensive to implement in specialized hardware, and due to its RAM access requirements it doesn’t lend itself well to parallelized GPU implementations. However, we have to assume that computer hardware will continue to become faster and more optimized, therefore we should not rely on the security margin that bcrypt’s more difficult implementation in hardware and GPUs affords us for the time being.

For our real security margin, we should therefore only look at the so-called “variable cost factor” that bcrypt and other hashing functions support. With this feature, a hashing function can be made arbitrarily slow and therefore costly, which helps deter brute-force attacks upon the hash.

This article will investigate how you can find out if bcrypt is available to you, and if not, show how to increase the variable cost factor of the more widely available SHA-512 hashing algorithm.

 

Do you have bcrypt?

If you build your own application (web, desktop, etc.) you can easily find and use bcrypt libraries (e.g. Ruby, Python, C, etc.).

However, some important 3rd-party system programs (Exim, Dovecot, PAM, etc.) directly use glibc’s crypt() function to authenticate a user’s password. And glibc’s crypt(), as shipped by most Linux distributions (unlike the libc of the BSDs), does NOT implement bcrypt! The reason for this is explained here.

If you are not sure whether your Linux distribution’s glibc supports bcrypt (man crypt won’t tell you much), simply try it by running the following C program. The function crypt() takes two arguments:
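Its prototype (see man 3 crypt) is:

```c
char *crypt(const char *key, const char *salt);
```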

For testing, we’ll first generate an MD5 hash because that is most certainly available on your platform. Add the following to a file called crypttest.c:
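A minimal version could look like this (on older systems you may need to include <unistd.h> with _XOPEN_SOURCE defined instead of <crypt.h>):

```c
/* crypttest.c */
#include <crypt.h>
#include <stdio.h>

int main(void)
{
    /* "xyz" is the test password; "$1$" in the salt selects MD5 */
    char *hash = crypt("xyz", "$1$salthere");
    printf("%s\n", hash ? hash : "(crypt failed)");
    return 0;
}
```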

Suppose xyz is our very short input/password. The $1$ option in the salt argument means: generate an MD5 hash. Type man crypt for an explanation.

Compile the C program:
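Linking against libcrypt is required:

```bash
gcc crypttest.c -o crypttest -lcrypt
```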

Run it:
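```bash
./crypttest
```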

The output:

Next, in the C program change $1$ to $6$ which means SHA-512 hash. Recompile, run, and it will output:

Next, change $6$ to $2a$ which means bcrypt. Recompile, run, and the output on my distribution (Debian) is:

So, on my Debian system, I’m out of luck. Am I going to change my Linux distribution just for the sake of bcrypt? Not at all. I’ll simply use more rounds of SHA-512 to make hash bruteforcing arbitrarily costly. It turns out that SHA-512 (bcrypt too, for that matter) supports an arbitrary number of rounds.

More rounds of SHA-512

glibc’s crypt() allows specification of the number of rounds of the main loop in the algorithm. This feature is strangely absent from the man crypt documentation, but is documented here. glibc’s default number of rounds for a SHA-512 hash is 5000. You can specify the number of rounds as an option in the salt argument. We’ll start with 100000 rounds.

In the C program, pass the string $6$rounds=100000$salthere as the salt argument. Recompile and measure the execution time:
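For example:

```bash
gcc crypttest.c -o crypttest -lcrypt
time ./crypttest   # wall-clock time gives a rough per-hash cost
```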

It took 48 milliseconds to generate this hash. Let’s increase the rounds by a factor of 10, to 1 million:

We see that generating one hash took 10 times longer, i.e. about half a second. The relationship appears to be linear.

Glibc allows setting the cost factor (rounds) from 1000 to 999,999,999.

The Proof lies in the Bruteforcing

Remember that above we have chosen the input/password xyz (length 3). If we only allow for the characters [a-z], we get 26³ possibilities, or 17576 different passwords to try. With 48 ms per hash (100000 rounds) we expect the bruteforcing to have an upper limit of about 17576 x 0.048 s = 843 seconds (approx. 14 minutes).

We will use a hash solver called hashcat to confirm the expected bruteforcing time. Hashcat can use GPU computing, but I’m missing proper OpenCL drivers, so my results come from slower CPU computing only. But that doesn’t matter: the crux of the matter is that the variable cost factor (rounds) directly determines the cracking time, and it has to be adapted to the computing hardware that can be assumed to do the cracking.

Now we’re going to crack the above SHA-512 hash (the password/input was xyz) which was calculated with 100000 rounds:
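With the full $6$rounds=100000$… hash saved to a file (the file name is an example), the invocation looks like this:

```bash
# -m 1800 selects sha512crypt ($6$); -a 3 is a mask/brute-force attack;
# ?l?l?l tries all three-character lowercase combinations
hashcat -m 1800 -a 3 hash.txt '?l?l?l'
```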

Note that it took about 14 minutes to find the plaintext password xyz from the hash, which confirms above estimation.

Now let’s try to crack a bcrypt hash of the same input xyz which I generated on this website:
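With the resulting bcrypt hash saved to a file (again, the file name is an example):

```bash
# -m 3200 selects bcrypt ($2*$)
hashcat -m 3200 -a 3 bcrypt_hash.txt '?l?l?l'
```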

Note that bcrypt’s cost factor in this example is 08 (see bcrypt’s documentation for more info on this). The brute-force cracking of the same password took only 3 minutes and 30 seconds. This is about four times faster than the SHA-512 example, even though bcrypt is frequently described as being slow. This variable cost factor is often overlooked by bloggers who too quickly argue for one or the other hash function.

 

Conclusion

bcrypt is suggested in many blog posts in favor of other hashing algorithms. I have shown that by specifying a variable cost factor (rounds) for the SHA-512 algorithm, it is possible to arbitrarily increase the cost of bruteforcing the hash. Neither SHA-512 nor bcrypt can therefore be said to be categorically faster or slower than the other.

The variable cost factor (rounds) should be chosen in such a way that even resourceful attackers will not be able to crack passwords in a reasonable time, while the load from legitimately authenticating users doesn’t consume too much of your server’s CPU resources.

When a hash is stolen, the salt may be stolen too, because they are usually stored together. Therefore, a salt won’t protect against too-short or dictionary-based passwords. The importance of choosing long and random passwords with lower-case, upper-case and special symbols can’t be emphasized enough. This is also true when no hashes are stolen and attackers simply try to authenticate directly with your application by trying every possible password. With random passwords, and assuming that the hash function implementations are indeed non-reversible, it is trivial to calculate the cost of brute-forcing their hashes.

100% HTTPS in the internet? Non-Profit makes it possible!

HTTPS on 100% of websites in the internet? This just has gotten a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) non-profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, it doesn’t require cumbersome registration, paperwork, set-up or payment. The certificates are fetched in an automated way through an API (the ACME protocol, Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

Dedicated to transparency, generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot. It is a regular command-line program written in Python, and its source code is available on GitHub.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example certs for 3 domains are fetched in one command with the -d switch):
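Assuming the webroot authentication method and an example webroot path, the call looks something like this:

```bash
./certbot-auto certonly --webroot \
  -w /var/www/example.com \
  -d example.com -d www.example.com -d blah.example.com
```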

With the -w flag you tell the script where to put temporary static files (a sub-folder .well-known will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is identical to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.

Eventually, the (already chained, which is nice!) certificate and private key are copied into /etc/letsencrypt/live/example.com/:
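```
/etc/letsencrypt/live/example.com/fullchain.pem   # certificate plus intermediate chain
/etc/letsencrypt/live/example.com/privkey.pem     # private key
```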

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates that you have loaded to your machine is as easy as …
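```bash
./certbot-auto renew
```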

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard domain certificates. For these, you still have to pay money to some other CAs which support them. But in the code example shown above, you would generate only one certificate covering the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. But except for SAAS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that for commercial certificate authorities, one has to pay 1000, 100 or even 30 bucks per certificate per year. Where’s the work invested into such a certificate that is worth so much? The generation of a certificate is automated, and is done in a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!

Icon made by Freepik from www.flaticon.com 

File permissions for successful SSH login via authorized_keys

Setting correct file permissions for ssh is a bit tricky

If you want to ssh into your server without being repeatedly prompted for the password, you can copy your public ssh key into a file called authorized_keys in the .ssh subdirectory of the home directory of the remote server account. However, this works only if the permissions for this file are set correctly.

First, if you have not done so already, generate the public key for your local user:
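For example:

```bash
# accept the default file location (~/.ssh/id_rsa)
ssh-keygen -t rsa
```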

 

This will create a file ~/.ssh/id_rsa.pub

Append the only line in this file to the file ~/.ssh/authorized_keys of the remote user account. Create the directory and file if they do not exist.
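One way to do it from the local machine (user@remotehost is a placeholder):

```bash
cat ~/.ssh/id_rsa.pub | ssh user@remotehost 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```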

Now try to ssh into your remote account. If ssh is still asking for the remote user’s password, check the permissions of the following files and directories:

  • The permissions of the home directory of the remote user must be 755
  • The permissions of the remote .ssh directory must be 700
  • The permissions of the remote authorized_keys file must be 600

… of course all of those must be owned by the remote user, and not by root.
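To set them in one go, run as the remote user:

```bash
chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```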

Now, you should be able to ssh into the remote account without being asked for the password!