Hardening WordPress against hacking attempts

The WordPress Codex states:

Security in WordPress is taken very seriously

This may be the case, but in reality you still have to take some additional measures yourself, so that you won’t end up with a false sense of security.

With the default settings of WordPress and PHP, the minute you host WordPress and give access to one non-security-conscious administrative user, your entire hosting environment should be considered compromised.

The general problem with WordPress and PHP is that rather than thinking about which few essential features to turn on (whitelisting), you have to think about dozens of insecure features to turn off (blacklisting).

This excellent article (“Common WordPress Malware Infections”) gives you an overview of what you’re up against when it comes to protecting WordPress from malware.

Below are a couple of suggestions that should be undertaken, starting with the most important ones.

Disable WordPress File Editing

WordPress comes with the PHP file editor enabled by default. One of the most important rules of server security is that you never, ever, allow users to execute arbitrary program code. This is just inviting disaster. All it takes is for the admin password to be stolen/sniffed/guessed to allow the WordPress PHP code to be injected with PHP malware. Then, if you haven’t taken other restricting measures in php.ini (see section below), PHP may now

  • Read all readable files on your entire server
    • Include /etc/passwd and expose the names of all user accounts publicly
    • Read database passwords from wp-config.php of all other WordPress installations and modify or even delete database records
    • Read source code of other web applications
    • etc.
  • Modify writable files
    • Inject more malware
    • etc.
  • Use PHP’s curl functions to make remote requests
    • Turns your server into part of a botnet

So amongst the first things to do when hosting WordPress is to disable file editing capabilities:
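
A minimal sketch for wp-config.php (DISALLOW_FILE_EDIT is the standard WordPress constant that removes the built-in theme and plugin editors):

    /* wp-config.php: disable the built-in file editor */
    define( 'DISALLOW_FILE_EDIT', true );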

But that measure relies on WordPress plus third-party Plugins to enforce their own security restrictions, which one cannot assume they will do, so it is better to…

The “Big Stick”: Remove Write File Permissions

I’ll posit here something that I believe to be self-evident:

It is safer to make WordPress files read-only and thus disallow frequent WordPress (and third-party Plugin) upgrades than it is to allow WordPress (and third-party Plugins) to self-modify.

Until I learn that this postulate is incorrect, I’ll propose that you make all WordPress files (with the obvious exception of the uploads directory) owned by root, writable only by root, and interpreted by a non-root user. This will leverage the security inherent in the Linux kernel:
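
A sketch of the idea, assuming the web root is /var/www/example.com and PHP runs as www-data (adjust both to your setup):

    # everything owned by root and read-only for the PHP user
    chown -R root:root /var/www/example.com
    find /var/www/example.com -type d -exec chmod 755 {} \;
    find /var/www/example.com -type f -exec chmod 644 {} \;
    # only the uploads directory stays writable for the web server user
    chown -R www-data:www-data /var/www/example.com/wp-content/uploads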

Note that you can still upgrade WordPress manually from time to time. You could even write a shell script for it.

Restrict serving of files

Disable direct access to wp-config.php, which contains very sensitive information that would be revealed should PHP ever fail to be processed correctly. In Nginx:
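
For example (a minimal location block inside your server block):

    location = /wp-config.php {
        deny all;
    }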

Disable PHP execution in the uploads directory. In Nginx:
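
For example:

    location ~* ^/wp-content/uploads/.*\.php$ {
        deny all;
    }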

Restrict PHP

I’ll refer the reader to excellent external articles that have already been written – please do implement the suggestions therein:

Hardening PHP from PHP.ini

25 PHP Security Best Practices
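
A few php.ini directives typically recommended in such guides (illustrative values; the open_basedir path is a placeholder for your own web root):

    expose_php = Off
    allow_url_fopen = Off
    allow_url_include = Off
    disable_functions = exec,passthru,shell_exec,system,proc_open,popen
    open_basedir = /var/www/example.com:/tmp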

Host WordPress in a Virtualization Environment

In addition to all of the above, any kind of publicly exposed web application (not just WordPress) should really be hosted in an isolated environment. Docker seems promising for this purpose. I found the following great external tutorial about generating a LAMP Docker image:

WordPress Developer’s Intro To Docker

WordPress Developer’s Intro To Docker, Part Two

 

100% HTTPS in the internet? Non-Profit makes it possible!

HTTPS on 100% of websites in the internet? This just has gotten a lot easier! Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a Section 501(c)(3) Non-Profit entity dedicated to reducing financial, technological, and educational barriers to secure communication over the Internet.

Let’s Encrypt offers free-of-cost certificates that can be used for HTTPS websites, even when these websites are run for commercial purposes. Unlike traditional CAs, they don’t require cumbersome registration, paperwork, set-up and payment. The certificates are fetched in an automated way through an API (the ACME Protocol — Automatic Certificate Management Environment), which includes steps to prove that you have control over a domain.

In the spirit of transparency, all generated certificates are registered and submitted to Certificate Transparency logs. Here is the generous legal Subscriber Agreement.

Automated API? This sounds too complicated! It is actually not. There are a number of API libraries and clients available that do the work for you. One of them is Certbot. It is a regular command-line program written in Python, and the source code is available on GitHub.

After downloading the certbot-auto script (see their documentation), fetching certificates consists of just one command line (in this example certs for 3 domains are fetched in one command with the -d switch):
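
A sketch of the invocation (the web root path and the domains are placeholders):

    ./certbot-auto certonly --webroot -w /var/www/example.com \
        -d example.com -d www.example.com -d blah.example.com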

With the -w  flag you tell the script where to put temporary static files (a sub-folder .well-known  will be created) that, during the API control flow, serve as proof to the CA’s server that you have control over the domain. This is identical to Google’s method of verifying a domain for Google Analytics or Google Webmaster Tools by hosting a static text file.

Eventually, the (already chained, which is nice!) certificate and private key are copied into /etc/letsencrypt/live/example.com/ :
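
Namely (the standard certbot file names):

    /etc/letsencrypt/live/example.com/fullchain.pem    # certificate incl. intermediate chain
    /etc/letsencrypt/live/example.com/privkey.pem      # private key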

Then it is only a matter of pointing your web server (Nginx, Apache, etc.) to these two files, and that’s trivial.
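
In Nginx, for example, it boils down to two directives in the server block:

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;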

Let’s Encrypt certificates are valid for 90 days. The automatic renewal of ALL certificates that you have loaded to your machine is as easy as …
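
    # the standard renew command; only certificates close to expiry are renewed
    ./certbot-auto renew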

… which they suggest should be put into a Cron job, run twice daily. It will renew the certificates just in time. No longer do you have to set a reminder in your calendar to renew a certificate, and then copy-paste it manually!
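
A possible crontab entry (illustrative; the path to certbot-auto depends on where you downloaded it):

    # /etc/crontab -- try a renewal twice a day, quietly
    17 6,18 * * *   root   /opt/certbot/certbot-auto renew --quiet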

A bit of a downside is that Let’s Encrypt unfortunately doesn’t support wildcard domain certificates. For these, you still have to pay money to some other CAs who support them. But in the code example shown above, you would generate only one certificate covering the domain example.com and its two subdomains www.example.com and blah.example.com. The two subdomains are listed in the Subject Alternative Name field of the certificate, which is as close to wildcard subdomains as it gets. But except for SAAS providers and other specialized businesses, not having wildcard certificates should not be too big of an issue, especially when one can automate the certificate setup.

On the upside, they even made sure that their certificates work down to Windows XP!

Today, I set up 3 sites with Let’s Encrypt (one of them had several subdomains), and it was a matter of a few minutes. It literally took me longer to configure proper redirects in Nginx (no fault of Nginx, I just keep forgetting how it’s done properly) than to fetch all the certificates. And it even gave me time to write this blog post!

Honestly, I never agreed with the fact that one has to pay commercial certificate authorities 1000, 100 or even 30 bucks per certificate per year. Where’s the work invested into such a certificate that is worth so much? The generation of a certificate is automated and is done in a fraction of a second on the CPU. Anyway, that now seems to be a thing of the past.

A big Thumbs-up and Thanks go to the Let’s Encrypt CA, the ISRG, and to Non-Profit enterprises in general! I believe that Non-Profits are the Magic Way of the Future!

Icon made by Freepik from www.flaticon.com 

Resume rsync transfers with the --partial switch

Recently I wanted to rsync a 16GB file to a remote server. The ETA was calculated as 23 hours and, as usually happens, the file transfer aborted after about 20 hours due to a changing dynamic IP address of the modem. I thought: “No problem, I’ll just re-run the rsync command and the file transfer will resume where it left off.” Wrong! Because by default, on the remote side, rsync creates a temporary file beginning with a dot, and once rsync is aborted during the file transfer, this temporary file is deleted without a trace! Which, in my case, meant that I had wasted 20 hours of bandwidth.

It turns out that one can tell rsync to keep temporary files by passing the argument --partial . I tested it, and the temporary file indeed is kept around even after the file transfer aborts prematurely. Then, when simply re-running the same rsync command, the file transfer is resumed.

In my opinion, rsync should adopt this behavior by default. Simple thing to fix, but definitely an argument that should be passed every time!

Added: Simply use -P! This implies --partial and you’ll also see a nice progress output for free!
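
For example (file name and host are placeholders):

    rsync -avP bigfile.iso user@remote.example.com:/backups/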

Unprivileged Unix Users vs. Untrusted Unix Users. How to harden your server security by confining shell users into a minimal jail

As a server administrator, I recently discovered a severe oversight of mine, one that was so big that I didn’t consciously see it for years.

What can Unprivileged Unix Users do on your server?

Any so-called “unprivileged Unix user” who has SSH access to a server (be it simply for the purpose of rsync’ing files) is not really “unprivileged” as the word may suggest. Due to the world-readable permissions of many system directories, set by default in many Linux distributions, such a user can read a large percentage of the directories and files existing on the server, many of which can and should be considered secret. For example, on my Debian system, the default permissions are:

/etc: world-readable including most configuration files, amongst them passwd which contains plain-text names of other users
/boot: world-readable including all files
/home: world-readable including all subdirectories
/mnt: world-readable
/src: world-readable
/srv: world-readable
/var: world-readable
etc.

Many questions are asked about how to lock a particular user into their home directory. User “zwets” on askubuntu.com explained that this is beside the point and even “silly”:

A user … is trusted: he has had to authenticate and runs without elevated privileges. Therefore file permissions suffice to keep him from changing files he does not own, and from reading things he must not see. World-readability is the default though, for good reason: users actually need most of the stuff that’s on the file system. To keep users out of a directory, explicitly make it inaccessible. […]

Users need access to commands and applications. These are in directories like /usr/bin, so unless you copy all commands they need from there to their home directories, users will need access to /bin and /usr/bin. But that’s only the start. Applications need libraries from /usr/lib and /lib, which in turn need access to system resources, which are in /dev, and to configuration files in /etc. This was just the read-only part. They’ll also want /tmp and often /var to write into. So, if you want to constrain a user within his home directory, you are going to have to copy a lot into it. In fact, pretty much an entire base file system — which you already have, located at /.

I agree with this assessment. Traditionally, on shared machines, users needed to have at least read access to many things. Thus, once you give someone even just “unprivileged” shell access, this user — and with him not only his technical knowledge, but also the security of his own machine, which might be subject to exploits — is still explicitly and ultimately trusted to see and handle all world-readable information confidentially.

The problem: Sometimes, being “unprivileged” is not enough. The expansion towards “untrusted” users.

As a server administrator you sometimes have to give shell access to a user or machine that is not ultimately trusted. This happens in the very common case where you transfer files via rsync (backups, anyone?) to/from machines that do not belong to you (e.g. a regular backup service for clients), or even machines which do belong to you but which are not physically guarded 24/7 against untrusted access. rsync however requires shell access, period. And if that shell access is granted, rsync can read out all world-readable information — and that holds even when you have put rbash (“restricted bash”) or rssh (“restricted ssh”) into place as a login shell. So now, in our scenario, we are facing a situation where someone who is ultimately not trusted can rsync all world-readable information from the server to anywhere he wants, simply by doing:
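
Something along these lines (the host name is a placeholder):

    rsync -avz untrusted-user@server.example.com:/ ./grabbed-world-readable-stuff/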

One may suggest simply hardening the file permissions for those untrusted users, and I agree that this is a good practice in any case. But is it practical? Hardening the file permissions of dozens of configuration files in /etc alone is not an easy task and is likely to break things. For one obvious example: Do I know, without investing a lot of research (including trial-and-error), which consequences chmod o-rwx /etc/passwd will have? Which programs for which users will it break? Or worse, will I even be able to reboot the system?

And what if you have a lot of trusted users working on your server, all having created many personal files, all relying on the world-readable nature as a way to share those files, and all assuming that world-readable does not literally mean ‘World readable’? Grasping the full extent of the user collaboration and migrating towards group-readable instead of world-readable file permissions will likely be a lot of work, and again, may break things.

In my opinion, for existing server machines, this kind of work is too expensive to be justified by the benefits.

So, no matter from which angle you look at this problem, having ultimately non-trusted users on the system is a problem that can only be satisfactorily solved by jailing them into some kind of chroot directory, and allowing only those tasks that are absolutely necessary for them (in our scenario, rsync only). Notwithstanding that, and to repeat, users who are not jailed must be considered ultimately trusted.

The solution: Low-level system utilities and a minimal jail

For the above reasons regarding untrusted users, ‘hardening’ shell access via rbash or even rssh is just a cosmetic measure that still doesn’t prevent world-readable files from being literally readable by the World (you have to assume that untrusted users will share data liberally). rssh has a built-in feature for chroot’ing, but it was originally written for RedHat, the documentation about it is vague, and it wouldn’t even accept a chroot environment created by debootstrap.

Luckily, there is a low-level solution, directly built into the Linux kernel and core packages. We will utilize the ability of PAM to chroot an SSH session on a per-user basis, and we will manually create a very minimal chroot jail for this purpose. We will jail two untrusted system users called “jailer” and “inmate” and re-use the same jail for both. Each user will be able to rsync files, but neither will be able to escape the jail, nor see the files of the other.

The following diagram shows the directory structure of the jail that we will create:
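
Roughly, it will look like this (a sketch; the exact lib subdirectories depend on your architecture and on what ldd reports later):

    /home/minjail/
        bin/ls
        usr/bin/rsync
        lib/               (shared libraries required by ls and rsync)
        home/
            jailer/
            inmate/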

The following commands are based on Debian and have been tested in Debian Wheezy.

First, create user accounts for two ultimately untrusted users, called “jailer” and “inmate” (note that the users are members of the host operating system, not the jail):
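
For example (the flags are one possible choice; you may prefer to set passwords):

    adduser --disabled-password --gecos "" jailer
    adduser --disabled-password --gecos "" inmate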

Their home directories will be /home/jailer and /home/inmate respectively. They need home directories so that you can set up SSH keys (via ~/.ssh/authorized_keys) for password-less login later.

Second, install the PAM module that allows chroot’ing an authenticated session:
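
    # Debian package providing pam_chroot.so
    apt-get install libpam-chroot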

The installed configuration file is /etc/security/chroot.conf . Into this configuration file, add
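
    # /etc/security/chroot.conf -- <user regex>  <chroot directory>
    jailer   /home/minjail
    inmate   /home/minjail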

These two lines mean that after completing the SSH authentication, the users jailer and inmate will be jailed into the directory /home/minjail of the host system. Note that both users will share the same jail.

Third, we have to enable the “chroot” PAM module for SSH sessions. For this, edit /etc/pam.d/sshd  and add to the end
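
    # /etc/pam.d/sshd -- last line
    session    required    pam_chroot.so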

After saving the file, the changes are immediately valid for the next initiated SSH session — thankfully, there is no need to restart any service.

Making a minimal jail for chroot

All that is missing now is a minimal jail, to be made in /home/minjail. We will do this manually, but it would be easy to make a script that does it for you. In our jail, and for our described scenario, we only need to provide rsync to the untrusted users. And just for experimentation, we will also add the ls command. However, the policy is: Only add into the jail what is absolutely necessary. The more binaries and libraries you make available, the higher the likelihood that bugs may be exploited to break out of the jail. We do the following as the root user:
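
A sketch of the initial skeleton (directory layout as in the overview above):

    mkdir -p /home/minjail
    cd /home/minjail
    mkdir -p bin usr/bin lib home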

Next, create the home directories for both users:
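
    mkdir -p /home/minjail/home/jailer
    mkdir -p /home/minjail/home/inmate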

Next, for each binary you want to make available, repeat the following steps (we do it here for rsync , but repeat the steps for ls ):

1. Find out where a particular program lives:
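
    which rsync
    # -> /usr/bin/rsync on Debian Wheezy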

2. Copy this program into the same location in the jail:
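
    cp /usr/bin/rsync /home/minjail/usr/bin/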

3. Find out which libraries are used by that binary
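
    ldd /usr/bin/rsync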

4. Copy these libraries into the corresponding locations inside of the jail (linux-gate.so.1 is a virtual file in the kernel and doesn’t have to be copied):
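
Illustrative only — copy exactly the libraries that ldd listed on your system (names and paths differ between architectures and releases):

    mkdir -p /home/minjail/lib/i386-linux-gnu
    cp /lib/i386-linux-gnu/libc.so.6     /home/minjail/lib/i386-linux-gnu/
    cp /lib/i386-linux-gnu/libpopt.so.0  /home/minjail/lib/i386-linux-gnu/
    cp /lib/ld-linux.so.2                /home/minjail/lib/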

After these 4 steps have been repeated for each program, finish the minimal jail with proper permissions and ownership:
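
    chown -R root:root /home/minjail
    chmod 751 /home/minjail/home
    chown jailer:jailer /home/minjail/home/jailer
    chown inmate:inmate /home/minjail/home/inmate
    chmod 750 /home/minjail/home/jailer /home/minjail/home/inmate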

The permission 751 of the ./home directory (drwxr-x--x root root) will allow any user to enter the directory, but forbid them to see which subdirectories it contains (information about other users is considered private). The permission 750 of the user directories (drwxr-x---) makes sure that only the corresponding user will be able to enter.

We are all done!

Test the jail from another machine

As stated above, our scenario is to allow untrusted users to rsync  files (e.g. as a backup solution). Let’s try it, in both directions!

Testing file upload via rsync

Both users “jailer” and “inmate” can rsync a file into their respective home directory inside the jail. See:
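
For example (the server name is a placeholder; inside the jail the home directories appear as /home/jailer and /home/inmate):

    rsync -av testfile jailer@server.example.com:/home/jailer/
    rsync -av testfile inmate@server.example.com:/home/inmate/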

To allow password-less transfers, set up a public key in /home/jailer/.ssh/authorized_keys  of the host operating system.

Testing file download via rsync

This is the real test. We will attempt to download as much as possible with rsync (we will try to get the root directory recursively):
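
For example:

    rsync -avz jailer@server.example.com:/ ./jail-root/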

Here you see that all world-readable files were transferred (the programs ls and rsync and their libraries), but nothing from the home directory inside of the jail.

However, rsync succeeds in grabbing the user’s own home directory. This is expected and desired behavior:
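
    rsync -avz jailer@server.example.com:/home/jailer/ ./jailer-home/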

 

Testing shell access

We have seen that we cannot do damage or reveal sensitive information with rsync. But as stated above, rsync cannot be had without shell access. So now, we’ll log in to a bash shell and see what damage we can do:
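
    ssh jailer@server.example.com "/bin/bash -i"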

Put "/bin/bash -i"  as an argument to use the host system’s bash in interactive mode, otherwise you would have to set up special device nodes for the terminal inside of the jail, which makes it more vulnerable for exploits.

We are now dumped to a primitive shell:

At this point, you can explore the jail. Try to do some damage (Careful! Make sure you’re not in your live host system, prefer an experimental virtual machine instead!!!) or try to read other users’ files. However, you will likely not succeed, since everything you have available is Bash’s builtin commands plus rsync and ls, all chroot’ed by a system call to the host’s kernel.

If any reader of this article should discover exploits of this method, please leave a comment.

Conclusion

I have argued that the term “unprivileged user” on a Unix-like operating system can be misunderstood, and that the term “untrusted user” should be introduced in certain use cases for clarity. I have presented an outline of an inexpensive method to accommodate untrusted users on a shared machine for various purposes with the help of the low-level Linux kernel system call chroot(), through a PAM module called pam_chroot.so, as well as a minimal, manually created jail. This method is still experimental and has not been entirely vetted by security specialists.

 

 

Exim and Spamassassin: Rewriting headers, adding SPAM and Score to Subject

This tutorial is a follow-up to my article Setting up Exim4 Mail Transfer Agent with Anti-Spam, Greylisting and Anti-Malware.

I finally got around to solving this problem: If an email has a certain spam score, above a certain threshold, Exim should rewrite the Subject header to contain the string  *** SPAM (x.x points) *** {original subject}

Spamassassin has a configuration option to rewrite a subject header in its configuration file /etc/spamassassin/local.cf  …
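
    # /etc/spamassassin/local.cf
    rewrite_header Subject ***SPAM***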

… but this is misleading, because it is used only when Spamassassin is used stand-alone. If used in combination with a MTA (Mail Transfer Agent) like Exim, the MTA is ultimately responsible for modifying emails. So, the solution lies in the proper configuration of Exim. To modify an already accepted message, the Exim documentation suggests a System Filter. You can set it up like this:

Enable the system filter in your main Exim configuration file. Add to it:
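
    system_filter = /etc/exim4/system.filter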

Then create the file  /etc/exim4/system.filter , set proper ownership and permission, then insert:
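
A sketch of such a filter (the exact original is not reproduced here; see the Exim filter specification for the precise condition and header syntax):

    # Exim filter
    if ("$h_X-Spam_score_int:" matches "^[0-9]")
       and ("$h_X-Spam_score_int:" is above 50)
    then
      headers add "Old-Subject: $h_subject:"
      headers remove "Subject"
      headers add "Subject: *** SPAM ($h_X-Spam_score: points) *** $h_old-subject:"
      headers remove "Old-Subject"
    endif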

This means: If the header  $header_X-Spam_score_int  is present (has been added by Exim in the acl_check_data  ACL section, see my previous tutorial), and is more than 50 (this is 5.0), rewrite the Subject header. The regular expression checks if the spam score is valid and not negative.

Note that in the acl_check_data section of the Exim config, you can deny a message above a certain spam score threshold. This means, in combination with this System Filter, you can do the following:

  • If spam score is above 10, reject/bounce email from the ACL.
  • If spam score is above 5, rewrite the Subject.

Do not Panic! Remote Server (Hetzner) not rebooting any more – A Solution


I went through this experience recently. First of all, don’t panic! I panicked, and because of this, I made a mistake: I didn’t wait long enough for it to come online. Had I waited up to 60 minutes, it would probably have come online (see reason below). The story:

I had broken packages on my Ubuntu 10.04 server and decided to fix them by
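
    # the usual apt repair/upgrade steps (illustrative -- the exact commands are not essential here)
    apt-get update
    apt-get -f install
    apt-get dist-upgrade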

While updating, I noticed that the package grub-pc  also was upgraded, apparently a new bootloader (or bootloader configuration) was installed. This made me feel uncomfortable, since I didn’t know if the server would reboot after this upgrade. So, because of the saying “The devil you know is better than the devil you don’t know” (and a desire to sleep peacefully at night!) I decided to reboot the server and see what would happen. To my big dismay, it did not come online. SSH connections failed with "Port 22: Connection refused" .

I panicked and asked the Hetzner Support (which is very responsive and supportive btw!) to install a LARA Remote Console so that I could see the text output of the booting screen. After some regular startup text, the screen became blank. I panicked even more and took immediate steps to move all data to a new server. It took me 6 hours to complete the most important parts, and another full week of restless work to finish it. We will see that it was not necessary.

Most regular servers at the Hetzner datacenter are running Software RAID. It seems that after a reboot (especially if you send a Hardware Reset) the OS needs some time to re-sync or check the file system. I am not sure what caused the delay in my case. Re-syncing the entire RAID array can take up to 1-2 hours, depending on your hardware and disc space.

So, wait at least 1 hour for it to come online, especially after a hardware reset! When you activate Hetzner’s Rescue system (which is very good btw!) it will stay active for a minimum of 1 hour, so your server will be down for 1 hour at least, in any event. So you are not losing much by waiting a bit longer.

Now, in my case, I assumed that Grub2 was broken. So I activated the Hetzner Rescue System, booted into it, and reinstalled Grub2. I have found the following method here and it worked for me. First you have to mount the regular RAID filesystem under /mnt :
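
A sketch (the md device names are placeholders; check /proc/mdstat and the fstab for your layout):

    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/bash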

At this point, you are in your regular root directory. To reinstall Grub2 the Debian Way, I did:
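
    # install the bootloader on both discs of the RAID array (device names assumed)
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub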

To make really sure, I reconfigured the package:
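
    dpkg-reconfigure grub-pc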

It will ask you where to install the bootloader. I selected:
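
(both physical discs of the software RAID; the device names will differ on other setups)

    /dev/sda
    /dev/sdb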

No errors were reported. I rebooted again and it did not come online immediately, for the reasons previously mentioned. I waited long enough (in my case, 15 minutes) and it did come online. So, rule number one is: Don’t panic!

Setting up Exim4 Mail Transfer Agent with Anti-Spam, Greylisting and Anti-Malware

Recently my Exim mail server was hopelessly spammed to such an extent that I wasn’t even able to clear the mail queue with rm ./*, nor even list the files, nor even count the files with ls. How I still managed to delete probably millions of mail files in one folder can be read in my post “Removing a million files in a directory“.

After this shock, I decided to integrate Anti-Spam and Anti-Malware systems into my Exim MTA, and within a few hours my server was back up and running, and this time, bouncing spam emails back to where they belong (hell). I did this in Ubuntu 10.04 LTS, but I know it also works in Debian 7 Wheezy, as I’ve recently used my own tutorial.

My own knowledge about Exim comes from the excellent Official Guide to Exim by Exim’s author, Philip Hazel. Email servers are very complex. You won’t be able to go very far with ‘duct tape’ administration, nor come up with a good fix in a case of emergency. With a misconfigured email server, it is very easy to get your IP address blacklisted across the entire internet. Therefore I would suggest you grab your own copy of the Official Guide to Exim from Amazon.

 

How to install Exim

I already was running Exim, but still, if you are interested, here is what to do. First, you can install Exim by running
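
    apt-get install exim4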

but by default, this installs the package exim4-daemon-light, which does not have advanced capabilities compiled in. Since we want to do spam filtering, we need to install exim4-daemon-heavy:
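
    apt-get install exim4-daemon-heavy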

This will remove exim4-daemon-light automatically.

We have to enable Exim for internet communications, since the default is only localhost communications. In /etc/exim4/update-exim4.conf.conf  change the line  dc_eximconfig_configtype='local'  to
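
    dc_eximconfig_configtype='internet'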

Enter the domains you want to receive emails for to the line:
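
For example (example.com is a placeholder):

    dc_other_hostnames='example.com'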

Make exim listen to all interfaces by emptying the following config line:
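
    dc_local_interfaces=''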

Enable TLS for Exim by running the script
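
    # generates a self-signed certificate/key pair for Exim
    /usr/share/doc/exim4-base/examples/exim-gencert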

… and add the following line somewhere at the top of Exim’s configuration template file /etc/exim4/exim4.conf.template .
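
    MAIN_TLS_ENABLE = yes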

Every time you modify /etc/exim4/exim4.conf.template , you have to run update-exim4.conf  and do  service exim4 restart .

Next, we will install and configure Spamassassin for Exim. Luckily, it is a Debian package.

Spam Assassin

Installation

You can find instructions in this Debian Wiki https://wiki.debian.org/Exim, but you will find all commands here for convenience.
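
    apt-get install spamassassin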

This starts a daemon called spamd  automatically. However, it is disabled by default. Enable it by changing the following line in /etc/default/spamassassin :
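
    ENABLED=1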

Debian-specific modification: In the same file, change this line
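
    # the package default looks approximately like this:
    OPTIONS="--create-prefs --max-children 5 --helper-home-dir"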

to this:
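
    OPTIONS="--create-prefs --max-children 5 --helper-home-dir -u debian-spamd"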

This instructs the spamd daemon to run as the user debian-spamd, which is created when you install spamassassin. Its home directory is /var/lib/spamassassin. I had to do this because the following error message was regularly logged into /var/log/syslog:

Also, enable the cronjob to automatically update SpamAssassin’s rules:
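
    CRON=1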

The cronjob file in question is /etc/cron.daily/spamassassin, which in turn calls sa-update.

Next, you can configure the behaviour of spamassassin in the config file /etc/spamassassin/local.cf . Especially the entries “rewrite_header” and “required_score” are interesting, but later we will configure Exim to do these jobs for us directly.

Restart the daemon to make the changes effective:
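
    service spamassassin restart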

Integration into Exim

All of the following modifications have to take place in /etc/exim4/exim4.conf.template.

Now we have to enable spamd in Exim’s configuration file. Search for the line containing spamd_address and uncomment it like this:
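
    spamd_address = 127.0.0.1 783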

It is a good idea to add special headers to each processed email which specify the spam score. The configuration is already there, we just have to uncomment it (see also the official documentation of this here):
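
Uncommented (and already with the spamd user adjusted, as explained below), the block looks roughly like this:

    warn
      spam = debian-spamd:true
      add_header = X-Spam_Score: $spam_score\n\
                   X-Spam_Score_Int: $spam_score_int\n\
                   X-Spam_Bar: $spam_bar\n\
                   X-Spam_Report: $spam_report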

Note that I have replaced spam = Debian-exim:true  with spam = debian-spamd:true to match the user of the spamd daemon. If you want to know what the :true  part means, study this section of the official Exim documentation.

At this point, emails which are definitely spam will still be delivered. This is not good, since when a mail client of one of your customers is compromised, it could send thousands of spam emails per day, which still would cause your server and IP to be blacklisted. If you want to bounce emails that are over a certain spam point threshold, add the following lines directly below. In this case, the threshold is 12.0 (but it has to be entered without the decimal separator, i.e. as 120):
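
A sketch of such a deny statement (following the recipe on the Debian wiki page linked above):

    deny
      message = This message scored $spam_score spam points.
      spam = debian-spamd:true
      condition = ${if >{$spam_score_int}{120}{1}{0}}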

Generate the real Exim4 configuration from /etc/exim4/exim4.conf.template  that we’ve just edited by running
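
    update-exim4.conf
    service exim4 restart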

Test

Now, let’s test if our spamassassin setup was successful. Send an email to your Exim server that contains the following line. This code is taken from Spamassassin’s GTUBE (the Generic Test for Unsolicited Bulk Email):
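
    XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X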

The email should bounce back immediately and contain the message that we’ve entered above:

Now, send yourself a normal email. After you have received it, in your mail client (I’m using Icedove) inspect the mail source by pressing Ctrl + U. It should contain the following special header lines:
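
(the values shown are just an illustration)

    X-Spam_Score: -1.9
    X-Spam_Score_Int: -19
    X-Spam_Bar: -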

So far, so good.

Greylisting

Next, let’s set up Greylisting, another spam defense measure. The tool of my choice was greylistd, since Debian ships it as its own package.

Installation
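
    apt-get install greylistd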

You will be shown a configuration notice, instructing you how to enable greylistd in the Exim configuration. The following will show you what to do.
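
The helper shipped with the package adds the necessary configuration (check the notice shown during installation for the exact invocation):

    greylistd-setup-exim4 add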

Integration into Exim

As far as I could determine, this automatically adds some lines to the already existing Exim configuration templates exim4.conf.template and inside of the directory /etc/exim4/conf.d. It adds them right after acl_check_rcpt:.

At this point, greylistd is already running. In case you want to restart the service, run
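
    service greylistd restart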

Of course, we have to restart Exim again so that our new configuration becomes active:
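
    update-exim4.conf
    service exim4 restart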

Test

Observe the contents of the Exim log file
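
    tail -f /var/log/exim4/mainlog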

and send yourself a regular email. You will see a line in the logfile similar to this:

When it says “temporarily rejected RCPT” and “greylisted” it means that greylistd is working.

 

Anti-Malware and Anti-Virus

As an email provider you certainly want to have some anti-malware and anti-virus measures in place. My choice was ClamAV, since it’s part of Debian and rather easy to integrate with Exim. The following instructions are loosely based on the Ubuntu Wiki EximClamAV, but it contains one step too many, which no longer seems to be necessary (the part about creating a new file). In any case, here are my working steps.

Installation

First, install the daemon:
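
    apt-get install clamav-daemon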

It will output the following failures, but don’t worry, they are harmless:

To get rid of the messages, you have to run
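
    freshclam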

This command updates your virus databases from the internet. You should create a cronjob to run it regularly. Now you are able to restart the daemon without failure messages:
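
    service clamav-daemon restart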

Next, add the clamav daemon user to the Debian-exim group, so that it can access the spool files:
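
    adduser clamav Debian-exim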

Integration into Exim

Locate the line in /etc/exim4/exim4.conf.template which contains av_scanner, uncomment it, and change it to the following (as the Ubuntu Wiki correctly says, the default value does not work):
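
    av_scanner = clamd:/var/run/clamav/clamd.ctl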

Next, uncomment the following lines in /etc/exim4/exim4.conf.template (this is where the Ubuntu Wiki says that you should create a new file, but it’s already there):
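
In the Debian template the relevant (commented-out) block looks roughly like this:

    deny
      malware = *
      message = This message was detected as possible malware ($malware_name).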

As we have done before, we have to restart Exim so that our new configuration becomes active:
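
    update-exim4.conf
    service exim4 restart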

Test

There is a special string that you can use to test Anti-Malware programs. It is taken from the eicar website.  At the bottom of the linked page you will find it:
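
    X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*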

Send an email to Exim which contains this string in a separate line. It should bounce immediately. The bounced message will contain:

When you get this, ClamAV is working correctly.

 

Setting up Exim with DKIM (DomainKeys Identified Mail)

Google Mail suggests using DKIM for ISP providers. When you are signing outgoing messages with DKIM, you are reducing the chance that Google thinks you are a spammer. See also Google’s Bulk Senders Guidelines as to how Google judges the Anti-Spam qualities of your mails. DKIM is not a strong Anti-Spam indicator; it only ensures (due to private/public key encryption) that an email was actually sent from the server it claims to have been sent from. Anyhow, here is how to do it (based on the articles here and here):

Generate a private key with openssl:
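
For example (the file name is a choice; it just has to match the configuration below):

    openssl genrsa -out /etc/exim4/dkim.private.key 2048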

Extract the public key:
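
    openssl rsa -in /etc/exim4/dkim.private.key -pubout -out /etc/exim4/dkim.public.key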

Change permissions and ownership to make it readable for Exim and for security reasons:
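
For example:

    chown root:Debian-exim /etc/exim4/dkim.private.key
    chmod 640 /etc/exim4/dkim.private.key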

Next add the following to  exim4.conf.template  (or to your split-configuration files if you use that method), right before the line  remote_smtp: 
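
A sketch (the selector “mail” and the domain are placeholders; the Debian remote_smtp transport picks these macros up automatically):

    DKIM_CANON = relaxed
    DKIM_SELECTOR = mail
    DKIM_DOMAIN = example.com
    DKIM_PRIVATE_KEY = /etc/exim4/dkim.private.key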

For more information about these parameters, see this section of the official Exim documentation.

Update the configuration and restart Exim:
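
    update-exim4.conf
    service exim4 restart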

Now send a test email to some address (e.g. a free mail provider) which is not handled by your email server and inspect the sources of the received email. It should contain a DKIM-Signature line. To avoid confusion: If you are sending an email to yourself, which is received by the same server which you are configuring, no DKIM signature is added (since it is not necessary).

Next, you have to add the DKIM public key as a TXT “selector record” to the DNS zone of yourdomain.com. For the DKIM_DOMAIN and DKIM_SELECTOR you have specified above, you have to add the following entry:
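
With the placeholders used above (selector “mail”, domain example.com), the record looks like this:

    mail._domainkey.example.com.   IN   TXT   "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."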

where p=  gives the public key from /etc/exim4/dkim.public.key  without headers and without line breaks.

You also should add a “DKIM policy record” for the subdomain _domainkey to state that all emails must be signed with DKIM. Otherwise, DKIM could simply be omitted by a spoofer without consequences. Again, this is a simple TXT entry in the DNS:
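
(“o=-” states that all mail from the domain is signed; “o=~” would mean only some of it is)

    _domainkey.example.com.   IN   TXT   "o=-"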

You can use this tester to check the policy record: http://domainkeys.sourceforge.net/policycheck.html

However, this “o” policy record does not seem to be documented in any RFC (I found it on various blog posts), and it is superseded by RFC5617 (DKIM ADSP Author Domain Signing Practices). For ADSP you would have to add:
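
    _adsp._domainkey.example.com.   IN   TXT   "dkim=all"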

Similar in function to ADSP seems to be DMARC. I’ll write about this in a future blog post.

For details on DKIM see: RFC specification

Test

Now check if your zone records have been saved and have propagated:
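
    dig +short TXT mail._domainkey.example.com
    dig +short TXT _domainkey.example.com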

It should output the contents of the TXT zone entry which you’ve made above. If you can see them, send an email to the excellent SMTP tester email check-auth@verifier.port25.com

If it responds with

Then the DKIM set-up was successful.

 

Additional Anti-Spam measures

A characteristic behaviour of malicious spam senders is that they send a spam flood (as many messages as possible in the least time possible). If possible, they will send many messages in just one SMTP connection (i.e. several MAIL commands in one session). If that happens, you will be blacklisted very soon, and with tens of thousands of sent spam emails, it will be very difficult to re-gain a good reputation with email providers like Google or Yahoo. Legitimate senders, however, will only send a few single messages, one message per SMTP connection, and it will reasonably take them at least 10 seconds to write a short email. So, we can rate-limit the submission of messages, having a great impact on spammers, but little impact on legitimate senders. Exim has a rate-limiting feature, but since it’s a science to configure Exim properly, I decided on an easier, more robust, low-level approach using the iptables firewall.

First, we will lock Exim down to just allow one MAIL command per SMTP session. From the Exim Main Configuration Documentation, we can write somewhere in  /etc/exim4/exim4.conf.template
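
    smtp_accept_max_per_connection = 1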

Next, we are going to limit the number of parallel SMTP connections per sending host to 1:
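
    smtp_accept_max_per_host = 1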

And we will limit the maximum SMTP connections which Exim will allow. You can set this to the approximate number of clients you have, plus a margin:
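
    # illustrative value -- size it to your expected number of clients plus a margin
    smtp_accept_max = 30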

Now that we know that there only can be 1 message per SMTP connection, we limit the SMTP connection frequency in our firewall:
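
A sketch using the iptables limit module (a per-source variant with the recent or hashlimit modules is also possible):

    iptables -A INPUT -p tcp --dport 25 -m state --state NEW -m limit --limit 6/minute --limit-burst 6 -j ACCEPT
    iptables -A INPUT -p tcp --dport 25 -m state --state NEW -j DROP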

This will limit incoming SMTP connections to 6 in 1 minute, which is one connection per 10 seconds on average — enough for private or business emails, not enough for spammers.

SMTP banner delay

(inspiration from here and here) Exim drops the connection if an SMTP client attempts to send something before the SMTP banner is displayed; this is spam protection already built into Exim:

To further slow down Spam we simply delay this banner. Somewhere at the beginning of the Exim config file, write:
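
(acl_check_connect is an arbitrary name; it just has to match the ACL defined below)

    acl_smtp_connect = acl_check_connect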

In layman’s terms, this tells Exim which ACL to execute when a connection is initiated. Then, after begin acl  add it:
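
    acl_check_connect:
      accept delay = 10s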

You can test this by telnet’ing to your server. The banner should appear only after 10 seconds.

More Anti Spam measures

https://github.com/Exim/exim/wiki/BlockCracking (not tested)

 

 

Conclusion

If you have succeeded so far, test your new Exim installation with the great tool at:

http://www.allaboutspam.com/email-server-test/

Exim is a very complex program (the most complex one I’ve encountered so far) and you can literally spend a lifetime studying it. The complexity seems to stem from the complexity of the email delivery process itself. Despite that fact, this tutorial will enable you to set up Exim with Anti-Malware and Anti-Spam measures in less than 1 hour of work. It is by no means exhaustive, but it at least bounces spam emails above a certain threshold, which is the most important thing when you don’t want your server and IP address to be blacklisted all over the internet for all eternity. It also adds value for your customers when you are operating Exim as a business.

But: Anyone who would like to operate Exim for customers, to be a professional email hosting company, should not think twice, but ten times if it’s really worth it. Things do go wrong all the time, and if you don’t have an exhaustive knowledge about what exactly to do in case of a failure — immediately, right there and then — you could upset all your customers pretty quickly. So, my advice would be: “hands-off” unless you know what you are doing. As I like to say: “No funny stuff!”

Follow up:

Exim and Spamassassin: Rewriting Subject lines, adding SPAM and Score