Encrypt backups at an untrusted remote location

In a previous blog post I argued that a good backup solution includes backups at different geographical locations to compensate for local disasters. If you don’t fully trust the remote location, the only solution is to keep an encrypted backup there.

In this tutorial we’re going to set up an encrypted, mountable backup image which allows us to use regular file system operations like rsync.

First, on any permanent medium with enough free space, create a file large enough to hold the encrypted file system. You can later grow the file system (with dd and resize2fs) if needed. We will use dd to create this file and fill it with zeros. This may take a couple of minutes, depending on the write speed of the hard drive. Here, we create a 500 GB file:
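For example (the path /backup/backup.img is only an illustration; put the image wherever you have space):

    # 512000 blocks of 1 MiB each, i.e. 500 GiB
    dd if=/dev/zero of=/backup/backup.img bs=1M count=512000 status=progress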

We will use LUKS to set up a virtual mapping device node for us:
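On Debian and its derivatives, the necessary tooling comes with the cryptsetup package:

    apt-get install cryptsetup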

First, we generate a key/secret which will be used to generate the longer symmetric encryption key which in turn protects the actual data. We tap into the entropy pool of the Linux kernel and convert 32 bytes of random data into base64 format (this may take a long time; consider installing haveged as an additional entropy source):
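For example:

    # Read 32 bytes from the kernel's blocking entropy pool and Base64-encode them
    head -c 32 /dev/random | base64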

Store the Base64-encoded key in a secure location and create backups! If this key/secret is lost, you will lose the backup. You have been warned!

Next, we will write the LUKS header into the backup image:
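Assuming the Base64-encoded key is stored in the shell variable KEY (an illustrative name), we decode it and pipe it into cryptsetup:

    echo -n "$KEY" | base64 -d | cryptsetup luksFormat /backup/backup.img --key-file=-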

Next, we “open” the encrypted drive with the label “backup_crypt”:
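Again with the key piped in (recent cryptsetup versions set up a loop device for the image file automatically):

    echo -n "$KEY" | base64 -d | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-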

This will create a device node /dev/mapper/backup_crypt which can be mounted like any other hard drive. Next, create an Ext4 file system on this raw device (“formatting”):
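For example:

    mkfs.ext4 /dev/mapper/backup_crypt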

Now, the formatted device can be mounted like any other file system:
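For example, using /mnt/backup as the mount point:

    mkdir -p /mnt/backup
    mount /dev/mapper/backup_crypt /mnt/backup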

You can inspect the mount status by typing mount. If data is written to this mount point, it will be transparently encrypted to the underlying physical device.

If you are done writing data to it, you can unmount it as follows:
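Unmount the file system, then close the LUKS mapping:

    umount /mnt/backup
    cryptsetup luksClose backup_crypt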

To re-mount it:
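The same open-and-mount sequence, with the key piped in as before:

    echo -n "$KEY" | base64 -d | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-
    mount /dev/mapper/backup_crypt /mnt/backup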

Note that we always keep the Base64-encoded key on the command line and pipe it into cryptsetup. This is better than storing the key in a file somewhere on the hard drive, because this way the key only ever resides in RAM. If the machine is powered off, the decrypted mount point is lost and only the encrypted image remains.

If you are really security-conscious, read the manual of cryptsetup to optimize the parameters. You may want to use a key/secret longer than the 32 bytes used here.

Before data loss: How to make correct backups

Why should you regularly make backups? Because if you don’t, this omission will bite you, sooner or later. Why? Because of Murphy’s Law:

Anything that can go wrong, will go wrong.

And a variation of it, Finagle’s law, even says:

Anything that can go wrong, will—at the worst possible moment.

So, let’s prepare right now and look at ways to back up data correctly.

RAID data mirroring is not enough

Real-time data mirroring (no matter whether it is software or hardware RAID) is good, but not enough. What if your location is hit by lightning, fire or water? What if your entire system gets stolen? In those cases, RAID is exactly useless.

Threats to local backups on external media

Say that you have an external USB hard drive for your backups. This is good, but as long as it is connected to your computer, it may still be subject to destruction by lightning.

In addition, if you leave your external USB hard drive mounted in your host OS, then a single administrator mistake, faulty software, or malware may fully erase your main hard drive and the backup at the same time. This is not too unlikely!

It happened to me once. A simple mistyped rm -rf . / as root user somewhere deep in the filesystem did exactly that (I accidentally typed a space between the dot and the slash). Yes, I erased my main hard drive and the backup (mounted under /mnt) at the same time. The data loss was disastrous.

Independently of the above, local backups are still susceptible to fire, water, or theft.

The dangers of deleted or changed files

rsync is especially good if you transfer the data to your backup location via public networks, because it only transfers changes. It also supports the --delete flag, which deletes remote files when they are no longer present locally. This is generally a good idea if you want your backup to be an exact copy; otherwise your backup becomes messy by accumulating many long-deleted files, which makes restoration not much fun.
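A typical invocation looks like this (host and paths are made up):

    # Mirror the local home directory to the backup host, transferring only changes
    rsync -av --delete /home/user/ backupserver:/srv/backup/user/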

But the --delete flag is also a danger. Say you delete an important file locally. Two days later you discover this fact, and decide to restore it from your backup. But guess what, it will be gone there too if you have synced in the meantime.

This problem is also present when changing files. The only solution to this problem is to have rolling backups (backups of your backups) in regular or increasing intervals (weekly, monthly, yearly). This will multiply the storage requirements, but you really cannot get around it.

Restoration is as important as backing up

Let’s say you have 10 perfectly made backups. But if you can’t access them any more, or not quickly enough (e.g. due to low bandwidth), they will be useless for your purposes. You need to put as much thought and effort into an effective restoration method as you put into the backups in the first place.

What works?

In general, a good backup solution depends on the specific circumstances and needs. Backups can never be perfect (100.0% reliable); there will always be a small but real possibility of total data loss. But you can make that possibility very, very small. As a rough guideline, the following principles minimize the risk:

  • You have more than one backup.
  • You have backups of your backups (“rolling backups”).
  • You do not leave local backup media connected or mounted.
  • Your backups are at geographically different locations to compensate for local disasters.
  • If your backup is at a remote location, you fully trust the location, or use proper encryption.
  • Restoration is effective.
  • Backup and restoration is automated and tested.
  • After each backup cycle, the backups are verified. If there was a failure, the administrator is notified.

It is your responsibility!

If you should lose data, don’t blame it on ‘evil’ external circumstances, because:

Never attribute to malice that which is adequately explained by stupidity.

What if all data is still lost? Well, in that case I can only say:

Every misfortune is a blessing in disguise.

Start working on your backup solution now!

Simple test if TCP port is open

There are other, more complicated tools that achieve the same (like nmap, whose manpage makes your head spin), but here is a very simple solution using netcat:
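For example, to test whether TCP port 80 is open on example.com, with a timeout of 5 seconds:

    nc -z -w 5 example.com 80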

To programmatically evaluate the result, use the standard Bash $? variable. It will be set to 0 if the port was open, or 1 if the port was closed.

How to set up password-less SSH login for a Dropbear client

Dropbear is a replacement for standard OpenSSH for environments with low memory and processor resources. With OpenSSH, you use the well-known ssh-keygen command to create a private/public keypair for the client. In Dropbear, it is a bit different. Here are the commands on the client:
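To generate an RSA keypair, for example:

    mkdir -p ~/.ssh
    dropbearkey -t rsa -f ~/.ssh/id_dropbear

To print the public key again later, use the -y flag:

    dropbearkey -y -f ~/.ssh/id_dropbear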

The private key will be in ~/.ssh/id_dropbear. The public key is output to stdout.

On a Dropbear as well as on an OpenSSH server, you can put the client’s public key as usual into the authorized_keys file to allow the client a password-less login:
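For example (the key is abbreviated here; paste the full single line that dropbearkey printed):

    mkdir -p ~/.ssh
    echo "ssh-rsa AAAA… user@client" >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys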

Let me know in comments if you know of a better method!

How to install yubikey-manager on Debian

yubikey-manager is a Python application that is not yet in the official Debian package repository, so it has to be installed from the Python repositories along with a few build dependencies. Here is how:
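A sketch (the exact dependency list is an assumption: pyscard, which yubikey-manager depends on, needs the pcsclite headers and swig to compile, and pcscd at runtime):

    apt-get install python3-pip python3-dev libpcsclite-dev swig pcscd
    pip3 install yubikey-manager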

Here is the main command-line utility:
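The tool is called ykman:

    ykman --help
    ykman info    # shows details about a connected YubiKey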

WooCommerce Shipping Plugin “External Fetch”

I just wrote the following shipping plugin for WooCommerce because existing plugins would not cover the case I’m working on. It is available on my GitHub account under the permissive MIT license.

https://github.com/michaelfranzl/woocommerce-shipping-external-fetch

woocommerce-shipping-external-fetch

Shipping plugin for WooCommerce which HTTP PUTs the cart contents in JSON format to an external web service specified by protocol/host/port/URI, and receives the calculated shipping offer (costs, labels, etc.) as JSON from that web service.

Useful when the shipping calculation complexity exceeds the capabilities of other shipping calculation plugins.

The external shipping calculation application is, of course, business-specific and thus not included in this repository. However, an example of a JSON request and response is shown below.

When the web service is not reachable, not responsive, or returns an HTTP status code other than 200 (i.e. when it experiences a server error), this plugin offers free shipping to the customer, since technical problems are the ‘fault’ of the store owner and should not prevent a customer from completing an order. A setting to configure this behavior is not included, but it would be easy to add.

This plugin supports WooCommerce shipping zones. So, in theory you could have different web services dedicated to different shipping zones.

To add the plugin to an existing shipping zone:

  1. Go to WooCommerce -> Settings -> Shipping
  2. Click on “Manage shipping methods” below a Zone
  3. Click “Add shipping method” button
  4. Select “External Fetch” and click “Add shipping method” button
  5. Configure the Plugin by clicking on “Edit”
  6. Customize “Method title” and “Method description” and set the “JSON API Endpoint” to, for example, http://localhost:4040/calculate

At this point you can add a product to the cart, set up a simple webserver listening at port 4040 on the same machine, and receive/send JSON as shown in the section “Examples” further below.

This plugin is for DEVELOPERS only and will likely remain in an ALPHA state.

License

Copyright 2017 Michael Franzl

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Examples

Cart contents in JSON format as sent by this plugin:
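The authoritative schema is whatever the plugin source on GitHub implements; the following sketch only illustrates the general shape, and all field names in it are hypothetical:

    {
      "cart": [
        { "product_id": 1001, "quantity": 2, "weight": 0.5 }
      ],
      "destination": { "country": "AT", "postcode": "1010", "city": "Vienna" }
    }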


Response in JSON format as expected by this plugin:
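Again only an illustrative sketch: the description and cart_no_shipping_available_html keys are the ones discussed below, the rest is hypothetical:

    {
      "rates": [
        {
          "label": "Standard shipping",
          "cost": "4.90",
          "description": "Delivery within 3-5 business days"
        }
      ],
      "cart_no_shipping_available_html": "Please contact us for a shipping quote."
    }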

This plugin will add the description values in a <div class="shipping_rate_description"> below each selectable shipping rate entry on the checkout page to give the customer a better idea about the shipping method.

You can put a message into cart_no_shipping_available_html, which will be added to the default WooCommerce message when there are no shipping methods available.


vim – Unable to paste with middle mouse button – E353: Nothing in register ” error – Solution

Put set clipboard=unnamed within your ~/.vimrc VIM configuration file.

Re-posted from https://linuxconfig.org/vim-unable-to-paste-e353-nothing-in-register-error-solution

How to compile ezstream from source

Debian Stretch’s version of ezstream is currently a bit out of date. Here is how to compile ezstream from source to get the latest improvements and bugfixes. Not even the INSTALL file in the ezstream repo lists all the steps:
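A sketch of the usual autotools procedure (the dependency list is an assumption; watch the configure output for anything still missing):

    apt-get install git build-essential autoconf automake libtool \
        libshout3-dev libxml2-dev libtagc0-dev check
    git clone https://github.com/xiph/ezstream.git
    cd ezstream
    ./autogen.sh
    ./configure
    make
    make install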

Note that the configuration file structure has changed from what can be found in older blog posts on the internet. For example, to pipe Ogg Vorbis data into ezstream without re-encoding, you can use something like this teststream.xml:
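A minimal sketch of the newer structure (hostname, password and mountpoint are placeholders to adjust):

    <ezstream>
      <servers>
        <server>
          <hostname>localhost</hostname>
          <port>8000</port>
          <password>hackme</password>
        </server>
      </servers>
      <streams>
        <stream>
          <mountpoint>/test.ogg</mountpoint>
          <format>Vorbis</format>
        </stream>
      </streams>
      <intakes>
        <intake>
          <type>stdin</type>
        </intake>
      </intakes>
    </ezstream>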

Then, to stream 30 seconds of brown noise with a sine sweep to an Icecast server for testing purposes:
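Something along these lines, using sox to synthesize the audio and encode it to Ogg Vorbis on stdout (the exact synth arguments are an assumption, and sox must have been built with Ogg support):

    sox -n -r 44100 -c 2 -t ogg - synth 30 brownnoise synth 30 sine mix 300-3000 | ezstream -c teststream.xml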


Zero Client: Boot kernel and root filesystem from network with a Raspberry Pi2 or Pi3

Boot your Raspberry Pi from nothing but an Ethernet cable

A Zero Client is a computer that has nothing on its permanent storage but a bootloader. Rather, it loads everything from the network.

With the method presented in this article, you will be able to boot a Raspberry Pi into a full Debian OS with nothing more on the SD card than the Raspberry firmware files and the u-boot bootloader on a FAT file system. The Linux kernel and the actual OS will be served over the local Ethernet network.

We will only focus on the Raspberry Pi 3, but the instructions should also work, with minor adaptations, on a Pi 2.

The following instructions assume that you have already built…

  1. a full root file system for the Raspberry
  2. a u-boot binary, and
  3. a Linux kernel

… based on my previous blog post. Thus, you should already have the following directory structure:
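Roughly like this (the exact directory names depend on the previous post; treat them as placeholders):

    ~/workspace
    ├── firmware/    (Raspberry Pi firmware blobs)
    ├── u-boot/      (u-boot sources and the compiled u-boot.bin)
    ├── linux/       (Linux kernel sources and the compiled Image.gz)
    └── rootfs/      (the Debian root file system)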

We will do all the work inside of the ~/workspace directory.

Preparation of the SD card

You will only need a small SD card with a FAT filesystem on it. The actual storage of files in the running OS will be transparently done over the network. Mount the filesystem on /mnt/sdcard and do the following:
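For example (replace sdX1 with the device name of your SD card’s FAT partition):

    mount /dev/sdX1 /mnt/sdcard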

Copy firmware
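Assuming the firmware repository was cloned to ~/workspace/firmware (a placeholder path), the three essential boot blobs are:

    cp ~/workspace/firmware/boot/bootcode.bin /mnt/sdcard/
    cp ~/workspace/firmware/boot/start.elf /mnt/sdcard/
    cp ~/workspace/firmware/boot/fixup.dat /mnt/sdcard/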

Copy u-boot bootloader
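Again assuming the placeholder path from above:

    cp ~/workspace/u-boot/u-boot.bin /mnt/sdcard/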

Create config.txt

config.txt is the configuration file read by the Raspberry firmware blobs. Most importantly, it tells the firmware what kernel to load. “Kernel” is a misleading term here, since we will boot u-boot rather than the kernel.

Create /mnt/sdcard/config.txt with the following contents:
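At minimum it must point the firmware at the u-boot binary; enable_uart=1 additionally gives you a serial console for debugging:

    kernel=u-boot.bin
    enable_uart=1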


Make a universal boot script for the u-boot bootloader

To achieve maximum flexibility (and to avoid the repetitive dance of manually removing the SD card, copying files to it, and re-inserting it), we will make a universal u-boot startup script that does nothing but load yet another u-boot script from the network. This way, there is nothing specific about the to-be-loaded kernel or OS on the SD card at all.

Create a file boot.scr.mkimage with the following contents:
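A sketch (192.168.0.100 stands in for your server’s IP; 0x02400000 is simply a free scratch address to load the script to):

    usb start
    setenv autoload no
    dhcp
    setenv serverip 192.168.0.100
    tftp 0x02400000 netboot-${serial#}.scr
    source 0x02400000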

Replace the server IP with the actual static IP of your server. Note that this script does nothing but load yet another script, called netboot-${serial#}.scr, from the server. serial# is the serial number which u-boot extracts from the Raspberry Pi hardware; it is usually derived from the HW address of the Ethernet device. This way, you can have separate startup scripts for several Raspberry Pis if you have more than one. Alternatively, to keep the setup simple, set the file name to something predictable.

Compile the script into a u-boot readable image:
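mkimage comes with the u-boot tools:

    mkimage -A arm -O linux -T script -C none -n "universal boot script" -d boot.scr.mkimage boot.scr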

Copy boot.scr to the SD card:
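Simply:

    cp boot.scr /mnt/sdcard/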

The SD card preparation is complete at this point. We will now focus on the serving of the files necessary for boot.

Preparation of the file server

Do all of the following as the ‘root’ user on a regular PC running Debian 9 (“Stretch”). This PC will act as the “server”; it will serve the files necessary to network-boot the Raspberry.

The directory /srv/tftp will hold …

  • a u-boot start script file
  • the kernel uImage file
  • and the binary device tree file.

… to be served by a TFTP server.

The directory /srv/rootfs_rpi3 will hold our entire root file system, to be served by an NFS server:
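Create both directories right away:

    mkdir -p /srv/tftp /srv/rootfs_rpi3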

You will find installation instructions for both the TFTP and NFS servers further down.


Serve the root file system

Let’s copy the pre-built root file system into the directory from where it will be served by the NFS server:
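Assuming the root file system lives in ~/workspace/rootfs (a placeholder path):

    rsync -a ~/workspace/rootfs/ /srv/rootfs_rpi3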

(notice the slash at the end of the source directory)


Fix the root file system for network booting

Edit /srv/rootfs_rpi3/etc/fstab and comment out all lines. We don’t need to mount anything from the SD card.

When network-booting the Linux kernel, the kernel will configure the network device for us (either with a static IP or DHCP). Any userspace program attempting to re-configure the network device will cause problems, specifically a loss of connection to the NFS server. Thus, we need to prevent systemd-networkd from managing the Ethernet device. Make the device unmanaged by removing the following Ethernet configuration file:
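The exact file name depends on how the root file system was built; list /srv/rootfs_rpi3/etc/systemd/network to see what is actually there. For example:

    rm /srv/rootfs_rpi3/etc/systemd/network/ethernet.network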

If you don’t do that, you’ll get the following kernel message during boot:

That is because systemd has shut down and then re-started the ethernet device. Apparently NFS transfers are sensitive to that.

In case you want to log into the chroot to make additional changes that can only be done from within (e.g. running systemctl scripts), you can do:
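One way, assuming an ARM root file system on an x86 host, is qemu user emulation (use qemu-aarch64-static instead if your rootfs is 64-bit):

    apt-get install qemu-user-static
    cp /usr/bin/qemu-arm-static /srv/rootfs_rpi3/usr/bin/
    chroot /srv/rootfs_rpi3 /bin/bash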


Serve Kernel uImage

In this step, we create a Linux kernel uImage that can be directly read by the u-boot bootloader. We read Image.gz directly from the kernel source directory, and output it into the /srv/tftp directory where a TFTP server will serve it to the Raspberry:
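A sketch for a 64-bit kernel (the kernel path, load address and entry point are assumptions; they must match your build):

    mkimage -A arm64 -O linux -T kernel -C gzip -a 0x80000 -e 0x80000 -n "Linux kernel" -d ~/workspace/linux/arch/arm64/boot/Image.gz /srv/tftp/linux-rpi3.uImage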


Serve device tree binary

The u-boot bootloader will also need to load the device tree binary and pass it to the Linux kernel, so copy that too into the /srv/tftp directory.
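For a 64-bit Pi 3 build, the file would be found at a path like this (an assumption; adjust for your kernel tree):

    cp ~/workspace/linux/arch/arm64/boot/dts/broadcom/bcm2837-rpi-3-b.dtb /srv/tftp/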


Serve secondary u-boot script loading the kernel

Create a file netboot-rpi3.scr.mkimage with the following contents:
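A sketch (IP addresses, console device and NFS path are assumptions; kernel_addr_r and fdt_addr_r are load addresses that u-boot predefines on the Raspberry Pi):

    setenv serverip 192.168.0.100
    tftp ${kernel_addr_r} linux-rpi3.uImage
    tftp ${fdt_addr_r} bcm2837-rpi-3-b.dtb
    setenv bootargs console=ttyS0,115200 root=/dev/nfs nfsroot=192.168.0.100:/srv/rootfs_rpi3 rw ip=dhcp rootwait
    bootm ${kernel_addr_r} - ${fdt_addr_r}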

Replace the server IP with the static IP of your server PC. Then compile this script into a u-boot readable image and output it directly to the /srv/tftp directory:
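Again with mkimage:

    mkimage -A arm -O linux -T script -C none -n "netboot script" -d netboot-rpi3.scr.mkimage /srv/tftp/netboot-rpi3.scr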

Make sure that the file name of the .scr file matches whatever file name you’ve set in the universal boot script that we prepared further above.


Install an NFS server

The NFS server will serve the root file system to the Raspberry and provide transparent storage.
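On Debian:

    apt-get install nfs-kernel-server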

Edit /etc/exports and add:
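Something like the following line (the subnet is an example; no_root_squash is needed so that root on the Raspberry keeps ownership of its files):

    /srv/rootfs_rpi3 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)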

To apply the changed ‘exports’ configuration, run
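    exportfs -rav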

Useful to know about the NFS server:

You can restart the NFS server by running service nfs-kernel-server restart

Configuration files are /etc/default/nfs-kernel-server and /etc/default/nfs-common.


Test NFS server

If you want to be sure that the NFS server works correctly, do the following on another PC:

Mount the root file system (fix the static IP for your server):
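For example (the nfs-common package provides the mount helper):

    apt-get install nfs-common
    mkdir -p /mnt/nfstest
    mount -t nfs 192.168.0.100:/srv/rootfs_rpi3 /mnt/nfstest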


Install a TFTP server

To install:
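Any TFTP daemon will do; here I assume the classic inetd-based tftpd package, whose Debian default serving directory is /srv/tftp (tftpd-hpa works just as well):

    apt-get install tftpd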

After installation, check if the TFTP server is running:
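Since the inetd-based tftpd is spawned on demand, check the inetd service itself:

    systemctl status inetd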

This command will tell you the default serving directory (/srv/tftp):
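With the inetd-based tftpd, the serving directory is the last argument on the tftp line of inetd’s configuration:

    grep tftp /etc/inetd.conf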

Here is another command that tells you if the TFTP server is listening:
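TFTP listens on UDP port 69:

    ss -ulpn | grep ":69"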

To get help about this server: man tftpd

Test TFTP

If you want to be sure that the TFTP server works correctly, do the following on another PC:
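First install a TFTP client (the package name is an assumption; any plain TFTP client will do):

    apt-get install tftp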

Then see if the server serves the Linux kernel we’ve installed before:
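For example, in the client’s interactive mode:

    tftp 192.168.0.100
    tftp> get linux-rpi3.uImage
    tftp> quit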

You should now have a local copy of the linux-rpi3.uImage file.


Complete

If you’ve done all of the above correctly, you can insert the prepared SD card into your Raspberry Pi and reboot it. The following will happen:

  1. The Raspberry Pi GPU will load the firmware blobs from the SD card.
  2. The firmware blobs will boot the image specified in config.txt. In our case, this is the u-boot binary on the SD card.
  3. The u-boot bootloader will boot.
  4. The u-boot bootloader loads and runs the universal boot.scr script from the SD card.
  5. The boot.scr downloads the specified secondary boot script from the network and runs it.
  6. The secondary boot script …
    • downloads the device tree binary from the network and loads it into memory,
    • downloads the Linux kernel from the network and loads it into memory,
    • passes the device tree binary to the kernel, and boots the kernel.
  7. The Linux kernel will bring up the Ethernet device, connect to the NFS server, and load the regular OS from there.

Many things can go wrong in this rather long sequence, so if you run into trouble, check the Raspberry boot messages output on an attached screen or serial console, and the log files of the NFS and TFTP servers on your server PC.


Resources

https://www.raspberrypi.org/documentation/linux/kernel/building.md

http://www.whaleblubber.ca/boot-raspberry-pi-nfs/

https://cellux.github.io/articles/moving-to-nfs-root/

http://billauer.co.il/blog/2011/01/diskless-boot-nfs-cobbler/

https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt

http://wiki.linux-nfs.org/wiki/index.php/General_troubleshooting_recommendations

https://wiki.archlinux.org/index.php/NFS