Running a graphical window program via SSH on a remote machine (with GPU hardware acceleration)

Note 1: Even though it’s mid-2018, this post is still about the X Window System. The transition towards Wayland is still in progress, and things might change or improve over time.

Note 2: This post is not about displaying a graphical window of a program running on a remote machine on the local machine (like VNC or X forwarding). It is about running a remote program and displaying its graphical window on the remote machine itself, as if it had been directly started by a user sitting in front of the remote display. One obvious use case for the solution to this problem would be a remote graphics rendering farm, where programs must make use of the GPU hardware acceleration of the machine they’re running on.

Note that graphical programs started via Xvfb or via X login sessions on fake/software displays (started by some VNC servers) will not use GPU hardware acceleration. The project VirtualGL might be a viable solution too, but I haven’t looked into that yet.

Some experiments on localhost

I’m going to explore the behavior on localhost first. You’ll need to be logged in to an X graphical environment with a monitor attached.

The trivial case: No SSH login session

Running a local program with a graphical window from a local terminal on a local machine is trivial when you are logged into the graphical environment: For example, in a terminal, simply type glxgears and it will run and display with GPU hardware acceleration.

With SSH login session to the same user

Things become a bit more interesting when you use SSH to connect to your current user on localhost. Let’s say your local username is “me”. Try
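For example (assuming your username really is “me” and sshd is running locally):

    ssh me@localhost
    glxgears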

It will output:
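With no DISPLAY variable set in the SSH session, glxgears typically fails with:

    Error: couldn't open display (null)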

This can be fixed by setting the DISPLAY variable to the same value that is set for the non-SSH session:
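Assuming the X session runs on display :0 (check with echo $DISPLAY in the non-SSH session):

    export DISPLAY=:0
    glxgears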

Glxgears will run at this point.

With SSH login session to another user

Things become even more interesting when you SSH into some other local user on localhost, called “other” below.
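For example (the username “other” is just a placeholder):

    ssh other@localhost
    glxgears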

You will get the message:
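Again, because DISPLAY is not set:

    Error: couldn't open display (null)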

Trying to export DISPLAY as before won’t help us now:
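As user “other”:

    export DISPLAY=:0
    glxgears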

You will receive the message:
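The X server rejects the connection because user “other” has no authorization cookie; typically:

    No protocol specified
    Error: couldn't open display :0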

This is now a permission problem. There are two solutions for it:

Solution 1: Relax permissions via the xhost program

To allow non-networked connections to the X server, you can run (as user “me” which is currently using the X environment):
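A sketch using xhost’s “local” family, which allows all local (non-network) connections:

    xhost +local: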

Then DISPLAY=:0 glxgears will start working as user “other”.

For security reasons, you should undo what you just did:
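With the same xhost syntax:

    xhost -local: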

Settings via xhost are not permanent across reboots.

Solution 2: via Xauthority file

If you don’t want or can’t use the xhost program, there is a second way (which I like better because it only involves files and file permissions):

User “me” has an environment variable XAUTHORITY pointing to the X authority file. You can inspect it with env | grep XAUTHORITY:
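For me this looks something like the following; the numeric user ID will differ on your system:

    XAUTHORITY=/run/user/1000/gdm/Xauthority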

(I’m using the gdm display manager. The path could be different in your case.)

This file contains a secret which is readable only for user “me”, for security reasons. As a quick test, make this file available world-readable in /tmp:
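As user “me”, for example:

    cp "$XAUTHORITY" /tmp/xauthority_me
    chmod a+r /tmp/xauthority_me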

Then, as user “other”:
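Point both DISPLAY and XAUTHORITY at the session of user “me” (display :0 assumed):

    DISPLAY=:0 XAUTHORITY=/tmp/xauthority_me glxgears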

Glxgears will run again.

To make sure that we are using hardware acceleration, run glxinfo:
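For example, filtering for the renderer string (display and Xauthority file as above):

    DISPLAY=:0 XAUTHORITY=/tmp/xauthority_me glxinfo | grep -i "opengl renderer"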

This prints for me:

Make sure you remove /tmp/xauthority_me after this test.

Note that the Xauthority file changes after each reboot, but it should be straightforward to make it available to other users in a secure way.

Application on remote machine

If you were able to make things work on the local machine, the same steps should work on a remote machine, too. To clarify, the remote machine needs:

  • A real X login session active (you will likely need to set up auto-login in your display manager if the machine is not easily accessible).
  • A real monitor attached. Modern graphics cards and/or BIOSes simply shut down the GPU to save power when there is no real device attached to the HDMI port. This is not Linux- or driver-specific. Instead of real monitors, you probably want to use “HDMI emulator” hardware plugs – they are cheap-ish and small. Otherwise, the graphical window might not even get painted into the graphics memory. The usual symptom is a black screen when using VNC.

Summary

If you SSH-login into the remote machine, as the user that is currently logged in to the X graphical environment, you can just set the DISPLAY environment variable when running a program, and the program should show on the screen.

If you SSH-login into the remote machine, as a user that is not currently logged in to the X graphical environment, but some other user is, you can set both DISPLAY and XAUTHORITY environment variables as explained further above, and the program should show up on the screen.

Related Links

https://serverfault.com/questions/186805/remote-offscreen-rendering

https://stackoverflow.com/questions/6281998/can-i-run-glu-opengl-on-a-headless-server#8961649

https://superuser.com/questions/305220/issue-with-vnc-when-there-is-no-monitor

https://askubuntu.com/questions/453109/add-fake-display-when-no-monitor-is-plugged-in

https://software.intel.com/en-us/forums/intel-business-client-software-development/topic/279956

How to digitize old VHS videos with an EasyCAP UTV007 USB converter on Linux

VHS is dead. If you don’t have a functioning VHS player any more, your only option is to buy second-hand devices. But if you still have old, valuable VHS videos (e.g. family videos) you should digitize them today, as long as there are still working VHS players around.

Our goal is to feed the audio/video (AV) signals coming out of an old VHS player into an EasyCAP UTV007 USB video grabber, which can receive 3 RCA cables (yellow for Composite Video, white for left channel audio, red for right channel audio).

EasyCAP UTV007 USB video grabber

 

VHS players usually have a SCART output which luckily carries all the needed signals.

SCART connector

Via a Multi AV SCART adapter you can output the AV signals into three separate RCA cables (male-to-male), and from there into the EasyCAP video grabber. If your adapter has an input/output switch, set it to “output”.

Multi AV Adapter outputting 3 RCA connectors (yellow for Composite Video, white for left channel audio, red for right channel audio)

The EasyCAP USB converter uses a UTV007 chip, which is supported by Linux out-of-the-box. (Who said that installing drivers is a pain in Linux???) After plugging the converter into a USB slot, you should get two additional devices:

  1. A video device called “usbtv”
  2. A sound card called “USBTV007 Video Grabber [EasyCAP] Analog Stereo”

To see if you have the video device, run v4l2-ctl --list-devices. It will output something like:
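(Exact names and bus paths will differ; this is just the general shape of the output.)

    usbtv (usb-3f980000.usb-1.2):
            /dev/video0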

To see if you have the audio device, run
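For example (grepping is optional; pactl is part of PulseAudio’s utilities):

    pactl list | grep -i -A3 easycap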

It will output something like:

To quickly test if you are getting any video, use a webcam application of your choice (e.g. “cheese“) and select “usbtv” as video source under “Preferences”. Note that this will only get video, but no audio.

We will use GStreamer to grab video and audio separately, and mux them together into a container format.

Install GStreamer

To install GStreamer on Debian-based distributions (like Ubuntu), run
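A reasonable package selection (Debian/Ubuntu package names; the “ugly” set provides the x264 encoder used later):

    sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-base \
        gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly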

Test video with GStreamer

Now, test if you can grab the video with GStreamer. This will read the video from /dev/video0 (device name from v4l2-ctl --list-devices above) and directly output in a window:
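A minimal test pipeline (device path taken from the listing above):

    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink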

Test audio with GStreamer

Now, test if you can grab the audio with GStreamer. This will read the audio from the ALSA soundcard ID hw:3 (this ID comes from the output of pactl list above) and output it to PulseAudio (should go to your currently selected speakers/headphones):
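A minimal audio test pipeline (hw:3 is an example; use the card number from your own output):

    gst-launch-1.0 alsasrc device=hw:3 ! audioconvert ! audioresample ! pulsesink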

Convert audio and video into a file

If both audio and video tested OK separately, we now can grab them both at the same time, mux them into a container format, and output it to a file /tmp/vhs.mkv. I’m choosing Matroska .mkv containing H264 video and Ogg Vorbis audio:
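A sketch of such a pipeline (element choices are mine, not necessarily the original command; -e makes Ctrl+C finalize the Matroska file cleanly):

    gst-launch-1.0 -e matroskamux name=mux ! filesink location=/tmp/vhs.mkv \
        v4l2src device=/dev/video0 ! videoconvert ! x264enc ! queue ! mux. \
        alsasrc device=hw:3 ! audioconvert ! audioresample ! vorbisenc ! queue ! mux.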

Record some video and then press Ctrl+C. The file /tmp/vhs.mkv should now have audio and video.

It would be nice if we could see the video as we are recording it, so that we know when it ends. The command below will do this:
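One way to do this is a tee element that splits the video into a live preview window and the encoder (again a sketch under the same assumptions as above):

    gst-launch-1.0 -e matroskamux name=mux ! filesink location=/tmp/vhs.mkv \
        v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
        t. ! queue ! autovideosink sync=false \
        t. ! queue ! x264enc ! mux. \
        alsasrc device=hw:3 ! audioconvert ! audioresample ! vorbisenc ! queue ! mux.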

You also can re-encode the video by running it through ffmpeg:
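For example (the bitrates here are placeholders to adjust):

    ffmpeg -i /tmp/vhs.mkv -c:v libx264 -b:v 1500k -c:a libvorbis -b:a 128k /tmp/vhs-reencoded.mkv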

You can adjust the video and audio bitrate depending on the type and length of video so that your file will not be too large. The nice side-effect is that the coarser the video encoding, the more of the fine-grained noise in the VHS video is smoothed out.

Voila! You now should be able to record and archive all your old family videos for posterity!

Digitization of VHS video with GStreamer.

 

phantom.py: A lean replacement for bulky headless browser frameworks

This is a simple but fully scriptable headless QtWebKit browser using PyQt5 in Python3, specialized in executing external JavaScript and generating PDF files. A lean replacement for other bulky headless browser frameworks. (Source code at end of this post as well as in this github gist)

Usage

If you have a display attached:
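Assuming the script is saved as phantom.py:

    python3 phantom.py <url> <pdf-file> [<javascript-file>]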

If you don’t have a display attached (i.e. on a remote server):
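Using xvfb-run as a virtual display (see the xvfb dependency below):

    xvfb-run python3 phantom.py <url> <pdf-file> [<javascript-file>]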

Arguments:

  • <url> Can be a http(s) URL or a path to a local file
  • <pdf-file> Path and name of PDF file to generate
  • [<javascript-file>] (optional) Path and name of a JavaScript file to execute

Features

  • Generate a PDF screenshot of the web page after it is completely loaded.
  • Optionally execute a local JavaScript file specified by the argument <javascript-file> after the web page is completely loaded, and before the PDF is generated.
  • console.log’s will be printed to stdout.
  • Easily add new features by changing the source code of this script, without compiling C++ code. For more advanced applications, consider attaching PyQt objects/methods to WebKit’s JavaScript space by using  QWebFrame::addToJavaScriptWindowObject().

If you execute an external <javascript-file>, phantom.py has no way of knowing when that script has finished doing its work. For this reason, the external script should execute  console.log("__PHANTOM_PY_DONE__"); when done. This will trigger the PDF generation, after which phantom.py will exit. If no  __PHANTOM_PY_DONE__ string is seen on the console for 10 seconds, phantom.py will exit without doing anything. This behavior could be implemented more elegantly without console.log’s but it is the simplest solution.

It is important to remember that since you’re just running WebKit, you can use everything that WebKit supports, including the usual JS client libraries, CSS, CSS @media types, etc.

Dependencies

  • Python3
  • PyQt5
  • xvfb (optional for display-less machines)

Installation of dependencies in Debian Stretch is easy:
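Something along these lines should pull in everything needed (package names as in Debian Stretch):

    apt-get install python3 python3-pyqt5 python3-pyqt5.qtwebkit xvfb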

Finding the equivalent for other OSes is an exercise that I leave to you.

Examples

Given the following file /tmp/test.html:

… and the following file /tmp/test.js:

… and running this script (without attached display) …
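For example:

    xvfb-run python3 phantom.py /tmp/test.html /tmp/out.pdf /tmp/test.js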

… you will get a PDF file /tmp/out.pdf with the contents “foo bar baz”.

Note that the second occurrence of “foo” has been replaced by the web page’s own script, and the third occurrence of “foo” by the external JS file.

Source Code

 

Reasonably secure unattended SSH logins from untrusted machines

There are certain cases where you want to operate a not completely trusted networked machine, and write scripts to automate some task which involves an unattended SSH login to a server.

By “not completely trusted machine” I mean a computer that is reasonably secured against unauthorized logins, but is physically unattended (meaning unknown persons may have physical access to it).

An established SSH connection has a number of security implications. As I have argued in a previous blog post “Unprivileged Unix Users vs. Untrusted Unix Users”, having access to a shell on a server is problematic if the user is untrusted (as is always the case when the user originates from an untrusted machine), even if he is unprivileged on the server. In my blog post I presented a method to confine an SSH user in a jail directory (via a PAM module using the Linux kernel’s chroot system call) to prevent reading of all world-readable files on the server. However, such a jail directory still doesn’t prevent SSH port forwarding (which I illustrated in this blog post).

In short, any kind of SSH access allows access to at least all of the server’s open TCP ports, even if they are behind its firewall.

Does this mean that giving any kind of SSH access to an untrusted machine should not be done in principle? It does seem so, but there are ways to make the attack surface smaller and make the setup reasonably secure.

Remember that SSH uses some form of authentication: either a plain password, or a public/private keypair. In both cases there are secrets which should not be stored on the untrusted machine in a way that allows them to be revealed.

So the question becomes: How to supply the secrets to SSH without making it too easy to reveal them?

A private SSH key is permanent and must be stored on a permanent medium of the untrusted machine. To mitigate the possibility that the medium (e.g. hard drive) is extracted and the private key revealed, the private key should be encrypted with a long passphrase. An SSH passphrase needn’t be typed manually every time an SSH connection is made: ssh connects to ssh-agent (if running) to use private keys which have previously been decrypted via a passphrase. ssh-agent holds this information in RAM.

I said “RAM”: for the solution to our present problem, this is as good as it gets. The method presented below could only be defeated by reading out the RAM of a running machine with hardware probes, which requires extremely specialized skills. In this blog post, this is the meaning of the term “reasonably secure”.

On desktop machines, ssh-agent is usually started together with the graphical user interface. Keys and their passphrases can be “added” to it with the command ssh-add. The actual program ssh connects to ssh-agent if the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are present. This means that any kind of shell script (even unattended ones called from cron) can benefit from this: you won’t be prompted for a passphrase if the corresponding key has already been decrypted in memory. The main advantage is that this has to be done only once after a reboot of the machine (because a reboot clears the RAM).

On a headless client without a graphical interface, ssh-agent may not even be installed, so we have to start it in a custom way. There is an excellent program called keychain which makes this very easy. The sequence of our method looks like this:

  1. The machine is rebooted.
  2. An authorized administrator logs into the machine and uses the keychain command to enter the passphrase which is now stored in RAM by ssh-agent.
  3. The administrator now can log out. The authentication data will remain in the RAM and will be available to unattended shell scripts.
  4. Every login to the machine will clear the authentication information. This ensures that even a successful login of an attacker will render the private key useless. This implies a minor inconvenience for the administrator: He has to enter the passphrase at every login too.

Keychain is available in major distros’ repositories:
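On Debian/Ubuntu, for example:

    apt-get install keychain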

Add the following line to either ~/.bashrc or to the system-wide /etc/bash.bashrc:
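Something along these lines (the key path ~/.ssh/id_rsa is an assumption; use your own key):

    eval $(keychain --eval --clear ~/.ssh/id_rsa)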

This line will be executed at each login to the server. What does this command do?

  1. keychain will read the private key from the specified path.
  2. keychain will prompt for the passphrase belonging to this key (if there is one).
  3. keychain will look for a running instance of ssh-agent. If there is none, it will start it. If there is one, it will re-use it.
  4. Due to the --clear switch, keychain will clear all keys from ssh-agent. This renders the private key useless even if an attacker manages to successfully log in.
  5. keychain adds the private key plus entered passphrase to ssh-agent which stores it in the RAM.
  6. keychain outputs a short shell script (to stdout) which exports two environment variables (mentioned above) which point to the running instance of  ssh-agent for consumption by ssh.
  7. The eval command executes the shell script from keychain which does nothing more but set the two environment variables.

Environment variables are not global; they always belong to a running process. Thus, in every unattended script which uses ssh, you need to set these environment variables by evaluating the output of keychain again.

For example, in a Bash script:
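A minimal sketch (keychain writes its environment snippet to ~/.keychain/<hostname>-sh by default; the remote host and user are placeholders):

    #!/bin/bash
    # Re-use the ssh-agent that keychain set up after the last interactive login
    source ~/.keychain/$(hostname)-sh
    ssh backupuser@server.example.org 'uptime'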

It makes sense to gracefully catch SSH connection problems in your scripts. If you don’t do that, the script may hang indefinitely prompting for a passphrase if it has not been added properly. To do this, do a ‘preflight’ ssh connection which simply returns an error:
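One way to do this (BatchMode=yes makes ssh fail instead of prompting; host and user are placeholders):

    if ! ssh -o BatchMode=yes -o ConnectTimeout=10 backupuser@server.example.org true; then
        echo "SSH preflight check failed, aborting." >&2
        exit 1
    fi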

 

Conclusion

In everyday practice, security is never perfect. This method is just one way to protect — within reasonable limits — a SSH connection of an unattended/untrusted machine “in the field” to a protected server. As always when dealing with the question of ‘security’, any kind of solution needs to be carefully vetted before deployment in production!

Encrypt backups at an untrusted remote location

In a previous blog post I argued that a good backup solution includes backups at different geographical locations to compensate for local disasters. If you don’t fully trust the location, the only solution is to keep an encrypted backup.

In this tutorial we’re going to set up an encrypted, mountable backup image which allows us to use regular file system operations like rsync.

First, on any kind of permanent medium available, create a large enough file which will hold the encrypted file system. You can later grow the file system (with dd and resize2fs) if needed. We will use dd to create this file and fill it with zeros. This may take quite a while, depending on the write speed of the hard drive. Here, we create a 500 GB file:
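For example (the file name and location are up to you; 512000 MiB is roughly 500 GiB):

    dd if=/dev/zero of=/backup/backup.img bs=1M count=512000 status=progress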

A quicker method to do the same (file will not be filled with zeroes) is:
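For example, creating a sparse file of the same size:

    truncate -s 500G /backup/backup.img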

Now we will use LUKS to set up a virtual mapping device node for us:

First, we generate a key/secret which will be used to generate the longer symmetric encryption key which in turn protects the actual data. We tap into the entropy pool of the Linux kernel and convert 32 bytes of random data into base64 format (this may take a long time; consider installing haveged as an additional entropy source):
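For example:

    dd if=/dev/random bs=1 count=32 2>/dev/null | base64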

Store the Base64-encoded key in a secure location and create backups! If this key/secret is lost, you will lose the backup. You have been warned!

Next, we will write the LUKS header into the backup image:
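A sketch, piping the key into cryptsetup (-q suppresses the interactive confirmation; replace <base64-key> with your stored secret):

    echo -n "<base64-key>" | cryptsetup -q luksFormat /backup/backup.img -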

Next, we “open” the encrypted drive with the label “backup_crypt”:
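Again with the key supplied via stdin:

    echo -n "<base64-key>" | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-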

This will create a device node /dev/mapper/backup_crypt which can be mounted like any other hard drive. Next, create an Ext4 file system on this raw device (“formatting”):
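For example:

    mkfs.ext4 /dev/mapper/backup_crypt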

Now, the formatted device can be mounted like any other file system:
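For example, using /mnt/backup as the mount point:

    mkdir -p /mnt/backup
    mount /dev/mapper/backup_crypt /mnt/backup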

You can inspect the mount status by typing mount. If data is written to this mount point, it will be transparently encrypted to the underlying physical device.

If you are done writing data to it, you can unmount it as follows:
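Unmount and close the mapping again:

    umount /mnt/backup
    cryptsetup luksClose backup_crypt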

To re-mount it:
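Same as the initial open and mount, with the key again supplied via stdin:

    echo -n "<base64-key>" | cryptsetup luksOpen /backup/backup.img backup_crypt --key-file=-
    mount /dev/mapper/backup_crypt /mnt/backup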

Note that we always specify the Base64-encoded key on the command line and pipe it into cryptsetup. This is better than creating a file somewhere on the hard drive, because it only resides in the RAM. If the machine is powered off, the decrypted mount point is lost and only the encrypted image remains.

If you are really security-conscious, you need to read the manual of cryptsetup to optimize parameters. You may want to use a key/secret longer than the 32 bytes mentioned here.

Simple test if TCP port is open

There are other more complicated tools to achieve the same (like nmap whose manpage makes your head spin), but this is a very simple solution using netcat:
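For example, testing whether port 443 is open on example.com (-z: scan only, -v: verbose, -w 5: five-second timeout):

    nc -z -v -w 5 example.com 443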

To programmatically evaluate the result, use the standard Bash $? variable. It will be set to 0 if the port was open, or 1 if the port was closed.

How to set up password-less SSH login for a Dropbear client

Dropbear is a replacement for standard OpenSSH for environments with low memory and processor resources. With OpenSSH, you can use the well-known ssh-keygen command to create a private/public keypair for the client. In Dropbear, it is a bit different. Here are the commands on the client:
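A sketch using dropbearkey (the output path matches the one mentioned below):

    mkdir -p ~/.ssh
    dropbearkey -t rsa -f ~/.ssh/id_dropbear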

Both private and public keys will be in ~/.ssh/id_dropbear, however in a binary format.

To output the public key to stdout in the usual SSH-compatible format, use:
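dropbearkey can print the public key with -y:

    dropbearkey -y -f ~/.ssh/id_dropbear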

 

How to install yubikey-manager on Debian

yubikey-manager is a Python application which is not yet in the official Debian package repository, so it has to be installed from the Python package repositories (PyPI), which requires a few dependencies. Here is how:
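A plausible sequence (the exact build dependencies are an assumption; pcscd and libpcsclite-dev are needed for smart-card access, adjust if pip reports missing headers):

    apt-get install python3-pip python3-setuptools swig libpcsclite-dev pcscd
    pip3 install yubikey-manager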

Here is the main commandline utility:
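The CLI is called ykman; for example:

    ykman --help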

Zero Client: Boot kernel and root filesystem from network with a Raspberry Pi2 or Pi3

Boot your Raspberry Pi from nothing but an Ethernet cable

A Zero Client is a computer that has nothing on its permanent storage but a bootloader. Rather, it loads everything from the network.

With the method presented in this article, you will be able to boot a Raspberry Pi into a full Debian OS with nothing on the SD card other than the Raspberry firmware files and the u-boot bootloader on a FAT file system. The Linux kernel and the actual OS will be served over the local Ethernet network.

We will only focus on the Raspberry Pi 3, but the instructions should work with minor adaptations also on a Pi 2.

The following instructions assume that you have already built…

  1. a full root file system for the Raspberry
  2. a u-boot binary, and
  3. a Linux kernel

… based on my previous blog post. Thus, you should already have the following directory structure:

We will do all the work inside of the ~/workspace directory.

Preparation of the SD card

You will only need a small SD card with a FAT filesystem on it. The actual storage of files in the running OS will be transparently done over the network. Mount the filesystem on /mnt/sdcard and do the following:

Copy firmware

Copy u-boot bootloader

Create config.txt

config.txt is the configuration file read by the Raspberry firmware blobs. Most importantly, it tells the firmware what kernel to load. “Kernel” is a misleading term here, since we will boot u-boot rather than the kernel.

Create /mnt/sdcard/config.txt with the following contents:
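A minimal sketch; the kernel= line pointing the firmware at the u-boot binary is the essential part, enable_uart is optional but useful for a serial console, and depending on your u-boot build further options may be required:

    kernel=u-boot.bin
    enable_uart=1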

 

Make a universal boot script for the u-boot bootloader

To achieve maximum flexibility, and to avoid the repetitive dance of manually removing the SD card, copying files to it, and re-inserting it, we will make a universal u-boot startup script that does nothing other than load yet another u-boot script from the network. This way, there is nothing specific about the to-be-loaded kernel or OS on the SD card at all.

Create a file boot.scr.mkimage with the following contents:
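A sketch of such a script; the server IP is a placeholder and ${scriptaddr} assumes u-boot’s default environment defines it:

    setenv serverip 192.168.0.1
    setenv autoload no
    dhcp
    tftp ${scriptaddr} netboot-${serial#}.scr
    source ${scriptaddr}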

Replace the server IP with the actual static IP of your server. Note that this script does nothing other than load yet another script called netboot-${serial#}.scr from the server. serial# is the serial number which u-boot extracts from the Raspberry Pi hardware. This is usually the ethernet network device HW address. This way, you can have separate startup scripts for several Raspberry Pis if you have more than one. To keep the setup simple, you can also hard-code a predictable file name instead.

Compile the script into a u-boot-readable image:
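Using the mkimage tool from the u-boot-tools package:

    mkimage -A arm -T script -C none -d boot.scr.mkimage boot.scr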

Copy boot.scr to the SD card:
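For example:

    cp boot.scr /mnt/sdcard/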

The SD card preparation is complete at this point. We will now focus on the serving of the files necessary for boot.

Preparation of the file server

Do all of the following as ‘root’ user on a regular PC running Debian 9 (“Stretch”). This PC will act as the “server”.  This server will serve the files necessary to network-boot the Raspberry.

The directory /srv/tftp will hold …

  • an u-boot start script file
  • the kernel uImage file
  • and the binary device tree file.

… to be served by a TFTP server.

The directory /srv/rootfs_rpi3 will hold our entire root file system to be served by an NFS server:

You will find installation instructions of both TFTP and NFS servers further down.

 

Serve the root file system

Let’s copy the pre-built root file system into the directory from where it will be served by the NFS server:
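For example with rsync (the source path is an assumption based on the ~/workspace layout from the previous post):

    rsync -a ~/workspace/rootfs/ /srv/rootfs_rpi3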

(notice the slash at the end of the source directory)

 

Fix the root file system for network booting

Edit  /srv/rootfs_rpi3/etc/fstab  and comment out all lines. We don’t need to mount anything from the SD card.

When network-booting the Linux kernel, the kernel will configure the network device for us (either with a static IP or DHCP). Any userspace programs attempting to re-configure the network device will cause problems, i.e. a loss of connection to the NFS server. Thus, we need to prevent systemd-networkd from managing the Ethernet device. Make the device unmanaged by removing the following ethernet configuration file:
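Assuming the configuration file has the same name as in the previous post (eth.network):

    rm /srv/rootfs_rpi3/etc/systemd/network/eth.network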

If you don’t do that, you’ll get the following kernel message during boot:
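It is the classic NFS-root stall message, something like:

    nfs: server 192.168.0.1 not responding, still trying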

That is because systemd has shut down and then re-started the ethernet device. Apparently NFS transfers are sensitive to that.

In case you want to log into the chroot to make additional changes that can only be done from within (e.g. running systemctl scripts etc.), you can do:
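A sketch using qemu-user-static (assuming an ARM64 root file system; use qemu-arm-static for a 32-bit one):

    apt-get install qemu-user-static binfmt-support
    cp /usr/bin/qemu-aarch64-static /srv/rootfs_rpi3/usr/bin/
    chroot /srv/rootfs_rpi3 /bin/bash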

 

Serve Kernel uImage

In this step, we create a Linux kernel uImage that can be directly read by the u-boot bootloader. We read Image.gz directly from the Kernel source directory, and output it into the /srv/tftp directory where a TFTP server will serve it to the Raspberry:
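A sketch using mkimage; the kernel source path, architecture, load address and entry point are assumptions that must match your own kernel and u-boot configuration:

    cd ~/workspace/linux
    mkimage -A arm64 -O linux -T kernel -C gzip -a 0x80000 -e 0x80000 \
        -d arch/arm64/boot/Image.gz /srv/tftp/linux-rpi3.uImage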

 

Serve device tree binary

The u-boot bootloader will also need to load the device tree binary and pass it to the Linux kernel, so copy that too into the /srv/tftp directory.
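For a mainline ARM64 kernel the file is typically (path relative to the kernel source tree; adjust for your build):

    cp arch/arm64/boot/dts/broadcom/bcm2837-rpi-3-b.dtb /srv/tftp/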

 

Serve secondary u-boot script loading the kernel

Create a file netboot-rpi3.scr.mkimage with the following contents:
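A sketch of such a script; the server IP, console settings and NFS options are assumptions, and kernel_addr_r/fdt_addr_r assume u-boot’s default environment:

    setenv serverip 192.168.0.1
    tftp ${kernel_addr_r} linux-rpi3.uImage
    tftp ${fdt_addr_r} bcm2837-rpi-3-b.dtb
    setenv bootargs console=ttyS0,115200 root=/dev/nfs nfsroot=192.168.0.1:/srv/rootfs_rpi3,vers=3 rw ip=dhcp rootwait
    bootm ${kernel_addr_r} - ${fdt_addr_r}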

Replace the server IP with the static IP of your server PC. Then compile this script into a u-boot-readable image and output it directly to the /srv/tftp directory:
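Again with mkimage:

    mkimage -A arm -T script -C none -d netboot-rpi3.scr.mkimage /srv/tftp/netboot-rpi3.scr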

Make sure that the filename of the .scr file matches with whatever file name you’ve set in the universal .scr script that we’ve prepared further above.

 

Install an NFS server

The NFS server will serve the root file system to the Raspberry and provide transparent storage.
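On Debian the server is in the nfs-kernel-server package (this also matches the service name used below):

    apt-get install nfs-kernel-server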

Edit /etc/exports and add:
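A sketch of an export line (the subnet and options are assumptions; no_root_squash lets the Raspberry manage root-owned files):

    /srv/rootfs_rpi3 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)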

To apply the changed ‘exports’ configuration, run
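For example:

    exportfs -ra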

Useful to know about the NFS server:

You can restart the NFS server by running service nfs-kernel-server restart

Configuration files are /etc/default/nfs-kernel-server  and /etc/default/nfs-common

 

Test NFS server

If you want to be sure that the NFS server works correctly, do the following on another PC:

Mount the root file system (fix the static IP for your server):
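For example (assuming the server’s static IP is 192.168.0.1):

    mount -t nfs 192.168.0.1:/srv/rootfs_rpi3 /mnt
    ls /mnt
    umount /mnt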

 

 

Install a TFTP server

To install:
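One option is the tftpd-hpa package, which serves /srv/tftp by default on Debian:

    apt-get install tftpd-hpa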

After installation, check if the TFTP server is running:

This command will tell you the default serving directory (/srv/tftp):

Here is another command that tells you if the TFTP server is listening:
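For example, checking for a listener on UDP port 69:

    ss -ulpn | grep :69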

To get help about this server: man tftpd

Test TFTP

If you want to be sure that the TFTP server works correctly, do the following on another PC:
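Install a TFTP client there, e.g.:

    apt-get install tftp-hpa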

Then see if the server serves the Linux kernel we’ve installed before:
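For example (the server IP is a placeholder):

    tftp 192.168.0.1 -c get linux-rpi3.uImage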

You now should have a local copy of the linux-rpi3.uImage file.

 

Complete

If you’ve done all of the above correctly, you can insert the prepared SD card into your Raspberry Pi and reboot it. The following will happen:

  1. The Raspberry Pi GPU will load the firmware blobs from the SD card.
  2. The firmware blobs will boot the image specified in config.txt. In our case, this is the u-boot binary on the SD card.
  3. The u-boot bootloader will boot.
  4. The u-boot bootloader loads and runs the universal boot.scr script from the SD card.
  5. The boot.scr downloads the specified secondary boot script from the network and runs it.
  6. The secondary boot script …
    • downloads the device tree binary from the network and loads it into memory.
    • downloads the Linux kernel from the network and loads it into memory
    • passes the device tree binary to the kernel, and boots the kernel
  7. the Linux kernel will bring up the ethernet device, connect to the NFS server, and load the regular OS from there.

Many things can go wrong in this rather long sequence, so if you run into trouble, check the Raspberry boot messages output on an attached screen or serial console, and the log files of the NFS and TFTP servers on your server PC.

 

Resources

https://www.raspberrypi.org/documentation/linux/kernel/building.md

http://www.whaleblubber.ca/boot-raspberry-pi-nfs/

https://cellux.github.io/articles/moving-to-nfs-root/

http://billauer.co.il/blog/2011/01/diskless-boot-nfs-cobbler/

https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt

http://wiki.linux-nfs.org/wiki/index.php/General_troubleshooting_recommendations

https://wiki.archlinux.org/index.php/NFS

How to turn the Raspberry Pi into a Gateway to mobile phone internet

Your DSL internet connection is too slow? Want to set up an improvised office? You do not want to pay for a DSL internet plan when you already have a fast 4G mobile plan? If you answered yes to any of the above, it is quite easy to configure a Raspberry Pi to share one mobile internet connection with an Ethernet network.

raspberry pi mobile gateway topology
Turning a Raspberry Pi into a Gateway to mobile internet (Image license CC BY-SA 3.0)

Strictly speaking, you don’t have to use a Raspberry Pi to do this. A laptop or desktop computer with any operating system would work too, but the Raspberry is so small, consumes only 2-3 W of electrical power, and runs so cool (quite literally!), so we will make use of this awesomeness!

 

Prerequisites

The following step-by-step guide is based on a pure Debian 9 (“Stretch”) distribution with a mainline/vanilla/unpatched Linux kernel built according to my previous blog post:

Raspberry Pi2 and Pi3 running pure Debian 9 (“Stretch”) and the Linux Mainline/Vanilla Kernel

  • We will not focus on the Raspbian OS nor on any other distribution, because documentation for these other setups exists in abundance.
  • You should not have a graphical interface installed. GUIs also install the NetworkManager service for systemd (Debian package “network-manager”), and I have not tested how NetworkManager interacts with the methods presented below. In addition, a bare-bone system is the preferred choice because it saves RAM and CPU resources.
  • In any case, you should attach a keyboard and screen to the Raspberry because you may temporarily lose network connectivity during the setup.
  • You also need a smart phone with an internet plan, supporting USB tethering. I have only tested recent Android based smartphones. Keep in mind during the following steps that, with most smart phones, you need to re-enable USB tethering after reboots or USB cable reconnects.

 

Goals

  • Computers in the LAN will be able to set the Raspberry Pi’s static IP address as internet Gateway and DNS server.
  • The Raspberry Pi will prefer a smart phone connection (tethered USB) to forward traffic.
  • If the smart phone is disconnected, the Raspberry Pi will automatically fall back to an already existing gateway if present (e.g. a DSL modem)

 

Step 1: Install a DNS server

This ensures that cached DNS lookups are very fast when a DNS query has already been fetched.
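Install the BIND DNS server (Debian package name bind9):

    apt-get install bind9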

Tell “bind” to use Google’s public DNS servers (they are good). Edit /etc/bind/named.conf.options and change the “forwarders” block to:
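For example:

    forwarders {
        8.8.8.8;
        8.8.4.4;
    };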

Restart “bind”:
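For example:

    systemctl restart bind9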

 

Step 2: Configure a static IP address for the Ethernet adapter

If you already have a DHCP server running in your local network (we will use the subnet 192.168.0.0 in this guide), give the Raspberry Pi a free static IP address in this existing subnet, e.g. 192.168.0.250.

If you don’t have an existing DHCP server running in your local network, we will set one up on the Raspberry (see Step 8 below).

In both cases, we will give our Raspberry the static IP address 192.168.0.250. Using systemd, change the config file of your ethernet connection /etc/systemd/network/eth.network:
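A sketch (the interface name eth0 matches the networkctl output referenced below; adjust the prefix length to your subnet):

    [Match]
    Name=eth0

    [Network]
    Address=192.168.0.250/24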

 

If your LAN already has an internet gateway, e.g. a DSL modem with address 192.168.0.1, add the following (optional) section to the same config file:
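For example (the Gateway is your existing DSL modem; the Metric is deliberately large):

    [Route]
    Gateway=192.168.0.1
    Metric=4096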

The large positive integer value of “Metric” ensures that other configured gateways with a lower Metric will be preferred. This will come in handy in the next step where the smart phone will be our preferred gateway with a Metric value of 1024.

Now reboot the Raspberry or run systemctl restart systemd-networkd.  You may lose network connectivity at this point if you are logged in via ssh.

Now, check that networkctl status eth0 matches our wanted static IP address:

Next, check the output of route -n (the kernel routing table). It should show:

If you have added the optional  [Route] section, you should also see the following as first line, which is our current default route to the internet:

 

 

Step 3: Set the smart phone connection as gateway

Plug in your phone’s USB cable into one of the Raspberry’s USB connectors. Then turn on USB tethering in the Settings UI of your smart phone.

Run networkctl. You should see the following entry amongst the other network connections (notice “off” and “unmanaged”).

 

To have the “systemd-networkd” service manage the “usb0” network device, create a file /etc/systemd/network/mobile.network with the following contents:
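A minimal sketch:

    [Match]
    Name=usb0

    [Network]
    DHCP=yes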

To apply this config file, run systemctl restart systemd-networkd .  After a few seconds,  networkctl should output (notice the “routable” and “configured” parts):

You also can check networkctl status usb0  to see the dynamic IP address obtained from the DHCP server on the smart phone. For Android phones this is usually in the subnet 42.

Next, check the output of route -n. Now, the phone connection “usb0” should be on the top of the list thanks to the lower metric of 1024:

 

Step 4: Check internet connectivity

With this routing table, we already can connect to the internet via the smart phone. To make sure that we are routed via the smart phone, we will ask the Linux kernel which gateway it would use first for traffic. ip route get 8.8.8.8 should output the IP address of the smart phone (192.168.42.129, subnet 42):

Let’s ping Google’s server a few times: ping 8.8.8.8  to see if we have an actual working route to the internet:

The answer: Yes!

Check phone’s DNS server

 

Now let’s check if the phone’s DNS server is working. Type  dig google.com (install Debian package “dnsutils” if not yet installed), and make sure that you’ve got an “ANSWER SECTION”:

Note that the response came from the phone’s IP. So, “systemd” has correctly configured the phone’s IP address as DNS server for the Raspberry (that information came from the phone’s DHCP server).

Run  dig google.com again. This time the result should be cached and returned much faster (just 1ms):

Check local DNS server

Type  dig @localhost google.com:

Note that this time, the response came from the “bind” DNS server which we have installed in Step 1. It, in turn, forwards queries via the phone connection. This server will be used for all requests via Ethernet.

Step 5: Turn on IP protocol forwarding for the Linux kernel

By default, this feature is turned off. Check the current status of this feature:

sysctl -a | grep net\.ipv4\.ip_forward  will output:
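By default:

    net.ipv4.ip_forward = 0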

To permanently set this variable to 1, create /etc/sysctl.d/30-ipforward.conf and add the following:
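The single line:

    net.ipv4.ip_forward=1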

Reload all settings by typing  sysctl --system. Now, and also after a reboot, the “ip_forward” variable should stay enabled.

 

Step 6: Turn on Network address translation (NAT) aka. “Masquerading” between Ethernet and USB Smart Phone network links

Create a shell script /usr/bin/startgateway.sh with the following contents and make it executable (chmod a+x):
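A sketch of such a script (interface names eth0/usb0 as used throughout this post):

    #!/bin/bash
    # Masquerade LAN traffic out through the USB-tethered phone
    iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
    iptables -A FORWARD -i usb0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A FORWARD -i eth0 -o usb0 -j ACCEPT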

This will masquerade IP packets coming in through the Ethernet adapter as if they were coming from the Raspberry itself, forward them to the USB smart phone connection, and the incoming answers (from remote servers) will be re-written and forwarded back to wherever in the LAN they came from. That is the central piece of the problem we’re trying to solve in this tutorial.

Run this script. Check the output of iptables -L -n -v:

 

To run this shell script at system boot, right after the network links have been brought up, create the following systemd service file:

Add the following:
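A sketch of such a unit (the file name, e.g. /etc/systemd/system/startgateway.service, and the exact dependencies are assumptions; enable it with systemctl enable startgateway):

    [Unit]
    Description=Start NAT gateway to mobile internet
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/startgateway.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target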

 

 

Step 7: Test the Raspberry Gateway!

On another machine in your LAN (can be Linux, Windows or Mac), configure the Ethernet connection manually. Set the following:

  • Static IP Address: 192.168.0.10 (or any other freely available address on this subnet)
  • Gateway: 192.168.0.250
  • DNS: 192.168.0.250

Then run traceroute 8.8.8.8  on that other machine. Truncated output:

The route is correctly resolved. First traffic goes to the Raspberry Pi, then to the smart phone, and from there to the internet.

If you can’t run traceroute on that other machine, using a regular browser to browse the internet should work at this point!

 

Step 8: Running a DHCP server on the Raspberry

TODO

 

Conclusion

This tutorial may seem long, but the commands are few, and with a bit of practice you can turn your Raspberry Pi into a mobile phone Gateway in 10 minutes to enjoy faster 4G internet when your other modems are too slow.