GStreamer pipeline recipe: Stream file via RTP using rtpbin

Given an audio/video file encoded with codecs for which RTP payloader elements exist (the sketches below assume VP9 video and Opus audio in a WebM container), the following GStreamer pipeline (I’m using version 1.13.1) will stream it via RTP using rtpbin to localhost ports 50000-50003:
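
A sketch of what such a sender pipeline can look like (element names, pad numbering and the port assignment are assumptions to adapt to your file):

    gst-launch-1.0 rtpbin name=rtpbin \
        filesrc location=/path/to/media.webm ! matroskademux name=demux \
        demux.video_0 ! queue ! rtpvp9pay ! rtpbin.send_rtp_sink_0 \
        rtpbin.send_rtp_src_0  ! udpsink host=127.0.0.1 port=50000 \
        rtpbin.send_rtcp_src_0 ! udpsink host=127.0.0.1 port=50001 sync=false async=false \
        demux.audio_0 ! queue ! rtpopuspay ! rtpbin.send_rtp_sink_1 \
        rtpbin.send_rtp_src_1  ! udpsink host=127.0.0.1 port=50002 \
        rtpbin.send_rtcp_src_1 ! udpsink host=127.0.0.1 port=50003 sync=false async=false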

The receiver outputting the media to screen and speakers:
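
A corresponding receiver sketch under the same VP9/Opus assumption (note the media, encoding-name and clock-rate fields in the udpsrc caps and the sync settings on the sinks):

    gst-launch-1.0 rtpbin name=rtpbin \
        udpsrc port=50000 caps="application/x-rtp,media=(string)video,encoding-name=(string)VP9,clock-rate=(int)90000" \
            ! rtpbin.recv_rtp_sink_0 \
        rtpbin. ! rtpvp9depay ! vp9dec ! videoconvert ! autovideosink sync=true \
        udpsrc port=50001 ! rtpbin.recv_rtcp_sink_0 \
        udpsrc port=50002 caps="application/x-rtp,media=(string)audio,encoding-name=(string)OPUS,clock-rate=(int)48000" \
            ! rtpbin.recv_rtp_sink_1 \
        rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink sync=true \
        udpsrc port=50003 ! rtpbin.recv_rtcp_sink_1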


  • The sender uses almost no CPU because the media is not transcoded.
  • Instead of vp9dec you can use avdec_vp9.
  • The sync attributes must be specified exactly as given.
  • When the sender is restarted while the client is running, the client terminates with the error “streaming stopped, reason not-linked”.
  • The media, encoding-name and clock-rate attributes are required (in the receiver’s udpsrc caps).
  • The encoding names specified in the receiver must match those of the sender.
  • It is not necessary to specify RTP payload numbers.
  • If sync=false is set on the receiver’s sinks, audio and video will not be in sync.
  • The above example only supports one receiver. To support multiple receivers, you can multicast the UDP packets to the loopback network device with the following modifications:
    • udpsink options: host=<multicast-address> auto-multicast=true multicast-iface=lo ttl-mc=0 bind-address=<multicast-address>
    • udpsrc options: address=<multicast-address> auto-multicast=true multicast-iface=lo ttl-mc=0 bind-address=<multicast-address>
    • Note: <multicast-address> is a placeholder for an example multicast address (e.g. one from the administratively scoped 239.0.0.0/8 range). ttl-mc=0 is important, otherwise the packets will be forwarded across network boundaries. You should be careful with multicasting, and educate yourself before you try it.

Receiver without rtpbin

To receive without using rtpbin:
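
A sketch under the same VP9/Opus assumption; without rtpbin there is no RTCP, so the two streams are played independently:

    gst-launch-1.0 \
        udpsrc port=50000 caps="application/x-rtp,media=(string)video,encoding-name=(string)VP9,clock-rate=(int)90000" \
            ! rtpjitterbuffer ! rtpvp9depay ! vp9dec ! videoconvert ! autovideosink \
        udpsrc port=50002 caps="application/x-rtp,media=(string)audio,encoding-name=(string)OPUS,clock-rate=(int)48000" \
            ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink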

Here, the sender can be restarted without bringing the receiver down.

How to digitize old VHS videos with an EasyCAP UTV007 USB converter on Linux

VHS is dead. If you don’t have a functioning VHS player any more, your only option is to buy second-hand devices. But if you still have old, valuable VHS videos (e.g. family videos) you should digitize them today, as long as there are still working VHS players around.

Our goal is to feed the audio/video (AV) signals coming out of an old VHS player into an EasyCAP UTV007 USB video grabber, which can receive 3 RCA cables (yellow for Composite Video, white for left channel audio, red for right channel audio).

EasyCAP UTV007 USB video grabber


VHS players usually have a SCART output, which luckily carries all the needed signals.

SCART connector

Via a Multi AV SCART adapter you can output the AV signals into three separate RCA cables (male-to-male), and from there into the EasyCAP video grabber. If your adapter has an input/output switch, set it to “output”.

Multi AV Adapter outputting 3 RCA connectors (yellow for Composite Video, white for left channel audio, red for right channel audio)

The EasyCAP USB converter uses a UTV007 chip, which is supported by Linux out of the box. (Who said that installing drivers is a pain in Linux???) After plugging the converter into a USB port, you should get two additional devices:

  1. A video device called “usbtv”
  2. A sound card called “USBTV007 Video Grabber [EasyCAP] Analog Stereo”

To see if you have the video device, run v4l2-ctl --list-devices. In its output, look for a device called “usbtv”.

To see if you have the audio device, list the PulseAudio sources:
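
For example (pactl comes with PulseAudio):

    pactl list sources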

In the output, look for a source belonging to the “USBTV007 Video Grabber [EasyCAP] Analog Stereo” card, and note which ALSA card number it corresponds to (hw:3 in the examples below).

To quickly test if you are getting any video, use a webcam application of your choice (e.g. “cheese”) and select “usbtv” as the video source under “Preferences”. Note that this will only show video, not audio.

We will use GStreamer to grab video and audio separately, and mux them together into a container format.

Install GStreamer

To install GStreamer on Debian-based distributions (like Ubuntu), run
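
for example, with a package selection like the following (package names as found in the Debian/Ubuntu repositories):

    sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-base \
        gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
        gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-pulseaudio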

Test video with GStreamer

Now, test if you can grab the video with GStreamer. This will read the video from /dev/video0 (device name from v4l2-ctl --list-devices above) and directly output in a window:
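
A sketch (adjust the device path if your grabber registered under a different number):

    gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink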

Test audio with GStreamer

Now, test if you can grab the audio with GStreamer. This will read the audio from the ALSA soundcard ID hw:3 (this ID comes from the output of pactl list above) and output it to PulseAudio (should go to your currently selected speakers/headphones):
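
A sketch (replace hw:3 with the card ID found above):

    gst-launch-1.0 alsasrc device=hw:3 ! queue ! audioconvert ! pulsesink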

Convert audio and video into a file

If both audio and video tested OK separately, we now can grab them both at the same time, mux them into a container format, and output it to a file /tmp/vhs.mkv. I’m choosing Matroska .mkv containing H264 video and Ogg Vorbis audio:
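
A sketch of such a pipeline; the encoder settings and the unlimited queue sizes are assumptions, not tuned values. The -e flag makes gst-launch-1.0 finalize the file when you press Ctrl+C:

    gst-launch-1.0 -e matroskamux name=mux ! filesink location=/tmp/vhs.mkv \
        v4l2src device=/dev/video0 ! videoconvert \
            ! x264enc speed-preset=ultrafast ! h264parse \
            ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! mux. \
        alsasrc device=hw:3 ! audioconvert ! vorbisenc \
            ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! mux.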

I had to increase the default queue size because I would get dropped audio otherwise.

Record some video and then press Ctrl+C. The file /tmp/vhs.mkv should now have audio and video.

It would be nice if we could see the video as we are recording it, so that we know when it ends. The command below will do this:
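
A sketch using a tee element to split the raw video into a preview window and the encoder (element choice is an assumption):

    gst-launch-1.0 -e matroskamux name=mux ! filesink location=/tmp/vhs.mkv \
        v4l2src device=/dev/video0 ! videoconvert ! tee name=t \
            t. ! queue ! autovideosink \
            t. ! queue ! x264enc speed-preset=ultrafast ! h264parse \
                ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! mux. \
        alsasrc device=hw:3 ! audioconvert ! vorbisenc \
            ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! mux.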

For some reason, the generated .mkv file was not seekable for me. Simply re-encode it by running it through ffmpeg:
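
A sketch of such a re-encode (the bitrates are just example values, see the next paragraph):

    ffmpeg -i /tmp/vhs.mkv -c:v libx264 -b:v 1500k -c:a libvorbis -b:a 128k /tmp/vhs-final.mkv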

You can adjust the video and audio bitrate depending on the type and length of video so that your file will not be too large. The nice side-effect is that the coarser the video encoding, the more of the fine-grained noise in the VHS video is smoothed out.

Voila! You now should be able to record and archive all your old family videos for posterity!

Digitization of VHS video with GStreamer.


PhantomJS alternative: Write short PyQt scripts instead

For a project of mine, I needed a ‘headless’ script that renders dynamically generated HTML (via JavaScript) to a PDF file. In looking for existing headless browsers, I found PhantomJS, CasperJS and similar projects.

PhantomJS looked most promising, but it had bugs related to the CSS @media type ‘print’ which made the project useless for generating properly formatted PDFs.[1][2] These issues are still open after three years. As of 2017, the official PhantomJS binary download is still at version 2.1 which includes an older downstream QtWebKit version 5.5.1 [3].

In trying to fix the bugs myself, or to upgrade QtWebKit, I spent two full days trying to build PhantomJS, but eventually had to give up because the build system is apparently being reworked at the moment. It seems that, instead of patching the custom QtWebKit, the development team is moving towards making “PhantomJS a normal Qt application with system-installed Qt and QtWebKit”. [4]

So, I started questioning PhantomJS. What magic does it really do? By inspecting the source code, I found out that its entire concept hinges on just two Qt functions: QWebFrame::addToJavaScriptWindowObject() and QWebFrame::evaluateJavaScript(). The former allows exposing Qt objects to the JavaScript space of WebKit. For example, PhantomJS’s JavaScript API method injectJs is simply a wrapper for an underlying evaluateJavaScript() call inside a C++ function of a C++ object which has previously been attached via addToJavaScriptWindowObject().

In short, PhantomJS’s custom (and in my opinion rather sparse) JavaScript API simply calls Qt’s C++ functions.

The obvious disadvantage of this concept is that extending or fixing the API requires recompiling the C++ code.

It is obviously much simpler to directly script Qt, for example via PyQt.

So, I started writing a short PyQt application, and after just 90 lines of Python code, I had what I needed: a headless browser using an up-to-date version of WebKit, which did not have the shortcomings of the version in PhantomJS.

The script is published on my blog and as a GitHub gist. I don’t expect to get 20,000 GitHub stars like other projects, but honestly, following (perhaps too simple) logic, my script should get more. 😉






A lean replacement for bulky headless browser frameworks

This is a simple but fully scriptable headless QtWebKit browser using PyQt5 in Python3, specialized in executing external JavaScript and generating PDF files. A lean replacement for other bulky headless browser frameworks. (Source code at the end of this post as well as in this GitHub gist.)


If you have a display attached:
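
An invocation sketch; <script.py> is a placeholder for whatever you name the script:

    python3 <script.py> <url> <pdf-file> [<javascript-file>]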

If you don’t have a display attached (i.e. on a remote server):
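
The same invocation wrapped in xvfb-run, which provides a virtual X server:

    xvfb-run -a python3 <script.py> <url> <pdf-file> [<javascript-file>]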


  • <url> Can be an http(s) URL or a path to a local file
  • <pdf-file> Path and name of PDF file to generate
  • [<javascript-file>] (optional) Path and name of a JavaScript file to execute


  • Generate a PDF screenshot of the web page after it is completely loaded.
  • Optionally execute a local JavaScript file specified by the argument <javascript-file> after the web page is completely loaded, and before the PDF is generated.
  • console.log messages will be printed to stdout.
  • Easily add new features by changing the source code of this script, without compiling C++ code. For more advanced applications, consider attaching PyQt objects/methods to WebKit’s JavaScript space by using QWebFrame::addToJavaScriptWindowObject().

If you execute an external <javascript-file>, the Python script has no way of knowing when that external script has finished its work. For this reason, the external script should execute console.log("__PHANTOM_PY_DONE__"); when it is done. This will trigger the PDF generation, after which the Python script will exit. If no __PHANTOM_PY_DONE__ string is seen on the console for 10 seconds, the Python script will exit without doing anything. This behavior could be implemented more elegantly without console.log’s, but it is the simplest solution.

It is important to remember that since you’re just running WebKit, you can use everything that WebKit supports, including the usual JS client libraries, CSS, CSS @media types, etc.


  • Python3
  • PyQt5
  • xvfb (optional for display-less machines)

Installation of dependencies in Debian Stretch is easy:
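
One possible package selection (names as in the Debian Stretch repositories):

    sudo apt-get install python3-pyqt5 python3-pyqt5.qtwebkit xvfb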

Finding the equivalent for other OSes is an exercise that I leave to you.


Given the following file /tmp/test.html:
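
A sketch of a file that produces the described result (the element ids and the inline script are made up for illustration):

    <!DOCTYPE html>
    <html>
      <body>
        <p><span>foo</span> <span id="second">foo</span> <span id="third">foo</span></p>
        <script>
          // The page's own JavaScript replaces the second word.
          document.getElementById("second").innerHTML = "bar";
        </script>
      </body>
    </html>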

… and the following file /tmp/test.js:
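
A sketch; it replaces the third word and then signals completion as described above:

    // External script: replace the third word, then signal the Python wrapper.
    document.getElementById("third").innerHTML = "baz";
    console.log("__PHANTOM_PY_DONE__");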

… and running this script (without attached display) …

… you will get a PDF file /tmp/out.pdf with the contents “foo bar baz”.

Note that the second occurrence of “foo” has been replaced by the web page’s own script, and the third occurrence of “foo” by the external JS file.

Source Code
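
The full ~90-line script is in the gist linked above; what follows is only a condensed sketch of the same approach (it assumes PyQt5 with the QtWebKitWidgets module installed and is not the original script), to show how little code is involved:

    #!/usr/bin/env python3
    # Condensed sketch of a scriptable headless QtWebKit browser.
    import sys
    from PyQt5.QtCore import QUrl, QTimer
    from PyQt5.QtWidgets import QApplication
    from PyQt5.QtPrintSupport import QPrinter
    from PyQt5.QtWebKitWidgets import QWebPage

    class Renderer(QWebPage):
        def __init__(self, url, pdf_path, js_path=None):
            super().__init__()
            self.pdf_path = pdf_path
            self.js_path = js_path
            self.loadFinished.connect(self.page_loaded)
            self.mainFrame().load(QUrl.fromUserInput(url))

        def javaScriptConsoleMessage(self, message, line_number, source_id):
            print(message)                      # forward console.log() to stdout
            if message == "__PHANTOM_PY_DONE__":
                self.print_pdf()

        def page_loaded(self, ok):
            if self.js_path:
                # Run the external script and wait for __PHANTOM_PY_DONE__,
                # but give up after 10 seconds.
                with open(self.js_path) as f:
                    self.mainFrame().evaluateJavaScript(f.read())
                QTimer.singleShot(10000, QApplication.instance().quit)
            else:
                self.print_pdf()

        def print_pdf(self):
            printer = QPrinter()
            printer.setOutputFormat(QPrinter.PdfFormat)
            printer.setOutputFileName(self.pdf_path)
            self.mainFrame().print_(printer)
            QApplication.instance().quit()

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        url, pdf_file = sys.argv[1], sys.argv[2]
        js_file = sys.argv[3] if len(sys.argv) > 3 else None
        renderer = Renderer(url, pdf_file, js_file)
        sys.exit(app.exec_())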


“Open Source” does not imply “less secure”

Sometimes programmers hesitate to make their software open source because they think that revelation of the source code would allow attackers to ‘hack it’.

Certainly there are specific cases where this is true, but not as a general rule.

In my opinion, if inspection of the source code allows an attacker to ‘hack it’, then the programmer has done it wrong. Security primarily comes from writing secure algorithms, independent of their open source nature.

OpenSSL is a case in point: it is open source, but nevertheless it powers HTTPS all over the internet. “But,” you say, “it is only secure because its code is kinda obscure.” Well, no: cryptographically secure algorithms exhibit astonishing properties. For example, the one-time pad encryption technique is extremely simple and exhibits “perfect secrecy”, which Wikipedia defines as follows:

One-time pads are “information-theoretically secure” in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon about the same time. His result was published in the Bell Labs Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.

Claude Shannon proved, using information theory considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because, given a truly random key which is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely.

Take for example the following simple implementation of the One-time pad (via XOR) in Ruby (which took me just a couple of minutes to write):
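
A minimal sketch of such an XOR pad (reading the message and an equally long, truly random key from files; decryption is the identical operation):

    # One-time pad via XOR. The key must be truly random, at least as long
    # as the message, kept secret, and never reused.
    msg = File.binread(ARGV[0]).bytes
    key = File.binread(ARGV[1]).bytes
    abort "key must be at least as long as the message" if key.length < msg.length
    ciphertext = msg.each_with_index.map { |byte, i| byte ^ key[i] }
    STDOUT.binmode
    print ciphertext.pack("C*")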

This code is open source, but it nevertheless exhibits the property of perfect (i.e. 100%) security “even against adversaries with infinite computational power” — provided that the key is never transmitted over insecure channels.

Sure, the One-time pad is not practical, and one could probably exploit weaknesses in Ruby or the underlying operating system. But that is not the point. The point is that, given proper implementation of software, it can be made open source without compromising its security.

To contrast this with an example of (bad) source code which should not be made public because it only creates a false sense of security:
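
A sketch of such code (made up for illustration; the obfuscation is the expression discussed below):

    # 'Encrypts' a message with a hard-coded transformation (do not do this).
    msgraw = ARGV[0].bytes
    obscured = (0...msgraw.length).map { |n| (msgraw[n] + 7) ^ 99 }
    puts obscured.join(",")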

Here, ((msgraw[n] + 7) ^ 99) is equivalent to a hard-coded secret. Sure, the obscured message, when transmitted over a public network, may look random. But the algorithm could easily be reverse-engineered by cryptanalysis. Also, if the source code were revealed, it would be trivial to decode past and future messages.


“Open Source” does not imply “insecure”. Security comes from secure — not secret — algorithms (which of course includes freedom from bugs). What counts as “secure” is defined mathematically, and “mathematics (and, by extension, physics) can’t be bribed.” It is not easy to come up with such algorithms, but it is possible, and there are many successful examples.

Needless to say, not every little piece of code should be made open source — ideally programmers will only publish generally useful and readable software which they intend to maintain, but that is a subject for another blog post.

Reasonably secure unattended SSH logins from untrusted machines

There are certain cases where you want to operate a not completely trusted networked machine, and write scripts to automate some task which involves an unattended SSH login to a server.

By “not completely trusted machine” I mean a computer which is reasonably secured against unauthorized logins, but is physically unattended (which means that unknown persons can have physical access to it).

An established SSH connection has a number of security implications. As I have argued in a previous blog post “Unprivileged Unix Users vs. Untrusted Unix Users”, having access to a shell on a server is problematic if the user is untrusted (as is always the case when the user originates from an untrusted machine), even if he is unprivileged on the server. In my blog post I presented a method to confine a SSH user into a jail directory (via a PAM module using the Linux kernel’s chroot system call) to prevent reading of all world-readable files on the server. However, such a jail directory still doesn’t prevent SSH port forwarding (which I illustrated in this blog post).

In short, any kind of SSH access allows access to at least all of the server’s open TCP ports, even if they are behind its firewall.

Does this mean that giving any kind of SSH access to an untrusted machine should not be done in principle? It does seem so, but there are ways to make the attack surface smaller and make the setup reasonably secure.

Remember that SSH uses some form of authentication: either a plain password, or a public/private keypair. In both cases there are secrets which should not be stored on the untrusted machine in a way that allows them to be revealed.

So the question becomes: How to supply the secrets to SSH without making it too easy to reveal them?

A private SSH key is permanent and must be stored on a permanent medium of the untrusted machine. To mitigate the possibility that the medium (e.g. hard drive) is extracted and the private key revealed, the private key should be encrypted with a long passphrase. An SSH passphrase needn’t be typed manually every time an SSH connection is made: ssh connects to ssh-agent (if running) to use private keys which have previously been decrypted via a passphrase. ssh-agent holds this information in RAM.

I said “RAM”: for the solution to our present problem, this is as good as it gets. The method presented below could only be defeated by reading out the RAM of the running machine with hardware probes, which requires (extremely) specialized skills. In this blog post, this is the meaning of the term “reasonably secure”.

On desktop machines, ssh-agent is usually started together with the graphical user interface. Keys and their passphrases can be “added” to it with the command ssh-add. The actual ssh program connects to ssh-agent if the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are present. This means that any kind of shell script (even unattended ones called from cron) can benefit from this: passphrases won’t be asked for if the corresponding key has already been decrypted in memory. The main advantage is that this has to be done only once after a reboot of the machine (because a reboot clears the RAM).
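
For example, decrypting a key once for the rest of the session looks like this (the key path is just the usual default):

    ssh-add ~/.ssh/id_rsa      # prompts for the passphrase once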

On a headless client without a graphical interface, ssh-agent may not even be installed, so we have to start it in a custom way. There is an excellent program called keychain which makes this very easy. The sequence of our method will look like this:

  1. The machine is rebooted.
  2. An authorized administrator logs into the machine and uses the keychain command to enter the passphrase which is now stored in RAM by ssh-agent.
  3. The administrator now can log out. The authentication data will remain in the RAM and will be available to unattended shell scripts.
  4. Every login to the machine will clear the authentication information. This ensures that even a successful login of an attacker will render the private key useless. This implies a minor inconvenience for the administrator: He has to enter the passphrase at every login too.

Keychain is available in major distro’s repositories:
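
For example, on Debian/Ubuntu:

    sudo apt-get install keychain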

Add the following line to either ~/.bashrc or to the system-wide /etc/bash.bashrc:
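
A sketch of such a line (the key path is an example; adjust it to your private key):

    eval $(keychain --eval --quiet --clear ~/.ssh/id_rsa)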

This line will be executed at each login to this machine. What does this command do?

  1. keychain will read the private key from the specified path.
  2. keychain will prompt for the passphrase belonging to this key (if there is one).
  3. keychain will look for a running instance of ssh-agent. If there is none, it will start it. If there is one, it will re-use it.
  4. Due to the --clear switch, keychain will clear all keys from ssh-agent. This renders the private key useless even if an attacker manages to successfully log in.
  5. keychain adds the private key plus entered passphrase to ssh-agent which stores it in the RAM.
  6. keychain outputs a short shell script (to stdout) which exports two environment variables (mentioned above) which point to the running instance of  ssh-agent for consumption by ssh.
  7. The eval command executes the shell script emitted by keychain, which does nothing more than set the two environment variables.

Environment variables are not global; they always belong to a running process. Thus, in every unattended script which uses ssh, you need to set these two environment variables by evaluating keychain’s output again, for example in a Bash script:
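
A sketch (the remote host is hypothetical):

    #!/bin/bash
    # Import SSH_AUTH_SOCK / SSH_AGENT_PID of the already-running ssh-agent.
    eval $(keychain --eval --quiet)
    ssh backupuser@server.example.org 'run-some-task'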

It makes sense to gracefully catch SSH connection problems in your scripts. If you don’t do that, a script may hang indefinitely, prompting for a passphrase that has not been added properly. To do this, make a ‘preflight’ SSH connection which simply returns an error on failure:
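
A sketch of such a check (BatchMode prevents any interactive prompting; the host name is hypothetical):

    if ! ssh -o BatchMode=yes -o ConnectTimeout=10 backupuser@server.example.org true; then
        echo "SSH preflight failed, aborting." >&2
        exit 1
    fi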



In everyday practice, security is never perfect. This method is just one way to protect — within reasonable limits — an SSH connection from an unattended/untrusted machine “in the field” to a protected server. As always when dealing with the question of ‘security’, any kind of solution needs to be carefully vetted before deployment in production!

Never type plain passwords for SSH authentication

It could be said that SSH (Secure Shell) is an administrator’s most important and most frequently used tool. SSH uses public-key cryptography to establish a secure communication channel. The public/private keypair is either

  1. generated automatically, where the (typed or copy-pasted) plaintext password is transmitted over the encrypted channel to authenticate the user, or
  2. generated manually once, where the private key is permanently stored on the client and the public key is permanently stored on the server. This method also authenticates the user at the same time without submitting a password.

Even if it may have been secure in the 2000s, the first method (typing or copy-pasting the plaintext password) has really become insecure because of the following possible side-channel attacks, all belonging to the category of keystroke logging:

  1. Security Vulnerabilities in Wireless Keyboards
  2. Keystroke Recognition from Wi-Fi Distortion
  3. Snooping on Text by Listening to the Keyboard
  4. Sniffing Keyboard Keystrokes with a Laser
  5. Hacking Your Computer Monitor
  6. Guessing Smart Phone PINs by Monitoring the Accelerometer
  7. more to be discovered!

Using the clipboard for copy-pasting is not really an option either because the clipboard is simply public storage. In short, using passwords, even ‘complicated’ ones, is really a bad idea.

The second method (a manually generated public/private keypair) is much more secure:

  1. The private key (the secret) on the client is never transmitted (I know, public key cryptography sounds like black magic, but it isn’t)
  2. You still can use a “passphrase” to additionally encrypt the private key. This would protect the key in case it is stolen. This “passphrase” doesn’t have to be stored anywhere, it can be simply remembered like a conventional password.
  3. “Mathematics can’t be bribed”: If every Hydrogen atom in the universe were a CPU able to enumerate 1000 RSA moduli per second, it would still take approx. 6 × 10^211 years to enumerate all moduli to brute-force a 1024-bit RSA key.[^1]
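
Generating and installing such a keypair is a one-time step; for example (the host name is an example):

    ssh-keygen -t ed25519                 # choose a passphrase when prompted
    ssh-copy-id user@server.example.org   # install the public key on the server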

There is no earthly agency which can “hack” strong and properly implemented cryptography, even if they claim that they can. There is a theoretical lower limit on the energy consumption of computation: see Landauer’s principle for classical computing and the Margolus–Levitin theorem for quantum computing.

Nothing in the world of cryptography is ‘cut and dried’, but there are certain best practices we as administrators can adopt. Using SSH keys properly is certainly one of these practices.




Encrypt backups at an untrusted remote location

In a previous blog post I argued that a good backup solution includes backups at different geographical locations to compensate for local disasters. If you don’t fully trust the remote location, the only solution is to keep an encrypted backup there.

In this tutorial we’re going to set up an encrypted, mountable backup image which allows us to use regular file system operations like rsync.

First, on any kind of permanent medium available, create a large enough file which will hold the encrypted file system. You can later grow the file system (with dd and resize2fs) if needed. We will use dd to create this file and fill it with zeros. This may take a couple of minutes, depending on the write speed of the hard drive. Here, we create a 500 GB file:
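
A sketch (the image path is an example; count is in MiB here, so 500000 gives roughly 500 GB):

    dd if=/dev/zero of=/backup/backup.img bs=1M count=500000 status=progress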

We will use LUKS to set up a virtual mapping device node for us:
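
The LUKS tooling comes from the cryptsetup package; on Debian-based systems, for example:

    sudo apt-get install cryptsetup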

First, we generate a key/secret which will be used to generate the longer symmetric encryption key which in turn protects the actual data. We tap into the entropy pool of the Linux kernel and convert 32 bytes of random data into base64 format (this may take a long time; consider installing haveged as an additional entropy source):
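
A sketch (32 bytes of kernel randomness, Base64-encoded):

    head -c 32 /dev/random | base64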

Store the Base64-encoded key in a secure location and create backups! If this key/secret is lost, you will lose the backup. You have been warned!

Next, we will write the LUKS header into the backup image:
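
A sketch; replace <base64-key> with the key generated above and the image path with yours (-q suppresses the confirmation prompt, "-" makes cryptsetup read the key from stdin):

    echo -n "<base64-key>" | cryptsetup luksFormat -q /backup/backup.img -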

Next, we “open” the encrypted drive with the label “backup_crypt”:
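
The same key, piped in again:

    echo -n "<base64-key>" | cryptsetup luksOpen --key-file=- /backup/backup.img backup_crypt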

This will create a device node /dev/mapper/backup_crypt which can be mounted like any other hard drive. Next, create an Ext4 file system on this raw device (“formatting”):
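
For example:

    mkfs.ext4 /dev/mapper/backup_crypt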

Now, the formatted device can be mounted like any other file system:
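
For example (the mount point is an example path):

    mkdir -p /mnt/backup
    mount /dev/mapper/backup_crypt /mnt/backup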

You can inspect the mount status by typing mount. If data is written to this mount point, it will be transparently encrypted to the underlying physical device.

If you are done writing data to it, you can unmount it as follows:
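
Unmount and close the mapping:

    umount /mnt/backup
    cryptsetup luksClose backup_crypt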

To re-mount it:
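
Open the mapping again with the same piped-in key, then mount:

    echo -n "<base64-key>" | cryptsetup luksOpen --key-file=- /backup/backup.img backup_crypt
    mount /dev/mapper/backup_crypt /mnt/backup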

Note that we always specify the Base64-encoded key on the command line and pipe it into cryptsetup. This is better than storing the key in a file on the hard drive, because this way the key only ever resides in RAM. If the machine is powered off, the decrypted mount point is lost and only the encrypted image remains.

If you are really security-conscious, you need to read the manual of cryptsetup to optimize the parameters. You may want to use a key/secret longer than the 32 bytes used here.

Before data loss: How to make correct backups

Why should you regularly make backups? Because if you don’t, then this mistake will bite you, sooner or later. Why? Because of Murphy’s Law:

Anything that can go wrong, will go wrong.

And a variation of it, Finagle’s law, even says:

Anything that can go wrong, will—at the worst possible moment.

So, let’s prepare right now and look at ways to back up data correctly.

RAID data mirroring is not enough

Realtime data mirroring (no matter whether it is software or hardware RAID) is good, but not enough. What if your location is hit by lightning, fire or water? What if your entire system gets stolen? Then RAID would have been of no use at all.

Threats to local backups on external media

Say that you have an external USB hard drive for your backups. This is good, but as long as it is connected to your computer, it may still be destroyed by lightning.

In addition, if you leave your external USB hard drive mounted in your host OS, and if you make a mistake as an administrator, or have faulty software or malware, you may fully erase your main hard drive and the backup at the same time. This is not too unlikely!

It happened to me once. A simple mistyped rm -rf . / as root user somewhere deep in the filesystem did exactly that (I accidentally typed a space between the dot and the slash). Yes, I erased my main hard drive and the backup (mounted under /mnt) at the same time. The data loss was disastrous.

Independent of the above, local backups are still susceptible to fire, water, or theft.

The dangers of deleted or changed files

rsync is especially good if you transfer the data to your backup location via public networks, because it only transfers changes. It also supports the --delete flag which deletes remote files when they are no longer locally present. This is generally a good idea if you want your backup to be an exact copy, otherwise your backup will become messy by accumulating many deleted files, which will make restoration not very fun.
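
A typical invocation of this kind might look as follows (paths and host are examples):

    rsync -av --delete /home/ backupuser@backuphost.example.org:/backups/home/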

But the --delete flag is also a danger. Say you delete an important file locally. Two days later you discover this fact, and decide to restore it from your backup. But guess what, it will be gone there too if you have synced in the meantime.

This problem is also present when changing files. The only solution to this problem is to have rolling backups (backups of your backups) in regular or increasing intervals (weekly, monthly, yearly). This will multiply the storage requirements, but you really cannot get around it.

Restoration is as important as backing up

Let’s say you have 10 perfectly done backups. But if you can’t access them any more, or not quickly enough (e.g. due to low bandwidth, etc.) they will be useless for your purposes. You need to put as much thought and effort into an effective restoration method as you put into the backups in the first place.

What works?

In general, a good backup solution depends on the specific circumstances and needs. Backups can never be perfect (100.0% reliable); there will always be a small but nevertheless real possibility of total data loss. But you can make that possibility very, very small. As a rough guideline, the following principles seem to minimize the risk:

  • You have more than one backup.
  • You have backups of your backups (“rolling backups“).
  • You do not leave local backup media connected or mounted.
  • Your backups are at geographically different locations to compensate for local disasters.
  • If your backup is at a remote location, you fully trust the location, or use proper encryption.
  • Restoration is effective.
  • Backup and restoration is automated and tested.
  • After each backup cycle, the backups are verified. If there was a failure, the administrator is notified.

It is your responsibility!

If you should lose data, don’t blame it on ‘evil’ external circumstances, because:

Never attribute to malice that which is adequately explained by stupidity.

What if all data are still lost? Well, in this case I only can say:

Every misfortune is a blessing in disguise.

Start working on your backup solution now!

Simple test if TCP port is open

There are other, more complicated tools that achieve the same (like nmap, whose manpage makes your head spin), but here is a very simple solution using netcat:
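
For example, to test TCP port 443 on a host (-z only scans without sending data, -w sets a timeout in seconds; flags may differ slightly between netcat flavors):

    nc -z -v -w 3 example.org 443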

To programmatically evaluate the result, use the standard Bash $? variable. It will be set to 0 if the port was open, or 1 if the port was closed.
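
For example, in a script:

    nc -z -w 3 example.org 443
    if [ $? -eq 0 ]; then
        echo "port open"
    else
        echo "port closed"
    fi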