Easy containers on Arch Linux with systemd


After numerous issues with Docker over the past few months, I decided to look into other options for crash-and-burn environments and for “software as a function” invocations.

I was pleasantly surprised to see that systemd and Arch Linux provide everything that I need already.


# Install pacstrap
sudo pacman -S arch-install-scripts

# Create subvolume for container (you could just use a normal directory instead)
sudo btrfs subvol create /opt/gcc295

# Install the base system and a few other packages
sudo pacstrap -icd /opt/gcc295 base wget curl vim ed make git patch bash perl --ignore linux

# Give the container a DNS server, then enable and start host-side networking (for the container's veth link)
echo 'nameserver 8.8.4.4' | sudo tee /opt/gcc295/etc/resolv.conf
sudo systemctl enable systemd-networkd
sudo systemctl start systemd-networkd

# Set the root password for the container
sudo passwd -R /opt/gcc295 root

# Boot into the container for the first time (-b boot, -n private veth networking, -D root directory); log in as root
sudo systemd-nspawn -bnD /opt/gcc295

# Start networking in the container
systemctl start systemd-networkd

# Do what you need to within the container to prepare it for its intended purpose
wget ...
tar xaf ...
cd ...
./configure ...
make ...
make install ...

# Ctrl+] three times to stop and exit the container

# Fork the container to test some idea or develop another container based on this one
sudo systemd-nspawn -bnD /opt/gcc3 --template=/opt/gcc295

# Do what you need to within that container in order to prepare it for usage, then ^]^]^] as before to exit the container

# Run a container using a temporary snapshot, so any changes to the container state are discarded when it exits
sudo systemd-nspawn -bnxD /opt/gcc3

# Optionally, create read-only BTRFS snapshots of prepared containers, then use those snapshots, to ensure that you don't modify them accidentally.
btrfs subvol snapshot -r ...
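# For example, with hypothetical names (a sketch of the idea, not required verbatim):
sudo btrfs subvol snapshot -r /opt/gcc3 /opt/gcc3-golden
sudo systemd-nspawn -bnxD /opt/gcc3-golden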

# You can pass the --read-only option to systemd-nspawn, to force the root file-system to be read-only, if you know that your container shouldn't need to modify it at all.  This may be faster, but contained software which stores/mutates state information in the file system will fail.
sudo systemd-nspawn -bnD --read-only /opt/gcc3

# Execute a program from a container, with arguments, and with a host path bound into the container at /kernel
# (use $HOME rather than ~, since the shell won't tilde-expand after "--bind=")
sudo systemd-nspawn -xanD /opt/gcc3 --bind="$HOME/arm-kernel-2.6:/kernel" make -C /kernel

# Wrapped in a script for ease of use.  Uses -q switch to suppress output from nspawn.
#!/bin/bash
set -e
# Directory to bind into the container at /src; defaults to the current directory
src="${src:-$PWD}"
sudo systemd-nspawn -qxanD /opt/gcc295 --bind="${src}:/src" make -C /src "$@"
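# For example, if the script above is saved as a (hypothetical) "cross-make" and made executable:
src=~/arm-kernel-2.6 ./cross-make zImage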

I use BTRFS subvolumes for containers, to allow easy version-control of them. You could just install to a normal directory though; BTRFS subvolumes aren’t required.
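For example (hypothetical snapshot names), tagging and rolling back a container state is just a couple of snapshot commands:

# Tag the current state, read-only so it can't be modified by accident
sudo btrfs subvol snapshot -r /opt/gcc295 /opt/gcc295@v1

# Roll back: discard the working copy and restore it from the tag
sudo btrfs subvol delete /opt/gcc295
sudo btrfs subvol snapshot /opt/gcc295@v1 /opt/gcc295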

Use pacstrap to install the base system and any extra packages that you might want.

To fork the container (as in Git, not the POSIX variety), use the --template argument to systemd-nspawn. This works best if the container folder is a BTRFS subvolume, as BTRFS’ COW snapshot mechanism will be used for forking.

To launch a temporary container based on your template, use the -x option to systemd-nspawn. This works best if the container folder is a BTRFS subvolume, for the same reason as for forking.

Of course, if you do use BTRFS subvolumes, then you can make backups and incremental patches for container development using btrfs send/receive. For complete backups, you can also use tar or cpio, whether or not you’re using BTRFS subvolumes.
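A minimal sketch of the send/receive workflow, assuming read-only snapshots like the hypothetical @v1 above (plus a later @v2) and a BTRFS-formatted backup disk mounted at /mnt/backup:

# Full backup of a read-only snapshot
sudo btrfs send /opt/gcc295@v1 | sudo btrfs receive /mnt/backup/

# Incremental: send only the differences between @v1 and @v2
sudo btrfs send -p /opt/gcc295@v1 /opt/gcc295@v2 | sudo btrfs receive /mnt/backup/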

Occasionally useful Linux tricks

List the contents of all files in a folder

# Lists contents of all files in the current directory, with filename and line-number prepended to each line (blank lines are skipped, since "." must match at least one character)
grep -n . *

# Recursively list contents of all files in current directory and subdirectories, with filename and line-number prepended to each line
grep -Rn .
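The output format is file:line:content; for a hypothetical notes.txt:

notes.txt:1:rotate backups weekly
notes.txt:3:check disk SMART status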

You’ve been added to more groups, but don’t want to log off and back on again to use the new privileges:

sudo sudo -u mark bash

The first sudo gives us root access, which is necessary for the second sudo; that one logs us back in as ourself and starts a bash shell. This shell has the privileges of the new groups you were added to.
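To confirm that it worked, compare group lists before and after with id:

id -nG    # before: the new group is missing
sudo sudo -u mark bash
id -nG    # inside the new shell: the new group appears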

Transferring data over a slow network:

# Both of these are too slow due to our crappy internet connection
ssh user@server 'producer' | consumer
producer | ssh user@server 'consumer'

# If our CPU is sitting idle while waiting for the data to transfer, let's give it some work to do!
ssh user@server 'producer | pbzip2 -c9' | pbzip2 -d | consumer
producer | pbzip2 -c9 | ssh user@server 'pbzip2 -d | consumer'

These use pbzip2, a multithreaded implementation of bzip2, to compress the stream in parallel: fairly fast yet powerful compression that squeezes more information along the wire per second.

If pbzip2 leaves you CPU-bottlenecked, you can reduce the compression level (e.g. -c5 instead of -c9) or use gzip, which is faster but won’t compress as well. For parallel gzip, replace pbzip2 with pigz.

If you still have plenty of CPU and RAM to spare when using pbzip2 and the transfer is taking too long, try parallel LZMA instead, with pxz in place of pbzip2 (pxz only compresses, so use plain xz -d on the receiving side).
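For instance, the pigz version of the upload pipeline looks like this (pigz accepts the same flags as gzip):

producer | pigz -c9 | ssh user@server 'pigz -d | consumer'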

Monitoring the progress of transfers / measuring speed of devices

# Test sequential speed of disk sda by reading first 4GB of the disk (sudo needed for raw disk access)
sudo pv -bartpSs 4G /dev/sda > /dev/null

# File archiving over SSH (with pbzip2 as shown previously); take just the byte count from du's total line
size="$(ssh user@server 'du -csB1 /path/to/files | tail -1 | cut -f1')"
ssh -T user@server "tar -c /path/to/files | pv -cbarteps ${size} --force | pbzip2 -c9" > /path/to/archives/name.tar.bz2

Running the above operations without making the machines grind to a halt

# CPU-heavy workloads can be told to play nicely by prepending them with "nice"
... | nice pbzip2 -c9 | ...

# IO-heavy workloads can be told to play nicely by giving them idle priority with "ionice" (which needs a command to run, hence the "cat")
ionice -c3 tar -c /path/to/files | ... | ionice -c3 cat > /path/to/archives/name.tar.bz2

# The archiving-over-SSH example from above, with progress and with nice/ionice applied:
size="$(ssh user@server 'du -csB1 /path/to/files | tail -1 | cut -f1')"
ssh -T user@server "ionice -c3 tar -c /path/to/files | pv -cbarteps ${size} --force | nice pbzip2 -c9" | ionice -c3 cat > /path/to/archives/name.tar.bz2

Firing up Eclipse CDT in a temporary Ubuntu environment with Docker

# Download an Ubuntu image
docker pull ubuntu

# Install eclipse-cdt (the base image ships without package lists, so update them first)
docker run -i ubuntu bash -c 'apt-get update && apt-get -y install eclipse-cdt'

# Get ID of that container (which is now dead)
docker ps -a

# Snapshot that container
docker commit [ID] eclipse-cdt

# Run eclipse with workspace directory mapped to the host (select /root/workspace when Eclipse asks for workspace path)
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -v ~/eclipse-workspace:/root/workspace eclipse-cdt eclipse
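If Eclipse fails to open a window, the host X server is probably refusing the container's connection; allowing local clients (at some cost to isolation) usually fixes it:

xhost +local: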

Speed & ping tests from shell

Having *finally* got internet access in our new London flat, a month after we ordered it, I wanted to test the quality of the connection.

Downstream speed test

Use a compressed Linux kernel source archive for the speed test, taking advantage of how cURL annoyingly violates the UNIX “rule of silence”: with its output redirected, it still prints a progress meter, including the current transfer speed:

$ curl 'https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.3.tar.xz' > /dev/null

As the file is highly compressed, we are measuring raw bandwidth regardless of whether some part of our link has transparent compression. When the speed has stabilised, Ctrl+C to end the test.
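If you’d rather have an explicit number than cURL’s progress meter, pv (used earlier) can report the average rate instead; -s silences cURL:

$ curl -s 'https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.3.tar.xz' | pv -a > /dev/null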

Ping test

Run as root; this is required for ping flooding. Google probably won’t feel/notice/mind you ping-flooding their DNS servers:

$ sudo ping -f -c 1000 -i 0.03 -W 1 -s 1350 -M do 8.8.8.8

Adjust 1350 to be slightly less than your MTU: the ICMP payload plus 28 bytes of IP/ICMP headers must fit within it (1350 is a safe guess if you’re unsure). When the flood is complete (~10 seconds), you should receive your mean ping time, jitter (mdev), packet loss, etc.
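If you’d rather measure your MTU than guess, a quick sketch (hypothetical size list) walks payload sizes down until one fits unfragmented:

# Path MTU = largest passing payload + 28 bytes of IP/ICMP headers
for size in 1472 1450 1400 1350 1300; do
    if ping -c 1 -W 1 -s "$size" -M do 8.8.8.8 > /dev/null 2>&1; then
        echo "payload $size fits: path MTU is at least $((size + 28))"
        break
    fi
done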