Digital piano: Systemd user service to start low-latency synth and bind to MIDI controller

For years, I’ve had an M-Audio Evolution MK449C MIDI controller. It has plenty of nice controls: 8 rotary knobs, 9 sliders, 10 toggleable buttons, 1 wheel, 1 return-to-centre wheel, and of course four octaves of keys.

While (for the ~£50 I paid for it back in 2008) I can’t find much reason to fault it, I have finally decided to replace/augment it with a Roland A-88. That’s a full 88 weighted keys with piano-style mechanisms to add to the immersion, but also 3x the weight and fewer controls.

Back when I used a desktop PC, I used the ultra-low latency JACK audio environment as standard, with the more friendly PulseAudio running on top of it. This way, I could plug the inputs and outputs of any programs together, for chaining effects, mixing, monitoring, etc. Like what KX Studio did for Creative sound cards, but on steroids.

On a laptop though, this isn’t so practical. JACK (and also PulseAudio with low fragment sizes) consumes quite a bit of CPU and battery, which isn’t ideal on a portable device. So I needed a convenient way to switch between PulseAudio→ALSA for normal use, and PulseAudio→JACK→ALSA for real-time use. Ideally in such a way that plugging in a MIDI controller causes switching to the JACK-enabled stack where PulseAudio has low fragment sizes (for low latency), and removing all MIDI controllers causes switching back to the (boot-default) PulseAudio with large fragment sizes (for low CPU usage).

Systemd to the rescue!

In addition to general system administration, systemd can also be used on a per-user basis (and therefore without requiring root). I chose systemd user services to orchestrate the audio configuration, since this is exactly the kind of task that systemd excels at.

A simplified configuration is provided below.

Preparation

# Create directory in home folder for user-systemd units
mkdir -p ~/.config/systemd/user

# Enter the directory
cd ~/.config/systemd/user

# Create the unit files (use your preferred editor, e.g. gedit, nano, ed)
vim pulse.service jack.service synth@.service keyboard.service

pulse.service

[Unit]
Description=Pulseaudio
Conflicts=jack.service

[Service]
Type=simple
Environment="DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus"
ExecStart=/usr/bin/pulseaudio -nF /etc/pulse/default.pa

[Install]
WantedBy=multi-user.target

jack.service

[Unit]
Description=Jack
Conflicts=pulse.service

[Service]
# Grant yourself the right to use these limits in /etc/security/limits.d/
LimitNICE=-15
LimitRTPRIO=99
# Configure DEVICE environment variable to the ALSA name of your output device
Type=simple
Environment=DEVICE=hw:0
Environment=JACK_NO_AUDIO_RESERVATION=1
ExecStartPre=-/usr/bin/pulseaudio -k
# Configure appropriate period size/count and sample-rate
ExecStart=/usr/bin/jackd -d alsa -P ${DEVICE} -n 2 -p 128 -r 48000 -X raw

[Install]
WantedBy=multi-user.target

synth@.service

[Unit]
Description=Fluidsynth with %I soundfont

BindsTo=jack.service
After=jack.service

Before=keyboard.service
Wants=keyboard.service

[Service]
LimitNICE=-15
LimitRTPRIO=99

# Configure path to your soundfonts
Environment=SOUNDFONT=/opt/soundfonts/%I.sf2

Type=simple
# Wait for JACK to start
ExecStartPre=/usr/bin/sleep 1
# Configure appropriate sample rate & buffer size
ExecStart=/usr/bin/fluidsynth --server --no-shell --audio-driver=jack --audio-bufcount=2 --midi-driver=jack --sample-rate=48000 --audio-bufsize=64 --connect-jack-outputs --gain 1 ${SOUNDFONT}

[Install]
WantedBy=multi-user.target

keyboard.service

[Unit]
Description=Connect MIDI keyboard to Fluidsynth

[Service]
Type=oneshot
# Wait for synth to start
ExecStartPre=/usr/bin/sleep 2
# Configure regexes as appropriate
ExecStart=/usr/bin/sh -c "/usr/bin/jack_connect $(jack_lsp | grep -xP 'system:midi_capture_\\d+' | tail -n1) $(jack_lsp | grep -xP 'fluidsynth:midi_\\d+' | tail -n1)"

[Install]
WantedBy=multi-user.target
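After creating or editing these unit files, remember to make the user instance of systemd pick up the changes:

systemctl --user daemon-reload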

Helper script

Helper script, placed in my user bin folder (~/.bin, or whatever you decided to call yours):

#!/bin/bash

set -euo pipefail

declare -r patch="${1:---help}"

if [ "$patch" == '--help' ]; then
        echo "Patches:"
        ls -1d /opt/soundfonts/*.sf2 | xargs -n1 -I{} echo ' * {}'
        exit 1
fi

systemctl --user stop "synth@*"

if [ "$patch" == '--stop' ]; then
        systemctl --user start pulse
        echo 'Synth stopped'
        exit 0
fi

systemctl --user start "synth@${patch}"

Usage

With soundfonts in the corresponding directory, it’s just a case of using udev rules to run the helper script with the name of the default soundfont as the argument when the controller is plugged in, and to run the script with the --stop argument when the last controller is removed. A sketch of such rules is shown below.
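As a rough illustration only, something along the following lines can go in a root-owned rules file such as /etc/udev/rules.d/99-midi-synth.rules. The vendor/product IDs, username, UID and paths are placeholders (check yours with lsusb), the XDG_RUNTIME_DIR assignment is there so that systemctl --user can reach your user instance, and this sketch reacts to one specific device rather than counting “the last controller”:

# Hypothetical IDs and paths; adjust to your controller and your helper script location
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", RUN+="/usr/bin/su -c 'XDG_RUNTIME_DIR=/run/user/1000 /home/youruser/.bin/synth bosendorfer' youruser"
ACTION=="remove", SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="xxxx", ENV{ID_PRODUCT_ID}=="yyyy", RUN+="/usr/bin/su -c 'XDG_RUNTIME_DIR=/run/user/1000 /home/youruser/.bin/synth --stop' youruser"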

The synth@.service and keyboard.service files aren’t necessary if you always use some other program which can connect things up for you (e.g. Ardour). I use them for convenience to give a quick and simple plug’n’play experience for when I’m feeling lazy.

I called the helper script “synth”, and can use it to run FluidSynth with a specific soundfont:

# Loads bosendorfer.sf2
synth bosendorfer

Why doesn’t “sudo echo value > target” work, and why pipe into “tee”?

Quick one here for beginners, since I’ve been asked this question quite a bit lately.

Problem and reason for it

When you type “sudo echo value > /sys/target” into a POSIX shell, the interpreter parses the command and comes up with something like this:

Program: sudo
Parameters: echo value
Input: STDIN
Output: /sys/target
Error: STDERR

Note that “sudo” is not a shell built-in command; it is a separate program which is not even installed by default on some Linux systems. When “sudo” is executed with those arguments, it runs “echo value” as root. Whether “echo” ends up being a shell built-in or a separate binary (e.g. /bin/echo), the result should be the same: “value” is written to STDOUT.

The STDOUT of sudo, however, is not the console, since sudo was run with its output redirected to /sys/target. This leads us to the reason why the command doesn’t work.

In order to execute your original command, a few system calls are involved:

  • Fork: The shell process forks into two processes
  • Open: The child process opens /sys/target for writing
  • Dup2: The child process maps the opened file as the STDOUT descriptor (replacing the previous file assigned to that descriptor)
  • Exec: The child process calls exec to replace itself with the desired program (sudo)

Note that this all occurs in your current security context, since “sudo” has not been launched yet (the “exec” call is what launches “sudo”). As a result, the “open” system call fails, because your user account does not have permission to open the target file for writing. Neither “echo” nor “sudo” fails, because neither of them is ever started.
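A quick way to see this for yourself, using /root/demo.txt purely as an example of a path your user cannot write to:

# The redirection is attempted by your own (unprivileged) shell, so this fails
# with "Permission denied" before sudo is ever executed:
sudo true > /root/demo.txt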

Solutions

The solution to this is to do the output redirection from within the security context that “sudo” gives you. There are various ways to achieve this:

tee

Tee duplicates its input stream to multiple outputs (hence the name, “T”). Play with the following examples:

echo Hello | tee
echo Hello | tee /dev/stdout
echo Hello | tee /dev/stdout /dev/stderr
echo Hello | tee /dev/stdout /dev/stdout /dev/stdout

This is commonly used for redirecting from programs run with sudo:

echo value | sudo tee /sys/target

The “echo” program does not need to be run with sudo, as it just writes its parameters to STDOUT. “tee” is run with sudo because it needs to open the target file for writing. Note that since “tee” also copies its input to its own output, you will see the value echoed back to the terminal when using this method.
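If you don’t want the value echoed back, you can additionally redirect tee’s own STDOUT to /dev/null; that redirection only needs your normal permissions:

echo value | sudo tee /sys/target > /dev/null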

Nested shell

Another method is to have “sudo” start a shell and redirect from within that shell:

sudo sh -c 'echo value > /sys/target'

Wrapping the command in single quotes prevents variables from your interactive shell from being expanded into the command; expansion instead takes place in the root shell, which may be what you want in some cases.
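For example (using /tmp/example purely as a stand-in target), a variable set in your interactive shell is only available if you let that shell expand it:

value=123
# Double quotes: ${value} is expanded by your shell before sudo runs, so "123" is written
sudo sh -c "echo ${value} > /tmp/example"
# Single quotes: ${value} is expanded by the root shell, which doesn't have it, so an empty line is written
sudo sh -c 'echo ${value} > /tmp/example'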

sudoers

The previous methods use sudo, which will typically ask for a password. If you want to permit certain operations to run without manual password entry (e.g. for scripting or for hotkeys), you can put the commands into scripts, chown the scripts so that they’re owned by root, then edit the “sudoers” file appropriately.
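A minimal sketch of such a rule, assuming a hypothetical script /usr/local/bin/set-audio-mode and a user called alice (always edit sudoers with visudo, e.g. visudo -f /etc/sudoers.d/audio, so that syntax errors are caught):

# /etc/sudoers.d/audio (hypothetical example)
alice ALL=(root) NOPASSWD: /usr/local/bin/set-audio-mode

alice can then run “sudo /usr/local/bin/set-audio-mode” without being prompted for a password.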

setuid

You can also replace the scripts with C programs that call setuid to acquire root. After compiling, you must chown the programs to root and set the setuid bit on them so that they can acquire root automatically.

This method requires care since you’re allowing any user to run certain root-only operations. Be careful of what commands you permit to be run like this, take care to avoid buffer overruns/underruns, and compile in ultra-strict mode (-Wall -Wextra -pedantic -Werror -fstack-protector).
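For example, assuming a hypothetical helper whose source lives in set-fragments.c:

gcc -Wall -Wextra -pedantic -Werror -fstack-protector -o set-fragments set-fragments.c
sudo chown root:root set-fragments
# The leading 4 is the setuid bit: the program will run with root as its effective user
sudo chmod 4755 set-fragments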

Easy containers on Arch Linux with systemd


After numerous issues over the past few months with Docker, I decided to look into other options for crash-and-burn environments, and for “software as a function” invocations.

I was pleasantly surprised to see that systemd and Arch Linux already provide everything that I need.

# Install pacstrap
sudo pacman -S arch-install-scripts

# Create subvolume for container (you could just use a normal directory instead)
sudo btrfs subvol create /opt/gcc295

# Install the base system and a few other packages
sudo pacstrap -icd /opt/gcc295 base wget curl vim ed make git patch bash perl --ignore linux

# Enable and start systemd-networkd on the host (it manages the container's virtual Ethernet link)
echo 'nameserver 8.8.4.4' | sudo tee /opt/gcc295/etc/resolv.conf
sudo systemctl enable systemd-networkd
sudo systemctl start systemd-networkd

# Set the root password for the container
sudo passwd -R /opt/gcc295

# Boot into the container for the first time, log in as root
sudo systemd-nspawn -bnD /opt/gcc295

# Start networking in the container
systemctl start systemd-networkd

# Do what you need to within the container to prepare it for its intended purpose
wget ...
tar xaf ...
cd ...
./configure ...
make ...
make install ...

# Ctrl+] three times to stop and exit the container

# Fork the container to test some idea or develop another container based on this one
sudo systemd-nspawn -bnD /opt/gcc3 --template=/opt/gcc295

# Do what you need to within that container in order to prepare it for usage, then ^]^]^] as before to exit the container

# Run a container using a temporary snapshot, so any changes to the container state are not preserved (i.e. changes are deleted/destroyed/lost).
sudo systemd-nspawn -bnxD /opt/gcc3 

# Optionally, create read-only BTRFS snapshots of prepared containers, then use those snapshots, to ensure that you don't modify them accidentally.
sudo btrfs subvol snapshot -r ...

# You can pass the --read-only option to systemd-nspawn, to force the root file-system to be read-only, if you know that your container shouldn't need to modify it at all.  This may be faster, but contained software which stores/mutates state information in the file system will fail.
sudo systemd-nspawn -bnD /opt/gcc3 --read-only

# Execute a program from a container, with arguments, and with a host path (~/arm-kernel-2.6) bound into the container (at /kernel)
sudo systemd-nspawn -xanD /opt/gcc3 --bind="${HOME}/arm-kernel-2.6:/kernel" make -C /kernel

# Wrapped in a script for ease of use.  Uses -q switch to suppress output from nspawn.
#!/bin/bash
set -euo pipefail
# Bind the current working directory into the container as /src (adjust if your sources live elsewhere)
src="$(pwd)"
sudo systemd-nspawn -qxanD /opt/gcc295 --bind="${src}":/src make -C /src "$@"

I use BTRFS subvolumes for containers, to allow easy version control of them. You could just install to a normal folder though; subvolumes/BTRFS aren’t required.

Use pacstrap to install the base system and any extra packages that you might want.

To fork the container (as in Git, not the POSIX variety), use the --template argument to systemd-nspawn. This works best if the container folder is a BTRFS subvolume, as BTRFS’ COW snapshot mechanism will be used for forking.

To launch a temporary container based on your template, use the -x option to systemd-nspawn. This works best if the container folder is a BTRFS subvolume, for the same reason as for forking.

Of course, if you do use BTRFS subvolumes, then you can make backups and incremental patches for container development using btrfs send/receive. For complete backups, you can also use TAR or CPIO regardless of whether you’re using BTRFS subvolumes or not.
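For example, a minimal sketch of a send/receive backup, assuming another BTRFS filesystem is mounted at /mnt/backup (the snapshot names are illustrative):

# Take a read-only snapshot of the container, then stream it to the backup filesystem
sudo btrfs subvolume snapshot -r /opt/gcc295 /opt/gcc295@backup
sudo btrfs send /opt/gcc295@backup | sudo btrfs receive /mnt/backup/

# For an incremental backup, pass a previous snapshot (already present on both sides) with -p
sudo btrfs send -p /opt/gcc295@previous /opt/gcc295@backup | sudo btrfs receive /mnt/backup/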