Photography: post-processing – and some amazing free, open-source software workflows


This article describes some of the free, open-source photo/video processing software that I have used since moving from Windows to Linux.  Most of it also runs on Windows, free of charge.  As much as I like Adobe CS5 and Sony Vegas, I would struggle to move back to them (excluding Illustrator/InDesign) now that I’ve got used to the open-source offerings.

If you are comfortable building programs from source (or use Arch Linux, which makes this really, really easy), then doing so can yield significant performance improvements on modern processors.  Unlike the somewhat expensive offerings from Adobe, these open-source programs don’t install nasty viruses on your PC*.

* Some Adobe products (when legally purchased) come with a rootkit called “FlexNet”, which can cause data loss and corrupt boot-loaders (making systems unbootable).  Amusingly, pirate copies of Adobe’s products generally have this removed.  That’s how I wasted over £100 buying Adobe software, only to end up using a pirate copy…

Photo processing on Linux

Before I left the UK, I ran Arch Linux.  While travelling, I boot public PCs into Debian from my phone via DriveDroid.  This means that Photoshop and Lightroom are not available for my photography workflow, as they are among the few programs that can’t be run on Linux via Wine.

This isn’t much of a problem, though, as the open-source offerings for photography are generally very powerful and well maintained.  They have a steeper learning curve than Adobe’s products, but once you get used to them they let you do more in less time.

When open-source image editing is mentioned, people usually think of “The GIMP”, which is probably the ugliest piece of software I have ever seen (even including commercial scientific software).  Its numerous text-rendering issues (on all operating systems) make it a general pain to work with.  Running an old version of PaintShop Pro or PhotoDeluxe under Wine is preferable to using The GIMP, despite PSP being incredibly buggy and PD being ancient.  Thankfully, there are many alternative free/open-source image processing programs available which are powerful, consistent and usable.

On the camera

On the camera, I would love to use custom firmware similar to Magic Lantern, which is available for Canon DSLRs; for Nikon, I suppose I can wait for Vitaliy Kiselev’s hack to expand and mature.  Lacking that, I just shoot RAW in the usual ways, using the standard delay/repeat/timelapse/remote/long-exposure features.

Cataloging and RAW processing

After taking the RAW files off my camera, I first catalogue and process them with RawTherapee.  I hear that there is a stable Windows port of this program available too.  RawTherapee has the best noise reduction that I have yet come across, has brilliant colour correction, and makes batch processing effortless (which should be a minimum requirement for RAW processing software).  The only downside is that it cannot read the white-balance setting from Nikon’s RAW files, but this is because Nikon encrypt that data in an attempt to force their slow, unstable, bug-ridden ViewNX/CaptureNX upon people.

If I just want to process a load of files with the same settings, then I can use RawTherapee’s batch processing, although for simple jobs I often use ImageMagick instead.
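For illustration, here is a minimal sketch of the sort of ImageMagick batch job I mean — resize everything to a fixed width and stamp a watermark.  The folder names and watermark text are made up for the example, and the first command synthesises a sample image purely so the snippet runs stand-alone; in practice you would just point it at a folder of exported JPEGs.

```shell
# Make a sample "photo" so this example is self-contained.
mkdir -p photos web
convert -size 2400x1600 gradient:skyblue-navy photos/sample.jpg

# Resize each JPEG to 1200px wide (aspect ratio preserved) and add a
# semi-transparent text watermark in the bottom-right corner.
for f in photos/*.jpg; do
    convert "$f" \
        -resize 1200x \
        -gravity southeast \
        -pointsize 24 -fill 'rgba(255,255,255,0.6)' \
        -annotate +20+20 'watermark text here' \
        "web/$(basename "$f")"
done
```

Because the originals are left untouched and the outputs go to a separate directory, it is safe to tweak the settings and re-run the loop until the results look right.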

Video processing

For video processing I’ll use Kdenlive (which has completely replaced Sony Vegas for me).  As for time-lapses, I’ll usually stick to ffmpeg or slowmoVideo, the latter being pretty powerful – it supports non-linear time flow, optical flow interpolation, and optical-flow driven motion-blur.
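As a sketch of the ffmpeg timelapse route: assemble a numbered frame sequence into a video.  The frame names are hypothetical, and the first command fakes up two seconds of frames with ffmpeg’s built-in test source purely so the snippet runs stand-alone — with a real timelapse you would already have the numbered JPEGs from the camera.

```shell
# Generate stand-in frames (frame_0001.jpg, frame_0002.jpg, ...).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=320x240:rate=24 frame_%04d.jpg

# Assemble the numbered frames into a 24 fps H.264 video;
# -pix_fmt yuv420p keeps the output playable in most players.
ffmpeg -y -framerate 24 -i frame_%04d.jpg \
       -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```

Raising the `-framerate` value compresses more real time into each second of footage, which is the usual knob to turn for timelapses.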


HDR?  What HDR?  I should probably start learning this style, as it does produce some really nice results when people take the time to do it properly (i.e. go beyond just “pressing the HDR button”).


As a general rule, I only use effects which operate “globally”, i.e. on the whole image.  If an effect requires me to click on the actual image, then I won’t use it (excluding crop).  This is just a personal rule to prevent me from spending hours on a single image, developing amazing art like some others do.  I occasionally break this rule to clone out annoying people, parked cars, or lens flares, but this is quite rare.


There’s lots of cool free/open-source image processing software out there.  Get rid of any bad impressions that you might have from The GIMP and branch out into the purpose-built software.  The learning curves are steep, but the rewards are greater.  A list of software mentioned is given below:

  • The GIMP – included for completeness, but DO NOT USE this unless you hate yourself.
  • Magic Lantern – Like CHDK, but for Canon DSLRs.
  • Vitaliy Kiselev’s hack – Like CHDK but for Nikon DSLRs.  I haven’t tried this yet.
  • RawTherapee – RAW processing and batch-processing software with the best noise-reduction I’ve come across yet.
  • ImageMagick – command-line image processing, useful for automated tasks and batch processing, e.g. resize to fixed width and watermark all images.
  • Kdenlive – Powerful non-linear video editor with great shake-reduction.
  • ffmpeg – command-line video processing; Debian users will probably have the libav fork instead.
  • slowmoVideo – very good timelapse/slowmotion software supporting non-linear time flow, optical flow interpolation and nice motion blur.
  • DriveDroid – not free/open-source but useful when travelling: it allows me to boot Arch Linux onto any hostel PC from my Android phone, and have a personal desktop complete with all my software.

All of these programs also work on Windows 2003, excluding the camera firmware hacks and DriveDroid, which is an Android app.

Effects are evil

RAW, optical filters, post-processing: Haters gonna hate

I often hear people criticising those who use optical filters or who post-process photos; the brainfart usually goes something like “the photo should capture how we see the scene”, or some other dogmatic nonsense along those lines.  These people usually say that we should just shoot on automatic and publish the resulting JPEG as it comes off the camera, instead of using manual mode, raw sensor data, and a post-processing workflow.

This is nonsense for several reasons, the best probably being that the camera doesn’t see the scene even remotely as we do.  Camera sensors have a wild colour response which is nothing like that of the human eye, which is why strong infra-red filters are built into most camera sensors or lenses.  When you shoot a JPEG in awesome “automatic” mode you aren’t taking a picture “without effects”; you are merely having the camera apply effects automatically, based on its best guess of how the scene looks to a human.  Should the camera guess wrong, you have no way to fix the image manually (without compromising quality significantly), since you have no RAW file.  Congratulations.

Aside from dynamic-range expansion, white-balance adjustment, distortion correction, noise reduction and the other usual “automatic” effects used by people who think they take photos with no effects, there is another idiotic flaw in their reasoning.  Colour and intensity perception isn’t uniform amongst humans, and it also varies considerably for each individual depending on their recent visual experiences.  That is to say, the human visual system exhibits hysteresis.

Scotopic and photopic vision

During the day, the eye uses colour-sensitive cone receptors to detect light, known as photopic vision.  These are mostly concentrated in the central “high-definition” region of the retina, the macula lutea.  After half an hour or so without any bright light, the eye transitions into “night-vision” mode, or scotopic vision, where extremely sensitive rod receptors are used to detect light and the colour-sensitive cones are almost completely shut off.  Rods are most sensitive to blue and violet, with very little sensitivity to hues from green to red, so colour perception with rod-vision is considerably reduced* compared to cone-vision.  Rods do not occur as densely anywhere in the retina as cones do in the macula, so rod-vision is also of considerably lower “resolution” than cone-vision.

* Colour vision during scotopic vision is only possible because the cones don’t completely shut off.

Different eyes see different primary colours

Colour perception also varies from person to person.  We typically have three types of colour receptor: L-cones, M-cones and S-cones, and there are different genetic variants of each.  Hence, different people may have slightly different primary colours.  The gene for the L (“long”) receptor pigment is located on the X chromosome, so women have two copies of the red pigment encoded in their DNA.  In some rare circumstances, this can lead to two distinct red receptors being present – and thus such women have vision with four primary colours.

Silicon sensors in cameras see a considerably different set of primary colours than human eyes do, and have wildly different response curves compared to cone receptors.  They are very sensitive to infra-red, hence your camera typically has an infra-red-removing filter built into the sensor (or the lens, on compact cameras).  Removing this filter from a compact camera results in a cool night-vision camera.

A variety of other factors can also affect the way that a single individual perceives colours, such as calcium and potassium intake.

Even we don’t see colours “as they are”: our perceived colour space is extremely degenerate

Pure monochromatic yellow light appears the same** as a carefully measured mix of red and green light, since we have no yellow-sensitive receptors and only perceive yellow through the way it stimulates both red and green receptors simultaneously.  Colour-mixing tricks like this are sometimes used in low-budget plays and gigs: a pigment which responds to pure yellow but not to red or green will appear yellow under pure yellow light, but black under a mixture of reasonably pure red and green light.  Hence, performances that cannot afford projectors or stage lasers can still have some animation, appearances, disappearances, and flashing in their backdrop with clever use of light filters and simple DMX programmes.

** The white on the screen that you are currently looking at is actually a mix of red, green, and blue light.  Take a close look at the screen with a magnifier to see the pattern of red/green/blue sub-pixels that have been tricking your brain into thinking that things on your screen were yellow/white/cyan/purple for all these years.


Unless all your digital photos are a mess of green and purple, you are using effects and some kind of post-processing workflow, whether you realise it or not.  That “automatic” mode isn’t a “no effects” switch, but rather a “guess some effects and hope for the best” switch.  Shooting JPEG has only one advantage over RAW: smaller file size.  While the “automatic” mode on modern cameras is extremely good at guessing the level of each effect it applies, it is rarely as good as a human choice.

While you’re thinking about how much science has ruined your views once again, why not also calibrate your monitor, so you can get the most out of those £500 gold-plated HDMI leads that the TV shop told you were “totally necessary”?