Effects are evil

RAW, optical filters, post-processing: Haters gonna hate

I often hear people criticising those who use optical filters or who post-process photos; the brainfart usually goes something like “the photo should capture how we see the scene”, or some other dogmatic nonsense along those lines. These people usually say that we should just shoot on automatic and publish the resulting JPEG image as it comes off the camera, instead of using manual mode, raw sensor data, and a post-processing workflow.

This is nonsense for several reasons, the best probably being that the camera doesn’t see the scene even remotely the same way we do. Camera sensors respond to a much wider spectral range than the human eye, extending well into the infra-red, which is why strong infra-red filters are built into most camera sensors or lenses. When you shoot a JPEG in “automatic” mode you aren’t taking a picture “without effects”; you are merely having the camera apply effects automatically, based on its “best guess” of how the scene looks to a human. Should the camera guess wrong, you have no way to fix the image manually (without compromising quality significantly), since you have no RAW file. Congratulations.

Aside from dynamic range expansion, white balance adjustments, distortion correction, noise reduction and the other usual “automatic” effects used by people who think that they take photos with no effects, there is another idiotic flaw in their reasoning. Colour and intensity perception isn’t uniform amongst humans, and it also varies considerably within each individual, depending on their recent visual experience. That is to say, the human visual system exhibits hysteresis.

Scotopic and photopic vision

During the day, the eye detects light with colour-sensitive cone receptors; this is known as photopic vision. These cones are mostly concentrated in the central “high-definition” region of the retina, the macula lutea. After half an hour or so without any bright light, the eye transitions into “night-vision” mode, or scotopic vision, where extremely sensitive rod receptors are used to detect light and the colour-sensitive cones are almost completely shut off. Rods are most sensitive to blue and violet, with very little sensitivity to hues from green to red, so colour perception with rod-vision is considerably reduced* compared to cone-vision. Rods do not occur as densely anywhere in the retina as cones do in the macula, so rod-vision is also of considerably lower “resolution” than cone-vision.

* What little colour vision remains under scotopic conditions is only possible because the cones don’t shut off completely.

Different eyes see different primary colours

Colour perception also varies from person to person. We typically have three types of colour receptor: L-cones, M-cones and S-cones, and there are different genetic variations of each of these receptors, so different people may have slightly different primary colours. The gene for the L (“long”, roughly “red”) receptor pigment is located on the X chromosome, so women carry two copies of it in their DNA. In some rare circumstances the two copies differ enough that two separate red-like receptors are expressed, and such women have vision with four primary colours.

Silicon sensors in cameras see a considerably different set of primary colours than human eyes do, and have wildly different response curves compared to cone receptors. They are very sensitive to infra-red, hence your camera typically has an infra-red-blocking filter built into the sensor (or into the lens on compact cameras). Removing this filter from a compact camera results in a cool night-vision camera.

A variety of other factors can also affect the way that a single individual perceives colours, such as calcium and potassium intake.

Even we don’t see colours “as they are”: our perceived colour space is extremely degenerate

Pure monochromatic yellow light appears the same** as a carefully measured mix of red and green light, since we have no yellow-sensitive receptors and only perceive yellow due to the way it stimulates both red and green receptors simultaneously. Colour mixing like this can be exploited for stage effects, sometimes in low-budget plays and gigs: a pigment which reflects pure yellow but not red or green will appear yellow under pure yellow light, but black under a mixture of reasonably pure red and green light. Hence, performances that cannot afford projectors or stage lasers can still have some animation, appearances, disappearances, and flashing in their backdrop with clever use of light filters and simple DMX programmes.

** The white on the screen that you are currently looking at is actually a mix of red, green, and blue light.  Take a close look at the screen with a magnifier to see the pattern of red/green/blue sub-pixels that have been tricking your brain into thinking that things on your screen were yellow/white/cyan/purple for all these years.
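To make that degeneracy concrete, here is the standard colorimetric argument in a couple of lines (a sketch only: the cone response curves are treated symbolically and the wavelengths are round illustrative figures). Each cone class reduces the entire incoming spectrum E(λ) to a single number,

\[ (L, M, S) = \left( \int \bar{l}(\lambda)\,E(\lambda)\,\mathrm{d}\lambda,\; \int \bar{m}(\lambda)\,E(\lambda)\,\mathrm{d}\lambda,\; \int \bar{s}(\lambda)\,E(\lambda)\,\mathrm{d}\lambda \right) \]

so any two spectra that produce the same three integrals are indistinguishable, however different they are physically. Monochromatic light near 580 nm and a suitably weighted mixture of roughly 630 nm (red) and 530 nm (green) light can yield the same (L, M, S) triple, which is why both look like the same yellow; such physically different but identical-looking spectra are called metamers.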

Summary

Unless all your digital photos are a mess of green and purple, you are using effects and some kind of post-processing workflow whether you realise it or not. That “automatic” mode isn’t a “no effects” switch, but rather a “guess some effects and hope for the best” switch. Shooting JPEG has only one advantage over RAW: smaller file size. While the “automatic” mode on modern cameras is extremely good at guessing the levels of each effect that it applies, it is rarely as good as a human choice.

While you’re thinking about how much science has ruined your views once again, why not also calibrate your monitor so you can get the most out of those £500 gold-plated HDMI leads that the TV shop told you were “totally necessary”?

LaTeX neural networks

A LaTeX/TikZ/PGF package for drawing directed graphs, such as neural network schematics.

I started a project to create an open-source mid-level Machine Learning textbook, based on some notes from a Caltech course and a Coursera course. The contributions from the community were of poor quality and laden with mistakes, so I eventually terminated the project (having rejected all public submissions). For a while I continued the book as an occasional project for my own benefit, but it is no longer a priority and I have since stopped working on it.

Neural network implementation of an exclusive-or (XOR) logic gate

To rapidly produce neural network illustrations in the book, I created a LaTeX package to wrap all the TikZ/PGF clutter. The result is a set of LaTeX macros that allow high-quality neural-network graphs to be drawn rapidly, and I have since made the package publicly available via my GitHub repository.

Available on CTAN: /graphics/pgf/contrib/neuralnetwork
Download: LaTeX neural networks package
Latest version (GitHub): battlesnake/neural

Neural network with two hidden layers
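As a rough illustration of how such a diagram is put together: the network is declared layer by layer, and the links between adjacent layers are generated automatically. The snippet below is a minimal sketch based on my recollection of the package documentation, so treat the exact macro names and options (\inputlayer, \hiddenlayer, \outputlayer, \linklayers, count, bias, text) as indicative rather than definitive:

% Minimal document, assuming the CTAN “neuralnetwork” package is installed
\documentclass{standalone}
\usepackage{neuralnetwork}

% Label helpers: as I recall, the package calls these with the layer
% number as #1 and the node number as #2.
\newcommand{\x}[2]{$x_#2$}
\newcommand{\y}[2]{$\hat{y}_#2$}

\begin{document}
\begin{neuralnetwork}[height=4]
  \inputlayer[count=3, bias=true, text=\x]       % three inputs plus a bias node
  \hiddenlayer[count=4, bias=false] \linklayers  % first hidden layer, linked to the inputs
  \hiddenlayer[count=4, bias=false] \linklayers  % second hidden layer
  \outputlayer[count=2, text=\y] \linklayers     % two outputs
\end{neuralnetwork}
\end{document}

Whatever the exact option names, the point of the package is that only the per-layer node counts and labels need to be specified; all of the TikZ/PGF positioning and link-drawing clutter is hidden behind the macros.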

Camera habits

I wanted to have a brief forage into three-dimensional data visualisation. The photos on this website provided me with a quick way to build a dataset: digital cameras store plenty of metadata in each photo, in addition to the image itself.

I put together a little PHP script that reads the “camera settings” data from every photo on this site and collates it into a flat database. An R script then takes two columns from the database and produces 2D density plots (no longer shown below). An initial attempt to produce full 3D animations using R was hampered by a resource leak in the RGL graphics package; this was worked around by having the R script render only one frame at a time rather than the whole animation. A separate BASH script calls the R script repeatedly to render each frame of the animation, and then coalesces the frames into the final animation.

This visualisation allowed me to see which camera configurations I tend to use the most, information which currently has no real value to me beyond curiosity. It has also persuaded me to move away from using R for analysis; it is such a painful environment for doing anything remotely graphical…

3D density plot: focal length vs aperture vs shutter speed

The faster shutter speeds are enabled by the reasonable sharpness offered by my 35mm prime at its open end, while the blob around f/4 to f/5 is due to the widest apertures of my Sigma 150-500mm, Sigma 10-20mm and (now sold) Nikon 55-300mm. The blobs centred at 3″/f/10 are probably from waterfall long exposures, and the slower shutter speeds around the f/1.8 aperture region will be from fibre-optic art. The cluster from f/5 at 150mm to f/6.3 at 500mm is from using the big Sigma wide open, with the cluster around 500mm being mostly from low-budget astrophotography with the same lens. It’s interesting how easily trends can be visualised with just a little bit of crude scripting.

The end result of all this is that I abuse the open end of my f/1.8 prime far too much, when I could probably get sharper, less hazy photos by stopping down a little and accepting a slightly slower shutter instead…
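To put a rough number on that trade-off (back-of-the-envelope arithmetic, not a measurement from my own lenses): at fixed ISO and scene brightness, keeping the exposure constant means the shutter time scales with the square of the f-number,

\[ t_2 = t_1 \left( \frac{N_2}{N_1} \right)^2 \]

so stopping the prime down from f/1.8 to f/2.8 costs a factor of about (2.8/1.8)² ≈ 2.4 in shutter time, turning, say, 1/200 s into roughly 1/80 s. That is usually a small price to pay for the sharpness gained away from wide open.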