An Intro to Color

In my experience, one of the most contentious things about image processing is color. When most people see a pretty picture of the Solar System, they understandably want to see a location as if they were standing (or floating) there. But this simple request often turns into arguments involving the application of color theory, the limits of spacecraft technology, and even solipsist philosophy. Because there is an element of the personal in interpreting reality, these arguments have a tendency to get heated. So, let's nip this in the bud: spacecraft are an imperfect tool for replicating human vision, and while we have ways to get close, the result doesn't entirely match the experience of being there. Conversely, humans in space will never have an "unfiltered" look at a planetary surface - they'll always be looking through the filter of a spacesuit helmet.

This page briefly walks through what color is, how we measure and represent it with a spacecraft camera, and the terminology I use when describing color images produced by spacecraft.

Very broadly, cameras all work in the same way: a photon interacts with a medium and causes a reaction that records the interaction. In film, a photon frees an electron, which reduces a soluble crystal into an insoluble form that sticks around when the film is developed. In analog TV, the photon leaves a charge on a metal plate that can be measured with a cathode ray tube. In digital images, photons activate electrons on a grid of semiconducting surfaces. If we extend this principle to a human eye, it works like a camera too. A photon slightly changes the structure of a protein, starting a cascade of electrochemical reactions that eventually register in our nervous system. Regardless of the details, imaging is an inherently grayscale process. All any of these processes record is the total intensity of light within the wavelength range the detector is sensitive to.

To actually see color, you need a way of both separating photons by energy and comparing the resulting intensities with one another. In human eyes, the photosensitive proteins have slightly different compositions, so photons of a given energy might activate one protein but not the others. The brain then works its magic to process color from this reaction, but it's essentially a comparison of how many proteins of each type are activated. In theory, you could use this concept in cameras by changing the composition of the detector, but this is impractical. Instead, cameras rely on filtering, a process that blocks photons outside a given range of energies from striking the sensor. To replicate human vision with a camera, we apply red, green, and blue filters, which only let through photons with the energies we associate with those colors. If a surface looks blue in a picture, it's because more photons got through the blue filter than the red or green filters.
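To make that comparison idea concrete, here's a toy sketch in Python. The photon counts are made-up numbers for a single patch of an image; the point is simply that "color" falls out of comparing how much light made it through each filter.

```python
# Made-up counts for one patch of an image, one value per filter.
counts = {"red": 1200, "green": 1500, "blue": 4800}

total = sum(counts.values())
for name, value in counts.items():
    print(f"{name}: {value / total:.0%} of the detected light")

# Far more photons made it through the blue filter than the red or green ones,
# so this patch would be rendered blue.
```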

Expanding Color

We can expand this concept to colors humans can't see. In the near-infrared and near-ultraviolet, materials often show a lot more variability in the photon energies they reflect and absorb. Applying this to photography outside the range of human vision opens up whole new ways of looking at things! For example, most minerals reflect and absorb light in pretty much the same way in the visible spectrum. That's why rocks are commonly some variation on gray or dull brown. But in the near-infrared, minerals start absorbing photons in very different ways depending on their elemental composition, crystal structure, and which molecules (like water) are incorporated. This is extremely useful when you're exploring another world - it vastly expands your ability to tell different kinds of rocks apart. If you equip a spacecraft with filters designed to look for these differences, you can start doing geology from a distance.

This is why spacecraft are commonly equipped with a wide range of filters. We can divide filters into two general groups. The first group is "broadband filters", which are used when we want to get an idea of how intense light is over a wide portion of the spectrum. Some broadband filters cover the entire wavelength range detectable by an instrument (a 'clear' or 'panchromatic' filter), or large segments of that wavelength range. Most spacecraft include several filters covering roughly the same amount of the spectrum, but each targeted to a range of colors (usually something like 'infrared', 'near-infrared', 'red', 'green', 'blue', 'near-UV', 'UV'). The second group is "narrowband filters", which are used to filter out everything but a narrow range of wavelengths. These filters are commonly used to study gases, which absorb or emit light only at specific wavelengths. For example, if we want to map the distribution of methane in Saturn's atmosphere, we use a filter that looks exclusively at light with a wavelength of 889 nanometers. Methane is particularly good at absorbing photons at this wavelength, so if it's present, that part of the photo will look dark, and if it's not present, that part of the photo will look bright.
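As a rough sketch of how that dark-where-methane-is idea becomes a map, one common approach (not spelled out above) is to ratio the methane-band frame against a frame taken through a nearby continuum filter, so that brightness differences unrelated to methane cancel out. The array names and random placeholder data below are purely illustrative.

```python
import numpy as np

# Hypothetical, co-registered frames of Saturn: one through the 889 nm
# methane-band filter and one through a nearby continuum filter.
# Random arrays stand in for real detector counts here.
methane_band = np.random.rand(512, 512).astype(np.float32)
continuum = np.random.rand(512, 512).astype(np.float32)

# Dividing by the continuum frame cancels brightness differences that have
# nothing to do with methane (illumination, overall cloud albedo, etc.).
ratio = methane_band / np.clip(continuum, 1e-6, None)

# Where methane is present it absorbs 889 nm light, so the ratio drops:
# high values in this map mean more methane along the line of sight.
methane_map = 1.0 - np.clip(ratio, 0.0, 1.0)
```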

A word of warning -- what constitutes "broadband" and "narrowband" varies between instrument descriptions. Generally, if a spacecraft has a panchromatic filter, that filter might be described as "broadband", while filters covering a smaller portion of the detector's wavelength sensitivity might be described as "narrowband". This is particularly the case with early missions relying on analog detectors, since the lower sensitivity necessarily limited how much light you could filter out before the detector wouldn't see anything at all.

Understanding Exposure

Most photographers take color photos in a single shot. We're able to do this because the sensors in our cameras are covered with a checkerboard grid of red, green, and blue filters called a Bayer filter. Each pixel is covered with a single filter, and the camera estimates the intensity of the other two colors at each pixel by comparing with the surrounding pixels (a process called demosaicing). However, this approach is rarely used on spacecraft - using a Bayer filter effectively reduces a sensor's spatial resolution of color by ~1/3. Instead, spacecraft are equipped with filter wheels that rotate filters in front of the entire sensor. This means that to produce a color image, a spacecraft needs to take three separate photos.
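As a concrete sketch, here is roughly what stacking those three exposures into a color composite looks like, assuming the frames are already aligned and brightness-matched (more on brightness matching below). The file names are hypothetical stand-ins for real data.

```python
import numpy as np
from PIL import Image

# Three separate grayscale exposures of the same scene, one per filter.
# File names are placeholders; the frames are assumed to be aligned and
# already adjusted to a common brightness scale.
red = np.asarray(Image.open("red_filter.png"), dtype=np.float32)
green = np.asarray(Image.open("green_filter.png"), dtype=np.float32)
blue = np.asarray(Image.open("blue_filter.png"), dtype=np.float32)

# Stack the frames into one 3-channel array and rescale to 8-bit for display.
rgb = np.dstack([red, green, blue])
rgb = (255 * rgb / rgb.max()).clip(0, 255).astype(np.uint8)

Image.fromarray(rgb).save("color_composite.png")
```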

When taking pictures, imaging scientists set the exposure length so that each image collects "enough" light to see features of interest. The brightness of a surface can change a lot depending on the wavelength you're viewing it at. This means that an exposure set to collect "enough" light through one filter may collect too much or too little light through another. As a result, the raw exposures lose the relative brightness information we need to interpret colors, and if we use them to make a color image, we'll probably end up with wildly distorted colors.

This leads us to the concept of a photographic stop. A stop is a way to directly compare how much light was let into a camera during an exposure. Increasing an exposure by one stop is equivalent to doubling the amount of light entering the camera. Terrestrial photographers can control this by changing the length of the exposure, the aperture of the camera lens, or the sensor's sensitivity to light. For most spacecraft, this is controlled only using the exposure length. The lens aperture is set in stone on the ground because it means one less moving part to break, and changing the sensitivity could lead to low signal-to-noise ratios. Fortunately, this makes adjusting the brightnesses of images with different exposure lengths to a common value easy. Let's say we have a photo taken through a red filter with an exposure of 0.5 s, and another through a blue filter for 1.0 s. Because the blue image was exposed for twice as long, it collected twice as much light, putting it one stop above the red image. We can adjust the red image by a stop of +1 or the blue image by a stop of -1 so that the images have the same theoretical exposure time as one another. With the stops matched, the resulting color image will no longer have distorted relative brightnesses.
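Here's a minimal sketch of that adjustment in code, using the numbers from the example above. The function name and placeholder pixel values are my own, not anything from a real pipeline.

```python
import numpy as np

def match_exposure(frame, exposure_s, reference_s):
    """Scale a frame's brightness to what a reference exposure time would have recorded.

    The difference in stops is log2(reference / exposure), and each stop
    doubles (or halves) the amount of light collected.
    """
    stops = np.log2(reference_s / exposure_s)
    return frame * (2.0 ** stops)  # 2**stops is simply reference_s / exposure_s

# The example from the text: red exposed for 0.5 s, blue for 1.0 s.
red = np.full((4, 4), 100.0)    # placeholder pixel values
blue = np.full((4, 4), 200.0)   # placeholder pixel values

red_matched = match_exposure(red, exposure_s=0.5, reference_s=1.0)   # +1 stop: doubled
blue_matched = match_exposure(blue, exposure_s=1.0, reference_s=1.0)  # unchanged
```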

Assembling Color

There are a lot of ways the pictures taken through different filters can be combined. A person writing a press release might want a photo taken through red, green, and blue filters to give the public a sense of being there and seeing a place for themselves. A geologist looking to understand rock types might want to use a combination of blue, red, and infrared filters to see subtle differences in the rocks. An atmospheric scientist may want to use different infrared filters to map the distribution and concentration of a gas. And so on. Ultimately, the use of color depends on the purpose, whether it is a replication or extension of our natural senses.

With that in mind, here's how I use different terms related to color:

True color - You will almost never see me use this term. To me, "true color" implies a rigorous process of matching the colors a spacecraft saw to the colors a human eye can see. This means knowing precisely how much light is passing through a spacecraft filter, then converting it to a value that matches exactly how much of that light would have been picked up by the receptor cells in our eyes. There are ways of doing this (and some people are very good at it!), but it involves a lot of calculus, matrix math, and application of color theory. No thanks. I also think it's somewhat limiting - after all, there is natural variation in human eyes. Some people can see a little further into the IR and/or UV than others, which means that "true color" will vary from person to person. There's also a problem of adaptation. The Moon is light, light brown in color, but since our only reference point is that light brown, our eyes interpret it as gray. Finally, there's the "filter" problem. You always see the world through a filter of some sort. Your corneas, your glasses, a spacecraft window, etc. are all filters. I think the term "true color" lends an air of objectivity to an inherently subjective sense.

Natural color - This is my preferred term for images using the visible portion of the spectrum. It's less rigorous - as long as the "blue" generally corresponds to what we perceive as blue, "green" to green, and "red" to red, it's a naturalistic representation of what we might see. That said, I still make minor adjustments based on how particular filters handle light. This is a common issue, particularly for spacecraft with analog TV cameras. These cameras were blind to actual red light, because they relied on the photoelectric effect to generate a signal. So what was called a 'red' filter was actually measuring orange light. On the Martian surface, a lot of the color differences we see become more vivid in redder wavelengths, so color images returned by Viking might be missing color variation we're actually capable of seeing. Making these adjustments is part science, part guesswork, which is another reason I prefer the term "natural color".

Extended color - This is a term I like to use for images that "extend" past the visible portion of the spectrum. Usually this means that "blue" corresponds to ultraviolet light, "green" to light in the middle of the visible spectrum, and "red" to light in the infrared. This is particularly useful for icy bodies, which tend to be relatively bland-looking at visible wavelengths. The inclusion of light outside the visible spectrum brings out interesting features such as fresh ice (usually bright in the infrared) or extremely old dust (which turns dark in the ultraviolet).

Enhanced color - This is usually how I describe heavily processed images. The colors and brightnesses may not be "realistic", but they give a very strong sense of differences within the photo, which is useful for picking up on subtle changes that aren't immediately visible. A good example is making a rock map of the Moon from your backyard. Go take a photo of the Moon at the highest possible resolution, turn the saturation up as far as it will go, and you can see first-order differences in rocks on the Moon. Some of the dark regions will be bluish (titanium-rich lava flows), some will be yellow (iron-rich lava flows), and the bright regions will be slightly pinkish (feldspar-rich).
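If you want to try that at home, a minimal sketch with Pillow looks something like this; the file name and saturation factor are just illustrative.

```python
from PIL import Image, ImageEnhance

# Load your own backyard photo of the Moon (file name is a placeholder).
moon = Image.open("moon.jpg").convert("RGB")

# ImageEnhance.Color scales saturation: 1.0 leaves the image alone,
# larger factors exaggerate the subtle color differences between rock types.
rock_map = ImageEnhance.Color(moon).enhance(8.0)
rock_map.save("moon_rock_map.jpg")
```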

False color - This is a term that people rarely agree on, and it means many things to many people. It's a term I only use occasionally, and I generally reserve it for filter combinations that don't use the visible spectrum, combinations of narrowband filters, or imagery incorporating filter math (where "color" is derived from harshly stretched differences between two filters). Some people classify these uses under "enhanced color" or "representational color". Other people reserve the term for artificially-colored intensity maps (like radar), or for when colors are presented out of order.
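As an illustration of the "filter math" case, a harshly stretched difference between two frames might look something like the sketch below; the arrays and the percentile stretch are my own choices, not a standard recipe.

```python
import numpy as np

# Two hypothetical, co-registered, brightness-matched frames taken through
# different filters. Random arrays stand in for real data.
filter_a = np.random.rand(512, 512).astype(np.float32)
filter_b = np.random.rand(512, 512).astype(np.float32)

# The "color" channel comes from the difference between the two filters...
difference = filter_a - filter_b

# ...harshly stretched so that small differences span the full display range.
lo, hi = np.percentile(difference, [2, 98])
channel = np.clip((difference - lo) / (hi - lo), 0.0, 1.0)
channel_8bit = (255 * channel).astype(np.uint8)
```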