Note: this was originally written for my old Tumblr blog in late 2013 to mid-2014
These days, charge-coupled devices (CCDs) are the go-to choice of imager for missions into deep space. Highly sensitive to light and relatively easy to calibrate and process, they’re ideal for missions far from the Sun or for snapping lots of pictures during a high-speed flyby. But the CCD is a relatively recent addition to the field of spaceflight, only coming into widespread use during the late 80s. How did spacecraft image their targets before the CCD? The answer is this thing:

*voyager ISS vidicon schematic* (Image credit: Wikipedia)

‘This thing’ is a device known as a vidicon. The vidicon was the primary imaging device on nearly every deep-space NASA mission from 1965 to 1977, and the Voyagers were among the last spacecraft to use one.

So how does it work? Light is focused through a camera lens (at left in this diagram) onto a photosensitive plate, which builds up a charge with exposure to light. To return the image to Earth, the charged plate is scanned with a CRT. The charged surface causes a slight deflection of the electron beam shot out by the CRT, and that deflection is registered as a voltage change at the CRT. The voltage changes are digitally recorded and then sent back to Earth, where they can be stored and/or fed into another CRT to reproduce the image. (I’ve put a quick code sketch of this scan-and-digitize step at the end of this post.)

The vidicon was useful for deep-space missions because it had advantages over other imaging methods employed early in the Space Age. The Lunar Orbiter probes, for example, which surveyed landing sites for the Apollo program, took black-and-white film images of the Moon, developed and scanned them onboard, and transmitted the scans back to Earth. That method produced extremely detailed images of the lunar surface, but developing the film onboard took a lot of time, and, more importantly, the spacecraft could carry only a finite supply of film. For missions of longer duration, an intermediate film step was totally impractical.

Despite its advantages, the vidicon also had severe drawbacks, which I will discuss in the next post.
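As promised, here is that sketch. It’s a toy model of my own, not Voyager’s actual electronics: treat the charged plate as a 2D array, raster-scan it into a one-dimensional signal the way the beam sweeps the plate line by line, and quantize each sample to 8 bits, as Voyager did.

```python
# Toy model of a vidicon readout: the charged plate is a 2D array of charge
# values; the "beam" sweeps it row by row, producing a 1D signal that is
# digitized for transmission and rebuilt into an image on the ground.
import numpy as np

plate = np.random.rand(800, 800)   # stand-in plate; Voyager's vidicons
                                   # read out 800 lines of 800 samples

signal = plate.reshape(-1)                            # raster scan, row by row
digitized = np.round(signal * 255).astype(np.uint8)   # 8 bits per sample

# On the ground, knowing the scan geometry lets you rebuild the frame:
image = digitized.reshape(plate.shape)
```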
As I mentioned in my last post, the vidicon had several drawbacks. Today I’ll talk about one of these: the problem of geometric distortion. This problem is inherent in every vidicon device, although it can be minimized to some degree.

If you’ll remember, the vidicon works by a CRT “reading” an electromagnetically charged image plate. The CRT scans back and forth across the plate, reading changes in charge. The problem is that areas of intense charge (say, a bright or overexposed area on the plate) pull the electron beam off course. The effect is like sticking a magnet on the screen of a CRT television. Here’s a quick MS Paint drawing to visualize it:

*paint drawing of geometric distortion*

This kind of distortion is tolerable if all you want is a pretty picture. But when you need to make scientific measurements of features seen in the image, it becomes a major problem. How do you know the actual size of an object in the image if the image is distorted?

The solution the Voyager engineers came up with was a grid of dots, known as reseau marks, painted onto the imaging plate itself. These dots always turn up black, because they block the CRT from reading the charge on the plate underneath. And because the physical location of each dot was known, the team could compare where the dots appeared in a distorted image with where they should have been in an undistorted one. With a bit of computer processing (available to the Voyager science team, albeit far less capable and more time-consuming than today’s computers), the image could be warped to remove the distortion, and voilà: an image you could use to measure feature sizes.

Although the original correction involved physically measuring each reseau mark and feeding its location into a processing program, the same thing can easily be done today with a routine such as Photoshop’s Puppet Warp. Here’s an animated comparison between an uncorrected and a corrected image:

*comparison flicker*
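For the curious, the modern version of that correction is a standard control-point warp. Here’s a minimal sketch using scikit-image’s piecewise-affine transform; the reseau coordinates below are made up for illustration, and a real pipeline would first locate the black dots in the frame.

```python
import numpy as np
from skimage import transform

# Hypothetical reseau positions, in (x, y) pixel coordinates.
# 'nominal' is where the dots were painted on the plate; 'measured' is where
# they show up in the distorted frame (here faked with small random shifts).
xs, ys = np.meshgrid([50, 400, 750], [50, 400, 750])
nominal = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
rng = np.random.default_rng(0)
measured = nominal + rng.normal(scale=3.0, size=nominal.shape)

# Fit a piecewise-affine warp carrying nominal -> measured. warp() uses the
# transform as an inverse map: each pixel of the corrected output samples the
# distorted frame at the corresponding measured position.
tform = transform.PiecewiseAffineTransform()
tform.estimate(nominal, measured)

distorted = rng.random((800, 800))             # stand-in for a Voyager frame
corrected = transform.warp(distorted, tform)   # undistorted within the reseau grid
```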
One of the problems inherent in any imaging medium, whether it be film, vidicon, or CCD, is that the material that makes up the imaging surface is never quite homogeneous. These inhomogeneities are due to manufacturing defects that can be minimized but not eliminated. In film, it was uneven application of the emulsion; in vidicons, the problem lies in manufacturing the photosensitive plate.

If you’ll remember, the vidicon uses a photoelectric plate to store an image until it’s scanned by the CRT for transmission. The Voyagers’ vidicons used an amorphous compound of selenium and sulfur, a compound that was very efficient at capturing photons and converting them to electricity. The problem comes in the manufacturing process: how do you ensure that the selenium-sulfur compound is perfectly homogeneous across the plate? The answer is that you don’t. You make it as homogeneous as possible, but you won’t quite get it perfect. And since the compound isn’t quite evenly distributed across the plate, some areas collect more charge than others, making some areas of the image brighter than others.

The solution to this problem is known as flatfielding. By taking a calibration picture of an evenly illuminated scene, you can capture the imaging plate’s response to light. Once the camera’s response is known, you can correct for it in processing.

Let’s take a look at a Voyager image of Neptune:

*photo of neptune*

This image has been processed to highlight the uneven response of the camera. As you’ll notice, the corners of the frame are brighter than the center; this is normal behavior for a vidicon. But on Neptune itself you can see a couple of dark splotches, and there’s a bright semicircle in the upper corner. Flatfielding won’t take care of the semicircle, since I believe that’s an optical effect from the camera’s lens. It should, however, take care of the dark splotches. I’ve been working to find a calibration frame that will help me do so, but I haven’t had much luck there.
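If you want to try this at home, the flatfield correction itself is only a couple of lines of numpy. The frames below are synthetic stand-ins (a real pipeline would load actual science and calibration frames, and would also subtract dark current, which I’ve skipped here):

```python
import numpy as np

# Synthetic stand-ins for real frames:
rng = np.random.default_rng(1)
response = 1.0 + 0.1 * rng.standard_normal((800, 800))  # uneven plate sensitivity
scene = np.full((800, 800), 0.5)                        # what the camera "sees"
raw = scene * response                                  # science frame, blotchy
flat = 1.0 * response                                   # evenly lit calibration frame

# Normalize the flat so its mean is 1: overly sensitive pixels end up > 1,
# weak pixels < 1.
flat_norm = flat / flat.mean()

# Dividing by the normalized flat boosts the weak pixels and dims the strong
# ones, evening out the camera's response across the frame.
corrected = raw / flat_norm
```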