What Is Drizzle?

In the old days when CCD sensors were still quite small (such as 512x512), under-sampling was sometimes unavoidable in order to obtain a larger field of view. This is especially true in space missions, where resources (on-board computing, power consumption, etc.) are limited. The priority is usually to make the instruments robust in space rather than to adopt the newest sensor and processor technology. The best-known example is the Wide Field and Planetary Camera 2 (WFPC2) on the Hubble Space Telescope. The wide-field sensors on WFPC2 severely under-sample the HST images. To achieve the angular resolution that the HST should have, a special image processing algorithm called "drizzle" was developed. This page pretty much explains what drizzle really is, but I would like to provide my personal explanation and examples of drizzle.

Super-sampling an under-sampled image is not drizzle. It just interpolates the pixels onto a finer grid, and it does not really recover the spatial information that was lost in the initial under-sampling. Drizzle, however, does, to a fair degree.

Drizzle does not super-sample images. Drizzle shrinks the pixels of an under-sampled image. After the pixels are shrunk, a large part of the image becomes blank. (We will soon see what this means.) This is the key difference between drizzle and super-sampling. In a super-sampled image, the blank pixels are filled in by interpolation. Drizzle does not interpolate pixels (or at least avoids interpolation as much as possible); it just leaves the extra pixels blank.

To see how drizzle works and to compare it with super-resampling, here is a simple example. This simulated example is created from a real Hubble Deep Field image. This is the original image that the simulations are based upon.

Fig.1 - Original Image.

To simulate under-sampled, dithered images, I first created 9 images in which the original image was shifted (0,0), (1,0), (2,0), (0,1), (1,1)...(2,2) pixels. To make each image different, I added Gaussian random noise to each shifted image. The rms of the added noise is 2.5x the rms noise in the original. Then I down-sampled the 9 shifted images by 3x. The down-sampling was box-averaging (over 3x3 pixels). Below is one of the 9 simulated, dithered, under-sampled images.
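For concreteness, below is a minimal sketch of this simulation in Python with NumPy. This is not the code I actually used; a random dummy image stands in for the HDF cutout, and np.roll is just a convenient stand-in for the integer-pixel shifts (it wraps at the edges, which the real processing would not):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subframe(original, dx, dy, noise_rms, scale=3):
    """Shift by (dx, dy) whole pixels, add Gaussian noise at 2.5x the
    original rms, then box-average down-sample by `scale`."""
    shifted = np.roll(original, shift=(dy, dx), axis=(0, 1))
    noisy = shifted + rng.normal(0.0, 2.5 * noise_rms, shifted.shape)
    h, w = noisy.shape
    h, w = h - h % scale, w - w % scale   # crop to a multiple of `scale`
    return noisy[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

# dummy stand-ins for the real HDF image and its measured rms noise
original = rng.normal(size=(300, 300))
noise_rms = 1.0

offsets = [(dx, dy) for dy in range(3) for dx in range(3)]   # (0,0), (1,0), ... (2,2)
subframes = [make_subframe(original, dx, dy, noise_rms) for dx, dy in offsets]
```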

Fig.2 - One of the 3x under-sampled dither subframes.

Clearly, the resolution in Fig.2 is worsened by the large pixels. The other 8 images are just like this one, but they are offset from each other by 0.333 pixel along the x and y directions.

One can super-sample these images by 3x, but the result will never be like the original. Below is the result of 3x super-sampling of one of the 3x under-sampled images. Here, the super-sampling was bilinear interpolation. Bicubic interpolation generally gives better-looking results, but it is not really any different in terms of the amount of lost spatial information.
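Continuing the sketch above, the 3x up-sampling can be done with scipy.ndimage.zoom (order=1 selects bilinear interpolation; order=3 would be the bicubic case just mentioned):

```python
from scipy.ndimage import zoom

# 3x up-sampling of each coarse subframe by bilinear interpolation
supersampled = [zoom(sub, 3, order=1) for sub in subframes]
```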

Fig.3 - 3x super-sampling of the 3x under-sampled image. (Mouse over to see the original. Ignore the offset between the images.)

Next, I shifted the 9 super-sampled images back to their original positions and stacked them. Below is the result.

Fig.4 - Stack of the super-sampled images. (Mouse over to see the original. Ignore the offset between the images. The offset was correctly taken care of in the image processing. I just didn't want to spend extra time in Photoshop to align the JPEG images.)
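In the sketch, the shift-and-stack step might look like the following. Note that a dither of 0.333 coarse pixel is exactly one pixel on the 3x fine grid, and the sign of the shift depends on the convention used when the subframes were made:

```python
import numpy as np
from scipy.ndimage import shift

# undo each frame's dither on the fine grid, then average the aligned frames
aligned = [shift(img, (-dy, -dx), order=1) for (dx, dy), img in zip(offsets, supersampled)]
stacked = np.mean(aligned, axis=0)
```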

It can be seen that the stacking reduced the noise (only the noise that I added, not the noise that already exists in the original) and made the bright cores of the galaxies look smoother. Because of the 0.333-pixel dithering, the stacked image does contain more spatial information than any of the subframes. However, the image quality (resolution-wise) is still much lower than the original. A lot of spatial information was lost in the initial under-sampling, and super-sampling plus shifting and stacking cannot recover it.

How about drizzle? An example of a drizzle subframe looks like this:

Fig.5 - One of the 9 drizzle subframes.

The pixel dimensions of the image in Fig.5 are 3x larger than those in Fig.2, just like the super-sampled images. However, the drizzle subframe does not interpolate the pixels. It just leaves the extra pixels blank. In other words, in this drizzle subframe, only 1/9 of the pixels have values and the remaining 8/9 do not. This is what I meant by shrinking the pixels.
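A sketch of how such a subframe can be built, with blank pixels stored as NaN (the exact convention for which fine-grid position each value lands on depends on the sign of the dither, which I gloss over here):

```python
import numpy as np

def drizzle_subframe(coarse, dx, dy, scale=3):
    """Drop each coarse pixel onto a single fine-grid pixel, chosen by the
    frame's dither offset; leave the other fine pixels blank (NaN)."""
    h, w = coarse.shape
    fine = np.full((scale * h, scale * w), np.nan)
    fine[dy::scale, dx::scale] = coarse   # only 1/9 of the fine pixels get values
    return fine
```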

We have 9 frames like this. There is a 1-pixel offset between adjacent frames along x and y, and each of them has values in only 1/9 of its pixels. Now you see where I am going. Shifting and combining the 9 frames can fill in all the blank pixels. Below is the result.

Fig.6 - Drizzle combined image. (Mouse over to see the original.)
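In the sketch, the combine step is trivial: the 9 dither offsets cover all 9 sub-pixel positions, so each fine pixel is filled by exactly one subframe, and nanmean simply picks up that one value per pixel:

```python
import numpy as np

drizzled = [drizzle_subframe(sub, dx, dy) for (dx, dy), sub in zip(offsets, subframes)]
combined = np.nanmean(np.stack(drizzled), axis=0)   # every fine pixel is now filled
```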

There is still a small amount of spatial information lost in the drizzle-combined image. However, the image quality is obviously higher than in the super-sample combined image, and much closer to the original. It is also a bit noisier than the original, because of the added noise. The added noise is 2.5x that in the original, and 9 frames were combined. Thus the added noise in the combined image becomes 2.5/sqrt(9) = 0.833 times the original, which makes the drizzle-combined image sqrt(1+0.833^2) = 1.30x noisier than the original.
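As a quick check of the numbers (noise from independent frames averages down as 1/sqrt(N), and independent noise terms add in quadrature):

```python
import numpy as np

added = 2.5 / np.sqrt(9)          # added noise after averaging 9 frames: 0.833x
total = np.sqrt(1 + added ** 2)   # quadrature sum with the original noise: 1.30x
```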

This is just a simple example. In the real world, drizzle still does a small amount of interpolation. This is because the dithering offset is usually not an exact fraction of the pixel size. Even for the HST, where sub-pixel dithering can be performed accurately, correcting image distortion still requires interpolation to redistribute the flux in a shrunk pixel to its neighboring pixels. Nevertheless, this example should provide a good idea of what drizzle is and of the difference between drizzle and super-sample stacking.