Image processing in astrophysics

Astronomical imaging – acquisition and reduction/analysis

Over the last 200 years, the main technique for astronomical observation has progressed from naked-eye monitoring and drawn records all the way to digital data acquisition. Although the more traditional techniques remain valued and recognized as scientific ways to observe the sky, digital acquisition has become a simpler, faster and reliably accurate way to observe and report. The benefits of this technology have simplified our work and made it easier to discover, or obtain new data on, objects that may reside in our solar system or millions of light years away.

Nowadays, even a modest astronomical observatory will use a CCD camera fitted with an array of filters for multispectral photometry, attached to the telescope. The CCD guarantees a linear response, an essential property when measuring brightness variations. Not long ago, astronomical detectors had fewer than 100 kilopixels and were far noisier and less sensitive than today's cameras, and colour imaging with CCD cameras is also quite new. Moreover, the device allows for longer exposure times, which generate superior results compared to photographic plates or visual observations.

Another method for observing bright celestial bodies (the Sun, the Moon and some planets) is high-cadence image acquisition with a CMOS or CCD detector, a so-called "planetary camera". Unlike astronomical CCD cameras, planetary cameras are designed to take a large number of images in a short period of time. Most have sensors with an integrated Bayer filter for colour imaging. The shutter is electronic, so there are no moving parts, and the sensors can produce up to 60 images per second at a resolution of 640 × 480 px. Although the Bayer array means that the effective resolution per colour channel is lower than the physical pixel count, recording three colour channels simultaneously is a significant advantage in planetary imaging.

Targets for astronomical imaging

In this paper I will discuss planetary imaging, especially of Jupiter and its satellites. The main targets in planetary imaging are the brightest planets: Mercury, Venus, Mars and the two nearest Jovian planets, Jupiter and Saturn.

Mercury, the closest planet to the Sun, can be observed only for a couple of hours after sunset or just before sunrise. During this time the planet can reach a visual magnitude between -2 and 1. Although its albedo is low, it is one of the planets whose surface can be seen directly. As an inferior planet, it appears as a nearly full disc when it is far from Earth and as a narrow crescent when it is near. The best time to observe it is at greatest elongation (its maximum angular distance from the Sun), when the disc is half lit.

Venus, another telluric planet, is, like Mercury, best observed near greatest elongation, when it presents a half-illuminated disc up to four to five times larger than Mercury's. Without filters the planet appears uniformly covered with clouds, but by applying violet, blue, red or UV filters we can distinguish structure in its upper atmospheric layers.

The optimal period to observe Mars comes about every two years, when the planet is at opposition, with Earth directly between Mars and the Sun. Mars then reaches an angular diameter between 14 and 25 arcseconds. The most favourable observing season lasts four to six months. The planet presents a full disc at opposition and shows a slight phase at quadrature. Its appearance varies enormously depending on the filter used: in blue light the disc appears featureless, though lighter clouds can be seen over the poles, while in red light the dark surface features and polar caps are easily seen.

The planet Jupiter has the largest angular diameter and presents many details on its disc: belts of coloured clouds and storms both small and large, all in constant motion around the sphere.

The features on Jupiter's disc constantly change position and require intensive monitoring. To gather many recordings of the planet's state, observing campaigns are often organized, and these require involvement from the astronomical community. For example, this year's JunoCam campaign, managed by NASA's Juno mission (the Juno spacecraft, launched in 2011 and set to arrive mid-2016), is asking for Jupiter images and maps from both the professional and amateur communities. Another recently ended campaign was PHEMU 2015, which required photometry of the Galilean satellites.

Saturn is the second largest planet and its disc usually shows few details: just a few cloud bands and the ring system. Sometimes small white cyclones appear, which require observations and positional measurements. Although Saturn's rings are decidedly a popular attraction for observatory visitors and professionals alike, they are not currently a major target of scientific observation from Earth.


Obtaining images

Images straight from the CCD camera carry a significant number of unwanted signals. The goal of calibration is to correct the raw image so that it accurately portrays the intensity of light that fell on the CCD chip. The unwanted signals in an unprocessed CCD image include two additive components and one multiplicative component. The additive components are a voltage offset, or bias, above zero volts, and a signal generated by thermal emission of electrons (the dark current) that grows linearly with exposure time. The multiplicative error arises because photosites have differing sensitivities to light.

Calibration involves removing the bias, subtracting the dark current, and dividing by the flat-field. The standard strategy for obtaining a satisfactorily calibrated image includes a few tasks before and during the observing session. First, the observer shoots flat-field frames, either of the twilight sky or of a panel illuminated at low intensity. Second, during the observing session, it is necessary from time to time to take dark frames with the same integration time as the images. Finally, the dark frames and flat-field frames are stacked and applied to the raw image to calibrate it. This procedure delivers high-quality images that are suitable for display and can be used for precise astrometry and photometry.
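The arithmetic behind these steps can be sketched in a few lines. This is a minimal pure-Python illustration with made-up pixel values; an actual reduction would operate on full CCD matrices, e.g. in MATLAB or NumPy:

```python
# Sketch of the calibration arithmetic: subtract the master dark
# (assumed here to already contain the bias), then divide by the
# flat-field normalized to a mean of 1.  All values are illustrative.

def calibrate(raw, master_dark, master_flat):
    """Return (raw - dark) / normalized_flat, pixel by pixel."""
    flat_pixels = [p for row in master_flat for p in row]
    flat_mean = sum(flat_pixels) / len(flat_pixels)
    return [[(r - d) / (f / flat_mean)
             for r, d, f in zip(r_row, d_row, f_row)]
            for r_row, d_row, f_row in zip(raw, master_dark, master_flat)]

# 2x2 example: a uniform light source, 10 counts of dark signal,
# and a right-hand pixel column that is only 80 % as sensitive.
raw  = [[110.0, 90.0], [110.0, 90.0]]
dark = [[10.0, 10.0], [10.0, 10.0]]
flat = [[1.0, 0.8], [1.0, 0.8]]
print(calibrate(raw, dark, flat))  # every pixel restored to ~90
```

Dividing by the flat normalized to its mean corrects the pixel-to-pixel sensitivity differences without changing the overall brightness scale of the image.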


Dark Frames

Dark frames capture a sample of the dark current so that it can be subtracted from an image. While a dark frame is taken, the telescope is covered so that no light strikes the CCD. Depending on the calibration protocol, the exposure time chosen for dark frames can be equal to or greater than the exposure time of the images to be calibrated. Dark frames contain the thermal electrons that accumulate during integration, as well as thermal noise. The aim is a dark frame accurate enough that subtracting it from a raw image will not increase the noise in the final calibrated image.


Dark frame, exposure 40 seconds

Flat-Field Frames

Flat-field frames record the response of the entire optical system (the telescope, filters, cover glass, CCD, etc.) to a uniform field of light. A flat-field image cannot distinguish whether optical vignetting or quantum efficiency produced a particular pixel value, but as long as the optical system and CCD do not change, a good flat-field allows the observer to correct both effects as if they were one. Like all images, a flat-field frame contains not only the signal we want but also bias and thermal electrons. To make a high-quality master flat-field, about 10 to 15 raw flat-field frames are required. The usual recommendation is to take flat-field frames before the observing session, in daylight or twilight, varying the exposure time so that the frames are neither saturated nor underexposed.


Flat-field frame, exposure 20 seconds

Bias Frames

A bias frame contains the readout noise of the sensor, which is produced inside the electronic components that read the pixels. It can be low, but never zero. In a zero-length exposure, each pixel will have a slightly different value, plus a small amount of random noise. Since the bias pattern is consistent from image to image, it can be subtracted. A way to reduce the overall noise of the system is to cool the CCD, a method used on all professional CCD cameras.


Standard image calibration

This requires multiple dark, flat and bias frames, and produces an image corrected for bias, dark current and dust in the optical path. To obtain a satisfactory signal-to-noise ratio, it is necessary to create and use a median of multiple frames of each type. A stack of reduction images of the same type is called a "master". For a master dark frame, between 5 and 7 raw dark frames are needed; a master flat-field frame involves up to 10 raw flat-field frames; and a master bias frame requires 5 raw bias frames. The master bias and master dark are then subtracted from the image to be calibrated, which is finally divided by the master flat-field.
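The per-pixel median stack that builds a master frame can be sketched as follows; the frame sizes and pixel values are made up for illustration:

```python
# Hypothetical sketch of building a "master" frame: the per-pixel
# median of a stack of equally sized raw calibration frames.
from statistics import median

def master_frame(frames):
    """Per-pixel median over a list of equally sized frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Five 1x3 dark frames; the 999 simulates a cosmic-ray hit that the
# median rejects but a simple mean would smear into the master.
darks = [[[10, 11, 10]],
         [[11, 10, 999]],
         [[10, 12, 11]],
         [[12, 10, 10]],
         [[11, 11, 11]]]
print(master_frame(darks))  # -> [[11, 11, 11]]
```

Because the median ignores outliers, a single corrupted frame does not contaminate the master, which is exactly why medians are preferred over means here.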



Raw image of Messier 42 – The Great Orion Nebula


Calibrated image with dark and flat-field frames of Messier 42


Planetary camera image calibration

Images taken with a planetary camera are not calibrated in the same way as CCD images, because a very large number of images per second are recorded. Poor atmospheric seeing can be countered by selecting the best frames from a series of images. With so many images it is simply not practical to subtract master dark frames and master flats, so the only method to get a good final result is to stack and average the best-quality frames. Furthermore, to bring out small details in the image obtained by stacking, a few digital adjustments can be made, such as noise removal by averaging filter, noise removal by median filter, sharpening (edge enhancement), and deblurring.
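A minimal sketch of this select-and-stack approach (sometimes called lucky imaging) might look like the following. The variance-based sharpness score is a simple stand-in for the more robust quality metrics real stacking tools use:

```python
# Toy "lucky imaging": score each frame, keep the sharpest, average.
from statistics import pvariance, mean

def best_frames(frames, keep):
    """Keep the `keep` frames with the highest contrast (variance)."""
    scored = sorted(frames,
                    key=lambda f: pvariance([p for row in f for p in row]),
                    reverse=True)
    return scored[:keep]

def stack(frames):
    """Per-pixel mean of the selected frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[mean(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Two sharp (high-contrast) frames and one seeing-blurred frame.
sharp1  = [[0, 255], [255, 0]]
sharp2  = [[10, 245], [245, 10]]
blurred = [[120, 135], [135, 120]]
# Averages only the two sharp frames; the blurred one is discarded.
print(stack(best_frames([sharp1, blurred, sharp2], keep=2)))
```

Selecting before stacking preserves the fine detail frozen in the moments of good seeing, instead of averaging it away with the blurred frames.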


My observing setup

In order to exemplify these procedures and methods, I will present a series of images, as well as the way they were subsequently processed. These images were taken with a Celestron C9.25 telescope, with a primary mirror 235 mm in diameter and a focal ratio of f/10 (2350 mm focal length).

To achieve a greater effective focal length I used a Barlow lens, a diverging lens placed in front of the camera (where the eyepiece would normally sit) to increase the magnification of the image. It is almost always necessary to enlarge planetary images from their size at the focus to a scale appropriate for digital imaging. For good sampling of image detail, the bright central region of the diffraction disk should be at least twice as large as the pixels that sample the image.
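As a rough illustration of this sampling rule (assuming 5.6 µm pixels and green light at 550 nm, both illustrative figures), the diameter of the Airy diffraction disk at the focal plane depends only on the focal ratio, so one can compare the native f/10 beam with a 2× Barlow:

```python
# Back-of-envelope sampling check.  Airy-disk diameter at the focal
# plane: d = 2.44 * wavelength * focal_ratio (assumed values below).

def airy_disk_um(focal_ratio, wavelength_nm=550):
    """Diameter of the diffraction disk, in micrometres."""
    return 2.44 * (wavelength_nm / 1000.0) * focal_ratio

pixel_um = 5.6            # assumed pixel pitch of the camera
for f_ratio in (10, 20):  # native f/10 vs. with a 2x Barlow
    d = airy_disk_um(f_ratio)
    print(f"f/{f_ratio}: Airy disk {d:.1f} um = {d / pixel_um:.1f} pixels")
```

At f/10 the disk already spans roughly two pixels under these assumptions; the Barlow provides extra margin for sampling the finest planetary detail.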

The digital camera used was a DBK 21AU618.AS. This camera exhibits minimal noise even with long exposures (up to 60 minutes, the longest shutter speed on this equipment).

The telescope was mounted on a German equatorial mount, fitted with a drive system.


My observing session

During the PHEMU 2015 campaign of observations of the mutual phenomena of the Galilean satellites of Jupiter, in which the Galilean satellites occulted and eclipsed each other, I captured the moment when the satellite Io passed in front of (transited) Jupiter. The observing session started at 21:42 TLR and ended almost two hours later at 23:22 TLR, on March 21st, 2015. A total of 86 videos were acquired, each lasting approximately 1 minute and totalling 108 GB of data. I then selected the best frames from each series of images and combined them into a median image that I processed afterwards.

The second part of the processing was achieved using different filters in MATLAB; further explanations are presented later in the paper. The final results revealed smaller details on Jupiter and its satellite, in a sharp and accurate image.

The first filter used was sharpening, to enhance structures and other details in the image. Image blurring may result from several factors, such as the deflection of light rays, imperfect equipment, and electronic noise or other signals. Sharpness is essentially contrast along edges where different tones meet: a rapid transition from black to white looks sharp, whereas a gradual transition from black through gray to white looks blurry. Sharpening an image increases the contrast along these edges.

The MATLAB function used for sharpening



Raw median images



Images processed in MATLAB with sharpening filter


Noise removal by averaging filter and noise removal by median filter

These filters are used to clean the image of spurious dots (usually hot and dark pixels from the sensor readout). This type of filter can also be used to further increase the signal-to-noise ratio (SNR).

The averaging filter, also called a 'low-pass filter', smooths out rapid changes in intensity by computing the average of a pixel and all of its immediate neighbours and replacing the original value with the result. The process is repeated across the image, every pixel being replaced by the average of the pixels in a square around it. For example, with a 3×3 kernel, each pixel is replaced by the average of itself and its eight nearest neighbours.


100 100 100 100 100
100 200 205 203 100
100 195 200 200 100
100 200 205 195 100
100 100 100 100 100

 Original pixel values from the image

100 100 100 100 100
100 144 167 145 100
100 167 200 168 100
100 144 166 144 100
100 100 100 100 100

 Pixel values after 3×3 averaging (borders unchanged)


Sometimes a 5×5 or 7×7 kernel is used, depending on the amount of noise in the image. A variant of this kernel filter is the 'Gaussian blur', which lets the user define the shape of the blur kernel. An averaging filter blurs the edges in the image, especially with large kernels.

Unwanted noise can also be removed with a median filter which, instead of averaging the neighbouring pixels, takes their median value (by sorting the pixels in increasing order and taking the middle value). The median rejects any values substantially different from the others, removing sudden, non-repeating noise such as hot pixels and cosmic-ray hits.


100 100 100 100 100
100 200 205 203 100
100 195 200 200 100
100 200 205 195 100
100 100 100 100 100

Original pixel values from the image

100 100 100 100 100
100 100 200 100 100
100 200 200 200 100
100 100 195 100 100
100 100 100 100 100

  Pixel values after 3×3 median filtering (borders unchanged)
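The two tables above can be reproduced with a short pure-Python sketch, where the border pixels are simply left unchanged, as in the tables:

```python
# 3x3 averaging and median filters applied to the 5x5 example above.
from statistics import median

def filter3x3(img, reducer):
    """Apply `reducer` to each interior pixel's 3x3 neighbourhood."""
    out = [row[:] for row in img]
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            window = [img[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = reducer(window)
    return out

img = [[100, 100, 100, 100, 100],
       [100, 200, 205, 203, 100],
       [100, 195, 200, 200, 100],
       [100, 200, 205, 195, 100],
       [100, 100, 100, 100, 100]]

averaged = filter3x3(img, lambda w: round(sum(w) / 9))
medianed = filter3x3(img, median)
print(averaged[2])  # -> [100, 167, 200, 168, 100]
print(medianed[2])  # -> [100, 200, 200, 200, 100]
```

Note how the average pulls the bright plateau down towards the dark surround, while the median preserves it, which is why median filtering is gentler on fine planetary detail.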

Image sharpening and noise removal in MATLAB

All the filters described above can be applied in MATLAB, with the following commands:

Sharpening an image

imsharpen – returns an enhanced version of the grayscale or truecolor (RGB) input image, where image features such as edges have been sharpened using the unsharp masking method.

imsharpen(A)

We can also sharpen the image by specifying the Radius and Amount parameters.


Radius – the standard deviation of the Gaussian lowpass filter.

Standard deviation of the Gaussian lowpass filter is specified as a numeric value. This value controls the size of the region around the edge pixels that is affected by sharpening. A large value sharpens wider regions around the edges, whereas a small value sharpens narrower regions around edges.

Amount – the strength of the sharpening effect.

The strength of the sharpening effect is specified as a numeric value. A higher value leads to a larger increase in the contrast of the sharpened pixels. Typical values for this parameter are within the range [0, 2], although values greater than 2 are allowed. Very large values for this parameter may create undesirable effects in the output image.
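The unsharp-masking principle behind imsharpen can be sketched in one dimension: blur the signal, then add back the difference between the original and the blur, scaled by Amount. This toy version uses a 3-point moving average in place of the Gaussian lowpass filter:

```python
# Toy 1-D unsharp masking: sharpened = original + amount * (original - blurred).

def unsharp_1d(signal, amount=1.0):
    # 3-point moving average as a stand-in for the Gaussian blur;
    # the end points are left unblurred for simplicity.
    blurred = signal[:1] + [
        (signal[i - 1] + signal[i] + signal[i + 1]) / 3
        for i in range(1, len(signal) - 1)
    ] + signal[-1:]
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A soft black-to-white edge: sharpening overshoots on both sides,
# which is what makes the transition look crisper.
edge = [0, 0, 64, 192, 255, 255]
print([round(v) for v in unsharp_1d(edge)])  # -> [0, -21, 43, 214, 276, 255]
```

The overshoot below 0 and above 255 on either side of the edge is what makes it look crisper; very large Amount values exaggerate this into visible halos, the "undesirable effects" mentioned above.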


Median filter – 2-D median filtering

medfilt2 – performs median filtering of the matrix A in two dimensions. Each output pixel contains the median value in a 3-by-3 neighborhood around the corresponding pixel in the input image.

medfilt2(A, [m,n])

Performs median filtering, where each output pixel contains the median value in the m-by-n neighborhood around the corresponding pixel in the input image. By default, the neighborhood size is 3×3.


Averaging filter

MATLAB has several two-dimensional and multidimensional filtering functions:

The function filter2 performs two-dimensional correlation, conv2 performs two-dimensional convolution, and convn performs multidimensional convolution. Each of these functions converts its input to double and always returns a double output. They assume the input is zero-padded and do not support other padding options.

In contrast, the imfilter function does not convert input images to double. The imfilter function also offers a flexible set of boundary padding options.

B = imfilter(A,h) filters the multidimensional array A with the multidimensional filter h. The array A can be logical or a nonsparse numeric array of any class and dimension. The result B has the same size and class as A. imfilter computes each element of the output, B, using double-precision floating point. If A is an integer or logical array, imfilter truncates output elements that exceed the range of the given type, and rounds fractional values.
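The practical effect of the padding choice can be shown with a toy 1-D average (a plain-Python sketch, not MATLAB code): zero padding darkens the borders of a uniform image, while replicate padding leaves them intact.

```python
# 3-point moving average with two boundary-padding strategies,
# mimicking conv2-style zero padding vs. imfilter's 'replicate'.

def avg3(signal, pad):
    """3-point moving average with 'zero' or 'replicate' padding."""
    if pad == "zero":
        padded = [0] + signal + [0]
    else:  # replicate the edge values
        padded = signal[:1] + signal + signal[-1:]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

uniform = [30, 30, 30, 30]
print(avg3(uniform, "zero"))       # -> [20.0, 30.0, 30.0, 20.0]
print(avg3(uniform, "replicate"))  # -> [30.0, 30.0, 30.0, 30.0]
```

The darkened edge values under zero padding are an artifact of the boundary handling, not of the data, which is why imfilter's padding options matter for calibration frames that carry signal right up to the sensor edge.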



Processing astronomical images for scientific results can be difficult and very time-consuming. As stressed in this paper, a few essential steps transform a raw image into one from which data can be extracted. These steps are not far different from those needed when processing images of asteroids, supernovae, novae, comets, etc.

MATLAB assists throughout the entire image-analysis process, starting with dark-frame and flat-field reduction and continuing with the filters needed to emphasise details, or with image manipulation to accentuate key pixels. At the same time, MATLAB can adjust an image not only for scientific processing but also to improve its visual aspect and quality. It features a series of readily available, easy-to-use functions, such as brightness scaling, histogram shaping, unsharp masking, wavelet spatial filtering, deconvolution, gradient correction and noise removal.

Processing images with different filters can benefit, but also interfere with or even destroy, certain image areas. This is easier to manage in MATLAB, because the scaling and filter adjustments are performed by the observer, who can manipulate and enhance the image without losing important data or valuable pixels.

In the case presented here, from a raw image that seemed flat and featureless, I was able to determine the true colour of the planet; details such as the coloured bands and cyclones of various sizes became visible, along with a few details on Io, the satellite observed.

Final results after applying filters






