A Dummies' Guide to Deep-Sky Astrophotography

"The vast majority of Astrophotographers take celestial protraits because their pictures reveal another universe, one of beauty and wonderment that is largely invisible to the naked eye"

-Terence Dickinson
"The Backyard Astronomer's Guide"

The objects that I photograph are mostly much too faint to be seen with the naked eye, and often only barely visible through a small telescope. To see them as they appear in my images requires very long exposures, or the "stacking" of large numbers of moderate-length exposures. This article is a brief overview of the process involved in creating these images.

[Photo: the entire setup, with scope, mounting, camera and color wheel]
[Photo: the camera, showing the CCD sensor chip]

The heart of my imaging setup is a Starlight Express MX-716 "astro-cam", which in turn is built around a Sony ICX423AL CCD imaging chip (see picture). The chip contains an array of 752x586 "pixels", tiny light sensors that convert light into electricity, or more precisely, convert individual "photons" (particles of light) into electrons (particles of electric charge). Once the conversion has occurred, the electrons can be stored by the pixel indefinitely until "read-out", at which time the number of electrons stored in each pixel is counted by a circuit called an analog-to-digital converter, and the resulting count is sent to my laptop computer so an image can be built (all of this is basically identical to a digital camera, webcam, etc.). Faint objects emit (or reflect) far fewer photons than (for example) objects illuminated by daylight, and the number of photons striking the chip constantly fluctuates for reasons ranging from atmospheric turbulence to Heisenberg's uncertainty principle. So getting an accurate average per unit time (which is what we are really after here) requires very long exposures.
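
To make that concrete, here is a minimal Python sketch (the photon rate is a made-up number for illustration, not a measurement from my camera) showing that the collected signal grows linearly with exposure time while the random fluctuation only grows as its square root:

    import numpy as np

    rng = np.random.default_rng(0)
    photon_rate = 5.0  # assumed: photons per pixel per second from a faint target

    for seconds in (1, 30, 1800):
        # Simulate many pixels watching the same source for this long;
        # photon arrivals follow Poisson ("shot noise") statistics.
        counts = rng.poisson(photon_rate * seconds, size=100_000)
        signal = counts.mean()
        noise = counts.std()
        print(f"{seconds:5d} s: signal = {signal:8.1f}, noise = {noise:6.1f}, "
              f"SNR = {signal / noise:5.1f}")

The signal-to-noise ratio scales as the square root of the exposure time, which is why an hour's worth of light looks so much cleaner than a minute's worth.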

This is further complicated by the rotation of the Earth. The Earth rotates once every 24 hours, or about 1 degree every four minutes, and it would be impossible to take even a one-second exposure of an astronomical object if this rotation were not compensated for (without compensation, the images end up with horrendous motion blur, or "star trailing"). My scope mounting has a built-in clock drive that rotates the scope from east to west at a rate matching the Earth's rotation; this allows individual frames of about a minute or so without motion blur (the clock drive is not perfectly accurate, and neither is the process of aligning the mounting's rotation axis to the celestial north pole). But as mentioned above, the effective exposure time can be extended indefinitely by taking a series of moderate-length exposures (30-120 seconds, usually), then using computer software to automatically align and sum up the images, as sketched below (there are a number of programs that can do this; my favorite is K3CCD Tools, and I also like AIP4WIN). Typically my images are built from about an hour's worth of 30- to 60-second "subframes". It should be noted that an alternative method is "autoguiding", in which a second CCD camera tracks the object being photographed and feeds correction data back to the mounting, allowing much longer "real" exposure times. I have experimented with this a bit, but most of my pictures are taken without it.
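
For the curious, here is a bare-bones sketch of what "align and sum" means. Real stacking programs like K3CCD Tools do sub-pixel registration on star positions; this simplification just finds the whole-pixel shift between each frame and the first via FFT cross-correlation, and assumes the frames are already loaded as 2-D numpy arrays:

    import numpy as np

    def frame_shift(ref, frame):
        # Cross-correlate via FFT; the correlation peak gives the offset
        # needed to line this frame up with the reference frame.
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        # Map wrap-around offsets back to signed shifts.
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def stack(frames):
        ref = frames[0].astype(float)
        total = ref.copy()
        for frame in frames[1:]:
            dy, dx = frame_shift(ref, frame.astype(float))
            total += np.roll(frame.astype(float), (dy, dx), axis=(0, 1))
        # Averaging N aligned frames cuts the random noise by sqrt(N).
        return total / len(frames)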

[Photo: a single 30-second frame of M51]
[Photo: the sum of 60 30-second frames, showing much lower noise]

It is also worth noting that although most of my images are in color, the camera I use actually images in black-and-white. "Color" imaging chips, like the ones in typical digital cameras, have a tiny red, green or blue filter in front of each pixel. This allows true-color images to be built from the chip output, but substantially reduces the chip's sensitivity: the red filter blocks all blue and green light, the green filter blocks red and blue, and so on, so sensitivity is in theory cut by two-thirds. The approach I prefer is called "LRGB" imaging ("Luminance"/"Red"/"Green"/"Blue"), in which I combine a high-resolution monochrome image with lower-resolution images taken through external color filters (I have a "filter wheel" which goes between the camera and telescope and lets me flip through the filters easily). I then use a program called Registar to automatically align the separate frames (note to other software geeks: Registar has been shipping for at least 6 years and is only on version 1.0.7!). By the way, a common question is whether objects are really as colorful as portrayed in my images. In most cases they are, but there is an important caveat: human beings are effectively colorblind at low light levels, so if you look at the objects pictured through a small scope, expect to see little or no color. Having said that, emission nebulae like the Orion and Veil nebulas are brilliantly colorful (given a sufficiently long exposure time), while galaxies really are pretty close to grey, and I usually need to turn the color saturation way up to see any color. Star clusters are usually somewhere in between.
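
Here is a rough sketch of the LRGB combine itself; this is my own simplification, not what Registar or any particular program does. It assumes four already-registered frames scaled to the 0..1 range, and takes the color ratios from the filtered frames but the brightness from the sharper, higher-SNR luminance frame:

    import numpy as np

    def lrgb_combine(L, R, G, B, eps=1e-6):
        rgb = np.stack([R, G, B], axis=-1)
        brightness = rgb.mean(axis=-1, keepdims=True)  # per-pixel RGB brightness
        ratios = rgb / np.maximum(brightness, eps)     # the color information
        # Re-light the color ratios with the monochrome luminance frame.
        return np.clip(ratios * L[..., None], 0.0, 1.0)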

The telescope I use is a mid-eighties vintage Televue "Genesis" refractor. It is fairly small: the aperture (diameter of the main lens) is only 4" and the focal length (tube length, approximately) is 20". For astrophotography, the most important statistic is really the scope's "f-ratio", which is the focal length divided by the aperture. A low f-ratio means more light gathered per unit time, regardless of the telescope's size. The Genesis has a focal ratio of about f/5, which is about as low as you can go without particularly exotic and expensive optics. A larger scope would give me a smaller field of view (i.e., higher effective magnification), but not necessarily more light-gathering ability. The Genesis has a very large field of view, larger than the full moon, so I concentrate on photographing large, faint objects. The Genesis was designed by a guy named Al Nagler (whom I have spoken to on the phone) and built in his factory in upstate New York. Televue has since replaced it with the "NP-101", which is supposed to be even better but is extremely expensive.
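
The arithmetic behind those numbers is simple enough to check (the 8.6-micron pixel size is my assumption for the ICX423AL sensor; everything else comes from the figures above):

    aperture_mm = 4 * 25.4        # 4-inch objective lens
    focal_length_mm = 20 * 25.4   # roughly 20-inch focal length
    print(f"f-ratio: f/{focal_length_mm / aperture_mm:.0f}")  # -> f/5

    pixel_um = 8.6                # assumed pixel pitch of the CCD
    arcsec_per_pixel = 206.265 * pixel_um / focal_length_mm
    fov_arcmin = 752 * arcsec_per_pixel / 60
    print(f"{arcsec_per_pixel:.1f} arcsec/pixel, "
          f"field about {fov_arcmin:.0f} arcmin wide")
    # About 44 arcminutes across: wider than the roughly 31-arcminute full moon.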

As I mentioned earlier, the camera I use is called a Starlight Express MX-716. Like my scope, it is a fairly old design and was purchased used. It is not much different from a webcam, except that:

It was designed by a guy named Terry Platt and is built in Great Britain. Starlight Express still makes a version of this camera; the main difference is that the RS232 interface has been replaced by a much faster USB 2.0 connection.

Anyway, this has been a very high-level overview of the process involved; it is in no way a tutorial. It took me several years to get good images fairly consistently, and I still botch a lot of them. For much more information about astrophotography in general, click here.