Pixel-based detectors, particularly optical CCDs like those used in the SDSS camera, are ubiquitous in astronomy. Is there any dead area on the detectors? Not the obvious gaps between each individual sensor, but on the sensor itself. That is, does a typical detector have any gaps between the pixels from microscopic wires laid on the front, alternating doping regions in the silicon, or just areas where a photon can hit but the resulting photo-electron is unlikely to be collected? What fraction of the area is dead?
An astronomical CCD is in principle sensitive across its entire active surface; there are no physical gaps between pixels. The pixels themselves are controlled and maintained by the underlying electronic circuitry, which sets up potential barriers between individual pixels. In practice, there are variations in the actual sensitivity within individual pixels: sensitivity is highest in the center and falls off towards the edges, although it never goes to zero anywhere within the pixel.
This paper (PDF) is a recent analysis of a (frontside-illuminated) CCD; Figure 13 shows the measured pixel response functions (how sensitive an individual pixel is across its surface), and Figures 14 and 15 show the photometric sensitivity map (what the sensitivity is for uniform illumination). In the latter case, you can see that the sensitivity never varies more than about 20% or so, depending on the wavelength of the light. Some of the variation corresponds directly to wiring within the CCD absorbing some of the light, but this has a very minor effect on the sensitivity.
The comparison made in Florin Andrei's answer to a consumer-camera-style CMOS sensor, as shown in the image, is misleading, both because CMOS pixels are physically somewhat more discrete in ways that CCD pixels are not, and because astronomical CCDs do not have per-pixel filters glued on top of them.
(The argument that "there has to be a narrow slice of silicon in between pixels to prevent short-circuits" is not correct for CCDs; the electrical separation between pixels is maintained by the underlying electronics, not by any material barrier. In fact, you have to be able to create temporary "short-circuits" between adjacent pixels in a column in order to transfer the accumulated electrons during readout. That's what the "charge-coupled" in "charge-coupled device" means.)
CCDs, CMOS, and KIDS
Another good overview is given by the article "Charge Coupled Devices in Astronomy" by Craig Mackay, in Annual Review of Astronomy and Astrophysics, vol. 24, p. 255 (1986).
The photoelectric effect is fundamental to the operation of a CCD. Atoms in a silicon crystal have electrons arranged in discrete energy bands. The lower energy band is called the Valence Band, the upper band is the Conduction Band. Most of the electrons occupy the Valence band but can be excited into the conduction band by heating or by the absorption of a photon. The energy required for this transition is 1.26 electron volts. Once in this conduction band the electron is free to move about in the lattice of the silicon crystal. It leaves behind a "hole" in the valence band which acts like a positively charged carrier. In the absence of an external electric field the hole and electron will quickly re-combine and be lost. In a CCD an electric field is introduced to sweep these charge carriers apart and prevent recombination.
Thermally generated electrons are indistinguishable from photo-generated electrons. They constitute a noise source known as "Dark Current" and it is important that CCDs are kept cold to reduce their number.
1.26eV corresponds to the energy of light with a wavelength of 1 micron. Beyond this wavelength silicon becomes transparent and CCDs constructed from silicon become insensitive.
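The relationship between the transition energy and the cutoff wavelength quoted above is just the photon energy formula, λ = hc/E. A quick sketch to check the numbers:

```python
# Cutoff wavelength for a silicon detector, from the transition energy
# quoted above (1.26 eV). A photon with a longer wavelength carries too
# little energy to lift an electron into the conduction band.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron volt

def cutoff_wavelength_nm(gap_eV):
    """Longest wavelength a photon can have and still free an electron."""
    return h * c / (gap_eV * eV) * 1e9

print(round(cutoff_wavelength_nm(1.26)))   # ~984 nm, i.e. about 1 micron
```

This is why silicon sensors go blind just beyond 1 micron, as the text notes.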
The Conveyor Belt Analogy
First, watch this little animation; we'll go over the steps in detail in a moment.
First, we open the shutter and let rain (light) fall on the array, filling the buckets (pixels). At the end of the exposure, we close the shutter.
Now, shift the buckets along the conveyor belts.
Dump the first set of buckets into the special conveyor belt (the serial register) at the end of the array.
Now, leave the ordinary conveyor belts fixed for a moment, and start to shift the special conveyor belt so that the first bucket empties into the graduated cylinder (readout amplifier).
Record the amount of water (charge) in this first bucket, then shift the special conveyor belt again to bring the second bucket to the graduated cylinder.
Record this bucket's contents, too, and then repeat until we've read all the buckets on the special conveyor belt.
Now, with a fresh set of empty buckets on the special conveyor belt, move the main conveyor belts forward again to bring the next row of buckets to the edge of the array.
Dump the next set of buckets into the special conveyor belt.
Now, leave the ordinary conveyor belts fixed again, and start to shift the special conveyor belt so that its first bucket empties into the graduated cylinder.
Record the amount of water in this first bucket, then shift the special conveyor belt again to bring the second bucket to the graduated cylinder.
Record this bucket's contents, too, and then repeat until we've read all the buckets on the special conveyor belt.
Repeat this procedure until all the buckets have been shifted to the special conveyor belt and dumped one by one into the graduated cylinder.
Below is a view of the procedure in action, showing a small image gradually being shifted and transferred down the chip as it is read out.
Note that as the image is shifted and read out, it disappears from the chip. This means that in CCDs, all readouts are destructive: they destroy the pattern of electrons (based on the pattern of light) as the information is gathered. That means there is no way to check on the progress of a long exposure; if you guessed the wrong exposure time, you might saturate your target and have to start all over again.
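The bucket-brigade procedure above can be sketched in a few lines of Python (a toy model, not real device physics): the image is a 2-D array, each row is shifted toward a serial register in turn, and every pixel passes one at a time through the single output amplifier. The destructive nature of the readout shows up directly: the image array is empty afterwards.

```python
import numpy as np

def read_out(image):
    """Destructively read a CCD-style array: dump each row into the
    serial register, then clock the register pixel-by-pixel through
    the single output amplifier."""
    nrows, ncols = image.shape
    output = []
    for _ in range(nrows):
        serial_register = image[-1].copy()   # dump bottom row into the serial register
        image[1:] = image[:-1]               # shift all other rows down one step
        image[0] = 0                         # empty buckets enter at the top
        for _ in range(ncols):
            output.append(serial_register[-1])      # amplifier measures one pixel
            serial_register[1:] = serial_register[:-1]
            serial_register[0] = 0
    return np.array(output).reshape(nrows, ncols)

img = np.arange(6.0).reshape(2, 3)
out = read_out(img)
print(img.sum())   # 0.0 -- the charge pattern is gone after readout
```

All six pixel values come out (in readout order), but the original array is left full of zeros, just as the animation shows.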
But how does this all happen? How are electrons moved from one location to another inside the silicon?
Structure of a CCD
The diagram below shows a small section (a few pixels) of the image area of a CCD. This pattern is repeated.
Every third electrode is connected together. Bus wires running down the edge of the chip make the connection. The channel stops are formed from high concentrations of Boron in the silicon.
Below the image area (the area containing the horizontal electrodes) is the "serial register". This also consists of a group of small surface electrodes, with three electrodes for every column of the image area. Once again, every third electrode in the serial register is connected together.
In the photomicrograph of an EEV CCD below, the serial register is bent double to move the output amplifier away from the edge of the chip. This is useful if the CCD is to be used as part of a mosaic. The arrows indicate how charge is transferred through the device. Click on the image to see a bigger photograph.
Electric Field in a CCD
The n-type layer contains an excess of electrons that diffuse into the p-layer. The p-layer contains an excess of holes that diffuse into the n-layer. This structure is identical to that of a diode junction. The diffusion creates a charge imbalance and induces an internal electric field. The electric potential reaches a maximum just inside the n-layer, and it is here that any photo-generated electrons will collect. All science CCDs have this junction structure, known as a "Buried Channel". It has the advantage of keeping the photo-electrons confined away from the surface of the CCD where they could become trapped. It also reduces the amount of thermally generated noise (dark current).
During integration of the image, one of the electrodes in each pixel is held at a positive potential. This further increases the potential in the silicon below that electrode and it is here that the photoelectrons are accumulated. The neighboring electrodes, with their lower potentials, act as potential barriers that define the vertical boundaries of the pixel. The horizontal boundaries are defined by the channel stops.
Photons entering the CCD create electron-hole pairs. The electrons are then attracted towards the most positive potential in the device where they create "charge packets". Each packet corresponds to one pixel.
Transferring the charge from pixel to pixel
Now, watch as the voltages supplied to the electrodes change, and the electron packets move in response.
At the end, we have moved all the charge packets over one pixel, and the voltages are back where they started. By repeating the sequence of voltage changes, we can move the packets another pixel down the column.
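The three-phase clocking described above can be sketched as a toy model (illustrative only, not real device physics): each pixel has three electrodes, every charge packet sits under whichever electrode is held high, and advancing the high phase one electrode at a time walks every packet down the column in lockstep.

```python
# Toy three-phase clocking: electrode positions are numbered down the
# column, three per pixel. Each full pixel shift is three phase steps.
def clock_one_pixel(packet_positions, n_electrodes):
    """Advance every charge packet by one full pixel (three phase steps)."""
    for _ in range(3):                      # phase 1 -> 2 -> 3 -> back to 1
        packet_positions = [(p + 1) % n_electrodes for p in packet_positions]
    return packet_positions

positions = [0, 3, 6]        # packets under electrode 0 of pixels 0, 1, 2
positions = clock_one_pixel(positions, n_electrodes=12)
print(positions)             # [3, 6, 9]: every packet has moved one pixel down
```

The key point the model captures is that all packets move together; the voltages, not any mechanical structure, carry the charge along.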
Frontside vs. backside illumination
Now, exactly how should we orient the chip? In order to corral the electrons properly, the electrodes must sit close to the region in which photons are absorbed. The simplest way to make this happen is to let the photons fly THROUGH the electrodes, from "above" the chip. This is called a front-side illuminated design.
Image courtesy of Vik Dhillon
Of course, that means that some of the photons may bounce off the electrodes, so this simple design throws away some of the precious light from our targets.
We can avoid that loss if we turn the chip upside-down, so that the electrodes are underneath the silicon, and allow photons to shine onto the "back" of the chip.
Image courtesy of Vik Dhillon
A backside-illuminated chip will have higher quantum efficiency, but suffers from a couple of annoying issues:
- the chip must be made THIN, so that light is absorbed close to the electrodes
- but thinning a chip is difficult; some chips are ruined as they are slimmed down (see this document by Michael Lesser for some details)
- and a thin chip is so fragile and flimsy that it is easily broken or damaged
- and thin chips tend to suffer from fringing due to interference between the layers of material
- due to all of these factors, back-illuminated chips are more expensive
But for some applications, all that extra hassle and expense may be worthwhile:
Chart of QE for different chips courtesy of Apogee Instruments
The basic CMOS
Complementary Metal Oxide Semiconductor (CMOS) imaging sensors are very similar to CCDs in most ways.
- based on silicon -- same effective wavelength range
- one photon turns into one electron
- very similar manufacturing process
The big difference is how the pool of electrons knocked free within each pixel is turned into a signal. In CCDs, the entire pool is shifted across the chip, from one pixel to the next, before finally reaching a single amplifier, where the electrons are finally "counted". Among the important consequences: it can take a long time to read out a big chip, since millions of pixels must all be processed by a single amplifier, and the pool of electrons from each pixel is discarded after having been measured.
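The first of those consequences is easy to make concrete with a back-of-envelope estimate (the pixel count and amplifier rate below are illustrative assumptions, not figures from any particular camera):

```python
# Back-of-envelope CCD readout time: every pixel must pass through
# the one output amplifier, so readout time scales with pixel count.
pixels = 4096 * 4096          # a hypothetical 16-Mpixel chip
amp_rate = 1e6                # assume 1 Mpixel/s through the single amplifier
readout_seconds = pixels / amp_rate
print(round(readout_seconds, 1))   # 16.8 -- a long wait between exposures
```

Doubling the chip's linear size quadruples this wait, which is why large CCD mosaics often use several amplifiers reading out in parallel.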
But in CMOS devices, each pixel is not simply a tiny volume of silicon inside which electrons sit and wait. Instead, each pixel is manufactured with its own amplifier, allowing it to read its own charge without having to move the electrons. This additional circuitry is the origin of the term active pixel sensor, which is sometimes applied to these devices.
Image courtesy of Stefano Meroli's blog
That means that a CMOS chip is more complicated to create, certainly, but does give it some advantages over the CCD (as well as a few disadvantages):
- quick readouts: Since each pixel has its own amplifier, there's no need for each pixel's charge to wait in line for a single amplifier. I'm currently involved with a group which is using a large CMOS detector (over 2 Mpixels) to make very short exposures: 0.5 seconds.
- non-destructive reads: The charge from one pixel doesn't have to be discarded to make room for the next pixel's electrons. If one desires, one can design a CMOS chip which reads out each pixel repeatedly, without removing the charge each time. This allows one to use different algorithms to determine the charge in a pixel, which might lead to better results; it also permits one to check an image in the middle of a long exposure.
- no Charge Transfer Efficiency (CTE) issues: As one transfers a pool of electrons across a CCD chip, some may leak out along the way; some stray electrons may jump into the pool from other locations, too. These errors lead to a systematic effect in CCD photometry known as CTE error. If one doesn't move the pool, then one suffers no CTE issues.
There are a few drawbacks with CMOS devices, of course. For example, because each pixel has its own amplifier, there can be small differences in the conversion of charge to voltage (and hence to output signal) from one pixel to another. A CMOS sensor which is front-illuminated will have a smaller light-collecting area than a similar CCD, because the additional circuitry blocks some of the light which would otherwise land in each pixel.
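The pixel-to-pixel gain differences mentioned above are routinely calibrated out with a flat-field frame: expose the sensor to uniform illumination, record the resulting pattern, and divide it out of science frames. A minimal sketch, with a made-up 2% gain scatter:

```python
import numpy as np

# Sketch of flat-field correction. The 2% per-pixel gain scatter here
# is an invented illustration, not a measured CMOS property.
rng = np.random.default_rng(0)
gains = 1.0 + 0.02 * rng.standard_normal((4, 4))   # per-pixel gain map

flat = 10000.0 * gains                  # exposure of a uniform light source
scene = 500.0 * np.ones((4, 4))         # a uniform patch of sky
raw = scene * gains                     # what the sensor actually records
corrected = raw / (flat / flat.mean())  # divide by the normalized flat

print(corrected.std() < raw.std())      # True: the corrected frame is flatter
```

The same trick works for CCDs, of course; it corrects any fixed multiplicative pattern, whatever its origin.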
Still, if you look at the (low-cost) amateur astronomical camera market these days, you will see little difference between the sensitivity of CCD and CMOS cameras.
Image modified slightly from Pavel Pech's blog
These days, CMOS chips appear to be favored by big manufacturers for the most common applications: ordinary digital cameras, smartphones, industrial imaging. It is likely that CMOS sensors will continue to improve over the next decade.
KIDS - the sensor of the future?
One common feature of both CCDs and CMOS sensors -- when they are used in the optical regime -- is that they convert each optical photon to a single electron. Whether the photon is blue, green, or red, it knocks one electron free to roam through the silicon crystal.
Recently, some astronomers have touted the possibilities of a very different sort of detector, one which is based on a completely different sort of technology. Microwave Kinetic Inductance Detectors (MKIDS, or KIDS for short) were originally developed with millimeter and sub-millimeter radio waves in mind, but they can be adapted to measure optical and near-IR photons as well. One reason that some scientists would like to use KIDS is that they CAN provide some energy information for each photon that strikes them. Just think: a digital sensor with linear response and short readout time, which provides both an image and a spectrum at the same time!
Figure 2 taken from Romani et al., ApJ 521L, 153 (1999)
- start with a superconducting material, shaped into a special form, and made part of a resonant circuit. Measure carefully the response of the circuit to an input signal which is close to resonance
- allow a photon to fall onto the superconductor
- the photon's energy breaks apart Cooper pairs of electrons, creating a brief flurry of quasi-particles; the larger the photon's energy, the larger the number of quasi-particles created
- the presence of quasi-particles modifies the properties of the circuit, changing its resonant frequency slightly; the larger the number of quasi-particles, the larger the change in frequency
- measure the response of the circuit to the input signal -- it will now be different
- after a short time, the quasi-particles re-combine into Cooper pairs of electrons, and the circuit returns to its usual status
- go to step 2
The lifetime of the quasi-particles can be very short -- between 10^-3 and 10^-7 seconds -- which means that IF we can read out our device very rapidly, we might be able to detect each photon as it strikes the detector.
One clever trick is to design each pixel of a device so that it has a slightly different resonant frequency; by sending a "comb" of many, many slightly different frequencies into the device, one can probe each pixel simultaneously. "Reading" all those pixels then becomes a matter of examining the output at each frequency.
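The comb readout can be sketched as a toy model (all numbers below are invented for illustration -- real MKID frequencies, shifts, and readout electronics are far more involved): each pixel is a resonator at its own design frequency, a photon hit drags that pixel's resonance down by an amount that scales with photon energy, and comparing the measured comb to the design comb yields both position and energy.

```python
# Toy MKID comb readout. Design frequencies and the shift-per-eV
# scale are assumed, illustrative values only.
pixel_f0 = {0: 4.000e9, 1: 4.002e9, 2: 4.004e9}   # design frequencies, Hz
shift_per_eV = 5e4                                 # assumed Hz of shift per eV

def detect(measured):
    """Compare measured tones to the design comb; report which pixels
    were hit and the inferred photon energy for each."""
    hits = {}
    for pix, f0 in pixel_f0.items():
        df = f0 - measured[pix]               # downward shift of this tone
        if df > 0:
            hits[pix] = df / shift_per_eV     # inferred photon energy, eV
    return hits

# Suppose pixel 1 absorbed a 2.5 eV (green) photon: its tone drops 125 kHz.
measured = {0: 4.000e9, 1: 4.002e9 - 2.5 * shift_per_eV, 2: 4.004e9}
print(detect(measured))   # {1: 2.5}
```

This is the sense in which one device delivers an image and a spectrum at once: the tone identifies the pixel, the size of the shift identifies the photon's energy.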
By modifying the composition and thickness of the materials used to build the micro-circuits within each pixel, one can change the frequency response of the device to suit one's needs. With suitable choices for the materials, a single device can respond to light over a range of one or two decades in frequency (or wavelength).
Let's look at one particular device: ARCONS (the Array Camera for Optical to Near-infrared Spectrophotometry). This detector looks sort of like a typical CCD camera at first glance
Figure 2 taken from O'Brien et al., IAUS, 285, 385 (2012)
but if you zoom in, you see that each pixel has a lot inside:
Each of these pixels is about 222 microns on a side, but the section labelled "TiN Inductor" is the only region of the pixel which responds to light.
The total array size is 44 x 46 pixels -- so it's not all that big. And not all of the pixels work properly.
Figure 10 taken from O'Brien et al., IAUS, 285, 385 (2012)
The quasiparticle lifetime in ARCONS is about 50 microseconds, and ARCONS should be able to record the time of arrival of each photon to about 2 microseconds.
ARCONS achieves a spectral resolution of roughly R = 10 in the visible part of the spectrum. What does that mean in practice?
Perhaps this figure will give you a clue.
The quantum efficiency of ARCONS is shown below -- compare the dotted black line (ARCONS only in the lab) to that of other sensors.
Figure 12 taken from O'Brien et al., IAUS, 285, 385 (2012)
Compare this image of Arp 147 taken by ARCONS on the 200-inch Mt. Palomar telescope to that taken by HST in the inset. The ARCONS image was created by combining 36 exposures of 1 minute each, and using the energy information from each photon to create an RGB color code. Not bad, is it?
Figure 14 taken from O'Brien et al., IAUS, 285, 385 (2012)
Now, in my opinion, there's just one problem looming over the development of KIDS for astronomy. That little issue of superconductivity means that the detector must be cooled below 2 kelvin. As you can see below, machines that can produce temperatures this low tend to be both large and heavy, not to mention expensive.
Parts of a dilution refrigerator for a detector developed at Caltech
For more information
- You can find information on CCDs from lectures in other courses I've taught:
- Introduction to CCDs
- CCDs: the dark side
- Measuring the gain of a CCD
Copyright © Michael Richmond. This work is licensed under a Creative Commons License.
Modern Industry: Neutron Basic Interactions, Sources and Detectors for Materials Testing and Inspection
Microchannel plate detectors (MCPs)
Microchannel plates are currently being applied for high-resolution neutron counting, with spatial resolution as small as 20 μm and correspondingly fine temporal resolution.
MCPs upon inspection look like a thin glass disk perforated with microscopic holes. They typically consist of several million microchannels or capillaries fused together into a monolithic disk-like structure (a section of which is shown in Fig. 24 ). Each capillary is an independent, microscopic channel electron multiplier ( Fig. 24 ).
Fig. 24. Expanded view and cut of a microchannel plate detector where the thin glass matrix is doped with the neutron-absorbing nucleus 10B. Reaction products recoiling out of the glass into the evacuated microchannels induce an avalanche of electrons that are accelerated by an electric potential put across the plate, and the electrons exit the structure at the bottom. Here, the electrons are registered by auxiliary electronic equipment.
The glass is doped with a neutron-absorbing material, typically either 6Li or 10B. Upon reaction with an incoming thermal or epithermal neutron, the emitted alpha and triton particles penetrate the thin glass walls and enter the evacuated capillaries, where they hit the capillary wall (mostly two times). Electrons are kicked out from an electron-donating material on the wall surface of the capillaries. These electrons are accelerated in a potential put across the microchannel plate, hitting the wall several times and kicking out new electrons in turn. An avalanche of electrons exits the channels with a current sufficiently large to be registered by backing electron-detecting devices. MCPs are able to amplify extremely weak signals by factors of 10^3–10^7, yielding output signals that can be easily handled by auxiliary electronic equipment.
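The avalanche gain builds up geometrically: if each wall strike releases g secondary electrons on average, and an electron makes n strikes on its way down the channel, the total gain is roughly g**n. The values of g and n below are illustrative assumptions, not measured MCP parameters:

```python
# Geometric electron multiplication in a microchannel. g and n are
# invented example values; real gains depend on channel geometry,
# wall coating, and the applied voltage.
def mcp_gain(g, n):
    """Total gain after n wall strikes, each yielding g secondaries."""
    return g ** n

print(mcp_gain(3, 7))    # 2187, in the 10^3 range quoted above
print(mcp_gain(5, 10))   # 9765625, approaching the 10^7 end of the range
```

This is why modest changes in channel length (more strikes) or voltage (more secondaries per strike) sweep the gain across four orders of magnitude.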
The microchannels are usually of a circular or square geometry with 6–12.5 μm wide pores and 7.2–15 μm center-to-center spacing. The thickness of the disk is typically 40–250 times the pore diameter (the L/D or aspect ratio), resulting in typical wafer thicknesses of only 0.4–1 mm. MCPs can be fabricated with an active area ≥ 10 × 10 cm².
Although the glass mixture can contain only a limited proportion of neutron-absorbing atoms, this deficiency is compensated in MCP collimators by the possibility of producing structures with very large aspect ratios. The ratio of pore length to pore width, L/d, can be as high as 250:1 with current technology.
These detectors have several areas of use, especially in neutron scattering, radiography, and tomography examinations, since they provide position-sensitive detection with spatial resolution in the 20–50 μm range.
4.2 Megapixel, Single-Shot Color,
Self-Guiding, CCD Camera
Update November 26, 2007: Our special buy of KAI-4020CM CCDs is now exhausted by the orders we have received, and we are sold out of the special (limited edition) ST-4000XCM camera. Update March 31, 2008: We will continue to offer the ST-4000XCM camera with the newer KAI-4022CM CCD as a regular ST model camera at a US list price of $4495.
The ST-4000XCM is a new addition to the "ST" line of self-guiding cameras. It uses a large 4.2 megapixel KAI-4022CM color CCD. This size of CCD was previously available only in an STL camera body. The KAI-4022CM CCD has 2048 x 2048 pixels at 7.4 microns square. This is the same size CCD used in the STL-4020 camera. The KAI-4022CM comes in a single class without column defects. The KODAK DIGITAL SCIENCE KAI-4022CM is a high-performance multi-megapixel image sensor designed for a wide range of scientific, medical imaging, and machine vision applications. The 7.4 micron square pixels with microlenses provide high sensitivity. The vertical overflow drain structure provides antiblooming protection, and enables electronic shuttering for precise exposure control down to 0.001 seconds. Other features include low read noise, low dark current, negligible lag and low smear. This CCD uses a high gain output amplifier that reduces the read noise by almost half compared to previous versions. Our preliminary tests of this CCD installed in a prototype ST camera body show a read noise of less than 8e- rms and a dark current of less than 0.1e- at 0 degrees C. The KAI-4022CM CCD is a progressive scan detector with an active image area of 15.2 x 15.2 mm.
- The largest CCD available in an ST series camera - 100% larger than the ST-2000XCM
- Single-Shot Color CCD with 300X antiblooming protection
- 4.2 million pixels: 2,048 x 2,048 at 7.4 microns
- High resolution on smaller telescopes with 7.4 micron square pixels
- Built-in TC-237H autoguider for self-guiding
- External self-guiding with the optional Remote Guide Head containing a TC-237H CCD
- Mechanical shutter for automatic dark frames
- Electronic shutter for exposure times to 0.001 second
- Good sensitivity
- Low read noise
- Low dark current
- Thermoelectric cooling with air only or with water circulation
- Fast downloads - 10 seconds for a high resolution full frame image, 2 seconds for quarter frame
- Professional software
- Easy to use
- Full complement of optional custom accessories
- Low price
- SBIG quality and support
This generous 4.2 Megapixel CCD is twice the size of the KAI-2020CM used in the ST-2000XCM camera. It is approximately 50% larger than the KAF-3200ME CCD used in the ST-10XME. With a diagonal measurement of 21.5 mm, it is the largest CCD currently available in the ST series camera body. Because of the size of the sensor package, this camera is only available in single-shot color in the ST body, without filter wheel. For monochrome with 2" filters, there is no significant savings over the STL model that includes a 2" filter wheel.
The ST-4000XCM camera comes with a built-in, custom designed, MAR coated, UV+IR blocking filter for optimum color balance in astronomical imaging and improved performance with camera lenses and short fast refractors. The characteristics of this custom filter are essentially the same as the separate Baader UV+IR blocking filter ("Luminance Filter") that we include with other single-shot color cameras. Since no external UV/IR blocking filter is required, using a camera lens is a simple matter of attaching a lens adapter. The built-in UV+IR filter helps shape the red cut-off but does not significantly attenuate the important wavelengths of H-alpha and [SII]. The UV+IR filter has better than 97% transmission at H-alpha. Additionally, the transmission curves of the RGB filters on the CCD itself happen to place the mostly unwanted wavelength of Sodium light pollution in the minimum gap between the red and green filters while passing H-alpha and [SII]. The peak red transmission is around 525nm. By way of comparison, typical unmodified DSLR cameras tend to have peak red transmission around 600nm with a significant fall off at H-alpha and almost no response to [SII]. This means that the DSLR is roughly twice as sensitive to Sodium light pollution as it is to H-alpha. The opposite is true for the ST-4000XCM where the curve of the red filter attenuates the Sodium line and transmits twice as much light near H-alpha (See charts below).
The ST-4000XCM supports the optional Remote Guiding Head and is fully compatible with the new AO-8 Adaptive Optics accessory. It has the standard ST camera heat exchanger with water cooling capability. With both a mechanical and electronic shutter, the ST-4000XCM can automatically take dark frames as needed with exposure times from 0.001 seconds to 1 hour. Self-guiding is possible with either the built-in TC-237H CCD or the optional Remote Guide Head with external TC-237H CCD.
Note: Absolute QE for the DSLR is estimated. See for example http://astrosurf.com/buil/d70v10d/spectro3.gif
For more information about single-shot color cameras, comparison data, image processing tips and some sample images from users click here.
A Natural Step-Up From a DSLR
We are sometimes asked by DSLR users considering moving up to a dedicated astronomical CCD camera what benefits they will get in one of our Astronomical CCD cameras compared to a less expensive consumer digital camera. The answer, simply, is sensitivity and performance. So what makes an astronomical CCD camera more sensitive and a better performer? The paragraphs below briefly address the various factors that answer this question:
DSLRs are designed to take pictures of terrestrial scenes in daylight conditions where there is typically plenty of natural light or artificial light provided by the photographer. Contrast, brightness and dynamic range in the scenes are high. Exposures are usually very short, fractions of a second up to several seconds. The noise inherent in the camera is generally not much of an issue because terrestrial scenes provide plenty of signal compared to any camera noise. Some consumer cameras have gone a step further and include a "noise reduction" mode where a dark frame is subtracted or other processing steps are taken in the camera to reduce unacceptable levels of noise for dark scenes that require longer exposures. Even with this enhancement, however, consumer digital cameras are generally limited to exposures of a few minutes before camera noise becomes a problem when used for astronomy.
Astronomical CCD cameras are designed from the beginning to take pictures of very faint objects at night against a dark background, the proverbial black cat in a coal bin. There is no possibility of adding artificial light to brighten the scene. Contrast, brightness and dynamic range are typically very low. Often, the objects are only a few percent brighter than the background. Light from the object is at a premium and single exposures up to an hour long are possible. The camera electronics are designed from the first step to the last to contribute as little noise to an image as possible within certain practical limits. The dominant source of noise from the camera in long exposures however is dark current in the CCD itself. This is thermally generated charge that can only be reduced by cooling the sensor.
Others have compared the sensitivity of cameras based on Kodak CCDs to the sensitivity of popular Canon and Nikon DSLR cameras. Johannes Schedler, known for his excellent DSLR astro images, compared the sensitivity of the STL-11000M camera to his Canon 10D and found the STL-11000M to be about 4 times as sensitive. Christian Buil independently compared Canon 10D, Canon 20D and Nikon D70 DSLRs to a monochrome Kodak 0402ME based CCD camera and found the 0402ME significantly more sensitive. From the Quantum Efficiency Curves of the KAI-4020M we see that the sensitivity of the KAI-4021M monochrome (and its smaller counterpart, the KAI-2020) is slightly higher than the KAI-11000M. Taking this into account, along with the effect of the color filters, it is fair to conclude that the ST-4000XCM is more sensitive than a typical DSLR camera in blue and green wavelengths, and significantly more sensitive in the red and H-alpha. The DSLR's deficiency in red sensitivity can be mitigated to a degree by modifying the camera, but even in these cases exposures are still limited to around 10 minutes due to the dark current in the CCD or CMOS sensor. The ST-4000XCM's thermoelectric cooling and low initial dark current allow the camera to expose for an hour at a time, if desired and conditions permit.
The first light image at right is a single, self-guided, 30 minute exposure taken through a TeleVue 4" f/5 refractor by Alan Holmes while testing the guiding performance of the prototype ST-4000XCM camera. The image is reduced to 25% for display.
In addition to quantum efficiency and spectral response, sensitivity to weak signal is improved as the noise from the camera is reduced. In the case of the KAI-4021CM CCD the read noise is exceptionally low, typically around 8e- rms. The dark current is also quite low, less than about 0.1 e- at 0 degrees C. DSLR camera manufacturers can take steps in their circuit designs to reduce noise from their electronics, but there is little they can do in circuit design to lower the dark current (and therefore the dark current noise) that is ever present in an image, because dark current is a property of the CCD itself. Dark current in an image increases in proportion to the length of the exposure. For terrestrial imaging, with short exposures, the dark current is so small that it can usually be ignored. However, for the longer exposures needed for astronomical imaging, dark current is typically the dominant source of noise from the camera. Fortunately, dark current falls rapidly as temperature drops: lowering the temperature of the device lowers the dark current. This is such an important factor for astronomical imaging that virtually every good dedicated astronomical CCD camera made has some provision for lowering the temperature of the CCD, even if it is just the addition of a heat sink to passively dissipate heat from the device. The ST-4000XCM camera has built-in thermoelectric cooling that will reduce the temperature of the CCD significantly with air only, and even more with water circulation. The KAI-4021CM's dark current is reduced by 50% for every drop of 6 degrees C. Cooling the CCD by 30 to 40 degrees below ambient lowers the dark current by approximately 100X. This is but one significant advantage of an astronomical CCD camera and a primary limitation with a consumer DSLR camera.
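The halving rule quoted above can be checked directly: if dark current drops by half for every 6 degrees C of cooling, the reduction factor after cooling by ΔT degrees is 2^(ΔT/6).

```python
# Dark current reduction from cooling, using the halving rule quoted
# in the text: 2x reduction per ~6 degrees C.
def dark_current_reduction(delta_T_C, halving_step=6.0):
    """Factor by which dark current falls after cooling by delta_T_C."""
    return 2 ** (delta_T_C / halving_step)

print(round(dark_current_reduction(30)))   # 32x for 30 C of cooling
print(round(dark_current_reduction(40)))   # 102x -- the "approximately 100X"
```

So the "30 to 40 degrees below ambient gives roughly 100X" figure in the text corresponds to the upper end of that cooling range.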
Longer exposures require guiding. The best images of dim deep space objects you will see are the result of relatively long exposures, usually many minutes up to several hours of total exposure time. Virtually every telescope mount made for amateur astronomy requires guiding corrections during long exposures in order to prevent stars from appearing as streaks instead of points. The need for this depends on the focal length one is using, the resolution desired in the final image, and the quality of the mount. But in general, guiding is required for best results when imaging dim objects, even with the best mounts. When imaging with a DSLR, a separate guider may be added to accomplish this task, with the added cost and complexity that entails: a separate guide scope, a good mounting system, and controlling two cameras at the same time. The ST-4000XCM camera, on the other hand, has a second CCD built into the camera next to the imaging CCD. This TC-237H guiding CCD is 657 x 495 pixels at 7.4 microns square. It is the same CCD that we previously used as the main CCD in the ST-237 camera and STV camera/autoguider. This guiding CCD is controlled with the same software that controls the camera. There is nothing else to buy, no external cables or mounting requirements, and since the built-in guiding CCD is looking through the same optical tube as the imaging CCD, it is the most accurate way to guide a long exposure, particularly when imaging at a relatively long focal length. If desired, however, an external guider is easy to add to the ST-4000XCM. Simply plug the optional Remote Guide Head into the ST-4000XCM and guiding can be accomplished using a shorter focal length optical arrangement such as the eFinder. The guiding tests seen in the pair of images above right were made with a mount intentionally set up with poor polar alignment in order to test the guiding capability of the Remote Head and eFinder combination under less than ideal conditions.
First an unguided 10 minute exposure was taken to show the extent of the error (left), then self-guiding was turned on and a second 10 minute exposure was taken to determine how well the error was corrected (right).
The ST-4000XCM camera fully supports the AO-8 Adaptive Optics system. SBIG, in concert with Benoit Schillings and Brad Wallis, introduced the first affordable Adaptive Optics system for imaging deep space objects with amateurs' CCD cameras. This was the AO-7. Since then, we have added a second generation AO-L for our Large Format cameras and the newest AO-8 replaces the older AO-7. The AO-8 is controlled by the on-board guiding CCD in the camera, or by the external guiding CCD in the Remote Guide Head. The motion of a guide star is monitored and corrections are made to an optical element in the light path to stabilize the image on the main CCD. This technique can result in improved resolution and sharper images. In the case of a poor mount, it can mean the difference between an unusable image and a good one. In the case of a good mount and good seeing it can mean the difference between a good image and a great one. In comparison to the old AO-7, the new AO-8 can follow a guide star that is drifting over a much wider range, about 40 arcseconds. Since most good mounts are capable of reducing periodic error to within this range, no connection to the mount is even required to guide long exposures with arcsecond accuracy using the AO-8 and the ST-4000XCM (or any ST camera). Moreover, the AO-8 is capable of making faster, better moves than can be made by trying to correct the mount. This is a tremendous advantage, and a convenience. The images above, right, are both 15 minute exposures of the same double star taken on the same night through the same scope, one right after the other, enlarged 300%. The image on the left is without the AO and the image on the right is with the AO turned on. In this case the AO improved the brightness (peak value) and resolution (FWHM) by approximately 30%.
After all is said and done, if a camera is difficult to use at the telescope, no matter how good the hardware, image quality may suffer. Getting focused, framing an object that is difficult to see, processing the results, etc., all go into the final result. Good software makes these and other tasks easier to get right. The DSLR is designed to be used in one's hands without extensive external control of the camera's functions when attached to a telescope at night. It can be done, usually by adding third party software such as Maxim. This may add cost that needs to be considered. Like all other SBIG cameras, the ST-4000XCM includes more excellent software than any other camera from any other manufacturer, plus some extras. Here is the software that you get with the ST-4000XCM:
CCDOPS version 5 is SBIG's full featured camera control software for Windows. Our own software package has evolved over the past 15+ years into one of the best, if not the best, basic camera control packages offered by any astronomical CCD camera manufacturer. This software controls all camera functions, self-guiding, autoguiding, color filter wheel and adaptive optics. It also has easy to use single-shot color processing for SBIG color cameras. CCDOPS is included free with all SBIG cameras and can also be downloaded from our web site.
CCDSoftV5 Professional astronomical software package. Jointly developed by SBIG and Software Bisque, CCDSoftV5 incorporates the camera control functions of CCDOPS, plus additional camera control features, along with its many other astrometry, image processing and telescope control functions. CCDSoftV5 is included with SBIG CCD cameras at no additional charge. Purchased separately it is $349.
TheSky Version 5 TheSky Version 5 is Software Bisque's well known planetarium and star charting software package that includes telescope control for many popular telescope models. This is an indispensable tool for planning an evening's imaging session. Field of view indicators for the imaging and tracking CCD, plus the ability to image link with CCDSoftV5, make TheSky one of the most useful planetarium programs you can own. TheSky version 5, level II, (full functioning demo) is included with all SBIG cameras at no additional charge. Upgrades from Software Bisque are available to higher levels and to the latest version 6.
EquinoX EquinoX software for Mac OS-X operating systems is a planetarium program that now includes SBIG camera control (check with Microprojects for specific camera models). A free copy of EquinoX will be provided on request to new SBIG camera purchasers running Mac OS-X systems. Purchased separately it is $49.
Trial Versions and Discounts
Maxim DL/CCD Maxim is the leading third party software package that supports SBIG cameras. As a special accommodation to SBIG camera purchasers, SBIG offers a discount certificate for Maxim DL/CCD at $50 off the list price with any new camera (30 day trial version available from Diffraction Limited).
CCDWARE Titles: A CD with free 60 day trial versions of all of CCDWARE's titles and a discount certificate offering 25% off the price of any of the titles included below:
The New CCD Astronomy is a great book for beginners. It walks you through every facet of CCD imaging from selecting a camera to taking and processing your first images. A very thorough examination of this new aspect of amateur astronomy. New camera purchasers get a $10 discount certificate. Purchased separately it is $49.
Each new ST-4000XCM camera system includes everything you need to:
This paper describes the initial configuration of STACEE, a new ground-based gamma-ray detector. Before addressing the detector itself we provide a brief summary of the current experimental situation in gamma-ray astronomy and show the motivation for STACEE and similar detectors.
Gamma-ray astronomy has recently become a very exciting area of research. During the lifetime of NASA's Compton Gamma-Ray Observatory (CGRO) from April 1991 to May 2000 and following the construction of ground-based detectors during the 1990s, the field experienced rapid growth. The amount and quality of data increased and theoretical understanding of the related astrophysics improved greatly.
Two instruments that were aboard the CGRO are of special interest to high energy astrophysics. The Burst and Transient Source Experiment (BATSE) amassed a large data set of enigmatic gamma-ray bursts (GRBs) and the Energetic Gamma Ray Experiment Telescope (EGRET) produced a catalog of over 200 high energy point sources. Six of these sources have been identified with pulsars within our galaxy and more than 70 have been found to be active galactic nuclei (AGNs) at great distances. Optical or radio counterparts for the remaining sources have yet to be identified.
The EGRET telescope detected gamma rays by converting them to e+e− pairs in a spark-chamber tracking device surrounded by an anti-coincidence shield which vetoed charged particles. This latter feature ensured an excellent signal-to-background ratio. EGRET could operate in this way because it was in orbit above the earth's atmosphere. Thus it was necessarily a small detector and could only look at sources below about 10 GeV. This upper limit was set by statistics; the exact value was determined by the intrinsic source strength, the steepness of its spectrum and the observing time allocated.
Most ground-based gamma-ray detectors use the atmospheric Cherenkov technique. Most resemble the Whipple telescope, which was the first experiment to convincingly detect gamma-ray sources. Typical Cherenkov telescopes detect gamma rays by using large, steerable mirrors to collect and focus Cherenkov light produced by the relativistic electrons in air showers resulting from interactions of high energy gamma rays in the upper atmosphere. This Cherenkov light is distributed on the ground in a circular pool with a diameter of 200–300 m. The Cherenkov telescopes need only capture a small part of the total pool to detect a gamma ray, so the telescopes have very large effective collection areas relative to satellite detectors, albeit at higher energy thresholds.
The energy threshold for Cherenkov telescopes is the result of a competition between collecting a small number of photons from a low energy shower and rejecting a large number of photons from the night-sky background. It is dictated by a number of important parameters, including photon collection area (A), detector field of view (Ω), integration time (τ) and the efficiency for getting a photoelectron from a photon hitting the primary collection mirror (ε). It can be summarized in the following approximate formula: E_th ∼ √(Ωτ/(Aε)). All of the parameters except A are limited by current technology (e.g. ε depends partly on the quantum efficiency of photocathodes) or by the physics of the air shower (e.g. τ cannot be less than the time over which shower photons arrive at the detector). The only parameter which is practical to control is the collection area of the instrument. For present generation imaging detectors, this is less than 100 m² and thresholds are typically greater than 300 GeV.
The energy range between EGRET and the Cherenkov telescopes remained unexplored until recently since there were no detectors sensitive to the region from 10 to 300 GeV. There are, however, new detectors being built or commissioned that use the solar tower concept. This concept is a variant of the air Cherenkov technique whereby the collecting mirror is synthesized by an array of large, steerable mirrors (heliostats) at a central-tower solar energy installation. The large effective size of the collecting mirror allows one to trigger at lower photon densities and therefore lower primary gamma-ray energies.
It should be noted that future satellite detectors such as GLAST will explore some of this region but will be statistics-limited above some energy which will depend on the source strength. Ground-based detectors will be able to complement low energy satellite measurements with data taken over shorter time intervals. This is important for detecting flaring activity in AGNs as well as pulsed emission from pulsars. With longer integration times they will be sensitive to fainter sources.
STACEE (Solar Tower Atmospheric Cherenkov Effect Experiment) is designed to lower the threshold of ground-based gamma-ray astronomy to approximately 50 GeV, near the upper limit of satellite detectors. Three other projects of a similar nature, CELESTE, SOLAR-2 and GRAAL, have also recently been built or are under construction.
STACEE will investigate established and putative gamma-ray sources. One of its principal aims is to follow the spectra of AGNs out to energies beyond those of the EGRET measurements to determine where the spectra steepen drastically. These cut-offs are expected since, although Whipple-type Cherenkov telescopes have the sensitivity to see many of the EGRET AGNs if their spectra continue without a break, they have not detected them. This effect could be due to cut-off mechanisms intrinsic to the source or to absorption effects between the source and the detector. A likely mechanism is pair production, wherein the high energy gamma ray interacts with a low energy photon (optical or infrared) from the extragalactic background radiation field. The infrared (IR) component of this field is difficult to measure directly but can provide information on the early universe since the photons are red-shifted relics of light from early galaxies. Correlating spectral cut-offs with source distance will help elucidate the nature of the IR field.
The field of high energy gamma-ray astronomy has recently been reviewed in three comprehensive articles.
How to Choose an Astro Camera
to Match Your Imaging Needs
Michael Barber / QHYCCD
Updated October 2020
Guidelines for selecting the best camera for your telescope and observing conditions:
As an introductory note, I actually wrote this article a few years ago when CCD cameras ruled the roost. How things have changed! At the request of a colleague, I've taken a fresh look at the issues and updated my recommendations based on current CMOS camera models. Many of the concepts apply equally well to CCD and CMOS cameras but I am more familiar with QHYCCD products now so I will use examples of our CMOS models to illustrate a few points in the article.
1. Cost / Size
The best camera for you isn’t always the biggest or most expensive. An expensive camera with pixels that are too big can be a waste of good money. An inexpensive camera with lots of small pixels may not be appropriate for your telescope and might suffer from poor sensitivity, again wasting money. A camera that is too big and heavy can tax your mount, while one that is too small will not give you much satisfaction.
Take some time to think about how you intend to use your camera and to learn about the various factors that can affect its performance for your intended use. As a very general rule, astro cameras cost more the bigger they are. So, the more you pay, the bigger the detector and the bigger the field of view it is capable of capturing in a single frame. There are exceptions, of course. Modern CMOS cameras now offer large sensors (generous field of view) with relatively small pixels (high resolution) and good sensitivity (high QE and low noise) at prices that are significantly lower than older CCD based cameras. Very good low noise, high QE cameras sell for less than $1000. When I first wrote this article, cameras with the KAF-8300 were ground-breaking: 8 megapixels, a reasonably sized sensor and all for about $2000. QHYCCD still makes a camera with this sensor, however it has clearly been eclipsed by the more recent plethora of cameras with Sony and other CMOS sensors. One popular example is the QHY163M, a 16-megapixel camera with a 4/3-inch sensor about the same size as the old 8300 that sells for about half the price of an 8300 based camera. And concurrent with this update, we are about to release the QHY492M, a back-illuminated 4/3-inch monochrome camera with even higher resolution, higher QE and lower noise than the 163M. The new 492M will be priced well under $1500.
After the overall size of the sensor, all else being equal, price is usually determined by the number of pixels and sensitivity of the sensor. That is, between two sensors of the same size, type and sensitivity, the sensor with the greater number of pixels will generally cost more. Conversely, between two sensors of the same size, type and number of pixels, the sensor with the greater sensitivity will generally cost more. Naturally, then, a large sensor with lots of pixels and high sensitivity costs the most, and since the sensor itself is often the most expensive component in a camera, the more expensive the sensor, the more expensive the camera.
If you intend to image primarily planets or bright objects or large fields of view through relatively fast optical systems, then sensitivity may not be so important a factor as the size of the sensor and the resolution. If, however, you intend to image small faint objects through a long focal length scope or if you intend to use narrowband or photometric filters, then the higher sensitivity of one of the full frame sensors may be an important factor in your decision.
Our advice to find the best balance of these factors is to set a budget for your camera system and then, based on your major interests, buy a camera within that budget that has the desired balance of sensor size, sensitivity and resolution to fit your telescope. Remember to add the cost of any accessories you intend to include like autoguider, filter wheels, etc. Some important sensor parameters are discussed in more detail below.
2. Field of View
The field of view (FOV) that your camera will see through a given telescope is determined by the physical size of the sensor and the focal length of the telescope. Note that this has nothing to do with the number of pixels. A sensor that has 512 x 512 pixels that are 20 microns square will have exactly the same field of view as a sensor with 1024 x 1024 pixels that are 10 microns square, even though the latter sensor has four times as many pixels. This is also why binning 2×2 or 3×3 affects resolution but does not affect the field of view of the sensor.
Larger sensors have larger fields of view at a given focal length. You can change the field of view of a sensor only by changing the focal length of the telescope. By using a focal reducer, you shorten the effective focal length of the telescope and increase the field of view (and make the image brighter in the process). By using a barlow or eyepiece projection you effectively lengthen the focal length of the telescope and decrease the field of view (and make the image dimmer in the process).
In order to determine the field of view for a given sensor, note the sensor’s length and width dimensions (or diagonal) in millimeters and use the following formula to determine the field of view for that sensor through any telescope:
(135.3 x D) / L = Field of View in arcminutes
where D is the length or width dimension of the sensor in millimeters, and L is the focal length of your telescope in inches. You can use the same formula to find the diagonal field of view if you know this dimension. So, for example, if you wanted to know the diagonal field of view of the QHY174 when attached to a 6″ f/7 telescope you would first determine the focal length of the telescope by multiplying its aperture, 6 inches, by its focal ratio, 7, to get its focal length, 42 inches. The diagonal dimension of the sensor is 13.4 mm. To calculate the field of view multiply 135.3 x 13.4 = 1,813 and then divide by the focal length of 42 inches = 43.2 arcminutes. Just big enough to capture a full disk of the sun or moon. By way of comparison, the diagonal field of view of the QHY600 through the same telescope would be 135.3 x 43.3 = 5,858 divided by 42 = 139.5 arcminutes, about three times the field of view. The table above shows the calculated diagonal field of view in arcminutes for several sensors at various focal lengths (without regard for the pixel resolution at any given focal length).
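The formula and the two worked examples above can be checked with a short Python sketch (the helper name is ours; the sensor diagonals and focal length are the ones used in the text):

```python
def field_of_view_arcmin(sensor_dim_mm, focal_length_in):
    """Field of view in arcminutes for a sensor dimension (mm)
    seen through a telescope of the given focal length (inches)."""
    return 135.3 * sensor_dim_mm / focal_length_in

# 6" f/7 telescope: focal length = 6 * 7 = 42 inches
fl = 6 * 7

# QHY174, 13.4 mm diagonal -> ~43.2 arcmin (roughly a full Sun/Moon disk)
print(round(field_of_view_arcmin(13.4, fl), 1))

# QHY600, 43.3 mm diagonal -> ~139.5 arcmin, about three times larger
print(round(field_of_view_arcmin(43.3, fl), 1))
```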
Once you know the field of view of the sensor then it helps to know how big the objects are that you intend to image. Celestial objects come in a very wide range of sizes. No one telescope / camera combination is appropriate for them all. Large objects are sometimes imaged by making a mosaic of several frames. Planets are best imaged with smaller cameras as the download times are shorter and the planets do not require a large field of view to see them in their entirety.
For comparison, a few popular objects are listed in the table above with their angular sizes. It is easy to see that there is no one telescope / camera combination that will nicely frame all of these objects. Some of the largest objects (like the North American nebula) are best imaged using a camera lens.
3. Sensitivity and Quantum Efficiency
Several things determine the ultimate sensitivity of your system such as focal ratio, pixel size and quantum efficiency of the detector. The quantum efficiency of the sensor is a measure of how efficient it is at converting incoming photons of light to electrons. The electrons in a pixel well are counted and determine the brightness value for that pixel. The more efficient the sensor is at converting photons to electrons, the greater the sensitivity on long exposures. A sensor with higher QE requires less time to acquire an image with equal signal to noise to one taken with a sensor having lower QE. The quantum efficiency of each of the new cameras is generally noted in the specification section of the camera page and a comparison chart of a few models is shown below.
When considering QE, however, keep in mind that it is only one factor in the overall sensitivity of your camera / telescope system. An optical system with a faster f/ratio is more sensitive to extended objects than a slower system. Each pixel also acts like a small aperture when imaging extended objects. But smaller pixels may yield higher resolution without loss of sensitivity if properly matched to your telescope. Using pixels that are too small will result in oversampling, that is, sampling the FWHM with more pixels than are necessary. Using pixels that are too big will result in undersampling. Oversampling can result in some loss of sensitivity while undersampling results in loss of resolution of detail.
The goal is to sample the FWHM (full width half maximum) of the best star images your seeing allows with 2 to 4 pixels. This will give the best balance of sensitivity and resolution. A good match of pixel size to focal length (see below) will optimize the sensitivity of the system without compromising resolution.
In general, try to choose as fast a system as you can manage that will yield an appropriate focal length for the pixel size of your camera and the sensor size of your camera. Or, if you already have a telescope with a fixed focal length and focal ratio, then select a camera with a pixel size to match. This is not an exact process. The telescope’s focal length can be adjusted using a focal reducer or barlow. The camera’s pixel size can be adjusted by binning 2×2 or 3×3 to effectively double or triple the size of the pixel. Often, the camera will be used on more than one telescope. So, one should not be too concerned about finding the perfect match of pixel size to telescope. But it can help to find the “middle of the road” for your focal length where changes in focal length or pixel size will expand the usefulness of the sensor / telescope combinations.
4. Resolution (Pixel Size and Focal Length):
Resolution comes in two flavors these days. In the commercial world of digital devices, the word resolution is often used synonymously with the number of pixels used in a device. You are used to seeing ads for scanners with a “resolution” of 2,000 x 3,000 pixels, etc. Computer monitors have various “resolution” settings which are basically the number of pixels displayed. We use the word here in its literal sense, which is the ability to resolve detail.
Typically, seeing limits the resolution of a good system. Seeing is often measured in terms of the Full Width Half Maximum (FWHM) of a star image on a long exposure. That is, the size of a star’s image in arcseconds when measured at half the maximum value for that star in a long exposure. As a general rule, one wants to sample such a star image with at least 2 pixels, preferably 3 or even more depending on the processing steps to be performed and the final display size desired. This means that if the atmosphere and optical system allow the smallest star images of say 2.6 arcseconds in diameter (FWHM) then one needs a telescope focal length and pixel size that will let each pixel see about 1/2 to 1/3 of 2.6 arcseconds.
In this example the individual pixel field of view should be on the order of 1.3 to 0.86 arcseconds per pixel for an optimum balance of extended object sensitivity to resolution of fine detail. If you aim for a pixel FOV of about 1 arcsecond per pixel, or a little less, through a given focal length, then you should be fine for the majority of typical sites and imaging requirements. If your seeing is better than typical, then you should aim for 0.5 or 0.6 arcseconds per pixel. If your seeing is much worse than typical, then you can get away with 1.5 or even 2 arcseconds per pixel. The table below shows the field of view per pixel for each of our cameras at various focal lengths. Select the focal length or range of focal lengths of your telescope in inches or millimeters and look across the table for a pixel size that yields a pixel field of view in the range that suits your seeing. Below the table is a general guide of the resolution to look for under some typical seeing conditions. Note that the exception to these general rules is planetary imaging where, because the objects are relatively bright, sensitivity is not as big an issue as it is for deep space and resolution is paramount. In this case, aim for 0.25 to 0.5 arcseconds per pixel. Some planetary imagers use 2X or 3X barlows with large SCTs. A C11 at 2X is 240 inches FL yielding a pixel FOV of less than 0.25 arcseconds with all but our largest deep space cameras! This obviously requires excellent seeing and good familiarity with your equipment.
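The sampling arithmetic above follows from the standard plate-scale relation: arcseconds per pixel = 206.265 × pixel size (µm) ÷ focal length (mm). A small Python sketch illustrates it; the 3.76 µm pixel and 1000 mm focal length are made-up example values, not a recommendation:

```python
def pixel_scale_arcsec(pixel_um, focal_length_mm):
    """Sky angle covered by one pixel, in arcseconds per pixel.
    206.265 converts radians to arcseconds given the um/mm unit mix."""
    return 206.265 * pixel_um / focal_length_mm

def pixels_across_fwhm(fwhm_arcsec, pixel_um, focal_length_mm):
    """How many pixels sample a star image of the given FWHM."""
    return fwhm_arcsec / pixel_scale_arcsec(pixel_um, focal_length_mm)

# Hypothetical setup: 3.76 um pixels on a 1000 mm focal length scope
print(round(pixel_scale_arcsec(3.76, 1000), 2))        # ~0.78 arcsec/pixel
print(round(pixels_across_fwhm(2.6, 3.76, 1000), 1))   # ~3.4 pixels across a 2.6" star
```

With 2.6-arcsecond seeing this combination lands in the desired 2-to-4-pixel sampling range.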
Also note that a camera with smaller pixels may be binned 2×2 or even 3×3 to create larger pixels and expand the useful range of the camera. The overall field of view of the sensor does not change, however, and a camera with larger pixels and a larger field of view might be preferable if it will not be used on shorter focal length instruments.
One of the first things you might notice about the pixel FOV chart (or pixel resolution chart), above, is that the 268, 600 and 411 cameras all have the same pixel field of view. This is because they all have 3.76 micron pixels. This means that the QHY268 camera has the same resolution (ability to resolve detail) as the QHY411 even though it has only 26 megapixels compared to the latter’s 150 megapixels! The big difference between them is the overall size of the sensor, i.e., field of view. Finally, this section needs a caveat that all of these efforts to match pixel size to your telescope are a guide only and should not be taken as a hard and fast rule that you must follow. When the 35mm format QHY600 camera is equipped with a standard 50 mm camera lens, for instance, beautiful wide field images are routinely captured even though the pixel field of view is 15 arcseconds per pixel.
5. Read Noise
Of the several possible sources of noise in an astronomical image, most of us concentrate on reducing one or more of the three main sources: dark current, read noise and sky background. Of the three, obviously, only the first two can be blamed on the camera! But even the third source of noise, sky background, can be managed at the camera without having to move to the Andes mountains to find dark skies. Read noise is noise that is introduced into the image as the sensor is read out after exposure. Unlike dark current or sky background noise, read noise does not increase with integration time. It is one dose of noise per exposure whether it is a one second exposure or an hour-long exposure. It does increase as you add multiple exposures to produce a final image.
If a camera has high read noise, the imaging strategy is to increase the integration time until the read noise becomes insignificant compared to other sources of noise like dark current or sky background. If you can increase your integration time by holding off the effects of sky background (by say using a long focal length and imaging through narrowband filters) then dark current becomes the limiting noise factor. Cooling reduces this culprit, so by employing these various means, long deep exposures even in relatively light polluted skies are possible. Modern CMOS sensors have such low read noise (and such low dark current) this is not much of an issue any more.
It is fair to say that, today, the appearance of highly sensitive, low noise CMOS sensors has changed the landscape when it comes to imaging philosophy (and guiding). QHYCCD makes both CCD and CMOS cameras using a variety of popular sensors. Looking at the chart, below, it is clear that even at lowest gain our CMOS cameras generally have much lower read noise than our CCD cameras. At high gain, where many of our CMOS cameras achieve around 1e- of read noise, this difference is even greater. A few years ago, when CCDs were the sensor of choice, imagers worked very hard to build a guiding system that could track accurately for hours, not minutes. The idea was to "bake in" the exposures to get the most out of each frame and thus reduce the contribution of read noise when adding several sub-exposures. Some CCD cameras have read noise as high as 15 or 20 electrons. Even the best struggle to get read noise much under 9 or 10 electrons and almost none have read noise below 5 electrons.
The AVERAGE read noise (at high gain) of the eight CMOS cameras in the graph above is one electron! At lowest gain the same eight models average only 3.6e-. With such low read noise, taking multiple shorter exposures has become commonplace, particularly in planetary imaging. Shorter exposures mean less stress on the guiding system but also require sensors with high QE to do well in less time. This is now the case as well. Compare, for example, the QE of the CMOS cameras and the CCD cameras in the quantum efficiency chart shown in Part 1. The average peak QE for the three CCD sensors in that chart is around 56% while the average QE for the four CMOS sensors is around 85%. Even discarding the highest CMOS and lowest CCD, the difference is still about 60% vs 80% and I think it's fair to say that this is pretty typical when you compare modern CMOS vs. older CCDs. Of course, there are exceptions, but generally it is the case that modern CMOS sensors are more sensitive than the typical CCD of days past.
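To see why low read noise changes the stacking strategy, here is a simplified noise model in Python. It includes only read noise and sky shot noise; the sky rate, exposure lengths and read-noise figures are illustrative assumptions, not measured camera specs:

```python
import math

def total_noise_e(n_subs, sub_exp_s, read_noise_e, sky_rate_e_per_s):
    """Combined per-pixel noise (electrons) from read noise and sky shot
    noise for a stack of n_subs exposures of sub_exp_s seconds each.
    Read noise is added once per sub-exposure; sky noise grows with
    total integration time. Dark current and signal noise are ignored."""
    sky_electrons = sky_rate_e_per_s * n_subs * sub_exp_s
    return math.sqrt(n_subs * read_noise_e**2 + sky_electrons)

# One hour total integration, hypothetical sky of 2 e-/pixel/s:
# CCD-era 10 e- read noise: few long subs beat many short ones
print(round(total_noise_e(6, 600, 10.0, 2.0), 1))    # 6 x 10 min
print(round(total_noise_e(360, 10, 10.0, 2.0), 1))   # 360 x 10 s, much worse
# Modern CMOS at ~1 e- read noise: many short subs cost almost nothing
print(round(total_noise_e(360, 10, 1.0, 2.0), 1))
```

Under these assumptions the 360 short sub-exposures with a 1 e- sensor end up slightly quieter than 6 long sub-exposures with a 10 e- sensor, which is the shift in imaging philosophy described above.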
6. Special Planetary Imaging Considerations
This image of Jupiter was taken many years ago by Ed Grafton using an ST-6 camera and C-14 telescope. The ST-6 had 375x242 pixels at 23x27 microns! Read noise was 23e- and dark current was 10e-/p/s at -30C. Download time for one frame was 22 seconds. The ST-6 had a monochrome sensor so this color image was made with separate RGB frames shot through color filters, each one taking 22 seconds to download. The camera sold for about $3000 (20 years ago).
Based on everything that has been said in this article, there is nothing about this camera that would lead you to select it for planetary imaging (or any other kind of imaging for that matter) but this image isn't too bad! I wanted to include it here to illustrate two things: First, planetary imaging ain't what it used to be and, second, that despite figuring out all the fine details of what is optimum, the person taking the image and his location play the most important roles in the results. One should not get too obsessed with specs and technical details.
The availability of sensors that have both high QE and very low read noise has facilitated a different approach to imaging planets. In deep space imaging one generally needs longer exposures to capture the subtle detail of very dim objects. But for planets, which are much brighter, the key is to take exposures as short as possible to "freeze" the seeing and then stack hundreds or thousands of these images to bring out the subtle detail.
Today, more often than not, planetary imaging is done with a small, fast, uncooled camera like the QHY5III462C. Compared to the ST-6 it has 23X more pixels, 46X lower read noise, and it can capture almost 3,000 images in the time it took the ST-6 to download one frame. And it costs 1/10th as much. The 462C also has 2.9um pixels compared to the ST-6's huge 23x27um pixels.
To get an image scale a little better than 0.25 arcseconds with his ST-6, Ed used eyepiece projection to effectively increase the focal length of his C14 to 24,000mm (f/68). To achieve the same pixel scale with the 462C camera, one needs a focal length about 1/10th of Ed's - about 2400mm, or very roughly the typical focal length of a C11 at prime focus. As mentioned before, the emphasis today is taking hundreds or thousands of images, then grading and stacking them to bring out detail, similar in application to "lucky imaging." This was simply not possible with CCD cameras that had high read noise and took 22 seconds to download a single frame to boot.
Christopher Go with QHY5III462C and C-14 Damien Peach with QHY5III462C and C-14
Possibly the two best amateur planetary imagers in the world today are Christopher Go and Damien Peach. Both use C-14 scopes with their planetary cameras. Christopher Go uses a QHY5III290M or QHY5III462C. Damien Peach uses several cameras and has just recently turned in some incredible images with a QHY5III462C. In all cases the pixel scale is about 0.15 arcsec. While this would seem to violate the formula for optimum pixel size per focal length used for deep space imaging, their results clearly demonstrate that planetary imaging is an exception that has its own set of rules and resolution is king.
7. Cooling and Dark Current Noise
Cooling and dark current noise are more significant issues in deep space imaging because dark current noise increases with exposure time. They are mentioned together because one is dependent on the other. Dark current is the generation of electrons in the sensor itself just by virtue of being turned on. It is called dark current because it will produce these electrons in the pixels even if you are not exposing the sensor to light during an integration (i.e., taking an exposure in complete darkness). Dark current is usually expressed as electrons per pixel per second at a specific temperature (e.g., e-/p/s @ -15C).
One fortunate property of dark current is that it is greater at higher temperature and is reduced at lower temperature. This is why cooling CCD and CMOS sensors is a common design feature of cameras intended to take long exposures. Another fortunate property of dark current is that it creates a pattern that is quite repeatable. This means that you can take an image of just the dark current (a "dark frame") and subtract the result from a light frame to remove the dark current pattern from an exposure of long duration, leaving only the random noise. Of course, the less dark current there is, the less noise will remain after subtracting the dark frame. Noise associated with dark current is also sometimes referred to as "thermal noise." Dark current noise follows Poisson statistics: the rms dark current noise is the square root of the accumulated dark current.
Since dark current can be reduced in CCD and CMOS sensors by reducing the temperature of the sensor, nearly every astronomical camera intended to be used for long exposures features thermoelectric cooling of the sensor. Typically, the dark current present in the sensor is reduced by 50% for about every 6 to 7 degrees C of cooling. In other words, if the sensor has 10e-/pixel/second of dark current at 25 degrees C, and the temperature of the sensor is lowered to 18 or 19 degrees C, then the dark current will be reduced to only 5e-/pixel/second, and if the temperature is lowered another 6 or 7 degrees to around 12 or 13 degrees C then the dark current will be 2.5e-/pixel/second, and so forth.
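The halving rule described above can be sketched as an exponential in temperature (the 6.5 C halving interval is an assumed typical value taken from the "6 to 7 degrees" range; real sensors vary):

```python
def dark_current(d_ref: float, t_ref: float, t: float,
                 halving_c: float = 6.5) -> float:
    """Dark current (e-/pixel/s) at temperature t (C), given the
    measured rate d_ref at reference temperature t_ref (C).
    Halves for every halving_c degrees of cooling."""
    return d_ref * 2 ** ((t - t_ref) / halving_c)

d25 = 10.0  # e-/pixel/s at 25 C, from the example in the text
print(dark_current(d25, 25, 18.5))  # ~5 e-/pixel/s
print(dark_current(d25, 25, 12.0))  # ~2.5 e-/pixel/s
```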
When this article was first written, cameras having less than about 0.1e- of dark current at zero degrees C were considered pretty low. It meant that dramatic cooling of the sensor was not required to get very low dark current under typical operating conditions. Cooling an 8300 sensor to -20C, for example, reduced the dark current to only 0.01e-/pixel/second. To reach comparable dark current with, say, a KAF-3200 sensor, it would require cooling to -40C and for a KAF-1001 with its large 24um pixels, such low dark current could not be reached even if the sensor was cooled to -50C. Again, all of this has changed with the current level of CMOS technology. The chart below compares the effect of cooling on dark current of an 8300 sensor and Sony's new IMX571 used in the new QHY268 cameras.
At zero C where the 8300 has about 0.1e- of dark current, the Sony sensor has less than 0.005e-. About 20X lower! And where the 8300 reaches 0.01e- at -20C the Sony part reaches this same level of dark current at +10C.
What this means is that in modern CMOS astro cameras, dramatic cooling is not as critical a requirement as it once was for noisier CCDs in days past. Cooling of CMOS sensors to -20C or -30C is enough to reduce the dark current to almost insignificant levels. At -20C, for example, the 8300 sensor has 0.01e- of dark current whereas the IMX571 has an amazingly low 0.0005e-.
8. Sky Background Noise
Sky background illumination or brightness is the number of counts in the image in areas free of stars or nebulosity and is due to city lights and sky glow. Unlike read noise, sky background and dark current noise accumulate over time. High levels of sky background can increase the noise in images just like dark current.
Most of us live in or near urban areas where sky background is greater than it is out in the country. The sky background is often the limiting factor in taking astronomical images, unless one has very dark skies or is imaging through narrowband filters. In our area, here in Santa Barbara, at f/6, we are typically limited to around 10 – 15 minutes of exposure time before sky background overwhelms the dark current noise.
The maps of Europe and North America on the previous page are colored according to the brightness of the sky background and the legend describes the brightness in terms of reduced visual perception of the night sky:
Black - Pristine Sky
Blue - Degraded near the horizon
Green - Degraded to the zenith
Yellow - Natural sky lost
Red - Milky Way lost
White - Cones active
The sky background spectrum (depending on where you live) has a significant spike in intensity around 5577 angstroms (about 558nm), right between the green and red filter passbands in an RGB filter set. Many years ago, my partner at the time, Alan Holmes, designed LRGB filters with a gap between the green and red filters to balance the intensity of O-III and H-a from emission nebulae while still properly rendering the continuum of background stars. This gap also happened to coincide with the 558nm sky background line and so reduced sky background while color imaging. When introduced, this design was criticized by some (who were making their own filter designs), but the results obtained with this design were spectacular, and it is interesting to see today several top manufacturers using a similar approach to LRGB filter transmission design (see, for example, the graph of RGB filters displayed in the section on filter wheels and filters).
Another way to reduce sky background is to simply use a red filter or LPR filter for monochrome imaging, or narrowband filters when imaging certain kinds of objects. Imaging with narrowband filters significantly reduces sky background by allowing only a narrow passband at selected wavelengths for nebulae that emit light at the wavelengths of H-alpha and/or O-III. With an H-alpha filter, for instance, exposures of half an hour to an hour are not a problem in our location, where 15 to 20 minutes would be the limit without filters.
9. Guiding and Polar Alignment
The need for guiding is often overlooked – or thought of only after everything else – when initially building an imaging system. Guiding is an extremely important function in astronomical imaging that should not be trivialized. Without good guiding you will not get very good images. Good guiding begins by having a good polar alignment of your scope. If your polar alignment is off, it will introduce image rotation in images of long duration. Even when the final image is made up of multiple short exposures, none of which appear to have much rotation, the result will show the shift in star positions over time. Image rotation manifests as star images looking like small arcs rather than single points. Getting good polar alignment can be a tedious task when you do it without aid.
One of the things that QHYCCD is famous for introducing to astrophotographers is the PoleMaster accessory that makes getting highly accurate polar alignment a relatively simple task. Simple enough to do before every imaging session. The PoleMaster is an indispensable accessory to improve your alignment.
As the resolution of sensors increases with more and smaller pixels, guiding becomes more critical. Most imagers use either off-axis guiding or guiding through a separate guide scope. Of these two solutions, the off-axis arrangement offers the best accuracy as separate guide scopes are subject to differential deflection that can cause guiding errors. A word of caution, however. Many inexpensive radial off-axis guiders have a severe problem in that a small prism or mirror is used to pick off a tiny portion of the light to direct to the eyepiece. Guide stars tend to be dim, and one is forced to rotate the assembly to find a guide star. When one rotates the assembly, the star motion directions (in response to guiding inputs) also rotate, and one is forced to recalibrate the autoguider quite often. Also, the dim stars force some autoguiders to require very long exposures, negating their ability to compensate for periodic errors and drive hops. In short, many radial guiders are clumsy to use.
QHYCCD offers several sizes of off-axis guiders to accommodate various camera and sensor sizes. These OAGs use large enough prisms to avoid the problem mentioned above and the method of attachment to the camera or filter wheel is rock solid.
The other main alternative is to use a separate guide scope. While this can work quite well for refractors and fast Newtonians, it is not the best solution for SCT systems. The problem here is differential deflection – slight tilts or wobbles of the primary mirror can shift a star position significantly on the imaging CCD. The mirror tends to shift since the gravity loads change as the telescope counteracts the earth’s rotation.
So, how does all of this affect guiding decisions? Well, low noise and high QE make multiple short exposures a viable alternative to single long guided exposures. In this case good polar alignment is still critical, but it is easier to manage good guiding through an exposure of a few minutes rather than one hour. And if you do get some wind or other unexpected bump in guiding, it's less painful to throw out one short bad frame than to find out after a long night of guiding that you had a problem!
10. Filter Wheels and Filters
I should start this section with a caveat. It used to be the case that if you wanted to take GOOD color images, you would naturally choose a mono camera and shoot through LRGB filters. It also used to be the case that putting an automatic transmission in a high-performance sports car was akin to wearing tennis shoes with a tuxedo. So called "one-shot" color cameras were for beginners or the lazy. Again, there were exceptions, but the exceptions were few and far between and there was good reason. Color CCD cameras were generally far less sensitive than their monochrome counterparts. At my former company, maybe 1% of cameras sold with the popular 35mm sized 11002 sensors were the one-shot color version. The same is true of the very popular 8300 which also came in color. In both cases the peak QE for RGB was between 30% to 40%. Imagers just preferred to take LRGB to get the best results.
Orion by Tony Hallas using a QHY128C color camera
This has now changed. It is still true that using a filter wheel with select filters, one has much more control over the passbands and color balance, not to mention the ability to use specialized filters for certain kinds of imaging. Nevertheless, just as most high-performance sports cars now come with automatic transmissions as standard equipment, the incredible sensitivity of back-illuminated color sensors, driven by the high-end consumer camera market, has made imaging with color sensors much more commonplace and quite respectable. The latest QHY410C is a back-illuminated version of the QHY128C. With 5.94um pixels it is expected to be the most sensitive color camera we've ever made.
OK, having said that, why get a filter wheel with filters instead of a color camera? There are several reasons why you might like to have external filters: First, you can select filters with the passbands that you want, and you can freely change them. Second, filters made for astronomy generally have higher transmission ratios than the filters built onto a color sensor. Third, custom filters, like emission line filters, IR filters and photometric filters can be used without interference by the built-in RGB filters over the sensor. Filter wheels come in a variety of capacities, usually 5 positions for LRGB and clear, or 7 positions (or more) for LRGB plus narrowband filters, etc.
Although the Bayer matrix layer of RGB filters on sensors has improved over the years, just as CMOS sensors have improved, the example shown below of a KAF-16200 CCD sensor that is available in both monochrome and color highlights several points in favor of using a mono camera and filter wheel for advanced color imaging. The QE charts for both the mono and color versions of the sensor are from the sensor manufacturer, and the external filter transmission characteristics are from Antlia, an astronomy filter manufacturer of high quality filters.
It is clear that the on-chip filters have a significant effect on the overall QE of the sensor. Using external filters in this case appears to improve this by 30% or more (just eyeballing). Also, while both types of filters capture the blue-green O-III emission lines around 500nm equally with the blue and green passbands, the external filter set does so with much higher efficiency. The other obvious difference is the gap I mentioned before, between the green and red filters of the external filter set. This means that the Bayer filters would capture the sky background (light pollution) around 558nm with both the green and red filters, but the external filter set would not see this portion of the sky background at all.
11. Adapters and Accessories
Fitting together a camera, OAG, filter wheel, field flattener or reducer, focuser and getting it all to focus with your OTA can be a challenging experience! It can also be extremely frustrating, particularly when you are building a system made of pieces from various manufacturers who are each trying to make their bit fit as many different configurations as possible.
QHYCCD makes a set of adapter rings that can space pieces of a system just right in a variety of configurations. Recently, a sort of "standard" has evolved that requires 55mm of backfocus for field flatteners or other rear optical elements of several popular types of scopes. In response to this, QHYCCD now includes a set of adapter rings with each camera/filter wheel and/or OAG configuration to enable the user to achieve this magic number without having to figure out what adapters are needed. The most important thing here is to do one's homework beforehand and make sure that all the pieces you want to combine are compatible with the optics you intend to use.
For example, Canon and Nikon camera lens adapters fit a variety of other things, even filter wheels. However, certain camera, OAG and filter wheel combinations require more space than the backfocus requirement of the lens allows and infinity focus may not be possible even though the parts can all be mechanically screwed together. So planning in advance may save you a frustrating gotcha!
12. Software compatibility
Unlike DSLRs, virtually every astronomical camera is operated with an external computer of some kind. So, no matter how good your camera might be, if it does not have good control software, it's just an expensive bunch of wires and metal. Getting focused, framing an object that is difficult to see, processing the results, etc., all go into the final result. Good software makes these and other tasks easier to get right. To make QHY cameras compatible with the widest variety of third-party software, we offer both native and ASCOM drivers as well as drivers for TheSky. And as of this writing, we are working on new drivers for Software Bisque's Linux based Fusion System and are about to release a new version of the compact StarMaster controller.
Probably the last thing you want to think about is what happens if your camera fails. QHYCCD cameras have a two-year warranty. But other things can happen, too. Cameras fall and hit the cement; they don't do so well under water; lightning strikes (literally!); gremlins get inside (no, not literally). The point is you want to be comfortable with your purchase and know that if lightning does strike, you have some expedient way of getting a repair or replacement without flying to Timbuktu.
QHYCCD is aware of this concern, and for this very reason we have already created a stock of new cameras in the US for quick and easy replacements when the delivery service drops your camera off the back of the truck. In addition, we are also setting up a complete repair facility here in the US for in-warranty and out-of-warranty repairs so cameras do not need to be sent overseas for routine repairs and maintenance. This repair facility should be operational by the end of 2020.
14. Putting It All Together
Don't obsess over numbers too much. Use them to get you in the ball park and then PLAY BALL!
Make a couple of big decisions up front, like whether you want to do color imaging with a color camera or with a mono camera and filter wheel. The color camera is simpler and cheaper; the mono camera and filter wheel with filters is more equipment, more to go wrong, and more expensive. But it is also more flexible and offers advantages, especially if you are in a light polluted area and/or want to use narrowband filters. Decide how you are going to guide. The best method is off-axis. Also, consider a PoleMaster to get good polar alignment. This will save you hours of frustration.
Try to narrow down what you are interested in imaging. If you want to start just imaging planets, save your money and get a planetary camera. They are small, relatively inexpensive, and good. You can always use one as a guider as well when you decide to do deep space imaging. If your interest is solar or lunar imaging, remember that these are both about 1/2 degree in diameter, and pick a sensor/scope combination that can accommodate a full disk. Then you can use a barlow for higher magnification of surface features. The sun and moon are also relatively bright objects, so the need to optimize the sensitivity of your camera by matching the pixel size to the focal length matters much less. Here it is more important to achieve good resolution (ability to resolve detail), and for this you basically need lots of small pixels.
If you know you want to image deep space, use the charts in Part 1 to select a sensor size and pixel size that is good for your intended targets and your scope. If you don't know, buy as much sensor as you can afford. Again, the numbers should be used as a guide, not a requirement. With today's high QE, low noise CMOS cameras, I think it's better to over sample than under sample. You will get higher resolution and lose very little sensitivity if you sample FWHM stellar images with 3 or 4 pixels instead of 2. You can always bin 2x2 for nights of poor seeing (but you can't "unbin" 1x1 pixels to improve resolution).

Many beautiful wide field images are taken with cameras that have pixels that are "too big," and many fine deep space images of spiral galaxies have been taken with pixels that are "too small." Some of the most beautiful astro images are taken with camera lenses. They are a great way to get started and I highly recommend them. They are easier to guide and produce very satisfying results while you are learning to use your equipment.

And for planets, over sampling has become the order of the day when stacking hundreds or thousands of images to tease out detail. Above all, have fun and enjoy the night sky!
Appended below is a table of cameras sorted by use (deep space or planets) and type (mono or color). Within each category they are sorted by sensor diagonal size. As is usually the case, you will see that price also scales with sensor size, the exceptions being those cameras that are on sale due to the release of newer models with some additional features like higher sensitivity, etc. New models are highlighted in yellow.
Aspects of Image Quality
Table 2 shows some relevant technical features of various radiography systems.
Pixel Size, Matrix, and Detector Size
Digital images consist of picture elements, or pixels. The two-dimensional collection of pixels in the image is called the matrix, which is usually expressed as length (in pixels) by width (in pixels) (Table 2). Maximum achievable spatial resolution (Nyquist frequency, given in cycles per millimeter) is defined by pixel size and spacing. The smaller the pixel size (or the larger the matrix), the higher the maximum achievable spatial resolution.
The overall detector size determines if the detector is suitable for all clinical applications. Larger detector areas are needed for chest imaging than for imaging of the extremities. In cassette-based systems, different sizes are available.
Spatial resolution refers to the minimum resolvable separation between high-contrast objects. In digital detectors, spatial resolution is defined and limited by the minimum pixel size. Increasing the radiation applied to the detector will not improve the maximum spatial resolution. On the other hand, scatter of x-ray quanta and light photons within the detector influences spatial resolution. Therefore, the intrinsic spatial resolution for selenium-based direct conversion detectors is higher than that for indirect conversion detectors. Structured scintillators offer advantages over unstructured scintillators.
According to the Nyquist theorem, given a pixel size a, the maximum detectable spatial frequency is 1/(2a). At a pixel size of 200 μm, the maximum detectable spatial frequency will be 2.5 cycles/mm. The diagnostic range for general radiography is 0–3 cycles/mm (34, 54); only older generations of storage phosphors do not meet this criterion (Table 2). For digital mammography, the demanded diagnostic spatial resolution is substantially higher (>5 cycles/mm), indicating the need for specially designed dedicated detectors with smaller pixel sizes and higher resolutions (11).
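The Nyquist relation can be sketched numerically: for a pixel pitch a, the maximum detectable spatial frequency is 1/(2a). A minimal sketch (the function name is illustrative):

```python
def nyquist_cycles_per_mm(pixel_um: float) -> float:
    """Nyquist frequency (cycles/mm) for a given pixel pitch in microns."""
    pitch_mm = pixel_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

print(nyquist_cycles_per_mm(200))  # 2.5 cycles/mm, as in the text
print(nyquist_cycles_per_mm(100))  # 5.0 cycles/mm, the mammography range
```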
Modulation Transfer Function
Modulation transfer function (MTF) is the capacity of the detector to transfer the modulation of the input signal at a given spatial frequency to its output (55). At radiography, objects having different sizes and opacity are displayed with different gray-scale values in an image. MTF has to do with the display of contrast and object size. More specifically, MTF describes how contrast values of different-sized objects (object contrast) are converted into contrast intensity levels in the image (image contrast). For general imaging, the relevant details are in a range between 0 and 2 cycles/mm, which demands high MTF values.
MTF is a useful measure of true or effective resolution, since it accounts for the amount of blur and contrast over a range of spatial frequencies. MTF values of various detectors were measured and further discussed by Illers et al (56).
Dynamic Range
Dynamic range is a measure of the signal response of a detector that is exposed to x-rays (55). In conventional screen-film combinations, the gradation curve is S shaped, with a narrow exposure range for optimal film blackening (Fig 8); thus, the film has a low tolerance for an exposure that is higher or lower than required, resulting in failed exposures or insufficient image quality. For digital detectors, dynamic range is the range of x-ray exposure over which a meaningful image can be obtained. Digital detectors have a wider and linear dynamic range, which, in clinical practice, virtually eliminates the risk of a failed exposure. Another positive effect of a wide dynamic range is that differences between specific tissue absorptions (eg, bone vs soft tissue) can be displayed in one image without the need for additional images. On the other hand, because detector function improves as radiation exposure increases, special care has to be taken not to overexpose the patient by applying more radiation than is needed for a diagnostically sufficient image.
Detective Quantum Efficiency
Detective quantum efficiency (DQE) is one of the fundamental physical variables related to image quality in radiography and refers to the efficiency of a detector in converting incident x-ray energy into an image signal. DQE is calculated by comparing the signal-to-noise ratio at the detector output with that at the detector input as a function of spatial frequency (55). DQE is dependent on radiation exposure, spatial frequency, MTF, and detector material. The quality (voltage and current) of the radiation applied is also an important influence on DQE (41).
High DQE values indicate that less radiation is needed to achieve identical image quality; increasing the DQE while leaving radiation exposure constant will improve image quality.
The ideal detector would have a DQE of 1, meaning that all the radiation energy is absorbed and converted into image information. In practice, the DQE of digital detectors is limited to about 0.45 at 0.5 cycles/mm (Fig 9). During the past few years, various methods of measuring DQE have been established (41), making the comparison of DQE values difficult if not impossible. In 2003, the IEC 62220-1 standard was introduced to standardize DQE measurements and make them comparable.
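The definition above can be sketched as a calculation: DQE is the squared ratio of output to input signal-to-noise ratio, where the ideal input SNR for N incident x-ray quanta is sqrt(N) (Poisson statistics). The specific numbers below are illustrative, not measured values:

```python
import math

def dqe(snr_out: float, n_quanta: float) -> float:
    """DQE = (SNR_out / SNR_in)^2, with ideal SNR_in = sqrt(N) for
    N incident quanta per measurement aperture."""
    snr_in = math.sqrt(n_quanta)
    return (snr_out / snr_in) ** 2

# A hypothetical detector yielding SNR_out = 67 from 10,000 incident quanta:
print(round(dqe(67.0, 10000), 3))  # ~0.449, near the 0.45 figure quoted above
```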
The DQE curves for four different digital detectors are shown in Figure 9. Screen-film systems have a DQE comparable to that of detector CR 2 in Figure 9.
Modern digital cameras contain electronic sensors that have predictable properties. Foremost among those properties is their relatively high Quantum Efficiency, or ability to absorb photons and generate electrons. Second is that the electronics are so good in most cameras that read noise is under 2 electrons and rarely worse than about 15 electrons from the sensor read amplifier. With the low noise and high Quantum Efficiency, along with the general properties of how the sensors collect the electrons generated from photons, it is possible to make general predictions about camera performance. An important concept emerges from these predictions: we are reaching fundamental physical limits concerning dynamic range and noise performance of sensors. But the downstream electronics after the signal is read from the sensor are still a limiting factor. See References 18, 20 (from electronic sensor companies) and Reference 24 (from university class lecture notes) for more details about the above well-established concepts and how electronic sensors operate.
The ideal sensor absorbs every photon, each photon would liberate an electron and every electron would be collected and counted to form the image, all done with no added noise. Would images from such a camera be perfect (no noise and infinite dynamic range)? NO! All measurements of light (photons) still have inherent noise, called photon noise. The dynamic range is not infinite, but would have a maximum of the number of photons collected. For example, if you collected 1,000 photons, the dynamic range would be 1000:1 or almost 10 photographic stops.
- Dynamic Range per pixel = Maximum signal per pixel (electrons) / minimum discernible signal per pixel (electrons), (eqn intro-1)
Traditionally, the minimum discernible signal has been limited by the sensor read noise and downstream electronics. But circa 2016, read noise is near 1 electron in some consumer cameras, and sub-electron sensors are operational in labs and will likely reach consumer cameras soon. That means single photons can be detected. Such photon counting systems require a more detailed equation than equation intro-1.
- Dynamic Range per pixel = Maximum signal per pixel (electrons) / Measured Read Noise per pixel (electrons), (eqn intro-2a)
which at the sensor level is:
- Dynamic Range per pixel = Maximum signal per pixel (electrons) / sqrt (read noise squared + dark current(T)*t + 1) (eqn intro-3),
In the perfect sensor (above), the read noise would be zero, but the minimum discernible signal is 1 photon and the noise would be square root 1 = 1 photon, giving the dynamic range of 1000 (in the 1000 photon example above). In real digital cameras, amplifier and analog-to-digital converter noise contributes to the apparent read noise, so each ISO has a different measured read noise, resulting in changes in the dynamic range with different ISOs. Thus, in most digital cameras, dynamic range in the image data for a pixel is less than the dynamic range of a pixel at the sensor due to downstream electronics limitations, especially at low ISO. Equation intro-3 above is used for the measured dynamic range per pixel out of the camera. This article will present data and models of the read noise, full well capacity, dynamic range and other parameters. (Note, Oct 2016: the above equations are to clarify the changing landscape of sensors; the data below on dynamic range still uses equation intro-2a. When this article gets its coming major update, I will update the dynamic range numbers to equation intro-3. Most numbers will not change and none will change significantly.)
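Equations intro-2a and intro-3 can be sketched in code (variable names are mine; `dark_rate` stands for the temperature-dependent dark current rate and `t_sec` for the exposure time):

```python
import math

def dr_measured(full_well: float, read_noise: float) -> float:
    """Eqn intro-2a: DR = maximum signal / measured read noise."""
    return full_well / read_noise

def dr_sensor(full_well: float, read_noise: float,
              dark_rate: float, t_sec: float) -> float:
    """Eqn intro-3: the noise floor combines read noise, accumulated
    dark current, and the 1-photon minimum in quadrature."""
    floor = math.sqrt(read_noise ** 2 + dark_rate * t_sec + 1.0)
    return full_well / floor

# Perfect sensor from the text: zero read noise, no dark current,
# 1000 photons collected -> dynamic range of 1000.
print(dr_sensor(1000, 0.0, 0.0, 0.0))  # 1000.0
```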
The dynamic range chosen for clarkvision uses the common standard of signal-to-noise ratio, S/N = 1.0. Some other sites use other values, e.g. 4. Note that in an image context, information can be seen well below S/N = 1. Fine grained film on the same image scale has S/N less than about 20, so in my opinion, setting a noise floor above S/N = 1 is inconsistent with observed image quality.
In the physics of photon counting, the noise in the signal is equal to the square root of the number of photons counted because photon arrival times are random. The reason for this dependence is Poisson statistics (Wikipedia has an excellent article on Poisson statistics). For example, Table 1a shows the signal-to-noise ratio when detecting different numbers of photons.

Table 1a

Photons   Noise   Signal-to-noise
      9       3         3
    100      10        10
    900      30        30
  10000     100       100
  40000     200       200
  90000     300       300
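The table values follow directly from Poisson statistics, since noise = sqrt(signal) implies S/N = signal / sqrt(signal) = sqrt(signal). A quick sketch:

```python
import math

def photon_snr(photons: float) -> float:
    """S/N of a photon-noise-limited measurement: equals sqrt(photons)."""
    return photons / math.sqrt(photons)

for n in (9, 100, 900, 10000, 40000, 90000):
    print(n, round(math.sqrt(n)), round(photon_snr(n)))
```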
Why is this important? It turns out that the noise making up the majority of images we view from good modern digital cameras is dominated by photon counting statistics, not other sources. So to make an image with a high signal-to-noise ratio, one must collect the most photons possible. Modern electronic sensors have a method for collecting the electrons from photons (they are called photoelectrons) and storing them in the sensor until the electrons are transferred from the chip to the electronics in the camera where the signal is amplified, digitized and converted into an array of numbers to be recorded in a memory card and later displayed as an image by a computer.
Another reason photon noise is important is that in a photon noise limited system, a single measurement (e.g., the signal in a single pixel) tells one the signal, the noise, and the S/N. There is no need to make multiple measurements or do statistics on many pixels.
Both CCD and CMOS silicon sensors used in today's digital cameras exploit a property of semiconductors. Silicon is a semiconductor. When a photon is incident on the silicon, the photon may be absorbed, and the energy from the photon excites an electron, moving it into what is called the "conduction band" from the low energy state called the "valence band." There is an energy gap, called the "band gap" across which the electron must move. The band gap sets the lower limit (longest wavelength) of the photon energy that can be absorbed by the electron to move it into the conduction band (see Reference 24 for more details). For silicon, that wavelength is about 11,000 angstroms (1.1 microns) in the infrared. Photons with wavelengths shorter than this value have higher energies, and those energies include wavelengths visible to our eyes, called the visible spectrum. Once an electron is excited into the conduction band, the challenge is to capture it before it moves far (like electrons flowing great distances in a copper wire, where electrons are flowing in the conduction band).
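The band-gap cutoff mentioned above can be checked numerically from λ_max = hc/E_gap. A minimal sketch (silicon's band gap of ~1.12 eV is an assumed nominal room-temperature value):

```python
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def cutoff_nm(band_gap_ev: float) -> float:
    """Longest photon wavelength (nm) that can bridge the band gap."""
    return H_C_EV_NM / band_gap_ev

print(round(cutoff_nm(1.12)))  # ~1107 nm, i.e. about 11,000 angstroms
```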
The electric field in the silicon is modified by adding impurities (called doping, e.g. parts per million of arsenic or boron or other elements in the columns of the periodic table on each side of silicon) to control where the electrons flow. Voltages are applied to the silicon, and when a photon is absorbed, primarily by the electrons in the valence band, the electrons will be excited into the conduction band and flow toward positive voltage. These electrons are also called "photoelectrons." The local electric fields produced by the doping and applied voltages trap the electrons in small regions (pixels in imaging sensors). The trapped electrons correspond to absorbed photons, and in the sensor industry, photons and electrons (photoelectrons) are interchanged in describing sensor performance.
Thus, when a digital camera reads 10,000 electrons, it corresponds to absorbing 10,000 photons. So the graphs shown in this article that are in units of electrons, like Sensor Full Capacity, also indicate how many photons the sensor pixel captured. The camera electronics also generate a small amount of noise; from a measurement perspective, that noise is in electrons, and every noise source, whether from the camera electronics or from photon noise, gets mixed into the images you observe. With measurement techniques, the various noise sources can be isolated and their individual contributions measured. This article summarizes available data for numerous sensors, both from digital cameras and from sensor manufacturer data sheets.
The absorption lengths of photons in Table 1B are the 1/e depths (e = 2.7183), that is, the depth at which a photon has a 63% probability of having been absorbed. Some photons can, in reality, travel several times this distance before being absorbed. These absorption lengths impact performance as pixels become smaller. For example, small sensor digital cameras currently have pixels smaller than 2 microns. What happens when red photons enter the silicon and after 5 microns only 63% of them have been absorbed, while after 10 microns (5 or more pixel widths) 13% are still moving through the silicon, being absorbed at greater distances from the original pixel? Well, it can't be good in terms of a color imaging sensor. If an absorbed photon results in an electron in the conduction band, it likely contributes signal several pixels away from the target pixel.
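The 63% and 13% figures above follow from simple exponential (Beer-Lambert) absorption. A minimal sketch, using the 5-micron red absorption length from the example:

```python
import math

# Beer-Lambert absorption: the fraction of photons absorbed within depth d of
# silicon with 1/e absorption length L is 1 - exp(-d/L).

def fraction_absorbed(depth_um, absorption_length_um):
    return 1.0 - math.exp(-depth_um / absorption_length_um)

L_red = 5.0  # 1/e absorption length for red photons, microns (from the example above)
print(fraction_absorbed(5.0, L_red))   # ~0.63: 63% absorbed within one absorption length
print(fraction_absorbed(10.0, L_red))  # ~0.86: so ~13.5% still unabsorbed after 10 microns
```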
Different Photography Situations
The sensor data in this article can be applied in multiple ways. Early in the digital camera era, the number of pixels was relatively similar from camera to camera and sensor sizes varied. But now (2009 and beyond) there is a large variety of megapixel counts to choose from within one sensor size, and the choices will likely be greater in the future. Some discussions on the internet have debated pixel size, with some taking the extreme position that larger pixels are better and others claiming smaller pixels are better. In practice, there are situations where larger pixels and other situations where smaller pixels will produce better images. But there are also situations where it matters so little, only a lab measurement could tell the difference! In theory, if the read noise were zero, one could synthesize any equivalent pixel size in post processing. Some cameras are getting close to this ideal. Sensor data will be presented which will help resolve such differences.
However, I want to be clear regarding the sensor data: the pixel and its size is only a holder for the photoelectrons. It is the lens that delivers the photons to the sensor. Just because a pixel is larger does not mean that it will produce better or lower noise images if the lens and exposure time do not fill the pixel with enough light (and thus, photoelectrons). If one is working above base ISO, the pixel will not be filled to its full capacity. The larger pixel has the potential to collect more light. But larger pixels see larger angular areas from the lens, so resolve less detail. There is a trade between the lens collecting the light, the focal length spreading out the light, the pixels chopping up the light, and the exposure time limiting the light collected. To determine the impact of these parameters on an image, the full system must be considered. That is done in the article: Telephoto Reach, Part 2: Telephoto + Camera System Performance (A Omega Product, or Etendue) (Advanced Concepts). This article simply describes the capabilities of sensors, the pixels in those sensors, and their potential to deliver quality images.
Extreme pixel sizes generally do not produce high quality images. For example, if large pixels were always better, consider a camera with the pixel so large there is only one pixel. Obviously, there is little image information and the image would not look good. On the internet, people argue for smaller pixels, saying that, after all, film has grain (single pixels) with only 1 bit of dynamic range per grain (on or off). But film grain has a 3-dimensional structure, and it is grain clumps that provide tonality, not individual grains. The 3-dimensional grain distribution also gives film its characteristic curve: once a grain has absorbed a photon it is no longer sensitive, and within a grain clump it is the probability of another grain absorbing a photon that gives the grain clump a logarithmic response, extending its dynamic range. These are properties unlike the 2-dimensional grid in a digital camera electronic sensor. As pixels become smaller, signal level drops per pixel and read noise may become more dominant, and that limits small pixel performance. Yet more pixels provide more resolution (if the lens can deliver that resolution), so clearly there are optimums in image quality between a single large pixel and small pixels dominated by noise. But different situations call for different sized pixels, so there is not one optimum.
There are generally 4 imaging situations where pixel and sensor performance will influence the quality of the resulting images. We will make the assumption that output is a "print," but it could also be a monitor or some other output device.
1) Full sensor, maximum print size. Focal length is changed to frame the scene and no cropping is done. More pixels show finer detail, but if sensor size is not increased, the pixels get noisier as the sensor is divided into smaller and smaller pixels. More pixels deliver finer but noisier details. A larger sensor and greater focal length can maintain low noise and deliver finer details, assuming the lens scales with the sensor (e.g. use the same f/ratio and exposure time). The apparent image quality is given by the Full Sensor Apparent Image Quality, or FSAIQ metric.
2) Full sensor, constant print size. No cropping. If your image has more pixels, you will get more pixels per inch on the print. As the number of pixels increases for constant sensor size, the light received per unit area stays relatively constant until pixels become very small; then light levels and dynamic range drop and read noise per unit area increases. For constant print size, as long as there are enough pixels (e.g. enough pixels per inch to be limited by the printer), increasing the number of pixels will neither improve nor harm print quality, but as pixels become very small, read noise and dynamic range will drop, hurting the output. The FSAIQ metric is the best metric for this situation, but once the pixel density matches or exceeds the output device, no further improvement is likely, and when pixels become very small, image quality will drop, assuming the detail can be resolved by the human eye.
3) Focal Length Limited, image is cropped but constant print size. For example, you want to print your cropped subject on 8x10 paper. Thus, if you have more pixels, the print will have more pixels per inch. For constant sensor size, more pixels provide more fine detail. As the number of pixels increases for constant sensor size, the light received per unit area stays relatively constant until pixels become very small; then light levels and dynamic range per unit area drop and read noise per unit area increases. An example might be that you want to make an 8x10 print of the Moon. A 1 megapixel camera with large pixels will give high dynamic range and low noise, but not much detail. Decreasing the pixel size with the same focal length lens (assuming the lens can deliver more detail) will provide more resolution on the print, and that can be more important than higher dynamic range and lower noise. The FLL-AIQ1600 metric is the best metric for this situation.
4) Focal length limited, image is cropped but printed at maximum size. If you have more pixels, you can print larger. For example, you want the largest print possible of a distant bird. Smaller pixels will provide more resolution on the bird, but with a constant sensor size, each pixel will collect less light and have a lower signal-to-noise ratio. As long as noise does not become too noticeable and dynamic range is not a limiting factor, the improved resolution will be seen by most viewers as more important. Maximum megapixels for a given sensor size is probably the best metric, as long as the lens can deliver the image detail and noise and dynamic range are not compromised too severely. Maximum megapixels also means the smallest pixels, so a metric is 1/pixel pitch. Note, diffraction is the ultimate limit to image detail.
A complication regarding the perceived performance of pixels, pixel size and sensor size in digital cameras over the last decade (about 2000 to 2009) is the significant refinement in technology. While the quantum efficiency of sensors in digital cameras has not really changed much, other factors have improved, including: fill factor (the fraction of a pixel that is sensitive to light), higher transmission of the filters over the sensor, better micro lenses, lower read noise, and lower fixed pattern noise.
Current DSLRs (2010 - 2014) with similar sensor sizes span a range of about a factor of two in pixel size. Whether image detail at the cost of noise and possibly dynamic range is more important to you depends on your tastes and on your application. There is no perfect sensor, and large or small pixels in a given sensor size will give better performance in different situations. The sensor data on this page will help you to understand what these trades are and how technology is changing, and should enable you to make better decisions for your imaging needs.
Image quality is subjective and the bottom line is lighting, composition and subject are more important than the inherent image quality that a camera delivers. I have been studying sensor performance for two reasons: intellectual curiosity and a better camera for astrophotography. In comparing results on this page, do not get too carried away with over-interpreting the results. You would probably do better spending your time out photographing and refining your knowledge on lighting, composition, and subject. (see: http://www.clarkvision.com/articles/lighting.composition.subject).
What follows are sensor performance data. For each property, note the trends. See the section on the Sensor Performance Model for details of the models.
A growing misinterpretation of results like those I present below is that larger pixels are less noisy. The signal-to-noise ratio depends on the amount of light collected, and the light collected is delivered by the lens. It is the lens, its focal length, and the exposure time that determine the amount of light collected. A larger pixel only enables more light to be collected, at the expense of less detail resolved (given the same focal length lens).
But consider a big bucket and a little bucket. Turn on some water for a short time into each bucket. Which bucket has more water? Both buckets contain the same amount of water. It is the flow rate and duration of the water that determine how much water is in a bucket, not the size of the bucket. The larger bucket only allows more total water to be poured into it. Same with pixels. So in the sensor analyses below, the size of the pixel only enables more light (electrons) to be stored in the pixel. It is up to the lens and exposure time to actually deliver those photons. For more on this subject, see: Camera System Performance (A Omega Product, or Etendue)
Full Well Capacity
The property that describes the capacity of each pixel to hold the electrons generated from photons is called the "Full Well Capacity." As a pixel holds more electrons, the charge density increases. There are finite upper limits to the charge density of electron storage, and undesirable side effects can occur, including charge leaking into adjacent pixels, called blooming (e.g. see reference 19). Blooming was common in early CCDs, causing streaks from bright objects in the image.
Full well capacities of some cameras and sensors are shown in Figure 1. Because of the finite and fixed absorption lengths of photons in silicon (Table 1b), the full well capacities are basically a function of pixel area (and not volume).
Figure 1. Digital camera and sensor Full Well Capacities per pixel are shown. Digital camera data are shown as brown diamonds, and sensor data from manufacturer's data sheets are shown as blue squares. Data values are from Table 2. Note how recent Canon cameras, like the 1DIV, 5DIII, 6D and 1DX, fall along the model line, indicating a similar technology level. This trend indicates a maturing of sensor technology in the Canon line. Cameras below the model trend, e.g. the Canon 10D, an early model, indicate how much the technology has improved. Note the Canon 20D and 30D use the same sensor. Details of the 2 sensor models are given below; see Sensor Performance Model. The model uses an electron density of 1700 electrons/square micron (orange line) and 1900 electrons/square micron (blue line). The higher the electron density, the greater the problem with side effects, including blooming, so densities are generally kept below about 2000 electrons/square micron.
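The area-scaling model in Figure 1 can be sketched directly: full well capacity is pixel area times an electron density, with the 1700 and 1900 electrons/square-micron densities being the two model lines from the caption (the pixel pitches below are chosen for illustration):

```python
# Full-well model: capacity scales with pixel *area* times an electron density.

def full_well_capacity(pixel_pitch_um, density_e_per_um2=1700.0):
    """Model full well capacity (electrons) for a square pixel."""
    return pixel_pitch_um ** 2 * density_e_per_um2

print(full_well_capacity(6.4))        # ~70,000 electrons for a 6.4-micron DSLR-class pixel
print(full_well_capacity(2.0, 1900))  # 7,600 electrons for a 2-micron compact-camera pixel
```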
The bottom line for full well capacity from the recent trends in the Canon line is that sensor technology in that line is mature and full well capacity scales with pixel size. But note, this is just one of several parameters.
Full well capacity is important for maximum signal-to-noise ratio and dynamic range. Figure 2 shows the signal-to-noise ratio at ISO 100 on an 18% gray card. Eighteen percent is close to the average scene intensity in regular photographs, so Figure 2 shows the signal-to-noise ratio in a typical photograph. Dynamic range is shown in Figure 4 and shows a small trend with pixel size. Because this trend scales directly with the square root of the full well capacity, we see the same maturing of technology with recent Canon cameras as we do in Figure 1.
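The Figure 2 model (the formula given in that figure's caption) is simple enough to compute directly. The full well values below are illustrative, not from a specific camera:

```python
import math

# With the meter placing 100% reflectance at sensor saturation, an 18% gray
# card fills the pixel to 0.18 of full well; the photon-noise-limited SNR is
# then the square root of that signal (SNR = S / sqrt(S) = sqrt(S)).

def gray_card_snr(full_well_electrons, reflectance=0.18):
    signal = full_well_electrons * reflectance
    return math.sqrt(signal)

print(gray_card_snr(60000))  # ~104 for a large DSLR-class pixel
print(gray_card_snr(6000))   # ~33 for a small compact-camera pixel
```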
Full well capacity does not necessarily indicate low light performance even though more electrons (electrons excited and collected by the absorption of photons) means better low light performance. For example, the Nikon D50 plots low in Figure 1. But that full well occurs at ISO 200 where most other cameras are at ISO 50 to 100. Thus, the Nikon D50 is actually more sensitive, and this is indicated on the Unity Gain ISO data discussed below and presented in Figure 6 (where the Nikon D50 plots very high). Low light performance with a given lens is controlled by the Quantum Efficiency of the device combined with the total photons the device collects.
Figure 2. The signal-to-noise ratio per pixel of an 18% gray card, assuming the camera meter would place 100% reflectance at the saturation level of the sensor at ISO 100 (in practice many cameras are close to this exposure level). Note the D50 has a minimum ISO of 200, so the signal-to-noise ratio is for ISO 200 and plots a square root of 2 lower (on an ISO 200 signal-to-noise plot, the D50 would appear relatively higher). There is a clear trend of increasing signal-to-noise ratio with increasing pixel size. Data from Table 2. Details of the model are given below and are the same as those in Figure 1; see Sensor Performance Model. Digital camera data are shown as brown diamonds, and sensor data derived from the full well capacities from manufacturer's data sheets are shown as blue squares. The values for the blue squares were computed by the equation: square root (full well capacity × 0.18).
For detecting the lowest signals, read noise is a controlling factor. Read noise is expressed in electrons, and represents a noise floor for low signal detection. For example, if read noise were 10 electrons and you had only one photon converted in a pixel during an exposure, the signal would mostly be lost in the read noise. (It is possible to see an image where the signal is 1/10 the read noise if one uses many pixels; see: Night and Low Light Photography with Digital Cameras http://www.clarkvision.com/articles/night.and.low.light.photography.) Older CCDs tend to have read noise levels in the range of 15 to 20 or more electrons. Newer CCDs in better cameras tend to run in the 6 to 8 electron range, and some are as low as 3 to 4 electrons. The best CMOS sensors currently have read noise less than 2 electrons, and the Canon 1DX shows less than 1 electron at very high ISOs. Figure 3 shows read noise for various cameras and commercially available sensors. One can see that there is no real trend with pixel pitch.
Read noise dominates the signal-to-noise ratio of the lowest signals for short exposures of less than a few seconds to a minute or so. For longer exposures, thermal noise usually becomes a factor. Thermal noise increases with temperature, as well as exposure time. Thermal noise results from noise in dark current, and the noise value is the square root of the number of dark-current generated electrons. Thermal noise is discussed in more detail below.
Figure 3. Read noise per pixel for various sensors. Data from Table 2. Note older cameras (e.g. Canon 10D, S60) have higher read noise than newer models. Currently Canon's technology leads in read noise performance. Lower read noise values = better performance. Nikon currently clips the average read noise at zero, losing some data. Canon includes an offset, so processing by some raw converters can preserve the low end noise, which can be important for averaging multiple frames to detect very low intensity subjects (as in astrophotography).
A large dynamic range is important in photography for many situations. The pixel size in digital cameras also affects dynamic range. Sensor dynamic range is defined here to be the maximum signal divided by the noise floor in a pixel at each ISO. The noise floor is a combination of the sensor read noise, analog-to-digital conversion limitations, and amplifier noise. These three parameters can not be separated when evaluating digital cameras, and their combination is generally called the read noise. As you might have surmised by now, with larger pixels potentially collecting more photons, those larger pixels can also have a higher dynamic range. Figure 4 shows the maximum dynamic range possible per pixel from each sensor, based on full-well capacity / best read noise, assuming no limitation from A/D converters. Figure 5 shows the measured dynamic range from 3 cameras with significantly different pixel sizes as a function of ISO. The full sensor analyses for these 3 cameras (as well as other cameras) can be found at: http://www.clarkvision.com/articles/index.html#sensor_analysis. One sees that the actual dynamic range of a digital camera decreases with increasing ISO as long as the range is not limited by the A/D converter. At higher ISOs, it is obvious that large pixel cameras have significantly better dynamic range than small pixel cameras, but at low ISO there is not much difference. If 16-bit or higher analog-to-digital converters were used, with correspondingly lower noise amplifiers, the dynamic range could increase by about 2 stops on the larger pixel cameras. The smallest pixel cameras do not collect enough photons to benefit from higher bit converters concerning dynamic range per pixel.
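The dynamic range definition above (maximum signal divided by the noise floor), expressed in photographic stops (powers of two), reduces to a one-line calculation. The example numbers are illustrative assumptions:

```python
import math

# Sensor dynamic range = full well capacity / read noise, in stops (log base 2).

def dynamic_range_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

print(dynamic_range_stops(50000, 4))   # ~13.6 stops: exceeds what a 12-bit A/D can deliver
print(dynamic_range_stops(50000, 25))  # ~11 stops for an older, noisier sensor
```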
Figure 4. Dynamic range per pixel of sensors. NOTE: this is the capability of the sensor, NOT what the camera can actually deliver in a single exposure. Many sensors are limited to just under 12 photographic stops by the camera's 12-bit analog-to-digital (A/D) converter. The 14-bit A/D limit is difficult to achieve in the high speed, low power applications of a digital camera, thus current 14-bit cameras show only slight improvement over 12-bit systems; see Figures 5 and 8 for more information. Look for future DSLRs to use 16-bit A/Ds. The film dynamic range is for similar spatial resolution as the digital sensors and applies to slide film; print film does slightly better, but is not up to the large pixel digital cameras. Sensor data are from Table 2. Details of the model are given below; see Sensor Performance Model. Ultimately, with zero electronics noise, dynamic range would be limited by the number of photons collected, and thus would still show a dependence on pixel size.
Figure 5a. The measured dynamic range per pixel for 4 different cameras is shown. Large pixel cameras have a larger dynamic range. The small pixel camera has a very good dynamic range, but that range rapidly deteriorates with increasing ISO. The large pixel cameras produced up to 2007 were limited by 12-bit analog-to-digital converters at low ISOs. The lower noise, 14-bit Canon 1D Mark III has boosted performance beyond the slightly larger pixel 1D Mark II. High ISO performance is about 1/2 stop better, similar to what was claimed by Canon when the camera was announced. This improvement is due to a better fill factor and lower read noise. Without the low noise 14-bit converter, the Mark III would plot to the lower left of the Mark II. The flattening of the dynamic range toward lower ISOs is due to noise in the camera electronics, such as the A/D converter (See Figure 8 for models of noise sources). Canon 50D data point: dynamic range = 10.7 stops at ISO 400 (reference 27).
Figure 5b. The measured dynamic range per pixel for 3 different cameras is shown with models of the expected dynamic range. Large pixel cameras have a larger dynamic range, both measured and in theory. The dynamic range is often limited by the A/D converter and other electronics in the system, illustrated by the measured data falling below the model at lower ISOs.
NOTE: Unity Gain is a flawed concept in my opinion. It is included here for historical reference. Contrary to some posts on the net, I did not start this concept. It seems like a good idea: the fundamental counting unit is one quantum, an electron, so it seemed that one should not need to digitize a signal finer than 1 electron. In the early days of digital cameras (before about 2008), the electronics in digital cameras were too noisy to expose the flaw in the theory. But since then, digital cameras have substantially lower noise. Read noise at high ISOs is commonly less than 2 electrons, but this is only obtained at ISOs much higher than Unity Gain. Clearly there are advantages to ISOs beyond Unity Gain. The fundamental reason Unity Gain is not relevant is that the sensor in a digital camera is an analog system, not digital. The signals from the sensor are analog, and only after amplification is the signal digitized.
The following is for historical reference.
A concept important to the fundamental sensitivity of a sensor is the Quantum Efficiency. But in terms of camera performance, other factors also play a role, including the size of a pixel and the transmission of the filters over the sensor (the Bayer RGBG filter, the IR blocking filter, and the blur filter). Larger pixels enable the collection of more light, just like a large bucket collects more rain drops in a rain storm. But the large pixel comes at a cost: less resolution on the subject (fewer pixels on the subject).
A parameter that combines the quantum efficiency and the total converted photons in a pixel, which factors in the size of the pixel and the transmission of the filters (the Bayer RGBG filter, blur filter, IR blocking filter), is called the "Unity Gain ISO." The Unity Gain ISO is the ISO of the camera where the A/D converter digitizes 1 electron to 1 data number (DN) in the digital image. Further, to scale all cameras to an equivalent Unity Gain ISO, a 12-bit converter is assumed. Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, in theory there is little point in increasing ISO above the Unity Gain ISO (small gains may be realized due to quantization effects, but as ISO is increased, dynamic range decreases). EXCEPT THE THEORY IS FLAWED. There is a caveat to this idea: fixed pattern noise may still be a factor, and in some cameras a higher ISO than unity gain is needed to reduce the apparent fixed pattern noise. Figure 6 shows the Unity Gain ISO for various cameras and sensors that can be purchased from manufacturers. It is clear that there is a trend in ISO performance as a function of pixel size. Gains for various cameras are shown in Table 3 as a function of ISO. Note that in practice, for 14-bit systems, a lower ISO may be employed if the A/D converter does not limit performance. In comparing the actual performance of 14-bit A/D converters (e.g. see Figure 8a) and the read noise in Table 4, the lowest apparent read noise performance remains similar (about ISO 1600) for both 12-bit and 14-bit DSLRs. But many cameras have pattern noise at low ISO, including ISOs above Unity Gain. Optimum low light performance is at ISOs at or above Unity Gain and high enough that pattern noise is no longer apparent. In many DSLRs this is around ISO 1600 (see individual camera sensor analyses). In practice, set the gain at the nearest 2x ISO (e.g. ISO 400, 800, 1600, 3200), as data obtained at other ISOs may simply be multiplied by the camera's digital processor in some cameras. In many cases it is difficult to see the performance difference between ISO 800 and 1600, except for the dynamic range decrease at the higher ISO or the higher fixed pattern noise at lower ISOs.
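The Unity Gain ISO definition above can be sketched numerically: at base ISO a 12-bit converter digitizes full_well/4095 electrons per DN, and the gain halves each time ISO doubles, reaching 1 electron per DN at the Unity Gain ISO. The 50,000-electron full well and ISO 100 base are illustrative assumptions, not a specific camera:

```python
# Unity Gain ISO: the ISO where the (assumed 12-bit) A/D digitizes 1 e-/DN.
MAX_DN_12BIT = 4095

def unity_gain_iso(full_well_e, base_iso=100):
    gain_at_base = full_well_e / MAX_DN_12BIT  # electrons per DN at base ISO
    return base_iso * gain_at_base             # ISO where gain falls to 1 e-/DN

print(unity_gain_iso(50000))  # ~1221: in theory little point digitizing finer than this
```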
Figure 6b. Unity gain is shown as a function of pixel pitch. Same data as in Figure 6a, with expanded scale.
One can find on the internet discussions about the "Native ISO" for a camera. There is really no such thing. ISO is simply a post-sensor gain followed by digitization. ISO settings are needed mainly to compensate for inadequate dynamic range of downstream electronics. One could specify ISO such that downstream electronics digitize the full range of signal up to the full well capacity. Some may say that is the native ISO, but there is no inherent advantage to this ISO, and in many applications it is less than ideal. Digitizing the full range, assuming one fills that range with electrons from optimum exposure, maximizes signal-to-noise ratio at high signals (this is the concept of using the lowest ISO on the camera and "exposing to the right"). However, this range provides poor digitization of the low end of the signal and, depending on the camera, can have side effects like pattern noise. Alternatively, working at higher ISOs digitizes the low end of the signal range better, with the sacrifice of losing the high end of the range. See "What is ISO on a digital camera? ISO Myths and Digital Cameras" for more information on ISO.
Low Light Sensitivity Factor
Unity Gain ISO describes the high signal part of an image (the highlights) at high ISO, and apparent read noise describes the performance at the low signal end of the photograph. But if a camera delivers more photons to a pixel, then read noise alone does not give a complete story of the performance in the shadows. The "Low-Light Sensitivity Factor" describes the high ISO shadow performance per pixel (Figure 7). It also describes the low light performance in shadows of exposures up to tens of seconds at high ISO.
In astrophotography, a high Low-Light Sensitivity Factor would record the faintest stars, at least for exposures where thermal noise does not dominate. However, this low light sensitivity factor does not apply to stars, and if not to stars, then not to other subjects either. Stars in the focal plane are small disks, spread by diffraction, atmospheric turbulence, and lens aberrations; thus they are spread over several pixels, and over more pixels in a camera with smaller pixels. For SUBJECT sensitivity, a better metric may be signal density divided by read noise density (see Figure 10). This section will be changed after more testing. Note the 7D in Figure 7 shows a much lower factor than the 1D Mark IV, yet the 7D records stars at least as faint as the 1DIV, and perhaps slightly fainter. See Nightscape Photography with Digital Cameras for example images that prove this. A contributing factor in low light, long exposure performance is also the detrimental effect of noise from dark current. That too will be factored into a new measure.
Figure 7. The Low-Light Sensitivity Factor per pixel describes the camera performance in the shadows, or darkest parts of an image, at high ISO. Low-Light Sensitivity Factor = Unity Gain ISO divided by read noise in electrons. A higher value shows better performance in recording shadow detail at high ISO. Derived from data in Table 2.
At high signal levels (most of the range of a digital camera image), noise per pixel is dominated by photon noise, the inherent random arrival times of photons at the sensor. At the lowest signal levels, other sources contribute. There is sometimes confusion over what the sources of such low level noise are. For example, Table 4 below shows apparent read noise is high (when expressed in electrons) at low ISO and decreases with increasing ISO. Figures 8a and 8b show the sources and reasons for these trends. At low ISO, large pixel cameras, typical of DSLRs, collect enough photons that photon noise is small compared to read noise and noise from the analog-to-digital converter (ADC). Some call this quantization noise, and while such noise contributes to the total ADC noise, other noise sources in the ADC stage dominate, especially on newer cameras with 14-bit ADCs (the Canon 40D in Figure 8a). On small pixel cameras, the analog gain is high enough that at low signals, read noise dominates the noise sources and ADC noise is a small factor (Figure 8b). The small pixel camera in Figure 8b looks like it has better low ISO performance than the large pixel cameras in Figure 8a, but that is not the case, because the large pixel cameras collect many times more photons/pixel in a given exposure.
Figure 8a. Noise sources of total apparent read noise per pixel for 2 cameras: the 12-bit ADC in the 1D Mark II shows the ADC noise limits performance at low ISO, while sensor read noise dominates at high ISO. The 14-bit Canon 40D has an ADC stage with relatively low performance, and thus the camera is still limited by ADC noise at low ISO and does not achieve a 4x improvement over the 12-bit system. A 4x improvement is not expected based on typical ADC performance; see reference 15 and examine 12- and 14-bit ADC specifications for devices running in the many-megahertz range. The data indicate, however, that better ADCs could improve the low ISO performance (including dynamic range), and we see this in the Canon 1D Mark III (as indicated by a slight improvement in Figure 5: the smaller pixels of the Canon 1D Mark III plot at a similar performance level as the larger pixel Canon 1D Mark II). From reference 15, 16-bit ADCs appear to be needed, as only such high performance devices have the signal-to-noise ratios needed for these sensors.
Figure 8b Small pixel cameras have analog gain stages with high gain such that the 12-bit ADC is not a limiting factor. Even though the sensor is read noise limited at low signals, the small pixels collect many fewer photons in a given exposure compared to large pixel cameras.
Thermal Noise from Dark Current
On long exposures, electrons collect in the sensor due to thermal processes. This is called the thermal dark current. As with photon noise, the noise from thermal dark current is the square root of the signal. One can subtract the dark current level, but not the noise from the dark current. Many modern digital cameras have on-sensor dark current suppression, but this does not suppress the noise from the dark current. It does, however, prevent the uneven zero levels that plagued cameras before this innovation (Canon cameras before circa 2008). Examples of this problem are seen at: Long-Exposure Comparisons.
N = (S + r^2 + t^2)^(1/2) = (S + r^2 + dc * e)^(1/2),    (eqn 1a)
Signal-to-Noise Ratio, SNR = S / N,    (eqn 1c)
where N = total noise in electrons, S = number of photons (signal), r = apparent read noise in electrons (sensor read noise + downstream electronics noise), and t = thermal noise in electrons. The thermal noise equals the square root of the dark current per second, dc, times the exposure time in seconds, e. The signal, S, is proportional to the photon arrival rate, P, times the exposure time, e. Noise from a stream of photons, the light we all see and image with our cameras, is the square root of the number of photons, which is why the S in equation 1 is not squared (sqrt(S)^2 = S). Both the total photons counted, S, and the thermal noise, t, are functions of exposure time. S is directly proportional to exposure time. Thermal noise is related to dark current. Dark current is usually expressed in electrons/second, and the noise is the square root of the number of electrons, so thermal noise is proportional to the square root of the exposure time.
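Equation 1 translates directly to code: photon noise (sqrt(S)), read noise r, and thermal noise sqrt(dc * e) combine in quadrature. The example values below are illustrative:

```python
import math

# Equation 1a/1c: N = sqrt(S + r^2 + dc*e), SNR = S / N.

def total_noise(signal_e, read_noise_e, dark_current_e_per_s, exposure_s):
    return math.sqrt(signal_e + read_noise_e ** 2 + dark_current_e_per_s * exposure_s)

def snr(signal_e, read_noise_e, dark_current_e_per_s, exposure_s):
    return signal_e / total_noise(signal_e, read_noise_e, dark_current_e_per_s, exposure_s)

# A 60-second exposure collecting 900 photoelectrons, 4 e- read noise, 0.5 e-/s dark current:
print(snr(900, 4, 0.5, 60))  # ~29: close to the pure photon-noise limit of sqrt(900) = 30
```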
See individual sensor evaluations for details on dark current for a given camera.
Figure T. Dark current as a function of temperature for 5 cameras is compared. The temperatures are the camera temperatures reported in each camera's EXIF data and were 2 to 10 degrees higher than the measured ambient temperature. The more massive 1D cameras tended to have a larger difference between internal camera and ambient temperature. For example, the 7D points at -10 and -11 C were made side-by-side with the 1DIV in a freezer, and the 1D reported -3 and -5 C. The freezer temperature was -13 C and the cameras were cooled for 2 hours. The upturn in the trend for the 6D and 1DX may be due to internal heating, such that the sensor was actually warmer than the reported temperature. Even so, we see a clear trend of increasing dark current with increasing temperature. Dark current tends to double about every 5 to 6 degrees C.
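The doubling rule at the end of the caption can be written as a simple scaling model. The reference dark current, temperatures, and the 5.5 C doubling interval below are illustrative assumptions, not measurements from the figure:

```python
def dark_current(dc_ref_e_per_s, t_ref_c, t_c, doubling_deg_c=5.5):
    """Scale a reference dark current to another temperature, assuming
    it doubles about every 5-6 C (5.5 C used here)."""
    return dc_ref_e_per_s * 2 ** ((t_c - t_ref_c) / doubling_deg_c)

# Hypothetical sensor: 0.5 e/s at 20 C. Cooling by two doubling
# intervals (to 9 C) cuts the dark current to a quarter.
print(dark_current(0.5, 20, 9))   # 0.125
```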
Diffraction also limits the detail and contrast in an image. It is the loss of contrast and detail that limits the Apparent Image Quality (below). As the pixel size becomes smaller, lenses must be used at lower f/ratios and those lenses must deliver better performance in order to increase Apparent Image Quality. Few lenses are diffraction limited at f/8 over their entire field of view, so this is an optimistic upper limit to image quality.
Figure 9. Diffraction affects image detail by reducing contrast. The technical term for the contrast reduction is the Modulation Transfer Function (MTF), which describes the contrast the camera delivers as a function of the spacing of lines (the spatial frequency), or fine detail. Here the spatial frequency is expressed in terms of pixel spacing. As the f/stop increases, the diffraction spot becomes larger, and fine detail in the image is reduced in contrast. The red, green and blue lines show the diffraction effects for red, green and blue wavelengths of light for f/ratios 1, 2, 4, and 8. When the MTF reaches 0, there is no detail in the image at that scale.
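For readers who want to reproduce curves like those in Figure 9, below is a sketch of the standard diffraction-limited MTF for an aberration-free lens with a circular aperture. The article does not give this formula explicitly, so treat it as an assumption consistent with the figure; the 5-micron pixel example is hypothetical:

```python
import math

def diffraction_mtf(cycles_per_mm, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular aperture.
    Cutoff frequency is 1 / (wavelength * f_number)."""
    cutoff = 1.0 / (wavelength_mm * f_number)
    x = cycles_per_mm / cutoff
    if x >= 1.0:
        return 0.0   # beyond cutoff: zero contrast
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# Green light (550 nm = 0.00055 mm) at f/8: cutoff ~227 cycles/mm.
# MTF at the Nyquist frequency of hypothetical 5-micron pixels
# (100 cycles/mm):
print(round(diffraction_mtf(100, 0.00055, 8), 2))   # ~0.46
```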
Figure 11. Because all sensors have finite read noise, when one adds pixels together, the total read noise increases. Read noise per pixel has little dependence on pixel size. Note the technology improvements with different generations of cameras. For example, the oldest cameras, the Canon S60 and 10D, had very high noise; in the mid-part of the 2000-2009 decade, cameras had about 4 electrons read noise (e.g. Canon 5D, 1D Mark II, 1D Mark III, 40D, Nikon D300); in the latter part of the decade read noise dropped to about 2.5 electrons (Canon 5D Mark II, 50D, 7D); and in 2010 the 1D Mark IV set a new low of 1.7 electrons.
Figure 12. Dynamic range of an area of pixels will be higher than that of a single pixel, but will decrease with pixel size because read noise increases and signal level decreases as pixel size decreases. The dynamic range vertical axis is shown in photographic stops (1 stop = factor of 2). Solid lines are models that show what the trends would be if technology were constant and pixel size varied. Again we see an improving trend of better dynamic range with newer sensors as technology improves.
Full Sensor Apparent Image Quality (FSAIQ)
Apparent image quality is a subjective measure that includes resolution and signal-to-noise ratio. While this is not a new concept, I present my own working definitions.
Image quality depends on the imaging situation. As described in the beginning sections of this article, if you want maximum image quality and can choose the focal length to make use of the entire sensor (e.g. landscape photography), the image quality metric will be different than if you want maximum detail on a subject, like a bird, that is small in the frame and you can not increase focal length. The first situation regards use of the full sensor, so "Full Sensor Apparent Image Quality" is important, FSAIQ:
FSAIQ = StoN18 * Mpix / 20.0 = sqrt(0.18 * Full well electrons) * Mpix / 20.0,
where StoN18 is the signal-to-noise delivered by the sensor on an 18% gray target, assuming a 100% reflective target just saturates the sensor, and Mpix is the number of megapixels. StoN18 is computed from pixel performance before Bayer de-mosaicing: indicative of the true performance of each pixel. Table 2 shows calculated FSAIQ. Actual image quality depends on the lens delivering a certain resolution, so use these values as a rough guide of what might be possible. More information on FSAIQ and comparison to film can be found at: http://www.clarkvision.com/articles/film.vs.digital.summary1.html.
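The FSAIQ formula can be evaluated directly. Here is a small sketch using the Canon 50D numbers quoted later in this article (27,300 electrons full well, 15.1 megapixels); the small difference from the tabulated AIQ of 52.7 comes from rounding of the inputs:

```python
import math

def fsaiq(full_well_electrons, megapixels):
    """FSAIQ = sqrt(0.18 * full well electrons) * Mpix / 20.0
    (StoN18 is the square root of the electrons at 18% of full well)."""
    return math.sqrt(0.18 * full_well_electrons) * megapixels / 20.0

# Canon 50D values from the sensor data section below:
print(round(fsaiq(27_300, 15.1), 1))   # ~52.9, vs 52.7 tabulated
```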
Figure 13. Full Sensor Apparent Image Quality. The model uses the same equation and parameters as the model in Figure 4 and is "Model A" described below. The model closely predicts performance for all modern cameras (within about 10% for large pixels, and 20% for small pixels). Older cameras and sensors fall below the model, typically due to lower fill factors. Sensors with higher quantum efficiency (QE) than the model (45%) would plot above the model (by a factor of the square root of 2, i.e. 1.41x higher FSAIQ for a 100% QE sensor). Solid colored lines indicate constant sensor size in megapixels. The Canon 7D and 1D Mark IV have higher system sensitivities than the model, so they plot a little above the model lines. Dashed colored lines indicate constant-format sized sensors. The "Full-Frame" sensor is the same size as 35-mm film. As one moves to the left along a constant-format line, FSAIQ first increases until diffraction begins to take effect, then FSAIQ decreases. Diffraction at f/8 is used for the Full Frame, 1.3x-crop, and 1.6x-crop sensors, and f/7 for the 4/3 sensor (long dashed lines), f/4 for the Full Frame and 2/3" small-format sensor, and f/2.8 for the smallest sensor shown, 1/1.8" (short dashed lines). The smaller f/ratios are needed as sensor size decreases in order to make the model fit observed data. This indicates smaller-format cameras must have very high quality lenses in order to deliver performance at high megapixels. Diffraction limits the effective megapixels. When pixels become very small, they hold so few electrons that dynamic range suffers, and this causes the turn down in FSAIQ at pixel pitches below 2 microns. See the discussion of diffraction, below, which will further limit FSAIQ. For example, the FSAIQ for the Canon 7D plots above the model line for its 1.6x-crop sensor, but that FSAIQ will only be realized if the lens used with the camera is diffraction limited below f/8.
The FSAIQ data for some sensors in Figure 13 plot below the model curves. This is best seen in the trend below the 1.6x-crop model. Those points represent older cameras that had lower efficiency (e.g. the Canon 10D, plotting at 7.4-micron pixel pitch), probably due to lower fill factors, lower quality microlenses, and lower quantum efficiencies. The newer cameras plot close to the model lines. The Nikon D3 plots below the model because of its reported low full-well capacity (more data at ISO 100 are needed to confirm the D3 full-well capacity). The Canon 7D and 1D Mark IV have higher system sensitivities than the model, so they plot a little above the model lines.
Note the Canon 7D in Figure 13 falls above the red dashed line. The red dashed line is decreasing due to loss of detail and contrast from diffraction at f/8. Thus, a lens working faster than f/8 and delivering detail better than an f/8 diffraction-limited lens is needed to enable the 7D sensor to give the indicated quality in its images. That requires a very high quality lens.
The FSAIQ model and sensor data in Figure 13 are for the lowest ISO, filling the pixel with electrons. FSAIQ at higher ISOs drops approximately with the square root of the ISO, so quadruple the ISO and the FSAIQ drops by 2x. If new sensors came out with higher quantum efficiency (about a 2x improvement is possible), the FSAIQ would increase by the square root of the increase, so a 1.4x improvement is possible.
Focal Length Limited Apparent Image Quality FLL-AIQ
Now we look at image quality in situations where you want to resolve as much detail as possible on a subject, for example, the Moon with a short telephoto lens, or a bird or other subject when it is small in the frame.
We first consider a focal length limited situation where one wants to make the same sized output, e.g. a print. For example, you want to make an 8x10 inch print of the Moon. To get maximum detail on the Moon, one needs the smallest pixels for a given focal length, as long as the lens can deliver the detail, the pixels are not so small that they result in too much apparent noise (low signal-to-noise ratios), and dynamic range is not impacted. The focal length limited constant output apparent image quality at ISO 1600, FLL-AIQ1600, is:
FLL-AIQ1600 = pixels/mm * StoN18 at ISO 1600 * sqrt(dynamic range density / 15 stops) / 42
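A sketch of the FLL-AIQ1600 formula follows. Interpreting "dynamic range density" as dynamic range expressed in stops is an assumption on my part, and the example inputs (a hypothetical 4.3-micron pixel camera with StoN18 of 25 at ISO 1600 and 10 stops of dynamic range) are for illustration only:

```python
import math

def fll_aiq1600(pixels_per_mm, ston18_iso1600, dr_stops):
    """FLL-AIQ1600 = pixels/mm * StoN18(ISO 1600)
                     * sqrt(dynamic range / 15 stops) / 42"""
    return pixels_per_mm * ston18_iso1600 * math.sqrt(dr_stops / 15.0) / 42.0

# Hypothetical camera: 4.3-micron pixels -> 1000/4.3 ~ 233 pixels/mm.
print(round(fll_aiq1600(1000 / 4.3, 25, 10), 1))
```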
Figure 14. The Focal Length Limited constant output Apparent Image Quality at ISO 1600. The model (green solid curve) shows the image quality for constant technology and varying pixel size. The camera data points show improving technology with time. Image quality may be limited by lenses and diffraction limits as shown.
Focal Length Limited Apparent Image Quality FLL-AIQ-MAX
In focal length limited situations where you want the maximum detail on a subject, the highest resolution will be delivered by the sensor with the smallest pixels, assuming the lens can deliver the detail. Most will judge the highest image quality on the image with more fine detail even with more noise, as long as the noise is not excessive and the dynamic range is adequate. The best metric for this situation is the inverse of the pixel size, as given in the Camera Sensor Figure of Merit (CSFM) section below.
The effect of inverse pixel size is illustrated in Figure 15. The Moon was photographed with the same lens on four cameras, with pixels ranging from 8.2 microns to 4.3 microns. The image with 4.3-micron pixels shows more detail. Note, there is no crop factor multiplier effect. See Crop Factor for more details.
Figure 15. The Moon photographed with 4 different cameras using the same lens, so the focal length is the same for each image. This is the full resolution image produced in the camera and written as a jpeg file. No post-processing sharpening has been done. The 1D Mark II camera has 8.2-micron pixels, collecting more light per pixel and giving very high signal-to-noise ratios. The 5D Mark II camera has 6.4-micron pixels, which resolve finer detail, but with lower signal-to-noise ratios. The 1D Mark IV has smaller pixels still and delivers a better image despite a lower signal-to-noise ratio per pixel. The 7D, with the smallest pixels, delivers the best image despite having the lowest signal-to-noise ratio per pixel of the 4 cameras.
Camera Sensor Figures of Merit (CSFM)
The AIQ function above requires reliable sensor performance data, which do not exist for many sensors. Also, when new cameras are introduced, one might want some simple prediction of sensor performance based on data that the consumer might readily obtain. So I have come up with a simple equation:
Camera Full Sensor Figure of Merit (CFSFM) = megapixels * pixel pitch.
Pixel pitch is inversely proportional to the square root of the pixel density, and pixel pitch is also related to signal-to-noise ratio, which is a main property of my AIQ function. The above equation does ignore quantum efficiency, filter transmission, and fill factor variations, which are better represented in the S/N in the AIQ model above. However, we do not have raw data and S/N sensor info for many cameras, so figures of merit may be good overall indicators. I'll add computed figures of merit as time permits, but you can easily use the data in Table 2 to compute the figures of merit for various cameras. However, the figures of merit do ignore the small-pixel effects of vanishing dynamic range as pixel size decreases, and lower image detail due to diffraction, which are in the AIQ models.
Example Camera Sensor Figures of Merit values:
CFSFM = Camera Full Sensor Figure of Merit
FLLCSFM = Focal Length Limited Camera Sensor Figure of Merit
IMPORTANT NOTE ON INTERPRETING FLLCSFM. This is the focal length limited case and only applies to true focal lengths. For example, the FZ50 has an attached 35-420 mm equivalent lens, but the true focal length is only 7.4 to 88.8 mm. If you took a picture of the Moon at 88.8 mm (max zoom) on the FZ50 and compared it to an image taken with an 88.8 mm lens on a Nikon D3, the FZ50 image would show more detail. But with a focal length about 3 times longer on the D3, the two cameras would produce similar spatial detail (though the D3 would have higher signal-to-noise), and with longer focal lengths still, the D3 would show more detail. So ratios of the FLLCSFM indicate the ratios of true focal lengths needed to show similar detail on a small subject in the images from each camera. For example, with the Canon 7D, a 180 mm lens would produce similar detail on the subject as the FZ50 at maximum zoom, but again the 7D would have higher signal-to-noise images. The CFSFM indicates which camera has higher quality pixels, and the FLLCSFM indicates the relative resolution for the same true focal length lenses. It is physically impossible for both metrics to be the maximum for the same camera.
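The two figures of merit can be sketched as follows. The FZ50 pixel pitch used below (2.0 microns) is an assumed value for illustration, not from the article's tables:

```python
def cfsfm(megapixels, pixel_pitch_um):
    """Camera Full Sensor Figure of Merit = megapixels * pixel pitch."""
    return megapixels * pixel_pitch_um

def fllcsfm(pixel_pitch_um):
    """Focal-length-limited figure of merit: the inverse of pixel size,
    as stated in the FLL-AIQ-MAX section."""
    return 1.0 / pixel_pitch_um

# Ratio of FLLCSFM values indicates the ratio of true focal lengths
# needed for similar detail: Canon 7D (4.3 um) vs FZ50 (assumed 2.0 um).
print(round(fllcsfm(2.0) / fllcsfm(4.3), 2))   # 2.15
```

The result is roughly consistent with the example above: the 7D needs about twice the true focal length (180 mm vs 88.8 mm) to match the FZ50's detail on a small subject.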
Sensor Performance Models
The sensor models in this article are simple, but they accurately describe many sensors. Note that the largest departures of data points from the model generally occur for older sensors, probably due to lower fill factors, lower quality microlenses, and lower quantum efficiencies. Newer sensors tend to plot closer to the model.
Two models are used: Model A and Model B. The models assume a quantum efficiency similar to current digital camera sensors (45%), a full well capacity of 1,700 electrons per active square micron (the electron density) for Model A and 1,900 electrons for Model B, read noise = 2, 2.5 or 4 electrons (as noted), and a 0.25-micron dead space between pixels. (Models shown in figures on this web page before December 26, 2008 used a 1-micron dead space; figures before 2009 used a 0.5-micron dead space for APS-C and larger sensors.) For example, a sensor with a pixel spacing of 3.5 microns and a dead space of 0.5 microns would have an active area of 9 square microns, collecting 9 * 1,700 = 15,300 electrons in Model A. AIQ is limited in the model in Figure 13 by 2 factors: 1) diffraction, and 2) lower dynamic range as pixel size decreases. The model limits resolution (effective megapixels) to the Modulation Transfer Function at 50% response (MTF50). MTF50 occurs at f-ratio / 1.56 microns/pixel. For example, at f/8 MTF50 occurs at 5.13 microns, so pixels smaller than about 5 microns will be limited in spatial resolution with a diffraction-limited f/8 lens. AIQ is decreased linearly in the model when dynamic range (defined as full well divided by read noise) falls below 10 photographic stops. This breakpoint is seen in the constant-format curves (dashed lines) below 2 microns in Figure 13.
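The Model A full-well and MTF50 calculations in the paragraph above can be sketched as:

```python
def model_full_well(pixel_pitch_um, dead_space_um=0.5, electron_density=1700):
    """Model A full well: (pitch - dead space)^2 active area times
    the electron density (1,700 e per active square micron)."""
    active_side = pixel_pitch_um - dead_space_um
    return active_side ** 2 * electron_density

def mtf50_pixel_um(f_number):
    """Pixel size at which MTF50 is reached: f-ratio / 1.56."""
    return f_number / 1.56

# The article's worked example: 3.5-micron pitch, 0.5-micron dead space
# -> 9 square microns of active area and 15,300 electrons.
print(model_full_well(3.5))          # 15300.0
print(round(mtf50_pixel_um(8), 2))   # 5.13
```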
If you find the information on this site useful, please support Clarkvision and make a donation (link below).
Below are tables that give other derived parameters for many cameras, along with data from the manufacturers' data sheets for their sensors. Methods for determining gain, full-well capacity, and read noise can be found in references 1-5. Specific procedures are described in Procedures for Evaluating Digital Camera Sensor Noise, Dynamic Range, and Full Well Capacities: Canon 1D Mark II Analysis, http://www.clarkvision.com/articles/evaluation-1d2.
The signal and noise model for digital cameras is given in equation 1, above. It is this predictable model that allows us to compute the performance of a camera and how it will respond in a given situation. It also shows us that those waiting for small pixel cameras to improve and equal the performance of today's large pixel DSLRs will have a long wait: it simply can not happen because of the laws of physics. So, if you need high ISO and/or low light performance at the pixel level, the best solution is a camera with large pixels and a correspondingly larger sensor. However, with pixel management, small pixels can be added together to effectively give the performance of large pixels, so the perceived difference between large and small pixels can largely be mitigated in post processing. What is important for good overall performance is good sensitivity, low read noise, low dark current, and low fixed pattern noise.
Another factor to consider these days is constant sensor size with different sized pixels. In this case one trades signal-to-noise ratio against more detail in an image (assuming the lens can deliver the detail). Whether more pixels, each with a lower signal-to-noise ratio, or fewer larger pixels, each with a better signal-to-noise ratio, produces the better image is subject dependent. Usually only when one is starved for photons, as in darker night scenes, will the larger pixel camera deliver a better image. When the signal-to-noise ratio is photon noise limited (which includes most digital camera images), one can average pixels and achieve the signal-to-noise ratio of any larger pixel (the value of r is small compared to P in equation 2 above). In that case, noise is dominated by photons, and in general smaller pixels (in a constant sized sensor) will deliver a better image. If the photon signal is very low, so that read noise is a significant portion of the total noise, then larger pixels (in a constant sized sensor) will deliver the better image.
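The pixel-averaging argument can be checked numerically. The pixel sizes, signal levels, and read noise below are hypothetical, chosen so that four small pixels collect the same total light as one large pixel:

```python
import math

def snr_per_pixel(signal_e, read_noise_e):
    """SNR of one pixel: photon noise plus read noise in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

def snr_binned(signal_e, read_noise_e, n_pixels):
    """SNR after summing n_pixels pixels: signal adds linearly,
    photon noise and read noise add in quadrature."""
    total_signal = n_pixels * signal_e
    return total_signal / math.sqrt(total_signal
                                    + n_pixels * read_noise_e ** 2)

# Four small pixels with 2,500 e each vs one large pixel with
# 10,000 e; read noise 2.5 e per pixel in both cases.
print(snr_binned(2500, 2.5, 4))      # just under the large-pixel SNR
print(snr_per_pixel(10_000, 2.5))
```

With bright signals the two results are nearly identical, confirming that binned small pixels recover large-pixel performance; the gap widens only when the signal drops toward the read-noise floor.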
Digital Camera Sensor Performance Data
AIQ = StoN18 * Mpix / 20.0 = sqrt(0.18 * Full well electrons) * Mpix / 20.0, where StoN18 is the signal-to-noise of the sensor on an 18% gray target, assuming a 100% reflective target just saturates the sensor, and Mpix is the number of megapixels.
Full well ISO is the lowest ISO where the camera reaches full well (= 100 for DSLRs and 50 for the S60 and S70 point and shoot cameras). Example: the Canon 1D Mark II at ISO 800 has a gain of 1.6, so: 1.6 * 4095 * 800/100 ≈ 52,400 electrons.
*At ISO 100, the Canon 1D MII records a maximum of 52,300 electrons. At ISO 50, 79,900 electrons are recorded, but that occurs at about 3/4 of the 12-bit linear scale, at 3071 on the 12-bit DN range.
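The gain-scaling arithmetic used in these footnotes can be sketched as:

```python
def full_well_from_gain(gain_e_per_dn, max_dn, iso, full_well_iso):
    """Full well estimate: gain * max DN * (ISO / full-well ISO),
    the scaling used in the 1D Mark II example above."""
    return gain_e_per_dn * max_dn * iso / full_well_iso

# Canon 1D Mark II: gain 1.6 e/DN at ISO 800, 12-bit max DN 4095,
# full-well ISO 100 -> about 52,400 electrons, consistent with the
# 52,300 quoted in the footnote.
print(round(full_well_from_gain(1.6, 4095, 800, 100)))   # 52416
```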
An "e" following a value means estimate.
The Kodak KAF-18000CE is targeted as a medium format sensor see reference 12.
Possible dynamic range of the sensor is theoretical and in practice is often limited by the 12-bit (or 10-bit) analog to digital converters in many cameras.
Sensor sizes from manufacturer's data sheets or product reviews.
The Canon 50D full well is projected from ISO 400 data in Reference 27.
Some additional parameters, grouped by camera for easier comparison are shown below.
Table 3a Camera Gain in 12-bits
* = 14-bit system. Canon 1D Mark II values from reference 3. Canon 1DMII are newer values determined Feb 12, 2006 with firmware 1.2.4, reference 3.
Canon 10D values from Tam Kam-Fai posted on [email protected], 20D, 300D, D70 ISO 400 values from Terry Lovejoy, reference 1. Canon S60 5-megapixel point and shoot digital camera from this study.
The 5D, 350D ISO 400 values are from reference 13. Reference 21 derives similar gains for the Canon 5D. The 20D value also agrees with reference 13 where 3.09 electrons/DN is reported. Reference 13 reports the 10D at ISO 400 has a gain of 2.34 electrons/DN, 15% lower than used here. 40D and 400D data from References 14.
Table 3b Camera Gain in 14-bit Systems Notes:
Canon 1D Mark III and 40D are from Clarkvision analyses. Note the 1D Mark III does not change gain between ISO 50 and 100. ISO 50 photos will be saturated a stop lower.
Canon 1DMIII saturates at 70500 electrons at DN 15280 out of 16383, ISO 50 (1360 electrons/square micron).
Canon 40D saturates at 43400 electrons at DN 13824 out of 16383, ISO 100 (1336 electrons/square micron).
Nikon D3 info derived from references 16 and 21. Reference 21 derives a saturated full well capacity of 65,568 electrons. This is in contrast to the stated (December 2007) 340,000 electrons in reference 16 (which is several times the full well capacity, on a per square micron basis, of any other CMOS or CCD sensor). For example, 340,000 gives 4761 electrons per square micron, much higher than any published value I have seen. I will use 137,000 electrons as the full well, which gives 1918 electrons / sq. micron, still a value that is probably too high.
Nikon D300 data derived from reference 17. Reference 17 states the camera saturates at 12-bit DN 3830. The full well capacity should be about: 2.74 * 16383 * 3830/4095 ≈ 42,000 electrons. Note Reference 27 derives 0.78 e/DN for the Canon 40D at ISO 400 compared to this site's 0.85 (within 9% of each other). Canon 50D derived from reference 27. Canon 5D Mark II analysis by R. Clark.
* = 14-bit system. Full well depth (electrons for max DN at iso 100) (maybe we should call this the "camera maximum DN well depth", because it is not necessarily the real full well depth). Canon 1D Mark II values from reference 3. Canon 1DMII values determined Feb 12, 2006 with firmware 1.2.4.
Canon 10D values from Tam Kam-Fai posted on [email protected]; 20D, 300D, D70 values from Terry Lovejoy, reference 1, and http://www.astrosurf.org/buil/20d/20dvs10d.htm. The Canon 5D, 350D, and 20D values are computed from the gains above and read noise in DNs from Table 3 of Reference 13. 40D and 400D data from Reference 14. For comparison, reference 21 derives for the Canon 5D read noise = 32.7 electrons at ISO 100, 15.5 at ISO 200, 8.9 at ISO 400, and 3.8 at ISO 1600.
Table 4b Additional Analyses Notes:
Canon 1D Mark III data from Jerry L., analyzed here by Clark.
Canon 40D analyzed here by Clark. Nikon D3 info derived from references 16 and 21. Nikon D300 data derived from reference 17. Canon 50D read noise derived in this study from Image data courtesy Tim Dodd, London. Canon 5D Mark II analysis by R. Clark.
Table 5: Signal-to-noise values assume photon-noise-limited performance. Read noise and other factors can only degrade these numbers (read noise is insignificant for the maximum-possible and 18% gray card signal-to-noise ratios in the cases shown here). * The Canon S60 full well is for ISO 50. P&S means point and shoot. The Canon 20D full well and signal-to-noise are based on an initial number that may have a large error bar.
Individual Camera Sensor Data
Canon 50D (November 2008) Pixel pitch: 4.7 microns.
S/N on 18% gray card, ISO 100 = 70.
Sensor Full Well Capacity at lowest ISO: 27,300 electrons.
Sensor dynamic range = 27300/2.61 = 10,460 = 13.4 stops.
ISO at unity gain (scaled to 12 bit) = 880 (14-bit unity gain = ISO 220).
Low Light sensitivity Factor: 337.
Apparent Image Quality, AIQ = 52.7
The gains were derived from the ISO 400 gain in reference 27. Image data courtesy Tim Dodd, London. That image data enabled the read noise, full well capacity, and dynamic ranges to be derived.
Canon 5D Mark II (December 23, 2008) Pixel pitch: 6.4 microns.
S/N on 18% gray card, ISO 100 = 103.
Sensor Full Well Capacity at lowest ISO: 65,700 electrons.
Sensor dynamic range = 65700/2.5 = 26,280 = 14.7 stops.
ISO at unity gain (scaled to 12 bit) = 1600 (14-bit unity gain = ISO 404).
Low Light sensitivity Factor: 640.
Apparent Image Quality, AIQ = 109
All data derived by R. Clark, December, 2008.
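The dynamic-range figures quoted in the two camera entries above follow directly from full well and read noise:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Dynamic range = full well / read noise, expressed in
    photographic stops (powers of 2)."""
    return math.log2(full_well_e / read_noise_e)

# Canon 50D and 5D Mark II values from the entries above:
print(round(dynamic_range_stops(27_300, 2.61), 1))   # 13.4
print(round(dynamic_range_stops(65_700, 2.5), 1))    # 14.7
```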
Special features to look for in motion sensors
You can purchase a basic motion detector or look for models with more features — usually aimed at reducing false alarms or making installation easier. Here are some features to consider when choosing your motion detector.
- Wireless motion sensors
- Choosing a wireless model will make your life easier from start to finish. No wires means no drilling and a simpler setup. This type of motion sensor communicates with your home security system wirelessly, and wireless sensors are the most common type used today.
- Sensors installed on doors and windows are typically passive infrared sensors. They will trigger your security alarm immediately if a door or window is opened in your home.
- Pet owners are never quite sure what animals are up to while they’re away at work. But if you have an active dog or cat, it’s possible a motion sensor would be activated due to their movements. There are some passive sensors that can be set to ignore your pet’s movements. You can usually set them up to ignore animals up to a certain weight, which means they should still be useful for protecting your home from unwanted human intruders.
- Some motion detectors work hand-in-hand with video security systems. Using this technology, the camera will only record when the sensor is tripped. This saves on memory for your security camera and, of course, it can come in handy to have video evidence of a break-in.
Types of Motion Detectors
Motion detectors are classified based on how they detect the motion of a body. The two classifications are listed below with a brief explanation of their operation.
Active Detectors are also known as radar-based motion sensors. Active detector sensors emit radio waves/microwaves across a room or other space; the waves strike nearby objects and reflect back to the sensor. When an object moves in the area the sensor covers, the sensor looks for a Doppler (frequency) shift in the returning wave, which would indicate that the wave has hit a moving object. The motion sensor interprets these changes and sends an electrical signal to the alarm system, light, or other device connected to it.
Active Motion Sensor Operation.
Active sensors that use microwaves for movement detection are mostly used in applications such as automatic doors in shopping malls, but can also be found in home security alarm systems and indoor lighting systems.
Active motion sensors are not well suited for outdoor lighting or similar applications, because the movement of random objects such as windblown debris, smaller animals, and even larger insects can be detected by the active sensor and the lighting will be triggered.
Passive Motion Sensors are the opposite of active sensors: they do not send out anything, but simply detect infrared energy. Infrared (heat) energy levels are sensed by passive detectors, which scan the room or area where they are installed for infrared heat radiated from living beings.
Passive Motion Sensor Operation
Heat is radiated from any object with a temperature above absolute zero. When an object moves into the detection area of a passive sensor, the sensor detects the heat emitted from that object and activates the alarm, turns on the light, or triggers whatever application is connected.
These sensors would not be very useful if they were activated by every small animal or insect that moves in the detection range; however, most passive sensors can be adjusted to pick up only the motion of objects above a certain level of emitted heat, for example adjusting the sensor to pick up movement only by humans.
Combined (Hybrid) Sensors
A combined or hybrid technology motion sensor is a combination of both active and passive sensors. It activates the light or alarm only when motion is detected by both the active and the passive sensor. Combined sensors are useful in alarm systems to reduce the possibility of false alarm triggers.
However, this technology also has its disadvantages. It cannot provide the same level of security as separate PIR and microwave sensors, because the alarm is triggered only when motion is detected by both sensors.
So, for example, if a burglar knows how to evade one of the sensors, his movement will be detected by only one sensor; no signal will be sent to the alarm system and it won't be triggered. The most popular type of dual technology sensor combines a PIR and a microwave sensor.
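The both-sensors requirement behind a dual-technology alarm reduces to a simple logical AND, as this small sketch shows:

```python
def hybrid_alarm(pir_triggered: bool, microwave_triggered: bool) -> bool:
    """Dual-technology sensor: the alarm fires only when BOTH the
    passive (PIR) and active (microwave) elements detect motion."""
    return pir_triggered and microwave_triggered

# Defeating either element alone defeats the combined sensor:
print(hybrid_alarm(True, False))   # False: no alarm
print(hybrid_alarm(True, True))    # True: alarm
```

This makes the trade-off explicit: false alarms drop (both elements must agree), but so does sensitivity (evading either element evades the alarm).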
Motion sensors come in different shapes and sizes. A couple of examples are explained below.
Passive Infrared Detectors (PIR)
These are among the most widely used sensors today and can be found in many home security systems. Passive Infrared Detectors look for changes in infrared energy levels caused by the movement of objects (humans, pets, etc.).
A PIR motion detector is easily disturbed by variable heat sources and sunlight, so it is better suited to indoor movement detection within a closed environment.
Active Infrared Sensors
These are designed to emit an infrared beam toward a light detector. As soon as the beam is interrupted, the sensor may sound the motion alarm.
Active infrared detectors use a dual-beam transmission structure: a transmitter on one side emits the infrared ray, and a receiver on the other side receives it. This makes them suitable for outdoor point-to-point interruption detection.
Active infrared beam motion sensors are mainly installed outdoors, because they rely on the transmitter/receiver principle for detection. It is important that the beam pass through the detection area and reach the receiver.
Ultrasonic motion sensors are available in both active and passive forms. In operation, an active ultrasonic detector sends out high-frequency sound waves that are reflected back to the sensor. If any interruption occurs in the sound waves, the active ultrasonic sensor may sound the alarm.
Mini Ultrasonic Motion Detector
Applications of Motion Sensors
Some of the key applications of motion detectors include:
- Intruder alarms
- Automatic ticket gates
- Entryway lighting
- Security lighting
- Hand dryers
- Automatic doors
- Ultrasonic sensors are used for triggering security cameras at home and for wildlife photography.
- Active infrared sensors are used to indicate the presence of products on conveyor belts
Some practical applications of both active and passive motion detector sensors are given below.
Liquid Level Controller using Ultrasonic Sensors
The figure below shows how a liquid level controller using an ultrasonic sensor controls the liquid level in a tank, operating a motor when the liquid reaches predefined limits.
When the liquid in the tank reaches the lower or upper limit, the ultrasonic sensor detects this and sends a signal to the microcontroller. The microcontroller is programmed to operate a relay driving the motor pump based on the limit-condition signals from the ultrasonic sensor.
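The controller logic described above can be sketched as a small hysteresis function. The limits and units below are illustrative, not from the article, and a real implementation would run on the microcontroller driving the relay:

```python
def pump_command(level_cm, low_limit_cm, high_limit_cm, pump_on):
    """Hysteresis control: start the pump when the liquid falls to the
    lower limit, stop it at the upper limit, and otherwise keep the
    current pump state."""
    if level_cm <= low_limit_cm:
        return True            # tank nearly empty: run the pump
    if level_cm >= high_limit_cm:
        return False           # tank full: stop the pump
    return pump_on             # between limits: no change

print(pump_command(5, 10, 90, pump_on=False))    # True
print(pump_command(50, 10, 90, pump_on=True))    # True (keeps filling)
print(pump_command(95, 10, 90, pump_on=True))    # False
```

Keeping the current state between the limits is what prevents the relay from chattering on and off as the level hovers near a single threshold.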
Automatic Door Opening System Using PIR sensor
Similar to the above system, an Automatic Door Opening System using a PIR sensor detects the presence of humans to perform door operations, i.e., opening and closing the door. As discussed above, a PIR sensor detects the presence of humans only and drives the microcontroller pins when motion is detected. Depending on the signals from the PIR sensor, the microcontroller operates the door by running the motor in forward and reverse rotation with the help of a driver IC.
Automatic Door Opening System Using PIR sensor
This is a brief description of motion sensors and their applications, with some practical examples of motion detectors. If you would like to know more about these motion sensors or other sensor-based projects, you can post your queries by commenting below.