A. Labeyrie, Observatoire de la Côte d'Azur; S. G. Lipson, Technion – Israel Institute of Technology, Haifa; P. Nisenson, Smithsonian Astrophysical Observatory, Cambridge, Massachusetts
Imaging with very high resolution using multimirror telescopes
Recent years have seen very large aperture telescopes constructed by piecing together smaller mirrors, carefully mounted on a common frame so that the individual images interfere constructively at the focus. The two multimirror Keck telescopes on Mauna Kea are built this way, each from 36 hexagonal segments which together form a paraboloidal mirror about 10 m in diameter. The frame is very rigid, but since each segment weighs half a ton, it still distorts significantly when the telescope is pointed, so that the mirror positions have to be actively corrected to compensate for the small movements. Then, together with adaptive optics correction for atmospheric turbulence, diffraction-limited images are obtained since, for a small enough field of view, the off-axis aberrations of the paraboloidal mirror are insignificant. However, the maximum aperture which can be operated this way is expected to be of the order of 100 m, the size of the “Overwhelmingly Large Telescope” (OWL) being studied by the European Southern Observatory. The resolution achievable by a pointable direct-imaging telescope is therefore limited to a few milli-arcseconds at optical wavelengths.
Interferometry, by using synthetic imaging, represents one way around this problem when angular resolution rather than light-gathering power is the dominant aim. Effective apertures of hundreds of meters have been achieved, but the number of subapertures is still quite modest. As we have seen in the preceding chapters, interferometers with even tens of apertures and optical delay lines become extremely expensive and complicated to operate.
We saw in the chapter on atmospheric turbulence that the real limitation to the resolution of a ground-based telescope is not the diameter of the telescope aperture, but the atmosphere. As a result, a telescope of any diameter will rarely give an angular resolution in visible light better than 1 arcsec, which is equivalent to the diffraction limit of an aperture about 10 cm in diameter (the Fried parameter, r0, defined in Section 5.4.1). This limitation has been considered so fundamental that large telescope mirrors were often not even polished to an accuracy that could give a better resolution than this. The ideas behind the various methods of astronomical interferometry are all directed at exceeding this limit.
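The equivalence between a 10 cm aperture and roughly 1 arcsec of seeing follows directly from the Rayleigh criterion. A minimal sketch; the 550 nm wavelength is an illustrative choice, not a value from the text:

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return theta_rad * (180.0 / math.pi) * 3600.0

# The ~10 cm Fried parameter at a visible wavelength (550 nm assumed):
seeing_limited = diffraction_limit_arcsec(550e-9, 0.10)   # ~1.4 arcsec

# A 100 m OWL-class aperture at the same wavelength:
owl = diffraction_limit_arcsec(550e-9, 100.0)             # ~1.4 milli-arcsec
```

The same function reproduces both numbers quoted above: an r0-sized aperture gives about an arcsecond, while a 100 m aperture reaches the milli-arcsecond regime.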
The first idea was due to Fizeau (1868), who proposed masking the aperture of a large telescope with a screen containing two apertures, each of diameter less than r0 but separated by a distance considerably greater than this. The result would be to modulate the image with Young's fringes and, from the contrast of the fringes, to glean information about the source dimensions. A few years after the publication of Fizeau's idea, Stéphan (1874) tried it out experimentally with the 1-m telescope at Marseilles and concluded (correctly) that the fixed stars were too small for their structure to be resolved by this telescope. Michelson (1891) later developed the theory needed to make this idea quantitative and was the first to succeed with Fizeau's technique, measuring the diameters of the moons of Jupiter using the 12-inch Lick refractor telescope.
One of the basic astronomical pursuits throughout history has been to determine the amount and temporal nature of the flux emitted by an object as a function of wavelength. This process, termed photometry, forms one of the fundamental branches of astronomy. Photometry is important for all types of objects from planets to stars to galaxies, each with their own intricacies, procedures, and problems. At times, we may be interested in only a single measurement of the flux of some object, while at other times we could want to obtain temporal measurements on time scales from seconds or less to years or longer. Some photometric output products, such as differential photometry, require fewer additional steps, whereas to obtain the absolute flux for an object, additional CCD frames of photometric standards are needed. These standard star frames are used to correct for the Earth's atmosphere, color terms, and other possible sources of extinction that may be peculiar to a given observing site or a certain time of year (Pecker, 1970).
We start this chapter with a brief discussion of the basic methods of performing photometry when using digital data from 2-D arrays. It will be assumed here that the CCD images being operated on have already been reduced and calibrated as described in detail in the previous chapter. We will see that photometric measurements require that we accomplish only a few steps to provide output flux values. Additional steps are then required to produce light curves or absolute fluxes.
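As a minimal sketch of one such method, simple aperture photometry sums the counts inside a circular aperture and subtracts a sky level estimated in a surrounding annulus. The function name, frame, and all numbers below are hypothetical illustrations, not taken from the text:

```python
import numpy as np

def aperture_flux(image, x0, y0, r_ap, r_in, r_out):
    """Sum counts in a circular aperture of radius r_ap and subtract
    the median sky level estimated in the annulus r_in <= r <= r_out."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = image[r <= r_ap]                         # pixels in the aperture
    sky = np.median(image[(r >= r_in) & (r <= r_out)])  # per-pixel sky
    return ap.sum() - sky * ap.size               # sky-subtracted counts

# Hypothetical reduced frame: flat sky of 100 counts with a faked star
frame = np.full((64, 64), 100.0)
frame[30:34, 30:34] += 500.0                      # 16 pixels, 8000 extra counts
flux = aperture_flux(frame, 31.5, 31.5, 6.0, 10.0, 15.0)
```

Differential photometry then amounts to repeating this measurement for the target and for comparison stars on the same frame and taking ratios; absolute fluxes require the standard-star frames described above.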
Although imaging and photometry have been and continue to be mainstays of astronomical observations, spectroscopy is the premier method by which we can learn the physics that occurs within or near the object under study. The first astronomical spectra of bright stars were recorded on photographic plates in the late nineteenth century, while the early twentieth century saw the hand-in-hand development of astronomical spectroscopy and atomic physics. Astronomical spectroscopy with photographic plates, or with some method of image enhancement placed in front of a photographic plate, led to numerous discoveries and formed the basis for modern astrophysics. Astronomical spectra have also had a profound influence on the development of quantum mechanics and the physics of extreme environments. The low quantum efficiency and nonlinear response of photographic plates were, however, the ultimate limiting factors on their use.
During the 1970s and early 1980s, astronomy saw the introduction of numerous electronic imaging devices, most of which were applied as detectors for spectroscopic observations. Television-type devices, diode arrays, and various silicon arrays such as Reticons were called into use. They were a step up from plates in a number of respects, one of which was their ability to image not only a spectrum of an object of interest, but, simultaneously, the nearby sky background spectrum as well – a feat not always possible with photographic plates.
The current high level of understanding of CCDs in terms of their manufacture, inherent characteristics, instrumental capabilities, and data analysis techniques makes these devices desirable for use in spacecraft and satellite observatories and at wavelengths other than the optical. Silicon provides at least some response to photons over the large wavelength range from about 1 to 10 000 Å. Figure 7.1 shows this response by presenting the absorption depth of silicon over an expanded wavelength range. Unless aided in some manner, the intrinsic properties of silicon over the UV and EUV spectral range (1000–3000 Å) are such that the QE of the device at these wavelengths is typically only a few percent or less. This low QE arises because, at these very short wavelengths, the absorption depth of silicon is near 30–50 Å, far less than the wavelength of the incident light itself. Thus, the majority of the light (~70%) is reflected, with the remainder passing directly through the CCD unhindered.
Observations at wavelengths shorter than about 3000 Å involve additional complexities not encountered in ground-based optical observations. Access to these short wavelengths can only be gained via space-based telescopes or high-altitude rocket and balloon flights. The latter are of short duration, ranging from only a few hours up to possibly hundreds of days using newly developed high-altitude, ultra-long-duration balloon technologies.
Before we begin our discussion of the physical and intrinsic characteristics of charge-coupled devices (Chapter 3), we want to spend a brief moment looking into how CCDs are manufactured and some of the basic, important properties of their electrical operation.
The method of storage and information retrieval within a CCD depends on the containment and manipulation of electrons (negative charge) and holes (positive charge) produced within the device when it is exposed to light. The photoelectrons produced are stored in the depletion region of a metal insulator semiconductor (MIS) capacitor, and CCD arrays simply consist of many such capacitors placed in close proximity. Voltages, which are static during collection, are manipulated during readout in such a way as to cause the stored charges to flow from one capacitor to another, hence the name of these devices. These charge packets, one for each pixel, are passed through readout electronics that detect and measure each charge in a serial fashion. An estimate of the numerical value of each packet is sent to the next step in the process, which takes the input analog signal and assigns it a digital number to be output and stored in computer memory.
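The shift-and-digitize sequence just described can be caricatured in a few lines. The gain and bias values below are arbitrary illustrations, not properties of any real device:

```python
def read_out(charge_packets, gain_e_per_adu=2.0, bias_adu=100):
    """Toy serial readout: clock each charge packet in turn to the
    output node, then convert the analog charge (electrons) to a
    digital number (ADU) with a fixed gain and bias offset."""
    adu_values = []
    register = list(charge_packets)      # contents of the serial register
    while register:
        packet = register.pop(0)         # shift one packet to the output node
        adu = bias_adu + int(round(packet / gain_e_per_adu))
        adu_values.append(adu)           # digitized value stored in memory
    return adu_values

pixels = [1000, 1500, 40, 0]             # electrons collected per pixel
digital = read_out(pixels)               # -> [600, 850, 120, 100]
```

A real device moves charge in parallel rows into the serial register between conversions, but the one-packet-at-a-time digitization is the essential point.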
Thus, although originally designed as memory storage devices, CCDs have swept the market as replacements for video tubes of all kinds, owing to their many advantages in weight, power consumption, noise characteristics, linearity, spectral response, and more.
Even casual users of CCDs have run across the terms read noise, signal-to-noise ratio, linearity, and many other possibly mysterious sounding bits of CCD jargon. This chapter will discuss the meanings of the terms used to characterize the properties of CCD detectors. Techniques and methods by which the reader can determine some of these properties on their own and why certain CCDs are better or worse for a particular application are discussed in the following chapters. Within the discussions, mention will be made of older types of CCDs. While these are generally not available or used anymore, there is a certain historical perspective to such a presentation and it will likely provide some amusement for the reader along the way.
One item to keep in mind throughout this chapter and in the rest of the book is that all electrons look alike. When a specific amount of charge is collected within a pixel during an integration, one can no longer know the exact source of each electron (e.g., was it due to a stellar photon or is it an electron generated by thermal motions within the CCD itself?). We have to be clever to separate the signal from the noise. There are two notable quotes to cogitate on while reading this text. The first is from an early review article on CCDs by Craig Mackay (1986), who states: “The only uniform CCD is a dead CCD.”
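Because the collected electrons are indistinguishable, the source signal can only be separated from the noise statistically. The sketch below is a simplified version of the signal-to-noise estimate commonly applied to CCD photometry; the input numbers are purely illustrative:

```python
import math

def signal_to_noise(source_e, sky_e_per_pix, dark_e_per_pix,
                    read_noise_e, n_pix):
    """Simplified CCD signal-to-noise estimate: source counts divided by
    the quadrature sum of source shot noise plus the sky, dark-current,
    and read-noise contributions from the n_pix pixels measured."""
    variance = source_e + n_pix * (sky_e_per_pix + dark_e_per_pix
                                   + read_noise_e ** 2)
    return source_e / math.sqrt(variance)

# Illustrative numbers: 10,000 source electrons spread over 25 pixels,
# each with 50 e- of sky, 5 e- of dark current, and 10 e- read noise.
snr = signal_to_noise(10_000, 50, 5, 10, 25)
```

Every term in the variance is counted in electrons precisely because, pixel by pixel, the detector cannot tell which electron came from the star.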
This appendix provides a reading list covering aspects of CCD development, research, and astronomical usage. There are so many articles, books, and journal papers covering the innumerable aspects of CCDs that a book of this size, or any size, can cover only a small fraction of such work. Even the list presented here does not cover all aspects of interest concerning the use of CCDs in astronomy, but it does provide a very good starting point. The amount of information on CCDs has risen sharply over the past ten years and will, no doubt, continue to do so. Thus the student of CCD science must constantly try to keep up with the latest developments, both in astronomy and within the field of opto-electronics, both areas where progress is being made. The internet is a powerful tool to help in this pursuit. Using a good search engine (e.g., Google), type in items such as “deep depletion,” “L3CCD,” or “MIT/LL” and you will get back many items of interest.
Much of the information on CCDs is contained in books devoted to the subject. Numerous SPIE, IEEE, and other conferences publish their proceedings in books as well. Detailed information is available in the scientific literature some of which we reference in this volume. Many refereed articles of interest are not listed here as they are easily searched for via web-based interfaces such as the Astrophysics Data System (ADS).
Seven years ago, Cambridge University Press began a new series of books called Handbooks. I was fortunate enough to be asked to author the one on CCDs. Little did I realize what a wonderful undertaking writing this book would be. I have learned and relearned a number of details about CCDs and had cause to read many scientific and popular papers and articles I would otherwise have overlooked. The greatest benefit, however, has been the many gracious colleagues and students who have provided comments, revisions, suggestions, and support, or simply said thanks. The first edition of the Handbook of CCD Astronomy was written for you, and you have truly made it your own through this volume.
When I was first asked to write a second edition, I have to admit I was skeptical that enough had changed to warrant it. I am happy to say I was completely wrong. Upon going back and reading the original volume, I had no problem seeing its many pages of outdated material. There are, however, some fundamental discussions and properties of CCDs that are timeless, and these remain in the present volume. New areas of CCD development abound and, to highlight a few of them, this second edition is a bit longer and has a few more illustrations. Faster and higher-performance electronics to control and read out a CCD, better analog-to-digital circuitry, and better-manufactured CCDs are some of the additions discussed within.
Silicon. This semiconductor material certainly has large implications for our lives. Its uses are many, including electronic circuitry of all kinds and, of course, charge-coupled devices, while its chemical cousin silicone appears in oil lubricants, implants that change our bodies' outward appearance, and nonstick frying pans.
Charge-coupled devices (CCDs) and their use in astronomy are the topic of this book. We will only briefly discuss the use of CCDs in commercial digital cameras and video cameras, and not their many other industrial and scientific applications. As we will see, there are four main methods of employing CCD imagers in astronomical work: imaging, astrometry, photometry, and spectroscopy. Each of these topics will be discussed in turn. Since the intrinsic physical properties of silicon, and thus CCDs, are most useful at optical wavelengths (about 3000 to 11 000 Å), the majority of our discussion will concern visible-light applications. Additional specialty or lesser-used techniques and CCD applications outside the optical bandwidth will be mentioned only briefly. The newest advances in CCD systems over the past five years lie in the areas of (1) manufacturing standards that provide higher tolerances in the CCD process, leading directly to a reduction in noise output, (2) increased quantum efficiency, especially in the far red spectral regions, (3) new-generation control electronics with faster readout, low-noise performance, and more complex control functions, and (4) new types of scientific-grade CCDs with special properties.
We are all aware of the amazing astronomical images produced with telescopes these days, particularly those displayed as color representations and shown off on websites and in magazines. Those of us who are observers deal with our own amazing images produced during each observing run. Just as spectacular are the photometric, astrometric, and spectroscopic results, which generally receive less fanfare but are often of greater astrophysical interest. What all of these results have in common is the fact that behind every good optical image lies a good charge-coupled device.
Charge-coupled devices, or CCDs as we know them, are involved in many aspects of everyday life. Examples include video cameras for home use and those set up to automatically trap speeders on British highways, hospital X-ray imagers and high-speed oscilloscopes, and digital cameras used as quality control monitors. This book discusses these remarkable semiconductor devices and their many applications in modern day astronomy.
Written as an introduction to CCDs for observers using professional or off-the-shelf CCD cameras as well as a reference guide, this volume is aimed at students, novice users, and all the rest of us who wish to learn more of the details of how a CCD operates. Topics include the various types of CCD; the process of data taking and reduction; photometric, astrometric, and spectroscopic methods; and CCD applications outside of the optical band-pass.
Most computer screens and image displays in use are 8-bit devices. This means that the displays can represent the data projected on them with 2⁸ = 256 different greyscale levels. These greyscale levels can represent numeric values from 0 to 255, and it is common for only about 200 levels to be actually available to the image display for representing data values, with the remaining 50 or so reserved for graphical overlays, annotation, etc. If displaying in color (actually pseudo-color), then one has available about 200 separate colors, each specified by three gun values of 0–255, the source of the famous “16 million possible colors” (256³ ≈ 16.7 million) listed in many computer ads (see below).
On the display, the color black is represented by a value of zero (or in color by a value of zero for each of the three color guns, red (R), green (G), and blue (B)). White has R = G = B = 255, and various grey levels are produced by a combination of R = G = B = N, where N is a value from 0 to 255. Colors are made by having R ≠ G ≠ B or any combination thereof in which all three color guns are not operated at the same intensity. A pure color, say blue, is made with R = G = 0 and B = 255 and so on.
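These conventions can be made concrete in a few lines. The scaling function and its 200-level assumption follow the description above, while the data range used in the example is hypothetical:

```python
def to_display_levels(value, vmin, vmax, n_levels=200):
    """Linearly map a data value onto the ~200 greyscale levels
    typically left free for image data on an 8-bit display."""
    value = min(max(value, vmin), vmax)   # clip to the chosen display range
    return int((value - vmin) / (vmax - vmin) * (n_levels - 1))

def grey(n):
    """A grey level: all three colour guns at the same intensity."""
    return (n, n, n)

black, white = grey(0), grey(255)         # (0, 0, 0) and (255, 255, 255)
pure_blue = (0, 0, 255)                   # R = G = 0, B = 255
level = to_display_levels(5000, vmin=0, vmax=10000)   # mid-range pixel
```

Changing `vmin` and `vmax` is exactly the contrast stretch an image display applies before the data ever reach the colour guns.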
In Section 1.4 we stated that astrometry must be developed within an extragalactic reference frame to microarcsecond accuracies. The objective of the present chapter is to provide the theoretical and practical background of this basic concept.
International Celestial Reference System (ICRS)
A reference system is the underlying theoretical concept for the construction of a reference frame. In an ideal kinematic reference system, it is assumed that the Universe does not rotate. The theoretical background was presented in Section 5.4.1. A reference system requires the identification of a physical system and its characteristics, or parameters, which are determined from observations and used to define the system. In 1991 the International Astronomical Union agreed, in principle, to change to a fundamental reference system based on distant extragalactic radio sources, in place of nearby bright optical stars (IAU, 1992; IAU, 1998; IAU, 2001). The distances of extragalactic radio sources are so large that motions of selected objects, and changes in their source structure, should not contribute apparent temporal positional changes greater than a few microarcseconds. Thus, positions of these objects should be able to define a quasi-inertial reference frame that is purely kinematic. A Working Group was established to determine a catalog of sources to define this frame, which is now called the ICRF.
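The microarcsecond claim can be checked with a rough order-of-magnitude calculation. The 0.1c transverse speed and 1 Gpc distance below are illustrative assumptions, not values from the text:

```python
import math

LIGHT_YEAR_M = 9.461e15        # metres travelled by light in one year
GPC_M = 3.086e25               # one gigaparsec in metres
RAD_TO_MUAS = (180.0 / math.pi) * 3600.0 * 1e6   # radians -> microarcsec

def proper_motion_muas_per_yr(v_over_c, distance_m):
    """Apparent angular drift per year of a source moving transversely
    at speed v = (v/c) * c, seen from distance d: mu = (v * 1 yr) / d."""
    travel_m = v_over_c * LIGHT_YEAR_M
    return (travel_m / distance_m) * RAD_TO_MUAS

# A radio component moving transversely at 0.1c, seen from 1 Gpc:
mu = proper_motion_muas_per_yr(0.1, GPC_M)   # a few microarcsec per year
```

Even this extreme transverse speed yields only a few microarcseconds of apparent motion per year, which is why distant radio sources can anchor a quasi-inertial frame at that accuracy level.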