We shall study waveforms in one dimension in this chapter and waveforms in two dimensions in the next chapter. We shall call these waveforms one-dimensional signals and two-dimensional signals, respectively.
A waveform that is used by a surveillance system to probe the environment is usually a function of a single variable, time. The collected sensor data will be a set of one or more such one-dimensional waveforms. However, the environment is usually a two-dimensional or three-dimensional spatial region on which a two-dimensional or three-dimensional signal of interest is defined. Thus the computational task of image formation frequently consists of estimating an unknown two-dimensional or three-dimensional function, given a set of one-dimensional signals as the measured data.
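To make this concrete, the relationship between the scene and the data is often idealized as linear. Schematically, and with purely illustrative notation (the kernels $h_k$ and the symbols below are assumptions, not definitions drawn from this text), the $k$th recorded waveform might be modeled as
$$g_k(t) = \iint h_k(t;\, x, y)\, f(x, y)\, dx\, dy, \qquad k = 1, \ldots, K,$$
and image formation then amounts to estimating the unknown scene $f(x, y)$ from the collection of measured waveforms $g_k(t)$.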
Every waveform of finite energy is associated with another function, known as its Fourier transform, that describes a decomposition of the waveform into an infinite number of sinusoids. The Fourier transform constitutes an alternative representation of the function in the frequency domain. The frequency-domain representation is often much easier to work with than the original time-domain representation of the waveform. The Fourier transform is so pervasive in our studies that, in time, we shall scarcely be able to decide whether the original waveform or its Fourier transform is the more fundamental concept. For this reason, it is important to develop good intuition regarding the Fourier transform.
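For reference, one common convention writes the Fourier transform pair of a finite-energy waveform $s(t)$ as
$$S(f) = \int_{-\infty}^{\infty} s(t)\, e^{-j 2\pi f t}\, dt, \qquad s(t) = \int_{-\infty}^{\infty} S(f)\, e^{j 2\pi f t}\, df,$$
where $f$ denotes frequency; other sign and normalization conventions are also in use.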
The earliest imaging systems were optical imaging systems, and optical systems are still the most common imaging systems. Optical imaging systems that employ the simple lens are widespread; they are found both in biological organisms and in man-made devices. Much of optics, including the properties of the ideal lens, can be understood in the language of signal processing, in terms of pointspread functions, convolutions, and Fourier transforms. More generally, we shall describe the propagation and diffraction of waves in terms of a two-dimensional pointspread function. In this setting, the Huygens–Fresnel principle of optics will be presented simply as a special case of the convolution theorem of the two-dimensional Fourier transform.
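As a sketch of how this language is used (again with illustrative notation rather than the book's own definitions), a linear space-invariant imaging or propagation step maps an input field $s(x, y)$ to an output field $r(x, y)$ by a two-dimensional convolution with a pointspread function $h(x, y)$:
$$r(x, y) = \iint h(x - \xi,\, y - \eta)\, s(\xi, \eta)\, d\xi\, d\eta,$$
and, by the two-dimensional convolution theorem, the corresponding Fourier transforms satisfy $R(f_x, f_y) = H(f_x, f_y)\, S(f_x, f_y)$. Diffraction over a given distance can then be described by a suitable choice of $h(x, y)$.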
In principle, the diffraction of electromagnetic waves should be explained directly from Maxwell's equations, which give a complete description of electromagnetic fields. However, starting from first principles can lead to mathematical difficulties because of concerns about how to model a given problem, or how to specify a consistent and accurate set of boundary conditions. It may be difficult to formulate the boundary conditions at the level of detail needed to apply Maxwell's equations, while the weaker conditions needed for diffraction theory may be readily apparent. This is why we formulate the theories of diffraction as distinct from, but subservient to, electromagnetic theory.
A radar processor consists of a preprocessor, a detection and estimation function, and a postprocessor. On entering the receiver, a received radar signal first encounters the preprocessor, which often can be viewed as the computation of the sample cross-ambiguity function, though perhaps in a crudely approximated form. A search radar is one whose sample cross-ambiguity function typically consists of isolated peaks that are examined to detect objects in the environment and to estimate parameters associated with those objects.
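One common form of the sample cross-ambiguity function, given a transmitted waveform $s(t)$ and a received signal $v(t)$, is
$$\chi(\tau, \nu) = \int_{-\infty}^{\infty} v(t)\, s^*(t - \tau)\, e^{-j 2\pi \nu t}\, dt,$$
evaluated over a grid of delay values $\tau$ and doppler values $\nu$; the exact definition and normalization vary from author to author. A strong reflector then appears as a peak of $|\chi(\tau, \nu)|$ near the delay and doppler that it actually imposes on the echo.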
A reflecting object may be made up of many individual reflecting elements, such as corners, edges, and so on. When the resolution of the radar is coarse compared with the size of the individual reflecting elements, then the single reflecting object is regarded as a point and appears as a single peak in the sample cross-ambiguity function. The search radar detects that peak, and the delay and doppler coordinates of the peak form an estimate of the delay and doppler coordinates of the reflector considered as a single object.
When the resolution of the radar is fine compared with the size of an individual reflector, there will be structure in the processed image. Then the search radar begins to take on the character of an imaging radar.
A propagating medium may be teeming with a multitude of weak signals even when it appears superficially to be empty. For example, an acoustic medium such as a lake may appear quite still, and yet it may contain numerous faint pressure waves originating in various submerged objects and reflecting off other submerged objects. A passive sonar system can intercept these waves and extract useful information from the raw received data. Indeed, these invisible pressure waves can be used in principle to form images of submerged objects. Likewise, a seismographic sensor or an array of such sensors on the surface of the earth can measure tiny vibrations of the earth's surface and deduce the location of distant earthquakes and other geophysical disruptions or can form images of geological structures. Even the electromagnetic environment in which we are immersed contains immense quantities of information. For example, some of these electromagnetic signals can be intercepted by suitable large apertures and formed into detailed images of far distant galaxies. We need provide no illumination, nor can we provide illumination in such an application. We need only gather the data with appropriate passive sensors and process the sensed data into the images that allow us to observe these galaxies.