To use stochastic process models in practical signal processing applications, we need to estimate their parameters from data. In the first part of this chapter we introduce some basic concepts and techniques from estimation theory and then we use them to estimate the mean, variance, autocorrelation sequence (ACRS), and power spectral density (PSD) of a stationary random process model. In the second part, we discuss the design of optimum filters for detection of signals with known shape in the presence of additive noise (matched filters), optimum filters for estimation of signals corrupted by additive noise (Wiener filters), and finite memory linear predictors for signal modeling and spectral estimation applications. We conclude with a discussion of the Karhunen–Loève transform, which is an optimum finite orthogonal transform for representation of random signals.
Study objectives
After studying this chapter you should be able to:
Compute estimates of the mean, variance, and covariance of random variables from a finite number of observations (data) and assess their quality based on the bias and variance of the estimators used.
Estimate the mean, variance, ACRS sequence, and PSD function of a stationary process from a finite data set by properly choosing the estimator parameters to achieve the desired quality in terms of bias–variance trade-offs.
Design FIR matched filters for detection of known signals corrupted by additive random noise, FIR Wiener filters that minimize the mean squared error between the output signal and a desired response, and finite memory linear predictors that minimize the mean squared prediction error.
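The estimation ideas above can be sketched numerically. The following is a minimal Python/NumPy illustration (the book's own examples use Matlab) of the sample mean, the sample variance, and the biased estimator of the autocorrelation sequence; the synthetic Gaussian data, the seed, and the number of lags are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)  # synthetic stationary data
N = len(x)

# Sample mean and (unbiased) sample variance
m_hat = np.mean(x)
v_hat = np.sum((x - m_hat) ** 2) / (N - 1)

# Biased estimator of the autocorrelation sequence r[l], 0 <= l < L.
def acrs_biased(x, L):
    N = len(x)
    return np.array([np.dot(x[: N - l], x[l:]) / N for l in range(L)])

r_hat = acrs_biased(x - m_hat, 20)  # autocovariance of the centered data
```

Dividing by N rather than N − l biases r̂[l] toward zero at large lags but lowers its variance: a simple instance of the bias–variance trade-offs discussed in this chapter.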
The term “filter” is used for LTI systems that alter their input signals in a prescribed way. Frequency-selective filters, the subject of this chapter, are designed to pass a set of desired frequency components from a mixture of desired and undesired components or to shape the spectrum of the input signal in a desired way. In this case, the filter design specifications are given in the frequency domain by a desired frequency response. The filter design problem consists of finding a practically realizable filter whose frequency response best approximates the desired ideal magnitude and phase responses within specified tolerances.
The design of FIR filters requires finding a polynomial frequency response function that best approximates the design specifications; in contrast, the design of IIR filters requires a rational approximating function. Thus, the algorithms used to design FIR filters are different from those used to design IIR filters. In this chapter we concentrate on FIR filter design techniques while in Chapter 11 we discuss IIR filter design techniques. The design of FIR filters is typically performed either directly in the discrete-time domain using the windowing method or in the frequency domain using the frequency sampling method and the optimum Chebyshev approximation method via the Parks–McClellan algorithm.
Study objectives
After studying this chapter you should be able to:
Understand how to set up specifications for design of discrete-time filters.
Understand the conditions required to ensure linear phase in FIR filters and how to use them to design FIR filters by specifying their magnitude response. […]
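As a concrete illustration of the windowing method named above, the sketch below designs a causal linear-phase lowpass FIR filter by tapering a delayed ideal lowpass impulse response. It is a Python/NumPy sketch (the book's own examples use Matlab); the cutoff frequency, the order, and the choice of a Hamming window are arbitrary assumptions for the demonstration.

```python
import numpy as np

def fir_lowpass_window(wc, M):
    """Window-method design of a length-(M+1) linear-phase lowpass FIR filter.

    wc: cutoff frequency in radians/sample; M: filter order (even)."""
    n = np.arange(M + 1)
    # Ideal lowpass impulse response sin(wc(n - M/2)) / (pi(n - M/2)),
    # delayed by M/2 samples so the truncated filter is causal
    hd = (wc / np.pi) * np.sinc((wc / np.pi) * (n - M / 2))
    w = np.hamming(M + 1)          # taper to control sidelobe level
    return hd * w                  # symmetric h[n] => exact linear phase

h = fir_lowpass_window(wc=0.4 * np.pi, M=40)
```

The returned coefficients are symmetric about n = M/2, which is precisely the condition for exact linear phase discussed in this chapter.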
This chapter is primarily concerned with the definition, properties, and applications of the Discrete Fourier Transform (DFT). The DFT provides a unique representation using N coefficients for any sequence of N consecutive samples. The DFT coefficients are related to the DTFS coefficients or to equally spaced samples of the DTFT of the underlying sequences. As a result of these relationships and the existence of efficient algorithms for its computation, the DFT plays a central role in spectral analysis, the implementation of digital filters, and a variety of other signal processing applications.
Study objectives
After studying this chapter you should be able to:
Understand the meaning and basic properties of the DFT and how to use the DFT to compute the DTFS, DTFT, CTFS, and CTFT transforms.
Understand how to obtain the DFT by sampling the DTFT and the implications of this operation on how accurately the DFT approximates the DTFT and other transforms.
Understand the symmetry and operational properties of the DFT and how to use the property of circular convolution for the computation of linear convolution.
Understand how to use the DFT to compute the spectrum of continuous-time signals and how to compensate for the effects of truncating a signal to finite length by choosing a proper window.
Computational Fourier analysis
The basic premise of Fourier analysis is that any signal can be expressed as a linear superposition, that is, a sum or integral of sinusoidal signals.
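A minimal numerical illustration of this premise, written in Python/NumPy (the book's own examples use Matlab): the N-point DFT evaluated directly from its definition agrees with the library FFT, and a sinusoid completing a whole number of cycles in the record concentrates in a single pair of DFT coefficients. The signal and length are arbitrary choices.

```python
import numpy as np

def dft(x):
    """Direct O(N^2) evaluation of the N-point DFT definition
    X[k] = sum_n x[n] exp(-j 2*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix
    return W @ x

x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)   # 3 cycles in N = 16 samples
X = dft(x)                                       # energy at bins k = 3 and 13
```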
As we discussed in Chapter 2, any LTI system can be implemented using three basic computational elements: adders, multipliers, and unit delays. For LTI systems with a rational system function, the relation between the input and output sequences satisfies a linear constant-coefficient difference equation. Such systems are practically realizable because they require a finite number of computational elements. In this chapter, we show that there is a large collection of difference equations corresponding to the same system function. Each set of equations describes the same input-output relation and provides an algorithm or structure for the implementation of the system. Alternative structures for the same system differ in computational complexity, memory, and behavior when we use finite precision arithmetic. In this chapter, we discuss the most widely used discrete-time structures and their implementation using Matlab. These include direct-form, transposed-form, cascade, parallel, frequency sampling, and lattice structures.
Study objectives
After studying this chapter you should be able to:
Develop and analyze practically useful structures for both FIR and IIR systems.
Understand the advantages and disadvantages of different filter structures and convert from one structure to another.
Implement a filter using a particular structure and understand how to simulate and verify the correct operation of that structure in Matlab.
Block diagrams and signal flow graphs
Every practically realizable LTI system can be described by a set of difference equations, which constitute a computational algorithm for its implementation.
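For instance, one of the structures treated in this chapter, direct form II, shares a single delay line between the feedback and feedforward paths. The sketch below is a Python rendering of that structure (the book implements its structures in Matlab); the first-order test system y[n] = 0.5 y[n−1] + x[n] is a hypothetical example.

```python
import numpy as np

def direct_form_ii(b, a, x):
    """Implement y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k] (a[0] = 1)
    using the direct form II structure: one delay line w shared by the
    feedback and feedforward paths."""
    M = max(len(a), len(b))
    b = np.concatenate([b, np.zeros(M - len(b))])
    a = np.concatenate([a, np.zeros(M - len(a))])
    w = np.zeros(M)                        # delay-line state
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w = np.roll(w, 1)                  # shift the delay line
        w[0] = xn - np.dot(a[1:], w[1:])   # feedback path
        y[n] = np.dot(b, w)                # feedforward path
    return y

delta = np.zeros(10)
delta[0] = 1.0
y = direct_form_ii([1.0], [1.0, -0.5], delta)  # impulse response: y[n] = 0.5**n
```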
This chapter is primarily concerned with the conversion of continuous-time signals into discrete-time signals using uniform or periodic sampling. The theory of sampling presented here provides the conditions under which a signal can be sampled and then reconstructed from the sequence of its sample values. It turns out that a properly sampled bandlimited signal can be perfectly reconstructed from its samples. In practice, the numerical value of each sample is expressed by a finite number of bits, a process known as quantization. The error introduced by quantizing the sample values, known as quantization noise, is unavoidable. The major implication of sampling theory is that it makes possible the processing of continuous-time signals using discrete-time signal processing techniques.
Study objectives
After studying this chapter you should be able to:
Determine the spectrum of a discrete-time signal from that of the original continuous-time signal, and understand the conditions that allow perfect reconstruction of a continuous-time signal from its samples.
Understand how to process continuous-time signals by sampling, followed by discrete-time signal processing, and reconstruction of the resulting continuous-time signal.
Understand how practical limitations affect the sampling and reconstruction of continuous-time signals.
Apply the theory of sampling to continuous-time bandpass signals and two-dimensional image signals.
Ideal periodic sampling of continuous-time signals
In the most common form of sampling, known as periodic or uniform sampling, a sequence of samples x[n] is obtained from a continuous-time signal xc(t) by taking values at equally spaced points in time.
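That is, x[n] = xc(nT), where T is the sampling period. A short Python/NumPy sketch (the book's own examples use Matlab) of the central hazard this chapter analyzes: two continuous-time sinusoids whose frequencies differ by the sampling rate produce identical sample sequences (aliasing). The rates and frequencies are arbitrary choices.

```python
import numpy as np

Fs = 100.0                      # sampling rate in Hz, so T = 1/Fs
n = np.arange(50)
t = n / Fs                      # sampling instants t_n = n*T

# x[n] = xc(nT) for a 10 Hz sinusoid and a 110 Hz sinusoid (10 + Fs)
x1 = np.cos(2 * np.pi * 10.0 * t)
x2 = np.cos(2 * np.pi * 110.0 * t)
# Sampled at 100 Hz the two sequences are identical: 110 Hz aliases to 10 Hz
```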
In Chapter 2 we discussed representation and analysis of LTI systems in the time domain using the convolution summation and difference equations. In Chapter 3 we developed a representation and analysis of LTI systems using the z-transform. In this chapter, we use the Fourier representation of signals in terms of complex exponentials and the pole-zero representation of the system function to characterize and analyze the effect of LTI systems on the input signals. The fundamental tool is the frequency response function of a system and the close relationship of its shape to the location of poles and zeros of the system function. Although the emphasis is on discrete-time systems, the last section explains how the same concepts can be used to analyze continuous-time LTI systems.
Study objectives
After studying this chapter you should be able to:
Determine the steady-state response of LTI systems to sinusoidal, complex exponential, periodic, and aperiodic signals using the frequency response function.
Understand the effects of ideal and practical LTI systems upon the input signal in terms of the shape of magnitude, phase, and group-delay responses.
Understand how the locations of poles and zeros of the system function determine the shape of magnitude, phase, and group-delay responses of an LTI system.
Develop and use algorithms for the computation of magnitude, phase, and group-delay responses of LTI systems described by linear constant-coefficient difference equations. […]
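The last objective can be sketched directly: for a system described by difference-equation coefficients b and a, the frequency response is H(e^{jω}) = B(e^{jω})/A(e^{jω}) evaluated on the unit circle. A minimal Python/NumPy version (the book's own examples use Matlab); the first-order lowpass system with its pole at z = 0.9 is a hypothetical example.

```python
import numpy as np

def freq_response(b, a, w):
    """Evaluate H(e^{jw}) = B(e^{jw}) / A(e^{jw}) on a grid of
    frequencies w (radians/sample) for difference-equation
    coefficients b (feedforward) and a (feedback, a[0] = 1)."""
    z = np.exp(1j * np.asarray(w))
    num = sum(bk * z ** (-k) for k, bk in enumerate(b))
    den = sum(ak * z ** (-k) for k, ak in enumerate(a))
    return num / den

# First-order lowpass y[n] = 0.9 y[n-1] + 0.1 x[n]: pole at z = 0.9
w = np.linspace(0, np.pi, 512)
H = freq_response([0.1], [1.0, -0.9], w)
mag = np.abs(H)   # monotonically decreasing from 1 at w = 0
```

The pole near z = 1 produces the large magnitude at low frequencies, illustrating the pole-zero intuition developed in this chapter.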
In this chapter we are concerned with probability models for the mathematical description of random signals. We start with the fundamental concepts of random experiment, random variable, and statistical regularity and we show how they lead into the concepts of probability, probability distributions, and averages, and the development of probabilistic models for random signals. Then, we introduce the concept of stationary random process as a model for random signals, and we explain how to characterize the average behavior of such processes using the autocorrelation sequence (time-domain) and the power spectral density (frequency-domain). Finally, we discuss the effect of LTI systems on the autocorrelation and power spectral density of stationary random processes.
Study objectives
After studying this chapter you should be able to:
Understand the concepts of randomness, random experiment, statistical variability, statistical regularity, random variable, probability distributions, and statistical averages like mean and variance.
Understand the concept of correlation between two random variables, its measurement by quantities like covariance and correlation coefficient, and the meaning of covariance in the context of estimating the value of one random variable using a linear function of the value of another random variable.
Understand the concept of a random process and the characterization of its average behavior by the autocorrelation sequence (time-domain) and power spectral density (frequency-domain), develop an insight into the processing of stationary processes by LTI systems, and be able to compute mean, autocorrelation, and power spectral density of the output sequence from that of the input sequence and the impulse response.
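The last objective can be checked numerically in a simple special case: for zero-mean white noise input with variance σx², the output variance of an FIR system equals σx² Σ h[k]². A Python/NumPy sketch with an arbitrary impulse response and seed (the book's own examples use Matlab):

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.5, 0.3, 0.2])                # impulse response of an FIR system
sigma_x = 1.0
x = rng.normal(0.0, sigma_x, size=200_000)   # white noise input
y = np.convolve(x, h)[: len(x)]              # output of the LTI system

# For white input, output variance = sigma_x**2 * sum(h[k]**2)
var_theory = sigma_x ** 2 * np.sum(h ** 2)
var_est = np.var(y)                          # sample estimate from the output
```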
During the last three decades Digital Signal Processing (DSP) has evolved into a core area of study in electrical and computer engineering. Today, DSP provides the methodology and algorithms for the solution of a continuously growing number of practical problems in scientific, engineering, and multimedia applications.
Despite the existence of a number of excellent textbooks focusing either on the theory of DSP or on the application of DSP algorithms using interactive software packages, we feel there is a strong need for a book bridging the two approaches by combining the best of both worlds. This was our motivation for writing this book, that is, to help students and practicing engineers understand the fundamental mathematical principles underlying the operation of a DSP method, appreciate its practical limitations, and grasp, in sufficient detail, its practical implementation.
Objectives
The principal objective of this book is to provide a systematic introduction to the basic concepts and methodologies for digital signal processing, based whenever possible on fundamental principles. A secondary objective is to develop a foundation that can be used by students, researchers, and practicing engineers as the basis for further study and research in this field. To achieve these objectives, we have focused on material that is fundamental and where the scope of application is not limited to the solution of specialized problems, that is, material that has a broad scope of application.
In this chapter we introduce the concept of Fourier or frequency-domain representation of signals. The basic idea is that any signal can be described as a sum or integral of sinusoidal signals. However, the exact form of the representation depends on whether the signal is continuous-time or discrete-time and whether it is periodic or aperiodic. The underlying mathematical framework is provided by the theory of Fourier series, introduced by Jean Baptiste Joseph Fourier (1768–1830).
The major justification for the frequency domain approach is that LTI systems have a simple behavior with sinusoidal inputs: the response of an LTI system to a sinusoid is a sinusoid with the same frequency but different amplitude and phase.
Study objectives
After studying this chapter you should be able to:
Understand the fundamental differences between continuous-time and discrete-time sinusoidal signals.
Evaluate analytically the Fourier representation of continuous-time signals using the Fourier series (periodic signals) and the Fourier transform (aperiodic signals).
Evaluate analytically and numerically the Fourier representation of discrete-time signals using the Fourier series (periodic signals) and the Fourier transform (aperiodic signals).
Choose the proper mathematical formulas to determine the Fourier representation of any signal based on whether the signal is continuous-time or discrete-time and whether it is periodic or aperiodic.
Understand the use and implications of the various properties of the discrete-time Fourier transform.
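The sinusoidal behavior described above can be verified numerically: after a transient of length equal to the filter memory, the output of an LTI system driven by cos(ω₀n) is exactly |H(e^{jω₀})| cos(ω₀n + ∠H(e^{jω₀})). A Python/NumPy sketch with an arbitrary three-tap filter and frequency (the book's own examples use Matlab):

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])            # a simple FIR (weighted average) filter
w0 = 0.3 * np.pi                           # input frequency in rad/sample
n = np.arange(400)
x = np.cos(w0 * n)
y = np.convolve(x, h)[: len(n)]            # filter output

# Frequency response at w0: H(e^{j w0}) = sum_k h[k] e^{-j w0 k}
H = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))
y_pred = np.abs(H) * np.cos(w0 * n + np.angle(H))   # predicted steady state
# y matches y_pred exactly once the length-2 transient has passed
```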
Signal processing is a discipline concerned with the acquisition, representation, manipulation, and transformation of signals required in a wide range of practical applications. In this chapter, we introduce the concepts of signals, systems, and signal processing. We first discuss different classes of signals, based on their mathematical and physical representations. Then, we focus on continuous-time and discrete-time signals and the systems required for their processing: continuous-time systems, discrete-time systems, and interface systems between these classes of signal. We continue with a discussion of analog signal processing, digital signal processing, and a brief outline of the book.
Study objectives
After studying this chapter you should be able to:
Understand the concept of signal and explain the differences between continuous-time, discrete-time, and digital signals.
Explain how the physical representation of signals influences their mathematical representation and vice versa.
Explain the concepts of continuous-time and discrete-time systems and justify the need for interface systems between the analog and digital worlds.
Recognize the differences between analog and digital signal processing and explain the key advantages of digital over analog processing.
Signals
For our purposes a signal is defined as any physical quantity that varies as a function of time, space, or any other variable or variables. Signals convey information in their patterns of variation. The manipulation of this information involves the acquisition, storage, transmission, and transformation of signals.
There are many signals that could be used as examples in this section. However, we shall restrict our attention to a few signals that illustrate several important concepts and will be useful in later chapters.
A key feature of the discrete-time systems discussed so far is that the signals at the input, output, and every internal node have the same sampling rate. However, there are many practical applications that either require or can be implemented more efficiently by processing signals at different sampling rates. Discrete-time systems with different sampling rates at various parts of the system are called multirate systems. The practical implementation of multirate systems requires changing the sampling rate of a signal using discrete-time operations, that is, without reconstructing and resampling a continuous-time signal. The fundamental operations for changing the sampling rate are decimation and interpolation. The subject of this chapter is the analysis, design, and efficient implementation of decimation and interpolation systems, and their application to two important areas of multirate signal processing: sampling rate conversion and multirate filter banks.
Study objectives
After studying this chapter you should be able to:
Understand the operations of decimation, interpolation, and arbitrary sampling rate change in the time and frequency domains.
Understand the efficient implementation of discrete-time systems for sampling rate conversion using polyphase structures.
Design Nyquist filters, a special class of filters widely used for the efficient implementation of multirate filters and filter banks.
Understand the operation, properties, and design of two-channel filter banks with perfect reconstruction analysis and synthesis capabilities.
Sampling rate conversion
The need for sampling rate conversion arises in many practical applications, including digital audio, communication systems, image processing, and high-definition television.
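The two fundamental discrete-time operations involved, interpolation and decimation, can be sketched in a few lines of Python/NumPy (the book's own examples use Matlab). This toy version omits the lowpass filters that a real system needs, and the comments say where they would go; the signal and factors are arbitrary.

```python
import numpy as np

def upsample(x, L):
    """Insert L-1 zeros between samples (rate increase by L);
    an interpolation lowpass filter would follow in practice."""
    y = np.zeros(L * len(x))
    y[::L] = x
    return y

def downsample(x, M):
    """Keep every Mth sample (rate decrease by M);
    an anti-aliasing lowpass filter would precede this in practice."""
    return x[::M]

x = np.arange(6, dtype=float)
u = upsample(x, 2)       # [0, 0, 1, 0, 2, 0, ...]
d = downsample(u, 2)     # recovers x: downsampling undoes the zero insertion
```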
In this chapter we discuss the basic concepts and the mathematical tools that form the basis for the representation and analysis of discrete-time signals and systems. We start by showing how to generate, manipulate, plot, and analyze basic signals and systems using Matlab. Then we discuss the key properties of causality, stability, linearity, and time-invariance, which are possessed by the majority of systems considered in this book. We continue with the mathematical representation, properties, and implementation of linear time-invariant systems. The principal goal is to understand the interaction between signals and systems to the extent that we can adequately predict the effect of a system upon the input signal. This is extremely difficult, if not impossible, for arbitrary systems. Thus, we focus on linear time-invariant systems because they are amenable to a tractable mathematical analysis and have important signal processing applications.
Study objectives
After studying this chapter you should be able to:
Describe discrete-time signals mathematically and generate, manipulate, and plot discrete-time signals using Matlab.
Check whether a discrete-time system is linear, time-invariant, causal, and stable; show that the input-output relationship of any linear time-invariant system can be expressed in terms of the convolution sum formula.
Determine analytically the convolution for sequences defined by simple formulas, write computer programs for the numerical computation of convolution, and understand the differences between stream and block processing.
Determine numerically the response of discrete-time systems described by linear constant-coefficient difference equations.
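The convolution sum named in the objectives, y[n] = Σₖ x[k] h[n−k], can be implemented directly and checked against a library routine. A Python/NumPy sketch in the shift-and-add (block processing) form, with arbitrary test sequences (the book's own examples use Matlab):

```python
import numpy as np

def conv(x, h):
    """Direct evaluation of the convolution sum
    y[n] = sum_k x[k] h[n-k] for finite-length x and h."""
    N = len(x) + len(h) - 1
    y = np.zeros(N)
    h = np.asarray(h, dtype=float)
    for k, xk in enumerate(x):
        y[k : k + len(h)] += xk * h   # each input sample adds a shifted,
    return y                          # scaled copy of h to the output

x = [1.0, 2.0, 3.0]
h = [1.0, 1.0]
y = conv(x, h)   # running sum of adjacent samples: [1, 3, 5, 3]
```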
Scientists do mensurative or manipulative experiments to test hypotheses. The result of an experiment will differ from the expected outcome under the null hypothesis because of two things: (a) chance and (b) any effect of the experimental condition or treatment. This concept was illustrated with the chi-square test for nominal scale data in Chapter 6.
Although life scientists work with nominal scale variables, most of the data they collect are measured on a ratio, interval or ordinal scale and are often summarised by calculating a statistic such as the mean (which is also called the average: see Section 3.5). For example, you might have the mean blood pressure of a sample of five astronauts who had spent the previous six months in space and need to know if it differs significantly from the mean blood pressure of the population on Earth. An agricultural scientist might need to know if the mean weight of tomatoes differs significantly between two or more fertiliser treatments. If you knew the range of values within which 95% of the means of samples taken from a particular population were likely to occur, then a sample mean within this range would be considered non-significant and one outside this range would be considered significant. This chapter explains how a common property of many variables measured on a ratio, interval or ordinal scale can be used for significance testing.
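The decision rule described above can be written out as a short Python sketch (not part of the text's own presentation). Assuming a population with known mean and standard deviation, 95% of the means of samples of size n fall within 1.96 standard errors of the population mean; the blood-pressure figures below are hypothetical.

```python
import math

# Hypothetical population: mean blood pressure mu = 120, sd sigma = 15 (mmHg)
mu, sigma, n = 120.0, 15.0, 5

# 95% of means of samples of size n lie within mu +/- 1.96 * sigma / sqrt(n)
sem = sigma / math.sqrt(n)          # standard error of the mean
lower, upper = mu - 1.96 * sem, mu + 1.96 * sem

def significant(sample_mean):
    """True if the sample mean falls outside the central 95% range."""
    return sample_mean < lower or sample_mean > upper

# A sample mean of 135 lies outside the range; 125 lies inside it
```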
Life scientists often collect data that can be assigned to two or more discrete and mutually exclusive categories. For example, a sample of 20 humans can be partitioned into two categories of ‘right-handed’ or ‘left-handed’ (because even people who claim to be ambidextrous still perform a greater proportion of actions with one hand and can be classified as having a dominant right or left hand). These two categories are discrete, because there is no intermediate state, and mutually exclusive, because a person cannot be assigned to both. They also make up the entire set of possible outcomes within the sample and are therefore contingent upon each other, because for a fixed sample size a decrease in the number in one category must be accompanied by an increase in the number in the other and vice versa. These are nominal scale data (Chapter 3). The questions researchers ask about these data are the sort asked about any sample(s) from a population.
First, you may need to know the probability a sample has been taken from a population having a known or expected proportion within each category. For example, the proportion of left-handed people in the world is close to 0.1 (10%), which can be considered the proportion in the population because it is from a sample of several million people. A biomedical scientist, who suspected the proportion of left- and right-handed people showed some variation among occupations, sampled 20 statisticians and found that four were left-handed and 16 right-handed. The question is whether the proportions in the sample were significantly different from the expected proportions of 0.1 and 0.9 respectively. The difference between the population and the sample might be solely due to chance or also reflect career choice.
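The handedness example can be worked through numerically with the chi-square goodness-of-fit statistic from Chapter 6, χ² = Σ (O − E)²/E. A short Python sketch (not part of the text's own presentation), using the fact that for one degree of freedom P(X > x) = erfc(√(x/2)):

```python
import math

observed = [4, 16]                  # left- and right-handed in a sample of 20
expected = [0.1 * 20, 0.9 * 20]     # expected counts: 2 and 18

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2.0))
# chi2 ≈ 2.22, p ≈ 0.14 > 0.05: no significant departure from 0.1 / 0.9
```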
This chapter explains how simple linear regression analysis describes the functional (linear) relationship between a dependent and an independent variable, followed by an introduction to some more complex regression analyses which include fitting curves to non-linear relationships and the use of multiple linear regression to simultaneously model the relationship between a dependent variable and two or more independent ones. If you are using this book for an introductory course, you may only need to study the sections on simple linear regression, but the material in the latter part will be a very useful bridge to more advanced courses and texts.
The uses of correlation and regression were contrasted in Chapter 16. Correlation examines if two variables are related. Regression describes the functional relationship between a dependent variable (which is often called the response variable) and an independent variable (which is often called the predictor variable).
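The least-squares estimates of the intercept a and slope b in y = a + bx follow the standard formulas b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and a = ȳ − b x̄. A short Python/NumPy sketch (not part of the text's own presentation), applied to hypothetical data with a known linear trend:

```python
import numpy as np

def simple_linear_regression(x, y):
    """Least-squares estimates of intercept a and slope b in y = a + b*x:
    b = sum((x - xbar)(y - ybar)) / sum((x - xbar)**2),  a = ybar - b*xbar."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    a = ybar - b * xbar
    return a, b

# Hypothetical data lying exactly on the line y = 1 + 2x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x
a, b = simple_linear_regression(x, y)    # recovers a = 1, b = 2
```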