The extremely fast rolloff of the characteristic function of Gaussian variables provides nearly perfect fulfillment of the quantization theorems under most circumstances, and allows easy approximation of the errors in Sheppard's corrections by the first terms of their series expression. However, for most other distributions, this is not the case.
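The fast rolloff mentioned above is easy to verify numerically. The following sketch (an illustration, not from the text; the choice of a zero-mean Gaussian input and the particular σ values are assumptions) evaluates the Gaussian CF, Φ(u) = exp(−σ²u²/2), at the quantization frequency u = 2π/q:

```python
import numpy as np

# The Gaussian CF, Phi(u) = exp(-sigma^2 * u^2 / 2), evaluated at the
# quantization frequency u = 2*pi/q. Even for sigma = q the value is
# already of order 1e-9, which is why the quantization theorems hold
# almost exactly for Gaussian inputs.
q = 1.0
for sigma in (0.5 * q, 1.0 * q, 2.0 * q):
    cf = np.exp(-sigma**2 * (2 * np.pi / q) ** 2 / 2)
    print(f"sigma = {sigma:.1f}q  Phi(2*pi/q) = {cf:.3e}")
```

Already at σ = q the CF at 2π/q is negligibly small, so the residual terms in the quantization theorems are dominated by their first series terms.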
As an example, let us study the behavior of the residual error of Sheppard's first correction in the case of a sinusoidal quantizer input of amplitude A.
Plots of the error are shown in Fig. G.1.
It can be observed that neither of the functions is smooth; that is, a high-order Fourier series is necessary to represent the residual error of Sheppard's first correction, R1(A, μ), with sufficient accuracy. The maxima and minima of R1(A, μ), obtained for each value of A by varying μ, exhibit oscillatory behavior. For some values of A, for example at A ≈ 1.43q or A ≈ 1.93q (marked by vertical dotted lines in Fig. G.1(b)), the residual error of Sheppard's correction remains quite small for any value of the mean, but the limits of the error grow rapidly even for values of A close to these. A conservative upper bound on the error is therefore as high as the peaks in Fig. G.1(b). One could use the envelope of the error function for this purpose.
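The residual error discussed above can be estimated by direct simulation. The sketch below is illustrative, not the book's code: the mid-tread rounding quantizer, the sample counts, and the scanned amplitudes are assumptions. Since Sheppard's first correction to the mean is zero, the residual is simply R1(A, μ) = E[x′] − E[x] for the sinusoidal input x = μ + A sin θ:

```python
import numpy as np

def sheppard_r1(A, mu, q=1.0, n=200_000):
    """Estimate R1(A, mu) = E[x'] - E[x], the residual error of
    Sheppard's first correction, for a sinusoidal input
    x = mu + A*sin(theta) with theta uniform on [0, 2*pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = mu + A * np.sin(theta)
    xq = q * np.round(x / q)  # mid-tread uniform quantizer, step size q
    return xq.mean() - x.mean()

q = 1.0
# Scan the mean over one quantum for a few amplitudes; the peak
# residual over mu is what Fig. G.1(b) plots as a function of A.
for A in (1.43 * q, 1.70 * q, 1.93 * q):
    mus = np.linspace(0.0, q, 101)
    peak = max(abs(sheppard_r1(A, mu, q)) for mu in mus)
    print(f"A = {A:.2f}q  max|R1| over mu = {peak:.5f}q")
```

Scanning A on a fine grid and plotting the peak residual over μ reproduces the oscillatory envelope described in the text.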
Dither inputs are externally applied disturbances that have been used in control systems and in signal processing systems to alleviate the effects of nonlinearity, hysteresis, static friction, gear backlash, quantization, etc. In many cases, dither inputs have been used to “improve” system behavior without there being a clear idea of the nature of the improvement sought and without any method for designing the dither signal other than empiricism. It is the purpose of this chapter to explain the uses of dither signals in systems containing quantizers. We will employ the mathematical methods developed herein for the design of dither signals and for analysis of their benefits and limitations.
DITHER: ANTI-ALIAS FILTERING OF THE QUANTIZER INPUT CF
When the input signal to a uniform quantizer has statistical properties allowing it to satisfy QT I or QT II, the PQN model can be applied to describe the statistical behavior of the quantizer. This is a linear model, and from the point of view of moments and joint moments, the quantizer acts like a source of additive independent noise. This type of linear behavior would be highly desirable under many circumstances.
When the quantizer input is inadequate for satisfaction of QT II, it is possible to add an independent dither signal to the quantizer input so that the sum of input and dither does satisfy QT II. Then the quantizer exhibits linear behavior and we can say that it is linearized.
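As a minimal numerical sketch of this linearization (the constant input value and the mid-tread quantizer are illustrative assumptions, not taken from the text), consider a constant input, which by itself cannot satisfy QT II. Adding iid uniform dither of width q drives the mean quantization error to zero, as the PQN model predicts for the first moment:

```python
import numpy as np

rng = np.random.default_rng(0)
q = 1.0
x0 = 0.3 * q  # constant input: on its own it violates QT II

def quantize(x, q=1.0):
    """Mid-tread uniform quantizer with step size q."""
    return q * np.round(x / q)

# Without dither the error is deterministic and can be as large as q/2.
err_plain = quantize(x0, q) - x0

# With iid uniform dither on [-q/2, q/2) the quantizer is linearized
# in the mean: E[Q(x0 + d)] = x0.
d = rng.uniform(-q / 2, q / 2, size=1_000_000)
err_dithered = (quantize(x0 + d, q) - x0).mean()

print(f"error without dither:           {err_plain:+.3f}q")
print(f"mean error with uniform dither: {err_dithered:+.5f}q")
```

Uniform dither of width q makes the first moment exact for any constant input; higher moments require dither with more vanishing CF derivatives (e.g. triangular dither), as developed later in the chapter.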
The theory described in this book is very general: it describes well most quantization and roundoff situations. However, assumptions have been made which are necessary for proper application. In this appendix, some of these will be briefly described.
LONG-TIME VS. SHORT-TIME PROPERTIES OF QUANTIZATION
The statistical theory of quantization deals with statistics of signals: PDFs, CFs, and moments. The basic idea is stated at the beginning of Chapter 4: “Instead of devoting attention to the signal being quantized, let its probability density function be considered.”
Having explored the theory extensively, we need to discuss a few basic questions:
- When may the PDF be used?
- What are the consequences of using the PDF in the application of the theory?
- What has to be done if the PDF may not be used, e.g. when the signal is deterministic?
For the introduction of the probability density function, Fig. 3.1(a) shows an ensemble of random time functions. These random time functions are realizations of a stochastic process. In practice, usually just one realization is measured, so averaging is performed over time, which assumes that time averages equal ensemble averages. A process for which this assumption holds is called ergodic (see page 43).
Quantization theory deals with statistical properties of ergodic processes. Fulfillment of QT I ensures that the PDF of x can be determined from the PDF of x′.
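The equality of time and ensemble averages can be illustrated numerically. The sketch below is an assumed example, not from the text: a sinusoid with a random phase uniform on [0, 2π) is a standard ergodic process whose ensemble mean at any fixed time equals the time average of a single realization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ergodic process: a sinusoid with random phase uniform on [0, 2*pi).
# Its ensemble mean at any fixed time is 0, and the time average of a
# single realization converges to the same value.
n_time, n_ens = 100_000, 10_000
t = np.arange(n_time)
phase = rng.uniform(0, 2 * np.pi, size=n_ens)

one_realization = np.sin(0.1 * t + phase[0])
time_avg = one_realization.mean()                 # average over time
ensemble_avg = np.sin(0.1 * t[0] + phase).mean()  # average over realizations at t = 0

print(f"time average of one realization:  {time_avg:+.4f}")
print(f"ensemble average at a fixed time: {ensemble_avg:+.4f}")
```

Both estimates are close to the true mean of zero; for a non-ergodic process (e.g. a random constant) the two averages would differ, and the PDF-based theory would have to be applied with care.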