Queueing theory describes basic phenomena such as the waiting time, the throughput, the losses, the number of queueing items, etc. in queueing systems. Following Kleinrock (1975), any system in which arrivals place demands upon a finite-capacity resource can be broadly termed a queueing system.
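To make this concrete, the waiting-time behaviour of the simplest such system, the M/M/1 queue, can be sketched in a few lines of Python via Lindley's recursion; the rates `lam` and `mu` and the sample size below are arbitrary illustrative choices, not values from the text.

```python
import random

def mm1_mean_wait(lam, mu, n=200_000, seed=1):
    """Estimate the mean waiting time in an M/M/1 queue via
    Lindley's recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1}),
    where S_k is a service time and A_{k+1} an interarrival time."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = rng.expovariate(mu)   # exponential service time
        a = rng.expovariate(lam)  # exponential interarrival time
        w = max(0.0, w + s - a)
    return total / n

lam, mu = 0.8, 1.0             # arrival and service rates (rho = 0.8)
print(mm1_mean_wait(lam, mu))  # theory: rho/(mu - lam) = 4.0
```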
Queueing theory is a relatively new branch of applied mathematics that is generally considered to have been initiated by A. K. Erlang in 1918 with his paper on the design of automatic telephone exchanges, in which the famous Erlang blocking probability, the Erlang B-formula (14.17), was derived (Brockmeyer et al., 1948, p. 139). It was only after the Second World War, however, that queueing theory received a major boost, mainly through the introduction of computers and the digitalization of the telecommunications infrastructure. For engineers, the two volumes by Kleinrock (1975, 1976) are perhaps the best known, while in applied mathematics, apart from the penetrating influence of Feller (1970, 1971), The Single Server Queue of Cohen (1969) is regarded as a landmark. Since Cohen's book, which incorporates most of the important work before 1969, a wealth of books and excellent papers have appeared, an evolution that is still continuing today.
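As an illustration of Erlang's result, the sketch below computes the Erlang B blocking probability with the standard numerically stable recursion rather than the factorial form; the trunk count and offered load are arbitrary example values.

```python
def erlang_b(m, rho):
    """Erlang B blocking probability for m servers and offered load
    rho (in Erlang), via the stable recursion
    B(0) = 1,  B(k) = rho*B(k-1) / (k + rho*B(k-1))."""
    b = 1.0
    for k in range(1, m + 1):
        b = rho * b / (k + rho * b)
    return b

# Probability that a call is blocked with 10 trunks and 7 Erlang of load:
print(erlang_b(10, 7.0))
```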
A queueing system
Examples of queueing abound in daily life: queueing at a ticket window in the railway station or post office, at the cash points in the supermarket, or in the waiting room of an airport, station or hospital. In telecommunications, the packets arriving at the input port of a router or switch are buffered in the output queue before transmission to the next hop towards the destination.
Performance analysis belongs to the domain of applied mathematics. The major domain of application in this book concerns telecommunications systems and networks. We will mainly use stochastic analysis and probability theory to address problems in the performance evaluation of telecommunications systems and networks. The first chapter will provide a motivation and a statement of several problems.
This book aims to present methods rigorously, hence mathematically, with minimal resort to intuition. It is my belief that intuition is often gained after the result is known and rarely before the problem is solved, unless the problem is simple. Techniques and terminology of axiomatic probability (such as definitions of probability spaces, filtration, measures, etc.) have been omitted and a more direct, less abstract approach has been adopted. In addition, most of the important formulas are interpreted in the sense of “What does this mathematical expression teach me?” This last step justifies the word “applied”, since most mathematical treatises refrain from such interpretation, as it carries the risk of being imprecise or incomplete.
The field of stochastic processes is much too large to be covered in a single book, and only a selection of topics could be included. Most of these topics are considered classical. Perhaps the largest omission is a treatment of Brownian processes and their many applications. A weak excuse for this omission (besides the considerable mathematical complexity) is that Brownian theory applies more to physics (analogue fields) than to system theory (discrete components).
In this chapter, the probability density function of the number of hops to the nearest member of the anycast group consisting of m members (e.g. servers) is analyzed. The results are applied to compute a performance measure η of the efficiency of anycast over unicast and to the server placement problem. The server placement problem asks for the number of (replicated) servers m needed such that any user in the network is no more than j hops away from a server of the anycast group with a certain prescribed probability. As in Chapter 17 on multicast, two types of shortest path trees are investigated: the regular k-ary tree and the irregular uniform recursive tree treated in Chapter 16. Since these two extreme cases of trees indicate that the performance measure η ≈ 1 − a log m, where the real number a depends on the details of the tree, it is believed that the same logarithmic law applies to trees in real networks (such as the Internet). An order calculus on exponentially growing trees supplies further evidence for the conjecture that η ≈ 1 − a log m for small m.
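As a rough numerical companion to this analysis, the Monte Carlo sketch below estimates the mean number of hops from a random user to the nearest of m anycast members on a uniform recursive tree. Reading η as the ratio of the mean anycast hopcount to the mean unicast hopcount (so that η(1) = 1), and all sizes, trial counts and seeds, are assumptions made here for illustration, not the chapter's exact setup.

```python
import random
from collections import deque

def urt_edges(n, rng):
    """Grow a uniform recursive tree: node i attaches to a
    uniformly chosen earlier node."""
    adj = [[] for _ in range(n)]
    for i in range(1, n):
        j = rng.randrange(i)
        adj[i].append(j)
        adj[j].append(i)
    return adj

def hops_to_nearest(adj, source, members):
    """BFS from the source until the first anycast member is reached."""
    seen = {source}
    q = deque([(source, 0)])
    while q:
        v, d = q.popleft()
        if v in members:
            return d
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                q.append((w, d + 1))

def mean_hops(n, m, trials=1000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        adj = urt_edges(n, rng)
        nodes = rng.sample(range(n), m + 1)
        total += hops_to_nearest(adj, nodes[0], set(nodes[1:]))
    return total / trials

n = 500
h1 = mean_hops(n, 1)
for m in (1, 2, 4, 8, 16):
    # expect a ratio of roughly 1 - a*log(m) for small m
    print(m, mean_hops(n, m, seed=m) / h1)
```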
Introduction
IPv6 possesses a new address type, anycast, that is not supported in IPv4. The anycast address is syntactically identical to a unicast address. However, when a set of interfaces is specified by the same unicast address, that unicast address is called an anycast address. The advantage of anycast is that a group of interfaces at different locations is treated as a single address. For example, the information on servers is often duplicated over several secondary servers at different locations for reasons of robustness and accessibility.
In this chapter, we consider block codes with a certain structure, which are defined over alphabets that are fields. Specifically, these codes, which we call linear codes, form linear spaces over their alphabets. We associate two objects with these codes: a generator matrix and a parity-check matrix. The first matrix is used as a compact representation of the code and also as a means for efficient encoding. The parity-check matrix will be used as a tool for analyzing the code (e.g., for computing its minimum distance) and will also be part of the general framework that we develop for the decoding of linear codes.
As examples of linear codes, we will mention the repetition code, the parity code, and the Hamming code with its extensions. Owing to their structure, linear codes are by far the predominant block codes in practical usage, and virtually all codes that will be considered in subsequent chapters are linear.
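As a concrete preview, the sketch below builds one common systematic generator matrix and parity-check matrix for the [7, 4] Hamming code and verifies that every encoded word has a zero syndrome; this particular choice of matrices is an assumption made for illustration, not necessarily the form used later in the text.

```python
import numpy as np

# Systematic generator and parity-check matrices of the [7,4] Hamming code.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # G = [ I | P ]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # H = [ P^T | I ]

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2                   # encoding: c = m G over GF(2)
assert not (codeword @ H.T % 2).any()    # every codeword satisfies H c^T = 0
print(codeword)
```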
Definition
Denote by GF(q) a finite (Galois) field of size q. For example, if q is a prime, the field GF(q) coincides with the ring of integer residues modulo q, also denoted by ℤq. We will see more constructions of finite fields in Chapter 3.
An (n, M, d) code C over a field F = GF(q) is called linear if C is a linear subspace of F^n over F; namely, for every two codewords c_1, c_2 ∈ C and two scalars a_1, a_2 ∈ F we have a_1c_1 + a_2c_2 ∈ C.
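The definition can be verified mechanically for a small code. The sketch below checks closure for the binary [3, 2] parity code; over GF(2) the only scalars are 0 and 1, so closure under addition suffices.

```python
from itertools import product

# The binary [3,2] parity code: all words of even weight.
C = {c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0}

# Check a1*c1 + a2*c2 in C for all codeword pairs (addition over GF(2)).
closed = all(tuple((x + y) % 2 for x, y in zip(c1, c2)) in C
             for c1, c2 in product(C, repeat=2))
print(closed)  # True: the parity code is linear
```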
For most of this chapter, we deviate from our study of codes to become acquainted with the algebraic concept of finite fields. These objects will serve as our primary tool for constructing codes in upcoming chapters. As a motivating example, we present at the end of this chapter a construction of a double-error-correcting binary code, whose description and analysis make use of finite fields. This construction will turn out to be a special case of a more general family of codes, to be discussed in Section 5.5.
Among the properties of finite fields that we cover in this chapter, we show that the multiplicative group of a finite field is cyclic; this property, in turn, suggests a method for implementing the arithmetic operations in finite fields of moderate sizes through look-up tables, akin to logarithm tables. We also prove that the size of any finite field must be a power of a prime and that this necessary condition is also sufficient; that is, every power of a prime is the size of some finite field. The practical significance of the latter property is manifested particularly through the special case of the prime 2, since in most coding applications, the data is subdivided into symbols (e.g., bytes) that belong to alphabets whose sizes are powers of 2.
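The look-up-table idea can be sketched in a few lines; here for the prime field GF(7) with generator 3, both arbitrary illustrative choices.

```python
p, g = 7, 3   # GF(7); 3 generates its multiplicative group

# Build exp/log tables from the powers of the generator.
exp_table = [pow(g, i, p) for i in range(p - 1)]
log_table = {v: i for i, v in enumerate(exp_table)}

def mul(a, b):
    """Multiply field elements via table look-ups: multiplication
    of nonzero elements becomes addition of their logarithms."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % (p - 1)]

assert all(mul(a, b) == a * b % p for a in range(p) for b in range(p))
print(mul(4, 5))  # 20 mod 7 = 6
```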
Prime fields
For a prime p, we let GF(p) (Galois field of size p) denote the ring of integer residues modulo p (this ring is also denoted by ℤ_p).
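A minimal sketch of arithmetic in GF(p) follows, with the multiplicative inverse computed via Fermat's little theorem; p = 11 is an arbitrary choice.

```python
def gf_inv(a, p):
    """Multiplicative inverse in GF(p) via Fermat's little theorem:
    a^(p-1) = 1 for a != 0, hence a^(p-2) = a^{-1}."""
    if a % p == 0:
        raise ZeroDivisionError("0 has no inverse in GF(p)")
    return pow(a, p - 2, p)

p = 11
a, b = 7, 5
# Addition and multiplication are just integer arithmetic modulo p.
print((a + b) % p, (a * b) % p, gf_inv(a, p))  # 1, 2, 8 (7*8 = 56 = 1 mod 11)
```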
In Chapter 1, we introduced the concept of a block code with a certain application in mind: the codewords in the code serve as the set of images of the channel encoder. The encoder maps a message into a codeword which, in turn, is transmitted through the channel, and the receiver then decodes that message (possibly incorrectly) from the word that is read at the output of the channel. In this model, the encoding of a message is independent of any previous or future transmissions—and so is the decoding.
In this chapter, we consider a more general coding model, where the encoding and the decoding are context-dependent. The encoder may now be in one of finitely many states, which contain information about the history of the transmission. Such a finite-state encoder still maps messages to codewords, yet the mapping depends on the state which the encoder is currently in, and that state is updated during each message transmission. Finite-state encoders will be specified through directed graphs, where the vertices stand for the states and the edges define the allowed transitions between states. The mapping from messages to codewords will be determined by the edge names and by labels that we assign to the edges.
The chapter is organized as follows. We first review several concepts from the theory of directed graphs. We then introduce the notion of trellis codes, which can be viewed as the state-dependent counterpart of block codes: the elements of a trellis code form the set of images of a finite-state encoder.
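To fix ideas, the toy sketch below represents a finite-state encoder as a labelled directed graph with two states; the states, transitions and labels are invented for illustration and do not correspond to a code studied in this chapter.

```python
# A toy two-state encoder: vertices are states, and each edge carries an
# (input message, output codeword) label, as in a directed graph.
encoder = {
    "A": {0: ("A", "00"), 1: ("B", "11")},
    "B": {0: ("A", "10"), 1: ("B", "01")},
}

def encode(messages, state="A"):
    """Map a message sequence to codewords; the mapping depends on the
    current state, which is updated at every message transmission."""
    out = []
    for m in messages:
        state, word = encoder[state][m]
        out.append(word)
    return out, state

codewords, final_state = encode([1, 0, 1, 1])
print(codewords, final_state)  # ['11', '10', '11', '01'] B
```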
In this chapter, we introduce the model of a communication system, as originally proposed by Claude E. Shannon in 1948. We will then focus on the channel portion of the system and define the concept of a probabilistic channel, along with models of an encoder and a decoder for the channel. As our primary example of a probabilistic channel—here, as well as in subsequent chapters—we will introduce the memoryless q-ary symmetric channel, with the binary case as the prevailing instance used in many practical applications. For q = 2 (the binary case), we quote two key results in information theory. The first result is a coding theorem, which states that information through the channel can be transmitted with an arbitrarily small probability of decoding error, as long as the transmission rate is below a quantity referred to as the capacity of the channel. The second result is a converse coding theorem, which states that operating at rates above the capacity necessarily implies unreliable transmission.
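For the binary case, the sketch below simulates the memoryless binary symmetric channel and evaluates its capacity C = 1 − h(p), where h is the binary entropy function; the crossover probability used is an arbitrary example value.

```python
import math
import random

def bsc(bits, p, rng):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def capacity(p):
    """Capacity of the BSC: C = 1 - h(p), h the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1 - h

rng = random.Random(0)
print(bsc([0, 1, 1, 0, 1], p=0.1, rng=rng))
print(capacity(0.1))  # about 0.531 bits per channel use
```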
In the remaining part of the chapter, we shift to a combinatorial setting and characterize error events that can occur in channels such as the q-ary symmetric channel, and can always be corrected by suitably selected encoders and decoders. We exhibit the trade-off between error correction and error detection: while an error-detecting decoder provides less information to the receiver, it allows us to handle twice as many errors.
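The quantitative statement behind this trade-off, for a code of minimum distance d, reads:

```latex
\underbrace{t \le \left\lfloor \tfrac{d-1}{2} \right\rfloor}_{\text{correctable errors}}
\qquad \text{versus} \qquad
\underbrace{t \le d - 1}_{\text{detectable errors}}
```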
Generalized Reed–Solomon (in short, GRS) codes and their derivative codes are probably the most extensively used codes in practice. This may be attributed to several advantages of these codes. First, GRS codes are maximum distance separable; namely, they attain the Singleton bound. Secondly, being linear codes, they can be encoded efficiently; furthermore, as we show in this chapter, encoders for the sub-class of conventional Reed–Solomon (in short, RS) codes can be implemented by particularly simple hardware circuits. Thirdly, we will show in Chapters 6 and 9 that GRS codes can also be decoded efficiently.
As their names suggest, RS codes pre-dated their GRS counterparts. Nevertheless, we find it more convenient herein to define GRS codes first and prove several properties thereof; we then present RS codes as a special class of GRS codes.
One seeming limitation of GRS codes is the fact that their length is bounded from above by the size of the field over which they are defined. This could imply that these codes might be useful only when the application calls for a field size that is relatively large, e.g., when the field is GF(2^8) and the symbols are bytes. Still, we show that GRS codes can serve as building blocks to derive new codes over small alphabets as well. We present two methods for doing so. The first technique is called concatenation and is based on two stages of encoding, the first of which is a GRS encoder.
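A minimal sketch of the evaluation view of (G)RS encoding follows, over a small prime field for simplicity and with all column multipliers taken to be one; the chosen field, length and dimension are arbitrary assumptions for illustration.

```python
p = 13                           # work over the prime field GF(13)
n, k = 12, 4                     # length n <= field size, dimension k
alphas = list(range(1, n + 1))   # distinct evaluation points in GF(p)

def evaluate(coeffs, x):
    """Horner evaluation of a polynomial over GF(p)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def encode(message):
    """Map a message (the k coefficients of f, deg f < k) to the
    codeword (f(alpha_1), ..., f(alpha_n))."""
    assert len(message) == k
    return [evaluate(message, a) for a in alphas]

c = encode([5, 1, 0, 7])
# Two distinct polynomials of degree < k agree in at most k-1 points,
# so codewords differ in at least n - k + 1 = 9 places (MDS).
print(c)
```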
In Section 4.1, we defined MDS codes as codes that attain the Singleton bound. This chapter further explores their properties. The main topic covered here is the problem of determining, for a given positive integer k and a finite field F = GF(q), the largest length of any linear MDS code of dimension k over F. This problem is still one of the most notable unresolved questions in coding theory, as well as in other disciplines, such as combinatorics and projective geometry over finite fields. The problem has been settled so far only for a limited range of dimensions k. Based on the partial evidence proved so far, it is believed that within the range 2 ≤ k ≤ q−1 (and with two exceptions for even values of q), linear [n, k] MDS codes exist over F if and only if n ≤ q+1. One method for proving this conjecture for certain values of k is based on identifying a range of parameters for which MDS codes are necessarily extended GRS codes. To this end, we will devote a part of this chapter to reviewing some of the properties of GRS codes and their extensions.
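In symbols, the bound and the conjecture just described read:

```latex
% Singleton bound for a linear [n, k, d] code over GF(q), with
% equality defining an MDS code:
d \le n - k + 1 .
% Conjectured largest length (for 2 <= k <= q - 1, up to two
% exceptions for even q): linear [n, k] MDS codes over GF(q)
% exist if and only if
n \le q + 1 .
```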
Definition revisited
We start by recalling the Singleton bound from Section 4.1. We will prove it again here, using a certain characterization of the minimum distance of a code, as provided by the following lemma.