The decoding algorithms that we have considered to this point have all been hard decision algorithms. A hard decision decoder is one that accepts hard values (for example, 0s and 1s if the data is binary) from the channel and uses them to reconstruct what is hopefully the original codeword. Thus a hard decision decoder is characterized by “hard input” and “hard output.” In contrast, a soft decision decoder will generally accept “soft input” from the channel while producing “hard output” estimates of the correct symbols. As we will see later, the “soft input” can consist of probability-based estimates of the received symbols. In our later discussion of turbo codes, we will see that turbo decoding uses two “soft input, soft output” decoders that pass “soft” information back and forth in an iterative manner. After a certain number of iterations, the turbo decoder produces a “hard estimate” of the correct transmitted symbols.
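To make the distinction concrete, the following is a minimal sketch (not taken from the text) contrasting the two approaches on a length-3 binary repetition code. It assumes an antipodal, BPSK-style mapping of bit 0 to +1 and bit 1 to -1, and the received values are invented for illustration.

# A minimal sketch contrasting hard- and soft-decision decoding of a
# length-3 binary repetition code. We assume BPSK-style signaling:
# bit 0 is sent as +1, bit 1 as -1, and the channel adds real-valued noise.

def hard_decision_decode(received):
    """Threshold each sample to a bit, then take a majority vote."""
    bits = [0 if r > 0 else 1 for r in received]   # hard input: 0s and 1s
    return 0 if bits.count(0) > bits.count(1) else 1

def soft_decision_decode(received):
    """Sum the raw channel samples and threshold once at the end."""
    return 0 if sum(received) > 0 else 1           # soft input: real values

# Bit 0 was sent as (+1, +1, +1); the noise pushed two samples slightly
# negative but left one sample strongly positive.
received = [-0.2, -0.3, +2.6]

print(hard_decision_decode(received))  # 1 -- the majority vote is misled
print(soft_decision_decode(received))  # 0 -- the soft values recover the bit

The soft decoder succeeds here because it weighs how reliable each sample is, rather than discarding that information at the demodulator.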
Additive white Gaussian noise
In order to understand soft decision decoding, it is helpful first to take a closer look at the communication channel presented in Figure 1.1. Our description relies heavily on the presentation in. The box in that figure labeled “Channel” is more accurately described as consisting of three components: a modulator, a waveform channel, and a demodulator; see Figure 15.1. For simplicity we restrict ourselves to binary data. Suppose that we transmit the binary codeword c = c_1 c_2 … c_n.
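The following sketch is a toy model of this decomposition, not the text's definitions: a BPSK modulator sending bit 0 as +1.0 and bit 1 as -1.0, an additive white Gaussian noise channel whose noise level sigma is an arbitrary choice, and a demodulator that can return either hard bits or the raw soft values.

# A sketch of the three components hidden inside the "Channel" box:
# a BPSK modulator, an additive white Gaussian noise waveform channel,
# and a demodulator with hard or soft output.
import numpy as np

rng = np.random.default_rng(1)

def modulate(codeword):
    """BPSK: map bit 0 to +1.0 and bit 1 to -1.0."""
    return 1.0 - 2.0 * np.asarray(codeword, dtype=float)

def awgn_channel(signal, sigma=0.7):
    """Add independent Gaussian noise of standard deviation sigma."""
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def demodulate_hard(received):
    """Hard output: one bit per sample."""
    return (received < 0).astype(int)

def demodulate_soft(received):
    """Soft output: pass the raw samples on to a soft-decision decoder."""
    return received

c = [0, 1, 1, 0, 1, 0, 0]        # a transmitted binary codeword c = c_1 ... c_n
r = awgn_channel(modulate(c))
print(demodulate_hard(r))        # hard estimates, possibly containing errors
print(demodulate_soft(r))        # noisy reals carrying reliability information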
In this chapter we discuss some basic properties of combinatorial designs and their relationship to codes. In Section 6.5, we showed how duadic codes can lead to projective planes. Projective planes are a special case of t-designs, also called block designs, which are the main focus of this chapter. As with duadic codes and projective planes, most designs we study arise as the supports of codewords of a given weight in a code.
t-designs
A t-(v, k, λ) design, or briefly a t-design, is a pair (P, B) where P is a set of v elements, called points, and B is a collection of distinct subsets of P of size k, called blocks, such that every subset of points of size t is contained in precisely λ blocks. (Sometimes one considers t-designs in which the collection of blocks is a multiset, that is, blocks may be repeated. In such a case, a t-design without repeated blocks is called simple. We will generally only consider simple t-designs and hence, unless otherwise stated, the expression “t-design” will mean “simple t-design.”) The number of blocks in B is denoted by b, and, as we will see shortly, is determined by the parameters t, v, k, and λ.
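As a concrete check of the definition, the following sketch (illustrative code, not from the text) tests the t-design condition directly on the Fano plane, a 2-(7, 3, 1) design, and computes the block count from the standard relation b · C(k, t) = λ · C(v, t), which is the determination of b promised above.

# Check the t-design condition directly and compute the forced block count.
from itertools import combinations
from math import comb

def is_t_design(points, blocks, t, lam):
    """Every t-subset of points must lie in exactly lam blocks."""
    return all(
        sum(1 for B in blocks if set(T) <= B) == lam
        for T in combinations(points, t)
    )

def block_count(t, v, k, lam):
    """b is determined by the parameters: b = lam * C(v, t) / C(k, t)."""
    return lam * comb(v, t) // comb(k, t)

# The Fano plane: 7 points, 7 blocks of size 3, every pair of points
# contained in exactly one block.
points = range(1, 8)
blocks = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]

print(is_t_design(points, blocks, t=2, lam=1))   # True
print(block_count(t=2, v=7, k=3, lam=1))         # 7, matching len(blocks)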
The [n, k] codes that we have studied to this point are called block codes because we encode a message of k information symbols into a block of length n. On the other hand, convolutional codes use an encoding scheme that depends not only upon the current message being transmitted but also upon a certain number of preceding messages. Thus “memory” is an important feature of an encoder of a convolutional code. For example, if x(1), x(2), … is a sequence of messages, each consisting of k information symbols, to be transmitted at times 1, 2, …, then an (n, k) convolutional code with memory M will transmit codewords c(1), c(2), …, where c(i) depends upon x(i), x(i − 1), …, x(i − M). In our study of linear block codes we have discovered that it is not unusual to consider codes of fairly large lengths n and dimensions k. In contrast, the study and application of convolutional codes has dealt primarily with (n, k) codes with n and k very small and a variety of values of M.
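As a small worked example, the following sketch implements a binary (n, k) = (2, 1) convolutional encoder with memory M = 2 and generator polynomials 1 + D + D^2 and 1 + D^2 (octal 7 and 5); this is a standard textbook encoder, not necessarily the one used later in this chapter. Each output block c(i) depends on the current bit x(i) and the two preceding bits.

# A (2, 1) convolutional encoder with memory M = 2 and generators
# 1 + D + D^2 and 1 + D^2. Each output block depends on the current
# input and the M previous inputs.

def conv_encode(bits, memory=2):
    state = [0] * memory                 # the M most recent inputs
    out = []
    for x in bits:
        c1 = x ^ state[0] ^ state[1]     # taps of 1 + D + D^2
        c2 = x ^ state[1]                # taps of 1 + D^2
        out.append((c1, c2))             # the length-n block c(i)
        state = [x] + state[:-1]         # shift the memory
    return out

print(conv_encode([1, 0, 1, 1]))
# [(1, 1), (1, 0), (0, 0), (0, 1)] -- memory makes each block context-dependent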
Convolutional codes were developed by Elias in 1955. In this chapter we will only introduce the subject and restrict ourselves to binary codes. While there are a number of decoding algorithms for convolutional codes, the main one is due to Viterbi; we will examine his algorithm in Section 14.2.
In 1948 Claude Shannon published a landmark paper “A mathematical theory of communication” that signified the beginning of both information theory and coding theory. Given a communication channel which may corrupt information sent over it, Shannon identified a number called the capacity of the channel and proved that arbitrarily reliable communication is possible at any rate below the channel capacity. For example, when transmitting images of planets from deep space, it is impractical to retransmit the images. Hence if portions of the data giving the images are altered, due to noise arising in the transmission, the data may prove useless. Shannon's results guarantee that the data can be encoded before transmission so that the altered data can be decoded to the specified degree of accuracy. Examples of other communication channels include magnetic storage devices, compact discs, and any kind of electronic communication device such as cellular telephones.
The common feature of communication channels is that information emanates from a source and is sent over the channel to a receiver at the other end. For instance, in deep space communication, the message source is the satellite, the channel is outer space together with the hardware that sends and receives the data, and the receiver is the ground station on Earth. (Of course, messages travel from Earth to the satellite as well.) For the compact disc, the message is the voice, music, or data to be placed on the disc, the channel is the disc itself, and the receiver is the listener.
Coding theory originated with the 1948 publication of the paper “A mathematical theory of communication” by Claude Shannon. For the past half century, coding theory has grown into a discipline intersecting mathematics and engineering with applications to almost every area of communication such as satellite and cellular telephone transmission, compact disc recording, and data storage.
During the 50th anniversary year of Shannon's seminal paper, the two-volume Handbook of Coding Theory, edited by the authors of the current text, was published by Elsevier Science. That Handbook, with contributions from 33 authors, covers a wide range of topics at the frontiers of research. As editors of the Handbook, we felt it would be appropriate to produce a textbook that could serve in part as a bridge to the Handbook. This textbook is intended to be an in-depth introduction to coding theory from both a mathematical and engineering viewpoint suitable either for the classroom or for individual study. Several of the topics are classical, while others cover current subjects that appear only in specialized books and journal publications. We hope that the presentation in this book, with its numerous examples and exercises, will serve as a lucid introduction that will enable readers to pursue some of the many themes of coding theory.
Fundamentals of Error-Correcting Codes is a largely self-contained textbook suitable for advanced undergraduate students and graduate students at any level.
In this chapter we examine the properties of the binary and ternary Golay codes, the hexacode, and the Pless symmetry codes. The Golay codes and the hexacode have similar properties while the Pless symmetry codes generalize the extended ternary Golay code. We conclude the chapter with a section showing some of the connections between these codes and lattices.
The binary Golay codes
In this section we examine in more detail the binary Golay codes of lengths 23 and 24. We have established the existence of a [23, 12, 7] and a [24, 12, 8] binary code in Section 1.9.1. Recall that our original construction of the [24, 12, 8] extended binary Golay code used a bordered reverse circulant generator matrix, and the [23, 12, 7] code was obtained from it by puncturing. Since then we have given different constructions of these codes, both of which were claimed to be unique codes of their length, dimension, and minimum distance. We first establish this uniqueness.
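One way to experiment with these codes is via the cyclic (quadratic residue) route, a well-known construction distinct from the bordered circulant one recalled above. The sketch below generates the [23, 12, 7] code from the standard generator polynomial g(x) = 1 + x^2 + x^4 + x^5 + x^6 + x^10 + x^11, a divisor of x^23 − 1 over F_2, verifies the minimum distance by brute force, and extends by an overall parity check to recover minimum distance 8.

# Build the [23, 12, 7] binary Golay code cyclically from g(x), then
# extend by an overall parity bit to get a [24, 12, 8] code.
from itertools import product

g = [1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1]   # coefficients of g(x), degree 11

def encode(msg):
    """Multiply the message polynomial by g(x) over GF(2)."""
    word = [0] * 23
    for i, m in enumerate(msg):
        if m:
            for j, gj in enumerate(g):
                word[i + j] ^= gj
    return word

weights = [sum(encode(m)) for m in product([0, 1], repeat=12)]
print(min(w for w in weights if w > 0))      # 7: the minimum distance
ext_weights = [w + (w % 2) for w in weights] # effect of an overall parity bit
print(min(w for w in ext_weights if w > 0))  # 8: the extended code's distance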
Uniqueness of the binary Golay codes
Throughout this section let C be a (possibly nonlinear) binary code of length 23 and minimum distance 7 containing M ≥ 2^12 codewords, one of which is 0. In order to prove the uniqueness of C, we first show it has exactly 2^12 codewords and is perfect. We then show it has a uniquely determined weight distribution and is in fact linear. This proof of linearity follows along the lines indicated by.
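The counting behind the first step is easy to verify: balls of radius 3 = ⌊(7 − 1)/2⌋ about distinct codewords are disjoint, so M · 2^11 ≤ 2^23, forcing M = 2^12 with the balls filling all of F_2^23. The following quick computation (a check of the arithmetic only, not part of the text's proof) confirms the sphere-packing equality.

# Sphere-packing arithmetic behind "exactly 2^12 codewords and perfect":
# balls of radius 3 around 2^12 words of length 23 exactly fill F_2^23.
from math import comb

ball = sum(comb(23, i) for i in range(4))   # vectors within distance 3
print(ball)                                 # 2048 == 2^11
print(2**12 * ball == 2**23)                # True: the packing is perfect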