One of the major applications of finite fields is coding theory. This theory has its origin in a famous theorem of Shannon that guarantees the existence of codes that can transmit information at rates close to the capacity of a communication channel with an arbitrarily small probability of error. One purpose of algebraic coding theory—the theory of error-correcting and error-detecting codes—is to devise methods for the construction of such codes.
During the last two decades more and more abstract algebraic tools such as the theory of finite fields and the theory of polynomials over finite fields have influenced coding. In particular, the description of redundant codes by polynomials over Fq is a milestone in this development. The fact that one can use shift registers for coding and decoding establishes a connection with linear recurring sequences. In our discussion of algebraic coding theory we do not consider any of the problems of the implementation or technical realization of the codes. We restrict ourselves to the study of basic properties of block codes and the description of some interesting classes of block codes.
Section 1 contains some background on algebraic coding theory and discusses the important class of linear codes in which encoding is performed by a linear transformation. A particularly interesting type of linear code is a cyclic code—that is, a linear code invariant under cyclic shifts.
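To make the notion concrete, here is a minimal illustrative sketch (our own, not taken from the text) of a binary cyclic code: the code of length 7 generated by g(x) = x^3 + x + 1, chosen purely as a standard example. It encodes 4-bit messages by polynomial multiplication over F2 and checks that the resulting set of codewords is invariant under cyclic shifts.

```python
# Illustrative sketch (not from the text): a binary cyclic code of length 7 generated by
# g(x) = x^3 + x + 1.  Polynomials are coefficient lists over F_2, index i = coefficient of x^i.

def poly_mul_mod2(a, b):
    """Multiply two polynomials over F_2 given as coefficient lists."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] ^= ai & bj
    return result

def encode(msg_bits, g=(1, 1, 0, 1), n=7):
    """Encode a 4-bit message m(x) as the codeword m(x)g(x) of length n."""
    word = poly_mul_mod2(list(msg_bits), list(g))
    return tuple(word + [0] * (n - len(word)))

# All 16 codewords of this linear code.
codewords = {encode(tuple((m >> i) & 1 for i in range(4))) for m in range(16)}

def cyclic_shift(c):
    """Cyclically shift a codeword by one position (multiplication by x mod x^7 - 1)."""
    return (c[-1],) + c[:-1]

# The defining property of a cyclic code: shifts of codewords are again codewords.
assert all(cyclic_shift(c) in codewords for c in codewords)
print(len(codewords), "codewords, closed under cyclic shifts")
```

The shift-invariance rests on the fact that g(x) divides x^7 - 1 over F2; an arbitrary generator polynomial would yield a linear but not necessarily cyclic code.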
This chapter is of central importance since it contains various fundamental properties of finite fields and a description of methods for constructing finite fields.
The field of integers modulo a prime number is, of course, the most familiar example of a finite field, but many of its properties extend to arbitrary finite fields. The characterization of finite fields (see Section 1) shows that every finite field is of prime-power order and that, conversely, for every prime power there exists a finite field whose number of elements is exactly that prime power. Furthermore, finite fields with the same number of elements are isomorphic and may therefore be identified. The next two sections provide information on roots of irreducible polynomials, leading to an interpretation of finite fields as splitting fields of irreducible polynomials, and on traces, norms, and bases relative to field extensions.
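As a small illustration of these facts, the following sketch (ours, not from the text) builds the field with four elements as F2[x]/(x^2 + x + 1), representing each element by its pair of coefficients, and verifies that every nonzero element has a multiplicative inverse.

```python
# Illustrative sketch: the field F_4 built as F_2[x]/(x^2 + x + 1).
# Elements are pairs (a0, a1) standing for a0 + a1*x; all coefficient arithmetic is mod 2.

ELEMENTS = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2,
    # where x^2 is reduced using x^2 = x + 1 (since x^2 + x + 1 = 0).
    a0, a1 = u
    b0, b1 = v
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

# Every nonzero element has a multiplicative inverse, so this is a field with 4 = 2^2 elements.
one = (1, 0)
for u in ELEMENTS:
    if u != (0, 0):
        assert any(mul(u, v) == one for v in ELEMENTS)
print("nonzero elements of F_4:", [e for e in ELEMENTS if e != (0, 0)])
```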
Section 4 treats roots of unity from the viewpoint of general field theory, which will be needed occasionally in Section 6 as well as in Chapter 5. Section 5 presents different ways of representing the elements of a finite field. In Section 6 we give two proofs of the famous theorem of Wedderburn according to which every finite division ring is a field.
Many discussions in this chapter will be followed up, continued, and partly generalized in later chapters.
CHARACTERIZATION OF FINITE FIELDS
In the previous chapter we have already encountered a basic class of finite fields—that is, of fields with finitely many elements.
Sequences in finite fields whose terms depend in a simple manner on their predecessors are of importance for a variety of applications. Such sequences are easy to generate by recursive procedures, which is certainly an advantageous feature from the computational viewpoint, and they also tend to have useful structural properties. Of particular interest is the case where the terms depend linearly on a fixed number of predecessors, resulting in a so-called linear recurring sequence. These sequences are employed, for instance, in coding theory (see Chapter 8, Section 2), in cryptography (see Chapter 9, Section 2), and in several branches of electrical engineering. In these applications, the underlying field is often taken to be F2, but the theory can be developed quite generally for any finite field.
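The following minimal sketch (an illustration of ours, not drawn from the text) generates such a sequence over F2. The particular recursion s(n+4) = s(n+1) + s(n), corresponding to the polynomial x^4 + x + 1, is an arbitrary but convenient choice; since that polynomial is primitive over F2, any nonzero initial state yields the maximal period 2^4 - 1 = 15.

```python
# Illustrative sketch: a linear recurring sequence over F_2, where each new term is a fixed
# linear combination of the preceding k terms.  The recursion s(n+4) = s(n+1) + s(n) (mod 2)
# corresponds to x^4 + x + 1; with a nonzero initial state the sequence has period 15.

def linear_recurring_sequence(initial_state, coeffs, length):
    """Generate `length` terms; coeffs[i] multiplies the term i positions back in the state."""
    state = list(initial_state)
    out = []
    for _ in range(length):
        out.append(state[0])
        new_term = sum(c * s for c, s in zip(coeffs, state)) % 2
        state = state[1:] + [new_term]
    return out

seq = linear_recurring_sequence(initial_state=[1, 0, 0, 0], coeffs=[1, 1, 0, 0], length=30)
print(seq[:15])                 # one full period
print(seq[:15] == seq[15:30])   # True: the sequence repeats with period 15
```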
In Section 1 we show how to implement the generation of linear recurring sequences on special switching circuits called feedback shift registers. We also discuss some basic periodicity properties of such sequences. Section 2 introduces the concept of an impulse response sequence, which is of both practical and theoretical interest. Further relations to periodicity properties are found in this way, and also through the use of the so-called characteristic polynomial of a linear recurring sequence. Another application of the characteristic polynomial yields explicit formulas for the terms of a linear recurring sequence. Maximal period sequences are also defined in this section. These sequences will appear in various applications in later chapters.
This book is designed as a textbook edition of our monograph Finite Fields, which appeared in 1983 as Volume 20 of the Encyclopedia of Mathematics and Its Applications. Several changes have been made in order to tailor the book to the needs of the student. The historical and bibliographical notes at the end of each chapter and the long bibliography have been omitted, as they are mainly of interest to researchers. The reader who desires this type of information may consult the original edition. There are also changes in the text proper, with the present book having an even stronger emphasis on applications. The increasingly important role of finite fields in cryptology is reflected by a new chapter on this topic. There is now a separate chapter on algebraic coding theory containing material from the original edition together with a new section on Goppa codes. New material on pseudorandom sequences has also been added. On the other hand, topics in the original edition that are mainly of theoretical interest have been omitted. Thus, a large part of the material on exponential sums and the chapters on equations over finite fields and on permutation polynomials cannot be found in the present volume.
The theory of finite fields is a branch of modern algebra that has come to the fore in the last 50 years because of its diverse applications in combinatorics, coding theory, cryptology, and the mathematical study of switching circuits, among others.
The theory of polynomials over finite fields is important for investigating the algebraic structure of finite fields as well as for many applications. Above all, irreducible polynomials—the prime elements of the polynomial ring over a finite field—are indispensable for constructing finite fields and computing with the elements of a finite field.
Section 1 introduces the notion of the order of a polynomial. An important fact is the connection between minimal polynomials of primitive elements (so-called primitive polynomials) and polynomials of the highest possible order for a given degree. Results about irreducible polynomials going beyond those discussed in the previous chapters are presented in Section 2. The next section is devoted to constructive aspects of irreducibility and deals also with the problem of calculating the minimal polynomial of an element in an extension field.
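To illustrate the notion just mentioned: the order of a polynomial f over F2 with f(0) ≠ 0 is the least positive integer e such that f(x) divides x^e - 1. The brute-force sketch below (ours, for illustration only; the book develops far better methods) encodes polynomials as integers whose bits are the coefficients and finds the order by trial.

```python
# Illustrative sketch: the order of a polynomial f over F_2 with f(0) != 0 is the least e >= 1
# such that f(x) divides x^e - 1.  Polynomials are encoded as integers whose bits are the
# coefficients (bit i = coefficient of x^i); this brute-force search is for illustration only.

def poly_mod2(a, m):
    """Remainder of a divided by m in F_2[x] (both encoded as integers)."""
    dm = m.bit_length() - 1
    while a and a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def order(f):
    """Least e with f(x) dividing x^e + 1 over F_2 (assumes f(0) != 0)."""
    e = 1
    while True:
        if poly_mod2((1 << e) ^ 1, f) == 0:   # is x^e + 1 congruent to 0 mod f ?
            return e
        e += 1

# x^4 + x + 1 (0b10011) is primitive of degree 4, so its order is 2^4 - 1 = 15;
# x^4 + x^3 + x^2 + x + 1 (0b11111) divides x^5 + 1, so its order is 5.
print(order(0b10011), order(0b11111))
```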
Certain special types of polynomials are discussed in the last two sections. Linearized polynomials are singled out by the property that all the exponents occurring in them are powers of the characteristic. The remarkable theory of these polynomials enables us, in particular, to give an alternative proof of the normal basis theorem. Binomials and trinomials—that is, two-term and three-term polynomials—form another class of polynomials for which special results of considerable interest can be established. We remark that another useful collection of polynomials—namely, that of cyclotomic polynomials—was already considered in Chapter 2, Section 4, and that some additional information on cyclotomic polynomials is contained in Section 2 of the present chapter.
This book has been out of print for some time, but because of the continuing demand we want to make it available again. We have taken this opportunity to revise the book slightly. Historical and bibliographical notes have been added to each chapter, the bibliography has become more detailed, and the misprints in the original edition have of course been corrected. We would like to thank the staff at Cambridge University Press for accommodating our wishes so readily.
The subject of finite fields has undergone a spectacular development in the last 10 years. Great strides have been made, especially in the computational and algorithmic aspects of finite fields which are important in the rapidly developing areas of computer algebra and symbolic computation. The numerous applications of finite fields to combinatorics, cryptology, algebraic coding theory, pseudorandom number generation and electrical engineering, to name but a few, have provided a steady impetus for further research. Thus, the subject grows inexorably. Nevertheless, we hope that this book will continue to serve its purpose as an introduction for students, since it is devoted mainly to those parts that have a certain quality of timelessness, namely the classical theory and the standard applications of finite fields.
In this chapter we consider some aspects of cryptology that have received considerable attention over the last few years. Cryptology is concerned with the designing and the breaking of systems for the communication of secret information. Such systems are called cryptosystems or cipher systems or ciphers. The designing aspect is called cryptography, and the breaking is referred to as cryptanalysis. The rapid development of computers, the electronic transmission of information, and the advent of electronic transfer of funds all contributed to the evolution of cryptology from a government monopoly dealing with military and diplomatic communications to a major concern of business. The concepts have changed from conventional (private-key) cryptosystems to public-key cryptosystems that provide privacy and authenticity in communication via transfer of messages. Cryptology as a science is in its infancy, since it is still searching for appropriate criteria for security and for measures of the complexity of cryptosystems.
Conventional cryptosystems date back to the ancient Spartans and Romans. One elementary cipher, the Caesar cipher, was used by Julius Caesar and consists of a single key K = 3 such that a message M is transformed into M + 3 modulo 26, where the integers 0, 1, …, 25 represent the letters A, B, …, Z of the alphabet. An obvious generalization of this cipher leads to the substitution ciphers often named after de Vigenère, a French cryptographer of the 16th century.
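A minimal sketch of the two ciphers just described, with the letters A through Z identified with the integers 0 through 25; the messages and the keyword used for the Vigenère example are arbitrary illustrations of ours.

```python
# Illustrative sketch: the Caesar cipher shifts each letter by the key K = 3 modulo 26;
# the Vigenère cipher generalizes this by cycling through a keyword of shifts.
# Letters A..Z are identified with the integers 0..25.

def caesar_encrypt(message, key=3):
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in message)

def vigenere_encrypt(message, keyword):
    shifts = [ord(k) - ord("A") for k in keyword]
    return "".join(
        chr((ord(c) - ord("A") + shifts[i % len(shifts)]) % 26 + ord("A"))
        for i, c in enumerate(message)
    )

print(caesar_encrypt("FINITEFIELDS"))           # every letter shifted by 3
print(vigenere_encrypt("FINITEFIELDS", "KEY"))  # shifts cycle through K, E, Y
```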
Any nonconstant polynomial over a field can be expressed as a product of irreducible polynomials. In the case of finite fields, some reasonably efficient algorithms can be devised for the actual calculation of the irreducible factors of a given polynomial of positive degree.
The availability of feasible factorization algorithms for polynomials over finite fields is important for coding theory and for the study of linear recurrence relations in finite fields. Beyond the realm of finite fields, there are various computational problems in algebra and number theory that depend in one way or another on the factorization of polynomials over finite fields. We mention the factorization of polynomials over the ring of integers, the determination of the decomposition of rational primes in algebraic number fields, the calculation of the Galois group of an equation over the rationals, and the construction of field extensions.
We shall present several algorithms for the factorization of polynomials over finite fields. The choice of algorithm for a specific factorization problem usually depends on whether the underlying finite field is “small” or “large.” In Section 1 we describe those algorithms that are better adapted to “small” finite fields, and in the next section those that work better for “large” finite fields. Some of these algorithms reduce the problem of factoring polynomials to that of finding the roots of certain other polynomials. Therefore, Section 3 is devoted to the discussion of the latter problem from the computational viewpoint.
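For orientation only, here is a naive trial-division sketch (ours; it is not one of the algorithms presented in this chapter, and it is hopelessly slow for large inputs) that factors a polynomial over F2 into irreducible factors, with polynomials encoded as integers whose bits are the coefficients.

```python
# Illustrative sketch (not an algorithm of this chapter): factoring a polynomial over F_2
# into irreducible factors by naive trial division.  Polynomials are encoded as integers
# whose bits are the coefficients (bit i = coefficient of x^i).

def divmod2(a, b):
    """Quotient and remainder of a divided by b in F_2[x]."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q |= 1 << shift
        a ^= b << shift
    return q, a

def factor(f):
    """Return the irreducible factors of f over F_2 (with multiplicity), lowest degree first."""
    factors = []
    d = 2  # candidate divisors, starting with x (0b10), then x + 1 (0b11), and so on
    while f.bit_length() - 1 > 0:
        q, r = divmod2(f, d)
        if r == 0:
            factors.append(d)   # d is irreducible: all smaller factors were already removed
            f = q
        else:
            d += 1
            if (d.bit_length() - 1) * 2 > f.bit_length() - 1:
                # no divisor of degree <= deg(f)/2 remains, so f itself is irreducible
                factors.append(f)
                break
    return factors

# x^4 + x^3 + x + 1 = (x + 1)^2 (x^2 + x + 1) over F_2
print([bin(p) for p in factor(0b11011)])
```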
Finite fields play a fundamental role in some of the most fascinating applications of modern algebra to the real world. These applications occur in the general area of data communication, a vital concern in our information society. Technological breakthroughs like space and satellite communications and mundane matters like guarding the privacy of information in data banks all depend in one way or another on the use of finite fields. Because of the importance of these applications to communication and information theory, we will present them in greater detail in the following chapters. Chapter 8 discusses applications of finite fields to coding theory, the science of reliable transmission of messages, and Chapter 9 deals with applications to cryptology, the art of enciphering and deciphering secret messages.
This chapter is devoted to applications of finite fields within mathematics. These applications are indeed numerous, so we can only offer a selection of possible topics. Section 1 contains some results on the use of finite fields in affine and projective geometry and illustrates in particular their role in the construction of projective planes with a finite number of points and lines. Section 2 on combinatorics demonstrates the variety of applications of finite fields to this subject and points out their usefulness in problems of design of statistical experiments.
In Section 3 we give the definition of a linear modular system and show how finite fields are involved in this theory.
This introductory chapter contains a survey of some basic algebraic concepts that will be employed throughout the book. Elementary algebra uses the operations of arithmetic such as addition and multiplication, but replaces particular numbers by symbols and thereby obtains formulas that, by substitution, provide solutions to specific numerical problems. In modern algebra the level of abstraction is raised further: instead of dealing with the familiar operations on real numbers, one treats general operations—processes of combining two or more elements to yield another element—in general sets. The aim is to study the common properties of all systems consisting of sets on which are defined a fixed number of operations interrelated in some definite way—for instance, sets with two binary operations behaving like + and · for the real numbers.
Only the most fundamental definitions and properties of algebraic systems—that is, of sets together with one or more operations on the set—will be introduced, and the theory will be discussed only to the extent needed for our special purposes in the study of finite fields later on. We state some standard results without proof. With regard to sets we adopt the naive standpoint. We use the following sets of numbers: the set ℕ of natural numbers, the set ℤ of integers, the set ℚ of rational numbers, the set ℝ of real numbers, and the set ℂ of complex numbers.
Recall from the previous chapter (specifically, the section “Computation as metaphor for the exploration of creativity”) that what has been named the computational theory of scientific creativity, or CTSC, is a hypothesis about the nature of creativity (in the natural and the artificial sciences). This hypothesis is, furthermore, firmly rooted in the computational metaphor – more precisely, it relies on the formation of a metaphorical model connecting creative and computational processes.
It also bears repeating that this theory goes back to Newell, Shaw, and Simon (1962) and, in effect, is a special case of a broader theory of thinking known concisely as the physical symbol system hypothesis (Newell and Simon 1976). The theory also serves as the basis for some recent explanations, by Kulkarni and Simon (1988), Thagard and Nowak (1990), and Thagard (1988, 1990), of certain historically important discoveries in the natural sciences. Thus, the general nature of CTSC is well known and has been so for some time. The task of the present chapter is to articulate and state CTSC in a sufficiently precise form such that (1) the reader understands and can anticipate, at least in general terms, the direction along which the explanation of Wilkes's creativity will proceed in the chapters to follow (especially, in Chapters 4–6) and (2) it is posed as a genuinely testable (i.e., in principle, falsifiable) hypothesis for which the Wilkes case study constitutes a nontrivial test. The extent to which CTSC is, in fact, corroborated or refuted by this test, and the general lessons learned from the case study, are matters discussed in the final part of this book.
There we have it: an account – an explanation – of how a particular act of inventive design in the domain of one particular science of the artificial might have taken place. The explanation takes the form of a computation at the level of cognitive description known as the knowledge level. Just as the proof of a theorem stated as an organized, stepwise set of arguments is an explanation of why the theorem is (believed or should be believed by an individual or community to be) true, so also our explanation takes the form of a symbolic process – that is, a structured set of symbol-transforming actions. Extending this parallel further, just as a mathematical argument draws on a body of assertions (axioms, lemmas, and theorems) that are assumed or known to be true prior to the onset of the argument or are produced along the way, so also the process described in these pages appeals to a corpus of knowledge the tokens of which are in part postulated to have existed at the time Maurice Wilkes began to think about the problem and in part generated by the process itself.
Let us, at this stage, recall a point already emphasized in Chapters 1 and 4. Historical episodes of a certain type bear the stamp of contingency. Past episodes of cognitive acts such as the invention of a theory or the design of a new type of artifact belong to this category. Thus, any explanation of such an episode will inherit the burden of contingency. We can hardly ever claim that the episode must have happened in one particular way rather than some other.
In May 1949, the EDSAC computer, designed and constructed by Maurice Wilkes and his co-workers at the Cambridge University Mathematical Laboratory, successfully performed its first fully automatic computation (Wilkes, 1956, p. 39; 1985, p. 142; Wilkes and Renwick 1949). The machine was demonstrated soon after, in June 1949, at a conference entitled “High Speed Automatic Calculating Machines” held in Cambridge, during which tables of squares and primes were printed out (Worsley 1949). As noted in Chapter 1 (see the section “The invention of microprogramming: as a case study”), the EDSAC was the very first stored program computer to become fully operational.
The EDSAC was a serial machine in that (1) reading from or writing into main memory was done in a “bit-serial” manner – that is, each bit of a memory word was read or written one at a time, and (2) the arithmetic unit performed its various operations in a bit-by-bit manner.
Soon after the EDSAC's completion, Wilkes became preoccupied with the issues of regularity and complexity in computer design. This preoccupation is documented not only in his retrospective writings (Wilkes 1985, pp. 184–5, 1986), but also in the early sections of Wilkes (1951), as a preamble to his description of the microprogramming principle. Thus, there is considerable evidence that the development of microprogramming was the outcome of the following problem:
To design a control unit that would be systematic and regular in structure in much the same way that the memory unit is regular in structure.