Exponential sums are important tools in number theory for solving problems involving integers—and real numbers in general—that are often intractable by other means. Analogous sums can be considered in the framework of finite fields and turn out to be useful in studying the number of solutions of equations over finite fields (see Chapter 6) and in various applications of finite fields.
A basic role in setting up exponential sums for finite fields is played by special group homomorphisms called characters. It is necessary to distinguish between two types of characters—namely, additive and multiplicative characters—depending on whether reference is made to the additive or the multiplicative group of the finite field. Exponential sums are formed by using the values of one or more characters and possibly combining them with weights or with other function values. If we only sum the values of a single character, we speak of a character sum.
In Section 1 we lay the foundation by first discussing characters of finite abelian groups and then specializing to finite fields. Section 2 is devoted to Gaussian sums, which are arguably the most important types of exponential sums for finite fields as they govern the transition from the additive to the multiplicative structure and vice versa. They also appear in many other contexts in algebra and number theory. The closely related Jacobi sums are studied in the next section.
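To make the notion of a Gaussian sum concrete, here is a small Python sketch (not from the text) that builds a nontrivial additive character and a nontrivial multiplicative character on the prime field with seven elements and verifies the classical fact that the resulting Gaussian sum has absolute value equal to the square root of the field size. The prime 7 and the generator 3 are chosen purely for illustration.

```python
import cmath

p = 7  # a small prime field F_p, chosen for illustration

# Additive character: psi(x) = exp(2*pi*i*x/p)
def psi(x):
    return cmath.exp(2j * cmath.pi * (x % p) / p)

# A multiplicative character is built from a generator g of F_p^*.
# For p = 7, g = 3 generates the multiplicative group.
g = 3
log = {pow(g, k, p): k for k in range(p - 1)}  # discrete log table

def chi(x, j=1):
    # chi_j(x) = exp(2*pi*i*j*log_g(x)/(p-1)); nontrivial when j != 0 mod p-1
    return cmath.exp(2j * cmath.pi * j * log[x % p] / (p - 1))

# Gaussian sum G(chi, psi) = sum over x in F_p^* of chi(x)*psi(x)
G = sum(chi(x) * psi(x) for x in range(1, p))

# For nontrivial chi and psi the absolute value is sqrt(p)
print(abs(G))
```

Running this prints a value numerically equal to the square root of 7, illustrating how the Gaussian sum ties the additive character (defined through the additive group) to the multiplicative character (defined through a generator of the multiplicative group).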
The theory of polynomials over finite fields is important for investigating the algebraic structure of finite fields as well as for many applications. Above all, irreducible polynomials—the prime elements of the polynomial ring over a finite field—are indispensable for constructing finite fields and computing with the elements of a finite field.
Section 1 introduces the notion of the order of a polynomial. An important fact is the connection between minimal polynomials of primitive elements (so-called primitive polynomials) and polynomials of the highest possible order for a given degree. Results about irreducible polynomials going beyond those discussed in the previous chapters are presented in Section 2. The next section is devoted to constructive aspects of irreducibility and deals also with the problem of calculating the minimal polynomial of an element in an extension field.
Certain special types of polynomials are discussed in the last two sections. Linearized polynomials are singled out by the property that all the exponents occurring in them are powers of the characteristic. The remarkable theory of these polynomials enables us, in particular, to give an alternative proof of the normal basis theorem. Binomials and trinomials—that is, two-term and three-term polynomials—form another class of polynomials for which special results of considerable interest can be established. We remark that another useful collection of polynomials—namely, that of cyclotomic polynomials—was already considered in Chapter 2, Section 4, and that some additional information on cyclotomic polynomials is contained in Section 2 of the present chapter.
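As a small illustration of the notion of the order of a polynomial mentioned above (the example is mine, not the text's), the following Python sketch computes, for a binary polynomial f with nonzero constant term, the least e such that f divides x^e - 1, by tracking powers of x modulo f. Polynomials over F_2 are encoded as integer bitmasks (bit i is the coefficient of x^i).

```python
def poly_mod_mul(a, b, f):
    """Multiply two GF(2) polynomials (bitmask encoding) modulo f.
    Assumes a is already reduced modulo f."""
    deg_f = f.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> deg_f & 1:  # reduce whenever the degree reaches deg(f)
            a ^= f
    return r

def poly_order(f):
    """Least e >= 1 with f dividing x^e - 1 (requires f(0) != 0)."""
    assert f & 1, "f must have a nonzero constant term"
    acc, e = 1, 0
    while True:
        acc = poly_mod_mul(acc, 0b10, f)  # multiply by x, reduce mod f
        e += 1
        if acc == 1:
            return e

# x^3 + x + 1 is primitive over F_2: its order is 2^3 - 1 = 7
print(poly_order(0b1011))   # 7
# x^4 + x^3 + x^2 + x + 1 is irreducible but not primitive: its order is 5
print(poly_order(0b11111))  # 5
```

The two sample polynomials exhibit the connection described in the text: a primitive polynomial of degree n attains the highest possible order 2^n - 1, while an irreducible polynomial need not.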
The intention of this chapter is to present a notation for specifying programs to be written later in imperative programming languages such as PASCAL, C, MODULA, ADA, BASIC, or even in assembly code. Put together, the elements of this notation form a very simple pseudo-programming language.
The pseudo-programs that we shall write using this notation will not be executed on a computer nor, a fortiori, be submitted to any kind of tests. Rather, and more importantly, they can be subjected to mathematical analysis; this is made possible because each construct of the notation receives a precise axiomatic definition. For this, we shall use the technique of the weakest pre-condition introduced by Dijkstra.
The notation contains a number of constructs that look like the ones encountered in every imperative programming language, namely assignment and conditionals. But it also contains unusual features such as pre-condition, multiple assignment, bounded choice, guard, and even unbounded choice, which are very important for specifying and designing programs although, and probably because, they are not always executable or even implementable.
Although it might seem strange at first glance, the notation does not contain any form of sequencing or loop, even though such features form the basis of any imperative program. The reason for omitting these constructs here is that our problem is not, for the moment, that of writing programs but that of specifying them: sequencing and loop certainly pertain to the 'how' domain, which characterizes programs, not to the 'what' domain, which characterizes specifications.
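The constructs mentioned above (assignment, pre-condition, guard, bounded choice) can each be read as a predicate transformer in Dijkstra's weakest pre-condition style. The following Python sketch, which is my own illustration and not part of the notation itself, encodes predicates as functions from a state (a dictionary) to a boolean, and each construct as a combinator returning such a transformer.

```python
def assign(var, expr):
    """Assignment x := E:  wp([x := E], P) = P with E substituted for x."""
    def wp(post):
        return lambda s: post({**s, var: expr(s)})
    return wp

def pre(p, sub):
    """Pre-conditioned substitution P | S:  wp = P and wp(S, post)."""
    def wp(post):
        return lambda s: p(s) and sub(post)(s)
    return wp

def guard(g, sub):
    """Guarded substitution G ==> S:  wp = G implies wp(S, post)."""
    def wp(post):
        return lambda s: (not g(s)) or sub(post)(s)
    return wp

def choice(s1, s2):
    """Bounded choice S [] T:  wp = wp(S, post) and wp(T, post)."""
    def wp(post):
        return lambda s: s1(post)(s) and s2(post)(s)
    return wp

# Example: a conditional built from two guarded branches and bounded choice,
# computing y as the absolute value of x.
absval = choice(
    guard(lambda s: s["x"] >= 0, assign("y", lambda s: s["x"])),
    guard(lambda s: s["x"] < 0, assign("y", lambda s: -s["x"])),
)

post = lambda s: s["y"] >= 0      # post-condition: y is non-negative
print(absval(post)({"x": -5}))    # the weakest pre-condition holds here
```

Note that the guarded branches are not separately "executable" in any ordinary sense; only their combination under bounded choice behaves like a conditional, which illustrates why such constructs belong to specification rather than to programming.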
When doing mathematics, people prove assertions. This is accomplished with the implicit or explicit help of some rules of reasoning admitted, for quite a long time, to be the correct rules of reasoning. Our goal, in this chapter, is to make absolutely precise what such rules are, so that, in principle, if not in practice, the activity of proving could be made checkable by a robot.
The reason why we insist, in this first chapter, on the very precise definition of what mathematical reasoning is, must be clear. Our eventual aim is to have a significant part of software systems constructed in a way that is guaranteed by proofs. Most of the time such proofs are, mathematically speaking, not very deep; however, there will be many of them. Hence, it will be quite easy to introduce errors in doing them, in very much the same way as people introduce errors in writing programs. There is, clearly, no gain in just shifting the production of errors from programs to proofs. Fortunately, however, there exists an important distinction between programs and proofs. Both are formal texts, but, provided the foundation for proofs is sufficiently elaborate (hence this chapter), then proofs can always be checked mechanically for correctness, whereas programs cannot.
This chapter is organized as follows. In the first section, we introduce the way mathematical conjectures are presented and the way such conjectures can be eventually proved. This is done independently of any precise mathematical domain.
This book is a very long discourse explaining how, in my opinion, the task of programming (in the small as well as in the large) can be accomplished by returning to mathematics.
By this, I first mean that the precise mathematical definition of what a program does must be present at the origin of its construction. If such a definition is lacking, or if it is too complicated, we might wonder whether our future program will mean anything at all. My belief is that a program, in the absolute, means absolutely nothing. A program only means something relative to a certain intention, which must predate it in one form or another. At this point, I have no objection to people feeling more comfortable with the word “English” replacing the word “mathematics”. I just wonder whether such people are not assigning themselves a more difficult task.
I also think that this “return to mathematics” should be present in the very process of program construction. Here the task is to assign a program to a well-defined meaning. The idea is to accompany the technical process of program construction by a similar process of proof construction, which guarantees that the proposed program agrees with its intended meaning.
Paying simultaneous attention to the architecture of a program and to that of its proof is surprisingly effective. For instance, when the proof is cumbersome, there is a serious chance that the program will be too; and the ingredients for structuring proofs (abstraction, instantiation, decomposition) are very similar to those for structuring programs.
This chapter is devoted to the study of various theoretical developments concerning Generalized Substitutions and Abstract Machines. Our ultimate goal is to construct some set-theoretic models for Abstract Machines and Generalized Substitutions. By doing so, we shall be able to conduct further developments of Abstract Machines (i.e. loop and refinement in chapters 9 and 11 respectively) on these models rather than on the notation itself. This will prove to be very convenient.
In section 6.1, we prove that any generalized substitution defined in terms of the basic constructs of the previous chapter can be put into a certain normalized form. An outcome of this result is that any proof involving a generalized substitution can be done by assuming, without any loss of generality, that the substitution in question is in normalized form.
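For the reader's orientation, the normalized form in question can be sketched as follows (under the usual conventions of the notation, where P is the termination predicate of the substitution and Q its before-after predicate relating x to the after-value x'):

```latex
S \;=\; P \;\big|\; @\,x' \cdot \big( Q \;\Longrightarrow\; x := x' \big)
```

That is, every generalized substitution is equivalent to a pre-conditioned, unbounded choice over all after-values x' satisfying Q, which is what licenses the "without loss of generality" arguments mentioned above.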
In section 6.2, we prove two convenient properties of generalized substitutions: namely, that the establishment of a post-condition distributes through conjunction and that it is monotonic under universal implication. The first will have a clear practical impact whereas the second is useful for theoretical developments. These properties were named by E.W. Dijkstra the Healthiness Conditions.
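Writing [S]R for the statement that substitution S establishes post-condition R, the two healthiness conditions just mentioned can be stated as follows (a sketch in the spirit of the notation, with x the variable concerned):

```latex
[S](P \land Q) \;\Leftrightarrow\; [S]P \,\land\, [S]Q
\qquad\qquad
\forall x\,(P \Rightarrow Q) \;\;\vdash\;\; [S]P \Rightarrow [S]Q
```

The first is the distribution through conjunction; the second is monotonicity under universal implication.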
In section 6.3, we study the problems of termination and feasibility, and we give a simple characterization of each of these properties. We also explain how the Generalized Substitution Language has exactly the same expressive power as the before-after predicates introduced in section 4.4.