Denotational semantics is a formal method for defining the semantics of programming languages. It is of interest to the language designer, compiler writer and programmer. These individuals have different criteria for judging such a method – depending on one's point of view, it should be concise, unambiguous, open to mathematical analysis, mechanically checkable, executable or readable. Denotational semantics cannot be all things to all people, but it is one attempt to satisfy these various aims. It is a formal method because it is based on well-understood mathematical foundations and uses a rigorously defined notation, or meta-language.
The complete definition of a programming language is divided into syntax, semantics and sometimes also pragmatics. Syntax defines the structure of legal sentences in the language. Semantics gives the meaning of these sentences. Pragmatics covers the use of an implementation of a language and will not be mentioned further.
In the case of syntax, context-free grammars expressed in Backus–Naur form (BNF) or in syntax diagrams have been of great benefit to computer scientists since Backus and Naur [44] formally specified the syntax of Algol-60. Virtually all programming languages now have their syntax given in this way. The result has been ‘cleaner’ syntax, improved parsing methods, parser-generators and better language manuals. As yet no semantic formalism has achieved such popularity, and the semantics of a new language is almost invariably given in natural language.
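To illustrate, a BNF fragment such as <exp> ::= <num> | <exp> "+" <exp> carries over directly to an abstract-syntax data type to which a semantic definition can then give meaning. The following sketch is in Haskell, purely for illustration; the names are invented and no particular meta-language is assumed:

    -- The toy grammar  <exp> ::= <num> | <exp> "+" <exp>
    -- rendered as an algebraic data type (illustrative names).
    data Exp = Num Int      -- <num>
             | Add Exp Exp  -- <exp> "+" <exp>
             deriving Show

    -- A denotation for the toy grammar: each phrase means an integer.
    eval :: Exp -> Int
    eval (Num n)   = n
    eval (Add x y) = eval x + eval y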
The typical problem facing a programmer is to write a program which will transform data satisfying some properties or assertions ‘P’ into results satisfying ‘Q’.
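For example, P might assert that the input x is a non-negative integer, and Q that the result y satisfies y*y ≤ x < (y+1)*(y+1); a program meeting this specification computes integer square roots.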
Ashcroft and Wadge [4] have criticized the effort spent on describing existing programming languages and have suggested a more active, prescriptive role for denotational semantics in designing the languages of the future. There is some truth in this, and accordingly this chapter contains a semantics for Prolog. While implementations of Prolog already exist, logic programming is still a research area, and a denotational semantics is one way to investigate variations on it.
Prolog [9] is a programming language based on first-order predicate logic. A Prolog program can be thought of in two ways. It can be taken to be a set of logical assertions or facts about a world or some part of a world. This is the declarative semantics of the program. It can also be taken as a set of procedure definitions which gives its procedural semantics.
The declarative semantics is elegant in that the program stands for some basic facts together with certain other facts that logically follow from them. No side-effects or considerations of the order of evaluation are involved. Unfortunately, to make Prolog run, and to make it run efficiently, some programs require side-effects, such as input–output, and require the order of evaluation to be taken into account. This can only be understood procedurally.
Here a denotational semantics of a subset of Prolog is given. This defines the backtracking search and unification processes of Prolog. Later the definition is translated into Algol-68 to form an interpreter. Prolog is still a research language and giving a denotational semantics enables it to be compared with other languages in a uniform framework.
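To fix ideas before the formal definition, here is a minimal sketch of the two processes in Haskell (not the meta-language or the Algol-68 of this chapter; all names are invented). A goal denotes a function from a substitution to a lazy list of answer substitutions, so failure is the empty list and backtracking is simply taking the next element:

    data Term = Var String | Fn String [Term] deriving (Eq, Show)
    type Subst = [(String, Term)]

    -- Apply a substitution to a term.
    walk :: Subst -> Term -> Term
    walk s (Var v)   = maybe (Var v) (walk s) (lookup v s)
    walk s (Fn f ts) = Fn f (map (walk s) ts)

    -- Unification: extend the substitution or fail (no occurs check).
    unify :: Term -> Term -> Subst -> Maybe Subst
    unify t u s = go (walk s t) (walk s u)
      where
        go (Var v) u' | u' == Var v = Just s
                      | otherwise   = Just ((v, u') : s)
        go t' (Var v)               = Just ((v, t') : s)
        go (Fn f ts) (Fn g us)
          | f == g && length ts == length us = unifyAll (zip ts us) s
          | otherwise                        = Nothing
        unifyAll [] s'            = Just s'
        unifyAll ((a, b) : ps) s' = unify a b s' >>= unifyAll ps

    -- A goal maps a substitution to all the ways it can succeed.
    type Goal = Subst -> [Subst]

    (===) :: Term -> Term -> Goal            -- unification as a goal
    (t === u) s = maybe [] (: []) (unify t u s)

    (&&&), (|||) :: Goal -> Goal -> Goal
    (g &&& h) s = concatMap h (g s)          -- conjunction of subgoals
    (g ||| h) s = g s ++ h s                 -- try alternative clauses

Under this reading a Prolog predicate denotes a disjunction (one alternative per clause) of conjunctions (one conjunct per subgoal), evaluated left to right – exactly the search order the full definition must pin down.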
There is great variety amongst programming languages in the area of data structures and type checking. It is only possible to deal with some of the more straightforward issues in this chapter.
Some languages, such as BCPL [52], are typeless, or have only one type. All BCPL variables have the type ‘word’. This enables BCPL to rival assembly code in application while being much more readable and concise. There are dangers, however; the compiler cannot detect type errors because there are none.
Languages that do provide types are characterized by the kind of data structures, the time at which types are checked and how much the programmer can define. Simple types, such as integer, stand for basic domains like Int. Structured types – arrays and records – stand for derived domains. There are hard problems, however, in deciding what a programmer-defined type, particularly one defined by possibly recursive equations, stands for – see recent conference proceedings [1, 2, 31]. This is obviously connected with recursive domains (§4.3).
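For instance, a recursive definition of integer lists asks for a domain IntList satisfying the equation IntList = Nil + (Int × IntList). In Haskell notation (used here purely for illustration):

    -- A solution of the 'domain equation'  IntList = Nil + (Int x IntList).
    data IntList = Nil | Cons Int IntList

    -- The recursion admits arbitrarily deep finite values and, in a
    -- lazy language, infinite ones as well -- one reason the question
    -- of what such a type 'stands for' is mathematically delicate.
    ones :: IntList
    ones = Cons 1 ones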
APL [27] is a dynamically typed language. Each constant has a particular type – integer, character, or a vector or array of one of these. The value currently assigned to a variable therefore has some type, but both the value and the type may change as the program runs. Each APL operator is applicable only to certain types, so it is possible to add 1 to an integer or to a vector of integers, but not to a character.
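This discipline can be mimicked as follows (a Haskell sketch with invented names): each value carries a run-time tag, and an operation such as addition inspects the tags before acting.

    import Control.Monad (zipWithM)

    -- APL-style dynamically typed values: the tag travels with the value.
    data Val = I Int | C Char | V [Val] deriving Show

    -- Addition works on integers and vectors of integers (with scalar
    -- extension), but adding to a character is a run-time type error.
    add :: Val -> Val -> Either String Val
    add (I m) (I n)  = Right (I (m + n))
    add (I m) (V vs) = V <$> mapM (add (I m)) vs
    add (V vs) (I n) = V <$> mapM (`add` I n) vs
    add (V vs) (V ws)
      | length vs == length ws = V <$> zipWithM add vs ws
      | otherwise              = Left "length error"
    add _ _          = Left "type error: cannot add a character"

Thus add (I 1) (V [I 2, I 3]) yields V [I 3, I 4], while add (I 1) (C 'a') fails only when the program runs – precisely the dynamic-typing trade-off.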
Both natural and programming languages can be viewed as sets of sentences—that is, finite strings of elements of some basic vocabulary. The notion of a language introduced in this section is very general. It certainly includes both natural and programming languages and also all kinds of nonsense languages one might think of. Traditionally, formal language theory is concerned with the syntactic specification of a language rather than with any semantic issues. A syntactic specification of a language with finitely many sentences can be given, at least in principle, by listing the sentences. This is not possible for languages with infinitely many sentences. The main task of formal language theory is the study of finitary specifications of infinite languages.
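For example, the infinite language consisting of all strings of n letters a followed by n letters b cannot be listed, yet a finite program decides membership. A Haskell sketch (illustrative only):

    -- A finitary specification of the infinite language { a^n b^n | n >= 0 }:
    -- a short program deciding membership of any given string.
    inLang :: String -> Bool
    inLang w = let (as, bs) = span (== 'a') w
               in all (== 'b') bs && length as == length bs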
The basic theory of computation, as well as of its various branches, such as cryptography, is inseparably connected with language theory. The input and output sets of a computational device can be viewed as languages, and—more profoundly—models of computation can be identified with classes of language specifications, in a sense to be made more precise. Thus, for instance, Turing machines can be identified with phrase-structure grammars and finite automata with regular grammars.
A finite automaton is a strictly finitary model of computation. Everything involved is of a fixed, finite size and cannot be extended during the course of computation. The other types of automata studied later have at least a potentially infinite memory. Differences between various types of automata are based mainly on how information can be accessed in the memory.
A finite automaton operates in discrete time, as do all essential models of computation. Thus, we may speak of the “next” time instant when specifying the functioning of a finite automaton.
The simplest case is the memoryless device, where, at each time instant, the output depends only on the current input. Such devices are models of combinational circuits.
In general, however, the output produced by a finite automaton depends on the current input as well as on earlier inputs. Thus, the automaton is capable (to a certain extent) of remembering its past inputs. More specifically, this means the following.
The automaton has a finite number of internal memory states. At each time instant i it is in one of these states, say q_i. The state q_{i+1} at the next time instant is determined by q_i and by the input a_i given at time instant i. The output at time instant i is determined by the state q_i (or by q_i and a_i together).
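This description translates directly into a small program. The following Haskell sketch (invented names) is a Moore-style machine, in which the output is determined by the state alone:

    -- delta gives the next state from the current state and input;
    -- out gives the output determined by the state.
    data FA q a b = FA
      { delta :: q -> a -> q
      , out   :: q -> b
      , start :: q
      }

    -- Run the automaton over an input sequence, collecting outputs.
    run :: FA q a b -> [a] -> [b]
    run fa = map (out fa) . scanl (delta fa) (start fa)

    -- Example: two states suffice to remember the parity of the
    -- number of 1s read so far.
    parity :: FA Bool Int Bool
    parity = FA { delta = \q a -> q /= (a == 1), out = id, start = False }

Here run parity [1, 0, 1] produces [False, True, True, False]: the output at each instant reflects everything read so far, not merely the current input.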
As is true for all our models of computation, a Turing machine also operates in discrete time. At each moment of time it is in a specific internal (memory) state, the number of all possible states being finite. A read-write head scans letters written on a tape, one at a time. A pair (q, a) determines a triple (q′, a′, m), where the q's are states, the a's are letters, and m (“move”) assumes one of the three values l (left), r (right), or 0 (no move). This means that, after scanning the letter a in the state q, the machine goes to the state q′, writes a′ in place of a (possibly a′ = a, meaning that the tape is left unaltered), and moves the read-write head according to m.
If the read-write head is about to “fall off” the tape, that is, a left (resp. right) move is instructed when the machine is scanning the leftmost (resp. rightmost) square of the tape, then a new blank square is automatically added to the tape. This capability of indefinitely extending the external memory can be viewed as a built-in hardware feature of every Turing machine. The situation is depicted in Figure 4.1.
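One step of such a machine can be sketched as follows (Haskell, invented names). The tape is kept as a zipper: the symbols to the left of the head (nearest first), the scanned symbol, and the symbols to the right; supplying a fresh blank when either list runs out is exactly the automatic extension just described.

    data Move = L | R | S                -- left, right, no move ('0')

    -- The tape as a zipper: left part (reversed), scanned symbol, right part.
    data Tape a = Tape [a] a [a]

    -- One step: the pair (q, a) determines the triple (q', a', m).
    step :: a                            -- the blank symbol
         -> (q -> a -> (q, a, Move))     -- the transition table
         -> (q, Tape a) -> (q, Tape a)
    step blank trans (q, Tape ls a rs) =
      let (q', a', m) = trans q a
      in case m of
           S -> (q', Tape ls a' rs)
           L -> case ls of
                  []        -> (q', Tape [] blank (a' : rs))  -- new square
                  (l : ls') -> (q', Tape ls' l (a' : rs))
           R -> case rs of
                  []        -> (q', Tape (a' : ls) blank [])  -- new square
                  (r : rs') -> (q', Tape (a' : ls) r rs')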
It might seem strange that a chapter on cryptography appears in a book dealing with the theory of computation, automata, and formal languages. However, in the last two chapters of this book we want to discuss some recent trends. Undoubtedly, cryptography now constitutes such a major field that it cannot be omitted, especially because its interconnections with some other areas discussed in this book are rather obvious. Basically, cryptography can be viewed as a part of formal language theory, although it must be admitted that the notions and results of traditional language theory have so far found only a few applications in cryptography. Complexity theory, on the other hand, is quite essential in cryptography. For instance, a cryptosystem can be viewed as safe if the problem of cryptanalysis—that is, the problem of “breaking the code”—is intractable. In particular, the complexity of certain number-theoretic problems has turned out to be a very crucial issue in modern cryptography. More generally, the seminal idea of modern cryptography, public key cryptosystems, would not have been possible without an understanding of the complexity of problems. On the other hand, cryptography has contributed many fruitful notions and ideas to the development of complexity theory.