This thesis has taken as its object of study the control-discipline variants of a simple logic programming language equivalent to Horn clause logic programming. It has classified and logically characterised the sets of successful and failing queries of these variants of the language.
I have given an operational semantics, SOS, variants of which correspond to the parallel “and” and “or”, sequential “and”, sequential “or”, and sequential “and” and “or” control disciplines. This operational semantics homogenises the treatment of the control disciplines by incorporating control information (such as the failure-backtrack mechanism of sequential systems) into the operational semantics. (Some of the variants of SOS have equivalent compositional operational semantics, which I have given.) I have also classified the queries into those succeeding and those failing in each of the control disciplines, and have proven the equivalence of some of these classes.
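To make the contrast between these control disciplines concrete, here is a minimal Python sketch, entirely my own illustration rather than the SOS formalism itself. Alternatives of a query are modelled as generators (the names seq_or, par_or, diverge and succeed are hypothetical); sequential “or” commits to alternatives left to right and so inherits divergence from an earlier branch, while parallel “or” interleaves steps fairly and succeeds if any branch does.

    # A minimal sketch (not the thesis's SOS). An alternative is a generator
    # that yields None while "computing" and True on success; exhaustion
    # without a True means failure.

    def seq_or(alts):
        """Sequential 'or': try alternatives left to right, backtracking on
        failure. Diverges whenever an earlier alternative diverges."""
        for alt in alts:
            for step in alt():
                if step is True:
                    return True
        return False

    def par_or(alts):
        """Parallel 'or': interleave steps fairly; succeed if any branch does."""
        running = [alt() for alt in alts]
        while running:
            for g in list(running):
                try:
                    if next(g) is True:
                        return True
                except StopIteration:
                    running.remove(g)    # this branch failed finitely
        return False

    def diverge():
        while True:
            yield None                   # computes forever, never succeeds

    def succeed():
        yield True

    print(par_or([diverge, succeed]))    # True
    # seq_or([diverge, succeed]) would loop forever on the first branch.

The same interleaving idea extends to “and”: a sequential “and” runs its conjuncts in order, while a parallel “and” fails as soon as any conjunct fails.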
I have then used a sequent calculus framework, in which the elements of sequents are assertions about the success or failure of queries, to give a logical analysis of these classes of queries. Three calculi are given; they share a common set LKE of rules for classical logic with equality as syntactic identity, and differ in the set of axioms which characterise the behaviour of queries.
LKE+PAR characterises the queries which succeed in parallel-or systems, and those which fail in parallel-and systems;
LKE+SEQ characterises the queries which succeed in the sequential-and, sequential-or system, and those which fail in sequential-and systems;
LKE+PASO characterises the queries which succeed in the parallel-and, sequential-or system.
In this chapter, I will give a characterisation of the two inner circles of the Venn diagram in Figure 2.6 in the same way as I characterised the two outer circles. That is, I will give a proof-theoretic characterisation of sequential logic programming (in particular, the operational semantics SP) in the form of a sequent calculus.
For this sequent calculus, we can use the rules LKE from the last chapter unchanged; we need only give a new group of axioms, SEQ, corresponding to PAR from the last chapter. These axioms, however, are more complex than those in PAR, have more side-conditions, and in particular involve the concept of disjunctive unfoldings of formulae.
Nevertheless, we can prove the same things about SEQ that we can about PAR: the laws are sound, and the proof system LKE+SEQ characterises sequential logic programming in several useful ways.
I will also give a characterisation of the last circle in Figure 2.6, namely the middle success circle. This set contains all queries which succeed in SOS/so, and can be characterised by a set of axioms, PASO, which combines axioms from PAR and from SEQ in a simple and intuitively clear way.
Approaches to Semantics
I begin by going into more detail about why we want a semantics for sequential logic programming, and what approaches have been taken so far to giving one.
The assumptions made about search strategies in most research on foundations of logic programming (for instance, SLD-resolution with a fair search rule) are not satisfied by sequential logic programming.
Operational (or “procedural”) semantics, as I mentioned in the Introduction, are used to provide characterisations of programming languages which meet certain “computational” criteria: giving a detailed description of the language for implementation purposes, and giving a computational model to which programmers can refer.
For logic programming, operational semantics are particularly important because it is in them that the innovations of logic programming lie. The notions of resolution and unification are not immediately apparent; unification, though defined by Herbrand in his thesis [44], was virtually ignored until Prawitz's work [62], and resolution was not defined until 1965 [66]. These notions must be explained within the context of a full description of the computational model of the language.
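Since unification is so central, a self-contained sketch may help fix ideas; this is an illustration in the style of Robinson's algorithm, not a fragment of any of the cited systems, and the term representation and helper names (is_var, walk, occurs, unify) are my own.

    # A compact sketch of first-order unification. Variables are strings
    # beginning with an upper-case letter; compound terms are tuples
    # (functor, arg1, ..., argk); anything else unifies only with itself.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, s):
        """Chase variable bindings in the substitution s."""
        while is_var(t) and t in s:
            t = s[t]
        return t

    def occurs(v, t, s):
        t = walk(t, s)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

    def unify(t1, t2, s=None):
        """Return a most general unifier extending s, or None on failure."""
        s = {} if s is None else s
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            return s
        if is_var(t1):
            return None if occurs(t1, t2, s) else {**s, t1: t2}
        if is_var(t2):
            return unify(t2, t1, s)
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and t1[0] == t2[0] and len(t1) == len(t2)):
            for a, b in zip(t1[1:], t2[1:]):
                s = unify(a, b, s)
                if s is None:
                    return None
            return s
        return None

    # unify(('f', 'X', ('g', 'Y')), ('f', ('h', 'Z'), ('g', 'a')))
    # == {'X': ('h', 'Z'), 'Y': 'a'}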
If we want to do such things as soundness and completeness proofs, or indeed any formal comparison of the operational semantics to other characterisations of the language, the operational semantics must also be mathematically precise – for instance, in the form of a formal system. (Plotkin [58] has explored the idea of structural operational semantics in detail, and gives a taxonomy to which I will refer in this chapter.) SLD-resolution [49], SLDNF-resolution [50], and the operational semantics in this chapter are just a few examples of formal operational semantics for logic programming. Other examples include Voda's tree-rewriting system [76], Deransart and Ferrand's [29] and Börger's [13] standardisation efforts, and the abstract computation engines for Andorra Prolog [43] and the “Pure Logic Language”, PLL [10, 52].
Although we have proven some useful completeness theorems about the proof systems in the last two chapters, we have not been able to prove absolute completeness: that every valid sequent is derivable. Because of some formal incompleteness results, we will never be able to prove such a completeness theorem, for any finitary proof system; but there are several ways in which we can, at least partially, escape the effect of these incompleteness results. In this chapter, I present the incompleteness theorems and some of the partial solutions.
There are two main incompleteness results, as discussed in the first section below. The first says that we will never be able to derive all valid closed sequents which have signed formulae in negative contexts, and follows from the non-existence of a solution to the Halting Problem. (We can deal with many of the important cases of this result by adding extra rules which I will describe.) The second result says that we will never be able to derive all valid sequents with free variables, even if they have no signed formulae in negative contexts, and is a version of Gödel's Incompleteness Theorem.
The “mathematical” solution to these problems is to bring the proof theory closer to a kind of model theory, by allowing infinitary elements into the proof systems. Though these are not adequate solutions for practical theorem proving, they are useful in that they shed light on the extent to which the proof systems in question are complete.
Monotone networks have been the most widely studied class of restricted Boolean networks. It is now possible to prove superlinear (in fact exponential) lower bounds on the size of optimal monotone networks computing some naturally arising functions. There remains, however, the problem of obtaining similar results on the size of combinational (i.e. unrestricted) Boolean networks. One approach to solving this problem would be to look for circumstances in which large lower bounds on the complexity of monotone networks would provide corresponding bounds on the size of combinational networks.
In this paper we briefly review the current state of results on Boolean function complexity and examine the progress that has been made in relating monotone and combinational network complexity.
Introduction
One of the major problems in computational complexity theory is to develop techniques by which non-trivial lower bounds on the amount of time needed to solve ‘explicitly defined’ decision problems can be proved. By ‘non-trivial’ we mean bounds which are superlinear in the length of the input; and, since we may concentrate on functions with a binary input alphabet, the term ‘explicitly defined’ may be taken to mean functions whose values on all inputs of length n can be enumerated in time 2^cn for some constant c.
Classical computational complexity theory measures ‘time’ as the number of moves made by a (multi-tape) deterministic Turing machine. Thus a decision problem f has time complexity T(n) if there is a Turing machine program that computes f and makes at most T(n) moves on any input of length n.
A general theory is developed for constructing the asymptotically shallowest networks and the asymptotically smallest networks (with respect to formula size) for the carry save addition of n numbers using any given basic carry save adder as a building block.
Using these optimal carry save addition networks the shallowest known multiplication circuits and the shortest formulae for the majority function (and many other symmetric Boolean functions) are obtained.
In this paper, simple basic carry save adders are described; using them, multiplication circuits of depth 3.71 log n (whose result is given as the sum of two numbers) and majority formulae of size O(n^3.21) are constructed. Using more complicated basic carry save adders, not described here, these results could be further improved. Our best bounds are currently 3.57 log n for depth and O(n^3.13) for formula size.
Introduction
The question ‘How fast can we multiply?’ is one of the fundamental questions in theoretical computer science. Ofman-Karatsuba and Schönhage-Strassen tried to answer it by minimising the number of bit operations required, or equivalently the circuit size. A different approach was pursued by Avizienis, Dadda, Ofman, Wallace and others. They investigated the depth, rather than the size of multiplication circuits.
The main result proved by the above authors in the early 1960s was that, using a process called Carry Save Addition, n numbers (of linear length) could be added in depth O(log n). As a consequence, depth O(log n) circuits for multiplication and polynomial size formulae for all the symmetric Boolean functions are obtained.
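A minimal sketch of the idea, using Python integers as stand-ins for bit vectors (the function names csa and csa_tree are my own):

    # The basic 3-to-2 carry save adder: it replaces three numbers by two
    # with the same sum, in constant depth, since every output bit depends
    # on only three input bits.

    def csa(a, b, c):
        s = a ^ b ^ c                                # bitwise sums, no carries
        carry = ((a & b) | (b & c) | (a & c)) << 1   # carry bits, shifted
        return s, carry

    def csa_tree(nums):
        """Reduce n >= 3 numbers to 2 with the same total."""
        nums = list(nums)
        while len(nums) > 2:
            nxt = []
            while len(nums) >= 3:
                nxt.extend(csa(nums.pop(), nums.pop(), nums.pop()))
            nxt.extend(nums)          # at most two leftovers pass through
            nums = nxt
        return nums

    xs = [13, 7, 22, 5, 9, 31]
    s, c = csa_tree(xs)
    assert s + c == sum(xs)           # one ordinary addition finishes the job

Each pass shrinks the count of numbers to roughly two thirds, so O(log n) constant-depth layers reduce n numbers to two, after which a single carry-propagating adder (itself of depth O(log n)) completes the sum.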
Classical but newly topical results concerning the incidence relationship between the prime clauses and implicants of a monotone Boolean function are derived by applying a general theory of computational equivalence and replaceability to distributive lattices. A non-standard combinatorial model for the free distributive lattice FDL(n) is described, and a correspondence between monotone Boolean functions and partitions of a standard Cayley diagram for the symmetric group is derived.
Preliminary research on classifying and characterising the simple paths and circuits that are the blocks of this partition is summarised. It is shown in particular that each path and circuit corresponds to a characteristic configuration of implicants and clauses. The motivation for the research and expected future directions are briefly outlined.
Introduction
Models of Boolean formulae expressed in terms of the incidence relationship between the prime implicants and clauses of a function were first discovered several years ago, but they have recently been independently rediscovered by several authors, and have attracted renewed interest. They have been used in proving lower bounds by Karchmer and Wigderson and subsequently by Razborov. More general investigations aimed at relating the complexity of functions to the model have also been carried out by Newman [20].
This paper demonstrates the close connection between these classical models for monotone Boolean formulae and circuits and a general theory of computational equivalence as it applies to FDL(n): the (finite) distributive lattice freely generated by n elements. It also describes how the incidence relationships between prime implicants and clauses associated with monotone Boolean functions can be viewed as built up from a characteristic class of incidence patterns between relatively small subsets of implicants and clauses.
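As a concrete illustration of the incidence relationship, here is a brute-force Python sketch, exponential and so usable only for tiny n; the helper names are my own. For a monotone function, the prime implicants correspond to the minimal sets of variables whose being set to 1 forces the function to 1, the prime clauses to the minimal sets whose being set to 0 force it to 0, and every prime implicant must intersect every prime clause.

    from itertools import combinations

    def minimal_sets(n, forces):
        """All inclusion-minimal subsets S of {0,...,n-1} with forces(S).
        Enumerating by size makes the minimality test a subset check."""
        found = []
        for k in range(n + 1):
            for S in map(frozenset, combinations(range(n), k)):
                if forces(S) and not any(m <= S for m in found):
                    found.append(S)
        return found

    def prime_implicants(f, n):
        return minimal_sets(n, lambda S: f([i in S for i in range(n)]))

    def prime_clauses(f, n):
        return minimal_sets(n, lambda S: not f([i not in S for i in range(n)]))

    maj3 = lambda x: sum(x) >= 2            # majority of three variables
    imps = prime_implicants(maj3, 3)        # {0,1}, {0,2}, {1,2}
    cls = prime_clauses(maj3, 3)            # {0,1}, {0,2}, {1,2}
    assert all(p & q for p in imps for q in cls)   # incidence: all pairs meet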
We give a general complexity classification scheme for monotone computation, including monotone space-bounded and Turing machine models not previously considered. We propose monotone complexity classes including mAC^i, mNC^i, mLOGCFL, mBWBP, mL, mNL, mP, mBPP and mNP. We define a simple notion of monotone reducibility and exhibit complete problems. This provides a framework for stating existing results and asking new questions.
We show that mNL (monotone nondeterministic log-space) is not closed under complementation, in contrast to Immerman's and Szelepcsényi's nonmonotone result [Imm88, Sze87] that NL = co-NL; this is a simple extension of the monotone circuit depth lower bound of Karchmer and Wigderson [KW90] for st-connectivity.
We also consider mBWBP (monotone bounded width branching programs) and study the question of whether mBWBP is properly contained in mNC^1, motivated by Barrington's result [Bar89] that BWBP = NC^1. Although we cannot answer this question, we show two preliminary results: every monotone branching program for majority has size Ω(n^2) with no width restriction, and no monotone analogue of Barrington's gadget exists.
Introduction
A computation is monotone if it does not use the negation operation. Monotone circuits and formulas have been studied as restricted models of computation with the goal of developing techniques for the general problem of proving lower bounds.
In this paper we seek to unify the theory of monotone complexity along the lines of Babai, Frankl, and Simon who gave a framework for communication complexity theory. We propose a collection of monotone complexity models paralleling the familiar nonmonotone models. This provides a rich classification system for monotone functions including most monotone circuit classes previously considered, as well as monotone space-bounded complexity classes which have previously received little attention.
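To ground the discussion, here is a sketch of the classical monotone algorithm for the st-connectivity function mentioned above: repeated Boolean matrix squaring, which uses only OR and AND. The Python below (function names mine) mirrors the circuit; each squaring is an OR of ANDs, and ceil(log2 n) squarings suffice, which is where the O(log^2 n) monotone circuit depth for st-connectivity comes from.

    def bool_square(a):
        """One monotone squaring step: a'[i][j] = OR_k (a[i][k] AND a[k][j])."""
        n = len(a)
        return [[any(a[i][k] and a[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def st_conn(adj, s, t):
        n = len(adj)
        # Reflexive closure (an OR with constants, still monotone in adj).
        a = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
        for _ in range(max(1, (n - 1).bit_length())):   # ceil(log2 n) rounds
            a = bool_square(a)
        return a[s][t]

    edges = [[False] * 4 for _ in range(4)]
    edges[0][1] = edges[1][2] = edges[2][3] = True
    print(st_conn(edges, 0, 3))     # True: the path 0 -> 1 -> 2 -> 3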
We survey some recent results on read-once Boolean functions. Among them are a characterization theorem, a generalization, and a discussion of the randomized Boolean decision tree complexity of read-once functions. A previously unpublished result of Lovász and Newman is also presented.
Introduction
A Boolean formula is a rooted binary tree whose internal nodes are labeled by the Boolean operators ∨ or ∧ and in which each leaf is labeled by a Boolean variable or its negation. A Boolean formula computes a Boolean function in a natural way.
A Boolean formula is read-once if every variable appears exactly once. A function is read-once if it has a read-once formula.
Read-once functions have been studied by many authors, since they have the lowest possible formula size (for functions that depend on all their variables). In addition, every NC^1 function on n variables is a projection of a read-once function with a polynomial (in n) number of variables.
We present here some recent results in the area. All but one of those results have been published; hence, full proofs will generally be omitted, and a proof will be given only for the unpublished result (Theorem 3.4). The results we will discuss cover a characterization theorem, some generalizations, and results on the randomized decision tree complexity of read-once functions. There is also a recent result on the learning of read-once functions [AHK89], which will not be described here.
Definitions and Notations
If g : {0, 1}^n → {0, 1} has a formula in which no negated variable appears, we say that g is monotone. The size of a Boolean formula is the number of its leaves.
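These definitions translate directly into code; the following Python sketch (with a representation and names of my own choosing) encodes a formula as a variable name, a negated variable ('not', x), or a binary gate ('and'/'or', left, right), and computes the size and the read-once property.

    from collections import Counter

    def leaves(f):
        """The list of variable names at the leaves, left to right."""
        if isinstance(f, str):
            return [f]
        if f[0] == 'not':
            return [f[1]]
        return leaves(f[1]) + leaves(f[2])

    def size(f):
        return len(leaves(f))

    def is_read_once(f):
        return all(c == 1 for c in Counter(leaves(f)).values())

    g = ('or', ('and', 'x1', 'x2'), ('and', 'x3', ('not', 'x4')))
    h = ('or', ('and', 'x1', 'x2'), ('and', 'x1', 'x3'))
    print(size(g), is_read_once(g))   # 4 True
    print(size(h), is_read_once(h))   # 4 False: x1 appears twice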