The specification, design, construction, evaluation and maintenance of computing systems involve significant theoretical problems that are common to hardware and software. Some of these problems are long standing, although they change in their form, difficulty and importance as technologies for the manufacture of digital systems change. For example, theoretical areas addressed in this volume about hardware include
models of computation and semantics,
computational complexity,
methodology of design,
specification methods,
design and synthesis, and
verification methods and tools;
and the material presented is intimately related to material about software. It is interesting to attempt a comparison of theoretical problems of interest in these areas in the decades 1960–69 and 1980–89. Plus ça change, plus c'est la même chose? (The more things change, the more they stay the same?)
Of course, the latest technologies permit the manufacture of larger digital systems at smaller cost. To enlarge the scope of digital computation in the world's work it is necessary to enlarge the scope of the design process. This involves the development of the areas listed above, and the related development of tools for CAD and CIM.
Most importantly, it involves the unification of the study of hardware and software. For example, a fundamental problem in hardware design is to make hardware that is independent of specific fabricating technologies. This complements a fundamental problem in software design – to make software that is independent of specific hardware (i.e., machines and peripherals).
The chapters in this part examine the performance of VLSI systems from different viewpoints.
Chapter 8 looks at the use of discrete complexity models in VLSI design, both theoretically and practically. The results of an experiment on a basic hypothesis are reported.
The final chapter is a technical presentation of two recent innovative results in a field of complexity theory relevant to VLSI.
ABSTRACT A model of computation for the design of synchronous and systolic algorithms is presented. The model is hierarchically structured, and so can express the development of an algorithm through many levels of abstraction. The syntax of the model is based on the directed graph, and the synchronous semantics are state-transitional. A semantic representation of ripple-carries is included. The cells available in the data structure of a computation graph are defined by a graph signature. In order to develop two-level pipelining in the model, we need to express serial functions as primitives, and so a data structure may include history-sensitive functions.
A typical step in a hierarchical design is the substitution of a single data element by a string of data elements so as to refine an algorithm to a lower level of abstraction. Such a refinement is formalised through the definition of parallel and serial homomorphisms of data structures.
Central to recent work on synchronous algorithms has been the work of H. T. Kung and others on systolic design. The Retiming Lemma of Leiserson & Saxe [1981] has become an important optimisation tool in the automation of systolic design (for example, in the elimination of ripple-carries). This lemma and the Cut Theorem (Kung & Lam [1984]) are proved in the formal model.
The use of these tools is demonstrated in a design for the matrix multiplication algorithm presented in H. T. Kung [1984].
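To make the role of retiming concrete, here is a small sketch in Python (not part of the chapter's formal model; the cell names and register counts are illustrative assumptions). A synchronous circuit is taken as an edge-weighted directed graph whose weights count the registers on each edge, and a retiming assigns an integer lag r(v) to every cell v; the retimed weight of an edge u → v is w(u, v) + r(v) − r(u), and the retiming is legal when every retimed weight stays non-negative (Leiserson & Saxe [1981]).

```python
# A minimal sketch of the retiming setup: edge weights count registers,
# a retiming r assigns an integer lag to each cell, and the retimed
# weight of u -> v is w(u, v) + r(v) - r(u).

def retime(edges, r):
    """Return the retimed edge weights, or None if the retiming is illegal.

    edges -- dict mapping (u, v) to the register count w(u, v)
    r     -- dict mapping each cell to its integer lag r(v)
    """
    retimed = {}
    for (u, v), w in edges.items():
        w_r = w + r.get(v, 0) - r.get(u, 0)
        if w_r < 0:                  # a register count can never be negative
            return None
        retimed[(u, v)] = w_r
    return retimed

# A three-cell ring with a ripple path a -> b -> c carrying no registers;
# lagging b and c pushes registers onto those edges and removes the ripple.
edges = {("a", "b"): 0, ("b", "c"): 0, ("c", "a"): 2}
print(retime(edges, {"a": 0, "b": 1, "c": 2}))
# {('a', 'b'): 1, ('b', 'c'): 1, ('c', 'a'): 0}
```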
When nulls occur in a relational theory T, updates to T will cause excessive growth in the size of T if many data atoms of T unify with atoms occurring in the updates. This chapter proposes a scheme of lazy evaluation for updates that strictly bounds the growth of T caused by each update, via user-specified limits on permissible size increases. Under lazy evaluation, an overly expensive update U will be stored away rather than executed, with the hope that new information on costly null values will reduce the expense of executing U before the information contained in U is needed for an incoming query. If an incoming query unavoidably depends on the results of an overly expensive portion of an update, the query must be rejected, as there is no way to reason about the information in the update other than by incorporating it directly in the relational theory. When a query is rejected, the originator of the query is notified of the exact reasons for the rejection. The query may be resubmitted once the range of possible values of the troublesome nulls has been narrowed down. The bottom line for an efficient implementation of updates, however, is that null values should not be permitted to occur as attribute values for attributes heavily used in update selection clauses—particularly those used as join attributes.
The cost of an update can be measured by the increase in the size of T that would result from executing it, and by the expected time to execute the update and to answer subsequent queries.
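As a rough illustration of the policy just described, the following Python fragment defers any update whose estimated growth of the theory exceeds a user-specified bound, and rejects queries that depend on a deferred update. This is a sketch only: the class, its cost estimate, and the crude dependency test are assumptions for illustration, not the chapter's actual data structures.

```python
# A minimal sketch of lazy evaluation of updates under a size bound.

class LazyTheory:
    def __init__(self, facts, max_growth):
        self.facts = set(facts)          # the relational theory T
        self.max_growth = max_growth     # user-specified permissible size increase
        self.deferred = []               # overly expensive updates, stored away

    def update(self, new_facts, estimated_growth):
        if estimated_growth > self.max_growth:
            self.deferred.append(set(new_facts))   # lazy: postpone execution
        else:
            self.facts |= set(new_facts)           # cheap enough: execute now

    def query(self, atom):
        # Crude stand-in for checking whether the query unifies with a
        # deferred update; such a query must be rejected.
        for pending in self.deferred:
            if atom in pending:
                raise ValueError(f"query on {atom!r} rejected: "
                                 "depends on a deferred update")
        return atom in self.facts

T = LazyTheory({"p(a)", "q(b)"}, max_growth=10)
T.update({"r(c)"}, estimated_growth=3)       # cheap: executed at once
T.update({"s(null1)"}, estimated_growth=50)  # expensive: deferred
print(T.query("r(c)"))                       # True
```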
In this chapter and in Chapter 5 we shall present many mathematical applications of computer graphics. In order to draw the line somewhere (pardon the pun) we shall restrict ourselves to the mathematics associated with the plane and in particular with curves in the plane. There is a whole other realm, just as fascinating, connected with objects, such as curves, polyhedra and surfaces, in three-dimensional space. Nevertheless what the computer actually draws is, in these cases too, a curve or system of curves in the plane – that of the screen. The extra complications come from taking a three-dimensional object and associating with it a curve or system of curves – for example its outline when seen from a distance, or a sequence of such outlines or a sequence of plane sections of the object. We touch on this in a discussion of the swallowtail surface in Section 4.14.
The computer can (with our help) draw curves and collections of curves which are just too complicated to attempt by hand. For some purposes a rough sketch of a curve does very well – you will probably have drawn many such sketches by hand, and we are certain that the art of curve-sketching by hand is still an art well worth acquiring. However, for some other purposes, such as the illustration or discovery of facts connected with the differential geometry of curves and families of curves (not to mention surfaces), accurate drawings are essential.
This book is intended for anyone who has some mathematical knowledge and a little experience with programming a microcomputer in BASIC (or any other language). The book shows how simple programs can be used to do significant mathematics.
To spell out our mathematical prerequisites in more detail, some of the chapters assume no more mathematical knowledge than whole numbers, but for the most part we assume some calculus, and the rudiments of algebra (polynomials, equations) and trigonometry (sines, cosines and tangents). Thus, British readers with A or A/S level in mathematics and American readers with a Freshman calculus course behind them will, we hope, have little difficulty in following most of the mathematics here. We have, naturally, included some material for the more mathematically sophisticated reader: those sections requiring closer inspection are, appropriately, printed in smaller type. (We hope that readers who do not immediately recognise the small type material will be intrigued rather than frightened. Surely one of the charms of mathematics is the glimpses it affords of mysterious and fascinating territory which is for the moment just out of reach.)
As for programming, the knowledge we assume is very small, and most programs are given full listings in the text. It seems to us that a very effective way to learn programming is to use it to solve interesting mathematical problems. We have regarded the mathematics as the pre-eminent interest, and have not tried too hard to make the sample programs beautiful or elegant, or even particularly efficient.
‘ … But I say to hell with common sense! By itself each segment of your experience is plausible enough, but the trajectory resulting from the aggregate of these segments borders on being a miracle.’
—Stanislaw Lem, The Chain of Chance
This chapter describes simulation experiments conducted with the Update Algorithm, and presents the results from these experiments. The goal of the simulation was to gauge the expected performance of the update and query processing algorithms in a traditional database management system application. The implementation was tailored to this environment, and for that reason the techniques used and results obtained will apply only partially, if at all, to other application environments, such as knowledge-based artificial intelligence applications. In particular, the following assumptions and restrictions were made.
Update syntax was modified and restricted, to encourage use of simple constructs.
A fixed data access mechanism (query language) was assumed.
A large, disk-resident database supplying storage for the relational theory was assumed.
Performance was equated with the number of disk accesses required to perform queries and updates, and with the storage space required, after a long series of updates.
These assumptions and restrictions are all appropriate to traditional database management scenarios; they will be discussed in more detail in later sections. We begin with a brief high-level description of the implemented system, and then examine its components in more detail. The chapter concludes with a description of the experimental results.
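The following Python sketch shows the kind of bookkeeping behind this performance measure: each simulated update is charged one disk access per page of tuples it touches, and the storage occupied is recorded after a long series of updates. The page size and workload here are illustrative assumptions, not the parameters of the implemented system.

```python
# A minimal sketch of the simulation's performance bookkeeping.
import random

PAGE_SIZE = 100            # tuples per disk page (assumed)

class Tally:
    def __init__(self):
        self.disk_accesses = 0
        self.stored_tuples = 0

    def charge(self, tuples_touched, stored=0):
        # one disk access per page of tuples read or written
        self.disk_accesses += -(-tuples_touched // PAGE_SIZE)  # ceiling division
        self.stored_tuples += stored

random.seed(0)
tally = Tally()
for _ in range(1000):                       # a long series of updates
    n = random.randint(1, 250)
    tally.charge(n, stored=n)

print(tally.disk_accesses, "disk accesses;", tally.stored_tuples, "tuples stored")
```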
Overview
The Update Algorithm Version II was chosen for simulation.
“ … You believers make so many and such large and such unwarrantable assumptions.”
“My dear, we must make assumptions, or how get through life at all?”
“Very true. How indeed? One must make a million unwarrantable assumptions, such as that the sun will rise tomorrow, and that the attraction of the earth for our feet will for a time persist, and that if we do certain things to our bodies they will cease to function, and that if we get into a train it will probably carry us along, and so forth. One must assume these things just enough to take action on them, or, as you say, we couldn't get through life at all. But those are hypothetical, pragmatical assumptions, for the purposes of action: there is no call actually to believe them, intellectually. And still less call to increase their number, and carry assumption into spheres where it doesn't help us to action at all. For my part, I assume practically a great deal, intellectually nothing.”
—Rose Macaulay, Told by an Idiot
Relational theories contain little knowledge, that is, data about data. The exact line between knowledge and data is hard to pinpoint; for our purposes, the distinguishing characteristic of knowledge will be our reluctance to change it in response to new information in the form of an update. Under this categorization, the integrity constraints discussed in Chapter 7 are a form of knowledge.
Sir Isaac Newton is certainly one of the greatest scientists to have ever lived. He is generally reckoned to have been one of the three most outstanding mathematicians of all time, along with Archimedes and Gauss, and his discoveries in physics are unrivalled in their width and influence. What was Newton's secret? How did he achieve as much as he did? Obviously there is no simple answer, but Newton had one secret, which he guarded jealously, and which he believed to be vital. It was ‘Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire et vice versa’ or in English ‘solve differential equations’.
Nowadays this ‘secret’ is entirely unremarkable; we are all aware that many processes and phenomena in the world are governed by differential equations. The very fact that Newton's secret is now common knowledge clearly indicates its worth and power. Of course his secret was rather hard won; he did have to invent differential equations before pronouncing his dictum concerning solving them!
In this chapter we shall see what the microcomputer can do for those intending to follow Newton's advice. Our eventual viewpoint will be considerably more modern than Newton's. It turns out that in certain circumstances solving differential equations is not as useful as watching them.
Differential equations and tangent segments
Much of science is devoted to the problems of predicting the future behaviour of some physical system or other. Often the underlying physical law will describe the rate at which the system evolves; what we require is a description of how it evolves.
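The programs in this book are written in BASIC; the following Python sketch, with an illustrative right-hand side, shows the idea of ‘watching’ a differential equation by drawing, at each point of a grid, a short segment whose slope is given by dy/dx = f(x, y).

```python
# A minimal sketch of a field of tangent segments for dy/dx = f(x, y).
import math
import matplotlib.pyplot as plt

def f(x, y):
    return x - y          # the right-hand side of dy/dx = f(x, y) (illustrative)

L = 0.15                  # half-length of each tangent segment
for i in range(-10, 11):
    for j in range(-10, 11):
        x, y = i * 0.4, j * 0.4
        theta = math.atan(f(x, y))             # inclination of the segment
        dx, dy = L * math.cos(theta), L * math.sin(theta)
        plt.plot([x - dx, x + dx], [y - dy, y + dy], color="black", linewidth=0.7)

plt.title("Tangent segments for dy/dx = x - y")
plt.show()
```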
One of the most important and useful ways in which mathematics can help us to solve problems is by the solution of equations. ‘Let x be the length of the piece of string; then x satisfies the equation x² − 2x − 3 = 0 and solving the equation gives x = 3.’ We are sure that you have solved many problems using equations; unfortunately all but the simplest equations cannot be solved exactly.
There are two reasons for this. In the first place even for a quadratic equation, unless the solutions are rational numbers (as in the above example), there is a square root such as √2 to be evaluated, and this cannot be done exactly. The decimal expansion does not terminate or recur, so we must be satisfied either with the formal ‘√2’ or with an approximation to so-many decimal places.
The second reason is more profound. Exact formulae analogous to the famous quadratic formula do exist for equations of degrees 3 and 4 – of course these formulae involve cube roots and so on, so are open to the same difficulty as we noted above for quadratics. On the other hand no algebraic formula exists at all for equations of degree 5 or more! In a precise sense, the equation x⁵ − 6x + 3 = 0 cannot be solved algebraically at all. This is a difficult statement and has an even more difficult proof, in which computers won't help in the least.
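Although x⁵ − 6x + 3 = 0 has no algebraic solution, a numerical approximation to a root is easy to compute. Here is a minimal bisection sketch in Python (illustrative only; the book's own listings are in BASIC), relying on the sign change of the left-hand side between x = 0 and x = 1.

```python
# Approximate a root of x^5 - 6x + 3 = 0 by bisection.

def f(x):
    return x**5 - 6*x + 3

def bisect(a, b, tol=1e-10):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # the root lies in the left half
            b = m
        else:                  # the root lies in the right half
            a = m
    return (a + b) / 2

print(bisect(0, 1))   # a root near 0.505
```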