Real-time computing has been the domain of the practical systems engineer for many decades. It is only comparatively recently that much attention has been directed to its theoretical study. Using methods originally developed for operations research and optimization, scheduling theory has been used to analyse characteristic timing problems in the sharing of resources in real-time computing systems. Independently of this, specification and verification techniques used in sequential and concurrent programming have been extended to allow definition of the timing properties of programs. In terms of effectiveness, the two approaches are still limited and experimental, and neither on its own can yet be used to provide an exact timing analysis or to verify the timing properties of even modestly complex real-time programs of practical size. But if restricted classes of program are considered, they are rapidly approaching the point of practical usefulness. This suggests that the development of a discipline of real-time programming would allow the construction of programs with analysable and verifiable timing properties. Such a discipline will need to be built upon a well-integrated framework in which different methods are used where appropriate to obtain timing properties to which a high level of assurance can be attached.
Introduction
A real-time computer system interacts with an environment which has time-varying properties and the system must exhibit predictable time-dependent behaviour. Most real-time systems have limited resources (e.g. memory, processors) whose allocation to competing demands must be scheduled in a way that will allow the system to satisfy its timing constraints. Thus one important aspect of the design and analysis of a real-time system is concerned with resource allocation.
The editors of this volume set out to compile a set of personal views about the long-term direction of computer science research. In responding to this goal, we have chosen to identify what we perceive as a long-term challenge to the capabilities of computing technology in serving the broader needs of people and society, and to discuss how this ‘Grand Challenge’ might be met by future research. We also present our personal view of the required research methodology.
Introduction
Much of present-day computer technology is concerned with the processing, storage and communication of digital data. The view taken in this contribution is that a far more important use of computers and computing is to manage and manipulate human-related information. Currently, the provision of such structured information has been tackled at the level of single organisations (company, institution, government department, etc.) by the use of databases which are often limited to single functions within the organisation. Databases are closed, in the sense that the information itself can be viewed in a limited number of ways, and the ways in which it can evolve are carefully controlled. Interaction between databases containing related data is prohibited, except through the mediation of human experts. This is an unnecessarily restricted concept of information processing, and one which fails to recognise its real social and economic potential. We foresee a huge market for personal information services based on open access to a continually evolving global network of stored information. Although there are significant technical difficulties associated with creating and controlling such networks and services, we predict that the economic incentives will ensure that the necessary development occurs and that this information market will – within a period of decades – dwarf the market in computing machinery and software.
This article argues that problems of scale and complexity of data in large scientific and engineering databases will drive the development of a new generation of databases. Examples are the Human Genome Project, with huge volumes of data accessed by widely distributed users, geographic information systems using satellite data, and advanced CAD and engineering design databases. Databases will share not just facts, but also procedures, methods and constraints. This, together with the complexity of the data, will favour the object-oriented database model. Knowledge base technology is also moving in this direction, leading to a distributed architecture of knowledge servers interchanging information on objects and constraints. An important theme is the re-use not just of data and methods but of higher-level knowledge. For success, a multi-disciplinary effort will be needed along the lines of the Knowledge Sharing Effort in the USA, which is discussed.
Introduction
The research area of databases is a very interesting testing ground for computing science ideas. It is an area where theory meets reality in the form of large quantities of data with very computer-intensive demands on it. Until recently the major problems were in banking and commercial transactions. These sound easy in principle but they are made difficult by problems of scale, distributed access, and the ever present need to move a working system with long-term data onto new hardware, new operating systems, and new modes of interaction. Despite this, principles for database system architecture were established which have stood the test of time – data independence, serialised transactions, two-phase commit, query optimisation and conceptual schema languages. Thanks to these advances the database industry is very large and very successful.
Computer science and mathematics are closely related subjects, and over the last fifty years, each has fed off the other. Mathematicians have used computers to prove (or disprove) traditional results of mathematics, computer scientists have used more and more advanced mathematics in their work, and new areas of mathematics have been inspired by questions thrown up by computing.
Introduction
The academic subjects of mathematics and computer science, the oldest science and one of the newest, are closely related. This article considers the various ways in which they interact, and each influences the development of the other.
It is worth noting that we do not consider here the influence of computer technology (and the associated communications revolution) on the infrastructure and sociology of mathematics. Developments such as
CD-ROM publication (particularly of Mathematical Reviews),
electronic databases (again one thinks of Mathematical Reviews, but also of the Science Citation Index, which, even in its paper form, could not be compiled without computers),
electronic manuscripts and camera-ready copy,
ftp preprint systems and
electronic mail
have changed, and will continue to change, the way in which mathematicians consider, and add to, their literature, but this is not specific to mathematics, even though mathematicians have often been in the vanguard of such movements, presumably because of their general use of computers.
The Influence of Computers on Mathematics
Mathematicians have always numbered prodigious calculators among their kind, be they numerical calculators or symbolic ones (Delaunay's lunar theory (1860) contained a 120-page formula). Hence it is not surprising that the digital computer soon interested some pure mathematicians. With its help, they could perform far larger calculations than before, and investigate phenomena that were inaccessible to human computation.
On Disparity, Difficulty, Complexity, Novelty – and Inherent Uncertainty
It has been said that the term software engineering is an aspiration not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
Three decades ago the Sketchpad system was presented to the public, an event that did much to put interactive systems on the computing agenda (Sutherland, 1963). I remember the event well. The Sketchpad film, shown at a conference I was attending in Edinburgh, ran for only about ten minutes, but this was quite long enough for me to make up my mind to abandon my faltering career as a control-systems engineer and seek to become a computer scientist. I have never regretted that decision.
Thirty years later, interactive systems have established themselves across society, in a fashion that I and my 1960s contemporaries never dreamed of. Today we interact with computers in TV sets, telephones, wristwatches, ticket machines, kitchen scales, and countless other artefacts. The range of human activities supported by interactive technology is still expanding, apparently without limits. Demand for interactive systems and artefacts has fuelled the growth of today's computer industry, which now treats interactive computing as the mainstream of its business. Thus a technology that was startlingly radical in 1963 has become part of normal practice today.
My concern here is not with the amazing change, during three decades, in the way computers are used, but with a disappointing lack of change in the ways computing is taught and computer science research is carried out. I am concerned that interactive systems, having gained a place on the computing agenda in 1963, are still little more than an agenda item as far as computer science is concerned. While interactive computing now represents the mainstream of the industry's business, it plays hardly any part in the mainstream of computer science. It continues to be treated as a special case.
Computing has developed at an extraordinary pace since Alan Turing's crucial discovery in 1936. Then we had log tables, slide rules, filing cabinets, the postal service and human clerks. Now we have computers which will calculate at unimagined rates, vast databases and global electronic communication. We are swamped with information which can be used in countless ways. The resulting impact of computing technology on society, both at work and at play, has been profound – some say it has transformed the nature of society itself. The revolution shows no sign of abating. But are there technical obstacles in our way which must be cleared in order to enhance our progress? And what conceptual advances are needed if we are to place the revolution on a firm scientific footing? Indeed, what are the major research issues confronting computing today?
The United Kingdom has been a major innovator in computing. The first stored-program digital computer ran in the UK. Many of the crucial ideas in computer architecture and programming came subsequently from researchers in this country. Over the last fifteen years, partly spurred on by publicly funded programmes such as Alvey, and partly driven by the promise of commercial exploitation, the volume of computing research has risen dramatically. Results have poured out into books, journals and conference proceedings. The atmosphere surrounding computing research has reflected the excitement of making crucial, formative discoveries. The subject has raced ahead.
This book brings together the views of a distinguished group of Computer Scientists from the United Kingdom. The editors asked each contributor to set out the research position in his chosen subject and then to outline the important research problems that his subject now faced.
Society is becoming increasingly dependent on safety-critical computer-based systems where the operation or failure of these systems can lead to harm to individuals or the environment. There are many individual technical innovations that can be made to improve the processes of developing and assessing safety-critical systems, but perhaps the biggest need is for a coherent and integrated set of methods; narrow technical innovation is not sufficient. Thus we stress here a broad engineering approach to the development of safety-critical systems, identifying research directions which we believe will lead to the establishment of an effective, and cost-effective, development process.
Introduction
Society is becoming increasingly dependent on automated systems; in many cases the operation or failure of these systems can lead to harm to individuals or the environment. If the behaviour, or failure, of a system can lead to accidents involving loss of life, injury or environmental damage then it is safety-critical. States of the system, classes of system behaviour and failure modes of the system which can potentially lead to accidents are referred to as hazards. Our concern here is with safety-critical systems where the computing and software elements can potentially contribute to hazards.
Technical background
Safety-critical systems are found in a wide range of applications including aircraft flight control, reactor protection systems, fire protection systems on oil and gas platforms, and medical electronic devices. Increasingly these systems contain a substantial programmable element, using either conventional computers, or programmable logic controllers (PLCs).
Everyone accepts that large programs should be organized as hierarchical modules. Standard ml's structures and signatures meet this requirement. Structures let us package up declarations of related types, values and functions. Signatures let us specify what components a structure must contain. Using structures and signatures in their simplest form we have treated examples ranging from the complex numbers in Chapter 2 to infinite sequences in Chapter 5.
A modular structure makes a program easier to understand. Better still, the modules ought to serve as interchangeable parts: replacing one module by an improved version should not require changing the rest of the program. Standard ml's abstract types and functors can help us meet this objective too.
A module may reveal its internal details. When the module is replaced, other parts of the program that depend upon such details will fail. ml provides several ways of declaring an abstract type and related operations, while hiding the type's representation.
If structure B depends upon structure A, and we wish to replace A by another structure A′, we could edit the program text and recompile the program. That is satisfactory if A is obsolete and can be discarded. But what if A and A′ are both useful, such as structures for floating point arithmetic in different precisions?
ml lets us declare B to take a structure as a parameter.
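The parameterisation described above can be sketched in Standard ML as follows. This is a minimal illustration, not an example from the book: the names ARITH, RealArith and SumFn are invented for the sketch. A signature specifies what a structure must provide, and a functor takes any matching structure as a parameter, so the client code never needs editing when the arithmetic is swapped.

```sml
(* A signature describing the components a structure must contain. *)
signature ARITH =
  sig
    type t
    val zero : t
    val sum  : t * t -> t
  end;

(* One structure matching the signature; another, say for a
   different precision, could match it equally well. *)
structure RealArith : ARITH =
  struct
    type t = real
    val zero = 0.0
    fun sum (x, y) : real = x + y
  end;

(* The functor depends on its parameter only through ARITH,
   so replacing the argument requires no change here. *)
functor SumFn (A : ARITH) =
  struct
    fun total xs = foldl A.sum A.zero xs
  end;

structure RealSum = SumFn (RealArith);
(* RealSum.total [1.0, 2.0, 3.5] evaluates to 6.5 *)
```

Applying SumFn to a hypothetical single-precision or arbitrary-precision ARITH structure would yield another summation module without touching SumFn's body, which is exactly the interchangeability of parts described above.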
In the previous chapters, with their spiraling build-up of repetition and variations, you may have felt like you were being subjected to the Lisp-equivalent of Ravel's Bolero. Even so, no doubt you noticed two motifs were missing: assignment and side effects. Some languages abhor both because of their nasty characteristics, but since Lisp dialects provide them, we really have to study them here. This chapter examines assignment in detail, along with other side effects that can be perpetrated. During these discussions, we'll necessarily digress to other topics, notably, equality and the semantics of quotations.
Assignment, inherited from conventional algorithmic languages, makes it possible to modify the value associated with a variable. It induces a modification of the state of the program, which must record, in one way or another, that such and such a variable now has a value other than its preceding one. For those who have a taste for imperative languages, the meaning we could attribute to assignment seems simple enough. Nevertheless, this chapter will show that the presence of closures, as well as the heritage of the λ-calculus, complicates the ideas of binding and variables.
The major problem in defining assignment (and side effects, too) is choosing a formalism independent of the traits that we want to define. As a consequence, neither assignment nor side effects can appear in the definition.
Once again, here's a chapter about compilation, but this time, we'll look at new techniques, notably, flat environments, and we have a new target language: C. This chapter takes up a few of the problems of this odd couple. This strange marriage has certain advantages, like free optimizations of the compilation at a very low level or freely and widely available libraries of immense size. However, there are some thorns among the roses, such as the fact that we can no longer guarantee tail recursion, and we have a hard time with garbage collection.
Compiling into a high-level language like C is interesting in more ways than one. Since the target language is so rich, we can hope for a translation that is closer to the original than would be some shapeless, linear salmagundi. Since C is available on practically any machine, the code we produce has a good chance of being portable. Moreover, any optimizations that such a compiler can achieve are automatically and implicitly available to us. This fact is particularly important in the case of C, where there are compilers that carry out a great many optimizations with respect to allocating registers, laying out code, or choosing modes of address—all things that we could ignore when we focused on only one source language.
On the other hand, choosing a high-level language as the target imposes certain philosophic and pragmatic constraints as well.
This book originated in lectures on Standard ml and functional programming. It can still be regarded as a text on functional programming — one with a pragmatic orientation, in contrast to the rather idealistic books that are the norm — but it is primarily a guide to the effective use of ml. It even discusses ml's imperative features.
Some of the material requires an understanding of discrete mathematics: elementary logic and set theory. Readers will find it easier if they already have some programming experience, but this is not essential.
The book is a programming manual, not a reference manual; it covers the major aspects of ml without getting bogged down with every detail. It devotes some time to theoretical principles, but is mainly concerned with efficient algorithms and practical programming.
The organization reflects my experience with teaching. Higher-order functions appear late, in Chapter 5. They are usually introduced at the very beginning with some contrived example that only confuses students. Higher-order functions are conceptually difficult and require thorough preparation. This book begins with basic types, lists and trees. When higher-order functions are reached, a host of motivating examples is at hand.
The exercises vary greatly in difficulty. They are not intended for assessing students, but for providing practice, broadening the material and provoking discussion.
Overview of the book. Most chapters are devoted to aspects of ml. Chapter 1 introduces the ideas behind functional programming and surveys the history of ml.
Functional programming has its merits, but imperative programming is here to stay. It is the most natural way to perform input and output. Some programs are specifically concerned with managing state: a chess program must keep track of where the pieces are! Some classical data structures, such as hash tables, work by updating arrays and pointers.
Standard ml's imperative features include references, arrays and commands for input and output. They support imperative programming in full generality, though with a flavour unique to ml. Looping is expressed by recursion or using a while construct. References behave differently from Pascal and C pointers; above all, they are secure.
Imperative features are compatible with functional programming. References and arrays can serve in functions and data structures that exhibit purely functional behaviour. We shall code sequences (lazy lists) using references to store each element. This avoids wasteful recomputation, which is a defect of the sequences of Section 5.12. We shall code functional arrays (where updating creates a new array) with the help of mutable arrays. This representation of functional arrays can be far more efficient than the binary tree approach of Section 4.15.
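The technique of using references inside a purely functional interface can be sketched as follows. This is an illustrative sketch, not the book's own code: the names susp, delay, force and from are invented here. Each tail of a sequence is a suspension stored in a ref cell; forcing it the first time overwrites the thunk with its value, so no element is ever recomputed, yet callers see only functional behaviour.

```sml
(* A suspension memoizes its result in a ref cell. *)
datatype 'a susp_state = Delayed of unit -> 'a
                       | Computed of 'a;
type 'a susp = 'a susp_state ref;

fun delay f : 'a susp = ref (Delayed f);

fun force (s : 'a susp) =
  case !s of
      Computed v => v
    | Delayed f  => let val v = f ()
                    in  s := Computed v;  v  end;

(* A lazy sequence whose tails are memoized suspensions. *)
datatype 'a seq = Nil
                | Cons of 'a * 'a seq susp;

(* The integers from n upwards; each tail is computed at most once. *)
fun from n = Cons (n, delay (fn () => from (n + 1)));

fun take (0, _)           = []
  | take (_, Nil)         = []
  | take (n, Cons (x, t)) = x :: take (n - 1, force t);

(* take (5, from 1) evaluates to [1, 2, 3, 4, 5] *)
```

The assignment in force is invisible to clients: repeated forcing always yields the same value, so the sequence type exhibits the purely functional behaviour described above while avoiding the wasteful recomputation of unmemoized lazy lists.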
A typical ml program is largely functional. It retains many of the advantages of functional programming, including readability and even efficiency: garbage collection can be faster for immutable objects. Even for imperative programming, ml has advantages over conventional languages.