In Part I of this book I presented a general framework for understanding language and computers, and in Part II showed how to apply this framework to analysis and design of computer systems. What remains to be done now is to broaden the scope of our analysis, and show how the same framework can be used for describing and designing computer systems in a practical context of work. In order to do that I need a concrete example, and I shall use data from the 1986 project at the Postal Giro Office in Stockholm, “Professional language in change”, mentioned in Section 1.1.1.
Before I start on this last stage of the journey, I think it would not be out of place to make a short summary of what has been accomplished so far.
Although the theoretical framework of the book is not natural science but a structuralist tradition of semiotics and linguistics, the project of the book is a part of an ongoing paradigm change in computer science itself: from seeing the computer system as a self-sufficient mathematical object, the focus is gradually being shifted to the relations between system and work context. An important practical motivation for this shift is simply that it hopefully will enable us to construct computer systems that meet the needs of the users in a better way. However, the shift is also theoretically motivated by the structuralist framework, since structuralism is characterized by focusing on relations, not on objects, as the real existing phenomena.
Semiotics is the science of signs and their life in society. A sign is anything that stands for something else for someone.
Semiotics treats all kinds of signs: verbal language, pictures, literature, motion pictures, theater, body language. Computer semiotics is a semiotic discipline that studies the nature and use of computer-based signs.
The motivation for the discipline comes from the nature of computer systems: although in many respects computer systems can be conceived as tools in analogy with typewriters, pencils, brushes and filing cabinets, they differ from these tools by not primarily existing or being used as physical objects, but as signs. The pencil of the drawing program is not a real pencil that can be chewed on; it merely stands for a pencil, represented by a collection of pixels on the screen. Computer systems resemble other media by primarily acting as carriers of meaning.
Even if computer systems are basically symbolic tools, their symbolic nature has only attracted serious attention in recent years.
The reason for this is threefold: from being a tool only for specialists, computers have now been integrated into many occupations, and therefore the demands for interpretability have risen. The mode of operation and the meaning of the data must be easy to grasp for the secretary or the manager whose time should not be spent deciphering cryptic codes but writing agendas or making decisions. A good interface has become a necessary prerequisite for a good system.
We have now looked at work language, the language variety that must be central to our concerns, and made a preliminary list of the descriptive shortcomings of classical structuralism with respect to this language type.
In the next, constructive sections I will describe those parts of the tradition I wish to keep, and propose changes in other parts that make the theory better suited for the purposes at hand. Although the next few sections are concerned with language proper, I shall append short indications of how the concepts described can be used to understand computer systems, so that readers primarily interested in computers may judge if reading on is still worth the effort.
After presenting the key concepts (Section I.2.1), I show in Section I.2.2 how they can be applied to analysis of work processes and organizations, in particular the car repair and the Postal Giro examples from I.1.
Section I.2.3 extends the framework further to deal with computer systems. Computer systems are interpreted as media for human communication, and as a consequence of this, the theoretical focus is shifted from the system itself to the relations between system and user (Section I.2.4).
Basic concepts for describing symbolic acts
One of the basic points of structuralism is that units of language are defined relationally.
Instead of pre-existing ideas then, we find in all the foregoing examples values emanating from the system. When they are said to correspond to concepts, it is understood that the concepts are purely differential and defined not by their positive content but negatively by their relations with the other terms in the system. […]
I start my investigation of computer-based signs from the viewpoint of the expression plane, partly because the computer's means of expression are different from those of other media, and partly because the history of linguistics tells us that expression analysis is easier than content analysis. What are the characteristic features of computer-based signs, and in which respects do they differ from other kinds of signs we know?
Handling, transient, and permanent features
The prototypical computer-based sign is composed of three classes of features:
A handling feature of a computer-based sign is produced by the user and includes key presses and mouse and joystick movements that cause electrical signals to be sent to the processor. Handling features articulate user actions.
A permanent feature of a computer-based sign is generated by the computer. It is a property of the sign that remains constant throughout the lifetime of a sign token, serving to identify the sign by contrasting it to other signs. Permanent features articulate system states into parts.
A transient feature of a computer-based sign is also generated by the computer, but unlike permanent features, it changes as the sign token is used. It does not contrast primarily to other signs, but only internally in the same sign, symbolizing the different states in which the sign referent can be. Transient features articulate system transformations.
As our first example, let us look at a version of the game Breakout. The system displays a paddle, a ball and a brick wall. The ball bounces back and forth, and the player must hit it with the paddle.
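To make the three feature classes more tangible, the following Haskell sketch models the paddle of the Breakout example as a computer-based sign. All names and types are my own illustration rather than anything taken from the text or the game itself: the permanent features identify the paddle against the ball and the bricks, the transient feature records its changing position, and the handling features are the player's key presses.

  -- A minimal sketch of the three feature classes, using the Breakout
  -- paddle as the sign (illustrative names only).

  -- Handling features: user actions the sign responds to.
  data Handling = MoveLeft | MoveRight
    deriving (Show, Eq)

  -- Permanent features: constant properties that identify the sign and
  -- contrast it with other signs (the ball, the bricks).
  data Permanent = Permanent
    { shape  :: String   -- e.g. "horizontal bar"
    , colour :: String
    } deriving Show

  -- Transient features: properties that change while the sign token is in
  -- use, contrasting states of the same sign rather than different signs.
  data Transient = Transient
    { xPosition :: Int   -- the paddle moves along the bottom of the screen
    } deriving Show

  data Sign = Sign { permanent :: Permanent, transient :: Transient }
    deriving Show

  -- A handling feature articulates a user action into a change of the
  -- sign's transient features, while the permanent features stay fixed.
  handle :: Handling -> Sign -> Sign
  handle MoveLeft  s = s { transient = (transient s) { xPosition = xPosition (transient s) - 1 } }
  handle MoveRight s = s { transient = (transient s) { xPosition = xPosition (transient s) + 1 } }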
The first part of the book ended with a sketch of the computer system as a medium, and the purpose of this second part is to turn this sketch into a more detailed portrait. What are important properties of computer systems from this perspective, and how should systems be structured?
It should come as no surprise that the interface, defined in Section 1.2.3.3 as a collection of computer-based signs, and its functions relative to the linguistic environment and working context reign supreme in this approach. The basic function of a system is to generate processes in which an interface can be expressed, just as the only reason for having the costumes and wings in a theater is the experience they may help to give the audience. Although this view of design is relatively new in computer science, it exists under the name of user-centered design.
This view makes it difficult to maintain the tradition of separating functionality from the interface. Traditionally, functionality is what can be done with the system, while the interface is the manner in which it is done. Thus, two word processors have different functionality if one allows the user to make an automatic table of contents while the other does not. They have the same interface if operations are done in the same way, for example if text is selected by dragging the mouse over the desired piece of text, whereas their interface is different if one uses the mouse while the other requires the user to type in line and character numbers to select text.
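The traditional separation can be sketched as follows in Haskell; this is only an illustration of the distinction being questioned, with made-up names, not an account of any particular word processor. The functionality is the operation of selecting a range of text, while the two interfaces differ only in how the user supplies that range.

  -- A minimal sketch of functionality versus interface (illustrative names).

  type Position  = (Int, Int)        -- (line, character)
  type Selection = (Position, Position)

  -- Functionality: what can be done with the system.
  selectRange :: Position -> Position -> Selection
  selectRange from to = (from, to)

  -- Interface 1: the range comes from dragging the mouse over the text.
  data MouseDrag = MouseDrag { dragStart :: Position, dragEnd :: Position }

  selectByMouse :: MouseDrag -> Selection
  selectByMouse d = selectRange (dragStart d) (dragEnd d)

  -- Interface 2: the range comes from typed line and character numbers.
  selectByNumbers :: Int -> Int -> Int -> Int -> Selection
  selectByNumbers l1 c1 l2 c2 = selectRange (l1, c1) (l2, c2)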
This part describes three different approaches to the use of formal methods in the verification and design of systems and circuits.
Chapter 2 describes the stages involved in the verification of a counter using a mechanized theorem prover.
The next chapter describes a mathematical model of synchronous computation within which formal transformations which are useful in the design process can be defined.
Chapter 4 describes verification in a different framework – that of the algebra of communicating processes.
In designing VLSI circuits it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Systolic systems are circuits built up from arrays of cells and are therefore very suitable for formal analysis and induction methods. For the case of a palindrome recognizer, a correctness proof is given using bisimulation semantics with asynchronous cooperation. The proof is carried out in the formal setting of the Algebra of Communicating Processes (see Bergstra & Klop [1986]), which provides us with an algebraic theory and a convenient proof system. An extensive introduction to this theory is included in this paper. The palindrome recognizer has also been studied by Hennessy [1986] in a setting of failure semantics with synchronous cooperation.
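For readers who want a concrete handle on the recognizer's intended behaviour, the following Haskell fragment gives only a behavioural reference specification, not the systolic array or the ACP proof developed in the chapter: after each input symbol the recognizer should report whether the prefix read so far is a palindrome.

  -- A behavioural reference specification only (not the systolic cell array
  -- or the process-algebraic description from the chapter).

  import Data.List (inits)

  isPalindrome :: Eq a => [a] -> Bool
  isPalindrome xs = xs == reverse xs

  -- For an input sequence, the expected output: one Boolean per
  -- non-empty prefix.
  recognizer :: Eq a => [a] -> [Bool]
  recognizer = map isPalindrome . tail . inits

  -- Example: recognizer "abcba" == [True,False,False,False,True]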
INTRODUCTION
In the current research on (hardware) verification one of the main goals is to find strong proof systems and tools to verify the designs of algorithms and architectures. For instance, in the development of integrated circuits the important stage of testing a prototype (to save the high costs of producing defective processors) can be dealt with much more efficiently when a strong verification tool is available. Therefore, developing a verification theory has very high priority and is a subject of study at many universities and scientific institutions.
However, working on detailed verification theories is not the only approach to this problem. Once a basic theory is in place, the development of case studies is of the utmost importance in providing us with new ideas.
In this part the design process itself is examined from three approaches.
In Chapter 5 design is modelled as transforming formal draft system designs, and the user specification process is examined in detail.
In Chapter 6 circuits are relations on signals, and design is achieved through the application of combining forms satisfying certain mathematical laws.
Chapter 7 treats the problem of the automatic synthesis of VLSI chips for signal processing, and the practical issues involved are discussed in greater depth.
The development of VLSI fabrication technology has resulted in a wide range of new ideas for application specific hardware and computer architectures, and in an extensive set of significant new theoretical problems for the design of hardware. The design of hardware is a process of creating a device that realises an algorithm, and many of the problems are concerned with the nature of algorithms that may be realised. Thus fundamental research on the design of algorithms, programming and programming languages is directly relevant to research on the design of hardware. And conversely, research on hardware raises many new questions for research on software. These points are discussed at some length in the introductory chapter.
The papers that make up this volume are concerned with the theoretical foundations of the design of hardware, as viewed from computer science. The topics addressed are the complexity of computation; the methodology of design; and the specification, derivation and verification of designs. Most of the papers are based on lectures delivered at our workshop on Theoretical aspects of VLSI design held at the Centre for Theoretical Computer Science, University of Leeds in September 1986. We wish to express our thanks to the contributors and referees for their cooperation in producing this work.
One of the natural ways to model circuit behaviour is to describe a circuit as a function from signals to signals. A signal is a stream of data values over time, that is, a function from integers to values. One can choose to name signals and to reason about their values. We have taken an alternative approach in our work on the design language μFP (Sheeran [1984]). We reason about circuits, that is, functions from signals to signals, rather than about the signals themselves. We build circuit descriptions by ‘plugging together’ smaller circuit descriptions using a carefully chosen set of combining forms. So, signals are first order functions, circuits are second order, and combining forms are third order.
Each combining form maps one or more circuits to a single circuit. The combining forms were chosen to reflect the fact that circuits are essentially two-dimensional. So, they correspond to ways of laying down and wiring together circuit blocks. Each combining form has both a behavioural and a pictorial interpretation. Because they obey useful mathematical laws, we can use program transformation in the development of circuits. An initial obviously correct circuit can be transformed into one with the same behaviour, but a more acceptable layout. It has been shown that this functional approach is particularly useful in the design of regular array architectures (Sheeran [1985, 1986], Luk & Jones [1988a]).
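The functional view can be sketched in Haskell as follows; this is an illustration of the idea under my own simplified types, not μFP's actual notation or semantics.

  -- Signals are first order, circuits second order, combining forms third
  -- order (a sketch, not μFP itself).

  type Time        = Int
  type Signal a    = Time -> a              -- a stream of values over time
  type Circuit a b = Signal a -> Signal b

  -- A combinational circuit applies a function at every time step.
  comb :: (a -> b) -> Circuit a b
  comb f sig = f . sig

  -- A unit delay: the output at time t is the input at time t-1.
  delay :: a -> Circuit a a
  delay initial sig t = if t <= 0 then initial else sig (t - 1)

  -- Two combining forms: serial and parallel composition of circuits.
  serial :: Circuit a b -> Circuit b c -> Circuit a c
  serial c1 c2 = c2 . c1

  parallel :: Circuit a b -> Circuit c d -> Circuit (a, c) (b, d)
  parallel c1 c2 sig t = (c1 (fst . sig) t, c2 (snd . sig) t)

  -- One of the algebraic laws such combining forms obey, usable in
  -- program transformation: serial composition is associative,
  --   serial (serial f g) h  =  serial f (serial g h)

Each combining form here also has an obvious pictorial reading: serial composition places one block after another, parallel composition places two blocks side by side.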
However, sometimes a relational description of a circuit is more appropriate than a functional one.
Combinational networks are a widely studied model for investigating the computational complexity of Boolean functions relevant both to sequential computation and parallel models such as VLSI circuits. Recently a number of important results proving non-trivial lower bounds on a particular type of restricted network have appeared. After giving a general introduction to Boolean complexity theory and its history this chapter presents a detailed technical account of the two main techniques developed for proving such bounds.
INTRODUCTION
An important aim of Complexity Theory is to develop techniques for establishing non-trivial lower bounds on the quantity of particular resources required to solve specific problems. Natural resources, or complexity measures, of interest are Time and Space, these being formally modelled by the number of moves made (resp. number of tape cells scanned) by a Turing machine. ‘Problems’ are viewed as functions, f : D → R; D is the domain of inputs and R the range of output values. D and R are represented as words over a finite alphabet Σ, and since any such alphabet can be encoded as a set of binary strings, it is sufficiently general to consider D to be the set of Boolean-valued n-tuples {0,1}^n and R to be {0,1}. Functions of the form f : {0,1}^n → {0,1} are called n-input single-output Boolean functions. B_n denotes the set of all such functions and X_n = (x_1, x_2, …, x_n) is a variable over {0,1}^n.
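As a small, self-contained illustration of these definitions (not tied to the chapter's particular lower-bound results), the following Haskell fragment represents an n-input single-output Boolean function over lists of Booleans and tabulates its truth table; the parity function serves as an example member of B_n.

  -- An n-input single-output Boolean function is a map from {0,1}^n to
  -- {0,1}, here represented over lists of Booleans (an illustration only).

  type BoolFun = [Bool] -> Bool     -- an element of B_n on length-n inputs

  -- Example member of B_n: the parity function.
  parity :: BoolFun
  parity = odd . length . filter id

  -- Enumerate {0,1}^n and tabulate a function as its truth table.
  inputs :: Int -> [[Bool]]
  inputs 0 = [[]]
  inputs n = [b : bs | b <- [False, True], bs <- inputs (n - 1)]

  truthTable :: Int -> BoolFun -> [([Bool], Bool)]
  truthTable n f = [(x, f x) | x <- inputs n]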
The theme of this chapter centres on the automatic synthesis of cost effective and highly parallel digital signal processors suitable for VLSI implementation. The proposed synthesis model is studied in detail and the concepts of signal modelling and data flow analysis are discussed. This is further illustrated by the COSPRO (COnfigurable Signal PROcessor) simulator – a primitive version of the automatic synthesis concept developed at the Department of Electrical & Electronic Engineering, University of Newcastle Upon Tyne. Binary addition is chosen as a case study to demonstrate the application of the concept.
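Since binary addition is the case study, the following Haskell sketch shows the simplest data-flow view of it, a ripple-carry chain of identical full-adder cells; this is my own minimal illustration and not the COSPRO synthesis model.

  -- Binary addition as a data flow of identical cells: a plain ripple-carry
  -- adder (illustration only, not the synthesis model of the chapter).

  -- One full-adder cell: two operand bits and a carry in give a sum bit
  -- and a carry out.
  fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)
  fullAdder a b cin = (s, cout)
    where
      s    = (a /= b) /= cin                   -- sum bit: a XOR b XOR cin
      cout = (a && b) || (cin && (a /= b))     -- carry out

  -- Chain the cells, least significant bit first, threading the carry.
  rippleAdd :: [Bool] -> [Bool] -> [Bool]
  rippleAdd as bs = go as bs False
    where
      go []       []       c = [c | c]         -- final carry, if set
      go (x : xs) (y : ys) c = let (s, c') = fullAdder x y c in s : go xs ys c'
      go _        _        _ = error "operands must have equal length"

  -- Example: 3 + 5 = 8, i.e.
  --   rippleAdd [True,True,False] [True,False,True] == [False,False,False,True]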
INTRODUCTION
Digital signal processing
Digital signal processing (DSP), a counterpart of analog signal processing, began to blossom in the mid 1960s when semiconductor and computer technologies were able to offer a massive increase in flexibility and reliability. Within the short period of twenty years, this field has matured rapidly in both theory and applications, and has contributed significantly to understanding in many diverse areas of science and technology. The range of applications has grown to include almost every part of our lives, from microprocessor-controlled domestic appliances to computerised banking systems and highly sophisticated missile guidance systems. Many other areas, such as biomedical engineering, seismic research, radar and sonar detection and countermeasures, acoustics and speech, telecommunications, image processing and understanding, thermography, office automation and computer graphics, employ DSP to a great extent, and these techniques are heavily applied in military, intelligence, industrial and commercial environments.
The VIPER microprocessor designed at the Royal Signals and Radar Establishment (RSRE) is probably the first commercially produced computer to have been developed using modern formal methods. Details of VIPER can be found in Cullyer [1985, 1986, 1987] and Pygott [1986]. The approach used by W. J. Cullyer and C. Pygott for its verification is explained in Cullyer & Pygott [1985], in which a simple counter is chosen to illustrate the verification techniques developed at RSRE. Using the same counter, we illustrate the approach to hardware verification developed at Cambridge, which formalizes Cullyer and Pygott's method. The approach is based on the HOL system, a version of LCF adapted to higher-order logic (Camilleri et al. [1987], Gordon [1983, 1985]). This research has formed the basis for the subsequent project to verify the whole of VIPER to register transfer level (Cohn [1987, 1989]).
In Cullyer and Pygott's paper, the implementation of the counter is specified at three levels of decreasing abstractness:
As a state-transition system called the host machine;
As an interconnection of functional blocks called the high level design;
As an interconnection of gates and registers called the circuit.
Ultimately, it is the circuit that will be built, and its correctness is what matters most. However, the host machine and the high level design represent successive stages in the development of the implementation, and so one would like to know whether they too are correct.
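To fix ideas, the following Haskell sketch relates two description levels of a counter in the spirit described above; it is not the RSRE counter or the HOL formalization, only an illustration of what it means for a lower-level description to be correct with respect to a more abstract one.

  -- Two description levels of a mod-16 counter and a correctness statement
  -- relating them (illustration only).

  -- Abstract level: a state-transition specification.
  specNext :: Int -> Int
  specNext n = (n + 1) `mod` 16

  -- Concrete level: four register bits with carry logic, least significant
  -- bit first.
  type Bits = (Bool, Bool, Bool, Bool)

  circuitNext :: Bits -> Bits
  circuitNext (b0, b1, b2, b3) =
    -- each bit toggles when all lower bits are 1 (the carry chain)
    ( not b0
    , b1 /= b0
    , b2 /= (b0 && b1)
    , b3 /= (b0 && b1 && b2) )

  -- Abstraction map from circuit states to specification states.
  value :: Bits -> Int
  value (b0, b1, b2, b3) = sum [ w | (b, w) <- zip [b0, b1, b2, b3] [1, 2, 4, 8], b ]

  -- Correctness: the abstraction map commutes with the two transition
  -- functions for every circuit state.
  correct :: Bool
  correct = and [ value (circuitNext s) == specNext (value s) | s <- allStates ]
    where
      allStates = [ (a, b, c, d) | a <- bs, b <- bs, c <- bs, d <- bs ]
      bs        = [False, True]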
Since our concern was speech, and speech impelled us
To purify the dialect of the tribe
And urge the mind to aftersight and foresight
T. S. Eliot Little Gidding
ABSTRACT
We analyse theoretically the process of specifying the desired behaviour of a digital system and illustrate our theory with a case study of the specification of a digital correlator.
First, a general theoretical framework for specifications and their stepwise refinement is presented. A useful notion of the consistency of two general functional specifications is defined. The framework has three methodological divisions: an exploration phase, an abstraction phase, and an implementation phase.
Secondly, a mathematical theory for specifications based on abstract data types, streams, clocks and retimings, and recursive functions is developed. A specification is a function that transforms infinite streams of data. The mathematical theory supports formal methods and software tools.
Thirdly, a digital correlator is studied in considerable detail to demonstrate points of theoretical and practical interest.
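As an informal illustration of the stream-transformer view of specification (using Haskell lists in place of the chapter's streams, clocks and retimings, and a toy correlator rather than the one specified in the case study), a specification can be read as a function from infinite streams of data to infinite streams of data:

  -- A toy stream-transforming specification (illustration only): at each
  -- time step, output how many of the most recent input bits agree with a
  -- fixed reference pattern; early outputs use a zero-padded history.

  type Stream a = [a]      -- understood here as an infinite list

  refPattern :: [Bool]
  refPattern = [True, False, True, True]

  correlate :: Stream Bool -> Stream Int
  correlate = go (replicate (length refPattern) False)
    where
      go _       []       = []
      go history (b : bs) =
        let window = tail history ++ [b]
            score  = length (filter id (zipWith (==) window refPattern))
        in score : go window bs

  -- Example: take 5 (correlate (cycle [True, False]))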
INTRODUCTION
Overview
How can we precisely define the desired behaviour of a digital system? What role can such precise definitions have in the imprecise process of designing a digital system, and in its subsequent use?
We wish to formulate answers to these questions by theoretically analysing the first step of a design assignment, when it must be determined what is to be designed.