This paper presents a study of practical design decisions relevant to the retargeting of a traditional compilation system to a distributed target environment. The knowledge was gathered during the course of Honeywell's Distributed Ada project, which involved the retargeting of a full commercial Ada compilation system to a distributed environment. The goal of the project was to create a compilation system that would allow a single, unmodified Ada program to be fragmented and executed in a distributed environment.
The Distributed Ada Project
The trend in embedded system architectures is shifting from uniprocessor systems to networks of multiple computers. Advances in software tools and methodologies have not kept pace with advances in using distributed system architectures. In current practice, the tools designed for developing software on uniprocessor systems are used even when the target hardware is distributed. Typically, the application developer factors the hardware configuration into software design very early in the development process and writes a separate program for each processor in the system. In this way, software design gets burdened with hardware information that is unrelated to the application functionality. The paradigm is also weak in that no compiler sees the entire application. Because of this, the semantics of remote operations are likely to be different from local operations and the type checking that the compiler provides is defeated for inter-processor operations.
The task of programming distributed applications in Ada may be addressed in several ways. Most of these require the application developer to factor the hardware configuration into software design very early in the development process. The resulting software is sensitive to changes in hardware, does not lend itself to design iteration, is not easily transportable across different hardware configurations, and is not stable against changes during the lifecycle of the application.
In Section 3, we describe an approach that aims at separation of concerns between program design and program partitioning for distributed execution. The entire application is written as a single Ada program using the full capabilities of the language for program structuring, separate compilation, and type checking. Then in a distinct second phase of design, the program is partitioned and prepared for distributed execution. Advantages of a two-phase design approach are discussed. Section 4 reviews related work and presents a comparative evaluation. Section 5 describes the notation used to express program partitioning. Section 6 revisits the issue of what Ada entities should be distributable.
Two implementations of this approach have been completed and tested with the Ada Compiler Validation Capability (ACVC) test suite. Implementation issues and the key features of our implementation approach are presented in an accompanying paper.
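To make the two-phase approach concrete, here is a minimal, hypothetical sketch in plain Ada (it does not use the partitioning notation of Section 5, and the names Whole_Program and Sensor_Station are illustrative, not taken from the project): the program is written and type-checked as a single unit, and only a later, separate step would assign its parts to processors.

--  A single Ada program written without reference to the target hardware.
--  In a second phase a partitioning tool could place Sensor_Station and the
--  main procedure on different nodes; the source itself is unchanged.
with Ada.Text_IO;

procedure Whole_Program is

   package Sensor_Station is
      function Read return Integer;
   end Sensor_Station;

   package body Sensor_Station is
      function Read return Integer is
      begin
         return 42;  --  placeholder reading
      end Read;
   end Sensor_Station;

begin
   --  An ordinary, fully type-checked call; after partitioning it could
   --  become a remote operation without any change to this source text.
   Ada.Text_IO.Put_Line (Integer'Image (Sensor_Station.Read));
end Whole_Program;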
CLASSIFICATION OF APPROACHES
The Ada language does not provide explicit language support for distribution.
By
D. Auty, SofTech, USA,
A. Burns, University of Bradford, UK,
C. W. McKay, University of Houston - Clear Lake, USA,
C. Randall, GHG Corporation, USA,
P. Rogers, University of Houston - Clear Lake, USA
Perhaps the greatest challenge facing Ada is in the domain of the large distributed real-time system. Because of the long lead time associated with such complex applications, no real experience of the use of Ada in this type of domain has yet been gained. Nevertheless, there are projects of a large and complex nature that are committed to the use of Ada, even though the language has yet to prove its full potential in this challenging domain.
The Portable Common Execution Environment (PCEE) project is a research effort addressing the life cycle support of large, complex, non-stop, distributed computing applications with Mission And Safety Critical (MASC) components. Such applications (for example the International Space Station — Freedom) typically have extended life-time (e.g., 30 years) requirements. PCEE focuses on the system software, the interface to applications and the system architecture necessary to reliably build and maintain such systems. The requirements extend from the target system environment to the integration environment, and ultimately to the host environment. The integration environment serves as the single logical point of integration, deployment, and configuration control whereas system development occurs in the host environment. Life cycle issues include an integrated approach to the technologies (environments, tools, and methodologies) and theoretical foundations (models, principles, and concepts) that span these three environments. The scope of the effort is necessarily broad. There are, however, substantial research foundations to support development across the breadth of the project.
By
Judy M Bishop, Department of Electronics and Computer Science, The University, Southampton, England,
Michael J Hasling, Department of Electronics and Computer Science, The University, Southampton, England
Although Ada is now ten years old, there are still no firm guidelines as to how the distribution of an Ada program onto multiprocessors should be organised, specified and implemented. There is considerable effort being expended on identifying and solving problems associated with distributed Ada, and the first aim of this paper is to set out where the work is being done and how far it has progressed to date. In addition to work of a general nature, there are now nearly ten completed distributed Ada implementations, and a second aim of the paper is to compare these briefly, using a method developed as part of the Stadium project at the University of Southampton. Much of Southampton's motivation for getting involved in distributed Ada has been the interest from its strong concurrent computing group, which has for several years taken a lead in parallel applications on transputers. The paper concludes with a classification of parallel programs and a description of how the trends in distributed Ada will affect users in the different groups.
COLLECTIVE WORK ON DISTRIBUTED ADA
The major forums where work on distributed Ada is progressing are Ada UK's International Real-Time Issues Workshop, the Ada 9X Project, SIGAda ARTEWG and AdaJUG CARTWG. Reports of these meetings appear regularly in Ada User (published quarterly by Ada UK) and Ada Letters (published bi-monthly by ACM SIGAda). The status of their activities is summarised here.
Although Ada is now reaching its adolescence, distributed Ada is still in its infancy. The extent of the problems yet to be solved and the multitude of proposed solutions present a very real dilemma for prospective implementors and users alike. How does one specify a distributed program? What parts of Ada are allowed to be distributed? Will the underlying hardware configuration matter? Can the program be made fault tolerant and reliable in the face of processor failure? How much effort will it take to move an existing Ada program onto a multiprocessor system? Will the proposed new Ada Standard (Ada 9X) address distributed issues?
These are just some of the questions that arise, and there is considerable effort being expended, world-wide, in answering them. However, much of this work is being conducted in small working groups, and the interim results are published only in condensed form, if at all. The aim of this book is to open the debate to as wide an audience as possible, heightening the level of awareness of the progress that has been made to date, and the issues that still remain open.
The symposium on which this book is based was held at the University of Southampton on 11–12 December 1989 and attended by nearly 100 people.
A recent trend in computer engineering has been the replacement of large uniprocessor-based proprietary architectures by multiple-microprocessor-based designs employing various interconnection strategies. While these multiprocessor-based systems offer significant performance and economic advantages over uniprocessor systems, not all prospective users are able or willing to adapt their applications to execute as multiple concurrent streams.
The Ada programming language is well suited to multiprocessor systems as it allows the programmer to direct the use of concurrency through the use of the Ada tasking mechanism. The avoidance of automatic distribution of the program by the compiler and the choice of the Ada task as the unit of distribution greatly simplify the development of Ada software for multiprocessor architectures.
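As a generic illustration of the task as the unit of distribution (standard Ada tasking only, not code from the system described in this paper), the sketch below expresses all of its concurrency through tasks and rendezvous; a multiprocessor implementation is then free to place each task on a separate processor.

--  A minimal sketch: the programmer directs concurrency through Ada tasks,
--  and each task is a candidate unit of distribution across the processors.
with Ada.Text_IO;

procedure Tasking_Demo is

   task Logger is
      entry Put (Item : in Integer);
   end Logger;

   task body Logger is
   begin
      loop
         select
            accept Put (Item : in Integer) do
               Ada.Text_IO.Put_Line ("Got" & Integer'Image (Item));
            end Put;
         or
            terminate;
         end select;
      end loop;
   end Logger;

   task Producer;

   task body Producer is
   begin
      for I in 1 .. 3 loop
         Logger.Put (I);  --  rendezvous; may cross processor boundaries
      end loop;
   end Producer;

begin
   null;  --  main program: the two tasks run concurrently, then terminate
end Tasking_Demo;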
For performance reasons, the inter-processor communications path should offer low latency and high transfer rates. Shared memory supports these characteristics and a multiprocessor system, where all memory can be accessed by all processors, has proven to be a suitable platform for a parallel Ada implementation.
This paper discusses the implementation and architecture of a parallel Ada system that allows up to twenty processors to co-execute the same Ada program with true concurrency. Particular attention is given to the design of the Ada runtime and the interface between the runtime and the underlying operating system, as these parts of the system must be “multi-threaded” throughout in order to minimize bottle-necks. The paper concludes with the description of a 1000 MIPS Ada engine currently under development.
By
Robert Dewar, New York University Ada/Ed Research Group,
Susan Flynn, New York University Ada/Ed Research Group,
Edmond Schonberg, New York University Ada/Ed Research Group,
Norman Shulman, New York University Ada/Ed Research Group
The Ada multi-tasking model is one in which tasks can run on separate processors and memory is either non-shared (local to one task), or shared (referenced by more than one task). It would therefore seem that mapping Ada onto a multi-processor architecture with both local and shared memory should be straightforward. This paper examines the difficulties in mapping Ada onto the IBM RP3 which is an example of such an architecture. In practice there are a number of difficult problems, the most significant of which is the inability to determine at compile time which variables are shared. The RP3 has a flexible shared memory architecture, and an important purpose of the Ada/RP3 project is to investigate possible models for implementation of Ada, with a view to determining whether modifications or enhancements of Ada are desirable to ensure optimal use of such architectures.
INTRODUCTION
The NYU Ada/Ed system consists of a front end and interpreter written entirely in C. This system is a direct descendant of the original SETL interpreter, and has been ported to a wide variety of machines [KS84].
Our current research involves porting Ada/Ed to the IBM RP3, an experimental multi-processor with shared memory [P87]. The front end is essentially unchanged, except for the addition of a set of pragmas described later, but the backend is being modified to interface with proprietary IBM code generating technology, and the runtime library is being rewritten to take advantage of the multi-processor architecture.
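The central difficulty, knowing at compile time which variables are shared, can be illustrated with a small generic sketch. The RP3-specific pragmas mentioned above are not reproduced here; the example uses only standard Ada, including the Ada 83 pragma Shared, and the names are illustrative.

--  A sketch of the compile-time visibility problem: the compiler cannot in
--  general tell from the source alone which variables will be referenced by
--  more than one task, and hence which must be placed in globally
--  accessible memory.
procedure Sharing_Demo is

   Counter : Integer := 0;
   --  Visible to both tasks below, so it must live in memory reachable by
   --  every processor.  Ada 83's pragma Shared can mark such a scalar, but
   --  it does not solve the general placement problem.
   pragma Shared (Counter);

   task T1;
   task T2;

   task body T1 is
      Local : Integer := 0;  --  task-local: could live in fast local memory
   begin
      Local   := Local + 1;
      Counter := Counter + 1;  --  unsynchronized update, shown only to make the sharing explicit
   end T1;

   task body T2 is
   begin
      Counter := Counter + 1;
   end T2;

begin
   null;
end Sharing_Demo;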
By
A.B. Gargaro, Computer Sciences Corporation, Moorestown, New Jersey, USA,
S.J. Goldsack, Department of Computing, Imperial College London, UK,
R.A. Volz, Department of Computer Science, Texas A&M University, USA,
A.J. Wellings, Department of Computer Science, University of York, UK
The Ada programming language was designed to provide support for a wide range of safety-critical applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals. A major impediment has been the lack of language support for distributed fault-tolerant program execution.
In this paper we propose language changes to Ada which will facilitate the programming of fault-tolerant distributed real-time applications. These changes support partitioning and configuration/reconfiguration. Paradigms are given to illustrate how dynamic reconfiguration of the software can be programmed following notification of processor and network failure, mode changes, software failure, and deadline failure.
INTRODUCTION
There is increasing use of computers that are embedded in some wider engineering application. These systems all have several common characteristics: they must respond to externally generated input stimuli within a finite and specified period; they must be extremely reliable and/or safe; they are often geographically distributed over both a local and a wide area; they may contain a very large and complex software component; they may contain processing elements which are subject to cost/size/weight constraints.
Developing software to control safety-critical applications requires programming abstractions that are unavailable in many of today's programming languages. The Ada programming language was designed to provide support for such applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals.
By
Colin Atkinson, Imperial College, Dept. of Computing, 180 Queens Gate, London SW7 2BZ, U.K.,
Andrea Di Maio, TXT S.p.A., Via Socrate, 41, 20128 Milano, Italy.
Although the introduction of Ada represented a significant step forward for the developers and users of embedded systems, experience in the use of the language has demonstrated that it has several shortcomings, particularly in the realm of distributed systems. Some of the difficulties with Ada in this respect are caused by relatively minor semantic details chosen without due regard for the properties of distributed systems, such as the semantics of timed and conditional entry calls, and should be easily rectified. Others, however, are of a much more fundamental nature, and are likely to require more significant modifications to the language to overcome them.
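One of those semantic details, the timed entry call, can be made concrete with a short sketch. The code below is plain Ada and is not drawn from DIADEM or DRAGON; it simply shows the construct whose meaning becomes unclear once the calling task and the called task sit on different machines: should network transmission delays count against the caller's time limit?

with Ada.Text_IO;

procedure Timed_Call_Demo is

   task Server is
      entry Request (Data : in Integer);
   end Server;

   task body Server is
   begin
      delay 1.0;  --  server is busy; the caller's time limit will expire
      select
         accept Request (Data : in Integer) do
            Ada.Text_IO.Put_Line ("served" & Integer'Image (Data));
         end Request;
      or
         terminate;
      end select;
   end Server;

begin
   select
      Server.Request (42);
      Ada.Text_IO.Put_Line ("request accepted in time");
   or
      delay 0.5;
      --  On one machine this bounds the wait for the rendezvous to start.
      --  Across a network it is unclear whether transmission delays should
      --  count against the limit: one of the semantic details at issue.
      Ada.Text_IO.Put_Line ("timed out");
   end select;
end Timed_Call_Demo;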
One of the main problems of the existing version of Ada is its execution model, based on the notion of a single main program. This model does not carry over well to distributed environments, and tends to reduce the prospects for supporting dynamic configuration and flexible responses to hardware failure.
The purpose of this paper is to outline the difficulties caused by the current execution model of Ada, and to describe the different solutions devised by the European projects DIADEM and DRAGON. The first of these was a small project partially funded under the Multi-Annual Programme of the Commission of the European Communities, and was completed early in 1987. The second project is partially supported under the Esprit programme of the Commission, and is due for completion in the middle of 1990.
Futurologists have proclaimed the birth of a new species, machina sapiens, that will share (perhaps usurp) our place as the intelligent sovereigns of our earthly domain. These “thinking machines” will take over our burdensome mental chores, just as their mechanical predecessors were intended to eliminate physical drudgery. Eventually they will apply their “ultra-intelligence” to solving all of our problems. Any thought of resisting this inevitable evolution is just a form of “speciesism,” born from a romantic and irrational attachment to the peculiarities of the human organism.
Critics have argued with equal fervor that “thinking machine” is an oxymoron – a contradiction in terms. Computers, with their foundations of cold logic, can never be creative or insightful or possess real judgment. No matter how competent they appear, they do not have the genuine intentionality that is at the heart of human understanding. The vain pretensions of those who seek to understand mind as computation can be dismissed as yet another demonstration of the arrogance of modern science.
Although my own understanding developed through active participation in artificial intelligence research, I have now come to recognize a larger grain of truth in the criticisms than in the enthusiastic predictions. But the story is more complex. The issues need not (perhaps cannot) be debated as fundamental questions concerning the place of humanity in the universe. Indeed, artificial intelligence has not achieved creativity, insight, and judgment. But its shortcomings are far more mundane: we have not yet been able to construct a machine with even a modicum of common sense or one that can converse on everyday topics in ordinary language.
Systems of interconnected and interdependent computers are qualitatively different from the relatively isolated computers of the past. Such “open systems” uncover important limitations in current approaches to artificial intelligence (AI). They require a new approach, one closer to organizational design and management than to current approaches. Here we'll take a look at some of the implications and constraints imposed by open systems.
Open systems are always subject to communications and constraints from outside. They are characterized by the following properties:
Continuous change and evolution. Distributed systems are always adding new computers, users and software. As a result, systems must be able to change as the components and demands placed upon them change. Moreover, they must be able to evolve new internal components in order to accommodate the shifting work they perform. Without this capability, every system must reach the point where it can no longer expand to accommodate new users and uses.
Arm's-length relationships and decentralized decision making. In general, the computers, people, and agencies that make up open systems do not have direct access to one another's internal information. Arm's-length relationships imply that the architecture must accommodate multiple computers at different physical sites that do not have access to the internal components of others. This leads to decentralized decision making.
Perpetual inconsistency among knowledge bases. Because of privacy and discretionary concerns, different knowledge bases will contain different perspectives and conflicting beliefs. Thus, all the knowledge bases of a distributed AI system taken together will be perpetually inconsistent. Decentralization makes it impossible to update all knowledge bases simultaneously.
“But why,” Aunty asked with perceptible asperity, “does it have to be a language?” Aunty speaks with the voice of the Establishment, and her intransigence is something awful. She is, however, prepared to make certain concessions in the present case. First, she concedes that there are beliefs and desires and that there is a matter of fact about their intentional contents; there's a matter of fact, that is to say, about which proposition the intentional object of a belief or a desire is. Second, Aunty accepts the coherence of physicalism. It may be that believing and desiring will prove to be states of the brain, and if they do that's OK with Aunty. Third, she is prepared to concede that beliefs and desires have causal roles, and that overt behavior is typically the effect of complex interactions among these mental causes. (That Aunty was raised as a strict behaviorist goes without saying. But she hasn't been quite the same since the sixties. Which of us has?) In short, Aunty recognizes that psychological explanations need to postulate a network of causally related intentional states. “But why,” she asks with perceptible asperity, “does it have to be a language?” Or, to put it more succinctly than Aunty often does, what – over and above mere Intentional Realism – does the Language of Thought Hypothesis buy? That is what this discussion is about.
A prior question: what – over and above mere Intentional Realism – does the Language of Thought Hypothesis (LOT) claim? Here, I think, the situation is reasonably clear.
Artificial intelligence is still a relatively young science, in which there are still various influences from different parent disciplines (psychology, philosophy, computer science, etc.). One symptom of this situation is the lack of any clearly defined way of carrying out research in the field (see D. McDermott, 1981, for some pertinent comments on this topic). There used to be a tendency for workers (particularly Ph.D. students) to indulge in what McCarthy has called the “look-ma-no-hands” approach (Hayes, 1975b), in which the worker writes a large, complex program, produces one or two impressive printouts and then writes papers stating that he has done this. The deficiency of this style of “research” is that it is theoretically sterile – it does not develop principles and does not clarify or define the real research problems. What has happened over recent years is that some attempt is now made to outline the principles which a program is supposed to implement. That is, the worker still constructs a complex program with impressive behaviour, but he also provides a statement of how it achieves this performance. Unfortunately, in some cases, the written “theory” may not correspond to the program in detail, but the writer avoids emphasizing (or sometimes even conceals) this discrepancy, resulting in methodological confusion. The “theory” is supposedly justified, or given empirical credibility, by the presence of the program (although the program may have been designed in a totally different way); hence the theory is not subjected to other forms of argument or examination.
Rational reconstruction (reproducing the essence of the program's significant behavior with another program constructed from descriptions of the purportedly important aspects of the original program) has been one approach to assessing the value of published claims about programs.
Campbell attempts to account for why the status of AI vis-a-vis the conventional sciences is a problematic issue. He outlines three classes of theories, the distinguishing elements of which are: equations; entities, operations and a set of axioms; and general principles capable of particularization in different forms. Models in AI, he claims, tend to fall in the last class of theory.
He argues for the methodology of rational reconstruction as an important component of a science of AI, even though the few attempts so far have not been particularly successful, if success is measured in terms of the similarity of behavior between the original AI system and the subsequent rational reconstruction. But, as Campbell points out, it is analysis and exploration of exactly these discrepancies that is likely to lead to significant progress in AI.
The second paper in this section is a reprint of one of the more celebrated attempts to analyse a famous AI program. In addition to an analysis of how the published descriptions of the program (Lenat's ‘creative rediscovery’ system AM) relate to its actual behaviour, Ritchie and Hanna discuss more general considerations of the rational-reconstruction methodology.
There is a continuing concern in AI that proof and correctness, the touchstones of the theory of programming, are being abandoned to the detriment of AI as a whole. On the other hand, we can find arguments to support just the opposite view: that attempts to fit AI programming into the specify-and-prove (or at least, specify-and-test-correctness) paradigm of conventional software engineering are contrary to the role of programming in AI research.
Similarly, the move to establish conventional logic as the foundational calculus of AI (currently seen in the logic programming approach and in knowledge-based decision-making implemented as a proof procedure) is another aspect of correctness in AI, and one whose validity is questioned (for example, Chandrasekaran's paper in section 1 opened the general discussion of such issues when it examined logic-based theories in AI, and Hewitt, in section 11, takes up the more specific question of the role of logic in expert systems). Both sides of this correctness question are presented below.
Philosophers constantly debate the nature of their discipline. These interminable debates frustrate even the most patient observer. Workers in AI also disagree, although not so frequently, about how to conduct their research. To equate programs with theories may offer a simple unifying tool to achieve agreement about the proper AI methodology. To construct a program becomes a way to construct a theory. When AI researchers need to justify their product as scientific, they can simply point to their successful programs. Unfortunately, methodological agreement does not come so easily in AI.
For a number of reasons, theorists in any discipline do not relish washing their proverbially dirty laundry in public. Methodology creates a great deal of that dirt, and philosophy of science supposedly supplies the soap to cleanse the discipline's methodology. Scientists often appeal to philosophers of science to develop methodological canons. It certainly does not instill confidence in a discipline if its practitioners cannot even agree on how to evaluate each other's work. Despite public images to the contrary, disagreements over how to approach a subject matter predominate in most scientific disciplines. Can philosophy of science come to the rescue of AI methodology? Yes and no.
Before describing the middle ground occupied by philosophy of science in its relationship to AI, we need to examine how some dismiss the AI research project altogether. In a previous article I argued against the various philosophical obstacles to the program/theory equation (Simon, 1979). I considered three types of objections to AI research: impossibility, ethical, and implausibility.
The area of non-monotonic reasoning and the area of logic programming are of crucial and growing significance to artificial intelligence and to the whole field of computer science. It is therefore important to achieve a better understanding of the relationship existing between these two fields.
The major goal in the area of non-monotonic reasoning is to find adequate and sufficiently powerful formalizations of various types of non-monotonic reasoning – including common-sense reasoning – and to develop efficient ways of their implementation. Most of the currently existing formalizations are based on mathematical logic.
Logic programming introduced to computer science the important concept of declarative – as opposed to procedural – programming, based on mathematical logic. Logic programs, however, do not use logical negation, but instead rely on a non-monotonic operator – often referred to as negation as failure – which represents a procedural form of negation.
Non-monotonic reasoning and logic programming are closely related. The importance of logic programming to the area of non-monotonic reasoning follows from the fact that, as observed by several researchers (see e.g. Reiter, [to appear]), the non-monotonic character of procedural negation used in logic programming often makes it possible to efficiently implement other non-monotonic formalisms in Prolog or in other logic programming languages. Logic programming can also be used to provide formalizations for special forms of non-monotonic reasoning. For example, Kowalski and Sergot's calculus of events (1986) uses Prolog's negation-as-failure operator to formalize the temporal persistence problem in AI.
Fodor restates his language of thought hypothesis, which presents a serious challenge to the view that the architecture of cognition is network-based. Fodor claims that, whatever the representation underlying thought at the level of semantic interpretation, it must have constituent structure, i.e. a structure such that belief in the truth of (A and B) necessarily involves belief in the truth of both of the constituents, A and B, separately. Such necessity is not, in general, supported by network architecture whereas it is in Turing machine-type representations.
The rest of the papers in this section respond (directly or indirectly) to this challenge to the significance of connectionism in AI. Is connectionism just implementation detail, an essentially Turing machine-type architecture implemented with an activity-passing network? Or are sub-symbolic networks inherently more powerful than the traditional symbolic-level processing representations that have dominated much of AI? Smolensky is firmly on the side of connectionism as a fundamentally new and powerful ‘sub-symbolic’ paradigm for AI. Wilks discusses and denies the central claims of both Fodor and Smolensky, arguing that, at the moment, beneficent agnosticism is the best position on connectionism, awaiting further clarification of its claims and more empirical results.
Churchland supports the connectionist movement but his support is based on the similarities between connectionist principles and the architecture of the brain. He argues for the study of neuroanatomy as a source of system-building constraints in AI.
My concern is with what an AI experiment is, and hence with what AI is. I shall talk about what experiments are actually like, but suggest that this is what they must be like.
Thus is it reasonable to suppose that AI experiments are, or could be, like the experiments of classical physics? I do not believe it is. This is not because we cannot expect the result of a single critical experiment to validate a theory, as we cannot expect a single translation to validate a translation program, for example: we can presumably extend the classical model to cover the case where validation depends on a set of results, for different data. Nor is it because we have not in practice got anything like an adequate predictive theory. I believe that we cannot in principle have the sort of predictive theory associated with physics, because we are not modelling nature in the classical physics sense. I shall elaborate on what I think we are doing, but claim now that we reach the same conclusion if we consider the suggestion that we are not in the classical physics position, but rather in that of investigative biologists, doing experiments to find out what nature is like (notionally without any theory at all, though perhaps in fact influenced by some half-baked theory). This is because there is nothing natural to discover.