By
C. J. Hinde, Dept. of Computer Studies, Loughborough University of Technology, Loughborough, Leics LE11 3TU,
A. D. Bray, Dept. of Computer Studies, Loughborough University of Technology, Loughborough, Leics LE11 3TU
The truth-maintained blackboard model of problem solving used in the Loughborough University Manufacturing Planner supported collaboration between experts that were closely linked to the management system. On realistic problems, the size of the assumption bases produced by the system and the overall size of the blackboard combined to impair the system's performance. This model of design supported the collaboration of experts around a central blackboard. Since collaboration is clearly a necessary condition for concurrent decision making, the basic framework for collaboration is preserved in this model.
The Design to Product management system within which the Planner had to operate had a central “Tool Manager” through which all communication was routed. In order to implement a model of simultaneous engineering, and also to support collaborative work using this model, a multiple-context design system is useful, if not essential. Our model extends this by distributing control between the various expert agents, where each agent treats the others as knowledge sources to its own private blackboard. All interaction between agents uses a common communication protocol capable of exchanging the contextual information necessary to separate contexts in the Assumption-based Truth Maintenance System (de Kleer 84) environment. The hierarchical model of control by a central tool manager has been replaced by a hierarchical model of distributed control. The agents are configured using a single-line inheritance scheme which endows each agent with its required knowledge and also allows it to declare its functionality to its colleagues.
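As an illustration of how assumption sets separate contexts in an ATMS-style system, the following minimal Python sketch (not from the paper; the facts and assumption labels are invented) tags each belief with the environments that support it, so that a belief holds only in contexts containing all of its assumptions:

```python
# Each fact maps to the environments (sets of assumptions) that justify it.
beliefs = {
    'use_mill':  [frozenset({'A1'})],        # A1: milling machine is free
    'use_lathe': [frozenset({'A2'})],        # A2: lathe is free
    'plan_ok':   [frozenset({'A1', 'A2'})],  # plan needs both assumptions
}

def holds(fact, context):
    """A fact holds in a context if some supporting environment is a subset of it."""
    return any(env <= context for env in beliefs.get(fact, []))

print(holds('plan_ok', {'A1', 'A2'}))  # True
print(holds('plan_ok', {'A1'}))        # False -- A2 is not assumed here
```

A real ATMS additionally propagates labels through justifications and prunes inconsistent environments; this sketch shows only the subset test that keeps contexts apart.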
Abstract. In this paper, the problem of obtaining unbiased attribute selection in probabilistic induction is described. This problem is one which is at present only poorly appreciated by those working in the field and has still not been satisfactorily solved. It is shown that the method of binary splitting of attributes goes only part of the way towards removing bias and that some further compensation mechanism is required to remove it completely. Work which takes steps in the direction of finding such a compensation mechanism is described in detail.
Introduction
Automatic induction algorithms have a history which can be traced back to Hunt's concept learning systems (Hunt et al., 1966). Later developments include AQ11 (Michalski & Larson, 1978) and ID3 (Quinlan, 1979). The extension of this type of technique to the task of induction under uncertainty is characterised by algorithms such as AQ15 (Michalski et al., 1986) and C4 (Quinlan, 1986). Other programs, developed specifically to deal with noisy domains, include CART (Breiman et al., 1984) and early versions of Predictor (White, 1985, 1987; White & Liu, 1990). A recent review of inductive techniques may be found in Liu & White (1991). However, efforts to develop these systems have uncovered a problem which is at present only poorly appreciated by those working in the field and has still not been satisfactorily solved.
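The selection bias at issue can be made concrete with a small, self-contained sketch (illustrative only; it is not the Predictor algorithm). Under the usual information-gain criterion, an attribute with many distinct values can look spuriously predictive even when it carries no information about the class at all:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Reduction in class entropy from splitting on the given attribute values."""
    n = len(labels)
    split = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

labels = ['+', '-', '+', '-', '+', '-', '+', '-']
binary = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']          # 2 values, unrelated to class
many = ['v1', 'v2', 'v3', 'v4', 'v5', 'v6', 'v7', 'v8']    # one value per example

print(information_gain(binary, labels))  # 0.0 -- correctly judged useless
print(information_gain(many, labels))    # 1.0 -- spuriously "perfect" split
```

Binary splitting reduces this effect by forcing every attribute to partition the data two ways, but, as the paper argues, it does not remove the bias entirely.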
By
N. Y. L. Yue, The Management School, Imperial College of Science, Technology & Medicine, 53 Prince's Gate, Exhibition Road, London SW7 2PG, ENGLAND,
B. Cox, The Management School, Imperial College of Science, Technology & Medicine, 53 Prince's Gate, Exhibition Road, London SW7 2PG, ENGLAND
This paper describes an on-line system which serves to support the management of the Knowledge Acquisition Process. Research on Knowledge Acquisition has tended to focus on the difficulties encountered in the elicitation of cognitive processes from the human expert with less emphasis being placed on the specific difficulties encountered in the management of Knowledge-Based Systems projects. The results of empirical research undertaken by the authors identified the need for improved rigour in the management of the Knowledge Acquisition Process [Yue & Cox, 1991, 1992a,b]. The Strategy Maze is the implementation of these results.
The goal of the Strategy Maze is to reduce and prevent risks to Knowledge Acquisition projects through improved management. The Strategy Maze identifies those management issues which must be addressed at the planning and implementation stages of the project if risk is to be minimised. The system consists of three levels: the Scoping Level which is designed to reduce and prevent those risks arising from the lack of clear project definition; the Requirements Analysis Level which provides a comprehensive checklist of the tasks and activities which need addressing prior to implementation of the project; and the Implementation Level which assists in the reduction and prevention of potential project risks during the implementation, monitoring, and control stages of the project.
INTRODUCTION
Knowledge-Based Systems (KBS) differ from conventional computer systems in their degree of dependence upon the elicitation, representation and emulation of human knowledge.
This paper describes the development of the Injection Moulding Process Expert System (IMPRESS). The IMPRESS system diagnoses faults in injection moulding machinery which lead to dirt or other contamination appearing in the plastic mouldings which are produced. This KBS has recently been put into use at Plastic Engineers (Scotland) Ltd, and is proving useful both as an expert assistant when technical help is otherwise unavailable, and as a training aid.
The IMPRESS system was built by a member of Plastic Engineers' staff with assistance from a KBS consultant. It was decided that the project would be based around a KBS methodology; a ‘pragmatic’ version of the KADS methodology was chosen. The methodology was used not only to formalise and guide the development of the KBS itself, but also to act as a framework for dividing the work between the two members of the project team. By gaining an understanding of the methodology, the staff member from Plastic Engineers was able to understand the knowledge analysis and KBS design documents produced by the consultant, and to use these documents to implement part of the KBS, both during the development of the system and when system maintenance was required.
The use of a methodology on this project had both benefits and weaknesses, which are discussed at the end of the paper.
Introduction
In January 1992, Plastic Engineers (Scotland) Ltd obtained funding from Scottish Enterprise to help them in the development of a knowledge based system (KBS) for fault diagnosis.
This paper describes the structure and components of a case-based scheduler named CBS-1 which is being created to demonstrate the feasibility and utility of case-based reasoning (CBR) for dynamic job-shop scheduling problems. The paper describes the characteristics of a specific real-world scheduling task used in the work on CBS-1, identifies major problems to consider, and gives arguments for and against the application of CBR. The functions of the components of the system are illustrated by examples. Finally, some existing case-based schedulers are compared with CBS-1.
INTRODUCTION
Scheduling is the allocation of resources, such as machines or human labour, to operations over time in order to achieve certain goals. In job-shop scheduling the goals to be achieved are the processing or production of discrete parts in several steps, each requiring several different resources. Dynamic scheduling is scheduling carried out simultaneously with the execution of the processes that are affected by the created schedules.
In the Interuniversitary Centre for CIM (IUCCIM) in Vienna the production process for remote controlled toy cars is used to demonstrate the main ideas in CIM. In this context the problem of scheduling incoming orders for toy cars into the ongoing production process arises. There are several reasons for the complexity of such a scheduling task.
There is a combinatorial explosion of the number of possible schedules (which must be checked for feasibility) in each problem dimension such as the number of machines and operations.
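The explosion is easy to demonstrate even on a toy instance (illustrative only; the job data below are invented, and a real job shop multiplies the count across machines). Exhaustively enumerating a single-machine sequencing problem already yields n! candidate schedules, each of which must be checked for feasibility:

```python
from itertools import permutations

# Toy single-machine instance: job -> (duration, deadline).
jobs = {'J1': (3, 9), 'J2': (2, 5), 'J3': (4, 11)}

def feasible(order):
    """A sequence is feasible if every job finishes by its deadline."""
    t = 0
    for j in order:
        duration, deadline = jobs[j]
        t += duration
        if t > deadline:
            return False
    return True

orders = list(permutations(jobs))
good = [o for o in orders if feasible(o)]
print(len(orders))  # 6 candidate sequences (3!) for only three jobs
print(good)
```

With 10 jobs the same enumeration would already visit 3,628,800 sequences, which is why heuristic approaches such as case-based reasoning become attractive.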
By
M. L. G. Shaw, Knowledge Science Institute University of Calgary Calgary, Alberta, Canada T2N 1N4,
B. R. Gaines, Knowledge Science Institute University of Calgary Calgary, Alberta, Canada T2N 1N4
A number of practical knowledge acquisition methodologies and tools have been based on the elicitation and analysis of repertory grids. These result in frames and rules that are exported to knowledge-based system shells. In the development of repertory grid tools, the original methodology has been greatly extended to encompass the data types required in knowledge-based systems. However, this has been done on a fairly pragmatic basis, and it has not been clear how the resultant knowledge acquisition systems relate to psychological, or computational, theories of knowledge representation. This paper shows that there is a close correspondence between the intensional logics of knowledge, belief and action developed in the personal construct psychology underlying repertory grids, and the intensional logics for term subsumption knowledge representation underlying KL-ONE-like systems. The paper gives an overview of personal construct psychology and its expression as an intensional logic describing the cognitive processes of anticipatory agents, and uses this to survey knowledge acquisition tools deriving from personal construct psychology.
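To give a concrete flavour of repertory grid analysis, the sketch below implements a standard construct-matching measure of the kind used in FOCUS-style clustering (a generic illustration; the grid data and construct names are invented, and this is not a specific tool from the paper). Two constructs match to the extent that they rate the same elements similarly:

```python
# Elements rated 1-5 on each bipolar construct.
grid = {
    'friendly-distant':  {'Alice': 1, 'Bob': 4, 'Carol': 2},
    'organised-chaotic': {'Alice': 2, 'Bob': 5, 'Carol': 1},
    'bold-timid':        {'Alice': 5, 'Bob': 1, 'Carol': 4},
}

def construct_match(c1, c2, elements, scale=5):
    """Percentage similarity: 100% when the ratings agree on every element."""
    dist = sum(abs(grid[c1][e] - grid[c2][e]) for e in elements)
    return 100 * (1 - dist / ((scale - 1) * len(elements)))

elements = ['Alice', 'Bob', 'Carol']
print(construct_match('friendly-distant', 'organised-chaotic', elements))  # 75.0
```

High matches of this kind are what grid tools turn into candidate implications and, ultimately, rules for export to knowledge-based system shells.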
PERSONAL CONSTRUCT PSYCHOLOGY
George Kelly was a clinical psychologist who lived from 1905 to 1967, published a two-volume work defining personal construct psychology in 1955 (Kelly, 1955), and went on to publish a large number of papers further developing the theory, many of which have been issued in collected form (Maher, 1969). Kelly was a keen geometer with experience in navigation and an interest in multi-dimensional geometry.
By
A. Ovalle, Groupe SIC (Integrated Cognitive Systems) Equipe de Reconnaissance des Formes et de Microscopie Quantitative Laboratoire TIM3 - Institut IMAG Bât. CERMO - BP 53X - 38041 Grenoble Cedex, FRANCE,
C. Garbay, Groupe SIC (Integrated Cognitive Systems) Equipe de Reconnaissance des Formes et de Microscopie Quantitative Laboratoire TIM3 - Institut IMAG Bât. CERMO - BP 53X - 38041 Grenoble Cedex, FRANCE
We describe a method for Multi-Agent System design which is assisted by two original typologies resulting from a deeper study of knowledge and reasoning. The first typology is formal in character while the second is technological. The purpose of the Formal Typology is the classification and structuring of knowledge and reasoning. The Technological Typology handles the parameters governing the reasoning intrinsic to Multi-Agent technology, not only at the individual level of the agent but also within a group of agents. The correspondence between these two typologies is made concrete through the presentation of the Multi-Agent generator MAPS (Multi-Agent Problem Solver) and the Multi-Agent system KIDS (Knowledge based Image Diagnosis System), devoted to Biomedical Image Interpretation.
Keywords
Second Generation Expert Systems, Multi-Agent System Design, Distributed Artificial Intelligence, Knowledge and Reasoning Modeling, Control, Biomedical Image Interpretation.
INTRODUCTION
Among knowledge based systems using artificial intelligence techniques, we are particularly interested in Multi-Agent systems, which belong to the second generation (systems using multiple reasoning schemes). The Multi-Agent paradigm results from distributed artificial intelligence approaches and makes it possible to overcome drawbacks encountered in the resolution of complex problems. The main idea of the Multi-Agent approach is the distribution of tasks and skills among intelligent entities that co-operate, pooling their knowledge and their expertise to attain a common aim (Ferber 88). In this way, not only multi-modal knowledge representation and the handling of multiple reasoning schemes are permitted, but also co-operative problem solving.
By
H. A. Tolba, CRIN-CNRS and INRIA-Lorraine Campus Scientifique - B.P. 239 54506 Vandœuvre–lès–Nancy Cedex, FRANCE,
F. Charpillet, CRIN-CNRS and INRIA-Lorraine Campus Scientifique - B.P. 239 54506 Vandœuvre–lès–Nancy Cedex, FRANCE,
J. -P. Haton, CRIN-CNRS and INRIA-Lorraine Campus Scientifique - B.P. 239 54506 Vandœuvre–lès–Nancy Cedex, FRANCE
Time is an important aspect of any intelligent knowledge representation. This has led to a rising need for reasoning about time in various applications of artificial intelligence such as process control or decision making. Different schemes for temporal information representation have been proposed so far. A natural way to refer to a temporal event consists in making references to a clock providing a quantitative or numerical representation of time, as well as several concepts such as duration and calendar. However, a clock reference is not always available or relevant. In such cases, a qualitative (symbolic) representation of time can be used to describe the situations in question.
In spite of the different representations of temporal information proposed, most are not completely satisfactory. Looking at existing work, Allen's representation [Allen 83] is very powerful for describing the relative positions of intervals. Vilain and Kautz proposed a subinterval algebra [Vilain & Kautz 86], and Ghallab and Mounir [Ghallab & Mounir 89], building on the same notion, proposed a model with symbolic relations within the framework of subinterval algebra. However, these models do not address the numerical aspects of time. On the other hand, the time map of Dean and McDermott [Dean & McDermott 87], Rit's geometrical model [Rit 86] and the temporal constraint networks of Dechter, Meiri and Pearl [Dechter, Meiri & Pearl 91] are designed to handle metric information and cope poorly with symbolic information.
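For readers unfamiliar with Allen's scheme, the following sketch (illustrative only; it is not the authors' system) derives which of the thirteen basic relations holds between two intervals once numeric endpoints are available, which is exactly the kind of bridge between metric and symbolic time that motivates the paper:

```python
def allen_relation(a, b):
    """Return the basic Allen relation between intervals a and b,
    each a (start, end) pair with start < end."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return 'before'
    if e1 == s2:
        return 'meets'
    if s1 == s2 and e1 == e2:
        return 'equal'
    if s1 == s2:
        return 'starts' if e1 < e2 else 'started-by'
    if e1 == e2:
        return 'finishes' if s1 > s2 else 'finished-by'
    if s2 < s1 and e1 < e2:
        return 'during'
    if s1 < s2 and e2 < e1:
        return 'contains'
    if s1 < s2 < e1 < e2:
        return 'overlaps'
    if s2 < s1 < e2 < e1:
        return 'overlapped-by'
    return 'after'

print(allen_relation((1, 3), (3, 6)))  # meets
print(allen_relation((2, 4), (1, 8)))  # during
```

Going the other way, from purely symbolic relations back to consistent numeric bounds, is the harder direction and the reason hybrid models are needed.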
Examples of the pattern classification problem (known variously as: pattern recognition, discriminant analysis, and pattern grouping) are widespread. In general such problems involve the need to assign objects to various groups, or classes, and include such applications as: (i) the assignment of production items to either defective or non-defective classes as based upon the results of tests performed on each part, (ii) the assignment of personnel to jobs as based upon their test scores and/or physical attributes, (iii) the assignment of an object detected by radar to either a friendly or unfriendly category, (iv) the categorization of investment opportunities into those that are attractive and those that are not, and so on. Early (scientific) efforts to model and solve the pattern classification problem utilized, for the most part, statistical approaches. In turn, these approaches usually rely upon the somewhat restrictive assumptions of multivariate normal distributions and certain types of (and conditions on) covariance matrices. More recent attempts have employed expert systems, linear programming (LP) and, in particular, neural networks. In this paper, we describe the development of an approach that combines linear programming (specifically, traditional linear programming and/or linear goal programming [Ignizio, 1982]) with neural networks, wherein the combined technique is itself monitored and controlled by an expert systems interface.
More specifically, we describe the use of expert systems and linear programming in the simultaneous design and training of neural networks for the pattern classification problem.
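As a minimal illustration of the neural-network side of such a classifier, here is a plain perceptron sketch on a two-class, linearly separable toy problem (the data are invented, and this does not reproduce the paper's combined LP/goal-programming/expert-system design):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Classic perceptron rule on 2-D points with labels in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:  # misclassified: nudge the boundary
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Linearly separable toy data: class +1 roughly where x1 + x2 > 3.
data = [((0, 1), -1), ((1, 1), -1), ((2, 3), 1), ((3, 2), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for (x1, x2), _ in data]
print(preds)  # [-1, -1, 1, 1] -- all four points classified correctly
```

An LP formulation can find such a separating hyperplane directly (or flag that none exists), which is one motivation for combining the two techniques under a supervising expert system.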
The development of expert systems is inherently uncertain and so involves a high degree of risk. This paper describes a project management method that helps manage this uncertainty. It has been tailored to the Client Centred Approach, an expert system development method that is being designed for use by small and medium-sized enterprises. This context implies that the management technique and its accompanying documentation must not overburden the resources of a smaller developer. The helix method of project management introduced in this paper represents a different view of Boehm's Spiral Model. It accepts that conventional linear project planning methods are not always suitable for developers of expert systems. Having accepted this, the helix method allows plans to be made for each development stage within the Client Centred Approach. We believe the Client Centred Approach is applicable wherever prototyping is used, and we contrast it with the methods being developed by KADS-II.
INTRODUCTION
This paper describes proposals for handling project management within the Client Centred Approach (CCA). The principles of the CCA are described in Basden [1989]. The thinking behind the approach, and its current state of development, are described in greater detail in Basden et al. [1991] and Watson et al. [1992]. Although the technique described here is applicable to any project that uses prototyping to develop a system (around forty-five per cent of all commercial expert system projects according to a recent survey [DTI, 1992]), it has been developed specifically for small and medium-sized enterprises (SMEs), rather than larger organisations.
By
C. Harris-Jones, BIS Information Systems, Ringway House, 45 Bull Street, Colmore Circus, Birmingham, B4 6AF,
T. Barret, BIS Information Systems, Ringway House, 45 Bull Street, Colmore Circus, Birmingham, B4 6AF,
T. Walker, Expert Systems Ltd, The Magdalen Centre, Oxford Science Park, Oxford, OX4 4GA,
T. Moores, Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET,
J. Edwards, Aston Business School, Aston University, Aston Triangle, Birmingham, B4 7ET
The last few years have seen a significant change in commercial KBS development. Organisations are now building KBS to solve specific business problems rather than simply to see what the technology can do. There has also been a move away from building KBS on stand-alone PCs towards using the corporate resources of networks, minicomputers and mainframe computers, and existing databases. As a result of these changes, two significant questions are now regularly asked by organisations developing, or interested in developing, KBS:
How can KBS be linked into existing systems to enhance their processing functions and make better use of data already held?
What methods can be used to help build commercial applications using KBS techniques?
The key to these questions is the use of an integrated approach to the development of all IT systems. There are many methods available for conventional systems development, such as Information Engineering, SSADM, Jackson and Yourdon. There are also a number of KBS methods available or under development such as KADS, KEATS, and GEMINI. However, commercial organisations with well established procedures for conventional development do not want to use two different methods side-by-side, nor do they wish to discard their current conventional development method and replace it with a method claiming to cover all aspects of conventional and KBS development. Organisations therefore require some way of integrating KBS methods into their existing methods.
By
X. Zhang, Knowledge Engineering Research Group School of Computing & Mathematical Sciences Oxford Polytechnic,
J. L. Nealon, Knowledge Engineering Research Group School of Computing & Mathematical Sciences Oxford Polytechnic,
R. Lindsay, Knowledge Engineering Research Group School of Computing & Mathematical Sciences Oxford Polytechnic
Abstract: Current intelligent user interfaces have two limitations: (i) They are domain specific and mainly built for existing database management systems, (ii) They are specific to the target systems for which they are constructed. However, user goals, which motivate interactions with a computer, are likely to be complicated and to require the use of multiple target systems in various domains. In this paper, we discuss the development of intelligent user interfaces which are not subject to the limitations identified. An architecture is proposed, the major function of which is the dynamic integration and intelligent use of multiple target systems relevant to a user's goals. Other important features of the proposed system include its theoretical orientation around relevance relationships, mental models and speech acts, and the introduction of “system experts” and “goal manager”. A prototype Intelligent Multifunctional User Interface, (IMUI), is briefly described which indicates that the proposed architecture is viable, the methodology is promising, and the theoretical ideas introduced are worthy of further investigation.
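The role of the proposed “goal manager” can be caricatured in a few lines (a toy sketch; the class names, goal names and expert registry are invented, not the IMUI implementation): a user goal is decomposed into sub-goals, each routed to whichever registered “system expert” declares it can handle that sub-goal:

```python
class SystemExpert:
    """Wraps one target system and declares which sub-goal types it handles."""
    def __init__(self, name, can_handle):
        self.name = name
        self.can_handle = can_handle  # set of sub-goal types
    def perform(self, subgoal):
        return f'{self.name} handled {subgoal}'

class GoalManager:
    """Routes each sub-goal of a user goal to a suitable system expert."""
    def __init__(self, experts):
        self.experts = experts
    def achieve(self, subgoals):
        results = []
        for g in subgoals:
            expert = next((e for e in self.experts if g in e.can_handle), None)
            if expert is None:
                raise ValueError(f'no target system can handle {g!r}')
            results.append(expert.perform(g))
        return results

experts = [SystemExpert('database', {'query'}), SystemExpert('mailer', {'send'})]
gm = GoalManager(experts)
print(gm.achieve(['query', 'send']))  # ['database handled query', 'mailer handled send']
```

The interesting work in the real architecture lies in inferring the sub-goals and relevance relationships from the dialogue, which this sketch deliberately omits.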
INTRODUCTION
Computer-based systems are coming to play an ever more important part in our society, and as they do so, they become increasingly complicated and difficult to use effectively. As a consequence, the need to develop flexible and versatile intelligent interfaces has become more crucial than ever.
What would an ideal interface look like, and how can such a system be designed and implemented? Most investigators would agree that it should behave like an intelligent human assistant who has expert knowledge both of user characteristics and requirements, and of target system(s).
By
P. Maher, Department of Mathematics and Computer Science University of Missouri - St. Louis, St. Louis, MO 63121 USA.,
O. Traynor, FB 3 Informatik und Mathematik, Universität Bremen, Bremen 33, Germany.
This paper describes and illustrates the use of a methodology suitable for the formal development of expert systems. It addresses the problems of verification and validation of expert systems in a realistic way, though the methods are not advocated as a general tool for expert system development. The framework described allows for both the specification of Knowledge and the specification of the Inference methods which provide the basis for deduction. A flexible and extensible environment for the development and testing of specific types of expert system is presented. Various tools and results are shown to be useful in determining properties of both the knowledge base and the inference system when these are developed within the proposed framework.
The framework is based on exploitation of the transformational model of software development in combination with techniques from algebraic specification.
INTRODUCTION
The development of expert systems within a formal development framework (see [Krieg-Brückner and Hoffmann 91]) can be seen as a significant advance in expert system technology. The benefits accrued from such an approach are substantial. In particular, the following are notable: a formal foundation is provided for reasoning about properties of the knowledge base and inference system; inductive and deductive methods are available to help both in the construction of the expert system and as tools for analysing the knowledge bases; and a well-defined language, with well-defined semantics, provides the basis for specifying both the expert system and the associated knowledge bases.
One of the principal difficulties in developing a distributed problem solver is how to distribute the reasoning task between the agents cooperating to find a solution.
We will propose the distributed logic programming language DLP as a vehicle for the design and implementation of distributed knowledge based systems. The language DLP combines logic programming with active objects.
We will show how object oriented modeling may be applied for the specification and implementation of a distributed diagnostic (medical) expert system. The example illustrates how the diagnostic process is distributed over the agents participating in the diagnosis according to the structure of the knowledge of that particular domain.
Logic programming offers a declarative way to solve problems in Artificial Intelligence. However, when implementing large (possibly distributed) systems, traditional software engineering problems such as modularization and the distribution of data and control reoccur. Cf. [Subrahmanyam, 1985].
Due to its declarative nature, logic programming has become popular for implementing knowledge-based systems. However, lacking adequate modularization facilities, logic programming languages such as Prolog fall short in providing the mechanisms necessary to specify the distribution of data and control.
Object oriented modeling
To tackle these problems, we suggest in this paper to embed the logic programming paradigm into an object oriented approach.
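The flavour of this object oriented distribution of knowledge can be sketched as follows (a toy Python sketch, not DLP; the medical rules and findings are invented): each agent encapsulates its own fragment of diagnostic knowledge and is consulted independently, so the division of the diagnostic task mirrors the structure of the domain:

```python
class Agent:
    """One diagnostic agent holding a private fragment of the rule base."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # conclusion -> set of required findings

    def diagnose(self, findings):
        """Return every conclusion whose required findings are all present."""
        return [c for c, req in self.rules.items() if req <= findings]

heart = Agent('cardiology', {'angina': {'chest pain', 'exertion'}})
lung = Agent('pulmonology', {'bronchitis': {'cough', 'sputum'}})

findings = {'chest pain', 'exertion', 'cough'}
diagnoses = [d for agent in (heart, lung) for d in agent.diagnose(findings)]
print(diagnoses)  # ['angina']
```

In DLP the agents would be active objects exchanging goals by message passing rather than being polled in a loop; the sketch shows only the partitioning of knowledge.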
By
J. W. Brahan, Institute for Information Technology National Research Council Ottawa Canada K1A 0R6,
B. Farley, Institute for Information Technology National Research Council Ottawa Canada K1A 0R6,
R. A. Orchard, Institute for Information Technology National Research Council Ottawa Canada K1A 0R6,
A. Parent, Institute for Information Technology National Research Council Ottawa Canada K1A 0R6,
C. S. Phan, Institute for Information Technology National Research Council Ottawa Canada K1A 0R6
Most expert systems perform a task on behalf of the user. The task usually involves gathering and analyzing data, and recommending or initiating the appropriate action. However, expert systems can also play an important role in showing the user how to perform a task. In this role, the expert system provides support whose importance gradually decreases as its knowledge base is transferred to the user. This category includes Help Systems, Coaching Systems, and Tutorial Systems. In this paper, we discuss the development of an Intelligent Advisor combining the three functions in a single system to assist the user in acquiring and refining the knowledge required to carry out a design task. The combined system provides a means of introducing a training facility as an integral part of the work environment. The primary goal of our project is the creation of a system in which the generic advisor components are identified along with the methodology required to adapt them to specific applications. The conceptual modelling phase of database design was chosen as the application domain in which to develop the system and to demonstrate feasibility. An initial prototype has been implemented, which illustrates the operation of the system in each of the three modes as applied to database modelling. The technology is currently being extended to a second application domain.
Introduction
ERMA (Entity-Relationship Modelling Advisor) is a knowledge-based system that serves as a consultant to the user of a computer-based design tool, providing advice as required.
By
S. Craw, Department of Computing Science University of Aberdeen Aberdeen AB9 2UE,
D. Sleeman, Department of Computing Science University of Aberdeen Aberdeen AB9 2UE,
N. Graner, Department of Computing Science University of Aberdeen Aberdeen AB9 2UE,
M. Rissakis, Department of Computing Science University of Aberdeen Aberdeen AB9 2UE,
S. Sharma, Department of Computing Science University of Aberdeen Aberdeen AB9 2UE
The Machine Learning Toolbox (MLT), an Esprit project (P2154), provides an integrated toolbox of ten Machine Learning (ML) algorithms. One distinct component of the toolbox is Consultant, an advice-giving expert system, which assists a domain expert to choose and use a suitable algorithm for his learning problem. The University of Aberdeen has been responsible for the design and implementation of Consultant.
Consultant's knowledge and domain are unusual in several respects. Its knowledge represents the integrated expertise of ten algorithm developers, whose algorithms offer a range of ML techniques, although some algorithms use fairly similar approaches. The lack of an agreed ML terminology was the initial impetus for an extensive, associated help system. From an MLT user's point of view, an ML beginner requires significant assistance with terminology and techniques, and can benefit from having access to previous, successful applications of ML to similar problems; in contrast, a more experienced user of ML does not want constant supervision. This paper describes Consultant, discusses the methods used to achieve the required flexibility of use, and compares Consultant's similarities and distinguishing features with more standard expert system applications.
INTRODUCTION
The Machine Learning Toolbox (MLT), an Esprit project (P2154), provides an integrated toolbox of ten Machine Learning (ML) algorithms. One distinct component of the toolbox is Consultant, an advice-giving expert system. It provides domain experts with assistance and guidance on the selection and use of tools from the toolbox, but it is specifically aimed at experts who are not familiar with ML and its design has focused on their needs.
By
B. R. Gaines, Knowledge Science Institute, University of Calgary Calgary, Alberta, Canada T2N 1N4.,
M. L. G. Shaw, Knowledge Science Institute, University of Calgary Calgary, Alberta, Canada T2N 1N4.
An intelligent learning data base (ILDB) system is an integrated learning system which implements automatic knowledge acquisition from data bases by providing formalisms for 1) translating standard data base information into a form suitable for use by its induction engines, 2) using induction techniques to produce knowledge from data bases, and 3) interpreting the knowledge produced efficiently to solve users' problems. Although a great deal of work on knowledge acquisition from data bases has been done, existing systems still fall far short of the requirements for building practical learning systems that learn from conventional data bases. A crucial requirement is more efficient learning algorithms, as realistic data bases are usually fairly large. Based on KEshell, dBASE3 and the low-order polynomial induction algorithm HCV, this paper presents a knowledge engineering shell, KEshell2, which implements the three phases of automatic knowledge acquisition from data bases in an integral way.
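The three phases can be caricatured in a few lines of Python (a toy sketch, not KEshell2 or HCV; the table, attributes and rules are invented): translated tuples feed a trivial induction step, whose rules are then interpreted to classify new cases:

```python
# Phase 1 (translation): data base tuples rendered as attribute-value records.
records = [
    {'temp': 'high', 'pump': 'on',  'fault': 'yes'},
    {'temp': 'high', 'pump': 'off', 'fault': 'yes'},
    {'temp': 'low',  'pump': 'on',  'fault': 'no'},
    {'temp': 'low',  'pump': 'off', 'fault': 'no'},
]

def induce(rows, target):
    """Phase 2: keep single attribute-value tests that perfectly predict target."""
    rules = []
    attrs = [a for a in rows[0] if a != target]
    for a in attrs:
        for v in {r[a] for r in rows}:
            matching = [r[target] for r in rows if r[a] == v]
            if len(set(matching)) == 1:
                rules.append((a, v, matching[0]))
    return rules

def classify(rules, case, default=None):
    """Phase 3: interpret the rules by applying the first one that matches."""
    for a, v, label in rules:
        if case.get(a) == v:
            return label
    return default

rules = induce(records, 'fault')
print(classify(rules, {'temp': 'high', 'pump': 'off'}))  # yes
```

Real induction engines must of course cope with noise, continuous attributes and tables far too large for such exhaustive checks, which is precisely the efficiency requirement the paper stresses.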
INTRODUCTION
Over the past twenty years data base research has evolved technologies that are now widely used in almost every computing and scientific field. However, many new advanced applications, including computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided software engineering (CASE), image processing, and office automation (OA), have revealed that traditional data base management systems (DBMSs) are inadequate, especially in the following cases [Wu 90b]:
Conventional data base technology has laid particular stress on dealing with large amounts of persistent and highly structured data efficiently and using transactions for concurrency control and recovery.