By
Jonathan Hassell, Human-Computer Interaction Group, Department of Computer Science, University of York, Heslington, York YO1 5DD, UK,
Michael Harrison, Human-Computer Interaction Group, Department of Computer Science, University of York, Heslington, York YO1 5DD, UK
Automated macro systems which apply re-use to a user's input are a possible solution to the problem of customising an interactive system to the needs of the user. More useful than simple re-use would be a system that detects general patterns in users' behaviour and encapsulates this knowledge for application in similar, yet unfamiliar, circumstances. We term this process generalisation. This paper outlines some issues involved in controlling generalisation and in presenting and interacting with the resulting macros, and specifies applicable heuristics. Finally, an architecture for an adaptive agent that performs the whole process is presented, with an example prototype operating on UNIX command-line interaction.
One example of demonstrational interfaces (Myers, 1991) — automated macro creation — has previously been shown by Greenberg (1990) and Crow & Smith (1992) to be a promising area for adaptive system research. Crow & Smith extended the simple re-use of previous command entries (the history/tool-based systems of Greenberg) from a single line to an inferred macro. Macros are a concept with which users are already familiar as a means of automation.
Re-use, however, is limited to situations corresponding exactly to those which have occurred before. Whilst it has been shown that such situations arise reasonably frequently for single-line re-use (Greenberg & Witten, 1993a; Greenberg & Witten, 1993b), a result which has not been investigated for multi-line re-use, both single-line and multi-line macro re-use break down in situations which differ even slightly from the original. In both cases the re-use system is of no help.
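To make the distinction concrete, the following minimal sketch (illustrative only; not the authors' system) shows how two history lines that differ in a single token might be generalised into a parameterised macro, so the template also applies in circumstances the user has not met before:

```python
def generalise(history):
    """Compare pairs of command lines; where two lines differ in
    exactly one whitespace-delimited token, emit a macro template
    with that token replaced by a parameter slot."""
    macros = set()
    for i, a in enumerate(history):
        for b in history[i + 1:]:
            ta, tb = a.split(), b.split()
            if len(ta) != len(tb):
                continue
            diffs = [k for k in range(len(ta)) if ta[k] != tb[k]]
            if len(diffs) == 1:  # exactly one varying token: generalise
                template = list(ta)
                template[diffs[0]] = "{arg}"
                macros.add(" ".join(template))
    return macros

history = ["lpr -Pps1 report.txt", "lpr -Pps1 draft.txt"]
print(generalise(history))  # {'lpr -Pps1 {arg}'}
```

Unlike pure re-use, the resulting template matches any future command of the same shape, whatever its argument.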
By
Francesmary Modugno, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA,
T R G Green, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK,
Brad A Myers, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
We present a new visual programming language and environment that serves as a form of feedback and representation in a Programming by Demonstration system. The language differs from existing visual languages because it explicitly represents data objects and implicitly represents operations by changes in data objects. The system was designed to provide non-programmers with programming support for common, repetitive tasks and incorporates some principles of cognition to assist these users in learning to use it. With this in mind, we analyzed the language and its editor along cognitive dimensions. The assessment provided insight into both strengths and weaknesses of the system, prompting a number of design changes. This demonstrates how useful such an analysis can be.
A visual shell (or desktop) is a direct manipulation interface to a file system. Examples include the Apple Macintosh desktop and the Xerox Star. Although such systems are easy to use, most do not support end-user programming. Pursuit is a visual shell aimed at providing programming capabilities in a way that is consistent with the direct manipulation paradigm.
To enable users to construct programs, Pursuit contains a Programming by Demonstration (PBD) system (Cypher, 1993). In a PBD system, users execute actions on real data and the underlying system attempts to construct a program (Myers, 1991). Such systems have limitations: feedback is often difficult to understand, disruptive or non-existent; and programs often have no representation for users to examine or edit. Pursuit addresses these problems by presenting the evolving program in a visual language while it is being constructed.
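As a toy illustration of the kind of inference a PBD system performs (not Pursuit's actual algorithm), the sketch below records concrete demonstrated actions and abstracts the varying data object into a loop:

```python
from dataclasses import dataclass

@dataclass
class Action:
    op: str       # operation demonstrated, e.g. "copy"
    target: str   # the concrete data object it was applied to

def infer_loop(demo):
    """If every demonstrated action applies the same operation to a
    different object, infer a loop over those objects: a toy stand-in
    for the inference a PBD system performs on real user actions."""
    ops = {a.op for a in demo}
    if len(ops) == 1 and len(demo) > 1:
        targets = [a.target for a in demo]
        return f"for f in {targets}: {ops.pop()}(f)"
    return None  # demonstration too varied to generalise

demo = [Action("copy", "paper.tex"), Action("copy", "notes.tex")]
print(infer_loop(demo))  # for f in ['paper.tex', 'notes.tex']: copy(f)
```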
By
Ben Anderson, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Michael Smyth, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Roger P Knott, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Marius Bergan, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
Julie Bergan, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK,
James L Alty, LUTCHI Research Centre, Department of Computer Studies, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK
By
Steve Benford, Department of Computer Science, The University of Nottingham, Nottingham NG7 2RD, UK,
Lennart E Fahlén, Swedish Institute of Computer Science (SICS), Box 1263, S-16428, Kista, Stockholm, Sweden
Synchronisation is a key issue for collaborative user interfaces. An examination of current approaches, in particular the WYSIWIS (What You See Is What I See) concept and the use of video as a communication medium, highlights a number of issues in this area, including the lack of a common spatial frame of reference, the lack of appropriate embodiment of users, and inflexible and rigid communication channels between users. The paper then proposes a new framework for designing collaborative user interfaces which addresses these issues. This framework is based on the notion of a common spatial frame within which embodied users are free to move autonomously, remaining casually aware of each other's activities. Embodiment is considered in terms of both individual viewpoints and actionpoints (e.g. telepointers) within the display space. We propose that, in many cases, synchronisation of the spatial frame is necessary but synchronisation of viewpoints and actionpoints may actually inhibit collaboration. We finish by describing some prototype systems which provide one (of possibly many) examples of how our framework might be employed: in this case, to create shared cooperative virtual environments.
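The framework's central distinction, a synchronised spatial frame versus unsynchronised per-user viewpoints, can be sketched as follows (a hypothetical illustration, not the authors' prototypes):

```python
from dataclasses import dataclass, field

@dataclass
class SharedFrame:
    """The common spatial frame: object positions are synchronised
    across all participants (modelled here as one shared copy)."""
    objects: dict = field(default_factory=dict)  # name -> (x, y, z)

@dataclass
class EmbodiedUser:
    """An embodied user: viewpoint and actionpoint (telepointer)
    are private state, deliberately not synchronised with others."""
    name: str
    viewpoint: tuple = (0.0, 0.0, 0.0)
    actionpoint: tuple = (0.0, 0.0, 0.0)

    def move(self, dx, dy, dz):
        x, y, z = self.viewpoint
        self.viewpoint = (x + dx, y + dy, z + dz)  # autonomous movement

frame = SharedFrame({"whiteboard": (1.0, 0.0, 2.0)})
alice, bob = EmbodiedUser("alice"), EmbodiedUser("bob")
alice.move(1.0, 0.0, 0.0)                # alice's view changes ...
assert bob.viewpoint == (0.0, 0.0, 0.0)  # ... bob's is unaffected
```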
Collaborative user interfaces, particularly shared workspaces, have been the focus of considerable research effort in recent years. Resulting systems include multi-user editors and drawing tools (Ellis, Gibbs & Rein, 1991; Foster & Stefik, 1986; Greenberg & Bohnet, 1991), shared screen systems and more specialised design surfaces (Ishii & Kobayashi, 1992). There has also been a growth in the use of multi-media technology to support communication and awareness between the users of such systems including conferencing systems (Sarin & Greif, 1985) and media-spaces (Gaver et al., 1992; Root, 1988).
By
Kee Yong Lim, School of Mechanical and Production Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 2263,
John Long, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
The paper illustrates the use of structured notations to support the specification of various aspects of a system design, such as organisational hierarchies, conceptual-level tasks, domain semantics and human-computer interactions. In contrast with formal or algebraic notations, graphical structured notations are communicated to users more easily. Thus, user feedback elicitation and design validation would be better supported throughout system development. It is expected that the structured notations illustrated in the paper could be used more widely for two reasons: namely, they support more specific task specifications, and they have now been incorporated into a structured human factors method. In addition, off-the-shelf computer-based support for the notations is emerging, e.g. PDF™.
Keywords: graphical structured notations, human factors specifications, structured human factors method.
General Requirements of a Notation for Human Factors Specification
Generally, an appropriate human factors notation should fulfil two pre-requisites, namely it should rectify the inadequacies of existing human factors notations, and accommodate additional specification demands arising from wider human factors involvement in system development. In particular, a notation should satisfy the following requirements:
a. Specificity. Current human factors specifications have been criticised for being insufficiently specific. This situation is further aggravated by the increasingly complex and sophisticated systems being designed. In response to these demands, human factors methods should be enhanced to include more powerful notations that support tighter design specification. For instance, in safety-critical system development, task specifications should be detailed enough to support design simulation, workload assessment and probabilistic human reliability assessment. Thus, Brooks' (1991) emphasis on task specifications that reveal the hierarchical structure and operational control of the user's task is especially pertinent. Hence, notational constructs should satisfy the demands of such design specifications;
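As a rough illustration of what such a specification might record (a hypothetical sketch, not the notation presented in the paper), a hierarchical task structure with explicit sequencing control can be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a hierarchical task specification: a name, a control
    construct governing the subtasks, and the subtasks themselves."""
    name: str
    control: str = "seq"  # "seq", "choice" or "parallel"
    subtasks: list = field(default_factory=list)

    def show(self, depth=0):
        print("  " * depth + f"{self.name} [{self.control}]")
        for t in self.subtasks:
            t.show(depth + 1)

withdraw = Task("withdraw cash", "seq", [
    Task("insert card"),
    Task("enter PIN"),
    Task("select amount", "choice", [
        Task("pick preset amount"),
        Task("type other amount")]),
    Task("take cash and card"),
])
withdraw.show()
```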
By
John Dowell, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK,
Ian Salter, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK,
Solaleh Zekrullahi, Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK
The demand for a more effective Air Traffic Management system, and the central role of the controller in that system, has focused attention on the design of the controller's interface. This paper presents an analysis of the task domain of Air Traffic Management. It demonstrates with a simulated system how the domain analysis can be used to model the controller's performance in the traffic management task. The use of this model in rationalising interface design issues is then illustrated. The analysis supports the general case for explicitly capturing the task domain in interface design.
The Need for Analysis of the Air Traffic Management Task Domain
The Operational Issue in Air Traffic Management
Increases in the volume of air traffic have consistently exceeded all predictions and now demand a more effective Air Traffic Management (ATM) system. Although the amount of air traffic over the UK has increased threefold in the last three decades, the public evidence points only to its increasing safety (NATS, 1988). Rather, the most pressing concern of the Civil Aviation Authority (CAA) is now the forecast 70% growth in demand on UK airspace over the next decade. This forecast increase is extremely problematic, since the UK system is already considered to be operating near capacity and bottlenecks are publicly visible (Jackson, 1993; John & Macalister, 1991). The same problem faces the US authorities where, even in the 1980s, delays and congestion were estimated to cost between 1 and 1.5 billion dollars per year (Kanafani, 1986).
If safety must not be compromised by further increases in air traffic volume, neither must ‘expedition’.
Many safety-critical applications rely upon complex interaction between computer systems and their users. When accidents occur, regulatory bodies are called upon to investigate the causes of user ‘error’ and system ‘failure’. Reports are drawn up so that the designers and operators of future systems will not repeat previous ‘mistakes’. These documents present the work of specialists who are drawn from many different technical disciplines: human factors; forensic investigation; engineering reconstruction; computer simulation; etc. The findings of these different experts are often separated into different sections. This creates a number of problems. Important evidence can be hidden within numerous appendices. The interaction between systems and users can be obscured by tortuous cross referencing schemes. There are occasional temporal ambiguities and inconsistencies between the different analyses. This paper presents ways in which formal methods can be exploited to address these problems. Mathematical notations provide means of representing and reasoning about the circumstances that lead to accidents in human machine systems. Executable logics can also be used to simulate event sequences. These simulations might be shown to other analysts. They can be used to encourage agreement on the course of events prior to more detailed investigations.
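To indicate what "executable" means here (a toy sketch, not the formal notation used in the paper), an event sequence can be replayed and a temporal ordering property checked mechanically:

```python
# Each event is (time, actor, description); the property checked is a
# simple temporal ordering: the warning must precede the operator's
# response. Real accident analyses would use a richer temporal logic.
events = [
    (0, "system",   "pressure warning raised"),
    (4, "operator", "relief valve opened"),
    (9, "system",   "pressure returned to normal"),
]

def precedes(trace, first, then):
    """True if some event matching `first` occurs strictly before
    some event matching `then` in the trace."""
    t1 = min((t for t, _, d in trace if first in d), default=None)
    t2 = min((t for t, _, d in trace if then in d), default=None)
    return t1 is not None and t2 is not None and t1 < t2

assert precedes(events, "warning", "valve opened")
```

A simulation of this kind can be shown to other analysts to encourage agreement on the course of events before more detailed investigation.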
Accident reports are intended to ensure that the faults of previous systems are not propagated into future applications. For example, the Presidential investigation into the Three Mile Island accident led the United States' Nuclear Regulatory Commission (NRC) to adopt a policy of minimal intervention (Pew, Miller & Feehrer, 1981). Whenever possible operators should not be required to intervene in order to preserve the safety of their system.
By
Darryn Lavery, Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Alistair Kilgour, Department of Computing and Electrical Engineering, Heriot-Watt University, Riccarton, Edinburgh EH14 4AS, UK,
Pete Sykes, Axon Networks Inc., Scottish Software Partners Centre, South Queensferry, Edinburgh EH30 9TG, UK
This paper describes a case study in the design and prototyping of a system to support shared use of application programs in an X Windows environment. The primary aim was to satisfy the requirements for remote observation at the Royal Observatory Edinburgh. The starting point for the software development was an existing tool, Shared-X, developed to support window-sharing in X Windows. The paper describes the analysis of requirements for safe and efficient shared control in the remote observing situation. Previous work in groupware and application sharing is reviewed, and the architecture of the target system is related to existing taxonomies. The modifications that were necessary to the Shared-X tool are described, in particular an improved and extended mechanism for floor control, which was found to be an important factor in the acceptability and usability of the system in the target domain. However, limitations in the underlying X Windows architecture, together with the lack of access to the Shared-X source code, prevented full implementation of the specification for shared telepointers. In conclusion, the work highlights key issues in collaborative system design: the importance of flexible and transparent mechanisms for floor control, the effective representation of status and control information in the user interface, the need for appropriate support mechanisms in the underlying window system (e.g. for multiple telepointers), and the increased complexity of evaluation with collaborative as opposed to single-user systems.
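A minimal sketch of token-style floor control, the general mechanism at issue (illustrative only; not the mechanism implemented in the modified Shared-X tool):

```python
class FloorControl:
    """Token-style floor control: one participant holds the floor
    at a time; others queue until it is released."""
    def __init__(self):
        self.holder = None
        self.queue = []

    def request(self, user):
        if self.holder is None:
            self.holder = user           # floor free: grant at once
        elif user != self.holder and user not in self.queue:
            self.queue.append(user)      # otherwise wait in turn

    def release(self, user):
        if user == self.holder:
            self.holder = self.queue.pop(0) if self.queue else None

floor = FloorControl()
floor.request("astronomer")          # granted immediately
floor.request("telescope operator")  # queued behind the holder
floor.release("astronomer")
assert floor.holder == "telescope operator"
```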
By
A Dutt, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
H Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK,
P Johnson, Department of Computer Science, Queen Mary and Westfield College, University of London, Mile End Road, London E1 4NS, UK
In HCI the aim of evaluation is to gather information about the usability or potential usability of a system. This paper is principally concerned with evaluating the effectiveness of two discount user inspection evaluation methods in identifying usability problems in a commercial recruitment database system with complex interface and system functionality. The two specific inspection methods investigated are heuristic evaluation and cognitive walkthrough. Several comparisons are made between the number, nature and severity of usability problems highlighted, the time needed to employ the methods and the ability to generate requirements for re-design. The results indicate that the methods are best considered as complementary and both should be employed in, but perhaps at different stages of, the design process.
The development of a successful interactive system depends on iterative design with early and continuous evaluation. However, industry's response to conducting evaluations has been patchy (Johnson & Johnson, 1989; Rosson, Maass & Kellogg, 1988). Many industrialists attribute this to the cost of employing evaluation methods and the expertise they demand. Another reason is the cumbersome and complex nature of evaluation approaches, especially task-analytic approaches such as TAG (Payne & Green, 1986), TAL (Reisner, 1981) and GOMS (Card, Moran & Newell, 1983). Additionally, evaluations are seen as providing information about what is unsatisfactory, but as less useful in generating information that can facilitate more usable designs and fewer re-design cycles. Researchers must therefore assess the effect of using current evaluation methods within the industrial development process, and develop future methodologies and tools that require only a limited training period and can be far more easily accommodated within the development process.
Analogy is an important factor in learning unfamiliar computer systems and problem solving when using those systems. Designers of computer systems can aid novice users by exploiting analogies and explicitly representing a model world with which the users are familiar as part of the user interface. Objects in the model world, and some operations that may be performed on them, are often analogous to those in the real world. We consider the qualitative reasoning approach to modelling people's knowledge of the real world and attempt to build qualitative models of objects and operations in the model world of a user interface. These models reveal features of existing systems that cannot be explained in terms of users' knowledge of the real world and suggest limits to direct engagement with on-screen objects.
Keywords: analogy, qualitative reasoning, direct engagement.
Introduction
Two principal paradigms have been employed in designing user interfaces to interactive computing systems: the conversation metaphor and the model world metaphor. In the conversation metaphor, users and systems engage in a dialogue, using languages of various complexities, about some unseen, but assumed, task domain. In the model world metaphor, the task domain is explicitly represented on-screen. Even with these direct manipulation interfaces, when users encounter them for the first time, as Carroll & Thomas (1982) suggest, by definition they do not have the knowledge required to use the system successfully. Instead, related knowledge is employed and used as a metaphor for the material being acquired.
By
Simon Buckingham Shum, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Nick Hammond, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK
The human-computer interaction (HCI) community is generating a large number of analytic approaches such as models of user cognition and user-centred design representations. However, their successful uptake by practitioners depends on how easily they can be understood, and how usable and useful they are. We present a framework which identifies four different ‘gulfs’ between HCI modelling and design techniques and their intended users. These gulfs are potential opportunities to support designers if techniques can be encapsulated in appropriate forms. Use of the gulfs framework is illustrated in relation to three very different strands of work:
i. representing HCI design spaces and design rationale;
ii. modelling user cognition; and
iii. modelling interactive system behaviour.
We summarise what is currently known about these gulfs, report empirical investigations showing how these gulfs can be ‘bridged’, and describe plans for further investigations. We conclude that it is desirable for practitioners' requirements to shape analytic approaches much earlier in their development than has been the case to date. The work reported in this paper illustrates some of the techniques which can be recruited to this end.
The human-computer interaction (HCI) community is generating a large number of analytic, usability-oriented approaches such as cognitive modelling and user-centred design representations. Three critical factors will determine whether any of these approaches makes an impact on design practice: their intelligibility to practitioners, their utility, and their usability.
By
Phil Gray, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
David England, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Steve McGowan, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK
Time is one of the most vital properties of an interface from a user's point of view, and the TAU project aims to explore how temporal properties of user interfaces affect their usability. This paper describes the XUAN notation for the specification of temporal behaviour. The notation also provides the basis for a software tool allowing not only specification but also rapid instantiation and modification of (small) user interfaces with defined temporal behaviour. This in turn will support rapid experimentation in which the temporal aspects of interfaces presented to users are varied. In this paper we describe the features we have added to the UAN in creating XUAN in order to express the temporal properties of tasks.
Keywords: task description language, response time, specification.
Introduction
Time is one of the most vital properties of an interface from a user's point of view, yet it is an aspect of interaction that has been neglected by HCI theorists and practitioners. Work by Teal & Rudnicky (1992) has shown that users change their interaction strategies in response to varying response delays. This change in strategy is not accounted for in Norman's theory of action (Norman, 1986) or GOMS (Card, Moran & Newell, 1983). The use of multimedia systems and CSCW systems will mean that people are faced increasingly with time-varying interactions. Our work in the TAU project provides an experimental basis for exploring issues of time in complex interactions.
Informally, we know that if mouse tracking is too slow, using the mouse becomes almost impossible.
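The kind of temporal annotation at stake can be sketched as follows (an illustration of the idea only, not XUAN's actual syntax): a task step carries a bound on acceptable response time, so the same task can be instantiated with different temporal behaviours in an experiment.

```python
from dataclasses import dataclass

@dataclass
class TimedStep:
    """A task step annotated with a bound on acceptable system
    response time."""
    action: str
    max_response_ms: int

def check(step, observed_ms):
    ok = observed_ms <= step.max_response_ms
    print(f"{step.action}: {observed_ms} ms "
          f"({'ok' if ok else 'exceeds bound'})")

drag = TimedStep("drag file icon to folder", max_response_ms=100)
check(drag, 40)    # ok
check(drag, 350)   # exceeds bound: users may change strategy
```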
By
Alan Conway, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland,
Tony Veale, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland
This paper describes a linguistically motivated approach to synthesising animated sign language. Our approach emphasises the importance of the internal, phonological structure of signs. Representing this level of structure results in greatly reduced lexicon size and more realistic signed output, a claim which is justified by reference to sign linguistics and by examples of sign language structure. We outline a representation scheme for phonological structure and a synthesis system which uses it to address these concerns.
Keywords: deaf sign language, phonological structure, human animation.
Introduction
The sign languages used by the deaf are a striking example of the diversity of human communication. On the surface, visual-gestural languages appear entirely dissimilar to verbal languages. It is a common misconception that signs are a form of pantomime and that they cannot convey the same range of abstract meanings as words. However, research has shown that this is entirely untrue (Klima & Bellugi, 1979). Sign languages are languages in the full sense of the word with all the expressive power of verbal languages.
In this paper we present an approach to the synthesis of animated sign language which focuses on the internal structure of signs. Several authors have discussed the translation of verbal language into sign language and the visual presentation of sign language via 3D graphics (Holden & Roy, 1992; Lee & Kunii, 1992; Patten & Hartigan, 1993). However, these authors seem to regard the sign as a unit which requires no further analysis. Sign linguists tell us that signs have internal structure and are built from more fundamental units. We argue that representing this level of structure in a synthesis system is essential for the synthesis of native sign languages.
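The payoff of representing sub-sign structure can be sketched as follows, using the classic phonological parameters of handshape, location and movement (the sign entries here are invented for illustration and are not from the authors' lexicon):

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """A sign decomposed into phonological parameters: the
    synthesiser stores each primitive once, so the lexicon holds
    parameter combinations rather than whole-sign animations."""
    gloss: str
    handshape: str   # e.g. "flat-B"
    location: str    # e.g. "chin", "chest"
    movement: str    # e.g. "arc-forward", "circle"

# Hypothetical entries: both signs reuse one handshape primitive,
# so only the differing parameters need separate animation data.
lexicon = [
    Sign("THANK-YOU", "flat-B", "chin",  "arc-forward"),
    Sign("PLEASE",    "flat-B", "chest", "circle"),
]
handshapes = {s.handshape for s in lexicon}
print(handshapes)  # {'flat-B'}: one stored handshape serves both signs
```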
By
Angel R Puerta, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
Henrik Eriksson, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
John H Gennari, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
Mark A Musen, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA
Researchers in the area of automated design of user interfaces have shown that the layout of an interface can, in many cases, be generated from the application's data model using an intelligent program that applies design rules. The specification of interface behavior, however, has not been automated in the same manner, and remains mostly a programmatic task. Mecano is a model-based user-interface development environment that extends the notion of automating interface design from data models. Mecano uses a domain model — a high-level knowledge representation that significantly augments the expressiveness of a data model — to generate automatically both the static layout and the dynamic behavior of an interface. Mecano has been applied successfully to completely generate the layout and the dynamic behavior of relatively large and complex, domain-specific, form- and graph-based interfaces for medical applications and several other domains.
One of the areas receiving increased interest from researchers is model-based user interface development. This emerging technology is centered around the premise that a declarative interface model can serve as the basis for building interface development environments. The model-based approach facilitates the automation of the design and implementation of user interfaces.
In addition, researchers have shown that an application's data model can be used effectively to generate the static layout of an application's interface (de Baar, Foley & Mullet, 1992; Janssen, Weisbecker & Ziegler, 1993). However, data models have not been applied to the generation of interface behavior specifications.
In this paper, we present Mecano, a model-based interface development environment that extends the concept of generating interface specifications from data models.
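As a rough illustration of layout generation from a declarative data model (illustrative only; neither the rules nor the model format are Mecano's), design rules can map each field type to a widget:

```python
# Map each field type in a declarative data model to a widget via a
# design rule, then emit a form layout.
WIDGET_RULES = {
    "string":  "text field",
    "boolean": "check box",
    "enum":    "drop-down list",
    "date":    "date picker",
}

patient_model = {
    "name":       {"type": "string"},
    "diabetic":   {"type": "boolean"},
    "blood_type": {"type": "enum", "values": ["A", "B", "AB", "O"]},
}

def generate_layout(model):
    return {f: WIDGET_RULES[spec["type"]] for f, spec in model.items()}

print(generate_layout(patient_model))
# {'name': 'text field', 'diabetic': 'check box',
#  'blood_type': 'drop-down list'}
```

A domain model, as the abstract notes, goes beyond such a data model by carrying enough knowledge to drive the interface's dynamic behavior as well as its layout.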