Analogy is an important factor in learning unfamiliar computer systems and in problem solving when using those systems. Designers of computer systems can aid novice users by exploiting analogies and explicitly representing, as part of the user interface, a model world with which users are familiar. Objects in the model world, and some of the operations that may be performed on them, are often analogous to those in the real world. We consider the qualitative reasoning approach to modelling people's knowledge of the real world and attempt to build qualitative models of objects and operations in the model world of a user interface. These models reveal features of existing systems that cannot be explained in terms of users' knowledge of the real world, and suggest limits to direct engagement with on-screen objects.
Keywords: analogy, qualitative reasoning, direct engagement.
Introduction
Two principal paradigms have been employed in designing user interfaces to interactive computing systems: the conversation metaphor and the model world metaphor. In the conversation metaphor, users and systems engage in a dialogue, using languages of various complexities, about some unseen, but assumed, task domain. In the model world metaphor, the task domain is explicitly represented on-screen. Even with these direct manipulation interfaces, users encountering them for the first time, as Carroll & Thomas (1982) suggest, by definition lack the knowledge required to use the system successfully. Instead, related knowledge is employed and used as a metaphor for the material being acquired.
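To make the model world idea concrete, the sketch below (ours, not the authors') gives a qualitative model of one familiar on-screen object, a wastebasket, whose 'fullness' is a qualitative quantity taking ordered landmark values rather than numbers; the class and level names are illustrative only.

    # A rough sketch (ours, not the authors') of a qualitative model of an
    # on-screen wastebasket: 'fullness' takes ordered landmark values.
    EMPTY, PART_FULL, FULL = "empty", "part-full", "full"
    LEVELS = [EMPTY, PART_FULL, FULL]

    class Wastebasket:
        def __init__(self):
            self.level = EMPTY  # qualitative value, not a count of items

        def add_document(self):
            # Adding a document moves the level one landmark towards FULL,
            # just as it would for a real wastebasket.
            if self.level != FULL:
                self.level = LEVELS[LEVELS.index(self.level) + 1]

        def empty_basket(self):
            # Here the analogy can break down: on some systems emptying is
            # irreversible, unlike a real bin whose contents still exist.
            self.level = EMPTY

    basket = Wastebasket()
    basket.add_document()
    print(basket.level)  # part-full

The final comment marks the kind of mismatch the paper's models are intended to expose: operations in the model world whose behaviour real-world knowledge cannot predict.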
By
Simon Buckingham Shum, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Nick Hammond, Human-Computer Interaction Group, Department of Psychology, University of York, Heslington, York YO1 5DD, UK
The human-computer interaction (HCI) community is generating a large number of analytic approaches such as models of user cognition and user-centred design representations. However, their successful uptake by practitioners depends on how easily they can be understood, and how usable and useful they are. We present a framework which identifies four different ‘gulfs’ between HCI modelling and design techniques and their intended users. These gulfs are potential opportunities to support designers if techniques can be encapsulated in appropriate forms. Use of the gulfs framework is illustrated in relation to three very different strands of work:
i. representing HCI design spaces and design rationale;
ii. modelling user cognition; and
iii. modelling interactive system behaviour.
We summarise what is currently known about these gulfs, report empirical investigations showing how these gulfs can be ‘bridged’, and describe plans for further investigations. We conclude that it is desirable for practitioners' requirements to shape analytic approaches much earlier in their development than has been the case to date. The work reported in this paper illustrates some of the techniques which can be recruited to this end.
The human-computer interaction (HCI) community is generating a large number of analytic, usability-oriented approaches such as cognitive modelling and user-centred design representations. Three critical factors will determine whether any of these approaches makes an impact on design practice: their intelligibility to practitioners, their utility, and their usability.
By
Phil Gray, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
David England, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK,
Steve McGowan, GIST (Glasgow Interactive Systems cenTre), Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Hillhead, Glasgow G12 8QQ, UK
Time is one of the most vital properties of an interface from a user's point of view, and the TAU project aims to explore how temporal properties of user interfaces affect their usability. This paper describes the XUAN notation for the specification of temporal behaviour. The notation also provides the basis for a software tool allowing not only specification but also rapid instantiation and modification of (small) user interfaces with defined temporal behaviour. This in turn will support rapid experimentation with users in which temporal aspects of interfaces are varied. In this paper we describe the features we have added to the UAN in creating XUAN in order to express the temporal properties of tasks.
Keywords: task description language, response time, specification.
Introduction
Time is one of the most vital properties of an interface from a user's point of view, yet it is an aspect of interaction neglected by HCI theorists and practitioners. Work by Teal & Rudnicky (1992) has shown that users change their interaction strategies in response to varying response delays. This change in strategy is not accounted for in Norman's theory of action (Norman, 1986) or in GOMS (Card, Moran & Newell, 1983). The growing use of multimedia and CSCW systems means that people will increasingly be faced with time-varying interactions. Our work in the TAU project provides an experimental basis for exploring issues of time in complex interactions.
Informally we know that if mouse tracking is too slow, using the mouse becomes almost impossible.
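As a rough illustration of the kind of temporal property at stake (a hypothetical sketch in plain Python, not actual XUAN notation; all names and thresholds are invented for the example), each task step below pairs a user action with a bound on acceptable system response time, and a trace of observed delays can be checked against those bounds.

    # Hypothetical sketch, not XUAN syntax: task steps carry explicit
    # temporal constraints that an observed trace can violate.
    from dataclasses import dataclass

    @dataclass
    class Step:
        user_action: str
        system_response: str
        max_delay_ms: int  # the temporal constraint attached to the step

    task = [
        Step("press button over icon", "icon highlights", 100),
        Step("drag icon", "icon tracks pointer", 50),  # tracking must be tight
        Step("release button", "icon drops in place", 200),
    ]

    def violations(observed_delays_ms, task):
        """Return the actions whose observed response delay broke its bound."""
        return [s.user_action
                for s, t in zip(task, observed_delays_ms)
                if t > s.max_delay_ms]

    print(violations([80, 120, 150], task))  # ['drag icon']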
By
Alan Conway, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland,
Tony Veale, Hitachi Dublin Laboratory, O'Reilly Institute, Trinity College, Dublin 2, Ireland
This paper describes a linguistically motivated approach to synthesising animated sign language. Our approach emphasises the importance of the internal, phonological structure of signs. Representing this level of structure results in greatly reduced lexicon size and more realistic signed output, a claim which is justified by reference to sign linguistics and by examples of sign language structure. We outline a representation scheme for phonological structure and a synthesis system which uses it to address these concerns.
Keywords: deaf sign language, phonological structure, human animation.
Introduction
The sign languages used by the deaf are a striking example of the diversity of human communication. On the surface, visual-gestural languages appear entirely dissimilar to verbal languages. It is a common misconception that signs are a form of pantomime and that they cannot convey the same range of abstract meanings as words. However, research has shown that this is entirely untrue (Klima & Bellugi, 1979). Sign languages are languages in the full sense of the word with all the expressive power of verbal languages.
In this paper we present an approach to the synthesis of animated sign language which focuses on the internal structure of signs. Several authors have discussed the translation of verbal language into sign language and the visual presentation of sign language via 3D graphics (Holden & Roy, 1992; Lee & Kunii, 1992; Patten & Hartigan, 1993). However, these authors seem to regard the sign as a unit which requires no further analysis. Sign linguists tell us that signs have internal structure and are built from more fundamental units. We argue that representing this level of structure in a synthesis system is essential for the synthesis of native sign languages.
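To illustrate the level of structure being argued for, the sketch below (ours, not the authors' representation scheme) decomposes signs into the classic phonological parameters of handshape, location and movement, so that distinct signs can share animation primitives; the specific values are illustrative.

    # Illustrative decomposition: signs built from a small shared inventory
    # of handshapes, locations and movements, so the lexicon stores
    # combinations rather than whole, unanalysed gestures.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sign:
        handshape: str  # from a small closed set, e.g. "B", "5", "fist"
        location: str   # e.g. "chin", "chest", "neutral-space"
        movement: str   # e.g. "arc-forward", "circular", "contact"

    # Two distinct signs reuse the same handshape and movement primitives,
    # which keeps the lexicon small and the animation components reusable.
    sign_a = Sign(handshape="B", location="chin", movement="arc-forward")
    sign_b = Sign(handshape="B", location="chest", movement="arc-forward")

    shared = {sign_a.handshape, sign_a.movement} & {sign_b.handshape, sign_b.movement}
    print(shared)  # {'B', 'arc-forward'}: primitives animated once, used twice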
By
Angel R Puerta, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
Henrik Eriksson, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
John H Gennari, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA,
Mark A Musen, Medical Computer Science Group, Knowledge Systems Laboratory, Departments of Medicine and Computer Science, Stanford University, Stanford, CA 94305-5479, USA
Researchers in the area of automated design of user interfaces have shown that the layout of an interface can, in many cases, be generated from the application's data model using an intelligent program that applies design rules. The specification of interface behavior, however, has not been automated in the same manner, and is mostly a programmatic task. Mecano is a model-based user-interface development environment that extends the notion of automating interface design from data models. Mecano uses a domain model — a high-level knowledge representation that significantly augments the expressiveness of a data model — to generate automatically both the static layout and the dynamic behavior of an interface. Mecano has been applied successfully to completely generate the layout and the dynamic behavior of relatively large and complex, domain-specific, form- and graph-based interfaces for medical applications and several other domains.
One of the areas receiving increased interest from researchers is model-based user interface development. This emerging technology is centered on the premise that a declarative interface model can be used as a basis for building interface development environments. The model-based approach facilitates the automation of the design and implementation of user interfaces.
In addition, researchers have shown that an application's data model can be used effectively to generate the static layout of an application's interface (de Baar, Foley & Mullet, 1992; Janssen, Weisbecker & Ziegler, 1993). However, data models have not been applied to the generation of interface behavior specifications.
In this paper, we present Mecano, a model-based interface development environment that extends the concept of generating interface specifications from data models.
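As a simplified illustration of the general idea of generating layout from a model (the rules shown are invented for this example, not Mecano's actual design rules), the sketch below maps the attributes of a small medical data model to widgets by type.

    # Illustrative type-driven design rules, not Mecano's: each attribute of
    # the model is mapped to a widget for the generated form.
    domain_model = {
        "patient_name": {"type": "string"},
        "age":          {"type": "integer", "range": (0, 120)},
        "diagnosis":    {"type": "enum", "values": ["flu", "asthma", "other"]},
    }

    def choose_widget(attr):
        """Pick a widget for one attribute using simple type-driven rules."""
        if attr["type"] == "enum" and len(attr["values"]) <= 5:
            return "radio buttons"
        if attr["type"] == "enum":
            return "drop-down list"
        if attr["type"] == "integer" and "range" in attr:
            return "bounded slider"
        return "text field"

    for name, attr in domain_model.items():
        print(f"{name}: {choose_widget(attr)}")

A domain model, as the abstract notes, goes further than this kind of data model: it also carries the knowledge needed to generate the interface's dynamic behavior, not just its layout.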
By
Richard M Young, MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK,
Gregory D Abowd, College of Computing, Georgia Institute of Technology, 801 Atlantic Drive, Atlanta, GA 30332-0280, USA
Successful interface design respects constraints stemming from a number of diverse domains analysed by different disciplines. Modelling techniques exist within the individual disciplines, but there is a need for ways to weave together different techniques to provide an integrated analysis of interface design issues from multiple perspectives. We illustrate the relations and interplay between six different modelling techniques — two for system modelling, two for user modelling, one for interaction modelling, and one for design modelling — applied to a shared design scenario concerning the provision of an Undo facility for a collaborative editor. The resulting multi-perspective analysis provides a depth of understanding and a breadth of scope beyond what can be achieved by any one technique alone.
Keywords: user modelling, system modelling, design rationale, interaction analysis, multi-disciplinary analysis, scenario analysis, undo, multi-user, editing.
Introduction
Successful interface design requires the satisfaction of a diverse set of constraints stemming from different domains. One of the factors making interface design so challenging is the number and diversity of those domains, and the different disciplines that study each. Relevant domains include that of the computer, within which are the disciplines of computer science and software engineering; of the user, studied by disciplines such as psychology; of work, the topic of sociology and anthropology and other disciplines; and of design itself.
Modelling techniques that can contribute to interface design exist in each of these domains. However, any one of these approaches tells only part of the story, and covers only some of the issues. There is a pressing need to combine modelling techniques derived from different disciplines and reflecting different perspectives in order to provide analyses with the scope and the depth adequate for guiding design.
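The paper's shared scenario, an Undo facility for a collaborative editor, shows why such integration matters. As a minimal sketch of the underlying design issue (ours, not taken from the paper), note that with a shared edit history even the referent of 'undo' is ambiguous between users:

    # Ours, not from the paper: with a shared history, 'undo' is ambiguous
    # between the last action globally and the last action by the requester.
    history = [("alice", "insert 'x'"),
               ("bob", "delete line 3"),
               ("alice", "insert 'y'")]

    def undo_global(history):
        """Undo the most recent action by anyone."""
        return history[-1]

    def undo_own(history, user):
        """Undo the requesting user's own most recent action."""
        return next(h for h in reversed(history) if h[0] == user)

    print(undo_global(history))      # ('alice', "insert 'y'")
    print(undo_own(history, "bob"))  # ('bob', 'delete line 3')

Which semantics is right is not a question any single modelling technique settles alone; it touches the system, the users, and the collaborative work setting at once.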
Systems analysts have a number of techniques at their disposal when capturing or generating the requirements for a system. One of the most commonly used is the interview. Interviewing users and other members of the client organisation is often fraught with difficulty: social and communicational barriers may prove difficult to overcome, especially if the level of contact between developers and users is kept to a minimum. Poor interview technique, failure to notice incorrect implicit or unspoken assumptions, and the misinterpretation of interview data can lead to incorrect requirements or incomplete specifications. This paper describes a technique for developing a collaborative visual representation of information gathered during the interview process, which enhances understanding between participants and enriches the information gathered. The method combines the manipulation of graphical objects with informal discussions, which are captured on cassette or video recordings. Graphical representation objects — representing the groups, procedures, tools and products that exist in the interviewee's experience — provide a standard, structured means of visual expression. Recording walkthroughs and discussions of the results keeps note-making to a minimum and helps to reduce the social distance between the participants. A description of the four main stages of the technique is presented, along with supporting material outlining why the technique was developed and describing how it has been used in organisational case studies. The paper concludes with an assessment of the effectiveness of the technique and suggests how it could be tailored to support requirements capture for system design.
Keywords: problems in communication, interviews, visual thinking, visual description, system design.
Introduction
This paper describes an interview technique developed to address the communication barriers that can arise between analysts and domain experts.
By
Howell O Istance, Imaging and Displays Research Group, Department of Computing Science, De Montfort University, Leicester, UK,
Peter A Howarth, Vision and Lighting Research Group, Department of Human Sciences, Loughborough University of Technology, Loughborough, Leicestershire LE11 3TU, UK
This paper examines the issues surrounding the use of an eyetracker, providing eye-movement data, as a general-purpose input device for graphical user interfaces. Interacting with computers via eye movements is not in itself new; however, previous work in the area has been directed towards interaction with purpose-built software which can take into account device limitations such as accuracy. This work investigates how one can interact with unmodified graphical interface software which normally requires mouse and/or keyboard input. The results of three experiments are discussed, comparing performance between the eyetracker and the mouse, and between different ways of emulating mouse button presses using the eyetracker data. The experiments as a whole cover a range of tasks, from simple button presses to the more complex and demanding operation of selecting text, and they indicate the feasibility of using the eyes to control computers.
Benefits of Controlling Graphical User Interfaces by Eye
Overview
The use of the eyes as a primary means of controlling input is appealing for a number of reasons.
First, it can be considered as a ‘natural’ mode of input and by-passes the need for learned hand-eye co-ordination to effect operations such as object selection. The user simply looks at a screen object they wish to select rather than using a hand-held pointing device, such as a mouse, to position a screen cursor over the object.
Second, one can expect performance benefits. If a user need only look at an object to acquire it, rather than having additionally to control and position a cursor by hand, speed of selection will be increased.
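One widely used way of turning gaze into a selection, sketched below with assumed thresholds (this is a generic 'dwell click', not necessarily the authors' emulation scheme), is to fire a button press when fixation stays within a small radius for a set duration.

    # Generic dwell-click emulation; DWELL_MS and RADIUS_PX are assumed.
    DWELL_MS = 500   # how long gaze must hold steady
    RADIUS_PX = 30   # how tightly it must hold

    def dwell_click(samples):
        """samples: list of (t_ms, x, y). Return click position or None."""
        start = 0
        for i, (t, x, y) in enumerate(samples):
            t0, x0, y0 = samples[start]
            if (x - x0) ** 2 + (y - y0) ** 2 > RADIUS_PX ** 2:
                start = i              # gaze moved: restart the dwell
            elif t - t0 >= DWELL_MS:
                return (x0, y0)        # held steady long enough: click here
        return None

    gaze = [(0, 100, 100), (200, 105, 102), (400, 103, 99), (600, 101, 101)]
    print(dwell_click(gaze))  # (100, 100)

The choice of dwell threshold is the crux: too short and every glance becomes a press (the classic 'Midas touch' problem); too long and the speed advantage over the mouse is lost.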
By
Andrew F Monk, Department of Psychology, University of York, Heslington, York YO1 5DD, UK,
Martin B Curry, Sowerby Research Centre, British Aerospace plc, FPC 267, Filton, Bristol BS12 7QW, UK
A description of the high-level structure of a user interface is an important part of any system specification. Currently the most common way of thinking about and recording this part of the design is through storyboards and verbal descriptions; these may be imprecise and are difficult to evaluate. Action Simulator allows a designer to build simple models of the high-level behaviour of the user interface. The models are easy to read and can be executed to give a dynamic view of the design. This makes it possible to ‘run through’ the actions needed to complete the users' work. A procedure for characterising the users' work that is suitable for this purpose is also sketched out in the paper. Action Simulator consists of an Excel spreadsheet and associated macros and is publicly available.
Keywords: dialogue model, task model, work objective, decomposition, scenario, system behaviour, specification, spreadsheet.
The Need for Abstract Dialogue Models
The design of software, like any other undertaking in engineering, involves the construction of a specification that includes models of various kinds. The reason engineers construct a blueprint or specification before building the artefact itself is that the latter is difficult to change; so, between gathering requirements and implementation, a specification is built that is easy to change. Analysis and evaluation of the specification enable improvements to be made before implementation begins. Also like other engineering projects, software is extremely complex and therefore difficult to reason about. For this reason engineers build models that concentrate on some aspects of the design and abstract across others.
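To give a flavour of what such an executable dialogue model looks like (a minimal sketch in Python rather than the paper's Excel spreadsheet and macros; the states and actions are invented for the example), the model below lists state transitions and lets a usage scenario be ‘run through’ against them.

    # Minimal executable dialogue model: states, user actions, transitions.
    transitions = {
        ("idle", "press record"):    "recording",
        ("recording", "press stop"): "idle",
        ("idle", "press play"):      "playing",
        ("playing", "press stop"):   "idle",
    }

    def run_through(start, actions):
        """Run a scenario against the model, flagging unavailable actions."""
        state = start
        for action in actions:
            if (state, action) not in transitions:
                return f"'{action}' is not available in state '{state}'"
            state = transitions[(state, action)]
        return f"scenario ends in state '{state}'"

    print(run_through("idle", ["press record", "press stop", "press play"]))
    print(run_through("idle", ["press play", "press record"]))

Executing a scenario against the abstract model, as in the second call above, is exactly the kind of early evaluation a static storyboard cannot provide.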
Assessment contributes to the educational process of students but only a small fraction of the full potential is typically realized. The primary impediment to realizing greater benefit is the infeasibility of implementing more effective alternatives in the resource-limited settings typical of modern educational environments. We are developing a system architecture that exploits hypermedia technology to overcome serious limitations of traditional assessment methods.
The architecture addresses the design of cost-effective confidence-measuring and performance-testing assessment vehicles using hypermedia-based student-system interaction. In this paper we describe the conceptual foundation, its embodiment in prototypes, and preliminary results from classroom tests.
The educational experience can be enhanced by using assessment methods as techniques for evaluation and as guides for instructors and administrators in curriculum design and teaching methods (Airasian, 1991). Unfortunately, standardized assessment methods do not discriminate between finer-grained states of knowledge, nor do they adequately reflect the ability of students to apply what they have learned. In addition, since the assessment instrument significantly influences instruction, alternative assessment methods are needed to better address fundamental educational goals. Past attempts to address these problems and goals on a large scale using traditional technology have proven infeasible, primarily due to the high costs of providing adequate, standardized materials and controlled, responsive environments. In this paper we present alternatives that exploit the characteristics of modern hypermedia-capable computer systems to achieve the desired goals in a cost-effective way.
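As a toy illustration of what confidence-measuring assessment means (our sketch; the scoring rule is invented, not the system's), the student reports an answer together with a confidence level, and the score rewards calibration rather than bare correctness.

    # Invented scoring rule for illustration: confident errors are penalised
    # more heavily than admitted doubt, so calibration matters.
    def score(correct, confidence):
        """confidence in [0, 1]."""
        return confidence if correct else -2 * confidence

    responses = [(True, 0.9), (False, 0.8), (True, 0.3)]
    print(round(sum(score(ok, c) for ok, c in responses), 2))  # -0.4

Such a measure distinguishes a student who is confidently wrong from one who knows the limits of their knowledge, a distinction a standard right/wrong mark collapses.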
By
Mark Addison, Department of Psychology, University of Stirling, Stirling FK9 4LA, UK,
Harold Thimbleby, Department of Psychology, University of Stirling, Stirling FK9 4LA, UK
A user manual may provide instructions that, if the user follows them, achieve any of a set of objectives determined by the manual's designers. A manual may therefore be viewed rather like a computer program: as pre-planned instructions. Accordingly, software engineering and its methods may be applied, mutatis mutandis, to the manual and its design process.
We consider structured programming methods, and show that some difficulties with user interfaces may be attributed to manuals being ‘unstructured’. Since there are many programming metrics, and very many styles of manuals for user interfaces, this paper is concerned with justifying the approach and showing how insightful it is.
Keywords: manuals, hypertext, multimedia, finite state machines, flowgraphs.
Introduction
There is much evidence that improved manuals improve user acceptance (Carroll, 1990). There is also the argument that improving manuals by changing the system documented by them leads to improved systems (Thimbleby, 1990). Thus manuals are an essential part of the system life cycle: from requirements and design, through usability, to acceptance.
The importance of manuals certainly extends beyond their use in training and reference. In some sense (whether explicit or implicit) a user must ‘know’ what they are doing to use a system, and the manual is a representation of what they could know. It is unlikely that a user could in practice verbalise their knowledge as a system manual — it may not even be necessary to be able to do so if the system feedback is sufficient, cf. Payne (1991); however, it is certain that, for many users, the manual is the prime input to their initial system knowledge.
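One way to see what an ‘unstructured’ manual means in these terms (a rough sketch of ours, not the paper's metrics) is to treat instructions as nodes of a flowgraph whose edges are the ‘go to step N’ cross-references; an unstructured manual then shows up concretely, for instance as steps unreachable from the start.

    # Ours, not the paper's metrics: the manual as a flowgraph, with
    # unreachable instructions detected by simple graph traversal.
    manual = {
        "1: switch on":        ["2: set the clock"],
        "2: set the clock":    ["3: choose a program"],
        "3: choose a program": [],
        "4: advanced timer":   ["3: choose a program"],  # never referenced
    }

    def reachable(graph, start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node])
        return seen

    print(set(manual) - reachable(manual, "1: switch on"))  # {'4: advanced timer'}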
By
Russell Beale, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK,
Andrew Wood, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
Agents are becoming widespread in a variety of computer systems and domains, but often appear to have little in common with each other. In this paper we look at different agent systems and identify what a generic agent should be composed of. We also identify the characteristics of a task that make it worthy of an agent-based approach. We then discuss the implications for the interaction of using agents, that is, the notion of a balanced interface, and briefly look at how an agent-based approach assists in two very different application domains.
Keywords: agents, intelligent interfaces, groupwork, computer-supported cooperative work (CSCW).
Introduction
The term agent has become increasingly widespread over the past few years. Unfortunately, it has no consistent definition and means many different things to different people. This paper considers the multifarious uses that these disparate agents are put to and tries to identify what, if anything, they have in common. With this commonality in mind, we identify the general properties of an agent and consider what makes a task ‘agent worthy’.
Agent-based interaction has consequences for the interface, leading to the notion of a balanced interface. We then describe the design of a generic agent and look at a couple of applications that benefit from being treated in an agent-based manner.
Classifying Agents
We can identify a number of categories under which agents can be classified, based on the functionality offered by the different types of agent.
By
Maria da Graça Campos Pimentel, Computing Laboratory, University of Kent at Canterbury, Canterbury, Kent CT2 7NF, UK, and Department of Computer Science, ICMSC, Universidade de São Paulo, CP 668, São Carlos – SP, 13560–970, Brazil
The aim of the Previewing Information Operation (PIO) approach is to tackle some of the overhead factors imposed on user-hypertext interaction. The purpose is to diminish cognitive overhead and disorientation problems by reducing some of their causes.
This paper describes an experiment carried out to evaluate the usability of the operations based on the PIO approach. Results from between-groups studies show that subjects' evaluation of the ease of use of the system and feeling of general orientation were affected by the presence of PIO operations. A further study has revealed that the PIO operations were predicted by standard navigational operations.
Keywords: hypertext, link selection, previewing information, evaluation.
Introduction
When referring to a user's interaction with a hypertext system, the metaphor generally used in the literature is that the user navigates or browses through the information by selecting those links which are interesting.
In such a scenario, an interactive session can be described as a sequence of link selections along with other navigational operations, such as backtracking and string searching. Each link selection performed is a vital unit of the navigation sequence the user goes through: without link options, and without navigation proceeding by the user freely choosing among links, there is no hypertext.
At the same time, secondary navigational modes such as bookmarks, history lists, backtracking and search operations (Bernstein & Joyce, 1992) are probably as important as the link selection alternatives. Firstly, they promote the understanding of the embedded hypertext structure and the building of a cognitive map. Secondly, they help users orientate themselves when they are lost.
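As a minimal sketch of the previewing idea itself (ours, not the PIO implementation; the node contents are invented), a preview operation returns a short summary of a link's destination without committing the user to the jump, reducing blind navigation and hence disorientation.

    # Ours, not the PIO implementation: peek at a destination node's opening
    # text without actually navigating to it.
    nodes = {
        "intro":   {"text": "Hypertext basics and terminology...",
                    "links": ["history", "systems"]},
        "history": {"text": "Memex, Xanadu and other early systems...",
                    "links": []},
        "systems": {"text": "A survey of hypertext systems...",
                    "links": []},
    }

    def preview(link, length=30):
        """Summarise the destination's opening text without navigating."""
        return nodes[link]["text"][:length]

    print(preview("history"))  # 'Memex, Xanadu and other early '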