While speakers are theorized to ideally avoid unnecessary information (redundancy) in their utterances, in reality they often include it. One potential reason is that linguistic redundancy facilitates communication, especially when the addressee (interlocutor) is linguistically less competent (e.g., an artificial system). In three experiments, we examined whether linguistic redundancy may arise from people’s tendency to use linguistic features similar to their interlocutor’s during communication (i.e., linguistic alignment) and whether redundancy alignment (if any) differs with a human versus a computer interlocutor. We also examined whether redundancy alignment is affected by the perceived competency of the interlocutor and by participants’ theory of mind (ToM) abilities, and whether it varied over the course of the experiment. Participants carried out a picture matching and naming task with a human or computer interlocutor who either always or never included redundancies in their utterances. Redundancy alignment was found across all experiments, in that speakers produced more redundancies with a redundant interlocutor than with a non-redundant one. This alignment was also modulated by the perceived competency of the interlocutor, the time course of the interaction, and ToM abilities, suggesting that redundancy usage is affected by both automatic and strategic mechanisms of linguistic alignment.
Traditional bulky and complex control devices such as remote controls and ground stations cannot meet the requirement of fast and flexible control of unmanned aerial vehicles (UAVs) in complex environments. Therefore, a data glove based on multi-sensor fusion is designed in this paper. To achieve gesture-based control of UAVs, the method accurately recognizes various gestures and converts them into corresponding UAV control commands. First, the wireless data glove fuses flexible fiber optic sensors and inertial sensors to construct a gesture dataset. Then, the trained neural network models are deployed to the STM32 microcontroller-based data glove for real-time gesture recognition, in which a convolutional neural network with an attention mechanism (CNN-Attention) is used for static gesture recognition and a convolutional neural network with a bidirectional long short-term memory network (CNN-Bi-LSTM) is used for dynamic gesture recognition. Finally, the gestures are converted into control commands and sent to the vehicle terminal to control the UAV. In UAV simulation tests on the simulation platform, the average recognition accuracy over 32 static gestures reaches 99.7%, and the average recognition accuracy over 13 dynamic gestures reaches 99.9%, indicating highly effective gesture recognition. A task test in a scene constructed in a real environment shows that the UAV responds to gestures quickly and that the method proposed in this paper achieves real-time, stable control of the UAV on the terminal side.
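The abstract gives no implementation details; as a rough illustration of the CNN-Bi-LSTM stage described above, the following PyTorch sketch shows one plausible way to classify windows of fused glove-sensor readings into the 13 dynamic gesture classes. The channel count, layer sizes, and window length are assumptions for the example, not the authors' configuration.

```python
# Hypothetical sketch of a CNN-Bi-LSTM classifier for dynamic gestures,
# assuming input windows of fused flex-fiber + IMU readings shaped
# (batch, time_steps, n_channels). All sizes are illustrative only.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, n_channels=15, n_classes=13, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bidirectional LSTM captures longer-range temporal dependencies
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2))        # -> (batch, 64, time)
        out, _ = self.lstm(x.transpose(1, 2))   # -> (batch, time, 2*hidden)
        return self.head(out[:, -1])            # classify from last time step

logits = CNNBiLSTM()(torch.randn(8, 100, 15))   # e.g. 100 samples per window
```

A static-gesture CNN-Attention model could follow the same pattern, replacing the recurrent layer with attention-weighted pooling over the convolutional features.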
The Aerospace Integration Research Centre (AIRC) at Cranfield University offers industry and academia an open environment to explore opportunities for efficient integration of aircraft systems. As part of the centre, Cranfield University, Rolls-Royce, and DCA Design International have jointly developed the Future Systems Simulator (FSS) for research and development in areas such as human factors in aviation, single-pilot operations, future cockpit design, aircraft electrification, and alternative control approaches. Utilising state-of-the-art modularity principles in simulation technology, the FSS is built to simulate a diverse range of current and novel aircraft, enabling researchers and industry partners to conduct experiments rapidly and efficiently. Central to this requirement, a unique, user-experience-centred design and development process was implemented for the FSS. This paper presents the development process of such a flight simulator with an innovative flight deck and demonstrates the FSS’s versatility and flexibility through diverse example research case studies. In the final section, the authors provide guidance for the development of an engineering flight simulator based on lessons learned in this project.
The Fourth Industrial Revolution (4IR) describes the technological transformations that are incrementally, but radically, changing everyday life practices. As in previous industrial revolutions, technological advancements are so pervasive and impactful that everything from an individual's sense of identity and understanding of the world to the economic success of an entire industry is profoundly altered by 4IR innovation. Despite the significance of 4IR transformations, little applied linguistic research has examined how these emergent technologies collectively transform human behavior and communication. To this end, this Element identifies key 4IR issues and outlines how they relate to applied linguistic research. The Element argues that applied linguists are in an excellent position to contribute to such research, as expertise in language and communication is critical to understanding 4IR issues. However, to make interdisciplinary and wider societal contributions, applied linguists must rethink how 4IR technologies can be harnessed to publish and disseminate timely research more efficiently.
Economic games involving allocation of resources have been a useful tool for the study of decision making for both psychologists and economists. In two experiments involving a repeated-trials game over twenty opportunities, undergraduates chose how to distribute resources between themselves and an unseen, passive other: optimally (for themselves) but non-competitively, equally but non-optimally, or least optimally but competitively. Surprisingly, whether participants were told that the anonymous other was another student or a computer did not matter. Using terms such as “game” and “player” during the session was associated with an increased frequency of competitive behavior. Males allocated more optimally than females, and a gender-by-incentive interaction was found in the first experiment. In agreement with prior research, participants whose resources were backed by a monetary incentive acted the most optimally. Overall, equality was the modal strategy employed, although it is clear that motivational context affects the allocation of resources.
The digital transition refers to the fact that information technology (IT) tools are now used daily in all our activities. In this article, we study the use of IT tools in engineering activities. Today, IT tools accompany engineers throughout their professional practices, and this presence of computing has enabled considerable development and change in human-technology interactions. Moreover, the socio-economic context has evolved considerably, and environmental issues have taken on an important role in engineering. We ask whether, and to what extent, these two contexts (digital and ecological) have changed the expectations of design professionals with regard to IT tools. Should the way human-machine interaction is addressed in engineering tools be modified in depth? The objective of this paper is to understand what types of human-computer interaction would allow a more satisfying user experience for future engineers who use new technologies and are marked by the ecological urgency. To do so, we focus on a particular engineering context (design for sustainability) and a particular engineering practice (LCA practice).
This study investigates the use of augmented reality (AR) technology in the field of maritime navigation and how researchers and designers have addressed AR data visualisation. The paper presents a systematic review analysing the publication type, the AR device used, which information elements are visualised and how, the validation method, and the technological readiness. Eleven AR maritime solutions identified from scientific papers are studied and discussed in relation to previous navigation tools. It is found that primitive information such as course, compass degrees, boat speed, and geographic coordinates continues to be fundamental information to be represented even in AR maritime solutions.
This chapter consists of two parts. The first part covers basic aspects of machine autonomy as a technical concept, explaining how it constitutes a form of control over a machine and how degrees of autonomous capability manifest in complex machines. This part is not about weapon systems in particular, but about electromechanical systems in general. It briefly outlines adaptive and intelligent control methodologies and explains that autonomy does not sever the connection between a machine and its operator but only alters the relationship between them. The second part discusses some aspects of how autonomous systems are used in military applications. Specifically, autonomous behaviour will extend to systems ancillary to combat operations and autonomous systems will be employed in roles wherein they effectively ‘collaborate’ with human soldiers and with each other. Assessing the legal consequences of using autonomous systems in military operations is therefore not simply a matter of studying the properties of a new type of weapon; it is about understanding a new relationship between soldiers and weapons.
Computer-Aided Design (CAD) constitutes an important tool for industrial designers. Similarly, Virtual Reality (VR) has the capability to revolutionize how designers work through its increased sense of scale and perspective. However, existing VR CAD applications are limited in terms of functionality and intuitive control. Based on a comparison of VR CAD applications, ImPro, a new application for immersive prototyping for industrial designers, was developed. The user evaluations and comparisons show that ImPro offers increased usability, functionality, and suitability for industrial designers.
This chapter documents how the Legal Design Lab at Stanford University has integrated design thinking into law school technology curriculum. In this chapter we profile the objectives of the lab and explore the work the lab has undertaken to introduce new opportunities for skill acquisition through design thinking courses, innovation sprints, and workshops. We explore the purpose, process, and outcomes of these new experiments in legal education, and overview the interdisciplinary methods we have developed, brought from design schools and human-computer interaction programmes. We detail examples of the specific types of classes, sprints, and workshops run, how we define learning outcomes, and how we evaluate student performance. Further, we explore the way in which we leverage technology to provide students with opportunities to acquire user research, mapping, rapid prototyping, and improved communication skills. Drawing on lessons observed over the life of the Design Lab, we conclude by reflecting on our experience of integrating design thinking into a law school programme and argue for the importance of design thinking as an aspect of technology training within and outside of law.
Over 20 years have passed since free-viewpoint video technology was proposed, with which a user's viewpoint can be freely set in a reconstructed three-dimensional space of a target scene photographed by multi-view cameras. This technology allows us to capture and reproduce the real world as recorded. Once we capture the world in a digital form, we can modify it as augmented reality (i.e., placing virtual objects in the digitized real world). In contrast to this concept, the augmented world allows us to see through real objects by synthesizing backgrounds that cannot be observed directly from our raw perspective. The key idea is to generate the background image using multi-view cameras that observe the background from different positions and to seamlessly overlay the recovered image onto our digitized perspective. In this paper, we review such view-generation techniques from the perspective of free-viewpoint image generation and discuss challenges and open problems through a case study of our implementations.
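As a hedged illustration of the overlay idea sketched above (not the authors' implementation), the following Python/OpenCV snippet warps a background view from an auxiliary camera into the main camera's perspective via a homography and composites it over the occluded region. It assumes the background is roughly planar and that a mask of the occluding object is already available; the function name and parameters are invented for the example.

```python
# Hypothetical sketch of a "see-through" overlay: recover occluded background
# pixels from an auxiliary camera and blend them into the main view.
import cv2
import numpy as np

def synthesize_background(main_img, aux_img, occlusion_mask):
    # Detect and match ORB features between the two views (grayscale)
    gray_main = cv2.cvtColor(main_img, cv2.COLOR_BGR2GRAY)
    gray_aux = cv2.cvtColor(aux_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(gray_main, None)
    kp2, des2 = orb.detectAndCompute(gray_aux, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # Homography mapping the auxiliary view onto the main view's image plane
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the auxiliary (background) view and fill in the occluded pixels
    h, w = main_img.shape[:2]
    warped_bg = cv2.warpPerspective(aux_img, H, (w, h))
    mask3 = cv2.merge([occlusion_mask] * 3) > 0
    return np.where(mask3, warped_bg, main_img)
```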
The Principles of Legal Research (PLR) website of the University of Ottawa's Brian Dickson Law Library is a bilingual (English and French) online learning tool for all first-year students in both Common Law and Civil Law. Law librarians use this e-learning website to facilitate teaching components such as student assignments and assessments. This user experience study aims to investigate law students’ real experience with the system. Their feedback will be used for future development planning as well as for analysing user behaviour trends. The authors investigate the following aspects: accuracy of information, interface design, navigation system, Web 2.0, social media, and the smartphone version.
Emotion recognition is the ability to identify what people would think someone is feeling from moment to moment and to understand the connection between those feelings and their expressions. In today's world, the human–computer interaction (HCI) interface undoubtedly plays an important role in our daily life. Toward a harmonious HCI interface, automated analysis and recognition of human emotion has attracted increasing attention from researchers in multidisciplinary research fields. This paper provides a survey of theoretical and practical work offering new and broad views of the latest research in emotion recognition from bimodal information, namely facial and vocal expressions. First, the currently available audiovisual emotion databases are described. Facial and vocal features and audiovisual bimodal data fusion methods for emotion recognition are then surveyed and discussed. The survey also covers the recent emotion challenges held at several conferences. The conclusions outline and address some of the existing issues in emotion recognition.
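As a minimal, assumption-laden illustration of one common family of bimodal fusion methods (decision-level or "late" fusion, not any specific approach from the survey), the following sketch combines per-modality class probabilities from a facial and a vocal classifier. The emotion classes, weights, and probability vectors are invented for the example.

```python
# Illustrative late-fusion sketch: each modality produces class probabilities
# independently, and the fused prediction is a weighted combination.
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def late_fusion(p_face, p_voice, w_face=0.6, w_voice=0.4):
    """Combine per-modality probability vectors into one fused prediction."""
    p_face, p_voice = np.asarray(p_face), np.asarray(p_voice)
    fused = w_face * p_face + w_voice * p_voice
    fused /= fused.sum()                     # renormalise to a distribution
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: the facial model leans towards happiness, the vocal model towards surprise
label, probs = late_fusion(
    [0.05, 0.05, 0.05, 0.60, 0.05, 0.20],
    [0.05, 0.05, 0.10, 0.30, 0.10, 0.40],
)
```

Feature-level fusion, by contrast, would concatenate facial and vocal feature vectors before a single classifier; the trade-off between the two is one of the topics such surveys typically discuss.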
Cognitive engineering is the application of cognitive science theories to human factors practice. Attempts to apply computational and mathematical modeling techniques to human factors issues have a long and detailed history. This chapter reviews the seminal work of Card, Moran, and Newell from the modern perspective. It discusses the issues and applications of cognitive engineering, first for the broad category of complex systems and then for the classic area of human-computer interaction, with a focus on human interaction with quantitative information, that is, visual analytics. Not only is the control of integrated cognitive systems a challenging basic research question, but its importance for cognitive engineering purposes also suggests that research on control issues should become a high priority among basic researchers and the agencies that fund basic research.