This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots point to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia, and other countries, are beginning to determine how to regulate AI-enabled robots, a task that concerns not only the law but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
This chapter discusses the use of AI ethics standardization for robot governance. Specifically, the chapter considers challenges to the regulation of AI-enabled technology arising from slow legislative processes that have not kept pace with the rapid speed of technological advances. In addition to considering the regulation of critical AI technologies, the chapter argues for a regulatory framework that relies on nonbinding and flexible AI ethics standards to ensure that stakeholders manage the ethical, legal, and social implications (ELSI) risks inherent in daily human–robot interactions. By incorporating AI ethics standards into the development process for humanoid and expressive robots, robot developers will be able to apply principles of responsible research and innovation without conflicting with “hard laws” enacted for robot regulation. In this chapter, through two case studies, I explore the approach of ethical robot design, examine its potential and limitations, and demonstrate the utility of “ethically aligned design” and “social system design” frameworks in implementing legal human–robot interaction (L-HRI).
This chapter introduces issues of law, policy, and regulation for human interaction with robots that are AI enabled, expressive, humanoid in appearance, and anthropomorphized by users. These features are leading to a class of robots that are beginning to pose unique challenges to courts, legislators, and the robotics industry as they consider how the behavior of robots operating with sophisticated social skills and increasing levels of intelligence should be regulated. In this chapter we introduce basic terms, definitions, and concepts that relate to human interaction with AI-enabled and social robots, and we review some of the regulations, statutes, and case law that apply to such robots, specifically in the context of human–robot interaction. Our goal in this chapter is to provide a conceptual framework for the chapters that follow, which focus on human interaction with robots that are becoming more like us in form and behavior.
In this concluding chapter, we discuss future directions in law, policy, and regulation for robots that are expressive, humanoid in appearance, becoming smarter, and anthropomorphized by users. Given the wide range of skills shown by this emerging class of robots, legal scholars, legislators, and roboticists are beginning to discuss how law, policy, and regulations should apply to robots that are becoming more like us in form and behavior. For such robots, we propose that human–robot interaction should be the focus of regulatory efforts. Therefore, in the context of human–robot interaction, this chapter summarizes our views on future directions of law and policy for robots that are becoming highly social and intelligent, displaying the ability to detect and express emotions, and, controversially in the view of some commentators, beginning to display a rudimentary level of self-awareness.
In recent years there has been growing research interest in religion within the robotics community. Along these lines, this chapter provides a case study of the ‘religious robot’ SanTO, the world’s first robot designed to be ‘Catholic’. The robot was created with the aim of exploring the theoretical basis for applying robot technology in the religious space. While this technology offers many potential benefits for users, the use and design of religious or other social robots raises a number of ethical, legal, and social issues (ELSI). The chapter, which is concerned with these issues, starts with a general introduction, offers an ELSI analysis, and finally develops conclusions from an ethical design perspective.
To many people, there is a boundary between artificial intelligence (AI), sometimes referred to as an intelligent software agent, and the system that is controlled through AI, primarily by the use of algorithms. One example of this dichotomy is robots that have a physical form but whose behavior is highly dependent on the “AI algorithms” that direct their actions. More specifically, we can think of a software agent as an entity directed by algorithms that perform many intellectual activities currently done by humans. The software agent can exist in a virtual world (for example, a bot) or can be embedded in the software controlling a machine (for example, a robot). Many current robots controlled by algorithms represent semi-intelligent hardware that repetitively performs tasks in physical environments. This observation is based on the fact that most robotic applications for industrial use since the middle of the last century have been driven by algorithms that support repetitive machine motions. In many cases, industrial robots, which typically work in closed environments such as factory floors, do not need “advanced” techniques of AI to function because they perform daily routines with algorithms directing the repetitive motions of their end effectors. Lately, however, an emerging technological trend has resulted from the combination of AI and robots: using sophisticated algorithms, robots can adopt complex work styles and function socially in open environments. We may call these merged technological products “embodied AI,” or, in a more general sense, “embodied algorithms.”