Expert drivers can execute high-sideslip-angle maneuvers, commonly known as drifting, to navigate sharp corners and make rapid turns during racing. However, existing model-based controllers encounter challenges in handling the highly nonlinear dynamics associated with drifting along general paths. While reinforcement learning-based methods alleviate the reliance on explicit vehicle models, training a policy directly for autonomous drifting remains difficult because it involves multiple competing objectives. In this paper, we propose a control framework for autonomous drifting in the general case, based on curriculum reinforcement learning. The framework enables the vehicle to follow paths with varying curvature at high speeds while executing drifting maneuvers through sharp corners. Specifically, we draw on the vehicle’s dynamics to decompose the overall task and employ curriculum learning to break the training process into three stages of increasing complexity. Additionally, to enhance the generalization ability of the learned policies, we introduce randomization into sensor observation noise, actuator action noise, and physical parameters. The proposed framework is validated in the CARLA simulator across various vehicle types and parameters. Experimental results demonstrate the effectiveness and efficiency of our framework in achieving autonomous drifting along general paths. The code is available at https://github.com/BIT-KaiYu/drifting.
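As a rough illustration of the staged curriculum and domain randomization described in this abstract, the following Python sketch shows how per-episode randomization of physical parameters and a return-based stage schedule might be wired together; the stage definitions, thresholds, and noise magnitudes are hypothetical assumptions, not values from the paper or its released code.

```python
import random

# Hypothetical sketch: a three-stage curriculum with per-episode domain
# randomization. Stage boundaries, noise magnitudes, and parameter ranges
# are illustrative assumptions, not values from the paper or its code.

CURRICULUM = [
    {"name": "stage1_path_following", "max_speed_mps": 15.0, "max_curvature": 0.02},
    {"name": "stage2_high_speed",     "max_speed_mps": 30.0, "max_curvature": 0.05},
    {"name": "stage3_drift_corners",  "max_speed_mps": 30.0, "max_curvature": 0.12},
]

def select_stage(mean_return, thresholds=(200.0, 400.0)):
    """Advance to the next curriculum stage once mean return clears a threshold."""
    if mean_return < thresholds[0]:
        return CURRICULUM[0]
    if mean_return < thresholds[1]:
        return CURRICULUM[1]
    return CURRICULUM[2]

def randomize_episode(base_params):
    """Perturb physical parameters and set observation/action noise for one episode."""
    params = dict(base_params)
    params["mass_kg"] *= random.uniform(0.9, 1.1)          # vehicle mass
    params["tire_friction"] *= random.uniform(0.85, 1.15)  # tire-road friction
    params["obs_noise_std"] = 0.02                         # sensor observation noise
    params["act_noise_std"] = 0.05                         # actuator action noise
    return params

# Example: pick the stage and randomized dynamics for the next training episode.
stage = select_stage(mean_return=250.0)
episode_params = randomize_episode({"mass_kg": 1500.0, "tire_friction": 1.0})
print(stage["name"], episode_params)
```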
Technological innovations in the online food delivery sector include the use of autonomous delivery vehicles. The aim of the present study was to investigate consumers’ intentions to use these services once they are widely available and their motivations for using them to access unhealthy food.
Design:
Online survey including a vignette describing a future world where autonomous food deliveries are in common use in both metropolitan and non-metropolitan areas.
Setting:
Australia.
Participants:
1078 Australians aged 18 years and older, nationally representative by sex, age and location (metropolitan v. non-metropolitan residence).
Results:
Around half of the sample reported intending to use an autonomous food delivery service at least once per week for fast food (53 %) and/or healthy pre-prepared food (50 %). Almost two-thirds (60 %) intended using autonomous vehicle deliveries to receive groceries. Around one in five (17 %) anticipated an increase in their fast-food intake as a result of access to autonomous delivery services compared with one in two (46 %) expecting others’ total fast-food intake to increase. The most common reason provided for using autonomous food deliveries was increased convenience. More frequent current fast-food ordering, higher socio-economic status, younger age and regional location were significantly associated with an anticipated increase in fast-food consumption.
Conclusions:
The emergence of autonomous food delivery systems may bring both benefits and adverse consequences that in combination are likely to constitute a substantial regulatory challenge. Proactive efforts will be required to avoid negative public health nutrition outcomes of this transport evolution.
This chapter seeks to clarify the criminal responsibility that may be imputable to: (i) programmers of autonomous vehicles (AVs) for related crimes under national criminal law, such as manslaughter and negligent homicide, and (ii) programmers of autonomous weapons (AWs) for related crimes under international criminal law, such as war crimes. The key question is whether programmers could satisfy the actus reus element required for establishing criminal responsibility. The core challenge in answering this question is establishing a causal link between programmers’ conduct and crimes related to AVs and AWs. The chapter proposes responsibility for inherent foreseeable risks associated with the use of AVs and AWs on the basis of programmers’ alleged control of the behavior and/or effects of these systems. Establishing the exercise of meaningful human control by programmers over AVs and AWs is crucial to the process of imputing criminal responsibility and bridging a responsibility gap.
In legal proceedings, human responses to robot-generated evidence will present unique challenges to the accuracy of litigation as well as to ancillary goals such as fairness and transparency, though such evidence may also enhance accuracy in other respects. The most important feature of human–robot interactions is the human tendency to anthropomorphize robots, which can generate misleading impressions and be manipulated by designing robots to make them appear more trustworthy and believable. Although robot-generated evidence may also offer unique advantages, there are concerns about the degree to which the traditional methods of testing the accuracy of evidence, particularly cross-examination, will be effective. We explore these phenomena in the autonomous vehicles context, comparing the forums of litigation, alternative dispute resolution, and the National Transportation Safety Board. We suggest that the presence of expert decision-makers might help mitigate some of the problems with human–robot interactions, though other aspects of the procedures in each of the forums still raise concerns.
In Singapore, residents have expressed concerns about the safety of autonomous vehicles. This chapter considers the case of Singapore, which has supported the development of autonomous vehicles and tested their use. Using research studies and newspaper reports, the chapter examines the rhetorical devices used to frame relevant discussion and identifies the narrative arguments used to reduce fears and justify the presence of vehicles on public streets. The narratives of government and commercial entities complement each other and are frequently upbeat, but they differ in that commercial entities asserted the narrative that autonomous vehicles were inevitable, while government entities did not. The government’s rejection of inevitability supports a different view of law and government, in which government officials decide the degree and pace of AV development. However, Singapore has not adopted a strict regulatory approach, and opted instead for light touch regulation. As a narrative argument, rejection of inevitability does not dictate regulatory approach.
Innovation plays a vital role in ensuring sustainable mining operations. Electrification and autonomy are two significant trends, but their implementation brings complexity at vehicle and site levels. Therefore, it is crucial to understand how these technologies impact the overall site value creation. This paper suggests a hybrid approach that combines Agent-Based Simulation and vehicle dynamics modeling to explore site configurations. By regarding a mining site as a System-of-Systems, designers can concurrently test different designs to find the optimal combination for a specific scenario.
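A minimal sketch of the kind of hybrid evaluation described above: an agent-style sweep over candidate fleet configurations whose cycle times come from a simplified vehicle-dynamics model. All function names, parameters, and numbers below are illustrative assumptions rather than the paper’s actual models or results.

```python
import itertools

# Illustrative sketch: evaluate candidate mining-site configurations with a
# throughput estimate whose per-cycle times come from a very simplified
# vehicle-dynamics model. Names and numbers are assumptions only.

def cycle_time_s(distance_m, grade, payload_t, battery_electric=True):
    """Rough one-way travel time from a simplified dynamics model."""
    base_speed = 8.0 if battery_electric else 7.0            # m/s on flat ground
    speed = base_speed * (1.0 - 2.0 * grade) / (1.0 + payload_t / 300.0)
    return distance_m / max(speed, 1.0)

def site_throughput_t(n_trucks, distance_m, grade, payload_t, shift_s=8 * 3600):
    """Tonnes moved per shift for one candidate configuration."""
    t_cycle = 2 * cycle_time_s(distance_m, grade, payload_t)  # loaded out, empty back
    return n_trucks * (shift_s / t_cycle) * payload_t

# Concurrently test fleet-size x payload-class combinations for one scenario.
configs = itertools.product([5, 10, 20], [60, 120, 240])
best = max(configs, key=lambda c: site_throughput_t(c[0], 3000, 0.08, c[1]))
print("best (n_trucks, payload_t):", best)
```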
The UK Parliament has already pre-emptively legislated for a compensation solution for autonomous vehicle (AV) accidents through the Automated and Electric Vehicles Act 2018. The Act is a response to the fact that the ordinary approach to motor vehicle accidents cannot apply in an AV context, since there is no human driver. Tort law has previously been subjected to major shifts in response to motor vehicles, and we are again on the cusp of another motor-vehicle-inspired revolution in tort law. However, in legislating for AV accidents, the UK gave inadequate consideration to alternative approaches.
This chapter discusses three philosophical disputes concerning the comparison between the ethics of crashing autonomous vehicles and the Trolley Problem. The first dispute concerns whether there is something ethically problematic – or perhaps even flippant – about comparing the real-world issue of what autonomous vehicles should do in accident scenarios with the philosophy of the Trolley Problem. The second dispute concerns whether or not there is a close enough analogy between real-world accidents involving autonomous vehicles and the so-called trolley cases discussed in relation to the Trolley Problem. The third dispute concerns whether or not the large literature on the Trolley Problem discusses topics directly relevant to the real-world ethics of crashes with autonomous vehicles. The chapter considers key arguments on each side of the dispute. It also offers a diagnosis regarding whether the Trolley Problem is relevant for the ethics of autonomous vehicles. The conclusion is that it is either directly or indirectly relevant: It may be directly relevant because the Trolley Problem can teach us important lessons or indirectly relevant because identifying key differences between the ethics of autonomous vehicles and the Trolley Problem allows us to get clear on what matters most in the real-world ethics of autonomous vehicles.
This article reviews existing state laws related to autonomous vehicle (AV) safety, equity, and automobile insurance. Thirty states were identified with relevant legislation. Of these, most states had one or two relevant laws in place. Many of these laws were related to safety and insurance requirements. Data are needed to evaluate the effectiveness of these laws in order to guide further policy development.
The increased development in automated driving systems (ADS) has opened up significant opportunities to revolutionize mobility and to pave the way for technologies such as electrification. The proposed methodology is a simulation model backed by a multi-objective optimization algorithm. This research investigates the adoption of future technologies in earthmoving applications and explores its implications for the design of future machine concepts in terms of equipment size. The shift from “elephant to ants” in machine selection resulted in improved feasibility.
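To make the “elephant to ants” trade-off concrete, the sketch below filters candidate fleet configurations with a simple two-objective Pareto-dominance check (productivity versus energy per tonne); the objectives, options, and figures are hypothetical and not taken from the study.

```python
# Hypothetical sketch of the "elephant to ants" comparison as a simple
# two-objective Pareto filter (productivity vs. energy per tonne). The
# options and figures are illustrative assumptions, not study results.

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and strictly better on one."""
    return (a["productivity"] >= b["productivity"]
            and a["energy_per_t"] <= b["energy_per_t"]
            and (a["productivity"] > b["productivity"] or a["energy_per_t"] < b["energy_per_t"]))

fleet_options = [
    {"name": "1 x 90 t excavator",  "productivity": 900, "energy_per_t": 1.4},
    {"name": "3 x 30 t excavators", "productivity": 950, "energy_per_t": 1.2},
    {"name": "9 x 10 t excavators", "productivity": 880, "energy_per_t": 1.1},
]

pareto = [a for a in fleet_options
          if not any(dominates(b, a) for b in fleet_options if b is not a)]
print([f["name"] for f in pareto])  # the smaller-machine fleets remain on the front
```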
Electric vertical take-off and landing aircraft (eVTOLs) have been assessed in various configurations over the past decade. This literature review deals with the issue of determining the appropriate design for an Autonomous Passenger Drone (APD). APDs have been compared with VTOLs in terms of their pros and cons. The authors analysed the aerodynamics and propulsion systems of multiple APDs. Further, the comparative analysis aids in designing the best framework for the exterior form of APDs based on human capacity, flying technology, fuel type, travel distance, door type, size, material, safety, cost, etc.
This research focuses on evaluating a user interface (UI) for an autonomous vehicle (AV), with the goal of determining the most suitable layout for persons with visual acuity loss. The testing procedure uses a Wizard of Oz AV to simulate an automated ride. Several participants are included in the study, and the visual impairments are simulated with specially designed glasses. The conclusions help to determine the optimal graphic design of a UI that can be used independently by persons with blurred vision. The results can be applied to improve the inclusiveness and ergonomics of vehicle UIs.
The field of autonomous vehicles is gaining wide recognition in industry, academia, and social media. However, little is known about people’s expectations regarding this topic. To this end, this paper first analyses extant research on people’s perceptions of semi-autonomous and autonomous vehicles in various countries. Second, based on the findings of this analysis, we developed a questionnaire to gauge the perceptions of people in Sweden regarding such vehicles. The findings have important implications for the design of AVs in Sweden, and possibly other countries.
Open-source platforms are an increasingly popular business model for AI development for global technology companies. This chapter examines why a restrictive (non-fuzzy) interpretation of the data localisation provisions within the Cyber Security Law would harm the growth of China’s entrepreneurial ecosystem, focusing on recent Chinese government plans to grow its own domestic open-source AI ecosystem. Accordingly, this chapter reinforces the reasons why fuzzy logic lawmaking in China is so effective. It also queries whether the increased popularity of open-source platforms in China during 2017–2019 may have been another reason why data localisation was not comprehensively enforced.
In analyzing the existing and future transportation system in general, and the testing and deployment of automated and self-driving vehicles in particular, this chapter demonstrates that the application of our framework provides a good understanding of the interdependencies between the technological and institutional dimensions at stake. An analysis of both the vertical coordination between the layers along these two dimensions, respectively, and the horizontal alignment between them offers in-depth insights about the complexity of the transportation network and the conditions to be met if the expected services are to be delivered. The changes in the technological architecture, with the introduction of technological designs and operation of automated vehicles, and their interdependence with macro-institutional values, in particular safety but also security, privacy, and efficiency, offer a rich opportunity to analyze the structural complexities at stake. In this chapter, we focus on the layer of transactions: transactions between car manufacturers and their suppliers, between car manufacturers and the providers of the transportation services, and between these providers and their customers. The importance of the alignment between technical operations and micro-institutions is illustrated by the fatal accident involving an automated test car on March 19, 2018 in a street in Tempe, Arizona.
Can we trust the judgement of machines that see? Computer vision is being entrusted with ever more critical tasks: from access control by face recognition, to diagnosis of disease from medical scans, to hand–eye coordination for surgical and nuclear decommissioning robots, and now to taking control of motor vehicles.
The increasing autonomy of AI systems is exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to what is meant by ‘autonomy’ and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programmes in the private or public sector. This chapter develops a novel typology that distinguishes three lenses through which to view the regulatory issues raised by autonomy: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that is created when public authorities delegate their powers to algorithms.
A versatile architecture is presented to implement autonomous vehicles. The central idea consists of a set of standalone modules, called wireless robotic components (WRCs). Each component performs a particular function by means of a radio modem interface, a processing unit, and a sensor/actuator. The components interact through a coordinator that redirects asynchronous requests to the appropriate WRCs, forming a built-in network. The WRC architecture has been tested on marine and terrestrial platforms to perform waypoint-following and wall-following tasks. Results show that the tested system provides the adaptability and portability needed to configure a variety of autonomous vehicles.
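A minimal sketch of the coordinator pattern described above, in which standalone components register handlers and the coordinator redirects asynchronous requests to the appropriate WRC; the class and function names are illustrative assumptions, not the paper’s implementation.

```python
import asyncio

# Illustrative sketch of the coordinator pattern: standalone components
# register handlers, and the coordinator redirects asynchronous requests to
# the appropriate WRC. Names are assumptions, not the paper's implementation.

class Coordinator:
    def __init__(self):
        self._components = {}

    def register(self, name, handler):
        """Register a wireless robotic component (WRC) under a name."""
        self._components[name] = handler

    async def request(self, name, payload):
        """Redirect an asynchronous request to the named component."""
        return await self._components[name](payload)

async def gps_component(_payload):
    return {"lat": -34.60, "lon": -58.38}                 # stub sensor reading

async def rudder_component(payload):
    return {"ack": True, "angle": payload["angle"]}       # stub actuator command

async def main():
    coordinator = Coordinator()
    coordinator.register("gps", gps_component)
    coordinator.register("rudder", rudder_component)
    fix = await coordinator.request("gps", {})            # one waypoint-following step:
    await coordinator.request("rudder", {"angle": 12.0})  # read position, steer toward goal
    print(fix)

asyncio.run(main())
```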
This article addresses a dilemma about autonomous vehicles: how to respond to trade-off scenarios in which all possible responses involve the loss of life but there is a choice about whose life or lives are lost. I consider four options: kill fewer people, protect passengers, equal concern for survival, and recognize everyone’s interests. I solve this dilemma via what I call the new trolley problem, which seeks a rationale for the intuition that it is unethical to kill a smaller number of people to avoid killing a greater number of people based on numbers alone. I argue that killing a smaller number of people to avoid killing a greater number of people based on numbers alone is unethical because it disrespects the humanity of the individuals in the smaller-numbered group. I defend the recognize-everyone’s-interests algorithm, which will probably kill fewer people but will not do so based on numbers alone.
In a recent paper in Nature entitled The Moral Machine Experiment, Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called “autonomous vehicles” and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps of the Moral Machinists’ argument:
1) Find out what “public morality” will prefer to see happen.
2) On the basis of this discovery, both claim popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face.
3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences.
4) This yields “permission” to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.