For many tasks, a mobile robot needs to know “where it is” either on an ongoing basis or when specific events occur. A robot may need to know its location in order to be able to plan appropriate paths or to know if the current location is the appropriate place at which to perform some operation. Knowing “where the robot is” has many different connotations. In the strongest sense, “knowing where the robot is” involves estimating the location of the robot (in either qualitative or quantitative terms) with respect to some global representation of space: we refer to this as strong localization.
The use of machine learning in robotics is a vast and growing area of research. In this chapter we consider a few key directions: the use of deep neural networks, applications of reinforcement learning (especially deep reinforcement learning), and the rapidly emerging potential of large language models.
Although the vast majority of mobile robotic systems involve a single robot operating alone in its environment, a growing number of researchers are considering the challenges and potential advantages of having a group of robots cooperate in order to complete some required task. For some specific robotic tasks, such as exploring an unknown planet [374], search and rescue [812], pushing objects [608], [513], [687], [821], or cleaning up toxic waste [609], it has been suggested that rather than send one very complex robot to perform the task it would be more effective to send a number of smaller, simpler robots. Such a collection of robots is sometimes described as a swarm [81], a colony [255], or a collective [436], or the robots may be said to exhibit cooperative behavior [607].
Robots in fiction seem to be able to engage in complex planning tasks with little or no difficulty. For example, in the novel 2001: A Space Odyssey, HAL is capable of making long-range plans and reasoning about the effects and consequences of his actions [167]. It is indeed fortunate that fictional autonomous systems can be presented without having to specify how such devices represent and reason about their environment. Unfortunately, real autonomous systems require explicit internal representations and mechanisms for reasoning about them.
Anyone who has had to move about in the dark recognizes the importance of vision to human navigation. Tasks that are fraught with difficulty and danger in the dark become straightforward when the lights are on. Given that humans seem to navigate effortlessly with vision, it seems natural to consider vision as a sensor for mobile robots. Visual sensing has many desirable potential features, including that it is passive and has high resolution and a long range.
There are many exercises included at the ends of chapters in Parts I and II of the book. This appendix provides brief solutions, or at least answers, to most of these exercises.
We begin our journey into state estimation by considering systems that can be modelled using linear equations corrupted by Gaussian noise. While these linear-Gaussian systems are severe approximations of real robots, the mathematics are very amenable to straightforward analysis. We discuss the difference between Bayesian estimation and maximum a posteriori estimation in the context of batch trajectory estimation; these two approaches are effectively the same for linear systems, but this contrast is crucial to understanding the results for nonlinear systems later on. After introducing batch trajectory estimation, we show how the structure of the problem gives rise to sparsity in our equations that can be exploited to provide a very efficient solution. Indeed, the famous Rauch-Tung-Striebel smoother (whose forward pass is the Kalman filter) is equivalent to solving the batch trajectory problem. Several other avenues to the Kalman filter are also explored. Although much of the book focusses on discrete-time motion models for robots, we show how to begin with continuous-time models as well; in particular, we make the connection that batch continuous-time trajectory estimation is an example of Gaussian process regression, a popular tool from machine learning.
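The predict/update recursion at the heart of the Kalman filter mentioned above can be sketched in a few lines. The following is a minimal illustration only, not an example from the text: a hypothetical one-dimensional robot with motion model x_k = x_{k-1} + v_k + w_k and measurement model y_k = x_k + n_k, where w_k and n_k are zero-mean Gaussian noises with variances Q and R.

```python
def kalman_step(x, P, v, y, Q, R):
    """One scalar Kalman filter cycle; returns the posterior mean and variance."""
    # Predict: propagate the mean through the motion model, inflate the variance.
    x_pred = x + v
    P_pred = P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_post = x_pred + K * (y - x_pred)
    P_post = (1.0 - K) * P_pred
    return x_post, P_post

# Track a robot commanded to move 1 unit per step, given noisy position readings.
x, P = 0.0, 1.0
for y in [1.1, 2.0, 2.9, 4.2]:
    x, P = kalman_step(x, P, v=1.0, y=y, Q=0.1, R=0.5)
```

Note how the posterior variance P shrinks with each update, reflecting the information gained from each measurement; in the batch view described above, running this recursion forward and then smoothing backward (Rauch-Tung-Striebel) recovers the same answer as solving the full trajectory problem at once.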
This appendix contains a few extra derivations relating to rotations and poses that may be of interest to some enthusiastic readers. In particular, the eigen/Jordan decomposition of rotations and poses provides some deeper insight into these quantities that are ubiquitous in robotics.
Typical robots not only translate in the world but also rotate. This chapter serves as a primer on three-dimensional geometry, introducing such important geometric concepts as vectors, reference frames, coordinates, rotations, and poses (rotation and translation). We introduce kinematics, how geometry changes over time, with an eye towards describing robot motion models. We also present several common sensor models using our three-dimensional tools: camera, stereo camera, lidar, and inertial measurement unit.
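To make the rotation and pose concepts concrete, here is a minimal hypothetical sketch (the particular angle, offset, and point are illustrative only): a rotation matrix about one axis, and a pose (rotation plus translation) applied to a point.

```python
import math

def rot_z(theta):
    """3x3 rotation matrix for a rotation of theta radians about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply_pose(C, r, p):
    """Apply a pose (rotation matrix C, translation r) to a point p: p' = C p + r."""
    return [sum(C[i][j] * p[j] for j in range(3)) + r[i] for i in range(3)]

# A point one unit along x, rotated 90 degrees about z, then translated along y.
p_new = apply_pose(rot_z(math.pi / 2.0), [0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
```

Composing such transforms, and differentiating them with respect to time, is exactly the kinematic machinery the chapter develops for robot motion models.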
The final technical chapter returns to the idea of representing a robot trajectory as a continuous function of time, only now in three-dimensional space where the robot may translate and rotate. We provide a method to adapt our earlier continuous-time trajectory estimation to Lie groups that is practical and efficient. The chapter serves as a final example of pulling together many of the key ingredients of the book into a single problem: continuous-time estimation as Gaussian process regression, Lie groups to handle rotations, and simultaneous localization and mapping.
Nonlinear systems provide additional challenges for robotic state estimation. We provide a derivation of the famous extended Kalman filter (EKF) and then go on to study several generalizations and extensions of recursive estimation that are commonly used: the Bayes filter, the iterated EKF, the particle filter, and the sigmapoint Kalman filter. We return to batch estimation for nonlinear systems, which we connect more deeply to numerical optimization than in the linear-Gaussian chapter. We discuss the strengths and weaknesses of the various techniques presented and then introduce sliding-window filters as a compromise between recursive and batch methods. Finally, we discuss how continuous-time motion models can be employed in batch trajectory estimation for nonlinear systems.
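The key step that distinguishes the EKF from the linear Kalman filter is linearization of a nonlinear model about the current estimate. As a minimal hypothetical sketch (the scalar system, beacon geometry, and noise values below are illustrative, not from the text), consider a robot at position x on a line measuring its range to a beacon at perpendicular offset D, so the measurement model y = sqrt(x^2 + D^2) + n is nonlinear in x:

```python
import math

D = 3.0  # hypothetical perpendicular offset of the beacon

def h(x):
    """Nonlinear range measurement model."""
    return math.sqrt(x * x + D * D)

def ekf_step(x, P, v, y, Q, R):
    """One scalar EKF cycle: predict with a linear motion model, then update
    by linearizing h about the predicted mean."""
    x_pred = x + v          # motion model x_k = x_{k-1} + v_k + w_k
    P_pred = P + Q
    H = x_pred / h(x_pred)  # Jacobian dh/dx evaluated at the predicted mean
    K = P_pred * H / (H * P_pred * H + R)
    x_post = x_pred + K * (y - h(x_pred))
    P_post = (1.0 - K * H) * P_pred
    return x_post, P_post

# Two steps with noise-free measurements generated from the true positions.
x, P = 1.0, 0.5
for v, y in [(1.0, h(2.0)), (1.0, h(3.0))]:
    x, P = ekf_step(x, P, v, y, Q=0.1, R=0.2)
```

Because the Jacobian is evaluated only at the predicted mean, the EKF can perform poorly when the linearization point is far from the truth; the iterated EKF, sigmapoint Kalman filter, and batch methods discussed in the chapter each address this weakness in different ways.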