Rotational state variables pose a problem for our estimation tools from earlier chapters, which all assumed that the state to be estimated was a vector in the sense of linear algebra. Rotations cannot be globally described as vectors and so must be handled with care. This chapter re-examines rotations as an example of a Lie group, which has many useful properties despite not being a vector space. The main takeaway of the chapter is that in estimation we can use the Lie group structure to adapt our estimation tools to work with rotations and, by extension, poses. The key is to consider small perturbations to rotations in the group's Lie algebra, which makes two tasks easier to handle: performing numerical optimization and representing uncertainty. The chapter can also serve as a useful reference for readers already familiar with the content.
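As a rough illustration of the perturbation idea described above (this sketch is ours, not code from the book, and the helper names `hat` and `exp_so3` are our own), a rotation matrix can be nudged by a small 3-vector in the Lie algebra via the exponential map, so the result stays a valid rotation:

```python
import numpy as np

def hat(phi):
    """Map a 3-vector to its 3x3 skew-symmetric matrix (the 'hat' operator)."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def exp_so3(phi):
    """Exponential map from so(3) to SO(3) via Rodrigues' formula."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3) + hat(phi)  # first-order approximation near identity
    axis = phi / angle
    K = hat(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Perturb a nominal rotation R by a small vector in the Lie algebra:
R = exp_so3(np.array([0.3, -0.2, 0.5]))   # some nominal rotation
delta = np.array([1e-3, -2e-3, 5e-4])     # small perturbation (a plain 3-vector)
R_perturbed = exp_so3(delta) @ R          # left perturbation; result stays on SO(3)
```

Because the perturbation lives in an ordinary vector space, it can be optimized over or given a Gaussian distribution, which is exactly what makes the two tasks above tractable.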
Following on the heels of the chapter on nonlinear estimation, this chapter focusses on some of the common pitfalls and failure modes of estimation techniques. We begin by discussing key properties that we would like healthy estimators to have (e.g., unbiasedness, consistency) and how to measure these properties. We then delve more deeply into biases and discuss how in some cases we can fold bias estimation directly into our estimator, while in other cases we cannot. We touch briefly on data association (matching measurements to the right parts of models) and on how to mitigate the effect of outlier measurements using robust estimation. We close with some methods for determining good measurement covariances for use in our estimators.
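To give a flavour of robust estimation against outliers (a minimal sketch of one standard approach, a Huber cost solved by iteratively reweighted least squares; it is not code from the book, and `robust_mean` is a hypothetical helper name), consider robustly estimating a scalar mean:

```python
import numpy as np

def robust_mean(x, delta=1.0, iters=50):
    """Estimate the mean of samples x using a Huber cost, solved by
    iteratively reweighted least squares (IRLS). Residuals larger than
    `delta` are down-weighted, so gross outliers have bounded influence."""
    mu = np.median(x)                      # robust initial guess
    for _ in range(iters):
        r = x - mu
        w = np.ones_like(r)
        big = np.abs(r) > delta
        w[big] = delta / np.abs(r[big])    # Huber weights for large residuals
        mu = np.sum(w * x) / np.sum(w)     # weighted least-squares update
    return mu

# Demo: 100 inliers near 5.0 plus three gross outliers.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(5.0, 0.5, 100), [50.0, 60.0, 55.0]])
mu_robust = robust_mean(x)
mu_naive = x.mean()    # the ordinary mean is dragged toward the outliers
```

The same reweighting idea carries over to full nonlinear estimators, where each measurement's contribution to the cost is down-weighted according to its residual.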
This chapter opens with a brief history of estimation from astronomy, navigation at sea, and space exploration. It defines the problem of estimation and gives some modern sensor fusion examples. A description of how the book is organized and how to read it is provided. The book is compared to other great volumes on estimation and robotics in order to understand how it fits into the larger landscape.
This appendix is a collection of topics that were slightly peripheral to the main flow of the book, but still potentially interesting to some readers. The derivations of the Fisher information matrix in several forms, as well as Stein's lemma, are important tools employed in the main parts of the book.
With both our estimation and Lie group tools from previous chapters in hand, we now begin to bring the two together. We discuss a classic three-dimensional estimation problem in robotics: point-cloud alignment; this gives us our first example of carrying out optimization over the group of rotations by a few different means. We then present the classic problem of localizing a moving robot using point observations of known three-dimensional landmarks; this involves adapting the extended Kalman filter (EKF) to work with the group of poses. Another common problem in robotics is that of pose-graph optimization, which is easily handled using our Lie group tools. We conclude with a presentation of how to carry out trajectory estimation based on an inertial measurement unit (IMU), both recursively via the EKF and in batch form using IMU preintegration for efficiency.
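One widely used solution to the point-cloud alignment problem mentioned above is the SVD-based (Kabsch/Horn) method; the sketch below is our own NumPy illustration of that idea under the assumption of known point correspondences, and `align_point_clouds` is a hypothetical helper name, not the book's API:

```python
import numpy as np

def align_point_clouds(P, Q):
    """Find rotation R and translation t minimizing sum ||q_i - (R p_i + t)||^2,
    given matched point sets P, Q of shape (N, 3), via the SVD method."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean                   # centre both clouds
    W = Qc.T @ Pc                                     # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # guard against reflections
    R = U @ D @ Vt                                    # optimal rotation
    t = q_mean - R @ p_mean                           # optimal translation
    return R, t

# Recover a known transform from synthetic correspondences:
rng = np.random.default_rng(0)
P = rng.standard_normal((20, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = align_point_clouds(P, Q)
```

The determinant correction is the detail that keeps the solution on the rotation group rather than allowing a reflection, which is one concrete reason the group structure matters in practice.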
This chapter is devoted to the classic simultaneous localization and mapping (SLAM) problem and the related problem of bundle adjustment. In these problems we must estimate not only the trajectory of a robot but also the three-dimensional positions of many point landmarks, based on noisy sensor data and (in the case of SLAM) a motion model. We discuss how to adapt the tools presented earlier to include landmarks in the state; the inclusion of landmarks changes the sparsity pattern of the resulting estimation equations, and we discuss strategies for continuing to solve them efficiently. Our approach is carried out entirely in three dimensions using our Lie group tools.
As the book attempts to be as stand-alone as possible, this chapter provides up front a summary of all the results in probability theory that will be needed later on. Probability is key to estimation, as we not only want to estimate, for example, where something is, but also how confident we are in that estimate. The first half of the chapter introduces general probability density functions, Bayes' theorem, the notion of independence, and quantifying uncertainty, amongst other topics. The second half of the chapter delves into Gaussian probability density functions specifically and establishes the key tools needed in the common estimation algorithms to follow in later chapters. This chapter can also simply serve as a reference for readers already familiar with the content.
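A small example of why Gaussians are so convenient for estimation (our own illustration, not from the book; `fuse_gaussians` is a hypothetical helper name): fusing two independent Gaussian estimates of the same scalar quantity via Bayes' theorem yields another Gaussian, with inverse variances (information) simply adding:

```python
import numpy as np

def fuse_gaussians(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same scalar.
    The Bayesian posterior is Gaussian; inverse variances add."""
    info = 1.0 / var1 + 1.0 / var2       # total information
    var = 1.0 / info
    mu = var * (mu1 / var1 + mu2 / var2)  # information-weighted mean
    return mu, var

# Two noisy range measurements of the same landmark:
mu, var = fuse_gaussians(10.2, 0.04, 9.8, 0.01)
```

The fused variance is always smaller than either input variance, and the fused mean lies closer to the more confident measurement; this pattern is the scalar core of the Kalman filter update.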
This appendix serves as a quick summary of the main linear algebra and matrix calculus tools used throughout the book. It was designed primarily as a reference but could be used as a primer or refresher to be read before the main chapters of the book.
This chapter takes a step back and revisits nonlinear estimation through the lens of variational inference, another concept common in the machine learning world. Estimation is posed as the minimization of the Kullback-Leibler divergence between a Gaussian estimate and the true Bayesian posterior. We follow through the consequences of this starting point and show that we can arrive at many of the algorithms presented earlier through appropriate approximations, but can also open the door to new possibilities. For example, a derivative-free batch estimator that uses sigmapoints is discussed. Variational inference also provides a principled approach to learning parameters in our estimators from training data (i.e., parameters of our motion and observation models).
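For readers unfamiliar with the objective mentioned above, the Kullback-Leibler divergence between two multivariate Gaussians has a well-known closed form; the sketch below is our own NumPy illustration of that formula (`kl_gaussians` is a hypothetical helper name, not the book's notation):

```python
import numpy as np

def kl_gaussians(mu0, S0, mu1, S1):
    """KL divergence KL( N(mu0, S0) || N(mu1, S1) ) between two
    k-dimensional Gaussians, using the standard closed-form expression."""
    k = mu0.size
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)       # covariance mismatch
                  + d @ S1_inv @ d            # mean mismatch (Mahalanobis)
                  - k                         # dimension offset
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# KL grows with the mean offset between the estimate and the posterior:
kl = kl_gaussians(np.array([1.0, 0.0]), np.eye(2), np.zeros(2), np.eye(2))
```

In variational inference the second argument plays the role of the (intractable) posterior, and the divergence is minimized over the parameters of the Gaussian in the first argument.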
A key aspect of robotics today is estimating the state (e.g., position and orientation) of a robot, based on noisy sensor data. This book targets students and practitioners of robotics by presenting classical state estimation methods (e.g., the Kalman filter) but also important modern topics such as batch estimation, Bayes filter, sigmapoint and particle filters, robust estimation for outlier rejection, and continuous-time trajectory estimation and its connection to Gaussian-process regression. Since most robots operate in a three-dimensional world, common sensor models (e.g., camera, laser rangefinder) are provided followed by practical advice on how to carry out state estimation for rotational state variables. The book covers robotic applications such as point-cloud alignment, pose-graph relaxation, bundle adjustment, and simultaneous localization and mapping. Highlights of this expanded second edition include a new chapter on variational inference, a new section on inertial navigation, more introductory material on probability, and a primer on matrix calculus.