Ergodic is a term appropriated from physics that derives from the Greek words έργον and οδός, meaning “work” and “path”. In the context of controlled Markov processes it refers to the problem of minimizing a time averaged penalty, or cost, over an infinite time horizon. It is of interest in situations when transients are fast and therefore relatively unimportant, and one is essentially comparing various possible equilibrium behaviors. One typical situation is in communication networks, where continuous time and space models arise as scaled limits of the underlying discrete state and/or time phenomena.
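In generic notation (the state $X_t$, control $U_t$ and running cost $c$ are symbols assumed here for illustration, not necessarily those used later in the text), the ergodic criterion is the long-run time average
\[
J(U) \;=\; \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[\int_0^T c(X_t, U_t)\, dt\right],
\]
which is to be minimized over admissible controls $U$; note that altering the process on any finite initial time interval leaves this quantity unchanged.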
Ergodic cost differs from the simpler “integral” costs such as finite horizon or infinite horizon discounted costs in several crucial ways. Most importantly, one is looking at a cost averaged over infinite time, whence any finite initial segment is irrelevant as it does not affect the cost. This counterintuitive situation is also the reason for the fundamental difficulty in handling this problem analytically – one cannot use for this problem the naive dynamic programming heuristic because it is perforce based on splitting the time horizon into an initial segment and the rest. One is thus obliged to devise altogether different techniques to handle the ergodic cost. One of them, the more familiar one, is to treat it as a limiting case of the infinite horizon discounted cost control problem as the discount factor tends to zero. This “vanishing discount” approach leads to the correct dynamic programming, or “Hamilton–Jacobi–Bellman” (HJB) equation for the problem, allowing one to characterize optimal control policies at least in the “nicer” situations when convenient technical hypotheses hold.
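As a brief sketch of the vanishing discount argument in the same generic notation (all symbols assumed for illustration), one starts from the discounted value function
\[
V_\alpha(x) \;=\; \inf_{U}\, \mathbb{E}_x\!\left[\int_0^\infty e^{-\alpha t}\, c(X_t, U_t)\, dt\right]
\]
and, under suitable technical hypotheses, lets the discount rate $\alpha \downarrow 0$, so that $\alpha V_\alpha(x) \to \beta$ (the optimal ergodic cost) and $V_\alpha(x) - V_\alpha(x_0) \to V(x)$ for a fixed reference point $x_0$. The limits formally satisfy an ergodic HJB equation of the form
\[
\min_{u}\bigl[\, \mathcal{L}^{u} V(x) + c(x, u) \,\bigr] \;=\; \beta,
\]
where $\mathcal{L}^{u}$ denotes the controlled generator; minimizing selectors of the left-hand side then characterize optimal policies in the “nicer” situations referred to above.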
Kalman filter, particle filter, IMM, PDA, ITS, random sets... The number of useful object-tracking methods is exploding. But how are they related? How do they help track everything from aircraft, missiles and extra-terrestrial objects to people and lymphocyte cells? How can they be adapted to novel applications? Fundamentals of Object Tracking tells you how. Starting with the generic object-tracking problem, it outlines the generic Bayesian solution. It then shows systematically how to formulate the major tracking problems – maneuvering, multiobject, clutter, out-of-sequence sensors – within this Bayesian framework and how to derive the standard tracking solutions. This structured approach makes very complex object-tracking algorithms accessible to the growing number of users working on real-world tracking problems and supports them in designing their own tracking filters under their unique application constraints. The book concludes with a chapter on issues critical to successful implementation of tracking algorithms, such as track initialization and merging.
The sample space is the set of all possible values, or outcomes, of a realization that is not known, be it in the past, present or future. In the Bayesian probabilistic framework, every unknown quantity is treated as a random quantity.
Examples:
When tracking an object in 3D space, the position of the target at some point in time in the future is not known. The 3D space is the sample space.
The exact position of that object in the past may not be known either. In many tracking situations, the exact position is never observed, only estimated. In that case, even though it lies in the past, the exact position of the target is a random quantity and the 3D space is its sample space.
Measurements in object tracking are the results of observations by sensing devices and are subject to random fluctuations. The measurement errors attached to the measurements make them random quantities; the focus is on these errors, and they are treated as random values. Their sample space is problem dependent, but is often the value space of the measurements.
In short, the sample space is the mathematical set of all values that can be taken by an unknown quantity of interest. One of the simplest examples is the tossing of a coin: the sample space is {H, T}, where H denotes the outcome heads and T denotes tails.
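As a minimal illustration in Python (the numbers and the Gaussian prior below are assumptions made for the example, not taken from the text), a discrete sample space such as {H, T} and a continuous one such as 3D position can both be represented together with a prior over them:

```python
# Illustrative only: a discrete and a continuous sample space with priors over them.
import numpy as np

# Discrete sample space for a coin toss, with a Bayesian prior over the outcomes.
coin_sample_space = ["H", "T"]
coin_prior = {"H": 0.5, "T": 0.5}

# Continuous sample space (3D position); the prior is a density, here an assumed
# zero-mean Gaussian with a broad covariance (units of m and m^2, purely illustrative).
prior_mean = np.zeros(3)
prior_cov = np.diag([100.0, 100.0, 100.0])

def prior_density(x):
    """Evaluate the assumed Gaussian prior density at a 3D position x."""
    d = x - prior_mean
    norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(prior_cov))
    return float(np.exp(-0.5 * d @ np.linalg.solve(prior_cov, d)) / norm)

print(coin_prior["H"], prior_density(np.array([0.0, 0.0, 0.0])))
```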
In Chapters 2 and 3, we introduced state estimation and filtering theory and its application to idealized object tracking problems. What makes practical object tracking problems both challenging and interesting is that the sensor measurements, more often than not, contain detections from false targets. For example, in many radar and sonar applications, measurements (detections) originate not only from objects of interest, but also from thermal noise, terrain reflections, clouds, etc. Such unwanted measurements are usually termed clutter. In vision-based object tracking, where tracking can be used to count moving targets, shadows created by an afternoon sun, light reflections on snow or the movement of leaves on a tree can all generate clutter data in the images.
One of the defining characteristics of clutter or false alarms is that their number changes from one time instant to the next in a random manner and, to make matters worse, target- and clutter-originated measurements share the same measurement space and look alike. Practical tracking problems are considerably more difficult because, even when there are targets in the sensor's field of view, they can sometimes go undetected and fail to appear in the set of measurements. In other words, true measurements from the target are present in each measurement scan only with a certain probability of detection. Hence, determining the state of the object from a combination of false alarms and true target returns is at the heart of all practical object tracking problems and is the subject of this chapter.
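A common textbook way to model a single scan, sketched below in Python (the detection probability, clutter rate, surveillance region and noise covariance are all assumed values chosen for illustration), is to let the target-originated measurement appear only with probability P_D and to add a Poisson-distributed number of clutter points spread uniformly over the surveillance region:

```python
# Illustrative single-scan measurement model with missed detections and clutter.
# All parameter values below are assumptions chosen for the example.
import numpy as np

rng = np.random.default_rng(0)

P_D = 0.9                           # probability that the target is detected in a scan
clutter_rate = 5.0                  # expected number of clutter measurements per scan
region = np.array([[0.0, 100.0],    # surveillance region: x in [0, 100]
                   [0.0, 100.0]])   #                      y in [0, 100]
R = np.diag([1.0, 1.0])             # measurement noise covariance (assumed)

def generate_scan(target_position):
    """Return one scan: a list of 2D measurements of unknown origin."""
    measurements = []
    # Target-originated measurement, present only with probability P_D.
    if rng.random() < P_D:
        measurements.append(target_position + rng.multivariate_normal(np.zeros(2), R))
    # Clutter: Poisson-distributed count, spread uniformly over the region.
    for _ in range(rng.poisson(clutter_rate)):
        measurements.append(rng.uniform(region[:, 0], region[:, 1]))
    rng.shuffle(measurements)        # the tracker does not know which is which
    return measurements

print(generate_scan(np.array([50.0, 50.0])))
```

The tracker then sees only the unlabeled list of measurements and must reason probabilistically about which, if any, of them originated from the object.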
Typically, multiple-object tracking problems are handled by extending single-object tracking algorithms, with each object tracked as an isolated entity. The challenge arises when targets are close to one another and there is ambiguity about the origin of the measurements, i.e., which measurements belong to which track. Using similar data association techniques, multiple measurements are assigned to multiple objects. However, such an extension of single-object trackers to multiple-object trackers assumes that the number of objects present in the surveillance space is known, which is generally not the case.
This problem has driven some of the significant advances in the “data association” logic of these trackers. The data association step determines the origin of the measurements in a probabilistic manner: it hypothesizes the measurement origin and calculates a probability for each hypothesis. For example, a single-object tracking algorithm considers two hypotheses under measurement origin uncertainty – “the measurement is from an object of interest” or “the measurement is from clutter.” Such algorithms ignore the possibility of measurements originating from other objects. This problem is partially solved by introducing the hypothesis “the measurement is from the ith (out of N) objects.” But fixing the number of objects to a specific value is itself a limitation. Moreover, this approach does not provide any measure of the validity of the assumed number of objects. Multi-object trackers need to estimate the number of objects and their individual states jointly.
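For the single-object case with the hypotheses described above, one common way to compute the hypothesis probabilities is a PDA-style weighting under a Poisson clutter assumption; the sketch below is illustrative, and the function name, weighting details and numerical values are assumptions rather than the book's notation:

```python
# Illustrative single-object association probabilities under measurement-origin
# uncertainty, in the spirit of PDA-style weighting with a Poisson clutter model.
import numpy as np

def association_probabilities(innovations, S, P_D, P_G, clutter_density):
    """Return [beta_0, beta_1, ..., beta_m] for m validated measurements.

    beta_0 : probability that every validated measurement is clutter
    beta_i : probability that measurement i originated from the object
    """
    dim = S.shape[0]
    S_inv = np.linalg.inv(S)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** dim * np.linalg.det(S))
    likelihoods = [norm * np.exp(-0.5 * nu @ S_inv @ nu) for nu in innovations]
    weights = [clutter_density * (1.0 - P_D * P_G)]   # "all measurements are clutter"
    weights += [P_D * lik for lik in likelihoods]     # "measurement i is from the object"
    weights = np.array(weights)
    return weights / weights.sum()

# Example: two validated measurements, expressed as innovations z_i - z_predicted.
S = np.diag([4.0, 4.0])                               # innovation covariance (assumed)
innovations = [np.array([0.5, -0.2]), np.array([3.0, 2.5])]
print(association_probabilities(innovations, S, P_D=0.9, P_G=0.99, clutter_density=1e-3))
```

Here the first entry is the probability that none of the validated measurements originated from the object, while the remaining entries weight each candidate measurement by its Gaussian likelihood under the predicted measurement.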
Tracking the paths of moving objects is an activity with a long history. People in ancient societies used to track moving prey to hunt and feed their kith and kin, and invented ways to track the motion of stars for navigation purposes and to predict seasonal changes in their environments. Object tracking has been an essential technology for human survival and has significantly contributed to human progress.
In recent times, there has been an explosion in the use of object tracking technology in non-military applications. Object tracking algorithms have become an essential part of our daily lives. For example, GPS-based navigation is a daily tool of humankind. In this application a group of artificial satellites in outer space continuously locate the vehicles people drive, and the object tracking algorithms within the GPS perform self-localization and enable us to enjoy a number of location-based services, such as finding places of interest and route planning. Similarly, tracking of objects is used in a wide variety of contexts, such as airspace surveillance, satellite and space vehicle tracking, submarine and whale tracking and intelligent video surveillance. They are also used in autonomous robot navigation using lasers, stereo cameras and other proximity sensors, radiosonde-enabled balloon tracking for accurate weather predictions, and, more recently, in the study of cell biology to study cell fate under different chemical and environmental influences by tracking many kinds of cells, including lymphocyte and stem cells, through multiple generations of birth and death.
Estimation of an object state at a particular time based on measurements collected beyond that time is generally termed smoothing or retrodiction. Smoothing improves the estimates compared to those obtained by filters owing to the use of more observations (or information). This comes at the cost of a certain time delay. However, these improvements are highly effective in applications like “situation awareness” or “threat assessment.” These higher-level applications improve operator efficiency if a more accurate picture of the actual field scenario is provided, even if it comes with a time delay. For these applications, besides the object state, parameters representing the overall scenario, such as the number of targets and their initiation/termination instants and locations, may prove very useful. A smoothing algorithm can yield a better estimate of the overall situational picture and thus increase the effectiveness of critical applications like situation/threat awareness. This chapter will introduce the Bayesian formulation of smoothing and derive the established smoothing algorithms under different tracking scenarios: non-maneuvering, maneuvering, clutter and in the presence of object existence uncertainty.
Introduction to smoothing
Filters, introduced in previous chapters, produce the “best estimate” of the object state at a particular time based on the measurements collected up to that time. Smoothers, on the other hand, produce an estimate of the state at a given time based on measurements collected beyond the time in question (the predictor is another estimator, in which the estimate at a certain time is based only on measurements collected up to some time before the time in question).
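In generic notation (which may differ from that used later in this chapter), fixed-interval Bayesian smoothing combines the forward filtering and prediction densities in a backward recursion:
\[
p(x_k \mid z_{1:T}) \;=\; p(x_k \mid z_{1:k}) \int \frac{p(x_{k+1} \mid x_k)\, p(x_{k+1} \mid z_{1:T})}{p(x_{k+1} \mid z_{1:k})}\, dx_{k+1}, \qquad k < T,
\]
where $p(x_k \mid z_{1:k})$ is the filtering density and $p(x_{k+1} \mid z_{1:k}) = \int p(x_{k+1} \mid x_k)\, p(x_k \mid z_{1:k})\, dx_k$ is the one-step prediction. In the linear Gaussian case this recursion reduces to the well-known Rauch–Tung–Striebel smoother.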
Maneuvering objects are those objects whose dynamical behavior changes over time. An object that suddenly turns or accelerates displays a maneuvering behavior with regard to its tracked position. While the definition of a maneuvering object extends beyond the tracking of position and speed, historically it is in this context that maneuvering object tracking theory developed. This chapter presents a unified derivation of some of the most common maneuvering object tracking algorithms in the Chapman–Kolmogorov–Bayesian framework.
Modeling for maneuvering object tracking
In general, maneuvering object tracking refers to the problem of state estimation where the system model undergoes abrupt changes. The standard Kalman filter with a single motion model is limited in performance for such problems because it does not respond effectively to the changes in the dynamics as the object maneuvers. A large number of approaches to the maneuvering object tracking problem have been developed, including process noise adaptation (Singer et al., 1974; Moose, 1975; Gholson and Moose, 1977; Ricker and Williams, 1978; Moose et al., 1979; Farina and Studer, 1985), input estimation (Chan et al., 1979), variable dimension filtering (Bar-Shalom and Birmiwal, 1982) and multiple models (MM) (Ackerson and Fu, 1970; Mori et al., 1986; Blom and Bar-Shalom, 1988; Bar-Shalom and Li, 1993). These apparently diverse approaches may be grouped into two broad categories:
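As a brief sketch of the multiple model idea in generic notation (the symbols below are assumed for illustration and are not necessarily those used in the chapter), the object is taken to evolve according to one of $r$ candidate models, selected by a finite-state Markov chain $r_k$ with transition probabilities $\pi_{ij}$:
\[
x_{k+1} = F(r_k)\, x_k + v_k(r_k), \qquad z_k = H(r_k)\, x_k + w_k(r_k), \qquad \Pr\{r_{k+1} = j \mid r_k = i\} = \pi_{ij},
\]
where $v_k$ and $w_k$ are model-dependent process and measurement noises. Estimators such as the IMM filter maintain a bank of model-matched filters together with the probabilities of the individual models.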
Object/target tracking refers to the problem of using sensor measurements to determine the location, path and characteristics of objects of interest. A sensor can be any measuring device, such as radar, sonar, ladar, camera, infrared sensor, microphone, ultrasound or any other sensor that can be used to collect information about objects in the environment. The typical objectives of object tracking are the determination of the number of objects, their identities and their states, such as positions, velocities and in some cases their features. A typical example of object/target tracking is the radar tracking of aircraft. The object tracking problem in this context attempts to determine the number of aircraft in a region under surveillance, their types, such as military, commercial or recreational, their identities, and their speeds and positions, all based on measurements obtained from a radar.
There are a number of sources of uncertainty in the object tracking problem that render it a highly non-trivial task. For example, object motion is often subject to random disturbances, objects can go undetected by sensors and the number of objects in the field of view of a sensor can change randomly. The sensor measurements are subject to random noises and the number of measurements received by a sensor from one look to the next can vary and be unpredictable. Objects may be close to each other and the measurements received might not distinguish between these objects.
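Despite these sources of uncertainty, the generic Bayesian approach to tracking has a common structure: a Chapman–Kolmogorov prediction step followed by a Bayes update, written here in generic notation (object state $x_k$, measurement history $z_{1:k}$):
\[
p(x_k \mid z_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid z_{1:k-1})\, dx_{k-1},
\qquad
p(x_k \mid z_{1:k}) = \frac{p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})}{\int p(z_k \mid x)\, p(x \mid z_{1:k-1})\, dx}.
\]
The standard tracking filters discussed in this book can be viewed as ways of carrying out, or approximating, this recursion under particular modeling assumptions.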