The efficacy of the concept of change of measures was demonstrated in the last few chapters in the context of non-linear stochastic filtering, a tool that also has considerable utility in developing numerical schemes for system identification problems. This chapter concerns another application of the same notion, leading to a paradigm [Sarkar et al. 2014] for global optimization problems, wherein solutions are guided mainly by derivative-free directional information computable from the sample statistical moments of the design (state) variables within a Monte Carlo (MC) setup. Before presenting the ideas behind this approach in some detail, it is advisable to first review some of the available methodologies and strategies for solving such optimization problems.
In most cases of practical interest, the cost or objective functional, whose extremization solves the optimization problem, could be non-convex, non-separable and even non-smooth. Here separability means that the cost function can be additively split into component functions, so that the optimization problem may itself be decomposed into a set of sub-problems. An optimization problem is convex if it involves minimization of a convex function (or maximization of a concave function) with the admissible state variables lying in a convex set. For a convex problem, a fundamental result is that a locally optimal solution is also globally optimal. The classical methods [Fletcher and Reeves 1964, Fox 1971, Rao 2009] that mostly use directional derivatives are particularly useful in solving convex problems (Fig. 9.1). Non-convex problems, on the other hand, may have many local optima, and choosing the best one (i.e., the global extremum) could be an extremely hard task. In global optimization, we seek, in the design (state or parameter) space, the extremal locations of non-convex functions subject to (possibly) non-convex constraints. Here the objective functional could be multivariate, multimodal and even non-differentiable, features that together preclude the application of a gradient-based Newton step in solving the optimization problem.
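To make the difficulty concrete, the following is a small illustrative sketch (not drawn from the text) using the standard Rastrigin test function, which is multimodal and non-convex with its global minimum at the origin. Plain gradient descent started away from the origin settles into a nearby local minimum, which is precisely the failure mode that motivates derivative-free global search. The function, step size and starting point are all assumptions of the sketch.

import numpy as np

def rastrigin(x):
    # Multimodal, non-convex test function; global minimum value 0 at x = 0.
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def rastrigin_grad(x):
    # Analytical gradient of the Rastrigin function.
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def gradient_descent(x0, lr=0.002, steps=5000):
    # A purely local, derivative-based search.
    x = x0.copy()
    for _ in range(steps):
        x -= lr * rastrigin_grad(x)
    return x

x0 = np.array([3.3, -2.7])              # start inside a basin away from the origin
x_local = gradient_descent(x0)
print("local minimum found :", x_local, "f =", rastrigin(x_local))
print("global minimum      :", np.zeros(2), "f =", rastrigin(np.zeros(2)))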
In the last chapter, we laid down a procedure for solving an optimization problem by first posing it as a martingale problem (see Section 4.12, Chapter 4), whose solution may lead to a local extremization of the cost functional. The stochastic search is then guided towards the global maximum by random perturbation strategies, namely coalescence and scrambling, devised specifically for the purpose. Realizing a single reliable scheme that satisfies the diverse and conflicting needs of an optimization problem defined in terms of multiple cost functions under prescribed constraints is a tough task [Fonseca and Fleming 1995, Deb 2001]. This chapter addresses precisely this issue and considers some modifications to the skeletal optimization approach of the last chapter so as to impart greater flexibility in designing the innovation process in the presence of conflicting demands en route to the detection of the global extremum. The efficiency of the global search relies essentially upon the ability of the algorithm to explore the search space whilst preserving some directionality that helps in quickly resolving the nearest extremum. The development of the modified setup, referred to as COMBEO (Change Of Measure Based Evolutionary Optimization), recognizes the near impossibility of a specific optimization scheme performing uniformly well across a large class of problems. Recognition of this fact had earlier [Hart et al. 2005, Vrugt and Robinson 2007] led to a proposal for an evolutionary scheme that simultaneously ran different optimization methods on a given problem, with some communication built in amongst the updates produced by the different methods. We herein similarly aim at combining a few of the basic ideas for global search used in some well-known optimization schemes under a single unified framework underpinned by a sound probabilistic basis.
In a way to be explained in the sections to follow, the ideas (or their possible generalizations) behind some of the existing optimization methods may often be readily accommodated in the present setting, COMBEO, simply by tweaking the innovation process, thereby incorporating the best practices of some of the available stochastic search methods.
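The sketch below is not the COMBEO algorithm developed in this chapter; it is only a rough, assumption-laden illustration of the general idea just described: an ensemble of candidate solutions is moved by a derivative-free, innovation-like term whose direction is obtained from sample statistical moments (here a cross-covariance based gain in the spirit of ensemble Kalman-type updates), with an additive random perturbation retained for exploration. The cost function, gain construction, target choice and noise level are all choices made purely for illustration.

import numpy as np

def cost(x):
    # A simple multimodal cost used only for this demonstration.
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def innovation_guided_step(X, target, noise, rng):
    # One derivative-free ensemble update driven by an 'innovation'.
    # X : (N, d) ensemble of candidate solutions; target : desired cost level.
    J = np.array([cost(x) for x in X])                                 # (N,) current costs
    dX = X - X.mean(axis=0)                                            # particle deviations
    dJ = J - J.mean()                                                  # cost deviations
    gain = (dX * dJ[:, None]).mean(axis=0) / (np.mean(dJ**2) + 1e-12)  # cross-covariance / variance
    innovation = target - J                                            # mismatch with the target cost
    X = X + innovation[:, None] * gain[None, :]                        # directional, gradient-free move
    return X + noise * rng.standard_normal(X.shape)                    # perturbation to keep exploring

rng = np.random.default_rng(0)
X = rng.uniform(-4.0, 4.0, size=(50, 2))                               # initial ensemble in 2 dimensions
for _ in range(200):
    best = min(cost(x) for x in X)                                     # use the current best cost as target
    X = innovation_guided_step(X, target=best, noise=0.05, rng=rng)
print("best candidate found:", X[np.argmin([cost(x) for x in X])])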
Random walks are fundamental models in probability theory that exhibit deep mathematical properties and enjoy broad application across the sciences and beyond. Generally speaking, a random walk is a stochastic process modelling the random motion of a particle (or random walker) in space. The particle's trajectory is described by a series of random increments or jumps at discrete instants in time. Central questions for these models involve the long-time asymptotic behaviour of the walker.
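As a small, purely illustrative sketch (not part of the text), the following simulates a simple symmetric random walk on the d-dimensional integer lattice and reports how far the walker has travelled after a fixed number of steps, the kind of long-time quantity alluded to above. The step count and dimension are arbitrary choices.

import numpy as np

def simple_random_walk(steps, d=2, seed=0):
    # Simple symmetric random walk on the d-dimensional integer lattice:
    # at each discrete time the walker jumps to one of the 2d neighbours
    # with equal probability.
    rng = np.random.default_rng(seed)
    axes = rng.integers(0, d, size=steps)        # which coordinate to move along
    signs = rng.choice([-1, 1], size=steps)      # direction of the jump
    increments = np.zeros((steps, d), dtype=int)
    increments[np.arange(steps), axes] = signs
    return np.vstack([np.zeros(d, dtype=int), np.cumsum(increments, axis=0)])

path = simple_random_walk(10_000)
# A classical long-time question: how far from the start is the walker after n steps?
print("end-to-end distance after 10000 steps:", np.linalg.norm(path[-1]))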
Random walks have a rich history involving several disciplines. Classical one-dimensional random walks were first studied several hundred years ago as models for games of chance, such as the so-called gambler's ruin problem. Similar reasoning led to random walk models of stock prices described by Jules Regnault in his 1863 book [265] and Louis Bachelier in his 1900 thesis [14]. Many-dimensional random walks were first studied at around the same time, arising from the work of pioneers of science in diverse applications such as acoustics (Lord Rayleigh's theory of sound developed from about 1880 [264]), biology (Karl Pearson's 1906 [254] theory of random migration of species), and statistical physics (Einstein's theory of Brownian motion developed during 1905–8 [86]). The mathematical importance of the random walk problem became clear after Pólya's work in the 1920s, and over the last 60 years or so there have emerged beautiful connections linking random walk theory and other influential areas of mathematics, such as harmonic analysis, potential theory, combinatorics, and spectral theory. Random walk models have continued to find new and important applications in many highly active domains of modern science: see for example the wide range of articles in [287]. Specific recent developments include modelling of microbe locomotion in microbiology [23, 245], polymer conformation in molecular chemistry [15, 202], and financial systems in economics.
Spatially homogeneous random walks are the subject of a substantial literature, including [139, 195, 269, 293]. In many modelling applications, the classical assumption of spatial homogeneity is not realistic: the behaviour of the random walker may depend on the present location in space.
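The following toy sketch (again an illustration invented here, not taken from the text) shows one way spatial non-homogeneity can enter a model: a walk on the real line whose mean jump depends on the walker's current position, pulling it back towards the origin with a strength that decays with distance. The drift form and parameters are arbitrary.

import numpy as np

def non_homogeneous_walk(steps, c=0.5, seed=1):
    # Walk on the real line whose mean increment depends on the current
    # location: the further the walker is from the origin, the weaker the
    # restoring drift (roughly -c / x), so the walk is not spatially homogeneous.
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = 10.0
    for n in range(steps):
        drift = -c / x[n] if x[n] != 0.0 else 0.0          # location-dependent mean jump
        x[n + 1] = x[n] + drift + rng.standard_normal()    # drift plus a random increment
    return x

print("position after 1000 steps:", non_homogeneous_walk(1000)[-1])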