A careful exposition of the conceptual underpinnings of algorithmic, or computational, optimization is presented. Computation in continuous optimization has its origins in the traditions of scientific computing and numerical analysis, whereas discrete optimization broadly views computation through the Turing machine model. The two views lead to some friction. In the continuous world, one often designs algorithms assuming exact operations with real numbers (consider, for example, Newton’s method), which is impossible in the Turing machine model. In the discrete world, what constitutes the “input” to a Turing machine becomes a tricky question when dealing with general nonlinear functions and sets. The “complexity” of an optimization algorithm is also treated in somewhat different ways by the two communities. This chapter, combined with the careful discussion of computation models in Chapter 1, shows how all these issues can be handled in a unified, coherent way that makes no distinction between “continuous” and “discrete” optimization.
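To make the friction concrete, consider the basic Newton iteration for minimizing a smooth convex function $f$ (stated here only as an illustration; the chapter’s own treatment is more general):
\[
x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k).
\]
Each step requires exact arithmetic over the reals (solving a linear system with real data), an operation with no direct counterpart in the Turing machine model.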
This chapter deals with the important question of certifying optimality of a solution to a mixed-integer convex optimization problem. The classical duality theory for continuous optimization, including Lagrangian relaxations, KKT and general optimality conditions, and Slater-type conditions for strong duality, is covered rigorously and in complete detail. Recent work on duality for mixed-integer convex optimization is succinctly summarized.
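As a reminder of the classical setup (a standard formulation, not necessarily the chapter’s exact notation): for a convex problem $\min\{f(x) : g_i(x) \le 0,\ i = 1,\dots,m\}$, the Lagrangian dual is
\[
\max_{\lambda \ge 0} \;\inf_{x} \; f(x) + \sum_{i=1}^m \lambda_i g_i(x),
\]
whose optimal value always bounds the primal value from below (weak duality); Slater-type conditions, such as the existence of a strictly feasible point, guarantee that the two values coincide (strong duality).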
In the early part of the 20th century, Hermann Minkowski developed a novel geometric approach to several questions in number theory. This approach grew into a field called the geometry of numbers, and it has influenced fields outside number theory as well, particularly functional analysis and the study of Banach spaces, and more recently cryptography and discrete optimization. This chapter covers those aspects of the geometry of numbers that are most relevant for the second part of the book on optimization. Topics include the basic theory of lattices (including Minkowski’s convex body theorem), packing and covering radii, shortest and closest lattice vector problems (SVPs and CVPs), Dirichlet-Voronoi cells, Khinchine’s flatness theorem, and maximal lattice-free convex sets. Several topics, like lattice basis reduction and SVP/CVP algorithms, are presented without the rationality assumption that is common in other expositions. This gives a slightly more general perspective that contains the rational setting as a special case.
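A representative statement, given here in its standard form (which may differ slightly from the book’s): Minkowski’s convex body theorem asserts that if $\Lambda \subseteq \mathbb{R}^n$ is a full-rank lattice and $C \subseteq \mathbb{R}^n$ is a convex set, symmetric about the origin, with
\[
\mathrm{vol}(C) > 2^n \det(\Lambda),
\]
then $C$ contains a nonzero lattice point.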
This chapter introduces the concept of a convex function and develops the basic theory of convex functions. Standard continuity and differentiability properties are established. Fundamental notions like subgradients and subdifferentials are introduced and their properties are investigated in detail. Sublinear functions receive particular focus, given their recent importance in optimization theory and practice. Some new results on sublinear functions that have never before appeared outside specialized research articles are presented with clean, textbook-style proofs. Elementary Brunn-Minkowski theory is covered, including important consequences like the concavity principle and the Rogers-Shephard inequality.
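For orientation, the standard definition: a vector $g$ is a subgradient of a convex function $f$ at $x$ if
\[
f(y) \ge f(x) + \langle g, y - x \rangle \quad \text{for all } y,
\]
and the subdifferential $\partial f(x)$ is the set of all such $g$; at points where $f$ is differentiable it reduces to $\{\nabla f(x)\}$.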
This chapter introduces the fundamental notion of a convex set. It establishes basic structural properties of convex sets, illustrated with examples throughout. The chapter gives equal emphasis to the analytic and the discrete, or combinatorial, aspects of convexity. Topics include foundational results like the Separating and Supporting Hyperplane theorems, polarity, the combinatorial theorems of Carathéodory, Radon, and Helly, and the basic theory of polyhedra and ellipsoids.
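To fix ideas (standard formulation): the Separating Hyperplane theorem states that for disjoint nonempty convex sets $C, D \subseteq \mathbb{R}^n$ there exist $a \ne 0$ and $b \in \mathbb{R}$ with
\[
\langle a, x \rangle \le b \le \langle a, y \rangle \quad \text{for all } x \in C,\ y \in D.
\]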
This chapter presents the theory of mixed-integer convex optimization, i.e., minimizing a convex function subject to convex constraints where some of the decision variables are required to take integer values. State-of-the-art results on information and algorithmic complexity of mixed-integer convex optimization are established. The basics of continuous convex optimization are presented as the special case where no variable is integer-constrained.
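In symbols, the problem class has the form (a generic formulation; the book’s notation may differ):
\[
\min \; f(x) \quad \text{subject to } x \in C, \quad x_i \in \mathbb{Z} \text{ for } i \in I,
\]
where $f$ is convex, $C \subseteq \mathbb{R}^n$ is a convex set, and $I \subseteq \{1,\dots,n\}$ indexes the integer-constrained variables; taking $I = \emptyset$ recovers continuous convex optimization.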
The information complexity of classical continuous optimization has been well understood since the 1970s. Information complexity in the presence of integer variables, by contrast, was not well developed until work of the past decade, which is covered here in complete detail. On the algorithmic side, the best known upper bound of $2^{n\log(n)}$ on the complexity of deterministic algorithms for convex integer optimization is presented; this bound does not appear outside specialized, technical research articles. Moreover, a general mixed-integer complexity bound allowing for both integer and continuous variables is presented that does not explicitly appear anywhere in the literature. A complete theory of branch-and-cut methods is also covered.
Mathematical optimization models are formal tools for finding the best possible solutions to real-life optimization problems. They consist of three parts: decision variables that describe possible solutions, constraints that define conditions these solutions need to satisfy, and an objective function that assigns a value to each solution, expressing how “good” it is.
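A toy illustration of the three parts (hypothetical numbers, for exposition only): a production model with decision variables $x_1, x_2 \ge 0$ giving the quantities of two products, a resource constraint $x_1 + 2x_2 \le 10$, and an objective
\[
\max \; 3x_1 + 5x_2
\]
expressing total profit.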
In all the optimization problems discussed so far, we treated the quantities in the problem description as exact, but in reality they cannot always be trusted to be what we think they are. Uncertainty can negatively affect solutions to an optimization problem in the following forms:
Estimation/forecast errors (increasingly important in an ML-driven world):
– in a production planning problem, future customer demand is a forecast;
– in a vehicle routing problem, travel times along various roads are forecasts updated in real time;
– in a wind farm layout problem, power production levels are based on wind forecasts.
Measurement errors:
– a warehouse manager might have errors in the data records regarding current stock levels;
– the concentration level of a given chemical substance is different from expected.
Implementation errors:
– a given quantity of an ingredient is sent to production in a chemical company, but due to device errors, a slightly smaller amount is actually received;
– electrical power sent to an antenna is subject to the generator’s errors.
In this chapter, compared to Chapter 8, we assume that data or expert knowledge can tell us not only something about the possible values of the problem’s parameters but also about their relative likelihood, that is, their probability distribution.
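Schematically (a generic formulation, not the chapter’s exact notation), this leads to problems of the form
\[
\min_x \; \mathbb{E}_{\xi}\left[ f(x, \xi) \right],
\]
where the expectation is taken over the known or estimated distribution of the uncertain parameter $\xi$.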
We now consider problems in which the situation is not as simple as “first we make the decisions, then we observe the uncertainty and compute the costs.”
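The prototypical example, stated here in its standard two-stage form for orientation, is a problem with recourse: a first-stage decision $x$ is made, the uncertainty $\xi$ is observed, and a second-stage decision $y$ can then react to it:
\[
\min_x \; c^\top x + \mathbb{E}_{\xi}\left[ \min_{y \in Y(x,\xi)} q(\xi)^\top y \right],
\]
where $Y(x,\xi)$ denotes the set of recourse decisions feasible after $x$ has been fixed and $\xi$ realized.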