This chapter covers the basics from real analysis, linear algebra, and the theory of computation that are foundational for the rest of the book. A careful discussion of different models of computation addresses several issues that are often ignored in other presentations of optimization theory and algorithms.
A careful exposition of the conceptual underpinnings of algorithmic or computational optimization is presented. Computation in continuous optimization has its origins in the traditions of scientific computing and numerical analysis, whereas discrete optimization broadly views computation via the Turing machine model. The different views lead to some friction. In the continuous world, one often designs algorithms assuming one can perform exact operations with real numbers (consider, for example, Newton’s method), which is impossible in the Turing machine model. In the discrete world, what constitutes the “input” to a Turing machine becomes a tricky question when dealing with general nonlinear functions and sets. The “complexity” of an optimization algorithm is also treated in somewhat different ways in the two communities. This chapter, combined with the careful discussion of computation models in Chapter 1, shows how all these issues can be handled in a unified, coherent way, making no distinction between “continuous” and “discrete” optimization.
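To illustrate the gap between idealized real-number computation and what a machine actually performs, here is a minimal sketch of our own (not drawn from the book) of Newton’s method for f(x) = x² − 2, run in ordinary double-precision floating point rather than with exact reals:

```python
# Newton's method for f(x) = x^2 - 2 (root: sqrt(2)). The idealized
# analysis assumes exact real arithmetic; an actual implementation
# works in finite-precision floating point, as here.

def newton_sqrt2(x0: float, iters: int = 8) -> float:
    """Iterate x <- x - f(x)/f'(x) with f(x) = x^2 - 2."""
    x = x0
    for _ in range(iters):
        x = x - (x * x - 2.0) / (2.0 * x)
    return x

root = newton_sqrt2(1.0)
print(root)  # agrees with sqrt(2) up to machine precision
```

The iterates converge quadratically, but each step incurs rounding error, which is precisely the kind of issue the Turing machine model forces one to confront.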
This chapter deals with the important question of certifying optimality of a solution to a mixed-integer convex optimization problem. The classical duality theory for continuous optimization, including Lagrangian relaxations, KKT and general optimality conditions, and Slater-type conditions for strong duality, is rigorously covered in complete detail. Recent work on duality for mixed-integer convex optimization is succinctly summarized.
In the early part of the 20th century, Hermann Minkowski developed a novel geometric approach to several questions in number theory. This approach developed into a field called the geometry of numbers, which has influenced areas outside number theory as well, particularly functional analysis and the study of Banach spaces and, more recently, cryptography and discrete optimization. This chapter covers those aspects of the geometry of numbers that are most relevant for the second part of the book on optimization. Topics include the basic theory of lattices (including Minkowski’s convex body theorem), packing and covering radii, shortest and closest lattice vector problems (SVPs and CVPs), Dirichlet-Voronoi cells, Khinchine’s flatness theorem, and maximal lattice-free convex sets. Several topics, such as lattice basis reduction and SVP/CVP algorithms, are presented without the rationality assumption that is common in other expositions. This gives a slightly more general perspective that contains the rational setting as a special case.
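As a small, self-contained illustration of Minkowski’s convex body theorem (an example of our own, not from the book): a centrally symmetric convex set in the plane with area greater than 2² = 4 must contain a nonzero point of the lattice Z². A brute-force search confirms this for a disk:

```python
import math

def nonzero_lattice_point_in_disk(r: float):
    """Search for a nonzero point of Z^2 inside the closed disk of radius r.
    Minkowski's convex body theorem guarantees one exists whenever the
    disk's area pi*r^2 exceeds 2^2 = 4 (the disk is convex and
    symmetric about the origin)."""
    R = math.floor(r)
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            if (x, y) != (0, 0) and x * x + y * y <= r * r:
                return (x, y)
    return None

r = 1.2  # area ~ 4.52 > 4, so a nonzero lattice point must exist
print(nonzero_lattice_point_in_disk(r))
```

Note that r = 1.2 is only slightly above the threshold 2/√π ≈ 1.128; for any smaller radius with area below 4 the search can come up empty, which is what makes the theorem sharp.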
This chapter introduces the concept of a convex function and develops the basic theory of convex functions. Standard continuity and differentiability properties are established. Fundamental notions like subgradients and subdifferentials are introduced and their properties are investigated in detail. Sublinear functions receive particular focus, given their recent importance in optimization theory and practice. Some new results on sublinear functions that have never before appeared outside specialized research articles are presented with clean, textbook-style proofs. Elementary Brunn-Minkowski theory is covered, including important consequences like the concavity principle and the Rogers-Shephard inequality.
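To make the notion of a subgradient concrete, here is an illustrative sketch with made-up data (not an example from the text): for a pointwise maximum of affine functions, the slope of any affine piece active at a point is a subgradient there, and the defining subgradient inequality can be checked numerically:

```python
# A subgradient of f(x) = max_i (a_i*x + b_i) at a point x is the slope
# a_i of any affine piece active at x. We verify the subgradient
# inequality f(y) >= f(x) + g*(y - x) on a grid of points y.

pieces = [(-1.0, 0.0), (1.0, 0.0), (0.5, 0.25)]  # (slope, intercept) pairs

def f(x: float) -> float:
    return max(a * x + b for a, b in pieces)

def a_subgradient(x: float) -> float:
    """Return the slope of one affine piece attaining the max at x."""
    val = f(x)
    for a, b in pieces:
        if abs(a * x + b - val) < 1e-12:  # this piece is active at x
            return a
    raise RuntimeError("no active piece found")

x = 0.7
g = a_subgradient(x)
assert all(f(y) >= f(x) + g * (y - x) - 1e-9
           for y in [i / 10 for i in range(-30, 31)])
print("subgradient inequality holds at x =", x, "with g =", g)
```

At a kink of f, several pieces are active simultaneously, and every one of their slopes (as well as any convex combination) is a subgradient; this is exactly the subdifferential being set-valued.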
This chapter introduces the fundamental notion of a convex set. It establishes basic structural properties of convex sets, illustrated via examples throughout. The chapter gives equal emphasis to the analytic and the discrete or combinatorial aspects of convexity. Topics include foundational results like the Separating and Supporting Hyperplane theorems, polarity, the combinatorial theorems of Carathéodory, Radon and Helly, and the basic theory of polyhedra and ellipsoids.
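As a hedged illustration of Carathéodory’s theorem in the plane (our own toy example, not taken from the book): any point in the convex hull of a finite set lies in the convex hull of at most three of the points, and a brute-force search over triples finds such a witness:

```python
from itertools import combinations

def in_triangle(p, a, b, c, eps=1e-9):
    """Check whether p lies in conv{a, b, c} via barycentric coordinates."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if abs(det) < eps:
        return False  # degenerate (collinear) triple; skip it
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - ay)) / det
    l3 = 1.0 - l1 - l2
    return min(l1, l2, l3) >= -eps

points = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1)]
p = (1.5, 1.5)  # a point in conv(points)
# Caratheodory in R^2: some triple of the five points already contains p.
witness = next(t for t in combinations(points, 3) if in_triangle(p, *t))
print(witness)
```

The exhaustive search is only for illustration; the theorem guarantees a witness of size at most n + 1 in R^n regardless of how many points the hull is generated by.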
This chapter presents the theory of mixed-integer convex optimization, i.e., minimizing a convex function subject to convex constraints where some of the decision variables have to take integer values. State-of-the-art results on information and algorithmic complexity of mixed-integer convex optimization are established. The basics of continuous convex optimization are presented as the special case where no variable is integer constrained.
Information complexity of classical continuous optimization has been well understood since the 1970s. The information complexity in the presence of integer variables was not well understood until research done in the past decade, which is covered in complete detail here. On the algorithmic side, the best known upper bound of $2^{n\log(n)}$ on the complexity of deterministic algorithms for convex integer optimization is presented, which does not appear outside specialized, technical research articles. Moreover, a general mixed-integer complexity bound allowing for both integer and continuous variables is presented that does not explicitly appear anywhere in the literature. A complete theory of branch-and-cut methods is also covered.
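To give a flavor of the branch-and-bound backbone of such methods, here is a deliberately simplified sketch of our own (not the book’s algorithm, which also adds cutting planes): minimize a one-dimensional convex function over the integers in an interval, using continuous relaxations for lower bounds and branching on a fractional minimizer:

```python
# Toy branch-and-bound for min f(x) over integers in [lo, hi], f convex.
# The continuous minimizer yields a lower bound; if it is fractional we
# branch into x <= floor(x*) and x >= ceil(x*). Branch-and-cut methods
# additionally tighten relaxations with cutting planes.
import math

def argmin_continuous(f, lo, hi, tol=1e-9):
    """Ternary search for the minimizer of a convex f on [lo, hi]."""
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def branch_and_bound(f, lo, hi, best=(math.inf, None)):
    if lo > hi:
        return best
    x_rel = argmin_continuous(f, lo, hi)
    if f(x_rel) >= best[0]:
        return best  # prune: relaxation bound no better than incumbent
    x_round = round(x_rel)
    if lo <= x_round <= hi and f(x_round) < best[0]:
        best = (f(x_round), x_round)  # rounding heuristic for an incumbent
    if abs(x_rel - x_round) > 1e-6:  # fractional minimizer: branch
        best = branch_and_bound(f, lo, math.floor(x_rel), best)
        best = branch_and_bound(f, math.ceil(x_rel), hi, best)
    return best

print(branch_and_bound(lambda x: (x - 2.6) ** 2, -10, 10))
```

Even in this one-variable toy, the two essential ingredients are visible: dual (lower) bounds from a convex relaxation and primal (upper) bounds from feasible integer points, with branching closing the gap.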
Using a pedagogical, unified approach, this book presents both the analytic and combinatorial aspects of convexity and its applications in optimization. On the structural side, this is done via an exposition of classical convex analysis and geometry, along with polyhedral theory and geometry of numbers. On the algorithmic/optimization side, this is done by the first ever exposition of the theory of general mixed-integer convex optimization in a textbook setting. Classical continuous convex optimization and pure integer convex optimization are presented as special cases, without compromising on the depth of either of these areas. For this purpose, several new developments from the past decade are presented for the first time outside technical research articles: discrete Helly numbers, new insights into sublinear functions, and best known bounds on the information and algorithmic complexity of mixed-integer convex optimization. Pedagogical explanations and more than 300 exercises make this book ideal for students and researchers.