In the previous chapters, we dealt with general problems by first formulating all necessary constraints and then passing the problem to an LO or MILO solver; in doing so, we were largely oblivious to the problem’s structure. However, it is often advantageous to analyze this structure, as it can enable us to find better solution methods. In this chapter, we consider a very general class of problems with special structure: network problems. In the following example, we illustrate the key ideas.
In Chapter 5, we claimed that the watershed between easy and difficult problems is their convexity status. Convex optimization problems are, however, a very broad class, and one of their downsides is that the dual problem is not always readily available; see the discussion in Section 5.3. In view of the computational benefits of concurrently solving the primal and dual problems, a natural question arises: Is there a subclass of convex optimization problems that is expressive enough to model relevant real-life problems and, at the same time, allows for a systematic derivation of the dual akin to linear optimization?
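For reference, the systematic primal–dual correspondence alluded to here is the one familiar from linear optimization; a standard, illustrative primal–dual pair (not specific to this chapter) reads

\[
\min_{x \ge 0} \{\, c^\top x : A x \ge b \,\}
\qquad \text{and} \qquad
\max_{y \ge 0} \{\, b^\top y : A^\top y \le c \,\},
\]

where weak duality always holds, and strong duality holds whenever one of the two problems is feasible and bounded.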
The particular feature of linear optimization problems is that, as long as the decision variables satisfy all the constraints, they can take any value. However, there are many situations in which it makes sense to restrict the solution space in a way that cannot be expressed using linear (in)equality constraints. For example, some numbers might need to be integers, such as the number of people to be assigned to a task. Another situation is when certain constraints need to hold only if another constraint holds. For example, the amount of power generated by a power plant must be at least a certain minimum threshold, but only if that plant is turned on. Neither of these two examples can be expressed using only linear constraints, as we have seen up to this point. In these cases, it is often still possible to formulate the problem as an LO problem, although some additional restrictions may be needed on certain variables, requiring them to take integer values only. We will refer to this type of LO problem, in which some variables are constrained to be integers, as a mixed-integer linear optimization (MILO) problem.
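As an illustration of the on/off requirement just mentioned, a standard MILO formulation (with illustrative symbols, not necessarily the notation used later) introduces a binary variable $y$ indicating whether the plant is on, alongside the output level $p$:

\[
P^{\min}\, y \;\le\; p \;\le\; P^{\max}\, y, \qquad y \in \{0,1\},
\]

so that $p = 0$ when the plant is off ($y = 0$) and $P^{\min} \le p \le P^{\max}$ when it is on ($y = 1$); the integrality restriction on $y$ is precisely what takes the model outside plain LO.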
The simplest and most scalable type of optimization problem is one in which the objective function and constraints are formulated using the simplest type of functions – linear functions. We refer to this class of problems as linear optimization (LO) problems.
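In generic form (illustrative notation), an LO problem minimizes a linear function subject to linear constraints:

\[
\min_{x \in \mathbb{R}^n} \; c^\top x \quad \text{subject to} \quad A x \le b,
\]

with $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$; equality and $\ge$ constraints fit the same template after simple rewriting.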
This chapter introduces robust optimization (RO), in which we aim to solve a MILO whose parameters (data) can take multiple, possibly infinitely many, values, and we want the optimal solution to perform “the best possible” under the assumption that the uncertain parameters can always turn out to be “the worst possible.”
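To make the worst-case idea concrete, consider a single linear constraint $a^\top x \le b$ whose coefficient vector $a$ is only known to lie in an uncertainty set $\mathcal{U}$ (a sketch with illustrative notation, not the chapter’s own example). For a box uncertainty set $\mathcal{U} = \{\bar a + \delta : \|\delta\|_\infty \le \rho\}$, requiring the constraint to hold for every $a \in \mathcal{U}$ is equivalent to the single deterministic constraint

\[
\bar a^\top x + \rho \, \|x\|_1 \;\le\; b,
\]

since the worst-case choice of $\delta$ pushes each coefficient by $\rho$ in the direction of the sign of the corresponding $x_j$.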
Our aim so far has been to formulate a real-life decision problem as a (mixed integer) linear optimization problem. The reason was clear: Linear functions are simple, so problems formulated with them should also be simple. However, two questions arise. First, is the world of linear functions flexible enough to model all real-life problems? Second, are MILOs the only simple problems we can solve quickly?
Using a pedagogical, unified approach, this book presents both the analytic and combinatorial aspects of convexity and its applications in optimization. On the structural side, this is done via an exposition of classical convex analysis and geometry, along with polyhedral theory and geometry of numbers. On the algorithmic/optimization side, this is done by the first ever exposition of the theory of general mixed-integer convex optimization in a textbook setting. Classical continuous convex optimization and pure integer convex optimization are presented as special cases, without compromising on the depth of either of these areas. For this purpose, several new developments from the past decade are presented for the first time outside technical research articles: discrete Helly numbers, new insights into sublinear functions, and best known bounds on the information and algorithmic complexity of mixed-integer convex optimization. Pedagogical explanations and more than 300 exercises make this book ideal for students and researchers.
Dive into the foundations of intelligent systems, machine learning, and control with this hands-on, project-based introductory textbook. Precise, clear introductions to core topics in fuzzy logic, neural networks, optimization, deep learning, and machine learning avoid the use of complex mathematical proofs and are supported by over 70 examples. Modular chapters built around a consistent learning framework enable tailored course offerings to suit different learning paths. Over 180 open-ended review questions support self-review and class discussion, over 120 end-of-chapter problems cement student understanding, and over 20 hands-on Arduino assignments connect theory to practice, supported by downloadable Matlab and Simulink code. Comprehensive appendices review the fundamentals of modern control and contain practical information on implementing hands-on assignments using Matlab, Simulink, and Arduino. Accompanied by solutions for instructors, this is the ideal guide for senior undergraduate and graduate engineering students and professional engineers looking for an engaging and practical introduction to the field.
In Chapter 5, we look at approaches that belong to the class of heuristic algorithms. These methods are derived from observations of nature. In motivating specific heuristic optimization algorithms, we discuss local search and hill climbing. One outcome of this discussion is the argument for avoiding cycling during a search; Tabu search optimization is built on this premise. An entirely different class of heuristic optimization algorithms is given by Particle Swarm optimization (PSO) and Ant Colony (AC) optimization. In contrast to Tabu search and local search, the PSO and AC optimization algorithms utilize a number of agents in order to search for optimality. Another multi-agent-based algorithm is the Genetic algorithm (GA). GAs are inspired by Darwin’s survival-of-the-fittest principle and use the terminology found in the field of genetics. Additionally, in this chapter we use heuristic optimization to formulate optimal control concepts, including hybrid control using fuzzy logic-based controllers, and provide Matlab scripts to realize each of the heuristic optimization algorithms.
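As a rough illustration of the cycling-avoidance idea behind Tabu search, the following is a minimal Python sketch under simplifying assumptions (it is not the Matlab scripts accompanying the chapter; the objective function and bit-flip neighborhood are placeholders chosen only to make the example self-contained):

```python
# Minimal tabu-search sketch: local search over bit vectors with a short-term
# tabu list that discourages returning to recently visited solutions.
import random

def f(x):
    # Placeholder objective: number of mismatches with an arbitrary target pattern.
    target = (1, 0, 1, 1, 0, 1, 0, 0)
    return sum(a != b for a, b in zip(x, target))

def neighbors(x):
    # All solutions reachable by flipping a single bit.
    return [tuple(1 - b if i == j else b for j, b in enumerate(x)) for i in range(len(x))]

def tabu_search(n_bits=8, iters=100, tabu_len=5, seed=0):
    random.seed(seed)
    current = tuple(random.randint(0, 1) for _ in range(n_bits))
    best, best_val = current, f(current)
    tabu = [current]                      # short-term memory of visited solutions
    for _ in range(iters):
        # Move to the best non-tabu neighbor, even if it is worse than the
        # current solution: this is what lets the search leave local optima
        # without immediately cycling back.
        candidates = [x for x in neighbors(current) if x not in tabu]
        if not candidates:
            break
        current = min(candidates, key=f)
        tabu.append(current)
        tabu = tabu[-tabu_len:]           # keep only the most recent entries
        if f(current) < best_val:
            best, best_val = current, f(current)
    return best, best_val

print(tabu_search())
```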