Convex optimization problems arise frequently in many different fields. This book provides a comprehensive introduction to the subject, and shows in detail how such problems can be solved numerically with great efficiency. The book begins with the basic elements of convex sets and functions, and then describes various classes of convex optimization problems. Duality and approximation techniques are then covered, as are statistical estimation techniques. Various geometrical problems are then presented, and there is detailed discussion of unconstrained and constrained minimization problems, and interior-point methods. The focus of the book is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. It contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance and economics.
This collection of papers presents a series of in-depth examinations of a variety of advanced topics related to Boolean functions and expressions. The chapters are written by some of the most prominent experts in their respective fields and cover topics ranging from algebra and propositional logic to learning theory, cryptography, computational complexity, electrical engineering, and reliability theory. Beyond the diversity of the questions raised and investigated in different chapters, a remarkable feature of the collection is the common thread created by the fundamental language, concepts, models, and tools provided by Boolean theory. Many readers will be surprised to discover the countless links between seemingly remote topics discussed in various chapters of the book. This text will help them draw on such connections to further their understanding of their own scientific discipline and to explore new avenues for research.
This introduction to decision theory offers comprehensive and accessible discussions of decision-making under ignorance and risk, the foundations of utility theory, the debate over subjective and objective probability, Bayesianism, causal decision theory, game theory, and social choice theory. No mathematical skills are assumed, and all concepts and results are explained in non-technical and intuitive as well as more formal ways. There are over 100 exercises with solutions, and a glossary of key terms and concepts. An emphasis on foundational aspects of normative decision theory (rather than descriptive decision theory) makes the book particularly useful for philosophy students, but it will appeal to readers in a range of disciplines including economics, psychology, political science and computer science.
This book is on existence and necessary conditions, such as Pontryagin's maximum principle, for optimal control problems described by ordinary and partial differential equations. These necessary conditions are obtained from Kuhn–Tucker theorems for nonlinear programming problems in infinite dimensional spaces. The optimal control problems include control constraints, state constraints and target conditions. Evolution partial differential equations are studied using semigroup theory, abstract differential equations in linear spaces, integral equations and interpolation theory. Existence of optimal controls is established for arbitrary control sets by means of a general theory of relaxed controls. Applications include nonlinear systems described by partial differential equations of hyperbolic and parabolic type and results on convergence of suboptimal controls.
For many applications a randomized algorithm is either the simplest algorithm available, or the fastest, or both. This tutorial presents the basic concepts in the design and analysis of randomized algorithms. The first part of the book presents tools from probability theory and probabilistic analysis that are recurrent in algorithmic applications. Algorithmic examples are given to illustrate the use of each tool in a concrete setting. In the second part of the book, each of the seven chapters focuses on one important area of application of randomized algorithms: data structures; geometric algorithms; graph algorithms; number theory; enumeration; parallel algorithms; and on-line algorithms. A comprehensive and representative selection of the algorithms in these areas is also given. This book should prove invaluable as a reference for researchers and professional programmers, as well as for students.
This first book on greedy approximation gives a systematic presentation of the fundamental results. It also contains an introduction to two hot topics in numerical mathematics: learning theory and compressed sensing. Nonlinear approximation is becoming increasingly important, especially since two types are frequently employed in applications: adaptive methods are used in PDE solvers, while m-term approximation is used in image/signal/data processing, as well as in the design of neural networks. The fundamental question of nonlinear approximation is how to devise good constructive methods (algorithms) and recent results have established that greedy type algorithms may be the solution. The author has drawn on his own teaching experience to write a book ideally suited to graduate courses. The reader does not require a broad background to understand the material. Important open problems are included to give students and professionals alike ideas for further research.
Probabilistic risk analysis aims to quantify the risk caused by high technology installations. Increasingly, such analyses are being applied to a wider class of systems in which problems such as lack of data, complexity of the systems, and uncertainty about consequences make a classical statistical analysis difficult or impossible. The authors discuss the fundamental notion of uncertainty, its relationship with probability, and the limits to the quantification of uncertainty. Drawing on extensive experience in the theory and applications of risk analysis, the authors focus on the conceptual and mathematical foundations underlying the quantification, interpretation and management of risk. They cover standard topics as well as important new subjects such as the use of expert judgement and uncertainty propagation. The relationship of risk analysis with decision making is highlighted in chapters on influence diagrams and decision theory. Finally, the difficulties of choosing metrics to quantify risk, and current regulatory frameworks, are discussed.
Quantitative risk assessments cannot eliminate risk, nor can they resolve trade-offs. They can, however, guide principled risk management and reduction - if the quality of assessment is high and decision makers understand how to use it. This book builds a unifying scientific framework for discussing and evaluating the quality of risk assessments and whether they are fit for purpose. Uncertainty is a central topic. In practice, uncertainties about inputs are rarely reflected in assessments, with the result that many safety measures are considered unjustified. Other topics include the meaning of a probability, the use of probability models, the use of Bayesian ideas and techniques, and the use of risk assessment in a practical decision-making context. Written for professionals, as well as graduate students and researchers, the book assumes basic probability, statistics and risk assessment methods. Examples make concepts concrete, and three extended case studies show the scientific framework in action.
The search for symmetry is part of the fundamental scientific paradigm in mathematics and physics. Can this also be valid for economics? This book represents an attempt to explore this possibility. The behavior of price-taking producers, monopolists, monopsonists, sectoral market equilibria, behavior under risk and uncertainty, and two-person zero- and non-zero-sum games are analyzed and discussed under the unifying structure called the linear complementarity problem. Furthermore, the equilibrium problem allows for the relaxation of often-stated but unnecessary assumptions. This unifying approach offers the advantage of a better understanding of the structure of economic models. It also introduces the simplest and most elegant algorithm for solving a wide class of problems.
Prospect Theory: For Risk and Ambiguity provides a comprehensive and accessible textbook treatment of the way decisions are made both when we have the statistical probabilities associated with uncertain future events (risk) and when we lack them (ambiguity). The book presents models, primarily prospect theory, that are both tractable and psychologically realistic. A method of presentation is chosen that makes the empirical meaning of each theoretical model completely transparent. Prospect theory has many applications in a wide variety of disciplines. The material in the book has been carefully organized to allow readers to select pathways through the book relevant to their own interests. With numerous exercises and worked examples, the book is ideally suited to the needs of students taking courses in decision theory in economics, mathematics, finance, psychology, management science, health, computer science, Bayesian statistics, and engineering.
At a time when corporate scandals and major financial failures dominate newspaper headlines, the importance of good risk management practices has never been more obvious. The absence or mismanagement of such practices can have devastating effects on exposed organizations and the wider economy (Barings Bank, Enron, Lehman Brothers, Northern Rock, to name but a few). Today's organizations and corporate leaders must learn the lessons of such failures by developing practices to deal effectively with risk. This book is an important step towards this end. Written from a European perspective, it brings together ideas, concepts and practices developed in various risk markets and academic fields to provide a much-needed overview of different approaches to risk management. It critiques prevailing enterprise risk management frameworks (ERMs) and proposes a suitable alternative. Combining academic rigour and practical experience, this is an important resource for graduate students and professionals concerned with strategic risk management.
Shimon Even's Graph Algorithms, published in 1979, was a seminal introductory book on algorithms read by everyone engaged in the field. This thoroughly revised second edition, with a foreword by Richard M. Karp and notes by Andrew V. Goldberg, continues the exceptional presentation from the first edition and explains algorithms in a formal but simple language with a direct and intuitive presentation. The book begins by covering basic material, including graphs and shortest paths, trees, depth-first search and breadth-first search. The main part of the book is devoted to network flows and applications of network flows, and it ends with chapters on planar graphs and testing graph planarity.
Bayesian decision analysis supports principled decision making in complex domains. This textbook takes the reader from a formal analysis of simple decision problems to a careful analysis of the sometimes very complex and data rich structures confronted by practitioners. The book contains basic material on subjective probability theory and multi-attribute utility theory, event and decision trees, Bayesian networks, influence diagrams and causal Bayesian networks. The author demonstrates when and how the theory can be successfully applied to a given decision problem, how data can be sampled and expert judgements elicited to support this analysis, and when and how an effective Bayesian decision analysis can be implemented. Evolving from a third-year undergraduate course taught by the author over many years, all of the material in this book will be accessible to a student who has completed introductory courses in probability and mathematical statistics.
With the advent of approximation algorithms for NP-hard combinatorial optimization problems, several techniques from exact optimization such as the primal-dual method have proven their staying power and versatility. This book describes a simple and powerful method that is iterative in essence and similarly useful in a variety of settings for exact and approximate optimization. The authors highlight the commonality and uses of this method to prove a variety of classical polyhedral results on matchings, trees, matroids and flows. The presentation style is elementary enough to be accessible to anyone with exposure to basic linear algebra and graph theory, making the book suitable for introductory courses in combinatorial optimization at the upper undergraduate and beginning graduate levels. Discussions of advanced applications illustrate their potential for future application in research in approximation algorithms.
To better understand the core concepts of probability and to see how they affect real-world decisions about design and system performance, engineers and scientists might want to ask themselves the following questions: what exactly is meant by probability? What is the precise definition of the 100-year load and how is it calculated? What is an 'extremal' probability distribution? What is the Bayesian approach? How is utility defined? How do games fit into probability theory? What is entropy? How do I apply these ideas in risk analysis? Starting from the most basic assumptions, this 2005 book develops a coherent theory of probability and broadens it into applications in decision theory, design, and risk analysis. This book is written for engineers and scientists interested in probability and risk. It can be used by undergraduates, graduate students, or practicing engineers.
This book provides a solid foundation and an extensive study for an important class of constrained optimization problems known as Mathematical Programs with Equilibrium Constraints (MPEC), which are extensions of bilevel optimization problems. The book begins with the description of many source problems arising from engineering and economics that are amenable to treatment by the MPEC methodology. Error bounds and parametric analysis are the main tools to establish a theory of exact penalisation, a set of MPEC constraint qualifications and the first-order and second-order optimality conditions. The book also describes several iterative algorithms such as a penalty-based interior point algorithm, an implicit programming algorithm and a piecewise sequential quadratic programming algorithm for MPECs. Results in the book are expected to have significant impacts in such disciplines as engineering design, economics and game equilibria, and transportation planning, within all of which MPEC has a central role to play in the modelling of many practical problems.
The depth-first search (DFS) technique is a method of scanning a finite, undirected graph. Since the publication of the papers of Hopcroft and Tarjan [4, 6], DFS has been widely recognized as a powerful technique for solving various graph problems. However, the algorithm has been known since the nineteenth century as a technique for threading mazes. See, for example, Lucas' report of Trémaux's work [5]. Another algorithm, which was suggested later by Tarry [7], is just as good for threading mazes, and in fact, DFS is a special case of it. But the additional structure of DFS is what makes the technique so useful.
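As a rough illustration of the technique, and not the book's own formulation, a depth-first scan of a finite, undirected graph can be sketched as follows; the adjacency-list dictionary and the name depth_first_search are assumptions made for this example.

    def depth_first_search(graph, start):
        """Scan every vertex reachable from `start` in an undirected graph.

        `graph` is assumed to map each vertex to a list of its neighbours
        (an adjacency list); both the representation and the function name
        are illustrative choices, not the book's notation.
        """
        visited = set()

        def visit(v):
            visited.add(v)
            for u in graph[v]:
                if u not in visited:   # follow an edge into unexplored territory
                    visit(u)           # exhaust u's edges before returning to v

        visit(start)
        return visited

The recursion embodies the structure that makes DFS useful: from each newly reached vertex, the scan first exhausts the unexplored edges there before backing up.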
Trémaux's Algorithm
Assume one is given a finite, connected graph G(V,E), which we will also refer to as the maze. Starting in one of the vertices, one wants to “walk” along the edges, from vertex to vertex, visit all vertices, and halt. We seek an algorithm that will guarantee that the whole graph will be scanned without wandering too long in the maze, and that the procedure will allow one to recognize when the task is done. However, before one starts walking in the maze, one does not know anything about its structure, and therefore, no preplanning is possible. So, decisions about where to go next must be made one by one as one goes along.
We will use “markers,” which will be placed in the maze to help one to recognize that one has returned to a place visited earlier and to make later decisions on where to go next.
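To make the use of markers concrete, here is a minimal sketch under assumptions not made in the text: the maze is given as an adjacency-list dictionary, markers are recorded in sets rather than placed physically, and the walker retreats along the edge by which it first entered a vertex (the DFS specialisation mentioned above) rather than following Trémaux's original rules.

    def scan_maze(graph, start):
        """Walk the maze edge by edge until every reachable vertex is visited.

        `visited` plays the role of vertex markers, `used` records the
        passages (directed edge traversals) already taken, and `entered_by`
        remembers the edge by which each vertex was first entered, so the
        walker knows where to retreat to.  The adjacency-list representation
        is an illustrative assumption.
        """
        visited = {start}
        used = set()                      # ordered pairs (v, u) already walked
        entered_by = {start: None}
        v = start
        while True:
            exits = [u for u in graph[v] if (v, u) not in used]
            if exits:
                u = exits[0]
                used.add((v, u))
                if u not in visited:      # a new place: mark it and walk on
                    visited.add(u)
                    entered_by[u] = v
                    v = u
                # otherwise the passage led to a marked place; stay and try another
            elif entered_by[v] is not None:
                v = entered_by[v]         # every exit used: retreat the way we came
            else:
                return visited            # back at the start with nothing left to try

Because every passage is taken at most once in each direction, the walk cannot wander indefinitely, and the task is recognized as done when the walker is back at the start with no unused exits.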
Consider a graph drawn in the plane in such a way that each vertex is represented by a point, each edge is represented by a continuous line connecting the two points that represent its end vertices, and no two lines that represent edges share any points except at their ends. Such a drawing is called a plane graph. If a graph G has a representation in the plane that is a plane graph, then G is said to be planar.
In this chapter, we shall discuss some of the classical work concerning planar graphs. The question of efficiently testing whether a given finite graph is planar is discussed in the next chapter.
Let S be a set of vertices of a non-separable graph G(V,E). Consider the partition of the set V – S into classes, such that two vertices are in the same class if and only if there is a path connecting them that does not use any vertex of S. Each such class K defines a component as follows: the component is a subgraph H(V′,E′), where V′ ⊇ K. In addition, V′ includes all the vertices of S that are connected by an edge, in G, to a vertex of K. Also, E′ contains all edges of G that have at least one end-vertex in K. An edge e = (u, v), where both u and v are in S, defines a singular component ({u, v}, {e}). Clearly, two components share no edges, and the only vertices they can share are elements of S.
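As an illustration only, the components determined by S can be computed by a search of V – S that never enters S; the adjacency-list dictionary, the function name, and the restriction to a simple graph with comparable vertex labels are all choices made for this sketch, not the book's notation.

    def components_with_respect_to(graph, S):
        """Return the components of G(V, E) determined by the vertex set S.

        Each component is returned as a pair (vertex set, edge set), where an
        edge is the frozenset of its two end-vertices.  `graph` maps each
        vertex to a list of its neighbours; a simple graph with comparable
        vertex labels is assumed.
        """
        S = set(S)
        seen = set()
        components = []
        for root in graph:
            if root in S or root in seen:
                continue
            # collect the class K containing root: vertices reachable without using S
            K, stack = set(), [root]
            while stack:
                v = stack.pop()
                if v in K:
                    continue
                K.add(v)
                seen.add(v)
                stack.extend(u for u in graph[v] if u not in S and u not in K)
            attached = {u for v in K for u in graph[v] if u in S}
            edges = {frozenset((v, u)) for v in K for u in graph[v]}
            components.append((K | attached, edges))
        # an edge with both end-vertices in S forms a singular component
        for u in S:
            for v in graph.get(u, []):
                if v in S and u < v:
                    components.append(({u, v}, {frozenset((u, v))}))
        return components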