This chapter first introduces basic concepts in nonlinear optimization, especially feasible regions and convexity conditions. Sufficient conditions are provided for both convex regions and convex functions. Next, optimality conditions are presented for unconstrained optimization problems (stationarity conditions), for constrained problems with equality constraints (stationarity of the Lagrange function), and for problems with inequality constraints (Fritz John theorem). The chapter then addresses nonlinear optimization with both equality and inequality constraints, which leads to the Karush–Kuhn–Tucker conditions. Finally, an active set strategy is introduced for the solution of small nonlinear programming problems and is illustrated with a small example.
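For orientation, the Karush–Kuhn–Tucker conditions referred to above can be stated for a generic problem of minimizing f(x) subject to h(x) = 0 and g(x) ≤ 0 (the notation here is a generic sketch, not necessarily the chapter's):
\[
\nabla f(x^{*}) + \sum_{j} \lambda_{j} \nabla h_{j}(x^{*}) + \sum_{i} \mu_{i} \nabla g_{i}(x^{*}) = 0, \qquad h(x^{*}) = 0, \quad g(x^{*}) \le 0, \quad \mu_{i} \ge 0, \quad \mu_{i}\, g_{i}(x^{*}) = 0 .
\]
The complementarity conditions \( \mu_{i}\, g_{i}(x^{*}) = 0 \) are what an active set strategy exploits: inequalities are tentatively designated active or inactive, and the resulting equality-constrained system is solved.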
This chapter addresses the global optimization of nonconvex NLP and MINLP problems. Convexification transformations are first introduced that allow us to transform a nonconvex NLP into a convex NLP. This is illustrated with geometric programming problems, which involve posynomials and can be convexified with exponential transformations. We next consider the more general solution approach that relies on convex envelopes, which yield rigorous lower bounds on the global optimum and are used in conjunction with a spatial branch-and-bound method. The case of bilinear NLP problems is addressed as a specific example for which the McCormick convex envelopes are derived. The application of the spatial branch-and-bound search, coupled with McCormick envelopes, is illustrated with a small example. The software packages BARON, ANTIGONE, and SCIP are briefly described.
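As a reference sketch of the McCormick construction mentioned above, for a bilinear term w = xy with bounds x^L ≤ x ≤ x^U and y^L ≤ y ≤ y^U (generic symbols), the convex and concave envelopes are
\[
w \ge x^{L} y + x\, y^{L} - x^{L} y^{L}, \qquad w \ge x^{U} y + x\, y^{U} - x^{U} y^{U},
\]
\[
w \le x^{U} y + x\, y^{L} - x^{U} y^{L}, \qquad w \le x^{L} y + x\, y^{U} - x^{L} y^{U}.
\]
Replacing each bilinear term by these linear inequalities gives the relaxation whose solution supplies the lower bound used within the spatial branch-and-bound search.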
The cold neutral medium (CNM) represents gas at temperature T ∼ 80 K and number density n ∼ 40 cm−3, where heating by photoelectrons ejected from dust grains balances cooling by fine-structure line emission from C+. The cold neutral medium is studied by looking at the absorption lines caused by the CNM along the line of sight to bright background stars. Interpreting these absorption lines requires solving the equation of radiative transfer. In particular, the curve of growth for an absorption line yields the relation between the observed equivalent width of a line and the underlying column density of the atom or ion giving rise to the absorption.
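As a point of reference, in the optically thin limit the curve of growth reduces to a linear relation between equivalent width and column density; in cgs units (generic notation, not necessarily the chapter's),
\[
W_{\lambda} \simeq \frac{\pi e^{2}}{m_{e} c^{2}}\, N\, f\, \lambda^{2},
\]
where N is the column density of the absorbing species and f is the oscillator strength of the transition. At larger column densities the line saturates and the growth of W_\lambda with N flattens, which is why the full curve of growth is needed.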
The warm ionized medium (WIM) represents gas at T ∼ 8000 K and n ∼ 0.2 cm−3, where heating by a variety of mechanisms balances cooling by fine-structure line emission from oxygen and Lyman alpha emission from hydrogen. Ionized nebulae, such as H ii regions around hot stars and planetary nebulae around newly unveiled white dwarfs, have temperatures similar to the WIM, but much higher density. Ionized nebulae can be idealized as spherical Strömgren spheres. The physics of an ionized nebula is made more complex (and interesting!) by the presence of helium and “metals.” Emission lines from oxygen, nitrogen, sulfur, and other metals both help to cool an ionized nebula and provide useful diagnostic tools to determine observationally the density and temperature of the nebula.
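A standard reference result for the idealized picture is the Strömgren radius, obtained by balancing the stellar output of ionizing photons against recombinations in the surrounding gas (generic symbols):
\[
R_{S} = \left( \frac{3\, Q_{*}}{4\pi\, n_{H}^{2}\, \alpha_{B}} \right)^{1/3},
\]
where Q_* is the rate of hydrogen-ionizing photons emitted by the star, n_H is the hydrogen number density, and \alpha_B is the case B recombination coefficient.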
The warm neutral medium (WNM) represents gas at T ∼ 6000 K and n ∼ 0.4 cm−3, where heating by photoelectrons from dust grains balances cooling by fine-structure line emission from oxygen. The warm neutral medium is studied by looking at 21 cm emission from the hyperfine transition of the ground state of hydrogen. The upper hyperfine level is excited and de-excited primarily by collisions with gas particles. The relatively rare radiative de-excitations, however, produce 21 cm photons that are a useful diagnostic of neutral hydrogen. All-sky maps of 21 cm intensity (commonly expressed as an “antenna temperature”) can be translated into a map of the column density of neutral hydrogen.
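For optically thin 21 cm emission, the column density follows directly from the velocity-integrated brightness temperature; a commonly quoted form (generic notation) is
\[
N_{\rm HI} \approx 1.823 \times 10^{18}\ {\rm cm^{-2}} \int \left( \frac{T_{B}(v)}{\rm K} \right) \left( \frac{dv}{\rm km\,s^{-1}} \right),
\]
which is the conversion underlying the all-sky column density maps mentioned above.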
In this chapter we extend Cauchy’s Theorem to cases where the integrand is not analytic, for example, when the integrand possesses isolated singular points. Each isolated singular point contributes a term proportional to what is called the residue of the singularity. This extension, called the residue theorem, is very useful in applications such as the evaluation of definite integrals of various types. The residue theorem provides a straightforward and sometimes the only method to compute these integrals. We also show how to use contour integration to compute the solutions of certain partial differential equations by the techniques of Fourier and Laplace transforms.
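In its standard form, the residue theorem states that if f is analytic inside and on a simple closed, positively oriented contour C except for isolated singularities z_1, ..., z_n in its interior, then
\[
\oint_{C} f(z)\, dz = 2\pi i \sum_{k=1}^{n} \operatorname{Res}\left( f, z_{k} \right).
\]
Evaluating a real definite integral then amounts to choosing a contour on which the integral of interest appears and summing the residues enclosed.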
This chapter addresses the solution of nonlinear programming (NLP) problems through algorithms whose objective is to find a point satisfying the Karush–Kuhn–Tucker conditions through different applications of Newton's method. The algorithms considered include successive quadratic programming, the reduced-gradient method, and the interior-point method. The basic assumptions behind each method are stated and used to derive the major steps involved in these algorithms. We make brief reference to optimization software, including SNOPT, MINOS, CONOPT, IPOPT, and KNITRO. Finally, general guidelines are given on how to formulate good NLP models.
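As a brief illustration of the Newton-based viewpoint, successive quadratic programming solves at each iterate x_k a quadratic programming subproblem of roughly the following form (a generic sketch, not tied to any particular solver):
\[
\min_{d}\ \ \nabla f(x_{k})^{T} d + \tfrac{1}{2}\, d^{T} W_{k}\, d
\quad \text{s.t.} \quad h(x_{k}) + \nabla h(x_{k})^{T} d = 0, \qquad g(x_{k}) + \nabla g(x_{k})^{T} d \le 0,
\]
where W_k is the Hessian of the Lagrangian or a quasi-Newton approximation to it; the solution d defines the search direction.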
This chapter introduces complex numbers, elementary complex functions, and their basic properties. It will be seen that complex numbers have a simple two-dimensional character that lends itself to a straightforward geometric description. While many results of real variable calculus carry over, some very important novel and useful notions appear in the calculus of complex functions. Applications to differential equations are briefly discussed as well.
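The two-dimensional geometric character referred to above is captured by the polar representation
\[
z = x + i y = r\left( \cos\theta + i \sin\theta \right) = r e^{i\theta}, \qquad r = |z| = \sqrt{x^{2} + y^{2}}, \quad \theta = \arg z,
\]
so that multiplication of complex numbers corresponds to multiplying moduli and adding arguments.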
The representation of complex functions frequently requires the use of infinite series expansions. The best known are Taylor and Laurent series, which represent analytic functions in appropriate domains. Applications often require that we manipulate series by termwise differentiation and integration. These operations may be substantiated by employing the notion of uniform convergence. Series expansions break down at points or curves where the represented function is not analytic. Such locations are termed singular points or singularities of the function. The study of the singularities of analytic functions is vitally important in many applications including contour integration, differential equations in the complex plane, and conformal mappings.
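For reference, a Laurent expansion about a point z_0, valid in an annulus of analyticity, has the form
\[
f(z) = \sum_{n=-\infty}^{\infty} a_{n} (z - z_{0})^{n}, \qquad
a_{n} = \frac{1}{2\pi i} \oint_{C} \frac{f(\zeta)}{(\zeta - z_{0})^{n+1}}\, d\zeta,
\]
with C a positively oriented contour lying in the annulus; the terms with n < 0 encode the singular behaviour of f at z_0.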
This chapter first describes general approaches for anticipating uncertainty in optimization models. The strategies include optimizing the expected value, the minimax strategy, chance-constrained programming, two-stage and multistage programming, and robust optimization. The chapter focuses on the solution of two-stage stochastic MILP problems in which 0-1 variables are present in the stage-1 decisions. The discretization of the uncertain parameters is described, which gives rise to scenario trees. We then present the extended MILP formulation that explicitly considers all possible scenarios. Since this problem can become too large to solve directly, the Benders decomposition method (also known as the L-shaped method) is introduced, in which a master MILP problem is defined through duality in order to predict new integer values for the stage-1 decisions, as well as a lower bound. The extension to multistage programming problems is also briefly discussed, and a brief reference is made to robust optimization, in which the robust counterpart is derived.
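Schematically, the two-stage problem and the Benders (L-shaped) cuts can be written as follows; the symbols are generic placeholders rather than the chapter's notation:
\[
\min_{x}\ c^{T} x + \sum_{s} p_{s}\, Q_{s}(x), \qquad
Q_{s}(x) = \min_{y \ge 0} \left\{ q_{s}^{T} y \ :\ W y = h_{s} - T_{s} x \right\},
\]
where x collects the stage-1 decisions (including the 0-1 variables), s indexes scenarios with probabilities p_s, and y denotes the stage-2 recourse. Dual solutions \pi_s of the scenario subproblems yield the optimality cut \( \theta \ge \sum_{s} p_{s}\, \pi_{s}^{T} (h_{s} - T_{s} x) \), which is added to the master MILP to bound the recourse cost from below.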