Chapter 3 describes models for multivariate distributions. Included are the multinormal distribution, the multi-lognormal distribution, multivariate distributions constructed as products of conditional distributions, and three families of multivariate distributions with prescribed marginals and covariances: the Nataf family, the Morgenstern family, and copula distributions. Several structural reliability methods require transformation of the original random variables into statistically independent standard normal random variables. The conditions for such a transformation to exist and be invertible are described, and formulations are presented for each of the described multivariate distribution models. Also developed are the Jacobians of each transformation and its inverse, which are likewise needed in reliability analysis. Analytical and numerical results to facilitate the use of the Nataf and Morgenstern distributions are provided in this chapter.
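To make the transformation concrete, the sketch below maps two correlated variables with lognormal marginals to independent standard normals via the marginal CDFs followed by a Cholesky decorrelation. The marginals, the correlation value, and the simplification of using the given correlation matrix in place of the adjusted fictive correlation matrix of the exact Nataf model are all illustrative assumptions, not values from the chapter.

```python
# Minimal sketch of a Nataf-style transformation to independent standard
# normal variables. Marginals, correlations, and values are illustrative;
# the exact Nataf model requires an adjusted "fictive" correlation matrix R0,
# which is approximated here by the original correlation matrix R.
import numpy as np
from scipy import stats

# Two correlated lognormal variables (illustrative marginals).
marginals = [stats.lognorm(s=0.3, scale=10.0), stats.lognorm(s=0.5, scale=5.0)]
R = np.array([[1.0, 0.4],
              [0.4, 1.0]])        # correlation in the normal space (assumed = R0)
L = np.linalg.cholesky(R)

def x_to_u(x):
    """Map original variables x to independent standard normals u."""
    # Marginal step: z_i = Phi^{-1}(F_i(x_i)) gives correlated standard normals.
    z = np.array([stats.norm.ppf(m.cdf(xi)) for m, xi in zip(marginals, x)])
    # Decorrelation step: u = L^{-1} z.
    return np.linalg.solve(L, z)

def u_to_x(u):
    """Inverse map: independent standard normals u back to x."""
    z = L @ u
    return np.array([m.ppf(stats.norm.cdf(zi)) for m, zi in zip(marginals, z)])

x = np.array([12.0, 4.0])
u = x_to_u(x)
print("u =", u, " round trip x =", u_to_x(u))
```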
Chapter 8 introduces the theory and computational methods for system reliability analysis. The system is defined as a collection of possibly interdependent components, such that the system state depends on the states of its constituent components. The system function, cut and link sets, and the special cases of series and parallel systems are defined. Methods for reliability assessment of systems with independent and dependent components are described, including methods for bounding the system failure probability by bi- or tri-component joint probabilities. Bounds on the system failure probability under incomplete component probability information are developed using linear programming. An efficient matrix-based method for computing the reliability of certain systems is described. The focus is then turned to structural systems, where the state of each component is defined in terms of a limit-state function. FORM approximations are developed for series and parallel structural systems, and the inclusion–exclusion rule or bounding formulas are used to obtain the FORM approximation for general structural systems. Other topics include an event-tree approach for modeling sequential failures, measures of component importance, and parameter sensitivities of the system failure probability.
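As a minimal numerical companion, the following Monte Carlo sketch estimates the failure probability of a series and a parallel system of three dependent components; the component reliability indices and the equicorrelation of the safety margins are invented for illustration.

```python
# Illustrative Monte Carlo estimate of series- and parallel-system failure
# probabilities for components with dependent safety margins.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Three jointly normal component safety margins with equicorrelation rho.
mean = np.array([2.0, 2.5, 3.0])               # component reliability indices
rho = 0.5
cov = rho + (1.0 - rho) * np.eye(3)            # 1 on diagonal, rho off diagonal
g = rng.multivariate_normal(mean, cov, size=n) # margin g_i <= 0 means failure

fail = g <= 0.0
p_series = np.mean(fail.any(axis=1))    # series system: any component fails
p_parallel = np.mean(fail.all(axis=1))  # parallel system: all components fail

print(f"series   P_f ~ {p_series:.2e}")
print(f"parallel P_f ~ {p_parallel:.2e}")
```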
Chapter 14 develops methods for reliability-based design optimization (RBDO). Three classes of RBDO problems are considered: minimizing the cost of design subject to reliability constraints, maximizing the reliability subject to a cost constraint, and minimizing the cost of design plus the expected cost of failure subject to reliability and other constraints. The solution of these problems requires the coupling of reliability methods with optimization algorithms. Among many solution methods available in the literature, the main focus in this chapter is on a decoupling approach using FORM, which under certain conditions has proven convergence properties. The approach requires the solution of a sequence of decoupled reliability and optimization problems that are shown to gradually approach a near-optimal solution. Both structural component and system problems are considered. An alternative approach employs sampling to compute the failure probability with the number of samples increasing as the optimal solution point is approached. Also described are approaches that make use of surrogate models constructed in the augmented space of random variables and design parameters. Finally, the concept of buffered failure probability is introduced as a measure closely related to the failure probability, which provides a convenient alternative in solving the optimization subproblem.
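A toy instance of the first problem class, minimizing cost subject to a reliability constraint, can be written directly with a general-purpose optimizer when the failure probability has a closed form; the single design variable and all numbers below are hypothetical, and real RBDO would couple FORM or sampling with the optimizer as the chapter describes.

```python
# Toy RBDO sketch: choose capacity d to minimize cost subject to a
# failure-probability constraint. Problem and numbers are hypothetical.
import numpy as np
from scipy import optimize, stats

mu, sigma = 10.0, 2.0          # load X ~ N(mu, sigma^2)
p_target = 1e-3                # admissible failure probability

cost = lambda d: d[0]                              # cost grows with capacity d
p_fail = lambda d: stats.norm.sf(d[0], mu, sigma)  # P(X > d)

res = optimize.minimize(
    cost, x0=[mu],
    constraints=[{"type": "ineq", "fun": lambda d: p_target - p_fail(d)}],
)
d_star = mu + sigma * stats.norm.ppf(1 - p_target)  # closed-form check
print(f"optimizer d* = {res.x[0]:.3f}, analytic d* = {d_star:.3f}")
```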
Becoming familiar with the Laplace transform F(s) of basic time-domain functions f(t) such as exponentials, sinusoids, powers of t, and hyperbolic functions can be immensely useful in a variety of applications. That is because many of the more complicated functions that describe the behavior of real-world systems and that appear in differential equations can be synthesized as a mixture of these basic functions. And although there are dozens of books and websites that show you how to find the Laplace transform of such functions, much harder to find are explanations that help you achieve an intuitive understanding of why F(s) takes the form it does, that is, an understanding that goes beyond “That’s what the integral gives”. So the goal of this chapter is not just to show you the Laplace transforms of some basic functions, but to provide explanations that will help you see why those transforms make sense.
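Recalling the defining integral F(s) = ∫₀^∞ f(t) e^⁻ˢᵗ dt, the basic transform pairs discussed in this chapter can be checked symbolically; the short SymPy sketch below is a supplement to this summary, not part of the book.

```python
# Symbolic check of basic Laplace transform pairs using SymPy.
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a = sp.symbols("a", positive=True)

# Exponential, sinusoid, power of t, and hyperbolic function.
for f in (sp.exp(-a * t), sp.sin(a * t), t**2, sp.cosh(a * t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f"L{{{f}}} = {F}")
```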
In the first five chapters of this book, we introduced two main families of low-dimensional models for high-dimensional data: sparse models and low-rank models. In Chapter 5, we saw how we could combine these basic models to accommodate data matrices that are superpositions of sparse and low-rank matrices. This generalization allowed us to model richer classes of data, including data containing erroneous observations. In this chapter, we further generalize these basic models to a situation in which the object of interest consists of a superposition of a few elements from some set of “atoms” (Section 6.1). This construction is general enough to include all of the models discussed so far, as well as several other models of practical importance. With this general idea in mind, we then discuss unified approaches to studying the power of low-dimensional signal models for estimation, measured in terms of the number of measurements needed for exact recovery or recovery with sparse errors (Section 6.2). These analyses generalize and unify the ideas developed over the earlier chapters, and offer definitive results on the power of convex relaxation. Finally, in Section 6.3, we discuss limitations of convex relaxation, which in some situations will force us to consider nonconvex alternatives, to be studied in later chapters.
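The power of convex relaxation referred to above can be seen in a few lines of code: ℓ¹ minimization, min ‖x‖₁ subject to Ax = b, recast as a linear program, exactly recovers a sparse vector from underdetermined random measurements. The dimensions and sparsity level below are illustrative choices.

```python
# Exact recovery of a sparse vector by l1 minimization, solved as a
# linear program. Dimensions and sparsity level are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 80, 200, 8

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurements
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = rng.standard_normal(k)           # k-sparse ground truth
b = A @ x0

# Split x = xp - xm with xp, xm >= 0; then ||x||_1 = sum(xp) + sum(xm).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b)            # default bounds enforce x >= 0
x_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(x_hat - x0)))
```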
This topic examines not just what goods and services consumers buy, but why they buy them. The standard neoclassical model, based on expected utility theory and indifference curve analysis, examines the ‘what’ question, but is too narrow in focus and involves numerous anomalies. To gain a better understanding of what people buy it is necessary to understand the psychology of consumers and examine the ‘why’ question. This approach reveals several important biases which cause the anomalies: biases in expectations, biases in estimating and maximising utilities, and biases in discounting. These biases are often a result of bounded rationality, social preferences and emotional or visceral influences. The field of behavioural economics has developed a body of theory, based on the concepts of prospect theory and mental accounting, which accounts for these biases and anomalies. The fundamental concepts here are the use of reference points and loss aversion. These and other behavioural factors are in turn based on evolutionary psychology. The process of natural selection has caused our brains to evolve not as utility-maximising machines but as biological-fitness-maximising machines.
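For concreteness, a common parametric form of the prospect-theory value function, with a reference point and loss aversion, is sketched below; the parameter values are the frequently cited Tversky–Kahneman estimates, used here purely for illustration.

```python
# Sketch of the prospect-theory value function with a reference point and
# loss aversion; parameter values are the often-cited Tversky-Kahneman
# estimates and are illustrative only.
def value(x, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Value of outcome x relative to a reference point.

    Gains are valued as g**alpha; losses as -lam * (-g)**beta, so a loss
    looms larger than an equal-sized gain (loss aversion).
    """
    g = x - reference
    return g**alpha if g >= 0 else -lam * (-g) ** beta

# A $100 loss hurts more than a $100 gain pleases:
print(value(100.0), value(-100.0))   # ~ 57.5 vs ~ -129.5
```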
Chapter 6 describes the first-order reliability method (FORM), which employs full distributional information. The chapter begins with a presentation of the important properties of the outcome space of standard normal random variables, which are used in FORM and other reliability methods. The FORM is presented as an approximate method that employs linearization of the limit-state surface at the design point in the standard normal space. The solution requires transformation of the random variables to the standard normal space and solution of a constrained optimization problem to find the design point. The accuracy of the FORM approximation is discussed, and several measures of error are introduced. Measures of importance of the random variables in contributing to the variance of the linearized limit-state function and with respect to statistically equivalent variations in means and standard deviations are derived. Also derived are the sensitivities of the reliability index and the first-order failure probability approximation with respect to parameters in the limit-state function or in the probability distribution model. Other topics in this chapter include addressing problems with multiple design points, solution of an inverse reliability problem, and numerical approximation of the distribution of a function of random variables by FORM.
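The design-point search mentioned above is most often carried out with the classical HL-RF fixed-point iteration; the sketch below applies it to an invented limit-state function already expressed in the standard normal space, and reports the reliability index and first-order failure probability.

```python
# Minimal sketch of the FORM design-point search using the HL-RF iteration;
# the limit-state function is an arbitrary illustrative example.
import numpy as np
from scipy import stats

def g(u):          # g(u) <= 0 defines failure (illustrative)
    return 3.0 + 0.1 * u[0] ** 2 - u[0] - u[1]

def grad_g(u):
    return np.array([0.2 * u[0] - 1.0, -1.0])

u = np.zeros(2)
for _ in range(50):                    # HL-RF fixed-point iteration
    grad = grad_g(u)
    u_new = (grad @ u - g(u)) / (grad @ grad) * grad
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)               # reliability index = distance to u*
print(f"design point u* = {u}, beta = {beta:.4f}, "
      f"FORM P_f ~ {stats.norm.cdf(-beta):.4e}")
```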
This topic examines relationships between inputs and outputs in the production process. The starting point is the explanation of various concepts that are fundamental in production and costs: factors of production, fixed and variable factors, short and long run, scale, productivity and efficiency. The increasing importance of intangible factors is discussed, along with their main features of scalability, sunkenness, spillover effects and synergy. Production functions can take various mathematical forms. The significance of average and marginal product is explained. The concepts of the law of diminishing returns, isoquants, economies of scale, diseconomies of scale and returns to scale are introduced and interpreted. The determination of the optimal use of factors of production is explained, using mathematical analysis. Case studies play an important role in this topic in terms of demonstrating the application of theoretical concepts to real-life situations, particularly in a digital age where intangible assets and network effects are important. The leading case study relates to the so-called ‘productivity puzzle’, and various explanations of the puzzle are presented.
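As one concrete mathematical form, a Cobb–Douglas production function (an illustrative choice, not singled out in the topic text) makes the optimal-factor-use condition easy to verify numerically: at the cost-minimizing input mix, the ratio of marginal products equals the factor-price ratio. All parameter values below are made up.

```python
# Cobb-Douglas production function Q = A * K**a * L**b: cost-minimizing
# factor mix for a target output, and a check of the optimality condition
# MPL / MPK = w / r. Parameter values are invented for the example.
from scipy.optimize import minimize

A_, a, b = 1.0, 0.3, 0.7        # technology and output elasticities
w, r = 20.0, 10.0               # wage and rental rate of capital
Q_target = 100.0

production = lambda K, L: A_ * K**a * L**b
cost = lambda v: r * v[0] + w * v[1]               # v = (K, L)

res = minimize(cost, x0=[50.0, 50.0],
               constraints=[{"type": "eq",
                             "fun": lambda v: production(v[0], v[1]) - Q_target}],
               bounds=[(1e-6, None), (1e-6, None)])
K_opt, L_opt = res.x

MPK = a * production(K_opt, L_opt) / K_opt         # marginal product of capital
MPL = b * production(K_opt, L_opt) / L_opt         # marginal product of labour
print(f"K* = {K_opt:.2f}, L* = {L_opt:.2f}, "
      f"MPL/MPK = {MPL/MPK:.2f}, w/r = {w/r:.2f}")
```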
The problem of identifying low-dimensional structure in signals or data that live in high-dimensional spaces is among the most fundamental in engineering and mathematics; over its long history it has interwoven fields such as system theory, pattern recognition, signal processing, machine learning, and statistics.
Chapter 10 describes Bayesian methods for parameter estimation and updating of structural reliability in the light of observations. The chapter begins with a description of the sources and types of uncertainties. Uncertainties are categorized as aleatory or epistemic; however, it is argued that this distinction is not fundamental and makes sense only within the universe of models used for a given project. The Bayesian updating formula is then developed as the product of a prior distribution and the likelihood function, yielding the posterior (updated) distribution of the unknown parameters. Selection of the prior and formulation of the likelihood are discussed in detail. Formulations are presented for parameters in probability distribution models, as well as in mathematical models of physical phenomena. Three formulations are presented for reliability analysis under parameter uncertainties: point estimate, predictive estimate, and confidence interval of the failure probability. The discussion then focuses on the updating of structural reliability in the light of observed events that are characterized by either inequality or equality expressions of one or more limit-state functions. Also presented is the updating of the distribution of random variables in the limit-state function(s) in the light of observed events, e.g., the failure or non-failure of a system.
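The updating formula itself fits in a few lines: on a parameter grid, the posterior is the normalized product of prior and likelihood. In the sketch below the model (exponential lifetimes with an unknown rate), the prior, and the data are all invented for illustration.

```python
# Minimal sketch of Bayesian updating: posterior ~ prior * likelihood,
# evaluated on a grid for one unknown parameter. Model, prior, and data
# are invented for illustration.
import numpy as np
from scipy import stats

data = np.array([1.2, 0.7, 2.5, 0.9, 1.8])        # observed lifetimes (made up)

lam = np.linspace(0.01, 3.0, 1000)                # grid over the unknown rate
prior = stats.lognorm(s=0.5, scale=1.0).pdf(lam)  # assumed prior on lambda

# Likelihood of i.i.d. exponential observations at each grid point.
loglik = len(data) * np.log(lam) - lam * data.sum()
posterior = prior * np.exp(loglik)

dlam = lam[1] - lam[0]
posterior /= posterior.sum() * dlam               # normalize on the grid

post_mean = (lam * posterior).sum() * dlam
print(f"posterior mean of lambda ~ {post_mean:.3f}")
```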