In man-made environments, most objects of interest are rich in regular, repetitive, and symmetric structures. Figure 15.1 shows images of some representative structured objects. An image of such an object inherits these regular structures and thus encodes rich information about the object's 3D shape, pose, and identity.
This book is about modeling and exploiting simple structure in signals, images, and data. In this chapter, we take our first steps in this direction. We study a class of models known as sparse models, in which the signal of interest is a superposition of a few basic signals (called “atoms”) selected from a large “dictionary.” This basic model arises in a surprisingly large number of applications. It also illustrates fundamental tradeoffs in modeling and computation that will recur throughout the book.
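To make the dictionary model concrete, the sketch below (an illustrative example, not code from the book; all dimensions and names are assumptions) builds a signal as a superposition of three atoms drawn from a random dictionary and then recovers those atoms with orthogonal matching pursuit, a classic greedy method for sparse recovery.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 200, 3                      # measurements, atoms, sparsity
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms (columns)

support = [10, 50, 120]                    # the few "active" atoms
coeffs = np.array([1.5, -2.0, 1.0])
y = D[:, support] @ coeffs                 # signal = superposition of 3 atoms

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit by least squares."""
    residual, chosen = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        chosen.append(j)
        c, *_ = np.linalg.lstsq(D[:, chosen], y, rcond=None)
        residual = y - D[:, chosen] @ c
    x = np.zeros(D.shape[1])
    x[chosen] = c
    return x

x_hat = omp(D, y, k)
```

With 100 Gaussian measurements and only 3 active atoms out of 200, a greedy method of this kind recovers the correct atoms and coefficients with overwhelming probability, previewing the recovery guarantees studied later.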
Nonlinear stochastic dynamics is a broad topic well beyond the scope of this book. Chapter 13 describes a method of solution for a certain class of nonlinear stochastic dynamic problems by use of FORM. The approach belongs to the class of solution methods known as equivalent linearization. Here, the linearization is carried out by replacing the nonlinear system with a linear one whose tail probability equals the FORM approximation of the tail probability of the nonlinear system – hence the name tail-equivalent linearization method. The equivalent linear system is obtained non-parametrically in terms of its unit impulse-response function. For small failure probabilities, the accuracy of the method is shown to be far superior to that of other linearization methods. Furthermore, the method captures the non-Gaussian distribution of the nonlinear response. This chapter develops the method for systems subjected to Gaussian and non-Gaussian excitations and for nonlinear systems with differentiable loading paths. Approximations for level-crossing rates and the first-passage probability are also developed. The method is extended to nonlinear structures subjected to multiple excitations, such as bi-component base motion, and to evolutionary input processes.
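The FORM approximation underlying the method can itself be illustrated in a few lines. The sketch below (a generic illustration, not the book's code; the limit-state function is purely hypothetical) uses the standard Hasofer-Lind/Rackwitz-Fiessler iteration to locate the design point of a limit-state function g in standard normal space; the reliability index beta is the distance from the origin to that point, and the failure probability is approximated by Phi(-beta).

```python
import math
import numpy as np

def hlrf(g, grad_g, u0, n_iter=50):
    """HL-RF iteration: repeatedly project onto the linearized
    limit-state surface g(u) = 0 in standard normal space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        grad = grad_g(u)
        u = (grad @ u - g(u)) * grad / (grad @ grad)
    return u

# illustrative limit state: failure when g(u) <= 0
g = lambda u: 3.0 - u[0] - 0.2 * u[1] ** 2
grad_g = lambda u: np.array([-1.0, -0.4 * u[1]])

u_star = hlrf(g, grad_g, [0.0, 0.0])        # design point
beta = np.linalg.norm(u_star)               # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0)) # Phi(-beta)
```

For this limit state the iteration converges to the design point (3, 0), giving beta = 3 and a failure probability of about 0.00135; the tail-equivalent linearization method builds on exactly this kind of tail-probability matching.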
In the previous chapters, we have studied how either a sparse vector or a low-rank matrix can be recovered from compressive or incomplete measurements. In this chapter, we show that it is also possible to recover a sparse signal and a low-rank signal simultaneously from their superposition (mixture), or from highly compressive measurements of that superposition. This combination of rank and sparsity gives rise to a broader class of models that can capture richer structures underlying high-dimensional data, as we will see in examples in this chapter and in later application chapters. It also raises new technical challenges: whether, and how, such structures can be recovered correctly and efficiently from relatively few observations.
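As a concrete instance of such a sparse-plus-low-rank model, the sketch below (an illustrative implementation with assumed sizes and standard default parameters, not the book's code) decomposes a matrix M = L0 + S0 using the augmented-Lagrangian iteration for principal component pursuit, which alternates singular-value thresholding for the low-rank part with entrywise soft thresholding for the sparse part.

```python
import numpy as np

def svt(X, tau):
    # singular-value thresholding: prox of tau * (nuclear norm)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # entrywise soft thresholding: prox of tau * (l1 norm)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, n_iter=1000):
    """Augmented-Lagrangian iteration for
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))       # common default weight
    mu = 0.25 * m * n / np.abs(M).sum()  # common default penalty
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)            # dual update on the constraint
    return L, S

rng = np.random.default_rng(0)
m = n = 30
L0 = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # rank 2
S0 = np.zeros((m, n))
idx = rng.choice(m * n, size=45, replace=False)                 # ~5% corrupted
S0.flat[idx] = rng.uniform(-5.0, 5.0, size=45)
L_hat, S_hat = pcp(L0 + S0)
```

On easy instances like this one, with the rank and the corruption fraction both small, the iteration separates the two components to high accuracy, which is the phenomenon whose correctness conditions this chapter studies.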
In Chapter 8, we introduced optimization techniques that efficiently solve many convex optimization problems that arise in recovering structured signals from incomplete or corrupted measurements, using known low-dimensional models. In contrast, as we saw in Chapter 7, problems associated with learning low-dimensional models from sample data are often nonconvex: either they do not have tractable convex relaxations or the nonconvex formulation is preferred due to physical or computational constraints (such as limited memory). In this chapter, we introduce optimization algorithms for nonconvex programs.
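A simple illustration of the nonconvex setting (a generic sketch with illustrative sizes and step size, not the book's algorithm) is low-rank matrix factorization: minimizing f(U, V) = (1/2) || U V^T - M ||_F^2 is nonconvex in (U, V) jointly because of the bilinear term, yet plain gradient descent from a small random initialization typically finds a global minimizer when M is exactly low-rank.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 10, 10, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

# small random initialization; the factorization makes f nonconvex
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))

step = 0.005
for _ in range(10000):
    R = U @ V.T - M                                  # residual
    U, V = U - step * R @ V, V - step * R.T @ U      # simultaneous gradient step

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Despite the nonconvexity, the relative error is driven essentially to zero here; understanding when and why such benign behavior occurs is one of the themes of the algorithms introduced in this chapter.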
This chapter is the first of two on business strategy and covers the fundamentals of strategic planning and positioning. The starting point is a discussion of the concept of competitive advantage and how it relates to value creation. Different types of competitive advantage, based on costs and benefits, are discussed and related to market positioning, targeting, and segmentation. The relevance of price elasticity is explained in the context of positioning and competitive advantage. Various forms of integration are discussed: vertical, horizontal, and diversification. The nature, costs, and benefits of each form are explained, and recent trends in diversification are reviewed along with empirical studies. As in other chapters, case studies are vital for illustrating the management principles; here, three prominent tech firms, Apple, Netflix, and Tesla, are discussed in terms of their strategy development since their origins. Although all three are large-cap tech firms of global reach, they have quite different prospects.
Extracts from A New Tax System (Goods and Services Tax) Act 1999 (Cth); A New Tax System (Goods and Services Tax) Regulations 2019 (Cth); A New Tax System (Goods and Services Tax Imposition—General) Act 1999 (Cth); A New Tax System (Goods and Services Tax Imposition—Customs) Act 1999 (Cth); A New Tax System (Goods and Services Tax Imposition—Excise) Act 1999 (Cth); A New Tax System (Luxury Car Tax) Act 1999 (Cth); A New Tax System (Luxury Car Tax Imposition—General) Act 1999 (Cth)