Maximise student engagement and understanding of matrix methods in data-driven applications with this modern teaching package. Students are introduced to matrices in two preliminary chapters, before progressing to advanced topics such as the nuclear norm, proximal operators and convex optimization. Highlighted applications include low-rank approximation, matrix completion, subspace learning, logistic regression for binary classification, robust PCA, dimensionality reduction and Procrustes problems. Extensively classroom-tested, the book includes over 200 multiple-choice questions suitable for in-class interactive learning or quizzes, as well as homework exercises (with solutions available for instructors). It encourages active learning with engaging 'explore' questions (answers appear at the end of each chapter) and Julia code examples that demonstrate how the mathematics is used in practice. A suite of computational notebooks offers a hands-on learning experience for students. This is an ideal textbook for upper-level undergraduates and first-year graduate students who have taken a prior course in basic linear algebra.
In many applications, dimensionality reduction is important. Uses of dimensionality reduction include visualization, removing noise, and decreasing compute and memory requirements, such as for image compression. This chapter focuses on low-rank approximation of a matrix. There are theoretical models for why large matrices should be approximately low rank. Low-rank approximations are also used to compress large neural network models to reduce computation and storage. The chapter begins with the classic approach to approximating a matrix by a low-rank matrix, using a nonconvex formulation that has a remarkably simple singular value decomposition solution. It then applies this approach to the source localization application via the multidimensional scaling method and to the photometric stereo application. It then turns to convex formulations of low-rank approximation based on proximal operators that involve singular value shrinkage. It discusses methods for choosing the rank of the approximation, and describes the optimal shrinkage method called OptShrink. It discusses related dimensionality reduction methods including (linear) autoencoders and principal component analysis. It applies the methods to learning low-dimensional subspaces from training data for subspace-based classification problems. Finally, it extends the method to streaming applications with time-varying data. This chapter bridges the classical singular value decomposition tool with modern applications in signal processing and machine learning.
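As a brief illustration of the approach described above, the following Julia sketch computes a rank-r approximation by truncated SVD and applies singular value soft-thresholding (the shrinkage operation underlying the convex formulations); the data matrix, rank, and threshold here are illustrative assumptions, not code from the book.

```julia
using LinearAlgebra

# Best rank-r approximation of Y via the truncated SVD (Eckart-Young).
# Y, r, and the threshold β are illustrative placeholders.
Y = randn(100, 8) * randn(8, 60) + 0.01 * randn(100, 60)  # nearly low-rank data
r = 8
F = svd(Y)
Xr = F.U[:, 1:r] * Diagonal(F.S[1:r]) * F.Vt[1:r, :]      # rank-r approximation

# Singular value soft-thresholding: the proximal operator of the nuclear norm.
svt(A, β) = (G = svd(A); G.U * Diagonal(max.(G.S .- β, 0)) * G.Vt)

@show rank(svt(Y, 0.5))   # shrinkage typically reduces the rank
```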
An important operation in signal processing and machine learning is dimensionality reduction. There are many such methods, but the starting point is usually linear methods that map data to a lower-dimensional set called a subspace. When working with matrices, the notion of dimension is quantified by rank. This chapter reviews subspaces, span, dimension, rank, and nullspace. These linear algebra concepts are crucial to thoroughly understanding the singular value decomposition (SVD), a primary tool for the rest of the book (and beyond). The chapter concludes with a machine learning application, signal classification by nearest subspace, which builds on all the concepts of the chapter.
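As a small, hedged illustration of the nearest-subspace classifier mentioned above (function names, subspace dimension, and random training data are assumptions for this sketch, not the book's implementation):

```julia
using LinearAlgebra

# Each class k has a training matrix Xk whose leading left singular vectors
# span an r-dimensional subspace; a test vector is assigned to the class
# whose subspace leaves the smallest residual.
class_basis(X, r) = svd(X).U[:, 1:r]          # orthonormal basis for a class subspace

function nearest_subspace(y, bases)
    residuals = [norm(y - Q * (Q' * y)) for Q in bases]  # ‖y - Q Q' y‖ per class
    return argmin(residuals)
end

# Tiny usage example: each class is generated from a 3-dimensional subspace.
X1 = randn(20, 3) * randn(3, 10) + 0.1 * randn(20, 10)
X2 = randn(20, 3) * randn(3, 10) + 0.1 * randn(20, 10)
bases = [class_basis(X1, 3), class_basis(X2, 3)]
y = X1[:, 1] + 0.1 * randn(20)
@show nearest_subspace(y, bases)              # expected to return 1
```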
This chapter covers matrices with special structures that arise in many applications. It discusses companion matrices, a classic linear algebra topic. It constructs circulant matrices from a particular companion matrix and describes their signal processing applications. It discusses the closely related family of Toeplitz matrices. It describes the power iteration, which is used later in the chapter for Markov chains. It discusses nonnegative matrices and their relationships to graphs, leading to the analysis of Markov chains. The chapter ends with two applications: Google’s PageRank method and spectral clustering using graph Laplacians.
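The power iteration mentioned above admits a very short implementation; the following hedged Julia sketch (illustrative fixed iteration count and example transition matrix) estimates the stationary distribution of a tiny Markov chain:

```julia
using LinearAlgebra

# Power iteration for the leading eigenvector (fixed iteration count for simplicity).
function power_iteration(A; iters = 200)
    x = normalize(randn(size(A, 1)))
    for _ in 1:iters
        x = normalize(A * x)
    end
    return x
end

# Column-stochastic transition matrix P, so the stationary distribution satisfies P * p = p.
P = [0.9 0.2;
     0.1 0.8]
p = power_iteration(P)
p ./= sum(p)      # rescale the unit-norm eigenvector into a probability vector
@show p           # approximately [2/3, 1/3]
```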
Many of the preceding chapters involved optimization formulations: linear least squares, Procrustes, low-rank approximation, multidimensional scaling. All of these have analytical solutions, like the pseudoinverse for minimum-norm least squares problems and the truncated singular value decomposition for low-rank approximation. But often we need iterative optimization algorithms, for example if no closed-form minimizer exists, or if the analytical solution requires too much computation and/or memory (e.g., the singular value decomposition for large problems). To solve an optimization problem via an iterative method, we start with some initial guess and then the algorithm produces a sequence of iterates that hopefully converges to a minimizer. This chapter describes the basics of gradient-based iterative optimization algorithms, including preconditioned gradient descent (PGD) for the linear least-squares (LS) problem. PGD uses a fixed step size, whereas preconditioned steepest descent uses a line search to determine the step size. The chapter then considers gradient descent and accelerated versions for general smooth convex functions. It applies gradient descent to the machine learning application of binary classification via logistic regression. Finally, it summarizes stochastic gradient descent.
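For concreteness, here is a minimal Julia sketch of gradient descent with a fixed step size for the linear least-squares cost; the step size 1/L with L = ‖A‖₂² (a Lipschitz constant of the gradient) and the random problem data are illustrative assumptions:

```julia
using LinearAlgebra

# Gradient descent for f(x) = 0.5 ‖A x - y‖², whose gradient is A'(A x - y).
function gd_ls(A, y; iters = 500)
    L = opnorm(A)^2                    # Lipschitz constant of the gradient
    x = zeros(size(A, 2))
    for _ in 1:iters
        x -= (1 / L) * (A' * (A * x - y))
    end
    return x
end

A = randn(50, 10); y = randn(50)
xhat = gd_ls(A, y)
@show norm(xhat - A \ y)               # small: GD approaches the least-squares solution
```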
This chapter introduces matrix factorizations – somewhat like the reverse of matrix multiplication. It starts with the eigendecomposition of symmetric matrices, then generalizes to normal and asymmetric matrices. It introduces the basics of the singular value decomposition (SVD) of general matrices. It discusses a simple application of the SVD that uses the largest singular value of a matrix (the spectral norm), posed as an optimization problem, and then describes optimization problems related to eigenvalues and the smallest singular value. (The “real” SVD applications appear in subsequent chapters.) It discusses the special situations when one can relate the eigendecomposition and an SVD of a matrix, leading to the special class of positive (semi)definite matrices. Along the way there are quite a few small eigendecomposition and SVD examples.
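As a small numerical companion to this discussion, the following Julia sketch (with an illustrative 2 × 2 symmetric matrix) shows the connection between the eigendecomposition, the SVD, and the spectral norm:

```julia
using LinearAlgebra

A = [2.0 1.0;
     1.0 3.0]                     # symmetric positive definite example
λ, V = eigen(A)                   # A = V * Diagonal(λ) * V'
F = svd(A)                        # for symmetric positive semidefinite A,
                                  # the singular values equal the eigenvalues
@show λ                           # sorted in ascending order
@show F.S                         # same values, sorted in descending order
@show opnorm(A) ≈ maximum(F.S)    # spectral norm = largest singular value
```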
This chapter reviews vectors and matrices, and basic properties like shape, orthogonality, determinant, eigenvalues, and trace. It also reviews operations like multiplication and transpose. These operations are used throughout the book and are pervasive in the literature. In short, arranging data into vectors and matrices allows one to apply powerful data analysis techniques over a wide spectrum of applications. Throughout, this chapter (and book) illustrates how the ideas are implemented in practice in Julia.
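A few of the operations reviewed in this chapter, written as a short Julia sketch with illustrative values:

```julia
using LinearAlgebra

A = [1 2;
     3 4]
B = [0 1;
     1 0]
@show A * B            # matrix multiplication
@show A'               # transpose
@show tr(A)            # trace: 1 + 4 = 5
@show det(A)           # determinant: 1*4 - 2*3 = -2
@show eigvals(A)       # eigenvalues
v = [1, 2]
@show norm(v)          # Euclidean norm
@show dot(v, [3, 4])   # inner product: 1*3 + 2*4 = 11
```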
Many applications require solving a system of linear equations 𝑨𝒙 = 𝒚 for 𝒙 given 𝑨 and 𝒚. In practice, often there is no exact solution for 𝒙, so one seeks an approximate solution. This chapter focuses on least-squares formulations of this type of problem. It briefly reviews the 𝑨𝒙 = 𝒚 case and then motivates the more general 𝑨𝒙 ≈ 𝒚 cases. It then focuses on the over-determined case where 𝑨 is tall, emphasizing the insights offered by the SVD of 𝑨. It introduces the pseudoinverse, which is especially important for the under-determined case where 𝑨 is wide. It describes alternative approaches for the under-determined case such as Tikhonov regularization. It introduces frames, a generalization of unitary matrices. It uses the SVD analysis of this chapter to describe projection onto a subspace, completing the subspace-based classification ideas introduced in the previous chapter, and also introduces a least-squares approach to binary classifier design. It introduces recursive least-squares methods that are important for streaming data.
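A hedged Julia sketch of these least-squares ideas (random data, an illustrative regularization parameter, and standard library calls; not the book's code):

```julia
using LinearAlgebra

# Over-determined case: tall A, least-squares solution of A x ≈ y.
A = randn(30, 5)
y = randn(30)
x_ls = A \ y                          # least-squares solution (via QR)
x_pinv = pinv(A) * y                  # same solution via the pseudoinverse
@show norm(x_ls - x_pinv)             # essentially zero

# Under-determined case: wide B, Tikhonov (ridge) regularization.
B = randn(5, 30)
z = randn(5)
β = 0.1
x_tik = (B' * B + β * I) \ (B' * z)   # minimizes ‖B x - z‖² + β ‖x‖²
@show norm(B * x_tik - z)
```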
There are many applications of the low-rank signal-plus-noise model 𝒀 = 𝑿 + 𝒁 where 𝑿 is a low-rank matrix and 𝒁 is noise, such as denoising and dimensionality reduction. We are interested in the properties of the latent matrix 𝑿, such as its singular value decomposition (SVD), but all we are given is the noisy matrix 𝒀. It is important to understand how the SVD components of 𝒀 relate to those of 𝑿 in the presence of a random noise matrix 𝒁. The field of random matrix theory (RMT) provides insights into those relationships, and this chapter summarizes some key results from RMT that help explain how the noise in 𝒁 perturbs the SVD components, by analyzing limits as matrix dimensions increase. The perturbations considered include roundoff error, additive Gaussian noise, outliers, and missing data. This is the only chapter that requires familiarity with the distributions of continuous random variables, and it provides many pointers to the literature on this modern topic, along with several demos that illustrate remarkable agreement between the asymptotic predictions and the empirical performance even for modest matrix sizes.
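A small, hedged demo in the spirit of this chapter (the rank-1 signal, noise scaling, and matrix size are illustrative assumptions): it forms Y = X + Z and inspects how the noise perturbs the leading singular value and singular vector of the latent X.

```julia
using LinearAlgebra

n = 400
u = normalize(randn(n)); v = normalize(randn(n))
θ = 3.0                           # signal strength
X = θ * u * v'                    # rank-1 latent matrix
Z = randn(n, n) / sqrt(n)         # noise scaled so its singular values stay O(1)
Y = X + Z
F = svd(Y)
@show F.S[1]                      # leading singular value of Y; exceeds θ due to the noise
@show abs(dot(F.U[:, 1], u))      # singular vector alignment; less than 1 due to the noise
```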
This chapter focuses on artificial neural network models and methods. Although these methods have been studied for over 50 years, they have skyrocketed in popularity in recent years due to accelerated training methods, wider availability of large training sets, and the use of deeper networks that have significantly improved performance for many classification and regression problems. Previous chapters emphasized subspace models. Subspaces are very useful for many applications, but they cannot model all types of signals. For example, images of a single person’s face (in a given pose) under different lighting conditions lie in a subspace. However, a linear combination of face images from two different people will not look like a plausible face. Thus, all possible face images do not lie in a subspace. A manifold model is more plausible for images of faces (and handwritten digits) and other applications, and such models require more complicated algorithms. Entire books are devoted to neural network methods. This chapter introduces the key methods, focusing on the role of matrices and nonlinear operations. It illustrates the benefits of nonlinearity, and describes the classic perceptron model for neurons and the multilayer perceptron. It describes the basics of neural network training and reviews convolutional neural network models; such models are used widely in applications.
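As a minimal illustration of the role of matrices and nonlinearities described above, here is a hedged Julia sketch of a two-layer multilayer perceptron forward pass (layer sizes, random weights, and the ReLU choice are illustrative):

```julia
# Two-layer MLP: input dimension 4, hidden dimension 8, scalar output.
relu(x) = max.(x, 0)                     # elementwise nonlinearity

W1, b1 = randn(8, 4), randn(8)           # first-layer weights and bias
W2, b2 = randn(1, 8), randn(1)           # second-layer weights and bias
mlp(x) = W2 * relu(W1 * x + b1) + b2     # matrix-vector products plus nonlinearity

x = randn(4)
@show mlp(x)                             # 1-element output vector
```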
This chapter contains introductory material, including visual examples that motivate the rest of the book. It explains the book formatting, previews the notation, provides pointers for getting started with Julia, and briefly reviews fields and vector spaces.
This chapter discusses the important problem of matrix completion, where we know some, but not all, elements of a matrix and want to “complete” the matrix by filling in the missing entries. This problem is ill posed in general because one could assign arbitrary values to the missing entries, unless one assumes some model for the matrix elements. The most common model is that the matrix is low rank, an assumption that is reasonable in many applications. The chapter defines the problem and describes an alternating projection approach for noiseless data. It discusses algorithms for the practical case of missing and noisy data. It extends the methods to consider the effects of outliers with the robust principal component method, and applies this to video foreground/background separation. It describes nonnegative matrix factorization, including the case of missing data. A particularly famous application of low-rank matrix completion is the “Netflix problem”; this topic is also relevant to dynamic magnetic resonance image reconstruction, and numerous other applications with missing data (incomplete observations).
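A hedged Julia sketch of the alternating projection idea for noiseless low-rank matrix completion (matrix sizes, rank, sampling rate, and iteration count are illustrative assumptions, not the book's algorithm settings):

```julia
using LinearAlgebra

# Alternate between projecting onto the set of rank-r matrices (truncated SVD)
# and enforcing agreement with the observed entries.
function complete(Yobs, mask, r; iters = 500)
    X = copy(Yobs)                            # missing entries start at zero
    for _ in 1:iters
        F = svd(X)
        X = F.U[:, 1:r] * Diagonal(F.S[1:r]) * F.Vt[1:r, :]   # rank-r projection
        X[mask] .= Yobs[mask]                 # data-consistency projection
    end
    return X
end

# Tiny example: a rank-2 matrix with roughly 60% of its entries observed.
Xtrue = randn(30, 2) * randn(2, 20)
mask = rand(30, 20) .< 0.6
Yobs = mask .* Xtrue
Xhat = complete(Yobs, mask, 2)
@show norm(Xhat - Xtrue) / norm(Xtrue)        # typically small when enough entries are observed
```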