The theory of kernels offers a rich mathematical framework for the archetypical tasks of classification and regression. Its core insight is the representer theorem, which asserts that an unknown target function underlying a dataset can be represented by a finite sum of evaluations of a single function, the so-called kernel function. Together with the well-known kernel trick, which provides a practical way of incorporating such a kernel function into a machine learning method, a plethora of algorithms can be made more versatile. This chapter first introduces the mathematical foundations required for understanding the distinguished role of the kernel function and its consequences in terms of the representer theorem. Afterwards, we show how selected popular algorithms, including Gaussian processes, can be promoted to their kernel variants. In addition, several ideas on how to construct suitable kernel functions are provided, before demonstrating the power of kernel methods in the context of quantum (chemistry) problems.
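As an illustration of how the representer theorem and the kernel trick come together in practice, here is a minimal kernel ridge regression sketch in Python; the RBF kernel choice, the toy data, and all parameter values are our own assumptions rather than the chapter's:

    import numpy as np

    def rbf_kernel(X1, X2, gamma=1.0):
        # Gram matrix of the Gaussian (RBF) kernel k(x, x') = exp(-gamma * ||x - x'||^2).
        sq_dists = (np.sum(X1**2, axis=1)[:, None]
                    + np.sum(X2**2, axis=1)[None, :]
                    - 2.0 * X1 @ X2.T)
        return np.exp(-gamma * sq_dists)

    # Toy dataset: noisy samples of a 1-D target function.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

    # Representer theorem: the regularised solution is f(x) = sum_i alpha_i k(x, x_i),
    # with alpha = (K + lambda * I)^{-1} y for kernel ridge regression.
    lam = 1e-2
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    # Predictions require only kernel evaluations (the "kernel trick").
    X_test = np.linspace(-3, 3, 5)[:, None]
    y_pred = rbf_kernel(X_test, X) @ alpha
    print(y_pred)

The prediction never touches an explicit feature map: everything is expressed through the finite sum of kernel evaluations guaranteed by the representer theorem.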
In this chapter, we change our viewpoint and focus on how physics can influence machine learning research. In the first part, we review how tools of statistical physics can help to understand key concepts in machine learning such as capacity, generalization, and the dynamics of the learning process. In the second part, we explore yet another direction and try to understand how quantum mechanics and quantum technologies could be used to solve data-driven tasks. We provide an overview of the field, going from quantum machine learning algorithms that can be run on ideal quantum computers to kernel-based and variational approaches that can be run on current noisy intermediate-scale quantum devices.
In this chapter, we introduce one of the most important computational tools in linear algebra: the determinant. First, we discuss some motivational examples. Next, we present the definition and basic properties of determinants. Then we study some applications of determinants, including the determinant characterization of an invertible matrix or mapping, Cramer's rule for solving a system of nonhomogeneous equations, and a proof of the Cayley–Hamilton theorem.
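For concreteness, Cramer's rule can be stated as follows (the standard textbook form, in our notation rather than necessarily the chapter's): for an invertible n-by-n matrix A and the system Ax = b,

    \[
    x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \dots, n,
    \]

where A_i denotes A with its i-th column replaced by b; invertibility itself is characterised by det(A) being nonzero.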
Three models of a partially ionised fluid are considered by examining together three sets of (M)HD equations for the neutral, ionised, and electron components of a fluid. The first assumes low ionisation and isothermality, leading to the one-fluid, isothermal model in which all three non-ideal terms (resistance, the Hall effect, and ambipolar diffusion) appear in the induction equation. New quantities introduced include the ambipolar force density; the coupling, rate, and ambipolar coefficients; and the resistivity, all helping to determine the relative role of each non-ideal term. For resistive MHD, the Sweet–Parker model for magnetic reconnection and dynamo theory are discussed. For the Hall effect, a two-fluid, isothermal model is introduced that refines the Sweet–Parker model to give a reconnection time scale in better keeping with observations of solar flares. Finally, the section on ambipolar diffusion derives the full two-fluid, non-isothermal model applicable to a fluid with arbitrary ionisation. Here, exchange terms are introduced to account for mass, momentum, and energy transfers when neutrals ionise or ions recombine.
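For reference, a commonly quoted form of the generalised induction equation containing all three non-ideal terms is (our notation, with diffusivities for the Ohmic, Hall, and ambipolar terms; sign conventions vary in the literature):

    \[
    \frac{\partial \mathbf{B}}{\partial t}
      = \nabla \times (\mathbf{v} \times \mathbf{B})
      - \nabla \times \Big[ \eta_{\mathrm{O}}\, (\nabla \times \mathbf{B})
      + \eta_{\mathrm{H}}\, (\nabla \times \mathbf{B}) \times \hat{\mathbf{B}}
      - \eta_{\mathrm{A}}\, \big( (\nabla \times \mathbf{B}) \times \hat{\mathbf{B}} \big) \times \hat{\mathbf{B}} \Big],
    \]

where \hat{\mathbf{B}} = \mathbf{B}/|\mathbf{B}|. Setting the Hall and ambipolar terms to zero recovers resistive MHD.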
This chapter looks at four important fluid instabilities: the Kelvin–Helmholtz (KHI), Rayleigh–Taylor (RTI), magneto-rotational (MRI), and Parker instabilities. Normal mode analysis of the linearised equations is taught using each instability as an exemplar. All are examined from the linear regime, in which conditions for instability and rates of growth of the fastest mode are developed from first principles. For the KHI, RTI, and MRI, numerical simulations are presented which recover the results of linear analysis from the early stages of a non-linear calculation. For the KHI and RTI, numerical simulations well into the non-linear regime are presented, where the onset of fluid turbulence is noted. For the MRI, a section describing how it solved the angular momentum transport problem for accretion discs is included. For the Parker instability, an account is given of how this purely astrophysical phenomenon explains the clumpy structure of the interstellar medium.
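As a representative outcome of such a normal mode analysis (a standard textbook result, quoted here in our notation rather than necessarily the chapter's), the incompressible Rayleigh–Taylor instability of a heavy fluid of density \rho_2 resting on a lighter one of density \rho_1 grows as e^{\sigma t} with

    \[
    \sigma = \sqrt{g k \mathcal{A}}, \qquad
    \mathcal{A} = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1},
    \]

where k is the perturbation wavenumber, g the gravitational acceleration, and \mathcal{A} the Atwood number; instability requires \rho_2 > \rho_1.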
After some historical perspective on the subject, the introduction attempts to define, distinguish, and link in the broadest terms the various areas of physics related to fluid dynamics. These include fluid mechanics, hydrodynamics, gas dynamics, magnetohydrodynamics, and plasma physics. In particular, the link between ordinary hydrodynamics and magnetohydrodynamics is made, and the approach this text takes in teaching both, namely wave mechanics, is revealed.
In this chapter, we introduce the field of reinforcement learning and some of its most prominent applications in quantum physics and computing. First, we provide an intuitive description of the main concepts, which we then formalize mathematically. We introduce some of the most widely used reinforcement learning algorithms, starting with temporal-difference algorithms and Q-learning, followed by policy gradient methods and REINFORCE, and the interplay of both approaches in actor-critic algorithms. Furthermore, we introduce the projective simulation algorithm, which deviates from the aforementioned prototypical approaches and has multiple applications in the field of physics. Then, we showcase some prominent reinforcement learning applications, featuring examples in games; quantum feedback control; quantum computing, error correction, and information; and the design of quantum experiments. Finally, we discuss potential applications and limitations of reinforcement learning in the field of quantum physics.
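To make the temporal-difference idea concrete, here is a minimal tabular Q-learning sketch in Python; the toy chain environment, hyperparameters, and helper names are our own illustration, not the chapter's code:

    import numpy as np

    # Toy environment: a 1-D chain of 5 states; action 0 steps left, action 1 steps right.
    # Reaching the rightmost state yields reward 1 and ends the episode.
    N_STATES, N_ACTIONS = 5, 2

    def step(state, action):
        next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        done = next_state == N_STATES - 1
        return next_state, (1.0 if done else 0.0), done

    rng = np.random.default_rng(0)
    Q = np.zeros((N_STATES, N_ACTIONS))
    alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

    for episode in range(500):
        state = 0
        for _ in range(1000):  # cap episode length
            # Epsilon-greedy action selection.
            action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(np.argmax(Q[state]))
            next_state, reward, done = step(state, action)
            # Temporal-difference (Q-learning) update:
            # Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
            Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state
            if done:
                break

    print(Q)  # After training, greedy actions point right (action 1) along the chain.

The update rule bootstraps each state-action value from the current estimate of the next state's best value, which is the defining feature of temporal-difference methods.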
This chapter returns to the zero-field limit of MHD, replacing the isotropic pressure force density in ideal HD with force densities arising from the viscous stress tensor for viscid HD. As tensor analysis is not a prerequisite for this course, the stress tensor is developed purely from a vector analysis of all stresses applied at a single point in a viscid fluid. This leads to the introduction of bulk and kinetic viscosity in a Newtonian fluid and the identification of ordinary thermal pressure with the trace of the stress tensor. Various flavours of the Navier–Stokes equation are developed, including compressible and incompressible forms. The Reynolds number is introduced as a result of scaling the Navier–Stokes equation, which leads to a qualitative discussion of turbulent and laminar flow. Numerous examples are given in which a simplified form of the Navier–Stokes equation can be solved analytically, including plane-parallel flow, open channel flow, Hagen–Poiseuille flow, and Couette flow.
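As an example of such an analytical solution (the standard textbook result, stated in our notation), steady Hagen–Poiseuille flow through a pipe of radius R and length L driven by a pressure drop \Delta P has the parabolic velocity profile

    \[
    u(r) = \frac{\Delta P}{4 \mu L} \left( R^2 - r^2 \right),
    \]

where \mu is the dynamic viscosity, giving the volumetric flow rate Q = \pi R^4 \Delta P / (8 \mu L).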
This chapter begins with a formal definition of a fluid (what it means to be a continuum rather than an ensemble of particles), followed by a review of the kinetic theory of gases, where the connections between pressure and particle momentum and between specific energy (temperature) and average particle kinetic energy are made. A distinction is made between extensive and intensive variables, from which the Theorem of Hydrodynamics is postulated and proven. From this theorem, the basic equations of ideal hydrodynamics (the zero-field limit of MHD) are derived, including the continuity, total energy, and momentum equations. Alternate equations of HD, such as the internal energy, pressure, and Euler's equations, are also introduced. The equations of HD are then assembled into two sets, conservative and primitive, with the distinction between the two explained.
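In conservative form, these equations can be written as (a standard statement in our notation, with source terms such as gravity omitted):

    \[
    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad
    \frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v}\mathbf{v} + p\,\mathsf{I}) = 0, \qquad
    \frac{\partial e}{\partial t} + \nabla \cdot \big[ (e + p)\,\mathbf{v} \big] = 0,
    \]

where e = p/(\gamma - 1) + \rho v^2/2 is the total energy density of an ideal gas; the primitive set instead evolves the density, velocity, and pressure directly.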
This chapter discusses more specialized examples of how machine learning can be used to solve problems in quantum sciences. We start by explaining the concept of differentiable programming and its use cases in quantum sciences. Next, we describe deep generative models, which have proven to be an extremely appealing tool for sampling from unknown target distributions in domains ranging from high-energy physics to quantum chemistry. Finally, we describe selected machine learning applications for experimental setups such as ultracold systems or quantum dots. In particular, we show how machine learning can help in tedious and repetitive experimental tasks in quantum devices or in validating quantum simulators with Hamiltonian learning.
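To show the core mechanics behind differentiable programming, here is a generic sketch of forward-mode automatic differentiation via dual numbers in Python; the class and helper names are our own illustration, and practical work in this area would typically rely on a library such as JAX:

    import math

    class Dual:
        """Dual number a + b*eps with eps**2 = 0; the dual part b carries the derivative."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (a + a'eps)(b + b'eps) = ab + (ab' + a'b)eps
            return Dual(self.val * other.val,
                        self.val * other.der + self.der * other.val)

        __rmul__ = __mul__

    def sin(x):
        # Chain rule for sin: d/dx sin(u) = cos(u) * u'
        return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

    def grad(f):
        # Differentiate f by seeding the dual part with 1 and reading it off the output.
        return lambda x: f(Dual(x, 1.0)).der

    # Any composition of supported operations is differentiated exactly, not numerically.
    f = lambda x: sin(x) * x + 2.0 * x
    print(grad(f)(0.5))  # f'(x) = x*cos(x) + sin(x) + 2

Because the whole program is built from differentiable primitives, gradients of arbitrary compositions come for free, which is what makes differentiable programming attractive for, e.g., variational quantum simulations.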
The content of this chapter may serve as yet another supplemental topic to meet needs and interests beyond those of a usual course curriculum. Here we shall present an oversimplified, but hopefully totally transparent, description of some of the fundamental ideas and concepts of quantum mechanics, using a pure linear algebra formalism.