Many of the most interesting things in fluid mechanics occur because simple flows are unstable. If they get knocked a little bit, the fluid curls up into interesting shapes, or dissolves into some messy turbulent flow. In this chapter, we start to understand how these processes can happen.
Any education in theoretical physics begins with the laws of classical mechanics. The basics of the subject were laid down long ago by Galileo and Newton and are enshrined in the famous equation that we all learn in school. But there is much more to the subject and, in the intervening centuries, the laws of classical mechanics were reformulated to emphasise deeper concepts such as energy, symmetry, and action. This textbook describes these different approaches to classical mechanics, starting with Newton’s laws before turning to subsequent developments such as the Lagrangian and Hamiltonian approaches. The book emphasises Noether’s profound insights into symmetries and conservation laws, as well as Einstein’s vision of spacetime, encapsulated in the theory of special relativity. Classical mechanics is not the last word on theoretical physics. But it is the foundation for all that follows. The purpose of this book is to provide this foundation.
Much of classical mechanics treats particles as infinitesimally small. But most of our world is not like this. Planets and cats and tennis balls are not infinitesimally small, but have an extended size and this can be important for many applications. The purpose of this chapter is to understand how to describe the complicated motion of extended objects as they tumble and turn.
Jane Dewey (1900−1976) was the only woman in a group that John Slater described as the lucky generation of US physicists: those born near the beginning of the twentieth century who spent time in Europe, learning with the leading quantum physicists of the era. After completing a PhD at the Massachusetts Institute of Technology in 1925, Dewey went to Niels Bohr’s Institute for Theoretical Physics in Copenhagen. She worked on the Stark effect in helium, a key test of the recently formulated quantum mechanics. Bohr praised her skills in a fellowship application, and Karl Compton later supported her (unsuccessful) efforts to land a permanent job. Although Dewey did pioneering work in the field of quantum optics, the conditions she encountered made it difficult for her to continue on this research path. Her promising abilities did not translate into a successful academic career as they did for many of the men of the lucky generation. Perhaps she was not lucky enough. Or was luck conditional on being a man? This chapter argues that subtle, yet structural, gender-discriminatory practices contributed to her gradual exclusion from physics research, and ultimately from academia.
The purpose of this chapter is to understand how quantum particles react to magnetic fields. There are a number of reasons to be interested in this. First, quantum particles do extraordinary things when subjected to magnetic fields, including forming exotic states of matter known as quantum Hall fluids. But, in addition, magnetic fields bring a number of new conceptual ideas to the table. Among other things, this is where we first start to see the richness that comes from combining quantum mechanics with the gauge fields of electromagnetism.
For many systems, the full information of an underlying Markovian description is not accessible due to limited spatial or temporal resolution. We first show that such an often inevitable coarse-graining implies that, rather than the full entropy production, only a lower bound can be retrieved from coarse-grained data. As a technical tool, it is derived that the Kullback–Leibler divergence decreases under coarse-graining. For a discrete time-series obtained from an underlying time-continuous Markov dynamics, it is shown how the analysis of n-tuples leads to a better estimate with increasing length of the tuples. Finally, state-lumping as one strategy for coarse-graining an underlying Markov model is shown explicitly to yield a lower bound for the entropy production. However, in general, it does not yield a consistent interpretation of the first law along coarse-grained trajectories, as exemplified with a simple model.
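The key technical fact, that the Kullback–Leibler divergence can only decrease when states are lumped together, is easy to check numerically. The following sketch (an illustrative example, not taken from the chapter) compares the divergence between two distributions over four microstates before and after lumping them into two mesostates:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def lump(p, groups):
    """Coarse-grain a distribution by summing probabilities within each group of states."""
    return np.array([sum(p[i] for i in g) for g in groups])

# Two distributions over four microstates
p = np.array([0.4, 0.1, 0.3, 0.2])
q = np.array([0.1, 0.2, 0.3, 0.4])

# Lump states {0,1} and {2,3} into two mesostates
groups = [(0, 1), (2, 3)]
p_cg, q_cg = lump(p, groups), lump(q, groups)

print(kl(p, q))        # full divergence, ~0.347
print(kl(p_cg, q_cg))  # coarse-grained divergence, ~0.087: strictly smaller
```

Since entropy production estimators are built from such divergences, this monotonicity is exactly why coarse-grained data can only ever yield a lower bound.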
The difference between quantum and classical mechanics does not involve just a small tweak. Instead, it is a root-and-branch overhaul of the entire framework. In this chapter we introduce the key concept that underlies this new framework: the quantum state, as manifested in the wavefunction.
Space and time are not what they seem. Their true nature only becomes clear as particles reach speeds close to the speed of light, where some of our common-sense ideas start to break down. Indeed, one of the major themes of twentieth-century physics is that common sense is not a good guide when we look closely at the universe. In this chapter, we start to understand the true nature of space and time, as encapsulated in Einstein’s theory of special relativity. We will see many wonderful things, from time slowing down to lengths shrinking. There will be stories of twins and trains and elementary particles failing to die.
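The “particles failing to die” refers to time dilation, governed by the Lorentz factor γ = 1/√(1 − v²/c²). A minimal numerical sketch (illustrative numbers, not from the chapter) shows why fast-moving muons, whose at-rest lifetime is only about 2.2 microseconds, survive long enough to reach the ground:

```python
import math

def gamma(v, c=1.0):
    """Lorentz factor for speed v (in the same units as c)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A muon's lifetime at rest is ~2.2 microseconds. At v = 0.999c, time
# dilation stretches this in the lab frame by a factor gamma ~ 22,
# letting muons created in the upper atmosphere reach the ground.
tau_rest = 2.2e-6            # seconds, at-rest muon lifetime
v = 0.999                    # speed as a fraction of c

print(gamma(v))              # ~22.4
print(gamma(v) * tau_rest)   # dilated lifetime in the lab frame
```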
This study proposes a machine-learning-based subgrid scale (SGS) model for very coarse-grid large-eddy simulations (vLES). An issue with SGS modelling for vLES is that, because the energy-containing eddies are not accurately resolved by the computational grid, the resolved turbulence deviates from the physically accurate turbulence. This limits the use of supervised machine-learning models commonly trained using pairs of direct numerical simulation (DNS) and filtered DNS data. The proposed methodology utilises both unsupervised learning (cycle-consistency generative adversarial network (GAN)) and supervised learning (conditional GAN) to construct a machine-learning pipeline. The unsupervised learning part of the proposed method first transforms the non-physical vLES flow field to resemble a physically accurate flow field. The second supervised learning part employs super-resolution of turbulence to predict the SGS stresses. The proposed pipeline is trained using a fully developed turbulent channel at the friction Reynolds number of approximately 1000. The a priori validation shows that the proposed unsupervised–supervised pipeline successfully learns to predict the accurate SGS stresses, while a typical supervised-only model shows significant discrepancies. In the a posteriori test, the proposed unsupervised–supervised-pipeline SGS model for vLES using a progressively coarse grid yields good agreement of the mean velocity and Reynolds shear stress with the reference data at both the trained Reynolds number 1000 and the untrained higher Reynolds number 2000, showing robustness against varying Reynolds numbers. A budget analysis of the Reynolds stresses reveals that the proposed unsupervised–supervised-pipeline SGS model predicts a significant amount of SGS backscatter, which results in the strengthened near-wall Reynolds shear stress and the accurate prediction of mean velocity.
Diffusion plays crucial roles in cells and tissues, and the purpose of this chapter is to theoretically examine it. First, we describe the diffusion equation and confirm that its solution becomes a Gaussian distribution. Then, we discuss concentration gradients under fixed boundary conditions and the three-color flag problem to address positional information in multicellular organism morphogenesis. We introduce the possibility of pattern formation by feed-forward loops, which can transform one gradient into another or convert a chemical gradient into a stripe pattern. Next, we introduce Turing patterns as self-organizing pattern formation, outlining the conditions for Turing instability through linear stability analysis and demonstrating the existence of characteristic length scales for Turing patterns. We provide specific examples in one-dimensional and two-dimensional systems. Additionally, we present instances of traveling waves, such as the cable equation, Fisher equation, FitzHugh–Nagumo equation, and examples of their generation from limit cycles. Finally, we introduce the transformation of temporal oscillations into spatial patterns, exemplified by models like the clock-and-wavefront model.
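The first claim above, that the diffusion equation spreads an initial spike into a Gaussian, can be verified directly: the Gaussian solution has variance 2Dt. The following sketch (a minimal finite-difference check, not code from the chapter) integrates ∂u/∂t = D ∂²u/∂x² from a narrow spike and compares the measured variance to 2Dt:

```python
import numpy as np

# Explicit finite-difference solution of the 1-D diffusion equation,
# starting from a discrete delta function, checking variance = 2*D*t.
D = 1.0
L_box, N = 40.0, 801
x = np.linspace(-L_box / 2, L_box / 2, N)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D          # satisfies the stability bound dt <= dx^2 / (2D)

u = np.zeros(N)
u[N // 2] = 1.0 / dx          # discrete delta function with unit total mass

t, t_end = 0.0, 2.0
while t < t_end:
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt

variance = np.sum(u * x**2) / np.sum(u)
print(variance, 2 * D * t)    # the two numbers should agree closely
```

The same variance-versus-time check is a useful sanity test for any numerical diffusion solver before moving on to reaction–diffusion systems such as Turing patterns.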
Physicists have a dirty secret: we’re not very good at solving equations. More precisely, humans aren’t very good at solving equations. We know this because we have computers and they’re much better at solving things than we are. This means that we must develop a toolbox of methods so that, when confronted by a problem, we have some options on how to go about understanding what’s going on. The purpose of this chapter is to develop this toolbox in the guise of various approximation schemes.
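The flavour of such approximation schemes can be shown with a toy perturbative expansion (an illustrative example, not one from the chapter): take the positive root of x² + εx − 1 = 0, write x = x₀ + εx₁ + ε²x₂ + …, and match powers of ε to get x ≈ 1 − ε/2 + ε²/8. For small ε this cheap series tracks the exact answer remarkably well:

```python
import math

def exact_root(eps):
    """Exact positive root of x**2 + eps*x - 1 = 0 (quadratic formula)."""
    return (-eps + math.sqrt(eps**2 + 4)) / 2

def perturbative_root(eps):
    """Perturbation series to second order in eps: x = 1 - eps/2 + eps**2/8."""
    return 1 - eps / 2 + eps**2 / 8

eps = 0.1
print(exact_root(eps))         # 0.951249...
print(perturbative_root(eps))  # 0.95125
```

The same logic, expanding in a small parameter and matching order by order, is what powers perturbation theory when the equations cannot be solved exactly.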
The Maxwell demon and the Szilard engine demonstrate that work can be extracted from a heat bath through measurement and feedback in apparent violation of the second law. A systematic analysis shows that, by including the measurement process and the subsequent erasure of a memory according to Landauer’s principle, the second law is indeed restored. For such feedback-driven processes, the Sagawa–Ueda relation provides a generalization of the Jarzynski relation. For the general class of bipartite systems, the concepts from stochastic thermodynamics are developed. This framework applies to systems where one component “learns” about the changing state of the other one, as in simple models for bacterial sensing. The chapter closes with a simple information machine that shows how the ordered sequence of bits in a tape can be used to transform heat into mechanical work. Likewise, mechanical work can be used to erase information, i.e., randomize such a tape. These processes are shown to obey a second law of information processing.
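The quantitative content of Landauer’s principle is a single number: erasing one bit dissipates at least k_B T ln 2 of heat, which is also the maximum work a Szilard engine can extract per measured bit. A one-line sketch of the bound at room temperature (illustrative, not from the chapter):

```python
import math

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) of heat; equivalently, a Szilard engine extracts at most
# k_B * T * ln(2) of work per measured bit.
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0            # room temperature, K

landauer_bound = k_B * T * math.log(2)
print(landauer_bound)                    # ~2.87e-21 J per bit
print(landauer_bound / 1.602176634e-19)  # the same bound in eV
```

This tiny energy scale is why the second law of information processing went unnoticed for so long, and why the tape-driven information machine at the end of the chapter can trade ordered bits for heat-derived work at exactly this exchange rate.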