While current abstract algebra does indeed deserve the adjective abstract, it has both concrete historical roots and modern-day applications. Central to undergraduate abstract algebra is the notion of a group, which is the algebraic interpretation of the geometric idea of symmetry. We can see something of the richness of groups in that three distinct areas gave birth to the correct notion of an abstract group: attempts to find (more accurately, attempts to prove the impossibility of finding) roots of polynomials, the study by chemists of the symmetries of crystals, and the application of symmetry principles to solve differential equations.
The inability to generalize the quadratic formula to polynomials of degree five or higher is at the heart of Galois Theory and involves understanding the symmetries of the roots of a polynomial. Symmetries of crystals involve properties of rotations in space. The use of group theory to understand the symmetries underlying a differential equation leads to Lie Theory. In all of these, the idea of a group and its applications are critical.
Math is Exciting. We are living in the greatest age of mathematics ever seen. In the 1930s, some people feared that the rising abstraction of the early twentieth century would lead either to mathematicians working on sterile, silly intellectual exercises or to mathematics splitting into sharply distinct subdisciplines, similar to the way natural philosophy split into physics, chemistry, biology and geology. But the very opposite has happened. Since World War II, it has become increasingly clear that mathematics is one unified discipline. What were once separate areas now feed off each other. Learning and creating mathematics is indeed a worthwhile way to spend one's life.
Math is Hard. Unfortunately, people are just not that good at mathematics. While it is intensely enjoyable, it also requires hard work and self-discipline. I know of no serious mathematician who finds math easy. In fact, most, after a few beers, will confess how stupid and slow they are. This is one of the personal hurdles that a beginning graduate student must face: how to deal with the profundity of mathematics in stark comparison to our own shallow understanding of it. This is in part why the attrition rate in graduate school is so high. At the best schools, with the most successful retention rates, usually only about half of the people who start eventually get their PhDs. Even schools in the top twenty have at times had eighty percent of their incoming graduate students fail to finish.
In the last chapter we saw various theorems, all of which related the values of a function on the boundary of a geometric object to the values of the function's derivative on the interior. The goal of this chapter is to show that there is a single theorem (Stokes' Theorem) underlying all of these results. Unfortunately, a lot of machinery is needed before we can even state this grand underlying theorem. Since we are talking about integrals and derivatives, we have to develop the techniques that will allow us to integrate on k-dimensional spaces. This will lead to differential forms, which are the objects on manifolds that can be integrated. The exterior derivative is the technique for differentiating these forms. Since integration is involved, we will have to talk about calculating volumes. This is done in section one. Section two defines differential forms. Section three links differential forms with the vector fields, gradients, curls and divergences from the last chapter. Section four gives the definition of a manifold (actually, three different methods for defining manifolds are given). Section five concentrates on what it means for a manifold to be orientable. In section six, we define how to integrate a differential form along a manifold, allowing us finally in section seven to state and sketch a proof of Stokes' Theorem.
Basic Goals: Cleverly Counting Large Finite Sets; The Central Limit Theorem
Beginning probability theory is basically the study of how to count large finite sets, or in other words, an application of combinatorics. Thus the first section of this chapter deals with basic combinatorics. The next three sections deal with the basics of probability theory. Unfortunately, counting will only take us so far in probability. If we want to see what happens as we, for example, play a game over and over again, methods of calculus become important. We concentrate on the Central Limit Theorem, which is where the famed Gaussian bell curve appears. The proof of the Central Limit Theorem is full of clever estimates and algebraic tricks. We include this proof not only because of the importance of the Central Limit Theorem but also to show that these types of estimates and tricks are sometimes needed in mathematics.
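As a minimal computational sketch of what the Central Limit Theorem predicts (not part of the text; the game, bin width and sample sizes below are arbitrary choices for illustration), the following Python snippet plays a fair coin-flip game many times and tallies the standardized total winnings. The counts pile up into the familiar bell shape.

```python
import random

# Toy illustration of the Central Limit Theorem: play a simple game
# (a fair coin flip paying +1 or -1) over and over, and look at how the
# standardized total winnings are distributed across many repetitions.
random.seed(0)

def standardized_total(num_flips):
    total = sum(random.choice((-1, 1)) for _ in range(num_flips))
    # The total has mean 0 and variance num_flips, so rescale by sqrt(num_flips).
    return total / num_flips ** 0.5

samples = [standardized_total(400) for _ in range(10000)]

# Crude text histogram on [-3, 3] with bins of width 0.5; the bar lengths
# should trace out the bell shape of the Gaussian.
bins = [0] * 12
for s in samples:
    bins[min(max(int((s + 3) / 0.5), 0), 11)] += 1
for i, count in enumerate(bins):
    left = -3 + 0.5 * i
    print(f"[{left:+.1f}, {left + 0.5:+.1f})  {'#' * (count // 100)}")
```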
Counting
There are many ways to count. The most naive method, the one we learn as children, is simply to count the elements of a set explicitly, and this method is indeed the best one for small sets. Unfortunately, many sets are just too large for anyone merely to count the elements. Certainly a large part of the fascination in card games such as poker and bridge is that while there are only a finite number of possible hands, the actual number is far too large for anyone to deal with directly, forcing players to develop strategies and various heuristic devices.
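For a sense of scale, here is a short Python computation (an illustration, not from the text) of the number of possible poker and bridge hands; both are binomial coefficients, and neither is anything one would want to enumerate by hand.

```python
from math import comb

# Number of distinct 5-card poker hands from a standard 52-card deck:
# "52 choose 5", far too many to list explicitly.
print(comb(52, 5))     # 2598960

# 13-card bridge hands are vastly more numerous still.
print(comb(52, 13))    # 635013559600
```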
Throughout this text we have used equivalence relations. Here we collect some of the basic facts about equivalence relations. In essence, an equivalence relation is a generalization of equality.
Definition A.0.1 (Equivalence Relation) An equivalence relation on a set X is any relation ‘x ∼ y’ for x, y ∈ X such that
1. (Reflexivity) For any x ∈ X, we have x ∼ x.
2. (Symmetry) For all x, y ∈ X, if x ∼ y then y ∼ x.
3. (Transitivity) For all x, y, z ∈ X, if x ∼ y and y ∼ z, then x ∼ z.
The basic example is that of equality. Another example would be when X = R and we say that x ∼ y if x − y is an integer. On the other hand, the relation x ∼ y if x ≤ y is not an equivalence relation, as it is not symmetric.
We can also define equivalence relations in terms of subsets of the ordered pairs X × X, as follows:
Definition A.0.2 (Equivalence Relation) An equivalence relation on a set X is a subset R ⊂ X × X such that
1. (Reflexivity) For any x ∈ X, we have (x, x) ∈ R.
2. (Symmetry) For all x, y ∈ X, if (x, y) ∈ R then (y, x) ∈ R.
3. (Transitivity) For all x, y, z ∈ X, if (x, y) ∈ R and (y, z) ∈ R, then (x, z) ∈ R.
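For a concrete finite illustration of this second definition (the set X and the two relations below are arbitrary choices, not examples from the text), the following Python sketch represents a relation on a small set as a set of ordered pairs and checks the three axioms directly.

```python
from itertools import product

# Finite check of Definition A.0.2: a relation on X is a set R of ordered
# pairs drawn from X x X, and we test reflexivity, symmetry and transitivity.
def is_equivalence(X, R):
    reflexive = all((x, x) in R for x in X)
    symmetric = all((y, x) in R for (x, y) in R)
    transitive = all((x, z) in R
                     for (x, y) in R for (w, z) in R if y == w)
    return reflexive and symmetric and transitive

X = {0, 1, 2, 3}

# x ~ y when x - y is even: this is an equivalence relation.
R = {(x, y) for x, y in product(X, X) if (x - y) % 2 == 0}
print(is_equivalence(X, R))   # True

# x <= y fails symmetry, so it is not an equivalence relation.
L = {(x, y) for x, y in product(X, X) if x <= y}
print(is_equivalence(X, L))   # False
```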
Basic Maps: Continuous and Differentiable Functions
Basic Goal: The Fundamental Theorem of Calculus
While the basic intuitions behind differentiation and integration were known by the late 1600s, allowing for a wealth of physical and mathematical applications to develop during the 1700s, it was only in the 1800s that sharp, rigorous definitions were finally given. The key concept is that of a limit, from which follow the definitions for differentiation and integration and rigorous proofs of their basic properties. Far from a mere exercise in pedantry, this rigorization actually allowed mathematicians to discover new phenomena. For example, Karl Weierstrass discovered a function that was continuous everywhere but differentiable nowhere. In other words, there is a function with no breaks but with sharp edges at every point. Key to his proof is the need for limits to be applied to sequences of functions, leading to the idea of uniform convergence.
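As a small illustrative sketch (this is not Weierstrass' original argument, and the particular constants are chosen only for illustration), the following Python code evaluates partial sums of a classical Weierstrass-type series W(x) = Σ aⁿ cos(bⁿπx) with a = 1/2 and b = 13, which satisfy the standard conditions 0 < a < 1, b an odd integer, ab > 1 + 3π/2. The geometric bound aⁿ on each term is what gives uniform convergence of the partial sums, the idea alluded to above.

```python
import math

# Partial sums of a Weierstrass-type series W(x) = sum a**n * cos(b**n * pi * x).
# Each term is bounded by a**n, so the series converges uniformly (M-test),
# yet for suitable a, b the limit function is nowhere differentiable.
A, B = 0.5, 13

def partial_sum(x, terms):
    return sum(A ** n * math.cos(B ** n * math.pi * x) for n in range(terms))

# The uniform tail bound A**N / (1 - A) shrinks geometrically, independent of x.
for N in (5, 10, 15):
    tail_bound = A ** N / (1 - A)
    print(N, partial_sum(0.3, N), tail_bound)
```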
We will define limits and then use this definition to develop the ideas of continuity, differentiation and integration of functions. Then we will show how differentiation and integration are intimately connected in the Fundamental Theorem of Calculus. Finally we will finish with uniform convergence of functions and Weierstrass' example.
Linear algebra studies linear transformations and vector spaces, or in another language, matrix multiplication and the vector space Rⁿ. You should know how to translate between the language of abstract vector spaces and the language of matrices. In particular, given a basis for a vector space, you should know how to represent any linear transformation as a matrix. Further, given two matrices, you should know how to determine if these matrices actually represent the same linear transformation, but under different choices of bases. The key theorem of linear algebra is a statement that gives many equivalent descriptions for when a matrix is invertible. These equivalences should be known cold. You should also know why eigenvectors and eigenvalues occur naturally in linear algebra.
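A minimal numerical sketch of these ideas (the transformation below is an arbitrary example, not one from the text): represent a linear map of R² by the matrix whose columns are the images of the standard basis vectors, test invertibility via the determinant, and compute eigenvalues and eigenvectors.

```python
import numpy as np

# T(x, y) = (2x + y, x + y) on R^2; in the standard basis its matrix has
# columns T(e1) and T(e2).
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])

print(np.linalg.det(T))   # 1.0, non-zero, so T is invertible
print(np.linalg.inv(T))   # the matrix of the inverse transformation

# Eigenvalues and eigenvectors: the directions that T merely rescales.
eigenvalues, eigenvectors = np.linalg.eig(T)
print(eigenvalues)
print(eigenvectors)
```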
Real Analysis
The basic definitions of a limit, continuity, differentiation and integration should be known and understood in terms of ε's and δ's. Using this ε and δ language, you should be comfortable with the idea of uniform convergence of functions.
Differentiating Vector-Valued Functions
The goal of the Inverse Function Theorem is to show that a differentiable function f : Rⁿ → Rⁿ is locally invertible if and only if the determinant of its derivative (the Jacobian) is non-zero. You should be comfortable with what it means for a vector-valued function to be differentiable, why its derivative must be a linear map (and hence representable as a matrix, the Jacobian) and how to compute the Jacobian.
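The following Python sketch works this out for a hypothetical map f(x, y) = (x² − y, x + y), chosen purely for illustration: it writes down the Jacobian matrix of partial derivatives and checks that its determinant is non-zero at a point, the criterion for local invertibility there.

```python
import numpy as np

# f(x, y) = (x**2 - y, x + y)
def f(v):
    x, y = v
    return np.array([x**2 - y, x + y])

def jacobian(v):
    x, y = v
    # Rows are the gradients of the component functions: d(f_i)/d(x_j).
    return np.array([[2 * x, -1.0],
                     [1.0,    1.0]])

p = np.array([1.0, 2.0])
J = jacobian(p)
print(J)
print(np.linalg.det(J))   # 2*1 - (-1)*1 = 3, non-zero, so f is locally invertible at p
```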
Basic Goal: Computing the Efficiency of Algorithms
The end of the 1800s and the beginning of the 1900s saw intense debate about the meaning of existence for mathematical objects. To some, a mathematical object could only have meaning if there was a method to compute it. For others, any definition that did not lead to a contradiction was good enough to guarantee existence (and this is the path that mathematicians have overwhelmingly chosen to take). Think back to the section on the Axiom of Choice in Chapter Ten, where objects were claimed to exist that are impossible to actually construct. In many ways these debates had quieted down by the 1930s, in part due to Gödel's work, but also in part due to the nature of the algorithms that were eventually being produced. By the late 1800s, the objects supposedly being constructed by algorithms were so cumbersome and time-consuming that no human could ever compute them by hand. To most people, the pragmatic difference between an existence argument and a computation that would take a human the lifetime of the universe was too small to care about, especially if the existence proof had a clean feel.
Systematic error is just a euphemism for experimental mistake. Such mistakes are broadly due to three causes:
(a) inaccurate instruments,
(b) apparatus that differs from some assumed form,
(c) incorrect theory, that is, the presence of effects not taken into account.
We have seen the remedy for the first – calibrate. There is no blanket remedy for the other two. The more physics you know, the more experience you have had, the more likely you are to spot the effects and hence be able to eliminate them. However, there are ways of making measurements, of following certain sequences of measurements, which automatically reveal – and sometimes eliminate – certain types of error. Such procedures form the subject of the present chapter. Some are specific, others are more general and add up to an attitude of mind.
Finding and eliminating a systematic error may sound a negative, albeit desirable, objective. But there is more to it than that. The systematic error that is revealed may be due to a phenomenon previously unknown. It is then promoted from an ‘error’ to an ‘effect’. In other words, by careful measurement we may make discoveries and increase our understanding of the physical world.
Apparent symmetry in apparatus
It is a good rule that whenever there is an apparent symmetry in the apparatus, so that reversing some quantity or interchanging two components should have no effect (or a predictable effect – see the second example), you should go ahead and make the change.
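The following Python sketch (a toy simulation with made-up numbers, not an example from the text) shows the arithmetic behind one common case of this rule: a meter with an unknown zero offset. Reversing the connections flips the sign of the signal but not of the offset, so half the difference of the two readings gives the signal and half the sum reveals the offset.

```python
import random

# Toy reversal measurement: a meter with an unknown zero offset reads a
# signal V.  Reversing the connections reverses V but not the offset, so
# the two readings separate the two quantities.  (Hypothetical values.)
random.seed(1)
TRUE_V, OFFSET, NOISE = 1.500, 0.030, 0.002

def reading(sign):
    return sign * TRUE_V + OFFSET + random.gauss(0, NOISE)

forward = reading(+1)
reverse = reading(-1)

signal = (forward - reverse) / 2   # the offset cancels
offset = (forward + reverse) / 2   # the signal cancels, revealing the 'error'
print(f"estimated signal = {signal:.4f}, estimated offset = {offset:.4f}")
```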