Climate modeling developed further at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. The laws of physics that form the foundation of weather and climate models imply strict conservation of properties like mass, momentum, and energy. A household budget analogy can be used to explain these conservation requirements, which are stricter for climate models than for weather models. In the 1980s, a mismatch in the energy transfer between the atmospheric and oceanic components of climate models was handled with a correction technique known as flux adjustment, which violated energy conservation. Subsequent improvements in climate models obviated the need for these artificial flux adjustments. We now have more complex models, known as Earth System Models, that include biological and chemical processes such as the carbon cycle. The concept of the constraining power of models is introduced.
This chapter surveys least squares and related methods for designing inverse, prediction, and interpolation filters. The name Wiener filter is associated with these methods. Correlations and autocorrelations play a central role in forming normal equations for filter coefficients. The underlying theory assumes that true correlations are available, but in practice correlations and autocorrelations are estimated from data. We will find the surprising result that a least squares inverse to a linear filter can be found from the autocorrelation of its impulse response, without knowledge of the impulse response itself. The least squares inverse filter is closely related to a filter for predicting future values of a time series. Similar ideas may be used to design interpolation filters and to develop linear filter models for time series. This chapter also reviews the correlation filters used in geophysical systems such as radar, sonar, and exploration seismology. We also discuss how correlation filtering is the essential element enabling navigation with the Global Positioning System (GPS).
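As a concrete illustration of the normal-equations idea described above, here is a minimal Python/NumPy sketch (our own illustration, not the book's code) that designs a least squares inverse filter from an autocorrelation alone; the wavelet `w`, the filter length, and the helper name `ls_inverse_from_autocorr` are hypothetical choices for the example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ls_inverse_from_autocorr(autocorr, n_coeffs):
    """Solve the normal equations R f = d, where R is the Toeplitz matrix
    built from the autocorrelation and d is a unit spike at zero lag."""
    r = np.zeros(n_coeffs)
    m = min(n_coeffs, len(autocorr))
    r[:m] = autocorr[:m]            # pad the autocorrelation to the filter length
    d = np.zeros(n_coeffs)
    d[0] = 1.0                      # desired output: a spike at time zero
    return solve_toeplitz((r, r), d)

# Example with a short hypothetical wavelet: note that only its
# autocorrelation is passed to the design routine.
w = np.array([1.0, 0.5, 0.25])
ac = np.correlate(w, w, mode="full")[len(w) - 1:]   # non-negative lags
f = ls_inverse_from_autocorr(ac, n_coeffs=8)
print(np.convolve(w, f)[:4])        # approximately [1, 0, 0, 0]
```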
A filter may be a physical system or a computational algorithm with an input and an output. If the filter is linear, then the relationship between input and output is the same regardless of the amplitude of the input. Throughout this chapter and the entire book we consider only time-invariant linear filters, that is, linear filters whose properties do not change with time. Because they are linear, such filters obey a superposition principle: when two inputs are added together, the output is the sum of the separate outputs that would result from the separate inputs. Because they are also time-invariant, a single-frequency sinusoidal input produces a sinusoidal output at exactly the same frequency and no other. As a result, linear systems and filters are preferred models for physical processes and for data processing because they allow analysis and implementation in both the frequency and time domains. The DFT is the main tool for frequency domain analysis and implementation. This chapter develops the important elements used in time domain implementation: digital filter equations and discrete convolution; the transfer function; and the impulse response. These concepts are extended to the properties of a cascade of linear filters (the successive application of several different filters to a time series) and are used to define the concept of an inverse filter. Example applications of linear filters in data processing, as models of physical processes, and in methods for finding practical inverse filters appear in subsequent chapters.
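To make the time domain concepts above concrete, here is a minimal Python/NumPy sketch (illustrative only, not from the book) demonstrating the impulse response, superposition, and the cascade property using discrete convolution; the filter coefficients `b` and `g` are arbitrary hypothetical choices.

```python
import numpy as np

b = np.array([0.25, 0.5, 0.25])        # hypothetical filter coefficients
impulse = np.zeros(8)
impulse[0] = 1.0
print(np.convolve(b, impulse)[:3])     # the impulse response reproduces b itself

# Superposition: filtering a sum equals the sum of the filtered parts.
x1, x2 = np.random.randn(16), np.random.randn(16)
lhs = np.convolve(b, x1 + x2)
rhs = np.convolve(b, x1) + np.convolve(b, x2)
print(np.allclose(lhs, rhs))           # True

# Cascade: applying two filters in succession is equivalent to one filter
# whose impulse response is the convolution of the two impulse responses.
g = np.array([1.0, -1.0])
y_cascade = np.convolve(g, np.convolve(b, x1))
y_single = np.convolve(np.convolve(g, b), x1)
print(np.allclose(y_cascade, y_single))  # True
```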
The power spectrum describes how the variance of a time series is distributed over frequency. The variance (mean squared signal value per time sample) is a broadband statistic measuring power, while the power spectrum at a given frequency is a statistic measuring the power in a narrow frequency band. Similarly, the coherence spectrum describes how the broadband correlation coefficient between two time series varies with frequency. The DFT is the main tool for estimating both the power and coherence spectra and will be our main focus, but we also compare the DFT (periodogram) results with estimates made using the prediction error filter (PEF). Using examples we show that PEF estimates tend to be smooth, but the choice of PEF order introduces some variability in these estimates. Periodogram spectrum estimates tend to be erratic but can be tamed at the expense of diminished frequency resolution. We describe standard methods of assigning confidence intervals to periodogram spectrum estimates.
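The following Python/NumPy sketch (our own illustration under stated assumptions, not the book's code) contrasts a raw periodogram computed from the DFT with a segment-averaged estimate, showing the usual trade of frequency resolution for reduced variability; the test signal and segment count are hypothetical.

```python
import numpy as np

def periodogram(x, dt=1.0):
    """Raw periodogram: squared magnitude of the DFT, scaled by dt/N."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the mean before transforming
    X = np.fft.rfft(x)
    return np.abs(X) ** 2 * dt / len(x), np.fft.rfftfreq(len(x), dt)

def averaged_periodogram(x, n_segments, dt=1.0):
    """Average periodograms of non-overlapping segments: a smoother (less
    erratic) estimate at the cost of coarser frequency resolution."""
    segs = np.array_split(np.asarray(x, dtype=float), n_segments)
    n = min(len(s) for s in segs)
    spectra = [periodogram(s[:n], dt)[0] for s in segs]
    return np.mean(spectra, axis=0), np.fft.rfftfreq(n, dt)

# Example: a sinusoid at 0.1 cycles per sample buried in white noise.
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + np.random.randn(t.size)
p_raw, f_raw = periodogram(x)
p_avg, f_avg = averaged_periodogram(x, n_segments=8)
```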
The Rumsfeld knowledge matrix – which spans the knowledge categories “known knowns,” “known unknowns,” and “unknown unknowns” – is used to illustrate the process of model improvement. Two new knowledge subcategories – “poorly known unknowns” and “well-known unknowns” – are introduced to distinguish between parameterizations of differing accuracy. A distinction is made between “downstream benefits” of parameterizations, which improve prediction skill, and “upstream benefits,” which improve understanding of the phenomenon being parameterized but not necessarily the prediction skill. Since new or improved parameterizations add to the complexity of models, it may be important to distinguish between essential and nonessential complexity. The fourth knowledge category in the Rumsfeld matrix is “unknown knowns” or willful ignorance, which can be used to describe contrarian views on climate change. Contrarians dismiss climate models for their limitations, but typically only offer alternatives born of unconstrained ideation.
The least squares method is among the most widely used data analysis and parameter estimation tools. Its development is associated with the work of Carl Friedrich Gauss, the nineteenth-century German mathematician and geophysicist, who introduced many innovations in computation, geophysics, and mathematics, most of which continue to be in wide use today. We will introduce least squares from two viewpoints: one based on probability arguments, which considers the data to be contaminated with random errors, and the other based on linear algebra, which involves the solution of simultaneous linear equations. We will employ these two viewpoints to develop the least squares approach, using, as an example, the fitting of polynomials to a time series. While this might appear to be a departure from linear filters and related topics, we show in subsequent chapters that least squares in fact serves as a powerful digital filter design tool. We will also find that a stochastic viewpoint, in which time series values are considered to be random variables, also leads to the use of least squares in the development of prediction and interpolation filters.
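As a small illustration of the linear algebra viewpoint, the Python/NumPy sketch below (not taken from the book) fits a low-order polynomial to a noisy series by writing out the normal equations explicitly, then checks the result against a library routine; the data and polynomial order are hypothetical.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)                                    # sample times
d = 2.0 + 3.0 * t - 1.5 * t**2 + 0.1 * np.random.randn(t.size)   # noisy data

order = 2
G = np.vander(t, order + 1, increasing=True)   # design matrix: columns 1, t, t^2
m = np.linalg.solve(G.T @ G, G.T @ d)          # normal equations (G^T G) m = G^T d
print(m)                                       # close to [2.0, 3.0, -1.5]

# The same fit via the library routine (polyfit returns highest power first).
print(np.polyfit(t, d, order)[::-1])
```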
Climate modeling requires supercomputers, which are simply the most powerful computers at a given time. The history of supercomputing is briefly described. As computer chips became faster each year for the same price (Moore’s Law), supercomputers also became faster. But due to technological limitations, chips are no longer becoming faster each year. More chips are therefore needed for faster supercomputers, which means that supercomputers cost more money and consume more power every year. The newest chips, known as Graphics Processing Units (GPUs), are not as efficient at running climate models, because they are optimized for other applications like machine learning (ML). Extrapolating current trends in computing, we can expect future supercomputers that run the high-resolution climate models of the future to be very expensive and power hungry.
To confront the climate crisis, we need to hedge our bets against the risk of climate change. We must be willing to spend some money now in order to reap a larger benefit that will take many decades to deliver. This is similar to the philosophical concept of Pascal’s Wager, where one bets a finite resource for a potentially infinite reward. But estimating how much we should spend now is not easy; attempts to estimate this amount – the “social cost of carbon” – produce a wide range of numbers. We may need to accept that there is going to be radical uncertainty in any numerical estimate. Nevertheless, we can use climate models to quantify the risk of climate change as best we can, while taking into account the different types of uncertainties. Often, risk assessment requires numerical probabilities, but these are not always available for uncertainties associated with climate prediction.
Geoengineering describes a range of technologies that attempt to mitigate the effects of global warming caused by increasing greenhouse gas concentrations. Some geoengineering approaches remove carbon dioxide from the atmosphere. These are not controversial, but they are currently too expensive to serve as a viable option. The most cost-effective technique, called solar radiation management, aims to reflect sunlight by continuously dumping large quantities of sulfate aerosols into the stratosphere, much as a volcanic eruption would. But geoengineering attempts to address the symptoms of the disease of global warming rather than the disease itself, which will persist as long as carbon emissions continue. Computer models of climate are essential to assess the efficacy of any geoengineering approach, because large-scale physical experimentation would be dangerous. However, the information that is most crucial for us to know – the impact geoengineering would have on regional climates – is something models have trouble predicting.
Spatial and temporal samples of physical quantities are among the most common forms of data in the geosciences and many other fields. Such data are called time series (even if they are samples along a profile in space). This chapter presents examples of geophysical time series in order to illustrate the sorts of scientific questions that may be addressed by the methods developed in the chapters that follow. These time series are used in later chapters and exercises, and numerical values can be downloaded from www.cambridge.org/9781108931007. Virtually all the examples are available on the World Wide Web.
Climate is an emergent system with many interacting processes and components. Complexity is essential to accurately model the system and make quantitative predictions. But this complexity obscures the different compensating errors inherent in climate models. The Anna Karenina principle, which assumes that these compensating errors are random, is introduced. By making predictions with models that use different formulations for small-scale processes and then averaging those predictions, we can expect the random errors to cancel out. This multimodel averaging can increase the skill of climate predictions, provided the models are sufficiently diverse. Climate models tend to borrow formulations from each other, which can lead to “herd mentality” and reduce model diversity. The need to preserve the diversity of models works against the need for replicability of results from those models. A compromise between these two conflicting goals becomes essential.
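As a purely illustrative sketch of why averaging diverse models helps, the following Python/NumPy simulation (our own toy example, not from the book) treats each model's error as independent random noise and shows that the error of the multimodel mean shrinks roughly as one over the square root of the number of models; the error magnitude and model count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0
n_models, n_trials = 10, 5000

# Each "model" predicts the truth plus its own independent random error.
model_preds = truth + rng.normal(0.0, 0.5, size=(n_trials, n_models))

single_rmse = np.sqrt(np.mean((model_preds[:, 0] - truth) ** 2))
ensemble_rmse = np.sqrt(np.mean((model_preds.mean(axis=1) - truth) ** 2))
print(single_rmse, ensemble_rmse)   # ensemble error is roughly single / sqrt(10)

# If models share formulations ("herd mentality"), their errors are correlated
# and the benefit of averaging is smaller; diversity is what makes it work.
```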