In this chapter we describe the motivation and requirements for converting analog signals to digital time series so that they can be analyzed using digital computers. The process of digitizing analog geophysical signals, for example the continuous voltages from seismometers, must ensure that all information about signal amplitudes (the dynamic range) and the range of frequencies (the frequency bandwidth) is retained. Important elements of analog-to-digital conversion include how often samples are taken (the sampling rate) and how to represent the samples as numbers in a computer. The chapter also presents the standard statistics used to describe time series and introduces the logarithmic decibel scale as a convenient way to compare the relative magnitudes of signals.
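As a rough, hedged illustration of these ideas (not code from the chapter), the sketch below samples a synthetic sinusoid at a fixed rate, computes standard descriptive statistics of the resulting time series, and expresses an amplitude ratio in decibels. It assumes NumPy; the signal, sampling rate, and reference amplitude are invented for the example.

# Minimal sketch (assumed example, not from the chapter): sampling,
# descriptive statistics, and the decibel scale.
import numpy as np

fs = 100.0                          # sampling rate in samples per second (Hz)
t = np.arange(0, 2.0, 1.0 / fs)     # 2 s of uniformly spaced sample times

# A synthetic "analog" signal sampled at fs: a 5 Hz sinusoid plus noise.
x = 2.0 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * np.random.randn(t.size)

# Standard descriptive statistics of the time series.
mean = x.mean()
variance = x.var()
rms = np.sqrt(np.mean(x**2))

# Decibels compare relative magnitudes on a logarithmic scale:
# 20*log10 for amplitude ratios, 10*log10 for power ratios.
reference_amplitude = 0.1
amplitude_db = 20 * np.log10(np.max(np.abs(x)) / reference_amplitude)
print(f"mean={mean:.3f}, var={variance:.3f}, rms={rms:.3f}, "
      f"peak amplitude = {amplitude_db:.1f} dB above reference")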
The Leaning Tower of Pisa, used by Galileo to demonstrate the simplicity of science, is also a testament to the complexity of science. Over an 800-year period, multiple attempts were made to fix the errors in the tower’s construction that caused it to lean. Often, the fixes had unanticipated consequences, necessitating additional compensating fixes. Climate models face a similar problem. The models use approximate formulas called parameterizations, with adjustable parameters, to represent processes like clouds that are too fine to be resolved by the model grids. The optimal values of these parameters that minimize simulation errors are determined by a trial-and-error process known as “model tuning.” Tuning minimizes errors in simulating current and past climates, but it cannot guarantee that the predictions of the future will be free of errors. This means that models can be confirmed, but they cannot be proven to be correct.
The Discrete Fourier Transform (DFT) is one of the most important tools in geophysical data processing and in many other fields. The DFT may be understood from a number of viewpoints, but here we emphasize that it is a Fourier series representing a uniformly sampled time series as a sum of sampled complex sinusoids. We refer to the time series as being in the time domain while the set of its complex-valued sinusoidal coefficients computed using the DFT is in the frequency domain. The inverse DFT (IDFT) computes time series values by adding together Fourier frequency sinusoids, each scaled by a frequency domain sinusoidal coefficient. We develop the DFT by converting the ordinary Fourier series to complex form, transitioning to sampled time series, and finally explaining standard normalization and the usual (and often baffling) frequency and time ordering conventions. The DFT came into widespread use only in the 1960s after the development of Fast Fourier Transform (FFT) algorithms. The speed of FFT algorithms has led to many important applications. Those presented here include the interpolation and computation of analytic signals for real-valued time series. In later chapters we show the important roles of the DFT in linear filtering and spectral analysis.
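As a hedged illustration of these conventions (not the book's own code), the sketch below uses NumPy's FFT to transform a uniformly sampled time series to the frequency domain, makes the standard frequency ordering explicit, recovers the time series with the inverse transform, and shows one of the applications mentioned, FFT-based interpolation by zero padding. The signal and parameters are invented for the example, and NumPy's default normalization (1/N on the inverse transform) is assumed.

# Minimal sketch (assumed example): DFT/IDFT round trip and frequency ordering.
import numpy as np

N = 64                                   # number of samples
dt = 0.01                                # sample interval in seconds
t = np.arange(N) * dt                    # uniformly sampled time axis
x = np.cos(2 * np.pi * 12.5 * t) + 0.5 * np.sin(2 * np.pi * 25.0 * t)

# Forward DFT: time domain -> frequency domain (complex sinusoidal coefficients).
X = np.fft.fft(x)

# The coefficients come in the usual ordering: zero frequency first, then
# positive frequencies, then negative frequencies. fftfreq makes this explicit.
freqs = np.fft.fftfreq(N, d=dt)

# Inverse DFT: summing the scaled Fourier frequency sinusoids recovers the series.
x_back = np.fft.ifft(X).real
assert np.allclose(x, x_back)

# FFT-based interpolation of a real-valued series, sketched here by zero
# padding in the frequency domain (scaled to preserve amplitude).
M = 4 * N                                # interpolate to 4x as many samples
x_dense = np.fft.irfft(np.fft.rfft(x), n=M) * (M / N)
# x_dense now has M samples spanning the same time interval as x.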
Communicating the strengths and limitations of climate modeling to those outside the field of climate science is a formidable challenge. The nuances of scientific language can be lost in the translation to natural language when climate predictions are presented to a general audience. This loss in translation can lead to misinformation and disinformation that hampers a rational response to the climate crisis. Even simple terms like “model,” “data,” and “prediction” have many different meanings depending on the context. Anytime we talk about the future, we are using a model. In climate science, we might think we are dealing with data from the past, but often this is processed data that is produced by analysis models applied to raw data. The word “prediction” can mean a range of things, from unconditional prophecies to conditional projections.
Global warming became a growing public concern following Jim Hansen’s US Senate testimony in 1988 asserting that the warming was already happening. The Intergovernmental Panel on Climate Change (IPCC) was formed in response to this concern. The IPCC issues periodic assessments summarizing recent scientific developments relating to climate change. Climate models were used to attribute global warming to increasing concentrations of carbon dioxide and other greenhouse gases. Certain types of extreme weather can also be probabilistically attributed to these causes. The effects of aerosols and stochastic variability on the past global warming signal are described. The IPCC projects the global warming signal into the future using a range of carbon dioxide emission scenarios, resulting in different degrees of predicted warming. The importance of regional climate change and the difficulty of predicting it are discussed.
The concept of Occam’s Razor, also known as the principle of parsimony, is a motivating force in science. Galileo’s experiment of dropping objects of different weights from atop the Leaning Tower of Pisa to show that they fall at the same rate is an illustration of this principle. But the application of Occam’s Razor to climate modeling is less straightforward. Simple models of the kind used by Manabe provide qualitative insights, but they are not well-suited for quantitative predictions. To understand this, we can make an analogy between the hierarchy of climate models and the hierarchy of biological models, from fruit fly to mouse. Simple models are used to explore “climate tipping points,” where amplifying feedbacks lead to abrupt climate change, but they may not consider all the stabilizing feedbacks. It is therefore important to use a hierarchy of models, with varying degrees of complexity, to study climate phenomena.
The fundamental difference between weather prediction and climate prediction is explained using a “nature versus nurture” analogy. To predict weather, we start from initial conditions of the atmosphere and run the weather forecast model. To predict climate, the initial conditions matter less, but we need boundary conditions, such as the angle of the sun or the concentration of carbon dioxide in the atmosphere, which controls the greenhouse effect. Charles David Keeling began measuring carbon dioxide in the late 1950s and found that its concentration was steadily increasing. Carbon dioxide concentrations for the past 800,000 years can also be measured using ice cores that contain trapped air. These ice core data show that the rise in carbon dioxide concentrations measured by Keeling was unprecedented. Manabe and another scientist, Jim Hansen, used climate models to predict that increasing carbon dioxide could cause global warming.
The Aymara people use the metaphor that the future is behind us and the past is in front of us. Imagine that we are driving a car in reverse into our climate future. The past is in front of us; climate models act as the rearview mirror, showing what lies behind us in the future. The view is blurry because there is uncertainty, and the car is moving fast as we continue to emit greenhouse gases. We need to brake quickly – reduce emissions – because we know that the braking distance is very long. The Paris Agreement to reduce worldwide emissions is like a potluck dinner: Each guest decides how much food to bring. If the guests don’t bring enough food for everyone, then some will leave hungry. Similarly, emission reductions pledged in the Paris Agreement are voluntary and may not be sufficient to strongly mitigate the warming.
The Geophysical Fluid Dynamics Laboratory (GFDL) is a pioneering institution in the field of climate modeling. Its founding director, Joseph Smagorinsky, was a member of the Princeton Meteorology Group. He hired a Japanese scientist, Syukuro Manabe, who formulated a one-dimensional model of climate, known as the radiative–convective model, that was able to calculate the amplifying climate feedback due to water vapor. This model provided one of the first reliable estimates of global warming. Manabe worked with other scientists to build three-dimensional climate models, including the first model that coupled an atmospheric model to an ocean model. The concepts of reductionism and emergentism, which provide the philosophical context for these scientific developments, are introduced.
Climate warming is occurring most rapidly in the Arctic, which is both a sentinel and a driver of further global change. Ecosystems and human societies are already affected by the warming: permafrost thaws and species are on the move, bringing pathogens and vectors to virgin areas. During a five-year project, CLINF, a Nordic Centre of Excellence funded by the Nordic Council of Ministers, has worked with the One Health concept, integrating environmental data with human and animal disease data in predictive models and creating maps of dynamic processes affecting the spread of infectious diseases. It is shown that tularemia outbreaks can be predicted even at a regional level with a manageable level of uncertainty. To decrease uncertainty, new and harmonised technologies and databases must be developed rapidly from currently highly heterogeneous data sources. A major source of uncertainty for the future of contaminants and infectious diseases in the Arctic, however, is which paths the majority of the globe chooses to follow in the future. Diplomacy is one of the most powerful tools Arctic nations have to influence the choices of other nations. It is supported by Arctic science and by One Health approaches that recognise the interconnection between people, animals, plants and their shared environment at the local, regional, national and global levels as essential for achieving sustainable development for both the Arctic and the globe.
Antarctica’s Pole of Inaccessibility (the Southern Pole of Inaccessibility, SPI) is the point on the Antarctic continent farthest from its edge. Existing literature exhibits disagreement over its location. Using two revisions of the Scientific Committee on Antarctic Research’s Antarctic Digital Database, we calculate modern-day positions for the SPI around 10 years apart, based on the position of the “outer” Antarctic coastline, i.e. its boundary with the ocean. These show that the position of the SPI in the year 2010 was around 83° 54’ S, 64° 53’ E, shifting on the order of 1 km per year as a result of changes of a similar magnitude in the Amery, Ronne-Filchner and Ross Ice Shelves. Except for a position calculated by the British Antarctic Survey in 2005, to which it is very close, our newly calculated position differs by 150–900 km from others reported in the literature. We also consider the “inner” SPI, defined by the coastline with floating ice removed. The position of this SPI in 2010 is estimated as 83° 37’ S, 53° 43’ E, differing significantly from other reported positions. Earlier cartographic data are probably not sufficiently accurate to allow its rate of change to be calculated meaningfully.
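To make the notion of “farthest from the coastline” concrete, here is a hedged toy sketch of the general idea only; it is not the authors’ method, and it uses an invented planar boundary and Euclidean distances rather than the Antarctic Digital Database coastline and geodesic calculations. NumPy and SciPy are assumed.

# Toy sketch (assumed example): approximate a pole of inaccessibility as the
# interior point whose distance to the nearest coastline point is largest.
import numpy as np
from scipy.spatial import cKDTree

# Toy closed "coastline": a wobbly circle in planar coordinates.
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
boundary_r = 1.0 + 0.2 * np.sin(3 * theta)
coast = np.column_stack((boundary_r * np.cos(theta), boundary_r * np.sin(theta)))

# Candidate points on a regular grid covering the region.
gx, gy = np.meshgrid(np.linspace(-1.3, 1.3, 300), np.linspace(-1.3, 1.3, 300))
candidates = np.column_stack((gx.ravel(), gy.ravel()))

# Distance from each candidate to its nearest coastline point.
nearest_dist, _ = cKDTree(coast).query(candidates)

# Keep candidates inside the (star-shaped) toy boundary, then pick the one
# farthest from the coast.
r = np.hypot(candidates[:, 0], candidates[:, 1])
ang = np.arctan2(candidates[:, 1], candidates[:, 0]) % (2 * np.pi)
inside = r < 1.0 + 0.2 * np.sin(3 * ang)
pole = candidates[inside][np.argmax(nearest_dist[inside])]
print("approximate pole of inaccessibility:", pole)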
At the Arctic Council’s Ministerial Meeting in Reykjavik on 20 May 2021, Russia assumed the chairmanship of the council for the second time since its establishment in 1996. Though some Russian analysts and practitioners were skeptical about the usefulness of such a mechanism during the 1980s and 1990s, Russia has become an active contributor to the progress of the Arctic Council (AC). Russia’s first term as chair during 2004–2006 led to the creation of the Arctic Contaminants Action Program as an Arctic Council Working Group. Since then, Russia has served as co-lead of the Task Forces developing the terms of the 2011 agreement on search and rescue, the 2013 agreement on marine oil spill preparedness and response, and the 2017 agreement on enhancing international scientific cooperation. Russia also has participated actively in the creation of related bodies, including the Arctic Coast Guard Forum and the Arctic Economic Council, whose chairmanships rotate together with the chairmanship of the AC. Now, far-reaching changes in the broader setting are posing growing challenges to the effectiveness of these institutional arrangements. The impacts of climate change in the high latitudes have increased dramatically; the pace of the extraction and shipment of Arctic natural resources has accelerated sharply; great-power politics have returned to the Arctic, foregrounding concerns regarding military security. Together, these developments make it clear that a policy of business as usual will not suffice to ensure that the AC remains an important high-level forum for addressing Arctic issues in a global context. The programme Russia has developed for its 2021–2023 chairmanship of the council is ambitious; it proposes a sizeable suite of constructive activities. In this article, however, we go a step further to explore opportunities to adapt the Arctic governance system to the conditions prevailing in the 2020s. We focus on options relating to (i) the AC’s constitutive arrangements, (ii) links between the council and related governance mechanisms, (iii) the role of science diplomacy, and (iv) the treatment of issues involving military security. We conclude with a discussion of the prospect of organising a heads of state/government meeting during the Russian chairmanship as a means of setting the Arctic governance system on a constructive path for the 2020s.