The patterns discussed in Chapters 6–7 adhere closely to the GoF philosophy of designing to an interface rather than an implementation. That maxim inspired the use of abstract classes in defining an abstract calculus, a strategy, and a surrogate. The puppeteer definition in Chapter 8 represented the one setting in which client code manipulated concrete implementations directly – although an exercise at the end of that chapter describes a way to liberate the puppeteer from the tyranny of implementations also.
Independent of whether the classes comprising the pattern demonstrations are abstract, the client codes (the main programs in the cases considered) exploit knowledge of the concrete type of each object constructed in Chapters 6–8. Although we were able to write polymorphic procedures such as integrate in Figures 6.2(b) and 6.3(b)–(c), in each case the actual arguments passed to these procedures were references to concrete objects the client constructed. One can write still more flexible code by freeing clients of even this minimal knowledge of concrete types. Doing so poses the dilemma of where object construction happens. Put simply, how does one construct an object without directly invoking its constructor? Or, from another angle, how does an object come to be if the client knows only the interface defined by its abstract parent, yet that parent's abstract nature precludes defining a constructor for it? By definition, an abstract type cannot be instantiated.
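A common resolution of this dilemma is to delegate construction to a factory that hands objects back to the client solely through their abstract interface. The sketch below illustrates the principle in Python with hypothetical class names; it is an illustration of the idea, not the book's Fortran implementation.

```python
from abc import ABC, abstractmethod

class Integrand(ABC):
    """Abstract parent: clients see only this interface (hypothetical name)."""
    @abstractmethod
    def d_dt(self):
        """Return the time derivative of the modeled field."""

class Burgers(Integrand):
    """One concrete implementation, hidden from the client behind the factory."""
    def d_dt(self):
        return "Burgers time derivative"

def integrand_factory(kind):
    """Construct a concrete object but return it as the abstract type."""
    if kind == "burgers":
        return Burgers()
    raise ValueError("unknown integrand: " + kind)

# Client code: no reference to the concrete class Burgers appears below.
model = integrand_factory("burgers")
print(model.d_dt())
```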
“Learn the rules so you know how to break them properly.”
Tenzin Gyatso, The 14th Dalai Lama
Why Be Formal?
Whereas Parts I and II focused on canonical examples, Part III marches toward complete applications, resplendent with runtime considerations. The current chapter addresses code and compiler correctness. Chapter 11 discusses language interoperability. Chapter 12 addresses scalability and weaves elements of the entire book into a vision for multiphysics framework design.
Formal methods form an important branch of software engineering that has apparently been applied to the design of only a small percentage of scientific simulation programs (Bientinesi and van de Geijn 2005; van Engelen and Cats 1997). Two pillars of formalization are specification and verification – that is, specifying mathematically what a program must do and verifying the correctness of an algorithm with respect to the specification. The numerical aspects of scientific programming are already formal. The mathematical equations one wishes to solve in a given scientific simulation provide a formal specification, whereas a proof of numerical convergence provides a formal verification. Hence, formal methods developers often cite a motivation of seeking correctness standards for non-scientific codes as rigorous as those for scientific codes (Oliveria 1997). This ignores, however, the nonnumerical aspects of scientific programs that could benefit from greater rigor. One such aspect is memory management. The current chapter specifies formal constraints on memory allocations in the Fortran implementation of the Burgers equation solver from Chapter 9.
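To make the terminology concrete, a specification can be thought of as pre- and postconditions on a procedure, and verification as demonstrating that the implementation satisfies them. The fragment below is a loose illustration only, using runtime assertions and invented names rather than the formal constraints developed in this chapter.

```python
import math

def guarded_sqrt(x, tol=1e-12):
    """Hypothetical example: a specification expressed as runtime checks."""
    assert x >= 0.0, "precondition violated: argument must be non-negative"
    result = math.sqrt(x)
    assert abs(result * result - x) <= tol * max(1.0, x), \
        "postcondition violated: square of the result must reproduce the argument"
    return result

print(guarded_sqrt(2.0))
```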
The past several decades have witnessed impressive successes in the ability of scientists and engineers to accurately simulate physical phenomena on computers. In engineering, it would now be unimaginable to design complex devices such as aircraft engines or skyscrapers without detailed numerical modeling playing an integral role. In science, computation is now recognized as a third mode of inquiry, complementing theory and experiment. As the steady march of progress in individual spheres of interest continues, the focus naturally turns toward leveraging efforts in previously separate domains to advance one's own domain or in combining old disciplines into new ones. Such work falls under the umbrella of multiphysics modeling.
Overcoming the physical, mathematical, and computational difficulties of multiphysics modeling constitutes one of the central challenges of 21st-century science and engineering. In one of its three major findings, the National Science Foundation Blue Ribbon Panel on Simulation-Based Engineering Science (SBES) cited “open problems associated with multiscale and multi-physics modeling” among a group of “formidable challenges [that] stand in the way of progress in SBES research.” As the juxtaposition of “multiphysics” and “multiscale” in the panel's report implies, multiphysics problems often involve dynamics across a broad range of length and time scales.
At the level of the physics and mathematics, integrating the disparate dynamics of multiple fields poses significant challenges in simulation accuracy, consistency, and stability.
This book is written as a guide for the presentation of experimental data including a consistent treatment of experimental errors and inaccuracies. It is meant for experimentalists in physics, astronomy, chemistry, life sciences and engineering. However, it can be equally useful for theoreticians who produce simulation data: they are often confronted with statistical data analysis for which the same methods apply as for the analysis of experimental data. The emphasis in this book is on the determination of best estimates for the values and inaccuracies of parameters in a theory, given experimental data. This is the problem area encountered by most physical scientists and engineers. The problem area of experimental design and hypothesis testing – excellently covered by many textbooks – is only touched on but not treated in this book.
The text can be used in education on error analysis, either in conjunction with experimental classes or in separate courses on data analysis and presentation. It is written in such a way – by including examples and exercises – that most students will be able to acquire the necessary knowledge from self-study as well. The book is also meant to be kept for later reference in practical applications. For this purpose a set of “data sheets” and a number of useful computer programs are included.
This book consists of several parts. Part I contains the main body of the text.
This chapter is about the presentation of experimental results. When the value of a physical quantity is reported, the uncertainty in the value must be properly reported too, and it must be clear to the reader what kind of uncertainty is meant and how it has been estimated. Given the uncertainty, the value must be reported with the proper number of digits. But the quantity also has a unit that must be reported according to international standards. Thus this chapter is about reporting your results: this is the last thing you do, but we'll make it the first chapter before more serious matters require attention.
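As a small illustration of the kind of convention involved, an uncertainty is often rounded to two significant digits and the value to the matching decimal place. The helper below and its rounding rule are this sketch's assumptions, written in present-day Python 3, not a prescription from the text:

```python
from math import floor, log10

def report(value, uncertainty, sig=2):
    """Illustrative helper: round the uncertainty to `sig` significant digits
    and round the value to the same decimal place."""
    decimals = -int(floor(log10(abs(uncertainty)))) + (sig - 1)
    u = round(uncertainty, decimals)
    v = round(value, decimals)
    if decimals > 0:
        return "{0:.{p}f} ± {1:.{p}f}".format(v, u, p=decimals)
    return "{0:.0f} ± {1:.0f}".format(v, u)

print(report(9.8214, 0.0237))   # prints: 9.821 ± 0.024
```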
How to report a series of measurements
In most cases you derive a result on the basis of a series of (similar) measurements. In general you do not report all individual outcomes of the measurements, but you report the best estimates of the quantity you wish to “measure,” based on the experimental data and on the model you use to derive the required quantity from the data. In fact, you use a data reduction method. In a publication you are required to be explicit about the method used to derive the end result from the data. However, in certain cases you may also choose to report details of the data themselves (preferably in an appendix or deposited as “additional material”); this enables the reader to check your results or apply alternative data reduction methods.
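As a minimal illustration of such a data reduction (assuming a series of independent, equally weighted measurements of one quantity; the numbers are invented), the best estimate and its standard uncertainty can be computed as follows:

```python
import numpy as np

# hypothetical repeated measurements of the same quantity (invented numbers)
data = np.array([5.32, 5.29, 5.35, 5.31, 5.30, 5.34])

mean = data.mean()              # best estimate of the quantity
s = data.std(ddof=1)            # sample standard deviation
sem = s / np.sqrt(len(data))    # standard uncertainty (standard error) of the mean

print("best estimate:", round(mean, 3), "+/-", round(sem, 3))
```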
This appendix contains programs, functions or code fragments written in Python. Each code is referred to in the text; the page where the reference is made is given in the header.
First some general instructions are given on how to work with these codes. Python is a general-purpose interpreted language, for which interpreters are available for most platforms, including Windows. Python is open-source software and interpreters are freely available. Most applications in this book use the powerful numerical array extension NumPy, which also provides basic tools for linear algebra, Fourier transforms and random numbers. Although Python version 3 is available, at the time of writing NumPy requires Python version 2, the latest being 2.6. In addition, applications may require the scientific tools library SciPy, which relies on NumPy. Importing SciPy automatically implies the import of NumPy.
Users are advised first to download Python 2.6, then the most recent stable version of NumPy, and then SciPy. Further instructions for Windows users can be found at www.hjcb.nl/python.
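A typical session then starts by importing these packages. The short check below is a sketch using current Python 3 syntax and invented numbers, merely to confirm that NumPy and SciPy load correctly:

```python
import numpy as np          # numerical arrays, linear algebra, FFT, random numbers
from scipy import stats     # part of the SciPy scientific tools library

x = np.random.normal(loc=0.0, scale=1.0, size=1000)   # 1000 standard-normal samples
print(x.mean(), x.std(ddof=1))                         # should be close to 0 and 1
print(stats.describe(x))                               # summary statistics from SciPy
```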
There are several options for producing plots, for example Gnuplot.py, based on the gnuplot package, or rpy, based on the statistical package “R” – and there are many more. Since the user may find it difficult to make a choice, we have added yet another, very simple to use, plotting module called plotsvg.py. It can be downloaded from the author's website.
If you want to fit parameters in a functional relation to experimental data, the best method is a least-squares analysis: Find the parameters that minimize the sum of squared deviations of the measured values from the values predicted by your function. In this chapter both linear and nonlinear least-squares fits are considered. It is explained how you can test the validity or effectiveness of the fit and how you can determine the expected inaccuracies in the optimal values of the parameters.
Introduction
Consider the following task: you wish to devise a function y = f(x) such that this function fits as accurately as possible to a number of data points (x_i, y_i), i = 1, …, n. Usually you have – based on theoretical considerations – a set of functions to choose from, and those functions may still contain one or more as yet undetermined parameters. In order to select the “best” function and parameters you must use some kind of measure for the deviation of the data points from the function. If this deviation measure is a single value, you can then select the function that minimizes this deviation.
This task is not at all straightforward and you may be lured into pitfalls during the process. For example, your choice of functions and parameters may be so large and your set of data may be so small that you can choose a function that exactly fits your data.
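To fix ideas before the details, the fragment below sketches a nonlinear least-squares fit with SciPy's curve_fit; the model, the synthetic data, and the noise level are all invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Hypothetical model: exponential decay with amplitude a and rate b."""
    return a * np.exp(-b * x)

# synthetic data: "true" parameters a=2.0, b=0.7 plus some Gaussian noise
np.random.seed(1)
x = np.linspace(0.0, 5.0, 25)
y = model(x, 2.0, 0.7) + np.random.normal(scale=0.05, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])  # least-squares fit
perr = np.sqrt(np.diag(pcov))                       # standard errors of the parameters

print("fitted parameters:", popt)
print("estimated inaccuracies:", perr)
```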
There are errors and there are uncertainties. The latter are unavoidable; ultimately it is the omnipresent thermal noise that causes the results of measurements to be imprecise. After a discussion of how to identify and correct avoidable errors, this chapter concentrates on the propagation and combination of uncertainties in composite functional relations.
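As a preview of what propagation means in practice, the sketch below applies the standard first-order rule for a quantity derived from two independent measured quantities; the function, the numbers, and the finite-difference derivatives are illustrative assumptions only:

```python
import numpy as np

def propagated_sigma(f, x, y, sx, sy, h=1.0e-6):
    """First-order propagation for z = f(x, y) with independent x and y:
    sigma_z**2 ≈ (df/dx)**2 * sx**2 + (df/dy)**2 * sy**2.
    Derivatives are approximated by central differences (illustration only)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return np.sqrt(dfdx**2 * sx**2 + dfdy**2 * sy**2)

# Example: density rho = m / V from a measured mass and volume (invented values)
rho = lambda m, V: m / V
print(propagated_sigma(rho, 25.3, 12.1, sx=0.2, sy=0.1))
```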
Classification of errors
There are several types of error in experimental outcomes:
(i) (accidental, stupid or intended) mistakes
(ii) systematic deviations
(iii) random errors or uncertainties
The first type we shall ignore. Accidental mistakes can be avoided by careful checking and double checking. Stupid mistakes are accidental errors that have been overlooked. Intended mistakes (e.g. selecting data that suit your purpose) purposely mislead the reader and belong to the category of scientific crimes.
Systematic errors
Systematic errors have a non-random character and distort the result of a measurement. They result from erroneous calibration or just from a lack of proper calibration of a measuring instrument, from careless measurements (uncorrected parallax, uncorrected zero-point deviations, time measurements uncorrected for reaction time, etc.), from impurities in materials, or from causes the experimenter is not aware of. The latter are certainly the most dangerous type of error; such errors are likely to show up when results are compared to those of other experimentalists at other laboratories. Therefore independent corroboration of experimental results is required before a critical experiment (e.g. one that overthrows an accepted theory) can be trusted.
It is impossible to measure physical quantities without errors. In most cases errors result from deviations and inaccuracies caused by the measuring apparatus or from inaccurate reading of the display device, but even with optimal instruments and digital displays there are always fluctuations in the measured data. Ultimately there is random thermal noise affecting all quantities that are determined at a finite temperature. Any experimentally determined quantity therefore has a certain inaccuracy. If the experiment were to be repeated, the result would be (slightly) different. One could say that the result of a particular experiment is no more than a random sample from a probability distribution. When reporting the result of an experiment, it is important to also report the extent of the uncertainty, e.g. in terms of the best estimate of some measure of the width of the probability distribution. When experimental data are processed and conclusions are drawn from them, knowledge of the experimental uncertainties is essential to assess the reliability of the conclusions.
Ideally, you should specify the probability distribution from which the reported experimental value is supposed to be a random sample. The problem is that you have only one experiment; even if your experiment consists of many observations of which you report the average, you have only one average to report.