Establishing and understanding the relationship between quantities are principal goals in the physical sciences. As examples, we might be keen to know how the:
size of a crystal depends on the growth time of the crystal;
output intensity of a light emitting diode varies with the emission wavelength;
amount of light absorbed by a chemical species depends on the species concentration;
electrical power supplied by a solar cell varies with optical power incident on the cell;
viscosity of an engine oil depends upon the temperature of the oil;
rate of flow of a fluid through a hollow tube depends on the internal diameter of the tube.
Once an experiment is complete and the data presented in the form of an x–y graph, an examination of the data assists in answering important qualitative questions such as: Is there evidence of a clear trend in the data? If so, is that trend linear, and do any of the data conflict with the general trend? A qualitative analysis often suggests which quantitative methods of analysis to apply.
There are many situations in the physical sciences in which prior knowledge or experience suggests a relationship between measured quantities. Perhaps we are already aware of an equation which predicts how one quantity depends on another. Our goal in this situation might be to discover how well the equation can be made to ‘fit’ the data.
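Although this book carries out such calculations in Excel, the idea can be sketched in a few lines of code. The following is a minimal illustration in Python, assuming NumPy is available; the x and y values are invented and stand in for measured data.

import numpy as np

# Hypothetical x-y data standing in for measured values,
# e.g. growth time (x) and crystal size (y).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit the straight line y = m*x + c by least squares.
m, c = np.polyfit(x, y, 1)

# The correlation coefficient gives a rough quantitative
# measure of how linear the trend is.
r = np.corrcoef(x, y)[0, 1]

print(f"slope = {m:.3f}, intercept = {c:.3f}, r = {r:.3f}")

A correlation coefficient close to ±1 supports the qualitative impression of a linear trend; the fitted slope and intercept then quantify the relationship itself.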
Experiments and experimentation have central roles to play in the education of scientists. For many destined to participate in scientific enquiry through laboratory or field based studies, the ability to apply ‘experimental methods’ is a key skill that they rely upon throughout their professional careers. For others whose interests and circumstances take them into other fields upon completion of their studies, the experience of ‘wrestling with nature’ so often encountered in experimental work offers enduring rewards: skills developed in the process of planning, executing and deliberating upon experiments are of lasting value in a world in which some talents rapidly become redundant.
Laboratory and field based experimentation are core activities in the physical sciences. Good experimentation is a blend of insight, imagination, skill, perseverance and occasionally luck. Vital to experimentation is data analysis. This is rightly so, as careful analysis of data can tease out features and relationships not apparent at first glance at the ‘numbers’ emerging from an experiment. This, in turn, may suggest a new direction for the experiment that might offer further insight into a phenomenon or effect being studied. Equally importantly, after details of an experiment are long forgotten, facility gained in applying data analysis methods remains a highly valued and transferable skill.
‘The principle of science, the definition almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”’.
So wrote Richard Feynman, famous scientist and Nobel Prize winner, noted for his contributions to physics.
It is possible that when Feynman wrote these words he had in mind elaborate experiments devised to reveal the ‘secrets of the Universe’, such as those involving the creation of new particles during high energy collisions in particle accelerators or others to determine the structure of DNA. Experimentation encompasses an enormous range of more humble (but extremely important) activities such as testing the temperature of a baby's bathwater by immersing an elbow into the water, or pressing on a bicycle tyre to establish whether it needs inflating. The absence of numerical measures of quantities distinguishes these experiments from those normally performed by scientists.
Thorough analysis of experimental data frequently requires extensive numerical manipulation. Many tools exist to assist in the analysis of data, ranging from the pocket calculator to specialist computer based statistics packages. Despite limited editing and display options, the pocket calculator remains a well-used tool for basic analysis due to its low cost, convenience and reliability. Intensive data analysis may require a statistics package such as Systat or Origin. As well as standard functions, such as those used to determine means and standard deviations, these packages possess advanced features routinely required by researchers and professionals. Between the extremes of the pocket calculator and the specialised statistics package is the spreadsheet. While originally designed for business users, spreadsheet packages are popular with other users due to their accessibility, versatility and ease of use. The inclusion of advanced features in spreadsheets means that, in many situations, a spreadsheet is a viable alternative to a statistics package. The most widely used spreadsheet for personal computers (PCs) is Excel by Microsoft. Excel appears within this book in the role of a convenient data analysis tool, with short sections within most chapters devoted to describing specific features. Its clear layout, extensive help facilities, range of in-built statistical functions and availability for both PCs and Mac computers make Excel a popular choice for data analysis. This chapter introduces Excel and describes some of its basic features using examples drawn from the physical sciences. Some familiarity with using a PC is assumed, to the extent that terms such as ‘mouse’, ‘pointer’, ‘Enter key’ and ‘save’ are understood in the context of using a program such as Excel.
What is a spreadsheet?
A computer based spreadsheet is a sophisticated and versatile analysis and display tool for numeric and text based data. As well as the usual arithmetic and mathematical functions found on pocket calculators, spreadsheets offer other features such as data sorting and display of data in the form of an x–y graph. Some spreadsheet packages include more advanced analysis options such as linear regression and hypothesis testing. An attractive feature of many spreadsheets is the ability to accept data directly from other computer based applications, simplifying and speeding up data entry as well as avoiding mistakes caused by faulty transcription.
What can reasonably be inferred from data gathered in an experiment? This simple question lies at the heart of experimentation, as an experiment can be judged by how much insight can be drawn from data. An experiment may have a broad or narrow focus, and may be designed to:
challenge a relationship that has an established theoretical basis;
critically examine a discovery that results from ‘chance’ observations;
check for drift in an instrument;
compare analysis of materials carried out in two or more laboratories.
Such general goals give way to specific questions that we hope can be answered by careful analysis of data gathered in well designed experiments. Questions that might be asked include:
is there a linear relationship between quantities measured in an experiment;
could the apparent correlation between variables have occurred ‘by chance’;
does a new manufacturing process produce lenses with focal lengths that are less variable than the old manufacturing process;
is there agreement between two methods used to determine the concentration of iron in a specimen;
has the gain of an instrument changed since it was calibrated?
It is usually not possible to answer these questions with a definite ‘yes’ or definite ‘no’. Though we hope data gathered during an experiment will provide evidence as to which reply to favour, we must be satisfied with answers expressed in terms of probability.
Consider a situation in which a manufacturer supplies an instrument containing an amplifier with a gain specified as 1000. Would it be reasonable to conclude that the instrument is faulty or needs recalibrating if the gain determined by a single measurement is 995? It is possible that random errors inherent in the measurement process, as revealed by making repeat measurements of the gain, would be sufficient to explain the discrepancy between the ‘expected’ value of gain of 1000 and the ‘experimental’ value of 995. What we would really like to know is whether, after taking into account the scatter in the values of the gain obtained through repeat measurements, the difference between the value we have reason to expect and the values actually obtained through experiment or observation is ‘significant’.
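One standard way to make this question quantitative is a one-sample t-test applied to the repeat measurements. The following is a minimal sketch in Python, assuming SciPy is available; the gain values are invented for illustration and are not taken from this book.

from scipy import stats

# Hypothetical repeat measurements of the gain.
gains = [995, 998, 993, 996, 994, 997]
expected_gain = 1000  # manufacturer's specification

# One-sample t-test: is the mean measured gain consistent with 1000?
t_stat, p_value = stats.ttest_1samp(gains, expected_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say, below 0.05) suggests the discrepancy is
# unlikely to be explained by random scatter alone.

As the surrounding text emphasises, the outcome is a probability, not a definite ‘yes’ or ‘no’.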
Chemists, physicists and other physical scientists are proud of the quantitative nature of their disciplines. By subjecting nature to ever closer examination, new relationships between quantities are discovered, and established relationships are pushed to the limits of their applicability. When ‘numbers’ emerge from an experiment, they can be subjected to quantitative analysis, compared to the ‘numbers’ obtained by other experimenters and be expressed in a clear and concise manner using tables and graphs. If an unfamiliar experiment is planned, an experimenter will often carry out a pilot experiment. The purpose of such an experiment might be to assess the effectiveness of the experimental methods being used, or to offer a preliminary evaluation of a theoretical prediction. It is also possible that the experimenter is acting on instinct or intuition. If the results of the pilot experiment are promising, the experimenter typically moves to the next stage in which a more thorough investigation is undertaken and where there is increased emphasis on the quality of the data gathered. The analysis of these data often provides crucial and defensible evidence sought by the experimenter to support (or refute) a particular theory or idea.
The goal of an experiment might be to determine an accurate value for a particular quantity such as the electrical charge carried by an electron. Experimenters are aware that influences exist, some controllable and others less so, that conspire to adversely affect the values they obtain. Despite an experimenter’s best efforts, some uncertainty in an experimentally determined value remains. In the case of the charge on the electron, its value is recognised to be of such importance that considerable effort has gone into establishing an accurate value for it. Currently (2011) the best value for the charge on the electron is (1.602176487 ± 0.000000040) × 10⁻¹⁹ C. A very important part of the expression for the charge is the number following the ± sign. This is the uncertainty in the value for the electronic charge and, though the uncertainty is rather small compared to the size of the charge, it is not zero. In general, every value obtained through measurement has some uncertainty and though the uncertainty may be reduced by thorough planning, prudent choice of measuring instrument and careful execution of the experiment, it cannot be eliminated entirely.
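In practice, the uncertainty quoted after the ± sign is often estimated from the scatter of repeat measurements, for example as the standard error of the mean. A minimal sketch, assuming NumPy and using invented measurements:

import numpy as np

# Hypothetical repeat measurements of some quantity.
values = np.array([9.81, 9.79, 9.83, 9.80, 9.82])

mean = values.mean()
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = values.std(ddof=1) / np.sqrt(len(values))

print(f"value = {mean:.3f} ± {sem:.3f}")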
The process of analysing experimental data frequently involves many steps which begin with the tabulation and graphing of data. Numerical analysis of data may require simple but repetitive calculations such as the summing and averaging of values. Spreadsheet programs are designed to perform these tasks, and in previous chapters we considered how Excel’s built-in functions such as AVERAGE() and CORREL() can assist data analysis (a short sketch of equivalent calculations appears after the list below). While the functions in Excel are extremely useful, there is still some effort required to:
enter data into the functions;
format numbers returned by the functions so that they are easy to assimilate;
plot suitable graphs;
combine functions to perform more advanced analysis.
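For comparison, the following sketch performs equivalent calculations outside Excel, in Python with NumPy; the data are invented and stand in for values typed into a worksheet, and the Excel worksheet functions are named in the comments.

import numpy as np

# Hypothetical data standing in for worksheet values.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.1, 2.3, 2.9, 4.2, 5.1])

print(f"mean  = {y.mean():.3f}")                 # cf. AVERAGE()
print(f"stdev = {y.std(ddof=1):.3f}")            # cf. STDEV()
print(f"corr  = {np.corrcoef(x, y)[0, 1]:.3f}")  # cf. CORREL()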
Excel contains numerous useful data analysis tools, designed around the built-in functions, which will, for example, fit an equation to data using least squares or compare the means of many samples using analysis of variance. Once installed, these tools can be found in the Analysis group on the Data ribbon. The dialog box that appears when a tool is selected allows for the easy input of data. Once the tool is applied to the data, results are displayed in a Worksheet with explanatory labels and headings. As an added benefit, some tools offer automatic plotting of data as graphs or charts.
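To illustrate the kind of calculation the analysis of variance tool carries out, here is a minimal one-way ANOVA sketch in Python, assuming SciPy is available; the three samples are invented measurements, imagined as coming from three laboratories.

from scipy import stats

# Hypothetical measurements of the same quantity from three laboratories.
lab_a = [4.1, 4.3, 4.0, 4.2]
lab_b = [4.4, 4.6, 4.5, 4.3]
lab_c = [4.0, 4.1, 3.9, 4.2]

# One-way analysis of variance: do the sample means differ significantly?
f_stat, p_value = stats.f_oneway(lab_a, lab_b, lab_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")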
In this chapter we consider several of Excel’s advanced data analysis tools which form part of the Analysis ToolPak add-in, paying particular attention to those tools which relate directly to principles and methods described in this book. The Histogram and Descriptive Statistics tools are described in sections 2.8.1 and 2.8.2 respectively and will not be discussed further in this chapter. Tools which relate less closely to the material in this book are described briefly with references given to where more information may be found.
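As a rough analogue of what the Histogram and Descriptive Statistics tools produce, the following sketch bins data and computes summary statistics in Python with NumPy; the data are invented.

import numpy as np

# Hypothetical repeat measurements.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3])

# Histogram: bin counts and bin edges, as the Histogram tool tabulates.
counts, edges = np.histogram(data, bins=4)
print("bin counts:", counts)
print("bin edges :", edges)

# Basic summary values, as the Descriptive Statistics tool reports.
print(f"mean = {data.mean():.3f}, stdev = {data.std(ddof=1):.3f}")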
Many students of physical and applied science and of engineering find difficulty in coping with the mathematics necessary for the quantitative manipulation of the physical concepts they are studying in their main course. This book is designed to help first and second year undergraduates at universities and polytechnics, as well as technical college students, to find their feet in the important mathematical methods they will need. Throughout the text the physical relevance of the mathematics is constantly stressed and, where it is helpful, use has been made of pictorial mathematics and qualitative verbal descriptions instead of over-compact mathematical symbolism. Topics are presented in three stages: a qualitative introduction, a more formal presentation and an explicit check or worked example. There are many exercises included in the text which are aimed at testing a student's understanding and building his confidence progressively throughout each piece of work.
Continuum mechanics and thermodynamics are foundational theories of many fields of science and engineering. This book presents a fresh perspective on these fundamental topics, connecting micro- and nanoscopic theories and emphasizing topics relevant to understanding solid-state thermo-mechanical behavior. Providing clear, in-depth coverage, the book gives a self-contained treatment of topics directly related to nonlinear materials modeling. It starts with vectors and tensors, finite deformation kinematics, the fundamental balance and conservation laws, and classical thermodynamics. It then discusses the principles of constitutive theory and examples of constitutive models, presents a foundational treatment of energy principles and stability theory, and concludes with example closed-form solutions and the essentials of finite elements. Together with its companion book, Modeling Materials (Cambridge University Press, 2011), this work presents the fundamentals of multiscale materials modeling for graduate students and researchers in physics, materials science, chemistry and engineering.