To introduce conjectures at various points in the course of a historical account in order to fill gaps in the record is surely permissible; for what comes before and after these gaps—i.e. the remote cause and the effect respectively—can enable us to discover the intermediate causes with reasonable certainty, thereby rendering the intervening process intelligible. But to base a historical account solely on conjectures would seem little better than drawing up a plan for a novel. Indeed, such an account could not be described as a conjectural history at all, but merely as a work of fiction.—Nevertheless, what it may be presumptuous to introduce in the course of a history of human actions may well be permissible with reference to the first beginning of that history, for if the beginning is a product of nature, it may be discoverable by conjectural means. In other words, it does not have to be invented but can be deduced from experience, assuming that what was experienced at the beginning of history was no better or worse than what is experienced now—an assumption which accords with the analogy of nature and which has nothing presumptuous about it. Thus, a history of the first development of freedom from its origins as a predisposition in human nature is something quite different from a history of its subsequent course, which must be based exclusively on historical records.
Nevertheless, conjectures should not make undue claims on our assent. On the contrary, they should not present themselves as a serious activity but merely as an exercise in which the imagination, supported by reason, may be allowed to indulge as a healthy mental recreation. Consequently, they cannot stand comparison with a historical account which is put forward and accepted as a genuine record of the same event, a record which is tested by criteria quite different from those derived merely from the philosophy of nature. For this very reason, and because the journey on which I am about to venture is no more than a pleasure trip, I may perhaps hope to be granted permission to employ a sacred document as my map, and at the same time to speculate that the journey which I shall make on the wings of imagination—although not without the guidance of experience as mediated by reason—will follow precisely the same course as that which the sacred text records as history.
Geophysics, as its name indicates, has to do with the physics of the earth and its surrounding atmosphere. Gilbert's discovery that the earth behaves as a great and rather irregular magnet and Newton's theory of gravitation may be said to constitute the beginning of geophysics. Mining and the search for metals date from the earliest times, but the scientific record began with the publication in 1556 of the famous treatise De re metallica by Georgius Agricola, which for many years was the authoritative work on mining. The initial step in applying geophysics to the search for minerals probably was taken in 1843, when Von Wrede pointed out that the magnetic theodolite, used by Lamont to measure variations in the earth's magnetic field, might also be employed to discover bodies of magnetic ore. However, this idea was not acted on until the publication in 1879 of Professor Robert Thalén's book On the Examination of Iron Ore Deposits by Magnetic Methods. The Thalén–Tiberg magnetometer, manufactured in Sweden, and later the Thomson–Thalén instrument, furnished the means of locating the strike, dip, and depth below surface of magnetic dikes.
The continued expansion in the demand for metals of all kinds and the enormous increase in the use of petroleum products since the turn of the century have led to the development of many geophysical techniques of ever-increasing sensitivity for the detection and mapping of unseen deposits and structures.
This book began as a revision of the classic text, Eve and Keys – Applied Geophysics in the Search for Minerals. However, it soon became obvious that the great advances in exploration geophysics during the last two decades have so altered not only the field equipment and practice but also the interpretation techniques that revision was impractical and that a completely new textbook was required.
Readers of textbooks in applied geophysics will often have a background which is strong either in physics or geology but not in both. This book has been written with this in mind so that the physicist may find to his annoyance a detailed explanation of simple physical concepts (for example, energy density) and step-by-step mathematical derivations; on the other hand the geologist may be amused by the over-simplified geological examples and the detailed descriptions of elementary concepts.
The textbook by Eve and Keys was unique in that it furnished a selection of problems for use in the classroom. This feature has been retained in the present book.
Gravity prospecting involves measurements of variations in the gravitational field of the earth. One hopes to locate local masses of greater or lesser density than the surrounding formations and learn something about them from the irregularities in the earth's field. It is not possible, however, to determine a unique source for an observed anomaly. Observations normally are made at the earth's surface, but underground surveys also are carried out occasionally.
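As a concrete illustration of the scale of variation involved, consider the standard textbook expression for the vertical gravity anomaly over a buried sphere of radius R, density contrast Δρ, and depth z to its centre (the numerical values below are invented for the example):

$$ g_z(x) = \frac{4}{3}\pi G R^3 \Delta\rho\,\frac{z}{\left(x^2 + z^2\right)^{3/2}} $$

where x is the horizontal distance from the point directly above the centre and G is the gravitational constant. For R = 100 m, z = 300 m, and Δρ = 500 kg/m³, the peak anomaly g_z(0) = (4/3)πGR³Δρ/z² ≈ 1.6 × 10⁻⁶ m/s², or about 0.16 mGal: a tiny fraction of the earth's field of roughly 9.8 m/s², but well within the reach of modern gravimeters.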
Gravity prospecting is used as a reconnaissance tool in oil exploration; although expensive, it is still considerably cheaper than seismic prospecting. Gravity data are also used to provide constraints in seismic interpretation. In mineral exploration, gravity prospecting usually has been employed as a secondary method, although it is used for detailed follow-up of magnetic and electromagnetic anomalies during integrated base-metal surveys. Gravity surveys are sometimes used in engineering (Arzi, 1975) and archaeological studies.
Like magnetics, radioactivity, and some electrical techniques, gravity is a natural-source method. Local variations in the densities of rocks near the surface cause minute changes in the gravity field. Gravity and magnetics techniques often are grouped together as the potential methods, but there are basic differences between them. Gravity is an inherent property of mass, whereas the magnetic state of matter depends on other factors, such as the inducing fields and/or the orientations of magnetic domains.
The application of several disciplines – geology, geochemistry, geophysics – constitutes an integrated exploration program. In a more restricted sense we may consider the integrated geophysics program as the use of several geophysical techniques in the same area. This type of operation is so commonplace because the exploration geophysicist, by a suitable selection of, say, four methods, may obtain much more than four times the information he would get from any one of them alone.
Before elaborating on this topic it is necessary to point out again the paramount importance of geology in exploration work. Every geological feature, from tectonic blocks of subcontinental size to the smallest rock fracture, may provide a clue in the search for economic minerals. Thus geologic information exerts a most significant influence on the whole exploration program, the choice of area, geophysical techniques, and, above all, the interpretation of results. Without this control the geophysicist figuratively is working in the dark.
The subject of integrated geophysical surveys has received considerable attention in the technical literature since about 1960. In petroleum exploration the combination of gravity and magnetic reconnaissance, plus seismic for both reconnaissance and detail (and, of course, various well-logging techniques during the course of drilling) is well established.
The best combination for an integrated mineral exploration program is not so definite because of the great variety of targets and detection methods available. Base-metal search is a case in point.
In considering the human contribution to systems disasters, it is important to distinguish two kinds of error: active errors, whose effects are felt almost immediately, and latent errors whose adverse consequences may lie dormant within the system for a long time, only becoming evident when they combine with other factors to breach the system's defences (see Rasmussen & Pedersen, 1984). In general, active errors are associated with the performance of the ‘front-line’ operators of a complex system: pilots, air traffic controllers, ships’ officers, control room crews and the like. Latent errors, on the other hand, are most likely to be spawned by those whose activities are removed in both time and space from the direct control interface: designers, high-level decision makers, construction workers, managers and maintenance personnel.
Detailed analyses of recent accidents, most particularly those at Flixborough, Three Mile Island, Heysel Stadium, Bhopal, Chernobyl and Zeebrugge, as well as the Challenger disaster, have made it increasingly apparent that latent errors pose the greatest threat to the safety of a complex system. In the past, reliability analyses and accident investigations have focused primarily upon active operator errors and equipment failures. While operators can, and frequently do, make errors in their attempts to recover from an out-of-tolerance system state, many of the root causes of the emergency were usually present within the system long before these active errors were committed.
The seismic method is by far the most important geophysical technique in terms of expenditures (see Table 1.1) and number of geophysicists involved. Its predominance is due to high accuracy, high resolution, and great penetration. The widespread use of seismic methods is principally in exploring for petroleum: the locations for exploratory wells rarely are chosen without seismic information. Seismic methods are also important in groundwater searches and in civil engineering, especially to measure the depth to bedrock in connection with the construction of large buildings, dams, and highways, and in harbor surveys. Seismic techniques have found little application in direct exploration for minerals where interfaces between different rock types are highly irregular. However, they are useful in locating features, such as buried channels, in which heavy minerals may be accumulated.
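For the depth-to-bedrock application mentioned above, the simplest model is a single horizontal refractor, for which the standard two-layer intercept-time relation gives the depth directly. A minimal sketch, assuming made-up velocities and intercept time (the function name is ours, not from this text):

```python
# Two-layer refraction: depth to a horizontal refractor from the
# intercept time of the head-wave arrival (standard textbook relation).
import math

def depth_to_bedrock(t_i, v1, v2):
    """Depth z (m) from intercept time t_i (s), overburden velocity
    v1 (m/s), and bedrock velocity v2 (m/s):
    z = t_i * v1 * v2 / (2 * sqrt(v2**2 - v1**2))."""
    return t_i * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

# Example: 20 ms intercept, 800 m/s soil over 3000 m/s bedrock.
print(depth_to_bedrock(0.020, 800.0, 3000.0))  # ~8.3 m
```

In practice the intercept time and the two velocities are read off a plot of first-arrival time against source–receiver distance.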
Exploration seismology is an offspring of earthquake seismology. When an earthquake occurs, the earth is fractured and the rocks on opposite sides of the fracture move relative to one another. Such a rupture generates seismic waves that travel outward from the fracture surface and are recorded at various sites using seismographs. Seismologists use the data to deduce information about the nature of the rocks through which the earthquake waves traveled.
Exploration seismic methods involve basically the same type of measurements as earthquake seismology. However, the energy sources are controlled and movable, and the distances between the source and the recording points are relatively small.
Magnetic and gravity methods have much in common, but magnetics is generally more complex and variations in the magnetic field are more erratic and localized. This is partly due to the difference between the dipolar magnetic field and the monopolar gravity field, partly due to the variable direction of the magnetic field, whereas the gravity field is always in the vertical direction, and partly due to the time-dependence of the magnetic field, whereas the gravity field is time-invariant (ignoring small tidal variations). Whereas a gravity map usually is dominated by regional effects, a magnetic map generally shows a multitude of local anomalies. Magnetic measurements are made more easily and cheaply than most geophysical measurements and corrections are practically unnecessary. Magnetic field variations are often diagnostic of mineral structures as well as regional structures, and the magnetic method is the most versatile of geophysical prospecting techniques. However, like all potential methods, magnetic methods lack uniqueness of interpretation.
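The monopole/dipole contrast can be made concrete with the standard point-source expressions (quoted for illustration only). A point mass m produces an attraction

$$ g = \frac{Gm}{r^2} $$

directed toward the mass and falling off as the inverse square of distance, whereas the field of a magnetic dipole of moment m is of order

$$ B \sim \frac{\mu_0}{4\pi}\,\frac{m}{r^3} $$

falling off as the inverse cube and varying in direction around the source. The faster decay and the directional dependence are two reasons magnetic anomalies appear more localized and erratic than gravity anomalies.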
History of Magnetic Methods
The study of the earth's magnetism is the oldest branch of geophysics. It has been known for more than three centuries that the Earth behaves as a large and somewhat irregular magnet. Sir William Gilbert (1544–1603) made the first scientific investigation of terrestrial magnetism. He recorded in De Magnete that knowledge of the north-seeking property of a magnetite splinter (a lodestone or leading stone) was brought to Europe from China by Marco Polo.
Various spontaneous ground potentials were discussed in Section 5.2.1. Only two of these have been considered seriously in surface exploration, although the self-potential method is used in a variety of ways in well logging (§11.3). Mineralization potentials produced mainly by sulfides have long been the main target of interest, although recently exploration for geothermal sources has included self-potential surveys as well. The remainder of these spontaneous potentials may be classified as background or noise. This also means that geothermal anomalies become noise if they occur in the vicinity of a sulfide survey and vice versa (§6.1.4). A more detailed description of these sources follows.
Background potentials are created by fluid streaming, bioelectric activity in vegetation, varying electrolytic concentrations in ground water, and other geochemical action. Their amplitudes vary greatly but generally are less than 100 mV. On the average, over intervals of several thousand feet, the potentials usually add up to zero, because they are as likely to be positive as negative.
In addition there are several characteristic regional background potentials. One is a gradient of the order of 30 mV/km, which sometimes extends over several kilometers and may be either positive or negative. It is probably due to gradual changes in diffusion and electrolytic potentials in ground water. Sometimes a more abrupt change will result in a baseline shift of background potential. Another regional gradient of similar magnitude seems to be associated with topography.
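Because these regional gradients are roughly linear over the length of a survey line, a common first step in isolating a mineralization anomaly is to fit and subtract a linear trend from the profile. A minimal sketch, with invented station positions and readings:

```python
# Remove a linear regional gradient from a self-potential profile
# (all numbers below are invented for illustration).
import numpy as np

x_km = np.arange(0.0, 5.0, 0.5)  # station positions along the line, km
sp_mv = np.array([2.0, 4.0, 3.0, 7.0, 6.0,
                  -40.0, -55.0, -38.0, 12.0, 14.0])  # SP readings, mV

# Fit a first-degree polynomial (the regional trend, in mV/km) and
# subtract it to leave the residual anomaly.
slope, intercept = np.polyfit(x_km, sp_mv, 1)
residual = sp_mv - (slope * x_km + intercept)
print("regional gradient:", round(slope, 1), "mV/km")
print("residual anomaly (mV):", residual.round(1))
```

In practice, stations lying on an obvious anomaly would be excluded from the trend fit, since including them (as here) biases the estimated gradient.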
In Chapter 1, a distinction was made between error types and error forms. Error types are differentiated according to the performance levels at which they occur. Error forms, on the other hand, are pervasive varieties of fallibility that are evident at all performance levels. Their ubiquity indicates that they are rooted in universal processes that influence the entire spectrum of cognitive activities.
The view advanced in this chapter is that error forms are shaped primarily by two factors: similarity and frequency. These, in turn, have their origins in the automatic retrieval processes – similarity-matching and frequency-gambling – by which knowledge structures are located and their products delivered to consciousness (thoughts, words, images, etc.) or to the outside world (action, speech or gesture). It is also argued that the more cognitive operations are in some way underspecified, the more likely it is that error forms will be shaped by the frequency-gambling heuristic.
If the study of human error is to make a useful contribution to the safety and efficiency of hazardous technologies, it must be able to offer their designers and operators some workable generalizations regarding the information-handling properties of a system's human participants (see Card, Moran & Newell, 1983). This chapter explores the generality of one such approximation:
When cognitive operations are underspecified, they tend to default to contextually appropriate, high-frequency responses.
Exactly what information is missing from a sufficient specification, or which controlling agency fails to provide it, will vary with the nature of the cognitive activity being performed.
The 14 or so years that have elapsed since the writing of Applied Geophysics have seen many changes as a result of better instrumentation, the extensive application of computer techniques, and more complete understanding of the factors that influence mineral accumulations. Changes have not been uniform within the various areas of applied geophysics, however. In gravity field work there have been few changes except for the use of helicopters and inertial navigation, but the ability to calculate the gravity field of a complicated model, use the differences from the measured field to modify the model, and iterate the calculations has significantly changed gravity interpretation. The greatly improved sensitivities of proton-precession and optically pumped magnetometers and the use of gradiometers have considerably increased the number of meaningful magnetic anomalies extractable from magnetic data; iterative interpretation has also had a significant impact, as in gravity interpretation. No major individual innovations have affected seismic exploration, but a combination of minor improvements has produced probably the greatest improvement in seismic data quality in any comparable period of time. The improved data quality has resulted in new types of interpretation, such as seismic stratigraphy; now, interactive capabilities promise major interpretational advances. Whereas there has been little change in self-potential methods, magnetotellurics has blossomed from a research tool to a practical exploration method. Resistivity methods have changed only a little, but perhaps the greatest changes in any area result from the development of a number of new electromagnetic exploration methods.
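A minimal sketch of the iterate-and-compare loop described above, using the buried-sphere expression from the gravity discussion as the forward model and adjusting only its depth (a deliberate oversimplification; real models have many free parameters, and all values here are invented):

```python
# Iterative gravity interpretation: compute the field of a model,
# compare it with the observations, adjust the model, and repeat.
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def sphere_gz(x, z, r=100.0, drho=500.0):
    """Vertical gravity anomaly (m/s^2) of a sphere of radius r (m) and
    density contrast drho (kg/m^3) with centre at depth z (m),
    evaluated along a surface profile x (m)."""
    return (4.0 / 3.0) * np.pi * G * r**3 * drho * z / (x**2 + z**2)**1.5

x = np.linspace(-1000.0, 1000.0, 81)
observed = sphere_gz(x, z=300.0)  # stands in for the measured field

z_est = 500.0                     # starting model
for _ in range(60):
    misfit = observed - sphere_gz(x, z_est)
    # Finite-difference sensitivity of the model field to depth.
    sens = sphere_gz(x, z_est + 1.0) - sphere_gz(x, z_est)
    # Step the depth in whichever direction reduces the misfit.
    z_est += 5.0 * np.sign(np.sum(misfit * sens))

print(z_est)  # steps from the 500 m guess down to the true 300 m
```

Real schemes adjust shape, density, and position simultaneously, and use proper least-squares updates rather than the fixed step shown here; the loop structure, however, is the same.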
Just over 60 years ago, Spearman (1928) grumbled that “crammed as psychological writings are, and must needs be, with allusions to errors in an incidental manner, they hardly ever arrive at considering these profoundly, or even systematically.” Even at the time, Spearman's lament was not altogether justified (see Chapter 2); but if he were around today, he would find still less cause for complaint. The past decade has seen a rapid increase in what might loosely be called ‘studies of errors for their own sake’.
The most obvious impetus for this renewed interest has been a growing public concern over the terrible cost of human error: the Tenerife runway collision in 1977, Three Mile Island two years later, the Bhopal methyl isocyanate tragedy in 1984, the Challenger and Chernobyl disasters of 1986, the capsize of the Herald of Free Enterprise, the King's Cross tube station fire in 1987 and the Piper Alpha oil platform explosion in 1988. There is nothing new about tragic accidents caused by human error; but in the past, the injurious consequences were usually confined to the immediate vicinity of the disaster. Now, the nature and the scale of certain potentially hazardous technologies, especially nuclear power plants, mean that human errors can have adverse effects upon whole continents over several generations.
Aside from these world events, from the mid-1970s onwards theoretical and methodological developments within cognitive psychology have also acted to make errors a proper study in their own right.
The purpose of this chapter is to provide a conceptual framework – the generic error-modelling system (GEMS) – within which to locate the origins of the basic human error types. This structure is derived in large part from Rasmussen's skill-rule-knowledge classification of human performance (outlined in Chapter 2), and yields three basic error types:
skill-based slips (and lapses)
rule-based mistakes
knowledge-based mistakes
In particular, GEMS seeks to integrate two hitherto distinct areas of error research: (a) slips and lapses, in which actions deviate from current intention due to execution failures and/or storage failures (see Reason, 1979, 1984a, b; Reason & Mycielska, 1982; Norman, 1981; Norman & Shallice, 1980); and (b) mistakes, in which the actions may run according to plan, but where the plan is inadequate to achieve its desired outcome (Simon, 1957, 1983; Wason & Johnson-Laird, 1972; Rasmussen & Jensen, 1974; Nisbett & Ross, 1980; Rouse, 1981; Hunt & Rouse, 1984; Kahneman, Slovic & Tversky, 1982; Evans, 1983).
The chapter begins by explaining why the simple slips/mistakes distinction (outlined in Chapter 1) is not sufficient to capture all of the basic error types. The evidence demands that mistakes be divided into at least two kinds: rule-based mistakes and knowledge-based mistakes. The three error types (skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes) may be differentiated by a variety of processing, representational and task-related factors, as discussed in Section 2.
Human error is a very large subject, quite as extensive as that covered by the term human performance. But these daunting proportions can be reduced in at least two ways. The topic can be treated in a broad but shallow fashion, aiming at a wide though superficial coverage of many well-documented error types. Or, an attempt can be made to carve out a narrow but relatively deep slice, trading comprehensiveness for a chance to get at some of the more general principles of error production. I have tried to achieve the latter.
The book is written with a mixed readership in mind: cognitive psychologists, human factors professionals, safety managers and reliability engineers – and, of course, their students. As far as possible, I have tried to make both the theoretical and the practical aspects of the book accessible to all. In other words, it presumes little in the way of prior specialist knowledge of either kind. Although some familiarity with the way psychologists think, write and handle evidence is clearly an advantage, it is not a necessary qualification for tackling the book. Nor, for that matter, should an unfamiliarity with high-technology systems deter psychologists from reading the last two chapters.
Errors mean different things to different people. For cognitive theorists, they offer important clues to the covert control processes underlying routine human action. To applied practitioners, they remain the main threat to the safe operation of high-risk technologies.