There is an even more severe conceptual difficulty for the modular view of mind than how it can operate efficiently. The efficiency problem can at least be posed in a ‘mechanistic’ framework closely related to the framework in which ‘modularity’ itself is expressed. A more complex issue is why the human cognitive-processing system, which is apparently modular, should have the property of being conscious, unlike most modular systems – for example, present-day complex machines.
The obvious strategy within a modular approach is to identify some aspects of the operation of some particular module – say, its input – as conscious experience. However, one is then faced with the question of what could be so special about the processing in that module as to give its input such exceptional status. No real progress appears to have been made. One appears merely to be taking the first step on the road to infinite regress.
The situation is worse than it appears. Not only is there no apparent line of attack on how and why a modular system might be conscious, but an explanation of consciousness within the conceptual framework of modularity would probably need to be ‘functionalist’ (Putnam, 1960). In other words, consciousness would correspond to some ‘system-level’ property (i.e. information-processing characteristic) of the brain and not to some aspect of its material constituents.
The modular model of the mind that is suggested by cognitive neuropsychology research contains a conceptual lacuna. The existence of many special-purpose processing systems, each of which can operate autonomously, would not seem sufficient to produce coherent and effective operation for the whole system. Does this function itself require special-purpose units?
Another aspect of the functional architecture, in addition to modularity, needs to be considered in responding to this question. It has been widely assumed in theorising on cognitive processes that all the routine cognitive and motor skills that we have are controlled by more or less program-like entities, such as ‘productions’ (e.g. Newell & Simon, 1972) or action or thought ‘schemata’ (Schmidt, 1975; Rumelhart & Ortony, 1977). It is presumed that just as the mind contains representations of a finite but large and extensible set of words, so it contains a large but finite set of action or thought schemata. What would any individual thought or action schema control? There would be an enormous variety of operations. Take, for instance, how to use a table knife for cutting, for pushing, or for spreading food; how to subtract one number from another; how to rotate an object mentally; and so on. Moreover, schema control would be on multiple levels; so the schema for making a sandwich could call those for cutting bread and for spreading butter.
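The multi-level control just described can be made concrete with a small sketch. The sketch below is purely illustrative and not a model proposed in the text: the `Schema` class and the particular schema names are hypothetical, assuming only that a schema is a stored control unit that can invoke lower-level schemata.

```python
# Illustrative sketch only: a schema as a named control unit that can
# call sub-schemata, mirroring the multi-level control described above.
# All class and schema names are hypothetical, not taken from the text.

class Schema:
    def __init__(self, name, steps=None):
        self.name = name              # e.g. 'make-sandwich'
        self.steps = steps or []      # ordered sub-schemata this schema calls

    def run(self):
        print(f"executing schema: {self.name}")
        for sub in self.steps:        # a higher-level schema triggers lower-level ones
            sub.run()

# Low-level action schemata (terminal: they call no further schemata).
cut_bread = Schema("cut-bread")
spread_butter = Schema("spread-butter")

# A higher-level schema calls the lower-level ones in order,
# as in the sandwich example in the text.
make_sandwich = Schema("make-sandwich", steps=[cut_bread, spread_butter])
make_sandwich.run()
```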
Separate or Common Input and Output Processes: Are the Two Empirically Distinguishable?
The last three chapters have been concerned with the application of the cognitive neuropsychology method to whole domains, not just with the isolation of individual subsystems. Yet if some form of modularity framework is assumed as a general design principle for cognition, the conclusion that the orthographic, phonological, and semantic analyses of words should be conducted by functionally distinct subsystems is not too surprising. The sights, sounds, and meanings of words are phenomenologically very different. If one were to design a system to categorise words orthographically from the output of earlier visual processing, another to categorise them phonologically from the output of earlier auditory processing, and a third to specify them semantically from the outputs of the orthographic and phonological analyses, then the computational requirements of the three processes would be sufficiently distinct to make a modular ‘solution’ plausible.
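To see why the computational requirements point toward a modular ‘solution’, here is a minimal sketch of the three-subsystem arrangement just described. It is an assumption-laden toy rather than a claim about the actual architecture: the function names and the string-based codes are invented for illustration.

```python
# Toy sketch of the modular arrangement described above: three functionally
# distinct subsystems with different inputs and outputs. All names and
# representations are hypothetical; this is not a model from the text.

def orthographic_analysis(visual_input: str) -> str:
    # Categorises a word form from the output of earlier visual processing.
    return visual_input.strip().upper()                  # stand-in orthographic code

def phonological_analysis(auditory_input: str) -> str:
    # Categorises a word form from the output of earlier auditory processing.
    return "/" + auditory_input.strip().lower() + "/"    # stand-in phonological code

def semantic_analysis(orthographic: str, phonological: str) -> dict:
    # Specifies meaning from the outputs of the other two subsystems.
    return {"orthography": orthographic,
            "phonology": phonological,
            "meaning": f"<concept for {orthographic.lower()}>"}

# The three subsystems compose into a pipeline, yet each could in principle
# be damaged independently of the others.
o = orthographic_analysis("table")
p = phonological_analysis("table")
print(semantic_analysis(o, p))
```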
In this chapter, an issue will be addressed for which general design principles and phenomenology do not provide any obvious answer. What is the relation between the sets of conclusions reached in chapters 5 and 6? To put it more generally, are the central representations and processes used by output systems the same as those used by input systems? On the claims being made for the cognitive neuropsychology method, this is just the sort of question that the approach should be suited to answer.
For 100 years, it has been well known that the study of the cognitive problems of patients suffering from neurological diseases can produce strikingly counterintuitive observations. From time to time, research workers studying normal function have been strongly influenced by such observations or by the ideas of the neurologists who made them. Bartlett (1932) and Hebb (1949) are two examples. However, in general, neuropsychology has had little impact on the study of normal function.
With any knowledge of the history of clinical neuropsychology, it is easy to understand why this neglect occurred. The standard of description of the psychological impairments of patients was low, often being little more than the bald statement of the clinical opinion of the investigator. There was frequently a dramatic contrast between the vagueness of the psychological account of the disorder and the precision with which anatomical investigation of the lesion that had given rise to it was carried out at post-mortem. Also, the field, like psychology itself, could agree on little but the most obvious and basic theories. Typical are the disputes about the existence of the syndrome visual object agnosia, a specific difficulty in the perception of objects when both sensation and the intellect are intact. The syndrome was widely accepted as real in the golden age of the flowering of neuropsychology (1860-1905) (e.g. Lissauer, 1890). Yet its existence was still being denied nearly a century later (e.g. Bay, 1953; Bender & Feldman, 1972).
Ten years of work on the acquired dyslexias has been basically positive as far as the broader cognitive neuropsychology research program is concerned. However, the overall picture is complicated: the use of the syndrome-complex approach, together with the large variety of syndromes and sub-syndromes that have been isolated, has left the natural lines of functional cleavage in the domain not too clearly visible. As a counterpoint, it would be useful to take another domain where the correspondence between syndromes and normal function is simpler. The complementary set of disorders – the agraphias, impairments in the writing process – provides an excellent example in this respect.
Before 1980, agraphia was treated by neuropsychologists as a poor relation of aphasia. Writing was viewed as a highly complex secondary skill, with forms of breakdown of little theoretical interest. Most work was concerned with the pattern of the concomitant aphasic or apraxic disorders that occurred with agraphic difficulties (see, e.g., Marcie & Hécaen, 1979). One influential view was that cases of agraphia that appeared to be pure were not the result of damage to specific mechanisms concerned with writing, but were the secondary effect of a confusional state characterised by a reduction and/or ready shifting of attention (Chedru & Geschwind, 1972). Writing, it was argued, was affected because it is a complex skill that is rarely overlearned.
In chapter 9, it was shown that the use of neuropsychological group studies is not likely to lead to rapid advance in our understanding of normal function. Earlier, it was argued that, by contrast, the single-case study approach is an effective source of evidence. The argument was, however, a pragmatic one. The method leads to conclusions that are internally consistent and that mirror those arrived at by other means. Yet the theoretical structures used to interpret the different types of evidence may, as Rosenthal (1984) has pointed out, seem satisfactory as much for the ease and simplicity with which we can use them as for their empirical adequacy in modelling reality. If one examines the theoretical inferences made from single-case studies in earlier chapters, it becomes clear that they are delicately balanced on a set of implicit supporting assumptions. The inference procedures therefore need to be examined directly to assess whether they can bear the theoretical weight placed on them. In fact, those who have adopted the single-case approach have only rarely attempted to justify their leap from findings on a single patient to a general conclusion.
The most rigorous treatment of the inference procedure is that of Caramazza (1986).
The Selective Preservation of Phonological Reading
Chapter 4 began with the programme of understanding dyslexic difficulties using a multiple-route model of the normal reading process. On this programme, the selective impairment of any individual route would correspond to a form of central dyslexia. However, the one candidate reading disorder considered, surface dyslexia, has proved a disappointment. Far from consisting of a selective impairment of the semantic reading route, in its best known form, it seems to consist of compensatory behaviour for an underlying peripheral dyslexic difficulty.
Can an improvement be obtained using the dissociation approach? Can one adapt the method of defining syndromes by dissociations in order to lessen the probability that the dissociation reflects only the operation of a compensatory procedure? One approach is to insist that the better performed task is not merely ‘better’ than the poorly performed task, but also normal or nearly so on any relevant measure. In the terminology to be developed in chapter 10, this dissociation is a ‘classical’ or near-classical one. In this case, it would be unlikely to arise as a result of the operation of a laborious compensatory strategy. Having made this distinction, I will, however, immediately relax it. The critical aspects that distinguish the use of, say, a normal phonological reading procedure – if somewhat impaired – from the compensatory strategies discussed in chapter 4 are the speed and fluency of reading.
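The distinction between a mere performance difference and a ‘classical’ dissociation can be stated operationally. The sketch below is one hedged way of coding the criterion, assuming (purely for illustration) that control norms are available as a mean and standard deviation and that ‘normal or nearly so’ means within two standard deviations of the control mean; neither threshold comes from the text.

```python
# Hedged sketch of a 'classical' dissociation check. The z-score cut-offs
# are illustrative assumptions, not criteria given in the text.

def z_score(score: float, control_mean: float, control_sd: float) -> float:
    return (score - control_mean) / control_sd

def classify_dissociation(task_a, task_b, norms_a, norms_b,
                          normal_cutoff=-2.0, impaired_cutoff=-2.0):
    """task_a, task_b: patient scores; norms_a, norms_b: (mean, sd) from controls."""
    za = z_score(task_a, *norms_a)
    zb = z_score(task_b, *norms_b)
    if za >= normal_cutoff and zb < impaired_cutoff:
        # The better task is within the normal range and the worse task is
        # clearly impaired: unlikely to reflect a laborious compensatory strategy.
        return "classical dissociation"
    if za > zb:
        return "trend dissociation (task A merely better)"
    return "no dissociation"

# Example: one task preserved (near the control mean), the other impaired.
print(classify_dissociation(48, 12, norms_a=(50, 5), norms_b=(45, 6)))
```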
To isolate a new functional syndrome that does not have its characteristics mapped out by previous studies is a difficult and delicate process. The investigator has to be sensitive to the presence of a novel dissociation, itself a far from straightforward matter. Then a set of simpler and duller explanations in terms of syndromes that are already known has to be assessed. Only if these can be adequately rejected has a putative functional syndrome been isolated, and only then can one begin to consider its theoretical implications. In this chapter, I am going to illustrate the process by considering a single syndrome – the short-term memory syndrome – from both a clinical and a theoretical perspective.
It is in clinical practice that new syndromes are detected. An unexpected result on a particular test is noticed and explored. In the present case, the unexpected result occurred on the Wechsler IQ battery. Many clinicians begin their assessment of a patient by using Wechsler subtests, not primarily to obtain an estimate of IQ but to see if any particular pattern of scores occurs across the different subtests (e.g. McFie, 1975; Lezak, 1976). In the late 1960s, Elizabeth Warrington was using this procedure to assess a patient, KF, who had sustained a severe head injury. He had a very low score on the Digit Span subtest, with performance on other subtests being relatively normal (Table 3.1). Obviously, no theoretical inferences can be made unless the deficit is reliable.
The last few chapters have shown the cognitive neuropsychology approach to be applicable to a number of different topics. Yet the areas treated have actually covered a fairly narrow range by comparison with those that are conventionally included in, say, either clinical neuropsychology or cognitive psychology. The topics discussed so far have all been aspects of language. In later chapters, the approach will be applied much more widely by considering areas where the method provides fascinating glimpses into relatively unexplored terrain. In general, though, these areas are not too helpful for an overall assessment of the solidity of the cognitive neuropsychology methodology. One area outside language – visual perception – does contain a set of interesting and solid neuropsychological studies, and the inferences drawn from these investigations can be compared with those derived from completely different disciplines.
This area is important to consider for another reason. So far, it has been argued that the only effective methodology in cognitive neuropsychology is the single-case study. Group studies, it has been suggested, particularly in chapter 7, are not an effective source of evidence. This view is too extreme. Indeed, some of the more interesting studies on disorders of visual perception have been group studies, although of a type somewhat different from those discussed in chapter 7.
In this chapter and the next, I consider whether the cognitive neuropsychology research programme is working at a level more complex than a single functional syndrome. Can the approach provide information about the organisation of a group of subsystems, and not just about the functioning of a single one? If each potential subsystem could be shown to be damaged by a pure syndrome specific to it, the power and plausibility of the approach would be greatly increased.
What domain should one choose to explore in detail in order to assess whether the breakdown of related functions in different patients is caused by damage to different components of a modular organisation? It might seem natural to take a domain like language or object perception, in which any such modular organisation would have been honed by evolution. Instead, I am going to consider the breakdown of reading, a skill that is specific not only to one species, but also to what is, from an evolutionary perspective, a tiny time period. A prerequisite for taking such a domain as a prototype is that contrary to one of Fodor's (1983) assumptions, the human modular structure must be affected by the experience of the organism, with respect to not only the operation of individual subsystems, but also the organisation of the functional architecture itself.
There are a number of reasons for choosing the reading system and the syndromes that occur when it is damaged – the acquired dyslexias.
Initiation of an action sequence can occur in an unintended fashion. This is well shown by the existence of certain types of action lapses called ‘capture errors’ (Reason, 1979; Norman, 1981), as, for instance, William James's (1890) famous example of going upstairs to change and discovering himself in bed. Such errors tend to occur when one is preoccupied with some other line of thought, as Reason (1984) has shown. Action initiation is occurring in parallel with some other activity. Unintended actions do not, though, occur only when they are inappropriate. They can be both appropriate and unmonitored. This fits with the suggestion made early in chapter 13 that the control of which subsystems will be devoted to what task is often carried out in a decentralised fashion.
Actions such as these can be contrasted with ones that are preceded by ‘an additional conscious element in the shape of a fiat, mandate or expressed consent’, to quote William James (1890). When we decide or choose or intend or concentrate or prepare, decentralised control of the operation of particular subsystems does not appear to be the sole principle operating. How does the control of either of these types of action relate to the discussions on cognitive control at the beginning of chapter 13? Such phenomenological contrasts by themselves provide only a very shaky basis for theorising.
If Lashley's (1929) idea of mass action were valid, then neuropsychology would be of little relevance for understanding normal function. Any form of neurological damage would deplete to a greater or lesser degree the available amount of some general resource, say the mythical g. Knowing which tasks a patient could or could not perform would enable us to partition tasks on a difficulty scale. It would tell us little, if anything, about how the system operated.
If one considers the design principles that might underlie cognitive systems, a rough contrast can be drawn between systems based on equipotentiality (e.g., those that follow principles such as mass action) and more modular ones. At a metatheoretical level, some form of the ‘modularity’ thesis is probably the most widely accepted position in the philosophy of psychology today (e.g. Chomsky, 1980; Morton, 1981; Marr, 1982; Fodor, 1983). Over the past 30 years, the arguments for the position have become increasingly compelling and diverse; they are of a number of different types: computational, linguistic, physiological, and psychological.
The most basic argument is the computational one, which we owe to Simon (1969) and Marr (1976). In Marr's words:
Any large computation should be split up and implemented as a collection of small sub-parts that are as nearly independent of one another as the overall task allows. […]
Twenty-five years ago, human memory was a self-contained topic. It had its own laws, its own empirical paradigms, its chain of father figures leading back to Ebbinghaus. Yet in the past 15 years, memory research has been increasingly integrated with other areas of psychology. Short-term memory has almost hived off into perception and language. Semantic memory is now approached from the perspective of general models of cognition. Recently, links have been developed with attention (e.g. Hasher & Zacks, 1979). Before the mid-1970s, research on amnesia had much the same type of isolation as memory itself had had 15 years before. Admittedly, some ideas from the study of normal memory were beginning to be influential, such as ‘levels of processing’, and the phenomena being discussed were of much greater intrinsic interest than the interference paradigms of traditional memory research on normal subjects. Yet amnesia research was still very much a closed world, with debates couched in the conceptual terms of the 1950s. Moreover, there were fierce empirical disputes in the field about whether key results arose from artefacts.
Since the mid-1970s, there has been a great change. As was discussed in chapter 2, the disputes over replication have, to a considerable extent, been resolved. It is now widely accepted that many patients with severe memory disorders have additional damage to other processing systems, which can lead to the existence of observed associations between memory and non-memory disorders that may be functionally misleading.