The first edition of this book was about artificial intelligence (AI). This second edition is about education. It is hard to see how this could be. Are these two subjects really in any way the same? My answer is that fundamentally they are, but of course, I recognize that such a notion would not necessarily be accepted as gospel truth. The common element is learning. Without learning there are neither intelligent machines nor intelligent people.
The subtitle of the original Dynamic Memory was “a theory of reminding and learning in computers and people”. In the late 1970s and early 1980s, I was fascinated by the idea that computers could be as intelligent as people. My assumption was that if we could figure out what intelligence was like in people, then we could get computers to model people. Detail the process sufficiently and - presto! - intelligent machines. I no longer hold such views.
Since 1981, not as much has happened in AI as one might have hoped. The goal of building a dynamic memory, a memory that changed over time as a result of its experiences, has proven to be quite difficult to achieve. The major reason for this is really one of content rather than structure. It is not so much that we can't figure out how such a memory might be organized, although this is indeed a difficult problem. Rather, we simply were not able to even begin to acquire the content of a dynamic memory.
Any memory system must have the ability to cope with new information in a reasonable way, so that new input causes adjustments in the system. A dynamic memory system is altered in some way by every experience it processes. A memory system that fails to learn from its experiences is unlikely to be very useful. In addition, any good memory system must be capable of finding what is contained within itself. This seems to go without saying, but the issue of what to find can be quite a problem. When we process events as they happen, we need to find particular episodes in memory that are closely related to the current input we are processing. But how do we define relatedness? What does it mean for one experience to be like another? Under what labels, using what indices, do we look for related episodes?
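One way to make the retrieval question concrete is a toy sketch in Python. It is only an illustration of index-based reminding, not an implementation proposed in this book: the class name, the string labels, and the sample episodes are all assumptions invented for the example. Each episode is filed under simple indices, and processing a new input both returns earlier episodes that share an index (the "remindings") and alters the store.

```python
from collections import defaultdict

class DynamicMemory:
    """Toy episodic store: episodes are filed under string indices, and
    processing a new input both retrieves related episodes and updates the store."""

    def __init__(self):
        # index label -> episodes previously filed under that label
        self.index = defaultdict(list)

    def process(self, episode, labels):
        """Return 'remindings' (earlier episodes sharing a label), then file the new episode."""
        remindings = [old for label in labels for old in self.index[label]]
        for label in labels:
            self.index[label].append(episode)  # every experience alters the memory
        return remindings

memory = DynamicMemory()
memory.process("a slow restaurant meal", ["waiting", "paying"])
print(memory.process("a long wait at the dentist", ["waiting", "drilling"]))
# -> ['a slow restaurant meal']
```

Even this crude sketch shows where the hard problems lie: everything depends on choosing the right labels, which is exactly the question of what it means for one experience to be like another.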
The phenomenon of reminding sheds light on both the problem of retrieval and our ability to learn. This crucial aspect of human memory received little attention from researchers on memory prior to the first edition of this book. (For example, in a highly regarded book that attempts to catalog research by psychologists on memory [Crowder 1976], reminding does not even appear in the subject index.) Yet reminding is an everyday occurrence, a common feature of memory. We are reminded of one person by another, or of one building by another. More significant are the remindings that occur across situations. One event can remind you of another.
Remindings can occur both in context and across contexts. To understand learning, we must attempt to understand the nature of the structures in memory that contain the episodes of which we are reminded. Educators especially need to understand the nature of these structures because teaching means facilitating changes both in mental structures and in the organization of those structures.
Change in memory depends upon reminding. We cannot alter a memory structure without somehow melding a current experience with a prior one. Reminding is also about prediction. When we find a structure in memory to help us process a new experience, that structure is, in essence, predicting that a new experience will turn out just like an old one. In a sense, too, learning is about predicting outcomes. When we enter a new situation, we are interested in how it will turn out. This can be just a passive wondering about how events will unfold, or it can be an active undertaking to make events play out in a certain way. When we learn, we are learning about which actions will cause which effects, and which events normally follow other events; we are also learning to distinguish between the long-term and short-term effects of an action.
Predictions come in a variety of forms and depend upon remindings that can occur across contexts. We see analogies. We make generalizations. We come to conclusions about how things will turn out.
If we are going to teach nonconscious knowledge, we must understand what this kind of knowledge looks like and where it is used naturally. It is hard to teach what you can't talk about. Nonconscious knowledge isn't all that difficult to see, however. For example, as described in Chapter 11, when people have trouble falling asleep, they report that their minds are racing. Similarly, when they wake up earlier than they would like, and want to fall back to sleep but can't, their minds seem to have a mind of their own. They find themselves thinking about things that seem unnecessary to worry about, or about subjects or problems they have been avoiding.
The sense that the mind has a mind of its own relates strongly to what people refer to when they speak about consciousness. Clearly, we can know what we think. We view ourselves thinking when we're in a semi-wakeful state, and cannot stop ourselves from doing so. We are conscious of our thoughts.
But, if this is indeed what is meant by consciousness, it is an odd situation, to say the least. We may “hear ourselves thinking,” but we seem to have no control over the process. We can, of course, interrupt the process, give in to it if you will, and begin to think harder about what our minds were thinking about anyway, eliminating the mind's racing and forcing an order to things. The curious thing is that we have no words to describe these various states.
For thousands, maybe millions, of years people have been telling stories to each other. They have told stories around the campfire, they have traveled from town to town telling stories to relate the news of the day, they have told stories transmitted by electronic means to passive audiences incapable of doing anything but listening (and watching). Whatever the means, and whatever the venue, storytelling seems to play a major role in human interaction. We get reminded of stories and we use them to hold up our end of the conversation. In essence, conversation is a mutual remind-athon. Stories follow stories. In a group conversation we take turns in storytelling, as we each wait for our chance to say what we are reminded of.
In some sense, stories seem to be almost the opposite of scripts when viewed as a part of the functioning of memory. Stories are our personal take on what isn't scriptlike about the world; we don't tell a story unless it deviates from the norm in some interesting way. Stories embody our attempts to cope with complexity, whereas scripts obviate the need to think. No matter what the situation, people who have a script to apply need do little thinking; they just do what the script says and they can choose to ignore what doesn't fit. People have thousands of highly personal scripts used on a daily basis that others do not share. Every mundane aspect of life that requires little or no thought can be assumed to be a script.
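The contrast can be illustrated with a rough sketch, again in Python and again only an assumption-laden toy rather than a representation drawn from this book: a script is treated as a fixed sequence of expected events, and the only events worth telling a story about are the ones the script does not cover. The scene names below are invented for the example.

```python
# A script as a stereotyped event sequence (illustrative scene names only).
RESTAURANT_SCRIPT = ["enter", "be seated", "order", "eat", "pay", "leave"]

def story_worthy(observed_events, script=RESTAURANT_SCRIPT):
    """Return the events the script does not account for: the mundane parts
    require no thought, and only the deviations invite a story."""
    return [event for event in observed_events if event not in script]

# An uneventful dinner yields nothing to tell; a fire alarm does.
print(story_worthy(["enter", "be seated", "order", "eat", "pay", "leave"]))  # []
print(story_worthy(["enter", "be seated", "order", "fire alarm", "leave"]))  # ['fire alarm']
```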
How are folk psychology and scientific psychology related? Are they complementary, or in competition? To what extent do they operate at the same explanatory level? Should scientific psychology assume the basic ontology and some, at least, of the categories recognised by folk psychology? Or should we say that in psychology, as elsewhere, science has little to learn from common sense, and so there is no reason why ‘a serious empirical psychologist should care what the ordinary concept of belief is any more than a serious physicist should care what the ordinary concept of force is’ (Cummins, 1991)? The present chapter begins to address these questions.
Realisms and anti-realisms
Before we can determine what, if anything, scientific psychology should take from the folk, we must have some idea of what there is to take. This is a matter of considerable dispute in the philosophy of mind. Specifically, it is a dispute between realists about folk psychology and their opponents. The realists (of intention – see below) think that there is more to take, because they believe that in explaining and predicting people's actions and reactions on the basis of their intentional states (beliefs, desires, hopes, fears, and the like) we are committed both to there being such things as intentional states (as types or kinds – we return to this point later) and to these states having a causal effect. Opponents of this sort of folk-psychological realism come in various forms, but are all at least united in rejecting the claim that folk psychology commits us to the existence of causally efficacious intentional state-types.
We humans are highly social animals, unique in the flexibility with which we can adapt to novel patterns of interaction, both co-operative and competitive. So it is easy to see why our folk psychology, or capacity for mind-reading, is such an important psychological ability, both to individual lives and for our success as a species. But it is not only other minds which one needs to read. What should also be appreciated is that this very same capacity is used to think about what is going on in our own minds, as we shall see further in chapter 9. (One of the themes of this book is that this capacity for reflexive thinking greatly enhances our cognitive resources.) Other theses we argue for in the present chapter are that our mind-reading ability functions via a central module, that it operates by means of applying a core of theoretical knowledge, and that this core knowledge is a product of maturation rather than learning. In other words, we think that the ‘theory of mind’ module (often called ‘ToM’ in the literature on this topic) fits the general view on modularity and nativism which we outlined in chapter 3.
The alternatives: theory-theory versus simulation
Research into our mind-reading capacities has been assisted both by the investigations of developmental psychologists and by the debate between two rival views, theory-theory and simulation-theory.
Theory-theory
Theory-theory is a product of functionalism in the philosophy of mind.
Many people have thought that consciousness – particularly phenomenal consciousness, or the sort of consciousness which is involved when one undergoes states with a distinctive subjective phenomenology, or ‘feel’ – is inherently, and perhaps irredeemably, mysterious (Nagel, 1974, 1986; McGinn, 1991). And many would at least agree with Chalmers (1996) in characterising consciousness as the ‘hard problem’, which forms one of the few remaining ‘final frontiers’ for science to conquer. In the present chapter we discuss the prospects for a scientific explanation of consciousness, arguing that the new ‘mysterians’ have been unduly pessimistic.
Preliminaries: distinctions and data
In this opening section of the chapter, we first review some important distinctions which need to be drawn; and then discuss some of the evidence which a good theory of consciousness should be able to explain.
Distinctions
One of the real advances made in recent years has been in distinguishing between different notions of consciousness (see particularly Rosenthal, 1986; Dretske, 1993; Block, 1995; and Lycan, 1996) – though not everyone agrees on quite which distinctions need to be drawn. All are agreed that we should distinguish creature-consciousness from mental-state-consciousness. It is one thing to say of an individual person or organism that it is conscious (either in general or of something in particular); and it is quite another thing to say of one of the mental states of a creature that it is conscious.
It is also agreed that within creature-consciousness itself we should distinguish between intransitive and transitive variants. To say of an organism that it is conscious simpliciter (intransitive) is to say just that it is awake, as opposed to asleep or comatose.
In this chapter we review, and contribute to, the intense debate which has raged concerning the appropriate notion of content for psychology (both folk and scientific). Our position is that the case for wide content (that is, content individuated in terms of its relations to worldly objects and properties) in any form of psychology is weak; and that the case for narrow content (that is, content individuated in abstraction from relations to the world) is correspondingly strong. But we also think that for some common-sense purposes a notion of wide content is perfectly appropriate.
Introduction: wide versus narrow
The main reasons why this debate is important have to do with the implications for folk and scientific psychology, and the relations between them. (But it will also turn out, in chapter 9, that the defensibility of narrow content is crucial to the naturalisation of consciousness.) For if, as some suggest, the notion of content employed by folk psychology is wide, whereas the notion which must be employed in scientific psychology is narrow, then there is scope here for conflict. Are we to say that science shows folk psychology to be false? Or can the two co-exist? And what if the very idea of narrow content is incoherent, as some suggest? Can scientific psychology employ a notion of content which is externally individuated? Or would this undermine the very possibility of content-involving psychology?
Some wide-content theorists, such as McDowell (1986, 1994), believe that the debate has profound implications for philosophy generally, particularly for epistemology.
In this chapter we consider how the human mind develops, and the general structure of its organisation. There has been a great deal of fruitful research in this area, but there is much more yet to be done. A fully detailed survey is far beyond the scope of a short book, let alone a single chapter. But one can set out and defend certain guiding principles or research programmes. We will be emphasising the importance of nativism and modularity.
We use the term ‘nativism’ to signify a thesis about the innateness of human cognition which does justice to the extent to which it is genetically pre-configured, while being consistent with the way in which psychological development actually proceeds. In terms of structure, we maintain that the human mind is organised into hierarchies of sub-systems, or modules. The chief advocate of the modularity of mind has been Fodor (1983), but our version of the modularity thesis is somewhat different from his. In one respect it is more extreme because we do not restrict the thesis of modularity to input systems, as Fodor does. But on the other hand, we think one needs to be a little more relaxed about the degree to which individual modules are isolated from the functioning of the rest of the mind.
The point of these disputes about the nature of modularity should become clearer as we go on. It ought to be stressed, however, that we think of modules as a natural kind – as a natural kind of cognitive processor, that is – and so what modules are is primarily a matter for empirical discovery, rather than definitional stipulation.
In this chapter we consider the challenge presented to common-sense belief by psychological evidence of widespread human irrationality, which also conflicts with the arguments of certain philosophers that widespread irrationality is impossible. We argue that the philosophical constraints on irrationality, such as they are, are weak. But we also insist that the standards of rationality, against which human performance is to be measured, should be suitably relativised to human cognitive powers and abilities.
Introduction: the fragmentation of rationality
According to Aristotle, what distinguishes humankind is that we are rational. Yet psychologists have bad news for us: we are not so rational after all. They have found that subjects perform surprisingly poorly at some fairly simple reasoning tests – the best known of which is the Wason Selection Task (Wason, 1968 – see section 2 below). After repeated experiments it can be predicted with confidence that in certain situations a majority of people will make irrational choices. Results of this kind have prompted some psychologists to comment on the ‘bleak implications for human rationality’ (Nisbett and Borgida, 1975; see also Kahneman and Tversky, 1972). Philosophers sometimes tell a completely different story, according to which we are committed to assuming that people are rational, perhaps even perfectly rational. There has, until recently, been almost a disciplinary divide in attitudes about rationality, with psychologists seemingly involved in a campaign of promoting pessimism about human reason, while philosophers have been trying to give grounds for what may sound like rosy optimism.
It would seem that one or other of these views about human rationality must be seriously wrong.
When we initially conceived the project of this book, our first task was to determine what sort of book it should be. The question of intended audience was relatively easy. We thought we should aim our book primarily at upper-level undergraduate students of philosophy and beginning-level graduate students in the cognitive sciences generally, who would probably have some previous knowledge of issues in the philosophy of mind. But we also hoped, at the same time, that we could make our own contributions to the problems discussed, which might engage the interest of the professionals, and help move the debates forward. Whether or not we have succeeded in this latter aim must be for others to judge.
Content
The question of the content of the book was more difficult. There is a vast range of topics which could be discussed under the heading of ‘philosophy of psychology’, and a great many different approaches to those topics could be taken. For scientific psychology is itself a very broad church, ranging from various forms of cognitive psychology, through artificial intelligence, social psychology, behavioural psychology, comparative psychology, neuro-psychology, psycho-pathology, and so on. And the philosopher of psychology might then take a variety of different approaches, ranging from one which engages with, and tries to contribute to, psychological debates (compare the way in which philosophers of physics may propose solutions to the hidden-variable problem); through an approach which attempts to tease out philosophical problems as they arise within psychology (compare the famous ‘under-labourer’ conception of the role of the philosopher of science); to an approach which focuses on problems which are raised for philosophy by the results and methods of psychology.
Over the last two chapters we have been considering the nature of psychological content. In the present chapter we take up the question of how such content is represented in the human brain, or of what its vehicles might be. Following a ground-clearing introduction, the chapter falls into two main parts. In the first of these, the orthodox Mentalese story is contrasted with its connectionist rival. Then in the second, we consider what place natural language representations may play in human cognition. One recurring question is what, if anything, folk psychology is committed to in respect of content-representation.
Preliminaries: thinking in images
One traditional answer to the questions just raised, concerning the vehicles of our thoughts, is that thinking consists entirely of mental (mostly visual) images of the objects which our thoughts concern, and that thoughts interact by means of associations (mostly learned) between those images. So when I think of a dog, I do so by virtue of entertaining some sort of mental image of a dog; and when I infer that dogs bark, I do so by virtue of an association which has been created in me between the mental images of dog and of barking. This view has been held very frequently throughout the history of philosophy, at least until quite recently, particularly amongst empiricists (Locke, 1690; Hume, 1739; Russell, 1921). Those who hold such a view will then argue that thought is independent of language on the grounds that possession and manipulation of mental images need not in any way involve or presuppose natural language.
Readers of this book should already have some familiarity with modern philosophy of mind, and at least a glancing acquaintance with contemporary psychology and cognitive science. (Anyone of whom this is not true is recommended to look at one or more of the introductions listed at the end of the chapter.) Here we shall only try to set the arguments of subsequent chapters into context by surveying – very briskly – some of the historical debates and developments which form the background to our work.
Developments in philosophy of mind
Philosophy of mind in the English-speaking world has been dominated by two main ambitions throughout most of the twentieth century – to avoid causal mysteries about the workings of the mind, and to meet scepticism about other minds by providing a reasonable account of what we can know, or justifiably infer, about the mental states of other people. So most work in this field has been governed by two constraints, which we will call naturalism and psychological knowledge.
According to naturalism, human beings are complex biological organisms and, as such, are part of the natural order, being subject to the same laws of nature as everything else in the world. If we are going to stick to a naturalistic approach, then we cannot allow that there is anything to the mind which needs to be accounted for by invoking vital spirits, incorporeal souls, astral planes, or anything else which cannot be integrated with natural science.