Artificial intelligence has aroused debate ever since Hubert Dreyfus wrote his controversial report, Alchemy and Artificial Intelligence (1965). Philosophers and social scientists who have been influenced by European critical thought have often viewed AI models through philosophical lenses and found them scandalously bad. AI people, for their part, often do not recognize their methods in the interpretations of the critics, and as a result they have sometimes regarded their critics as practically insane.
When I first became an AI person myself, I paid little attention to the critics. As I tried to construct AI models that seemed true to my own experience of everyday life, however, I gradually concluded that the critics were right. I now believe that the substantive analysis of human experience in the main traditions of AI research is profoundly mistaken. My reasons for believing this, however, differ somewhat from those of Dreyfus and other critics, such as Winograd and Flores (1986). Whereas their concerns focus on the analysis of language and rules, my own concerns focus on the analysis of action and representation, and on the larger question of human beings' relationships to the physical environment in which they conduct their daily lives. I believe that people are intimately involved in the world around them and that the epistemological isolation that Descartes took for granted is untenable. This position has been argued at great length by philosophers such as Heidegger and Merleau-Ponty; I wish to argue it technologically.
This is a formidable task, given that many AI people deny that such arguments have any relevance to their research.
One episode of the TV series Trials of Life involved filming the remarkable behaviour of the tarantula wasp. Mother wasp finds a tarantula, stings it to paralyse it, and drags it back to her hole. Then she lays her eggs in it, so that the newborn larvae have their own reliable food supply. Or so all the textbooks say. The first attempt to film this sequence of events went beautifully - except that, right at the end, the wasp forgot to lay her eggs. The next attempt was going really well until a bird flew by and ate the wasp. On the third attempt the wasp never managed to find the tarantula … and so it went, for a dozen or more attempts. None of the wasps filmed managed to complete the entire sequence in textbook fashion.
Despite this, the final film looked great: footage from several different wasps was edited together to tell exactly the textbook story.
However, one could be forgiven for concluding that real wasps don't read textbooks.
In the previous chapter we made an important distinction between universal and parochial features in evolution, and argued in passing that intelligence is a universal evolutionary strategy, likely to be found wherever life has taken sufficient hold. In this chapter we refine that theme by examining the evolution of intelligence on our own Earth.
There is a parasitic flatworm that spends part of its life inside an ant, while its reproductive stage is inside a cow. The technique that it has evolved to effect the transfer from one animal to the other shows just how subtle the effects of ‘blind’ evolution can be. The parasite infects the ant, and presses on a particular part of its brain. This interferes with the normal behaviour of the brain, which causes the ant to climb a grass stem, grasp it with its jaws, and hang there, permanently attached. So when a cow comes along and eats the grass, the parasite enters the cow.
You will have noticed that in the game tree of figure 14 there is a gap between top-down and bottom-up. How big is it?
It contains virtually the whole of the game tree.
There is a similar gap between what is accessible to top-down and bottom-up reductionist science. In this chapter we give this gap a name: Ant Country. The origins of the name lie in a simple mathematical system, Langton's ant, which we shortly introduce. We shall employ Langton's ant as a metaphor to open up the nature of simplicity, complexity, and the relationship between them. Langton's ant itself is an instance of ‘simplexity’, the tendency of a single, simple system of rules to generate highly complex behaviour, but it also leads to a more subtle concept, which in Collapse we called ‘complicity’.
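For readers who want to see just how little machinery is involved, the following Python sketch (ours, not part of the original exposition; the step count and grid representation are arbitrary choices) implements the ant's two rules in full:

```python
# Langton's ant: one ant on an unbounded grid of white and black cells.
# At a white cell it turns right, at a black cell it turns left; either
# way it flips the cell's colour and steps forward one cell.  Those two
# rules are the whole system, yet the path looks chaotic for roughly the
# first ten thousand steps before settling into a repeating "highway".

def langtons_ant(steps=11000):
    black = set()              # cells currently black; every other cell is white
    x, y = 0, 0                # the ant's position
    dx, dy = 0, 1              # its heading (start facing "up"; y grows upward)
    for _ in range(steps):
        if (x, y) in black:    # black cell: turn 90 degrees left, repaint it white
            dx, dy = -dy, dx
            black.remove((x, y))
        else:                  # white cell: turn 90 degrees right, repaint it black
            dx, dy = dy, -dx
            black.add((x, y))
        x, y = x + dx, y + dy  # step forward
    return black, (x, y)

cells, position = langtons_ant()
print(f"after 11000 steps: ant at {position}, {len(cells)} black cells")
```

Nothing in those few lines hints at the intricate trail the ant actually leaves; that is the point of calling it ‘simplexity’.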
A senior Royal Air Force officer had organised an official reunion for World War II veterans, all in full dress uniform, covered in medals and ribbons, aged about seventy. The highlight of the event was a fly-past of restored aircraft - Spitfires, Lancaster bombers, and so on - and he stood in front of the veterans to watch them. Suddenly, sensing something odd, he turned round - to find that the veterans had disappeared. Then he realised that they were all lying flat on the grass. The explanation?
A Fokker (a WWII German fighter) had roared across the field, flying low …
‘It would be very singular,’ wrote Voltaire, ‘that all nature, all the planets, should obey eternal laws, and that there should be a little animal, five feet high, who, in contempt of these laws, could act as he pleased, solely according to his caprice.’ It is an eloquent statement of the problem of free will, and it is the place where our figments run slap up against reality, like the proverbial irresistible force meeting the immovable object. We have a distinct, overwhelming impression that we have a free choice concerning the actions that we take: free, that is, subject to the evident constraints of physical law. We cannot choose to float into the air, for example. Yet there is absolutely nothing in the inorganic world that possesses that kind of freedom.
My argument throughout has turned on an analysis of certain metaphors underlying AI research. This perspective, while limited, provides one set of tools for a critical technical practice. I hope to have conveyed a concrete sense of the role of critical self-awareness in technical work: not just as a separate activity of scholars and critics, but also as an integral part of a technical practitioner's everyday work. By attending to the metaphors of a field, I have argued, it becomes possible to make greater sense of the practical logic of technical work. Metaphors are not misleading or illogical; they are simply part of life. What misleads, rather, is the misunderstanding of the role of metaphor in technical practice. Any practice that loses track of the figurative nature of its language loses consciousness of itself. As a consequence, it becomes incapable of performing the feats of self-diagnosis that become necessary as old ideas reach their limits and call out for new ones to take their place. No finite procedure can make this cycle of diagnosis and revision wholly routine, but articulated theories of discourses and practices can certainly help us to avoid some of the more straightforward impasses.
Perhaps “theories” is not the right word, though, since the effective instrument of critical work is not abstract theorization; rather it is the practitioner's own cultivated awareness of language and the ways it is used. The analysis of mentalism, for example, has demonstrated how a generative metaphor can distribute itself across the whole of a discourse.
According to the opening paragraph of Stephen Hawking's A Brief History of Time, a famous scientist – possibly Bertrand Russell – was giving a public lecture on astronomy. He described the structure of the solar system and its place in the galaxy. At the end of the talk, a little old lady at the back stood up and complained that the lecture was utter rubbish. The world, she pointed out, was a flat disc riding on the backs of four elephants, which in turn rode on the back of a turtle.
‘But what supports the turtle?’ the scientist objected, with a superior smile.
‘You're very clever, young man,’ said the woman, ‘but you can't fool me. It's turtles all the way down!’
(Actually Hawking tells the story with ‘tortoise’ where we have put ‘turtle’, and unaccountably omits the elephants. We have rewritten the story slightly in order to pay proper deference to Great A'Tuin – whom, of course, you recognise as the turtle who supports Discworld in the fantasy series by Terry Pratchett.)
To many people, science is seen as a source of certainty, a box full of answers that can be trotted out when dealing with life's many questions. Most working scientists, however, see their subject in a very different light: as a method for navigating effectively in an uncertain world. Whatever science may be, it is not just a matter of assembling ‘the facts’.
All engineering disciplines employ mathematics to represent the physical artifacts they create. The discipline of computing, however, has a distinctive understanding of the role of mathematics in design. Mathematical models can provide a civil engineer with some grounds for confidence that a bridge will stand while the structure is still on paper, but the bridge itself only approximates the math. The computer, by contrast, conforms precisely to a mathematically defined relationship between its inputs and its outputs. Moreover, a civil engineer is intricately constrained by the laws of physics: only certain structures will stand up, and it is far from obvious exactly which ones. The computer engineer, by contrast, can be assured of realizing any mathematical structure at all, as long as it is finite and enough money can be raised to purchase the necessary circuits.
The key to this remarkable state of affairs is the digital abstraction: the discrete 0s and 1s out of which computational structures are built. This chapter and the next will describe the digital abstraction and its elaborate and subtle practical logic in the history of computer engineering and cognitive science. The digital abstraction is the technical basis for the larger distinction in computer work between abstraction (the functional definition of artifacts) and implementation (their actual physical construction). Abstraction and implementation are defined reciprocally: an abstraction is abstracted from particular implementations and an implementation is an implementation of a particular abstraction. This relationship is asymmetrical: a designer can specify an abstraction in complete detail without making any commitments about its implementation. The relationship is confined to the boundaries of the computer; it does not depend on anything in the outside world.
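To make this asymmetry concrete, here is a small sketch (ours, purely illustrative; the function names and the choice of an eight-bit adder are arbitrary): a single abstraction, stated as a mathematical relationship between inputs and outputs, realized by two quite different implementations.

```python
# A toy illustration (ours) of abstraction versus implementation.  The
# abstraction: a component that, given two eight-bit numbers, returns
# their sum modulo 256.  The specification says nothing about circuitry.

def add8_spec(a: int, b: int) -> int:
    """The abstraction: the input/output relationship the device must realize."""
    return (a + b) % 256

def add8_host(a: int, b: int) -> int:
    """One implementation: borrow the host machine's arithmetic."""
    return (a + b) & 0xFF

def add8_ripple(a: int, b: int) -> int:
    """Another implementation: a ripple-carry adder, bit by bit,
    the way a hardware circuit would compute it."""
    result, carry = 0, 0
    for i in range(8):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i               # sum bit
        carry = (x & y) | (x & carry) | (y & carry)  # carry out
    return result

# Both implementations conform exactly to the same abstraction.
for a, b in [(0, 0), (1, 1), (200, 100), (255, 255)]:
    assert add8_spec(a, b) == add8_host(a, b) == add8_ripple(a, b)
```

The specification commits the designer to nothing about wiring or machinery; conversely, each implementation is an implementation of precisely that specification and no more.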
My goal in this chapter, as in much of this book, depends on who you are. If you have little technical background, my purpose is to help prepare you for the next few chapters by familiarizing you with the building blocks from which computers are made, together with the whole style of reasoning that goes with them. If you are comfortable with this technology and style of thinking, my goal is to help defamiliarize these things, as part of the general project of rethinking computer science in general and AI in particular (cf. Bolter 1984: 66–79).
Modern computers are made of digital logic circuits (Clements 1991). The technical term “logic” can refer either to the abstract set of logical formulas that specify a computer's function or to the physical circuitry that implements those formulas. In each case, logic is a matter of binary arithmetic. The numerical values of binary arithmetic, conventionally written with the numerals 1 and 0, are frequently glossed using the semantic notions of “true” and “false.” In practice, this terminology has a shifting set of entailments. Sometimes “true” and “false” refer to nothing more than the arithmetic of 1 and 0. Sometimes they are part of the designer's metaphorical use of intentional vocabulary in describing the workings of computers. And sometimes they are part of a substantive psychological theory whose origin is Boole's nineteenth-century account of human reasoning as the calculation of the truth values of logical propositions (Boole 1854).
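A minimal sketch may help fix the distinction between the first two of these readings (the function names are ours, invented only for exposition): the same exclusive-or can be written either as arithmetic on the numerals 1 and 0 or as a formula over truth values, and the two coincide under the conventional gloss.

```python
# The same "logic" read two ways: as binary arithmetic on the numerals
# 0 and 1, and as a propositional formula over the truth values False
# and True.  The function names are illustrative only.

def xor_arithmetic(a: int, b: int) -> int:
    """Exclusive-or as arithmetic modulo 2 on the numerals 0 and 1."""
    return (a + b) % 2

def xor_logical(p: bool, q: bool) -> bool:
    """Exclusive-or as a formula over truth values."""
    return (p or q) and not (p and q)

# Under the conventional gloss 1 = "true" and 0 = "false", the two
# readings agree on every input.
for a in (0, 1):
    for b in (0, 1):
        assert bool(xor_arithmetic(a, b)) == xor_logical(bool(a), bool(b))
        print(a, b, "->", xor_arithmetic(a, b))
```

Whether the third reading, the substantive psychological theory, is also licensed by this arithmetic is exactly the question at issue.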
As an agent gets along in the world, its actions are plainly about the world in some sense. In picking up a cup, I am not just extending my forearm and adjusting my fingers: those movements can be parsimoniously described only in relation to the cup and the ways that cups are used. A conversation about a malfunctioning refrigerator, likewise, really is about that refrigerator; it is not just a series of noises or grammatical constructions. When someone is studying maps and contemplating which road to take, it is probably impossible to provide any coherent account of what that person is doing except in relation to those roads.
AI researchers have understood these phenomena in terms of representations: actions, discussions, and thoughts are held to relate to particular things in the world because they involve mental representations of those things. It can hardly be denied that people do employ representations of various sorts, from cookbooks and billboards to internalized speech and the retinotopic maps of early vision. But the mentalist computational theory of representation has been simultaneously broader and more specific. In this chapter I will discuss the nature and origins of this theory, as well as some reasons to doubt its utility as part of a theory of activity. Chapter 12 will suggest that the primordial forms of representation are best understood as facets of particular time-extended patterns of interaction with the physical and social world. Later sections of the present chapter will prepare some background for this idea by discussing indexicality (the dependence of reference on time and place) and the more fundamental phenomenon of intentionality (the “aboutness” of actions, discussions, and thoughts).
In the fossil layers of the Burgess Shale are the remains of strange, soft-bodied creatures. So strange are they that some palaeontologists believe that they represent more biological diversity of form than now exists upon the entire Earth. Indeed some of the forms present in the Burgess Shale have no surviving descendants at all.
Reconstructing the shape of these creatures, in three dimensions, is immensely difficult because their fossil forms are squashed flat, and a certain amount of careful interpretation is necessary. For a long time one of the most strikingly bizarre Burgess Shale creatures, of a form not seen at all in today's world, was Hallucigenia, which - it was thought - stood on the sea floor using a set of seven pairs of sharply pointed struts. Seven tentacles with two-pronged tips wiggled on its back, together with a bunch of even tinier tentacles. It had a blobby head, and its rear end was a tube.
It then turned out that Hallucigenia was really a form that is still common today. The ‘struts’ were spines on its back, the ‘tentacles’ were its legs.
It had been reconstructed upside down.
We have already offered you two versions of what happened during the evolution of life on Earth. We described the origins of life, the endless aeons when bacteria - many of them photosynthetic and emitting oxygen - were the dominant life-form, the development of eukaryote cells with nuclei, of multi-celled organisms including complex animals with brains, and the appearance of organisms that could learn.
The Ringmaster of the Zarathustran cruise-vessel Watcher-of-Moons lay back and tried to relax in a sensuous swaddle of preening-curd, only his eyes and beak projecting from the glutinous layers, giggling slightly whenever one of the nanotribbles that roamed the curd in search of tiny parasites and dirt particles encountered a sensitive patch of skin around the base of his funny-feathers.
His mind was troubled. It had been a strange voyage. Those extelligent ape-creatures with their overprivileged solo minds and their extraordinarily unoctimistic view of how the world worked were really disturbing. Always obsessed with the insides of things – no doubt a resurgence of their child-aspect in later life, the monkey curiosity that tried to find out how things worked by breaking them and seeing what they no longer did.
He expanded his neck-ruff, the Zarathustran equivalent of a sigh. The problem with preening-curd is that once you have opened a tuble you have to wallow for a full octad, and after a time preenwallow gets boring. Especially to a Ringmaster, who spends so much time making sense out of what everybody else is doing …. And this Ringmaster was subject to troubled thoughts, things he was having difficulty rationalising. He recalled that not an octuple of octoons away from him was an almost inexhaustible source of alien extelligence, refreshing even if naive. And once Hewer-of-wood had got the catalytic converter working again, Watcher-of-Moons would resume its voyage … For a moment he wondered which catalyst it was failing to convert, but he would only be able to explain that to everybody when Destroyer-of-facts had found out.
Having prepared some background, let us now consider a technical exercise. Since readers from different disciplinary backgrounds will bring contrasting expectations to an account of technical work, I will begin by reviewing the critical spirit in which the technical exercises in this book are intended.
Reflexively, the point is not to start over from scratch, throwing out the whole history of technical work and replacing it with new mechanisms and methods. Such a clean break would be impossible. The inherited practices of computational work form a massive network in which each practice tends to reinforce the others. Moreover, a designer who wishes to break with these practices must first become conscious of them, and nobody can expect to become conscious of a whole network of inherited habits and customs without considerable effort and many false starts. A primary goal of critical technical work, then, is to cultivate awareness of the assumptions that lie implicit in inherited technical practices. To this end, it is best to start by applying the most fundamental and familiar technical methods to substantively new ends. Such an effort is bound to encounter a world of difficulties, and the most valuable intellectual work consists in critical reflection upon the reductio ad absurdum of conventional methods. Ideally this reflexive work will make previously unreflected aspects of the practices visible, thus raising the question of what alternatives might be available.
Substantively, the goal is to see what happens in the course of designing a device that interacts with its surroundings. Following the tenets of interactionist methodology, the focus is not on complex new machinery but on the dynamics of a relatively simple architecture's engagement with an environment.
As a substantive matter, the discourse of cognitive science has a generative metaphor, according to which every human being has an abstract inner space called a “mind.” The metaphor system of “inside,” which Lakoff and Johnson (1980) call the CONTAINER metaphor, is extraordinarily rich. “Inside” is opposed to “outside,” usually in the form of the “outside world,” which sometimes includes the “body” and sometimes does not. This inner space has a boundary that is traversed by “stimuli” or “perception” (headed inward) and “responses” or “behavior” (headed outward). It also has “contents” – mental structures and processes – which differ in kind from the things in the outside world. Though presumably somehow realized in the physical tissue of the brain, these contents are abstract in nature. They stand in a definite but uncomfortable relation to human experiences of sensation, conception, recognition, intention, and desire. This complex of metaphors is historically continuous with the most ancient Western conceptions of the soul (Dodds 1951; Onians 1954) and the philosophy of the early Christian Platonists. It gradually became a secular idea in the development of mechanistic philosophy among the followers of Descartes. In its most recent formulation, the mind figures in a particular technical discourse, the outlines of which I indicated in Chapter 1.
This metaphor system of inside and outside organizes a special understanding of human existence that I will refer to as mentalism. I am using the term “mentalism” in an unusually general way. The psychological movements of behaviorism and cognitivism, despite their mutual antagonism, both subscribe to the philosophy of mentalism.