Critical analysis is necessary and valuable, but the progress of intellectual work always turns out to be underlain by deep continuities. Technical work in particular will always pick up again where it left off, hopefully the wiser but nonetheless constrained by the great mass of established technique. Critics interrogating the existing techniques may discover a whole maze of questionable assumptions underneath them, but that discovery in itself does not make the techniques any easier to replace. I will not try to throw the existing techniques of AI out the window and start over; that would be impossible. Instead, I want to work through the practical logic of planning research, continuing to force its internal tensions to the surface as a means of clearing space for alternatives. My starting place is Fikes, Hart, and Nilsson's suggestion (quoted in Chapter 8) that the construction and execution of plans occur in rapid alternation. This suggestion is the reductio ad absurdum of the view that activity is organized through the construction and execution of plans. The absurdity has two levels. On a substantive level, the distinction between planning and execution becomes problematic; “planning” and “execution” become fancy names for “thinking” and “doing,” which in turn become two dynamically interrelated aspects of the same process. On a technical level, the immense costs involved in constructing new plans are no longer amortized across a relatively long period of execution. Even without going to the extreme of constant alternation between planning and execution, Fikes, Hart, and Nilsson still felt the necessity of heroic measures for amortizing the costs of plan construction. These took the form of complex “editing” procedures that annotated and generalized plans, stored them in libraries, and facilitated their retrieval in future situations.
As the intellectual history sketched in Chapter 11 makes clear, AI research has been based on definite but only partly articulated views about the nature and purpose of representation. Representations in an agent's mind have been understood as models that correspond to the outside world through a systematic mapping. As a result, the meanings of an agent's representations can be determined independently of its current location, attitudes, or goals. Reference has been a marginal concern within this picture, either assimilated to sense or simply posited through the operation of simulated worlds in which symbols automatically connect to their referents. One consequence of this picture is that indexicality has been almost entirely absent from AI research. And the model-theoretic understanding of representational semantics has made it unclear how we might understand the concrete relationships between a representation-owning agent and the environment in which it conducts its activities.
In making such complaints, one should not confuse the articulated conceptions that inform technical practice with the reality of that practice. As Smith (1987) has pointed out, any device that engages in any sort of interaction with its environment will exhibit some kind of indexicality. For example, a thermometer's reading does not indicate abstractly “the temperature,” since it is the temperature somewhere, nor does it indicate concretely “the temperature in room 11,” since if we moved it to room 23 it would soon indicate the temperature in room 23 instead. Instead, we need to understand the thermometer as indicating “the temperature here” – regardless of whether the thermometer's designers thought in those terms.
This chapter demonstrates RA, the computer program introduced in Chapter 9 that illustrates the concept of running arguments. RA has three motivations, which might be reduced to slogans as follows:
It is best to know what you're doing. Plan execution – in the conventional sense, where plans are similar to computer programs and execution is a simple, mechanical process – is inflexible because individual actions are derived from the symbols in a plan, not from an understanding of the current situation. The device that constructed the plan once had a hypothetical understanding of why the prescribed action might turn out to be the right one, but that understanding is long gone. Flexible action in a world of contingency relies on an understanding of the current situation and its consequences.
You're continually redeciding what to do. Decisions about action typically depend on a large number of implicit or explicit premises about both the world and yourself. Since any one of those premises might change, it is important to keep your reasoning up to date. Each moment's actions should be based, to the greatest extent possible, on a fresh reasoning-through of the current situation.
All activity is mostly routine. Almost everything you do during the day is something you have done before. This is not to say that you switch back and forth between two modes, one for routine situations and one for the occasional novel situation. Even when something novel is happening, the vast majority of what you are doing is routine.
As a matter of computational modeling, all of this is more easily said than done. This chapter explains how RA instantiates these three ideals.
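To make these ideals concrete before the detailed exposition, the following is a minimal sketch of the control structure they suggest. It is not RA itself, and the names in it (Rule, GridWorld, run) are illustrative assumptions; the point is only that no stored plan is executed step by step. Every cycle rederives an action from a fresh reading of the current situation, and the same routine rules serve in familiar and novel situations alike.

```python
# A deliberately naive sketch of the "continually redeciding" ideal.
# Not the RA program; Rule, GridWorld, and run are illustrative names.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    applies: Callable[[dict], bool]   # does this rule fit the situation?
    action: Callable[[dict], str]     # what it prescribes if it does

class GridWorld:
    """A toy one-dimensional world: the agent walks toward a goal."""
    def __init__(self, position=0, goal=5):
        self.position, self.goal = position, goal
    def perceive(self):
        return {"position": self.position, "goal": self.goal}
    def act(self, action):
        self.position += 1 if action == "step-right" else -1

def run(world, rules, steps=20):
    for _ in range(steps):
        situation = world.perceive()        # fresh reading of the world
        for rule in rules:                  # the same routine rules handle
            if rule.applies(situation):     # familiar and novel cases alike
                world.act(rule.action(situation))
                break                       # acting changes the situation,
                                            # so the next cycle redecides

rules = [
    Rule(lambda s: s["position"] < s["goal"], lambda s: "step-right"),
    Rule(lambda s: s["position"] > s["goal"], lambda s: "step-left"),
]
world = GridWorld()
run(world, rules)
print(world.position)   # the agent has settled at the goal position, 5
```

RA itself is far more elaborate than this, but the intent is the same: each moment's action is based, as far as possible, on a fresh reasoning-through of the situation rather than on the mechanical execution of a stored plan.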
Isn't it strange that the animal we used to be developed into the creature that we now are? How – and why – did human intelligence and culture evolve? How did we evolve minds, philosophies and technologies? And now that we have them, where are they taking us?
The orthodox answer to these questions looks inside our brains to see what they are made of and how the various components operate. This leads to a story based upon DNA biochemistry, the evolution of nerve cells as pathways for sensory information, and their organisation into complex networks – brains – that can manipulate neural models of natural objects and processes. Mind is seen as a property of an unusual brain – complex enough to develop culture – but here the ‘reductionist’ story starts to lose its thread. Many people see mind as something that transcends ordinary matter altogether. Philosophers worry that the universe around us may be a figment of our own imagination.
In Figments of Reality we explore a very different, but complementary, theory: that minds and culture co-evolved within a wider context. Every step of our development is affected by our surroundings. Our minds are rooted in ordinary matter; they are complex processes – or complexes of processes – that happen in material brains. Our brains are linked to reality by their molecules; but they are also linked to reality on another level, their ability to model reality within themselves.
It is well known that Albert Einstein was born in Ulm in 1879, but his family moved almost immediately to Munich where his father Hermann and his uncle Jakob set up a small engineering company. Later he went to Milan, and he studied in Zurich. It is much less well known that for a few years Jules Shloer, who was then studying mathematics but later went on to found the famous soft drink company, lived in an apartment block next to Einstein. Not far away was a corner shop, with a cramped partitioned section at the rear which served as a café. Here Einstein and Shloer would often meet, to drink coffee and talk. The shop was run by an Italian immigrant, Antonio Mezzi, and the only kind of coffee that he served was thick, dark, and enormously strong, made from beans imported from one particular Arabian village. In later life Shloer and Einstein both attributed their success to the remarkable mental clarity induced by Mr Mezzi's special coffee.
We end our journey through human mind and culture by trying to answer some of the questions that we raised in the opening chapter. How did such a peculiar animal as the human gain such a grip upon the planet? What is it that makes us the way we are? And where are we going next?
Let us first take stock.
We are genuinely remarkable members of the animal kingdom.
This chapter discusses one use of dependencies: a programming language called Life. Although the demonstrations of Chapters 9 and 10 will use Life to make some points about improvised activity, this chapter describes Life programming as a technical matter with little reference to theoretical context. Readers who find these descriptions too involved ought to be able to skip ahead without coming to harm.
Life is a rule language, a simplified version of the Amord language (de Kleer, Doyle, Rich, Steele, and Sussman 1978). This means that a Life “program” consists of a set of rules. Each rule continually monitors the contents of a database of propositions, and sometimes the rules place new propositions in the database. The program that does all of the bookkeeping for this process is called the rule system. The rule system functions as the reasoner for a dependency system. The rule system and dependency system both employ the same database, and most of the propositions in the database have a value of IN or OUT. Roughly speaking, when an IN proposition (the trigger) matches the pattern of an IN rule, the rule fires and the appropriate consequence is assigned the value of IN. If necessary, the system first builds the consequent proposition and inserts it in the database. This might cause other rules to fire in turn, until the whole system settles down. In computer science terms, this is a forward-chaining rule system. The role of dependencies is to accelerate this settling down without changing its outcome. The technical challenge is to get the rule system to mesh smoothly with the dependency maintenance system underneath.
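The following sketch shows what a forward-chaining rule system of this general kind looks like, with the dependency machinery stripped away: IN is reduced to bare membership in a set, and there are no OUT propositions or dependency records. It is not the Life implementation; the names Rule, RuleSystem, and assert_in are illustrative assumptions.

```python
# Minimal sketch of a forward-chaining rule system, under the simplifying
# assumption that a proposition is IN exactly when it is in the database set.
class Rule:
    def __init__(self, pattern, consequence):
        self.pattern = pattern          # predicate over a proposition
        self.consequence = consequence  # maps the trigger to a new proposition

class RuleSystem:
    def __init__(self, rules):
        self.rules = rules
        self.database = set()           # propositions currently IN

    def assert_in(self, proposition):
        """Insert a proposition as IN and propagate until the system settles."""
        agenda = [proposition]
        while agenda:
            prop = agenda.pop()
            if prop in self.database:
                continue
            self.database.add(prop)
            for rule in self.rules:
                if rule.pattern(prop):              # the proposition triggers
                    agenda.append(rule.consequence(prop))  # the rule fires

# Example: whenever ("human", x) is IN, conclude ("mortal", x).
rules = [Rule(lambda p: p[0] == "human",
              lambda p: ("mortal", p[1]))]
system = RuleSystem(rules)
system.assert_in(("human", "socrates"))
print(system.database)   # ("human", "socrates") and ("mortal", "socrates") are IN
```

In Life the interesting work lies in the dependency records attached to each proposition, whose role, as noted above, is to let the system settle again quickly when propositions switch between IN and OUT, without changing the outcome that the rules alone would produce.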
One might take two views of the Life rule system in operation. On one view, the dependencies accelerate the operation of the rules; on the other, the rules direct the construction of a growing network of dependencies.
When JC's children David and Rebecca were about seven and eight years old, the family had many pets including cats, a tokay gecko, a corn snake, hooded rats, and several tanks of tropical fish. JC fed mice, baby rats, and cockroaches to the gecko and the snake, and large wriggly worms to the larger fish. The children invented a rationale for this, a tiny morality: some animals (worms, cockroaches, most fish) ‘don't have minds’; some (geckoes, snakes, mature rats and mice, cichlid fish) have ‘minds for themselves’; and a few (cats and people) have ‘minds for others’. Rebecca was very worried when she was about thirteen, because she felt that most of the time she didn't have a mind for others, and was therefore not a real person. She stopped worrying only when she was told that she was not, as she thought, the only person with that problem. Indeed, most of the time people have no minds, sometimes they have minds for themselves, and only rarely does anybody have a mind for others.
In The Philosophical Review for October 1974 Thomas Nagel wrote a celebrated essay: ‘What is it like to be a bat?’ In it he examined the difference between an external observer's understanding of the physical processes that occur in a bat's brain, and the bat's own mental perceptions. He argued that no amount of external observation can tell us what being a bat feels like to the bat.
For the past thirty years or so, computational theorizing about action has generally been conducted under the rubric of “planning.” Whereas other computational terms such as “knowledge” and “action” and “truth” come to us burdened with complex intellectual histories, the provenance of “plan” and “planning” as technical terms is easy to trace. Doing so will not provide a clear definition of the word “planning” as it is used in AI discourse, for none exists. It will, however, permit us to sort the issues and prepare the ground for new ideas. My exposition will not follow a simple chronological path, because the technical history itself contains significant contradictions; these derive from tensions within the notion of planning.
In reconstructing the history of “plan” and “planning” as computational terms, the most important road passes through Lashley's “serial order” paper (1951) and then through Newell and Simon's earliest papers about GPS (e.g., 1963). Lashley argued, in the face of behaviorist orthodoxy, that the chaining of stimuli and responses could not account for complex human behavioral phenomena such as fluent speech. Instead, he argued, it was necessary to postulate some kind of centralized processing, which he pictured as a holistic combination of analog signals in a tightly interconnected network of neurons. The seeds of the subsequent computational idea of plans lay in Lashley's contention that the serial order of complex behavioral sequences was predetermined by this centralized neural activity and not by the triggering effects of successive stimuli.
JC's daughter Beth, at about the age of eight, was out with her parents in the car and noticed a line of birds sitting on telephone wires – black blobs spaced along a set of parallel lines.
‘Oh, look,’ she said. ‘Music!’
Human minds do more than just recognise various bits and pieces of the universe. They look for patterns in what they recognise, and do their best to understand how the universe works. The universe, however, is very complex: in order to understand it we must also simplify it. Indeed the whole point of understanding something is that you can grasp it as a whole, and that necessitates some kind of simplification or data-compression. An explanation of the universe that was just as complex as the universe itself would merely replace one puzzle by another. In this chapter we shall argue that the brain organises its perceptions of the world into significant chunks, which we shall call ‘features’. As usual we shall take an evolutionary and contextual view of this ability, as well as asking about the internal structure of the brain. Not just ‘how does it work?’ but ‘how did it arise?’ And to get started, we shall take a look at two simpler creatures: the mantis shrimp and the octopus.
Both the octopus and the mantis shrimp are effective organisms, even if they never meet another of their own kind to learn from.
The Introduction has sketched the notion of a critical technical practice, explored the distinctive form of knowledge associated with AI, and described a reorientation of computational psychology from a focus on cognition to a focus on activity. It should be clear by now that I am proceeding on several distinct levels at once. It is time to systematize these levels and to provide some account of the theses I will be developing on each level.
On the reflexive level, one develops methods for analyzing the discourses and practices of technical work. Reflexive research cultivates a critical self-awareness, including itself among its objects of study and developing useful concepts for reflecting on the research as it is happening. To this end, I will begin by suggesting that technical language – that is, language used to investigate phenomena in the world by assimilating them to mathematics – is unavoidably metaphorical. My reflexive thesis is that predictable forms of trouble will beset any technical community that supposes its language to be precise and formally well defined. Awareness of the rhetorical properties of technical language greatly facilitates the interpretation of difficulties encountered in everyday technical practice. Indeed, I will proceed largely by diagnosing difficulties that have arisen from my own language – including language that I have inherited uncritically from the computational tradition, as well as the alternative language that I have fashioned as a potential improvement.
On the substantive level, one analyzes the discourses and practices of a particular technical discipline, namely AI. Chapter 1 has already outlined my substantive thesis, which has two parts.
A species of viperine snake, which is not poisonous, has evolved three ways to protect itself against predators. The first is camouflage, so that it gets ‘lost’ against its background. However, its camouflage is very similar to that of the poisonous adder, which leads to the second method: mimicry. If a predator sees through its camouflage, it exploits the resemblance to an adder by behaving like an adder. But if this doesn't work either, for example when the predator is a crow, which kills adders, it adopts the third strategy. It flips about like a demented rope, and then it arranges itself on the ground to look for all the world like a dead snake, lying on its back in the dust at an awkward angle, with a vaguely bloated look ….
However, if it is now turned on to its front, it promptly and energetically flings itself back into its ‘dead snake’ pose.
The background theory and philosophy is now out of the way, and we are ready to begin the journey from molecules to minds. It is a journey which, at every stage, involves the concept of evolution. Evolution is a general mechanism whereby systems can ‘spontaneously’ become more complex, more organised, more startling in their abilities.
This chapter describes a computer program that illustrates some of the themes I have been developing. Before I discuss this program in detail, let me summarize the argument so far. Recalling the scheme laid out in Chapter 2, this argument has three levels: reflexive, substantive, and technical.
The reflexive argument has prescribed an awareness of the role of metaphor in technical work. As long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem. Critical technical work continually reflects on its substantive commitments, choosing research problems that might help bring unarticulated assumptions into the open. The technical exercises in this book are intended as examples of this process, and Chapter 14 will attempt to draw some lessons from them.
The substantive argument has four steps:
Chapter 2 described two contrasting metaphor systems for AI. Mentalist metaphors divide individual human beings into an inside and outside, with the attendant imagery of contents, boundaries, and movement into and out of the internal mental space. Interactionist metaphors, by contrast, focus on an individual's involvement in a world of familiar activities.
As Chapters 1 and 4 explained, mentalist metaphors have organized the vocabularies of both philosophical and computational theories of human nature for a long time, particularly under the influence of Descartes. This bias is only natural. Our daily activities have a vast background of unproblematic routine, but this background does its job precisely by not drawing attention to itself.