We argue that human reasoning is guided by a collection of innate domain-specific systems of knowledge. Each system is characterized by a set of core principles that define the entities covered by the domain and support reasoning about those entities. Learning, on this view, consists of an enrichment of the core principles, plus their entrenchment, along with the entrenchment of the ontology they determine. In these domains, then, we would expect cross-cultural universality: cognitive universals akin to language universals.
However, there is one crucial disanalogy to language. The history of science and mathematics demonstrates that conceptual change in cognitive domains is both possible and actual. Conceptual change involves overriding core principles, creating new principles, and creating new ontological types. We sketch one potential mechanism underlying conceptual change and motivate a central empirical problem for cognitive anthropology: To what extent is there cross-cultural universality in the domains covered by innate systems of knowledge?
Domain-specific cognition
The notion of domain-specific cognition to be pursued here is articulated most clearly by Chomsky (1980a). Humans are endowed with domain-specific systems of knowledge such as knowledge of language, knowledge of physical objects, and knowledge of number. Each system of knowledge applies to a distinct set of entities and phenomena. For example, knowledge of language applies to sentences and their constituents; knowledge of physical objects applies to macroscopic material bodies and their behavior; knowledge of number applies to sets and to mathematical operations such as addition. More deeply, each system of knowledge is organized around a distinct body of core principles.
It has been truly said that the computer is the most economically important technological innovation of this century. No other piece of technology has expanded in a way comparable to that of the computer. A vast computer industry has emerged, and, what is perhaps even more important, since the 1980s applied computer technology has spread so widely throughout the worlds of business, public administration, science, and so on, that an economy of information seems to be supplanting the industrial economy. “Information” also indicates the tremendous changes that have taken place in computing technology and applied technology since computing started in the 1940s. What began as the history of computing is now being transformed into the history of information technology and the information society. It is no wonder that historians and social and economic analysts of modern technology are experiencing difficulties in coping with this huge and expanding field of research.
Until the 1980s, the American computer industry dominated the international development of the computer, and IBM surpassed any other company to such a degree that everybody else was reduced to followers of the leader. The 1980s saw changes in this pattern, however. European and particularly Japanese industry rose to equal the Americans in many fields of technology, while industry and applications were being radically changed all over the place. This trend has continued into the 1990s.
When Vannevar Bush, in his 1945 article “As We May Think,” presented his vision of the Memex, the proposed information retrieval system that later served as an inspiration for Ted Nelson's Hypertext concept, this vision was clearly and necessarily rooted in modernity as it appeared, unchallenged, at the end of World War II.
Bush held the view that science, having been in the service of destructive forces for the duration of the war, was now able to return to its main objective - to secure the progress of mankind. It was to this end that he proposed the Memex system:
Presumably man's spirit should be elevated if he can better review his shady past and analyse more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanise his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory (Bush 1945: 108).
This utopian modern optimism expressed by Bush was considerably tempered by the time Ted Nelson's Computer Lib appeared in 1974, a book that clearly places Nelson within the social-critical tradition of the American intellectual left of the 1960s. His critical stance was augmented by the ecological awareness of the 1970s, expressed through journals like Co-Evolution Quarterly and The Whole Earth Catalogue, and it is no coincidence that the editor of these two publications, Stewart Brand, wrote the introduction to the 1987 edition of Computer Lib.
For a long time we were occupied with computers as media only in a metaphorical sense, for example when evaluating such system types as graphical systems, filing systems, or process control systems. In 1989, however, we started to take the media perspective in its literal sense. We wanted to use the computer like a real medium, such as film, television, or theater, to create art. Although there are many similarities between the other arts and systems design, there is one fundamental difference: Interaction plays an important part in the computer medium, whereas it is absent in the other media.
To make our intention clear, we began a project in which we used the computer to create interactive fiction. In contrast to more tool-like systems, we wanted to define a new class of systems that we called “narrative systems.” The characteristic feature of these systems is that their main purpose is communication, so that their functionality is almost identical with their interface.
Examples are teaching systems, databases, mail systems, and video games. Our purpose was to answer the following questions:
What kinds of techniques are useful for telling a story where interaction is a fundamental part?
Which methods are optimal for systems development?
Could we discover more general narrative techniques that can be used in non-fiction applications, such as databases and teaching systems?
With new information and communication technologies, new organizational forms are becoming increasingly important. In particular, new forms of organization – so-called “Hi-Tech Network Organizations” – have emerged. The concept of Hi-Tech Organizations is not well defined, but covers such phenomena as telework (“elusive offices”), distance training (“network colleges” and “virtual classrooms”), computer conferencing systems (“network meetings”), “soft cities,” “intelligent buildings,” and “electronic libraries” – organizational or social networks in which people do not interact or work together physically, in an office, a building, or a classroom. They are still part of a common organization (a company, a class, an association), but their organizational interaction is based on computers and telecommunication.
Simultaneously, a new paradigm for social theory in general, and for organizational communication theory specifically, has been launched: social and organizational systems are conceptualized as so-called “self-referential” systems – self-producing and self-reflective systems. This approach often labels organizations “autopoietic systems,” taking its inspiration from the Chilean biologists H. Maturana and F. Varela (1980); in Europe the dominant approach, which explicitly “transforms” the autopoietic concept from biology to social theory, has been elaborated by the German social philosopher Niklas Luhmann (cf. particularly Luhmann 1984 and 1990).
It is my hypothesis that the concept of self-referentiality is particularly fruitful for understanding the new Hi-Tech Network Organizations.
This series for Cambridge University Press is becoming widely known as an international forum for studies of situated learning and cognition.
Innovative contributions from anthropology, cognitive, developmental, and cultural psychology, computer science, education, and social theory are providing theory and research that seeks new ways of understanding the social, historical, and contextual nature of the learning, thinking, and practice emerging from human activity. The empirical settings of these research inquiries range from the classroom to the workplace to the high-technology office to learning in the streets and in other communities of practice.
The situated nature of learning and remembering through activity is a central fact. It may appear obvious that human minds develop in social situations, and that they come to appropriate the tools that culture provides to support and extend their sphere of activity and communicative competencies. But cognitive theories of knowledge representation and learning alone have not provided sufficient insight into these relationships.
This series is born of the conviction that new and exciting interdisciplinary syntheses are underway, as scholars and practitioners from diverse fields seek to develop theory and empirical investigations adequate to characterizing the complex relations of social and mental life, and to understanding successful learning wherever it occurs. The series invites contributions that advance our understanding of these seminal issues.
This chapter is concerned with the intrinsic semiotic nature of a computer. We shall begin by sketching the principles of the computational paradigm of cognitive science in order to establish the cognitive relevance of the computer. Then we shall show that this cognitive relevance is also semiotically relevant by analyzing the glossematic relations, with particular reference to the “constant” concept. Finally, we shall define a “cognitive virtual machine” that specifies (a part of) the semiotic nature of a computational machine.
For nearly 40 years, the methodology of linguistics has been based on an “epistemology of refutation” (Popper, 1959; Milner, 1989). According to this epistemology, a theory is scientific only to the degree that it can be refuted. This claim can be summed up in the following two points:
(1) The theory must be expressed in unequivocal and calculable terms in order that theoretical descriptions can be submitted to a refutation test.
(2) The theory must contain operational methods for the refutation of hypotheses.
The framework of computer science satisfies these prerequisites in a coherent way: The general theory of formal languages – the principal foundation of computer science – allows a precise formulation of linguistic descriptions, and at the same level of formal analysis (the logico-algebraic level) it provides a device for evaluating linguistic theories, since the computer can process symbolic data automatically.
This epistemological position has pointed linguistic research in a certain direction: It assigns a test function to the computer system, and it forces linguists to choose a certain kind of formal description.
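The refutation test described above can be made concrete with a small sketch. The grammar, lexicon, and judgment data below are my own invented illustration, not material from the chapter: a linguistic hypothesis is stated as a formal (here, trivially finite-state) grammar, and the computer mechanically checks its predictions against grammaticality judgments, refuting the hypothesis if any prediction fails.

```python
# Toy illustration of the "refutation test": a linguistic hypothesis is
# expressed as a decidable grammar, and the computer checks its predictions
# against grammaticality judgments. (Invented example, not from the chapter.)

def accepts(sentence):
    """Hypothesis H: a sentence has the shape 'Det N V Det N'."""
    det = {"the", "a"}
    noun = {"dog", "cat"}
    verb = {"sees", "chases"}
    w = sentence.split()
    return (len(w) == 5 and w[0] in det and w[1] in noun
            and w[2] in verb and w[3] in det and w[4] in noun)

# Grammaticality judgments: (sentence, judged grammatical?)
judgments = [
    ("the dog sees a cat", True),
    ("a cat chases the dog", True),
    ("dog the sees cat a", False),
]

# H is refuted as soon as one prediction contradicts a judgment.
refuted = any(accepts(s) != judged for s, judged in judgments)
print("hypothesis refuted:", refuted)  # here: False
```

Adding a judged-grammatical sentence that `accepts` rejects would refute H automatically, which is the operational sense in which the computer serves as a test device for linguistic theories.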
The question of to what extent a computer can think is a recurring one that, owing to the rapid succession of generations within the computer sciences, continues to be relevant, although there is no real reason to expect an unequivocal answer. It can, however, be moderated to: to what extent can computers learn to think? Were we to bide our time, would it not be possible to make a machine that could stand comparison with human thinking? Or we could choose to ask ourselves: What is common to machine and human reasoning? From this follow the closely related questions: Can computers dream? Can they discover, feel, err, lie, get good ideas? Can they develop or mature? Do they have ethics, morals, aesthetic judgement, fantasy, intuition? And, last but not least, can they distinguish between differences that make a difference and differences that do not?
Regardless of how one chooses to address these interrelated questions, the various answers that ensue allow us to distinguish between, on the one hand, the cultural or developmental pessimists, who are convinced that machines are on the verge of a takeover, and, on the other, the cultural optimists, whose view is that machines will continue to be but an ancillary instrument for human thought.
The tradition of analytical semiotics initiated by C. S. Peirce is often thought of as being in some sense contrary to formal logic. This is especially so when formal logic is associated with the classical correspondence theory of truth, as in the project of formal semantics for natural language. However, the perceived opposition between the two traditions has become dubious since the advent of logic grammar, which arose mainly from the work of Richard Montague in the late 1960s.
Logic grammar differs from philosophical logic first and foremost by specifying a translation procedure from natural language into formal logic. It is true, of course, that the very idea of representing the meaning of natural language sentences by logical forms has been prominent in philosophical logic and analytical philosophy throughout our century. One particularly famous example is Bertrand Russell's discussion of “definite descriptions” (Russell, 1905), and his suggestion of representing sentences containing definite descriptions by a certain type of logical form. For instance, the sentence
(1) “The present King of France is bald” should be represented as
(2) ∃!x (King-of-France(x) ∧ bald(x))
where (2) may be read as “there exists one and only one object, which is king of France, and which is bald.” Russell considered the sentence (1) to be false if uttered in our time (or 1905); the transcription of (1) into (2) would serve as a formal explanation of the falsehood of (1). It should be noted, however, that in philosophical logic formal representations are simply stipulated for the purposes of whichever discussion is going on.
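Russell's truth conditions for (2) can be spelled out computationally. The following sketch is my own illustration (the model, names, and helper function are invented, not from the chapter): in a finite model, (2) is true just in case exactly one individual satisfies the description and that individual is bald, so a 1905-style model with no king of France makes (1) come out false, exactly as Russell argued.

```python
# Evaluating Russell's analysis of "The present King of France is bald"
# in a finite model. (Illustrative sketch; model and names are invented.)

def russell_true(domain, king_of_france, bald):
    """True iff exactly one individual is king of France, and it is bald."""
    kings = [x for x in domain if king_of_france(x)]
    return len(kings) == 1 and bald(kings[0])

# A 1905-style model: nobody is king of France, so (2) is false,
# which formally explains the falsehood of sentence (1).
domain = ["louis", "emile", "marie"]
print(russell_true(domain,
                   king_of_france=lambda x: False,
                   bald=lambda x: x == "louis"))  # False: no king exists
```

The same function shows the uniqueness clause at work: a model with two kings also makes (2) false, regardless of their baldness.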
The concept of modernization does not mean the implementation of universal values in democratic institutions, but it does mean the implementation of sophisticated new technologies. (Heller, 1985:146)
Ongoing processes of change and the restructuring of work are becoming a dominant reality in most organizations, often linked to the introduction of technology. While researchers in system development and design are becoming increasingly aware of the importance of the social context for the design of information technology, the subject is rarely seen in its historical perspective. While participating in a research project on participatory design at a local district of the National Labor Inspection Service in Denmark (NLIS), we became interested in a historical perspective on the processes of change that were taking place within this institution (Bødker et al. 1991).
The work of the NLIS seems fascinating and significant to us in several ways. Not only are we ourselves analyzing work practices at the NLIS, but the institution itself is generally concerned with the interrelations of work and technology. The range of its activities is to a large degree defined by the Work Environment Laws. The laws, however, not only define the range of the NLIS's work, they also bear witness to a public understanding of work environments. In the words of Ricoeur they represent “a horizon of expectation” concerning work environment (Ricoeur, 1985: 207). In this respect the NLIS's domain is symptomatic of work and work practices in general at a social level.
The ideas presented in this chapter grew out of the authors' experiences in an earlier attempt to create interactive fiction. In Chapter 6, “Narrative Computer Systems,” we describe our struggle with a short story based mainly on pictures. The first part of the story explores the graphical possibilities as a means of sign production. However, when we began to work on the next part, where interaction took over as the most important means of expression, we got stuck. There were three interrelated problems:
The balance between reader and author shifts, since the reader must perform some of the functions previously allotted to the author. Who is responsible for getting a satisfactory experience out of the product? The reader or the author? What is the best balance between the two?
As a consequence of this, the length of the narrative controlled by the author is shortened, so that the author no longer plans and constructs a 300-page story. Instead, he constructs short narrative pieces that must be combinable in different ways, rather like a construction set. What should these narrative pieces look like?
How should one compose interactive fiction? We know by tradition how to write a text, but which techniques are suitable for developing a product that is not one, but many texts?
These problems are not special to our project but are general problems of interactive media, as several researchers in the field have already pointed out (Yellowlees Douglas, 1990; Moulthrop, 1989; Bolter & Joyce, 1987; Marshall & Irish, 1989).
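The “construction set” idea in the second problem above can be given a concrete shape. The following is a hypothetical sketch of my own, not the authors' design: each short narrative piece becomes a node with outgoing reader choices, so that a single set of pieces yields many different texts depending on the path the reader takes.

```python
# Hypothetical sketch of combinable narrative pieces: each piece is a node
# with a text and a set of reader choices leading to further pieces, so one
# set of pieces generates many possible texts. (Not the authors' design.)

pieces = {
    "start":  {"text": "You enter the house.",
               "choices": {"go upstairs": "stairs", "open door": "cellar"}},
    "stairs": {"text": "The stairs creak.", "choices": {}},
    "cellar": {"text": "It is dark below.", "choices": {}},
}

def read(path):
    """Follow a sequence of reader choices; the result is one of many texts."""
    node = "start"
    out = [pieces[node]["text"]]
    for choice in path:
        node = pieces[node]["choices"][choice]
        out.append(pieces[node]["text"])
    return " ".join(out)

print(read(["go upstairs"]))  # "You enter the house. The stairs creak."
```

In such a structure the author writes only the nodes and their links, while the reader, by choosing a path, assembles one text among many, which restates the shifted balance between author and reader described in the first problem.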