THIS chapter is concerned with the use of French in working lives. The nature of employment or activity affects language use, the words and expressions selected or preferred, just as much as does geographical location or social category, and the variation involved is just as systematic. For reasons of space, only three types of employment will be considered, in the fields of science and technology, the law, and commerce, although most trades and types of employment have their own recognisable linguistic characteristics.
Professional discourse
Tasks and groups
We have mentioned above a distinction which is commonly made between primary, or face-to-face, human groups, and secondary groups whose members are linked indirectly (see Sprott, 1958). Primary groups come together in a number of different ways and for different purposes: family members meet around the breakfast table, neighbours might drop in for a chat over coffee, small groups converse in the pub, and most such casual meetings concern themselves with ‘making contact’, social interaction in which the conversation covers a range of subject areas, and in which social links are established and broken. Chapter 10 below will be concerned with interaction of this type, and with alternative sociolinguistic models, such as networks, for understanding it.
FROM the sixteenth century, after the disappearance of Latin and despite the competition from Italian and Spanish, French was widely used throughout Europe among the aristocracy and for diplomatic contact. Over the next two centuries it strengthened this role as a ‘universal’ language, and the central importance of France and the French language in disseminating ideas reached its climax in the eighteenth century and with the Revolution. In the nineteenth century the European powers including France were hungry for colonial expansion, for the movement of people and goods, for raw materials for their industries and for markets for their products; France participated eagerly in the rush to extend influence world-wide. The present-day situation, in which French is one of the few languages used in all continents, is a direct consequence of military, economic, cultural and political expansion; significantly, it is not a consequence of mass emigration such as that of Spanish-speaking peoples to South America or English-speaking peoples to North America or Australia.
After the establishment and consolidation of France's European territory, a slow process mainly of military and diplomatic conquest from Paris which was not completed until 1860 with the attachment of Nice and Savoy, France's first overseas empire began in the sixteenth and seventeenth centuries with settlements in Canada and Louisiana, in the West Indies, and across the Indian Ocean, including continental India.
THE thirteen volumes of Brunot (1966) provide much information on the history of the formation of contemporary French, while Wolf (1983a), Genouvrier (1986: 114-38), Grillo (1989) and Rickard (1989), among others, summarise the process of language control and codification, and the relevant groups and persons involved, to the present day.
The Frankish kings who invaded France in the fifth and sixth centuries, and their successors through to the period of the French Revolution in 1789, attempted to expand their control of territory from northern France and particularly the Paris basin towards ‘natural’ frontiers of the sea, rivers and mountains. This military process was accompanied by the spread of their dialect of the French language, which had developed from the Latin spoken by Roman administrators and the soldiers, traders and settlers who followed them and established themselves across Europe. The first ‘French’ text, recognisably distinct from Latin, is generally regarded as the Strasbourg Oaths (842), and the slow linguistic development which was to follow over the Middle Ages led to the modern language, recognisable as such from about 1600 and codified in the seventeenth and eighteenth centuries. The development reflects in part the history of the triumph of the Ile de France dialect over other langue d'oïl dialects and then over the regional languages, and in part, and closely associated, that of the triumph of centralising, élitist, military, diplomatic and ecclesiastical power over feudal fragmentation and over the lower orders of society.
OUTSIDERS are defined for the purposes of this chapter as immigrants and social outcasts, the latter limited to criminals with an identifiable linguistic pattern. France has traditionally welcomed immigrants, whether they are political or economic refugees or simply driven by the desire to settle in an attractive country. As in most European countries, the population of France is racially mixed, and has been for generations: contemporary social statistics showing origin by nationality provide a snapshot of a constantly changing scene. For the purposes of this chapter we consider mainly three settled groups of immigrants - Jews, Gypsies and Armenians - principally because these three have been separately identified in the Giordan report (Giordan 1982).
In discussion of immigrants generally, three recurring themes have linguistic as well as social and personal implications: the constant tension between assimilation/integration to the host society and the maintenance of a separate identity; the inevitable differences between the problems of the original immigrants and those of their descendants of the second and third generations; and the problems associated with culture differences, for example in dress, in religious practices, or in the economic role of women.
Jews
Historical situation
Jews were present in France from the fourth century, and the object of social and religious discrimination from the fifth and sixth, although this did not become severe until the First Crusade in 1095.
MOST specialists refer to the language of the south of France as ‘occitan’, a term which dates from the fourteenth century. The language was generally called ‘limousin’ in the Middle Ages, when the troubadours spread its fame as a literary vehicle, and ‘provençal’ in the nineteenth century, when a major revival of its literary renown was last attempted. The area, almost a third of metropolitan France, lacks any one large centre which could rival Paris; it avoided early invasion by the Franks and retained a mixture of Roman and feudal administration until quite late in its history. For Occitan nationalists three dates are significant: 1228, when northern France conquered the South in the Albigensian Crusade; 1793, when federalism was crushed by centralisation as the fundamental policy of the Revolution; and 1907, when peasants and workers rose in revolt against economic domination.
After the Roman Empire collapsed in the fifth century, and despite its replacement by feudalism, the South retained both Roman law, thus avoiding the imposition of the customary law which had been introduced by the Franks, and, through the Church, the administrative organisation of the Roman Empire. Although fragmented, much of the area was controlled from the ninth century by the counts or dukes of Provence, of Toulouse, and of Aquitaine, while in the thirteenth century it became part of the ‘Angevin Empire’, to be finally attached to France in the fifteenth.
ANY analysis of social variation, or of language variation in social settings, is dependent on an overt or covert model of society and of social analysis. Such models are referred to by social scientists, including sociolinguists, in their work, and ordinary members of society, particularly of French society, are aware of them, at least in general outline. This awareness, demonstrated by the popularity of such compilations as Mermet (1985, re-edited in 1987 and 1989), exists not least because of the political implications of these socio-economic models. Public perceptions of sociolinguistic variation may well affect, and be affected by, political viewpoints: both social scientists and ordinary citizens bring their own values and points of view to social analysis, and there are few value-free analyses of society, of linguistic variation in society, or of social attitudes.
Society is made up of human groups which engage in interaction; generally speaking sociologists and political scientists are concerned with individuals only in so far as they exemplify the group(s) to which they belong, or play roles in interaction between groups. The groups - a family, a faith, women, children, the working class - can be identified through their roles in social systems: the legal, educational, religious, political, economic; while their interaction is revealed through such social processes as the differentiation of functions (in a functional or structural analysis), the socialisation of children, or the dialectic of the power struggle (in a Marxian analysis).
STANDARD modern French derives from francien, the name now given to the langue d'oïl dialects spoken around Paris after the disappearance of Latin. France is usually divided into three main linguistic regions: the North, where these langue d'oïl dialects are grouped; the South, where langue d'oc or Occitan dialects are spoken; and the franco-provençal area. In addition to Occitan, six other ‘indigenous’ languages, only two of them deriving from Latin, are spoken around the geographical periphery of France, while a number of ‘immigrant’ languages such as Arabic or Portuguese are now spoken over the whole country.
The northern langue d'oïl dialects of French can be thought of as lying in three zones: zone 1, closest to Paris, including the dialects of francien; zone 2, in a circle from angevin, normand, picard, champenois, bourguignon, berrichon to poitevin; and zone 3, farthest from Paris, including gallo, lorrain and wallon. Such a labelling represents the Paris-dominated world of linguistic research, but is convenient in examining linguistic characteristics by contrast with standard French.
In the South, dialects of north Occitan include limousin, north and south auvergnat, and alpine provençal. South Occitan dialects are languedocien and provençal, and the dialects of gascon are situated in the south-west of France.
Futurologists have proclaimed the birth of a new species, machina sapiens, that will share (perhaps usurp) our place as the intelligent sovereigns of our earthly domain. These “thinking machines” will take over our burdensome mental chores, just as their mechanical predecessors were intended to eliminate physical drudgery. Eventually they will apply their “ultra-intelligence” to solving all of our problems. Any thought of resisting this inevitable evolution is just a form of “speciesism,” born from a romantic and irrational attachment to the peculiarities of the human organism.
Critics have argued with equal fervor that “thinking machine” is an oxymoron – a contradiction in terms. Computers, with their foundations of cold logic, can never be creative or insightful or possess real judgment. No matter how competent they appear, they do not have the genuine intentionality that is at the heart of human understanding. The vain pretensions of those who seek to understand mind as computation can be dismissed as yet another demonstration of the arrogance of modern science.
Although my own understanding developed through active participation in artificial intelligence research, I have now come to recognize a larger grain of truth in the criticisms than in the enthusiastic predictions. But the story is more complex. The issues need not (perhaps cannot) be debated as fundamental questions concerning the place of humanity in the universe. Indeed, artificial intelligence has not achieved creativity, insight, and judgment. But its shortcomings are far more mundane: we have not yet been able to construct a machine with even a modicum of common sense or one that can converse on everyday topics in ordinary language.
Systems of interconnected and interdependent computers are qualitatively different from the relatively isolated computers of the past. Such “open systems” uncover important limitations in current approaches to artificial intelligence (AI). They require a new approach, one more like organizational design and management than like current AI techniques. Here we'll take a look at some of the implications and constraints imposed by open systems.
Open systems are always subject to communications and constraints from outside. They are characterized by the following properties:
Continuous change and evolution. Distributed systems are always adding new computers, users and software. As a result, systems must be able to change as the components and demands placed upon them change. Moreover, they must be able to evolve new internal components in order to accommodate the shifting work they perform. Without this capability, every system must reach the point where it can no longer expand to accommodate new users and uses.
Arm's-length relationships and decentralized decision making. In general, the computers, people, and agencies that make up open systems do not have direct access to one another's internal information. Arm's-length relationships imply that the architecture must accommodate multiple computers at different physical sites that do not have access to the internal components of others. This leads to decentralized decision making.
Perpetual inconsistency among knowledge bases. Because of privacy and discretionary concerns, different knowledge bases will contain different perspectives and conflicting beliefs. Thus, all the knowledge bases of a distributed AI system taken together will be perpetually inconsistent. Decentralization makes it impossible to update all knowledge bases simultaneously.
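To make these three properties concrete, here is a minimal sketch in Python (our illustration only; the class, the site names and the shipment scenario are invented for the example, and no such code appears in the open-systems literature). Each site keeps its beliefs in a private store, updates it locally, and answers queries only at arm's length, so conflicting beliefs can never be reconciled by a simultaneous global update:

    class KnowledgeBase:
        """A site's local store of beliefs; other sites cannot inspect it."""

        def __init__(self, site):
            self.site = site
            self._beliefs = {}        # internal state, never shared wholesale

        def assert_belief(self, proposition, value):
            """A purely local update: no other site is notified."""
            self._beliefs[proposition] = value

        def query(self, proposition):
            """Arm's-length access: other sites may ask, but not look inside."""
            return self._beliefs.get(proposition)   # None means 'no opinion'

    # Two sites form their beliefs independently and asynchronously...
    paris = KnowledgeBase("paris")
    lyon = KnowledgeBase("lyon")
    paris.assert_belief("shipment_42_arrived", True)
    lyon.assert_belief("shipment_42_arrived", False)  # stale, conflicting report

    # ...so, taken together, the knowledge bases are inconsistent, and since
    # neither site can reach into the other's store, there is no moment at
    # which both could be updated simultaneously to restore consistency.
    for kb in (paris, lyon):
        print(kb.site, kb.query("shipment_42_arrived"))

Running the sketch prints one answer per site; the disagreement is visible only to an observer standing outside the system, which is exactly the position no component of an open system occupies.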
“But why,” Aunty asked with perceptible asperity, “does it have to be a language?” Aunty speaks with the voice of the Establishment, and her intransigence is something awful. She is, however, prepared to make certain concessions in the present case. First, she concedes that there are beliefs and desires and that there is a matter of fact about their intentional contents; there's a matter of fact, that is to say, about which proposition the intentional object of a belief or a desire is. Second, Aunty accepts the coherence of physicalism. It may be that believing and desiring will prove to be states of the brain, and if they do that's OK with Aunty. Third, she is prepared to concede that beliefs and desires have causal roles, and that overt behavior is typically the effect of complex interactions among these mental causes. (That Aunty was raised as a strict behaviorist goes without saying. But she hasn't been quite the same since the sixties. Which of us has?) In short, Aunty recognizes that psychological explanations need to postulate a network of causally related intentional states. “But why,” she asks with perceptible asperity, “does it have to be a language?” Or, to put it more succinctly than Aunty often does, what – over and above mere Intentional Realism – does the Language of Thought Hypothesis buy? That is what this discussion is about.
A prior question: what – over and above mere Intentional Realism – does the Language of Thought Hypothesis (LOT) claim? Here, I think, the situation is reasonably clear.
Artificial intelligence is still a relatively young science, in which there are still various influences from different parent disciplines (psychology, philosophy, computer science, etc.). One symptom of this situation is the lack of any clearly defined way of carrying out research in the field (see D. McDermott, 1981, for some pertinent comments on this topic). There used to be a tendency for workers (particularly Ph.D. students) to indulge in what McCarthy has called the “look-ma-no-hands” approach (Hayes, 1975b), in which the worker writes a large, complex program, produces one or two impressive printouts and then writes papers stating that he has done this. The deficiency of this style of “research” is that it is theoretically sterile – it does not develop principles and does not clarify or define the real research problems. What has happened over recent years is that some attempt is now made to outline the principles which a program is supposed to implement. That is, the worker still constructs a complex program with impressive behaviour, but he also provides a statement of how it achieves this performance. Unfortunately, in some cases, the written “theory” may not correspond to the program in detail, but the writer avoids emphasizing (or sometimes even conceals) this discrepancy, resulting in methodological confusion. The “theory” is supposedly justified, or given empirical credibility, by the presence of the program (although the program may have been designed in a totally different way); hence the theory is not subjected to other forms of argument or examination.
Rational reconstruction (reproducing the essence of the program's significant behavior with another program constructed from descriptions of the purportedly important aspects of the original program) has been one approach to assessing the value of published claims about programs.
Campbell attempts to account for why the status of AI vis-à-vis the conventional sciences is a problematic issue. He outlines three classes of theories, the distinguishing elements of which are: equations; entities, operations and a set of axioms; and general principles capable of particularization in different forms. Models in AI, he claims, tend to fall into the last class of theory.
He argues for the methodology of rational reconstruction as an important component of a science of AI, even though the few attempts so far have not been particularly successful, if success is measured in terms of the similarity of behavior between the original AI system and the subsequent rational reconstruction. But, as Campbell points out, it is analysis and exploration of exactly these discrepancies that is likely to lead to significant progress in AI.
The second paper in this section is a reprint of one of the more celebrated attempts to analyse a famous AI program. In addition to an analysis of the published descriptions of how the program (Lenat's ‘creative rediscovery’ system AM) works with respect to its actual behaviour, Ritchie and Hanna discuss more general considerations of the rational-reconstruction methodology.
There is a continuing concern in AI that proof and correctness, the touchstones of the theory of programming, are being abandoned to the detriment of AI as a whole. On the other hand, we can find arguments to support just the opposite view: that attempts to fit AI programming into the specify-and-prove (or at least specify-and-test) paradigm of conventional software engineering are contrary to the role of programming in AI research.
Similarly, the move to establish conventional logic as the foundational calculus of AI (currently seen in the logic programming approach and in knowledge-based decision-making implemented as a proof procedure) is another aspect of correctness in AI, and one whose validity is questioned (for example, Chandrasekaran's paper in section 1 opened the general discussion of such issues when it examined logic-based theories in AI, and Hewitt, in section 11, takes up the more specific question of the role of logic in expert systems). Both sides of this correctness question are presented below.
Philosophers constantly debate the nature of their discipline. These interminable debates frustrate even the most patient observer. Workers in AI also disagree, although not so frequently, about how to conduct their research. To equate programs with theories may offer a simple unifying tool to achieve agreement about the proper AI methodology. To construct a program becomes a way to construct a theory. When AI researchers need to justify their product as scientific, they can simply point to their successful programs. Unfortunately, methodological agreement does not come so easily in AI.
For a number of reasons, theorists in any discipline do not relish washing their proverbially dirty laundry in public. Methodology creates a great deal of that dirt, and philosophy of science supposedly supplies the soap to cleanse the discipline's methodology. Scientists often appeal to philosophers of science to develop methodological canons. It certainly does not instill confidence in a discipline if its practitioners cannot even agree on how to evaluate each other's work. Despite public images to the contrary, disagreements over how to approach a subject matter predominate in most scientific disciplines. Can philosophy of science come to the rescue of AI methodology? Yes and no.
Before describing the middle ground occupied by philosophy of science in its relationship to AI, we need to examine how some dismiss the AI research project altogether. In a previous article I argued against the various philosophical obstacles to the program/theory equation (Simon, 1979). I considered three types of objection to AI research: impossibility objections, ethical objections and implausibility objections.
The area of non-monotonic reasoning and the area of logic programming are of crucial and growing significance to artificial intelligence and to the whole field of computer science. It is therefore important to achieve a better understanding of the relationship existing between these two fields.
The major goal in the area of non-monotonic reasoning is to find adequate and sufficiently powerful formalizations of various types of non-monotonic reasoning – including common-sense reasoning – and to develop efficient ways of implementing them. Most of the currently existing formalizations are based on mathematical logic.
Logic programming introduced to computer science the important concept of declarative – as opposed to procedural – programming, based on mathematical logic. Logic programs, however, do not use logical negation, but instead rely on a non-monotonic operator – often referred to as negation as failure – which represents a procedural form of negation.
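A toy example makes the non-monotonic character of negation as failure concrete. The sketch below is a loose Python rendering of the textbook Prolog rule flies(X) :- bird(X), not penguin(X); the predicates are the standard illustrative ones, not drawn from this chapter, and ‘not’ is read as failure to prove:

    FACTS = {("bird", "tweety")}

    # Rule (in Prolog): flies(X) :- bird(X), not penguin(X).
    def flies(x):
        bird = ("bird", x) in FACTS
        penguin_provable = ("penguin", x) in FACTS   # attempt to prove penguin(x)
        return bird and not penguin_provable         # "not" = failure to prove

    print(flies("tweety"))      # True: penguin(tweety) cannot be proved

    FACTS.add(("penguin", "tweety"))                 # new knowledge arrives...
    print(flies("tweety"))      # False: the earlier conclusion is withdrawn

Adding the fact penguin(tweety) does not merely add a conclusion, as it would in classical logic; it withdraws one, which is precisely what makes the operator non-monotonic.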
Non-monotonic reasoning and logic programming are closely related. The importance of logic programming to the area of non-monotonic reasoning follows from the fact that, as observed by several researchers (see e.g. Reiter, [to appear]), the non-monotonic character of procedural negation used in logic programming often makes it possible to implement other non-monotonic formalisms efficiently in Prolog or in other logic programming languages. Logic programming can also be used to provide formalizations for special forms of non-monotonic reasoning. For example, Kowalski and Sergot's calculus of events (1986) uses Prolog's negation-as-failure operator to formalize the temporal persistence problem in AI.
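To illustrate the general idea (a simplified Python sketch in the spirit of the event calculus, not Kowalski and Sergot's actual formalization; the events and fluent names are invented), temporal persistence can be treated as a default: a fluent holds at time t if some earlier event initiated it and no intervening terminating event can be proved, the latter condition being a use of negation as failure:

    INITIATES  = {"switch_on": "light_on"}    # event -> fluent it initiates
    TERMINATES = {"switch_off": "light_on"}   # event -> fluent it terminates
    EVENTS = [(1, "switch_on"), (5, "switch_off")]   # (time, event) records

    def holds_at(fluent, t):
        """Persistence by default: a fluent holds unless provably 'clipped'."""
        for t0, e0 in EVENTS:
            if t0 < t and INITIATES.get(e0) == fluent:
                clipped = any(t0 < t1 < t and TERMINATES.get(e1) == fluent
                              for t1, e1 in EVENTS)  # negation as failure
                if not clipped:
                    return True
        return False

    print(holds_at("light_on", 3))   # True: initiated at 1, not clipped by 3
    print(holds_at("light_on", 7))   # False: terminated at 5

Since the persistence of light_on is inferred from the absence of a provable clipping event, recording a new terminating event between times 1 and 3 would silently withdraw the first conclusion, just as in the bird/penguin example above.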