
Nonmonotonic Logic

Published online by Cambridge University Press:  18 October 2025

Christian Straßer
Affiliation:
Ruhr University Bochum

Summary

Nonmonotonic logics serve as formal models of defeasible reasoning, a type of reasoning where conclusions are drawn absent absolute certainty. Defeasible reasoning takes place when scientists interpret experiments, in medical diagnosis, and in practical everyday situations. Given its wide range of applications, nonmonotonic logic is of interest to philosophy, psychology, and artificial intelligence. This Element provides a systematic introduction to the multifaceted world of nonmonotonic logics. Part I familiarizes the reader with basic concepts and three central methodologies: formal argumentation, consistent accumulation, and semantic methods. Parts II–IV provide a deeper understanding of each of these methods by introducing prominent logics within each paradigm. Despite the apparent lack of unification in the domain of nonmonotonic logics, this Element reveals connections between the three paradigms by demonstrating translations among them. Whether you're a novice or an experienced traveler, this Element provides a reliable map for navigating the landscape of nonmonotonic logic.

Information

Type: Element
Online ISBN: 9781108981699
Publisher: Cambridge University Press
Print publication: 13 November 2025

Nonmonotonic Logic: Logics for Defeasible Reasoning

Introduction

Nonmonotonic logic (abbreviated as NML) and its domain, defeasible reasoning, are multifaceted areas. In crafting an Element that serves as both an introduction and an overview, we must adopt a specific perspective to ensure coherence and systematic coverage. It is, however, in the nature of illuminating a scenario with a spotlight that certain aspects emerge prominently, while others recede into shadow. The focus of this Element is on unveiling the core ideas and concepts underlying NML. Rather than exhaustively presenting concrete logics from existing literature, we emphasize three fundamental methods: (i) formal argumentation, (ii) consistent accumulation, and (iii) semantic approaches.

An argumentative approach for understanding human reasoning has been proposed both in a philosophical context by Toulmin’s forceful attack on formal logic (Toulmin, 1958), and more recently in cognitive science by Mercier and Sperber (2011). Pioneers such as Pollock (1991) and Dung (1995) have provided the foundation for a rich family of systems of formal argumentation.

Consistent accumulation methods are based on the idea that an agent facing possibly conflicting and not fully reliable information is well advised to reason on the basis of only a consistent part of that information. The agent could start with certain information and then stepwise add merely plausible information. In this way they stepwise accumulate a consistent foundation to reason with. Accumulation methods cover, for instance, Reiter’s influential default logic (Reiter, 1980) or methods based on maximal consistent sets, such as early logics by Rescher and Manor (1970) and (constrained) input–output logic (Makinson & Van der Torre, 2001).
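The core move of reasoning with maximal consistent subsets (as in Rescher and Manor's early logics) can be sketched in a few lines of Python. The encoding below is our own illustrative choice, not notation from the literature: literals are strings with "~" for negation, and a simple clash test stands in for a full propositional consistency check (which would require a SAT solver).

```python
from itertools import combinations

def consistent(literals):
    """A set of literals is consistent iff it contains no pair p, ~p.
    (Toy check; full propositional consistency needs a SAT solver.)"""
    return not any(("~" + l if not l.startswith("~") else l[1:]) in literals
                   for l in literals)

def maximal_consistent_subsets(base):
    """All subsets of `base` that are consistent and maximal w.r.t. inclusion."""
    base = list(base)
    mcs = []
    # Enumerate subsets from largest to smallest, keeping only maximal ones.
    for k in range(len(base), -1, -1):
        for combo in combinations(base, k):
            s = set(combo)
            if consistent(s) and not any(s < m for m in mcs):
                mcs.append(s)
    return mcs

# An inconsistent base: p, ~p, q
mcs = maximal_consistent_subsets({"p", "~p", "q"})
# Conclusions that hold in *every* maximal consistent subset survive
# no matter how the conflict is resolved:
free = set.intersection(*mcs)
```

On the base {p, ¬p, q} this yields the two maximal consistent subsets {p, q} and {¬p, q}; their intersection {q} corresponds to the conclusions that are immune to the conflict between p and ¬p.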

While the previous two methods are largely based on syntactic or proof-theoretic considerations, interpretation plays the essential role in semantic approaches. The core idea is to order interpretations with respect to normality considerations and then to select sufficiently normal ones. These are used to determine the consequences of a reasoning process or to give meaning to nonmonotonic conditionals. The idea surfaces in the history of NML in many places, among others in Batens (1986), Gelfond and Lifschitz (1988), Kraus et al. (1990), McCarthy (1980), and Shoham (1987).

A central aspect of this Element is its unifying perspective (inspired by works such as Bochman (2005) and Makinson (2005)). Defeasible reasoning gives rise to a variety of formal models based on different assumptions and approaches. Comparing these approaches can be difficult. The Element presents several translations between NMLs, illustrating that in many cases the same inferences can be validated in terms of diverse formal methods. These translations offer numerous benefits. They enrich our understanding by offering different perspectives: the same underlying inference mechanism may be considered as a form of (formal) argumentation, as a way of reasoning with interpretations that are ordered with respect to their plausibility, or as a way of accumulating and reasoning with consistent subsets of a possibly inconsistent knowledge base. They demonstrate the robustness of the underlying inference mechanism, since several intuitive methods give rise to the same result. While the different methodological strands of NML have often been developed with little cross-fertilization, it is remarkable that the resulting systems can often be related with relative ease. Finally, the translations may convince the reader that, despite the fact that the field of NML seems a bit of a rag rug at first sight, there is quite some coherence when taking a deeper dive. In particular, by showcasing formal argumentation’s exceptional ability to represent other NMLs, this Element adds further evidence to the fruitfulness of Dung’s program of utilizing formal argumentation as a unifying perspective on defeasible reasoning (Dung, 1995).

The Element is organized in four parts. Part I provides a general introduction to the topic of defeasible reasoning and NML. The three core methods are each introduced in a nutshell. It provides a condensed and self-contained overview of the fundamentals of NML for readers with limited time. Parts II to IV deepen the treatment of each of the respective methods by providing metatheoretic insights and presenting concrete systems from the literature.

While some short metaproofs that contribute to an improved understanding are left in the body of the Element, two technical appendices are provided for others. In particular, results marked with ‘⋆’ are proven in the appendices.

Many important aspects and systems of NML didn’t get the spotlight and fell victim to the trade-off between systematicity and scope from which an introductory Element of this length will necessarily suffer. Nevertheless, with this Element a reader will grow the wings necessary to maneuver in the lands of nonflying birds, that is, they will be well equipped to understand, say, first-order versions of logics that are discussed here on the propositional level, or systems such as autoepistemic logic.

Part I Logics for Defeasible Reasoning

1 Defeasible Reasoning

1.1 What is Defeasible Reasoning?

We certainly want more than we can get by deduction from our evidence. … So real inference, the inference we need for the conduct of life, must be nonmonotonic.

Henry Kyburg (2001).

This Element introduces logical models of defeasible reasoning, so-called NonMonotonic Logics (in short, NMLs). When we reason, we make inferences, that is, we draw conclusions from some given information or basic assumptions. Whenever we reserve the possibility to retract some inferences upon acquiring more information, we reason defeasibly.Footnote 1 Two paradigmatic examples of defeasible inferences are:

Assumption | Defeasible conclusion | Reason for retraction
The streets are wet. | It rained. | The streets have been cleaned.
Tweety is a bird. | Tweety can fly. | Tweety is a penguin.

As the examples highlight, we often reason defeasibly if our available information is incomplete: we lack knowledge of what happened before we observed the wet streets, or we lack knowledge of what kind of bird Tweety is. Defeasible inferences often add new information to our assumptions: while being explanatory of the streets being wet, the fact that it rained is not contained in the fact that the streets are wet, and while being able to fly is a typical property of birds, being a bird does not necessitate being able to fly. In this sense defeasible inferences are ampliative.

Logics that may lose conclusions once more information is acquired are called nonmonotonic. The vast majority of logics the reader will typically encounter in logic textbooks are monotonic, with classical logic (in short, CL) being the celebrity. Whenever the given assumptions are true, an inference sanctioned by CL will securely pass the torch from the assumptions to the conclusion, warranting with absolute certainty the truth of the conclusion. Truth is preserved in inferences sanctioned by CL. No matter how much information we add, how many inferences we chain between our premises and our final conclusion, or how often the torch is passed, truth endures: the flames reach their final destination. Thus, inferences are never retracted in CL, and conclusions accumulate the more assumptions we add. This property, called monotonicity, is highly desirable for certain domains of reasoning such as mathematics, a domain where CL reigns.

However, a key motivation behind the development of NML is that out in the wild of commonsense, expert, or scientific reasoning, good inferences need not be truth preservational: we often change our minds and retract inferences when watching a crime show and wondering who the most likely murderer is; medical doctors may change their diagnosis with the arrival of more evidence, and so do scientists, sometimes resulting in scientific revolutions. In less idealized circumstances than those of purely formal sciences (such as mathematics), we usually need to reason with incomplete, sometimes even conflicting information. As a consequence, our inferences allow for exceptions and/or criticism. They are adaptable: learning or inferring more information may cause retraction, previous inferences may get defeated. Outside the ivory tower of mathematics, in the stormy domain of commonsense reasoning, the torch’s fire may get extinguished.

It is therefore not surprising that examples of defeasible reasoning are abundant. In what follows, we will list some paradigmatic examples.

Example 1. We first imagine a scenario at a student party.Footnote 2

1. Peter: I haven’t seen Ruth!
2. Mary: Me neither. If there’s an exam the next day, Ruth studies late in the library.
3. Peter: Yes, that’s it. The logic exam tomorrow!
4. Anne: But today is Sunday. Isn’t the library closed?
5. Peter: True, and indeed there she is! [pointing to Ruth entering the room]

In her reply to Peter’s observation concerning Ruth’s absence (1), Mary states a regularity in form of a conditional (2): If there’s an exam the next day, Ruth studies late in the library. She offers an explanation as to why Ruth is not around. The explanation is hypothetical, since she doesn’t offer any insights as to whether there is an exam. Peter supplements the information that, indeed, (3) there is an exam. Were our students to treat information (2) and (3) in the manner of CL as a material implication, they would be able to apply modus ponens to infer that Ruth is currently studying late in the library.Footnote 3 And, indeed, after utterance (3) it is quite reasonable for Mary and Peter to conclude that

  1. (⋆) Ruth is not at the party since she’s studying late at the library.

Anne’s statement (4) casts doubt on the argument (⋆), since the library might be closed today. This does not undermine the regularity stated by Mary, but it points to a possible exception. Anne’s statement may lead to the retraction of (⋆), which is further confirmed when Peter finally sees Ruth (5): this is defeasible reasoning in action!

Defaults. Statements such as “Birds fly.” allow for exceptions. It is therefore not surprising that one of the most frequent characters in papers on NML is Tweety. While the reader may sensibly infer that Tweety can fly when they are told that Tweety is a bird, they might be skeptical when being informed that Tweety lives at the South Pole, and most definitely will retract the inference as soon as they hear that Tweety is a penguin.Footnote 4 As we have also seen in our example, we often express regularities in the form of conditionals – so-called default rules, or simply defaults – that hold typically, mostly, plausibly, and so on, but not necessarily.

Closed-World Assumption. Often, defeasible reasoning is rooted in the fact that communication practices are based on an economic use of information. When making lists such as menus at restaurants or timetables at railway stations, we typically only state positive information. We interpret (and compile) such lists under the assumption that what is not listed is not the case. For instance, if a meal or connection is not listed, we consider it not to be available. This practice is called the closed-world assumption (Reiter, 1981).
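The closed-world assumption can be sketched as a one-line completion operation. This is an illustrative toy (the function name, the "~" prefix for negation, and the timetable atoms are our own), not Reiter's full formalization:

```python
def closed_world(facts, atoms):
    """Complete a set of positive facts under the closed-world assumption:
    every atom not explicitly listed is assumed false (written '~atom')."""
    return set(facts) | {"~" + a for a in atoms if a not in facts}

# A railway timetable lists only the connections that run.
timetable = {"train_9am", "train_5pm"}
slots = {"train_9am", "train_12pm", "train_5pm"}
completed = closed_world(timetable, slots)
# The unlisted 12pm connection is concluded not to run:
# '~train_12pm' is in `completed`.
```

Note the defeasibility: if "train_12pm" is later added to the facts, recomputing the completion retracts the negative conclusion "~train_12pm".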

Rules with Explicit Exceptions. Before presenting more examples of defeasible reasoning, let us halt for a moment to address a possible objection. Is CL really inadequate as a model of this kind of reasoning? Can’t we simply express all possible exceptions as additional premises? For instance,

  1. (†) If there’s an exam the next day and the library is open late and Ruth is not ill and on her way didn’t get into a traffic jam and …, then Ruth studies late in the library.

There are several problems with this proposal. The first concerns the open-ended nature of the list of exceptions which characterizes most rules that express what typically/usually/plausibly/and so on holds. Even in the (rare) cases in which it is – in principle – possible to compile a complete list of exceptions, the resulting conditional will not adequately represent a reasoning scenario in which our agent may not be aware of all possible exceptions. They may merely be aware of the possibility of exceptions and be able, if asked for it, to list some (such as penguins as nonflying birds). Others may escape them (such as kiwis), but they would readily retract their inference that Tweety flies after learning that Tweety is a kiwi. In other words, the complexities involved in generating explicit lists of exceptions are typically far beyond the capacities of real-life and artificial agents. What is more, in order to apply modus ponens to conditionals such as (†), our reasoner would have to first check whether each possible exception holds. This may be impossible for some, unfeasible for others, and altogether it would render out of reach the pace of reasoning that is needed to cope with their real-life situation.

In contrast to reasoning from fixed sets of axioms in mathematics, commonsense reasoning needs to cope with incomplete (and possibly conflicting) information. In order to get off the ground, it (a) jumps to conclusions based on regularities that allow for exceptions and (b) adapts to potential problems in the form of exceptional circumstances on the fly, by means of the retraction of previous inferences.

Abductive Inferences. Another type of defeasible reasoning concerns cases in which we infer explanations of a given state of affairs (also called abductive inferences). For instance, upon seeing the wet street in front of her apartment, Susie may infer that it rained, since this explains the wetness of the streets. However, when Mike informs her that the streets have been cleaned a few minutes ago, she will retract her inference. We see this kind of inference often in diagnosis and investigative reasoning (think of Sherlock Holmes or a scientist wondering how to interpret the outcome of an experiment). As both the exciting histories of the sciences and the twisted narratives of Sir Arthur Conan Doyle reveal, abductive inference is defeasible.

Inductive Generalizations. In both scientific and everyday reasoning, we frequently rely on inductive generalizations. Having seen only white swans, a child may infer that all swans are white, only to retract the inference during a walk in the park when a black swan crosses their path.

These are some central, but far from the only types of defeasible inferences. A more exhaustive and systematic overview can be found, for instance, in Walton et al. (2008), where they are informally analyzed in terms of argument schemes.Footnote 5

1.2 Challenges to Models of Defeasible Reasoning

Formal models of defeasible reasoning face various challenges. Let us highlight some.

1.2.1 Human Reasoning and the Richness of Natural Language

As we have seen, defeasible reasoning is prevalent in contexts in which agents are equipped with incomplete and uncertain information. By providing models of defeasible reasoning, NMLs are of interest to both philosophers investigating the rationality underlying human reasoning and computer scientists interested in the understanding and construction of artificially intelligent agents. Human reasoning has a peculiar status in both investigations in that selected instances of it serve as role models of rational and successful artificial reasoning. After all, humans are equipped with a highly sophisticated cognitive system that has evolutionarily adapted to an environment of which it only has incomplete and uncertain information. Therefore, it seems quite reasonable to assume that we can learn a good deal about defeasible reasoning, including the question of what is good defeasible inference, by observing human inference practices.

There are, however, several complications that come with the paradigmatic status of human defeasible reasoning. First, human reasoning is error-prone, which means we have to rely on selected instances of good reasoning. But what are exemplars of good reasoning? In view of this problem, very often nonmonotonic logicians simply rely on their own intuitions. There are good reasons why one should not let expert intuition be the last word on the issue. We may be worried, for instance, about the danger of myside bias (also known as confirmation bias; see Mercier and Sperber (2011)): intuitions may be biased toward satisfying properties of the formal system that is proposed by the respective scholar.

Then, there is the possibility of “déformation professionnelle,” given that the expert’s intuitions have been fostered in the context of a set of paradigmatic examples about penguins with the name Tweety, ex-US presidents (see Examples 2 and 3), and the like.Footnote 6

Another complication is the multifaceted character of defeasible reasoning in human reasoning. First, there is the variety of ways we can express in natural language regularities that allow for exceptions. We have “Birds fly,” “Birds typically fly,” “Birds stereotypically fly,” “Most birds fly,” and so on, none of which are synonymous: for example, while tigers stereotypically live in the wild, most tigers live in captivity. What is more important, the different formulations may give rise to different permissible inferences. Consider the generic “Lions have manes.” While having a mane implies being a male lion, “Lions are males” is not acceptable (Pelletier & Elio, 1997). The inference pattern blocked is known as right weakening: if A by default implies B, and C follows classically from B, then C follows by default from A as well. It is valid in most NMLs, and it seems adequate for the “typical,” “stereotypical,” and “most” reading of default rules, but not for some generics.Footnote 7 For NMLs this poses the challenge to keep in mind the intended interpretation of defaults and differences in the underlying logical properties that various interpretations give rise to.

Despite these problems, it seems clear that “reasoning in the wild” should play a role in the validation and formation of NMLs.Footnote 8 This pushes NML in proximity to psychology. In practice, nonmonotonic logicians try to strike a good balance by obtaining metatheoretically well-behaved formal systems that are to some degree intuitively and descriptively adequate relative to (selected) human reasoning practices.

1.2.2 Conflicts and Consequences

Defeasible arguments frequently conflict. This poses a challenge for normative theories of defeasible reasoning, which must specify the conditions under which inferences remain permissible in such scenarios.

For this discussion, some terminology and notation will be useful. An argument (in our technical sense) is obtained by either stating basic assumptions or by applying inference rules to the conclusions of other arguments. An argument is defeasible if it contains a defeasible rule (such as a default), symbolized by ⇒. Such an argument may also include truth-preservational strict inference rules (such as the ones from CL), symbolized by →. A conflict between two arguments arises if they lead to contradictory conclusions A and ¬A (where ¬ denotes negation).

Let us now take a look at two paradigmatic examples.

Example 2 (Nixon; Reiter and Criscuolo (1981)). One of the most well-known examples in NML is the Nixon Diamond (see Fig. 1):Footnote 9

1. Nixon is a Dove.

Nixon → Dove

2. Nixon is a Quaker.

Nixon → Quaker

3. By default, Doves are Pacifists.

Dove ⇒ Pacifist

4. By default, Quakers are not Pacifists.

Quaker ⇒ ¬Pacifist

Given the conflict between the arguments Nixon → Dove ⇒ Pacifist and Nixon → Quaker ⇒ ¬Pacifist, should we conclude that Nixon is (not) a pacifist? It seems an agnostic stance is recommended in this example.
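The agnostic stance can be made concrete by distinguishing skeptical from credulous consequence. In the sketch below, the two maximal ways of applying the conflicting defaults (the "extensions") are written out by hand for illustration; how extensions are computed is the topic of later parts of the Element, and the string encoding is our own:

```python
# The two maximal ways of applying the conflicting defaults:
ext1 = {"Dove", "Quaker", "Pacifist"}    # apply Dove => Pacifist
ext2 = {"Dove", "Quaker", "~Pacifist"}   # apply Quaker => ~Pacifist

skeptical = ext1 & ext2   # accept only what holds in *every* extension
credulous = ext1 | ext2   # accept what holds in *some* extension

# Skeptically, we remain agnostic about pacifism: neither "Pacifist"
# nor "~Pacifist" is in `skeptical`, while both are in `credulous`.
```

The skeptical intersection retains only the uncontroversial conclusions (Nixon is a dove and a Quaker), matching the agnostic recommendation for this example.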

Example 3 (Tweety; Doyle and McDermott (1980)). Another well-known example is Tweety the penguin (see Fig. 2) based on the following information:

1. Tweety is a penguin.

Tweety → penguin

2. Penguins are birds.

penguin → bird

3. By default, birds fly.

bird ⇒ fly

4. By default, penguins don’t fly.

penguin ⇒ ¬fly

We use the example to demonstrate a way to resolve conflicts among defeasible arguments, here between (a) Tweety → penguin → bird ⇒ fly and (b) Tweety → penguin ⇒ ¬fly. According to the specificity principle, more specific defaults, such as penguin ⇒ ¬fly, are prioritized over less specific ones, such as bird ⇒ fly. The reason is that more specific defaults may express exceptions to the more general ones. So, in this example the preferred outcome ¬fly will be obtained, since the less specific defeasible argument (a) should be retracted in favor of (b).

Figure 1 (The Nixon Diamond from Example 2): single arrows lead from Nixon (rectangular node) to Dove and Quaker (black nodes); double arrows lead from Dove to Pacifist (light node) and from Quaker to ¬Pacifist; a wavy arrow is drawn between Pacifist and ¬Pacifist. Double arrows symbolize defeasible rules, single arrows strict rules, and wavy arrows conflicts. Black nodes represent unproblematic conclusions, while light nodes represent problematic conclusions. Rectangular nodes represent the starting point of the reasoning process. We use the same symbolism in the following figures.

Figure 2 (Tweety and specificity, Example 3): a single arrow leads from Tweety (rectangular node) to penguin (black node) and from penguin to bird (black node); double arrows lead from penguin to ¬fly (black node) and from bird to fly (light node); a wavy arrow is drawn between fly and ¬fly.
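The specificity resolution of Example 3 can be sketched as follows; the string encoding of the rules (single arrows as `strict`, double arrows as `defaults`) and all function names are our own illustrative choices:

```python
# Strict rules (single arrows) and defaults (double arrows) as
# (antecedent, consequent) pairs.
strict = [("Tweety", "penguin"), ("penguin", "bird")]
defaults = [("bird", "fly"), ("penguin", "~fly")]

def strict_closure(facts):
    """Close a set of facts under the strict rules."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for a, c in strict:
            if a in facts and c not in facts:
                facts.add(c)
                changed = True
    return facts

def more_specific(a1, a2):
    """a1 is more specific than a2 if a2 is strictly derivable from a1."""
    return a1 != a2 and a2 in strict_closure({a1})

facts = strict_closure({"Tweety"})                 # Tweety, penguin, bird
triggered = [d for d in defaults if d[0] in facts] # both defaults fire
# Prefer the triggered default with the most specific antecedent:
winner = max(triggered,
             key=lambda d: sum(more_specific(d[0], e[0]) for e in triggered))
```

Since penguin is strictly derivable from itself but bird is strictly derivable from penguin (and not vice versa), the default penguin ⇒ ¬fly wins, yielding the preferred conclusion ¬fly.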

Our examples indicate that, first, conflicts between defeasible arguments can occur, and second, the context may determine whether and, if so, how a conflict can be resolved. We now take a look at two further challenges that come with conflicts in defeasible reasoning.

Figure 3 encodes the following information: A ⇒ B, A ⇒ C, A, and ¬B. Should we infer C? Nonmonotonic logics that block this inference have been said to suffer from the drowning problem (Benferhat et al., 1993). Examples like the following seem to suggest that we should accept C.

Figure 3 (A drowning scenario): double arrows lead from A (rectangular node) to B and C (light nodes); a wavy arrow is drawn between B and ¬B (rectangular node).

Example 4. We consider the scenario:

1. Micky is a dog.

Micky → A

2. Dogs normally (have the ability to) tag along with a jogger.

A ⇒ B

3. Dogs normally (have the ability to) bark.

A ⇒ C

4. Micky lost a leg and can’t tag along with a jogger.

Micky → ¬B

In this example it seems reasonable to infer C, that Micky has the ability to bark, despite the presence of ¬B. In other contexts one may be more cautious when jumping to a conclusion.

Example 5. Take the following scenario.

1. It is night.

A

2. During the night, the light in the living room is usually off.

A ⇒ B

3. During the night, the heating in the living room is usually off.

A ⇒ C

4. The light in the living room is on.

¬B

In this scenario it seems less intuitive to infer C, that the heating in the living room is off. The fact that we have in (4) an exception to default (2) may have an explanation in the light of which default (3) is also excepted. For example, the inhabitant forgot to check the living room before going to sleep, she is not at home and left the light and heating on before leaving, she is still in the living room, and so on.

These examples show that concrete reasoning scenarios often contain a variety of relevant factors that influence what real-life reasoners take to be intuitive conclusions. Specific NMLs typically only model a few of these factors and omit others. For instance, although Elio and Pelletier (1994) and Koons (2017) argue that it is useful to track causal and explanatory relations in the context of drowning problems, systematic research in this direction is lacking.

Another class of difficult scenarios has to do with so-called floating conclusions.Footnote 10 These are conclusions that follow from two opposing arguments. Formally, the scenario may be as depicted in Fig. 4.

Figure 4 (A scenario with the floating conclusion C): double arrows lead from A1 (rectangular node) to B1 (light node) and from A2 (rectangular node) to B2 (light node); single arrows lead from B1 and from B2 to C (light node); a wavy arrow is drawn between B1 and B2.

Example 6. Suppose two generally reliable weather reports:

1. Station 1: The hurricane will hit Louisiana and spare Alabama.

A1 ⇒ B1

2. Station 2: The hurricane will hit Alabama and spare Louisiana.

A2 ⇒ B2

3. If the hurricane hits Louisiana, it hits the South coast.

B1 → C

4. If the hurricane hits Alabama, it hits the South coast.

B2 → C

The floating conclusion, (5), The storm will probably hit the South coast, may seem acceptable to a cautious reasoner. The rationale is that both reports agree on the upcoming storm and even roughly where it will hit. The disagreement may be due to different weightings of diverse factors in their respective underlying scientific weather models. But the combined evidence of both stations seems to confirm conclusion (5) rather than disconfirm it. This is not always the case with partially conflicting expert statements, as the next example shows.

Example 7. Assume two expert reviewers, Reviewer 1 and Reviewer 2, evaluating Anne for a professorship. She sent in two manuscripts, A and B.

1. Reviewer 1: Manuscript A is highly original, while manuscript B repeats arguments already known in the literature.

A1 ⇒ B1

2. According to Reviewer 1, one manuscript is highly original.

B1 → C

3. Reviewer 2: Manuscript B is highly original, while manuscript A repeats arguments already known in the literature.

A2 ⇒ B2

(We assume the inconsistency of B1 with B2.)

4. According to Reviewer 2, one manuscript is highly original.

B2 → C

Should we conclude that one manuscript is highly original, since it follows from both reviewers’ evaluations? It seems a more cautious stance is advisable. The disagreement may well be an indication of the sub-optimality of each of the two reviews. Indeed, a possible explanation of their conflicting assessments could be that (a) Reviewer 1 is aware of an earlier article B′ (by another author than Anne) that already makes the arguments presented in B and which is not known to Reviewer 2, and, vice versa, that (b) Reviewer 2 is aware of an earlier article A′ in which similar arguments to those in A are presented. In view of this possibility, it would seem overly optimistic to infer that Anne has a highly original article in her repertoire.

2 Central Concepts

Nonmonotonic logics are designed to answer the question of what the (defeasible) consequences of some available set of information are. This gives rise to the notion of a nonmonotonic consequence relation. In this section we explain this central concept and some of its properties from an abstract perspective (Section 2.2). Nonmonotonic consequences are obtained by means of defeasible inferences, which are themselves obtained by applying inference rules. We discuss two ways of formalizing such rules in Section 2.3. Before doing so, we discuss some basic notation in Section 2.1.

2.1 Notation and Basic Formal Concepts

Let us get more formal. We assume that sentences are expressed in a (formal) language L. We denote the standard connectives in the usual way: ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication), and ↔ (equivalence). We use lowercase letters p, q, s, t, … as propositional atoms, collected in the set Atoms, and uppercase letters A, B, C, D, … as metavariables for sentences such as p, p ∧ q, or (p ∧ q) → r. We denote the set of sentences underlying L by sentL. In the context of classical propositional logic, and typically in the context of a Tarski logic (see later), this will simply be the closure of the atoms under the standard connectives.Footnote 11 We denote sets of sentences by the uppercase calligraphic letters A, S, and T. Where S is a finite nonempty set of sentences, we write ⋀S and ⋁S for the conjunction resp. the disjunction over the elements of S.Footnote 12

A consequence relation, denoted by ⊢, is a relation between sets of sentences and sentences: S ⊢ A denotes that A is a ⊢-consequence of the assumption set S. So, the left side of ⊢ encodes the given information resp. the assumptions on which the reasoning process is based, while the right side encodes the consequences which are sanctioned by ⊢ given S.

We will often work in the context of Tarski logics L, whose consequence relations ⊢L are reflexive (S ∪ {A} ⊢L A), transitive (S ⊢L A and S ∪ {A} ⊢L B implies S ⊢L B), and monotonic (Definition 2.1). We will also assume compactness (if S ⊢L A then there is a finite S′ ⊆ S for which S′ ⊢L A). The most well-known Tarski logic is, of course, classical logic CL.

2.2 An Abstract View on Nonmonotonic Consequence

The following definition introduces one of our key concepts: nonmonotonic consequence relations.

Definition 2.1. A consequence relation ⊢ is monotonic iff (“if and only if”) for all sets of sentences S and T and every sentence A it holds that S ∪ T ⊢ A if S ⊢ A. It is nonmonotonic iff it is not monotonic.

We use |∼ as a placeholder for nonmonotonic consequence relations. Our definition expresses that for nonmonotonic consequence relations |∼ there are sets of sentences S and T and a sentence A for which S |∼ A while S ∪ T ̸|∼ A (i.e., A is not a |∼-consequence of S ∪ T).

In the following we will introduce some properties that are often discussed as desiderata for nonmonotonic consequence relations.Footnote 13 A positive account of what kind of logical behavior to expect from these relations is particularly important given the fact that ‘nonmonotonicity’ only expresses a negative property. This immediately raises the question of whether there are restricted forms of monotonicity that one would expect to hold even in the context of defeasible reasoning. One proposal is:

Cautious Monotonicity (CM).

S∪{B} |~ A, if S |~ A and S |~ B.Footnote 14

Whereas nonmonotonicity expresses that adding new information to one’s assumptions may lead to the retraction resp. the defeat of previously inferred conclusions, CM states that some type of information is safe to add: namely, adding a previously inferred conclusion does not lead to the loss of conclusions.

We sketch the underlying rationale. Suppose S |~ A and S |~ B. In view of S |~ A, the defeasible consequence A of S is sanctioned. So, S does not contain defeating information for concluding A. Now, the only reason for S∪{B} ̸|~ A would be that the addition of B to S generates defeating information for concluding A. However, B already followed from S, since S |~ B. Thus, this defeating information should have already been contained in S, before adding B. But then S ̸|~ A, a contradiction.

One may also demand that adding |~-consequences to an assumption set should not lead to more consequences.

Cautious Transitivity (CT).

S |~ A, if S∪{B} |~ A and S |~ B.

Combining CM and CT comes down to requiring that |~ is robust under adding its own conclusions to the set of assumptions.

Cumulativity (C).

If S |~ B, then S |~ A iff S∪{B} |~ A.

Instead of considering the dynamics of consequence under additions of new assumptions, one may wonder what happens when assumptions are manipulated. For instance, it seems desirable that a consequence relation is robust under substituting assumptions for equivalent ones.

Left Logical Equivalence (LLE).

Where S and T are classically equivalent sets,Footnote 15 S |~ A iff T |~ A.

Note that in the context of nonmonotonic consequence it would be too strong to require

Left Logical Strengthening (LLS).

Where A ⊢CL B, S∪{B} |~ C implies S∪{A} |~ C.

In order to see why LLS is undesirable, consider an example featuring Tweety. If it is only known that Tweety is a bird, it nonmonotonically follows that it can fly, {b} |~ f. The situation changes when it is also known that Tweety is a penguin, {b∧p} ̸|~ f.

For the right-hand side of |~ one may also expect a property similar to LLE: if A is a consequence, so is each classically equivalent formula B. The following principle is stronger. It is motivated by the truth-preservational nature of CL-inferences (but recall from Section 1.2 that in the context of generics it may be problematic):

Right Weakening (RW).

Where A ⊢CL B, S |~ A implies S |~ B.

Finally, if we take our assumptions to express certain information (rather than defeasible assumptions, see Section 4), then one may expect

Reflexivity (Ref).

S∪{A} |~ A.

Consequence relations that satisfy RW, LLE, Ref, CT, and CM are called cumulative consequence relations (Kraus et al., 1990).Footnote 16 The authors consider them “the rockbottom properties without which a system should not be considered a logical system” (p. 176), a point mirroring Gabbay (1985). Some other intuitive principles hold for a cumulative |~.

Proposition 2.1. Every cumulative consequence relation |~ also satisfies:

  1. Equivalence. If S∪{A} |~ B and S∪{B} |~ A then: S∪{A} |~ C iff S∪{B} |~ C.

  2. AND. If S |~ A and S |~ B then S |~ A∧B.

Proof. Item 1 follows by CT and CM. To see this suppose (a) S∪{A} |~ B, (b) S∪{B} |~ A, and (c) S∪{A} |~ C. We show S∪{B} |~ C (the converse direction is analogous). By CM, (a) and (c), S∪{A,B} |~ C. Thus, by CT and (b), S∪{B} |~ C.

Ad 2. Suppose (a) S |~ A and (b) S |~ B. By Ref, S∪{A∧B} |~ A∧B and by LLE, (c), S∪{A,B} |~ A∧B. By CM, (a) and (b), S∪{A} |~ B. By CT and (c), S∪{A} |~ A∧B. By (a) and CT, S |~ A∧B. □
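For a concrete relation, the properties in Proposition 2.1 can also be checked mechanically. The sketch below brute-forces CM and AND for minimal-model entailment over a two-atom language, identifying each proposition with the set of valuations satisfying it. This is an illustrative exhaustive check on a tiny finite space, under encoding conventions of our own, and of course not a substitute for the general proof.

```python
from itertools import combinations

# the four valuations over atoms p, q, encoded as sets of true atoms
VALS = [frozenset(c) for n in range(3) for c in combinations("pq", n)]
# a "proposition" is identified with the set of valuations satisfying it
PROPS = [frozenset(c) for n in range(len(VALS) + 1)
         for c in combinations(VALS, n)]

def minimal_models(S):
    ms = [v for v in VALS if all(v in X for X in S)]
    return [m for m in ms if not any(m2 < m for m2 in ms)]

def nm(S, A):
    # S |~ A iff A holds in all subset-minimal models of S
    return all(m in A for m in minimal_models(S))

# exhaustively check CM and AND for single-premise assumption sets
for X in PROPS:
    for A in PROPS:
        for B in PROPS:
            if nm([X], A) and nm([X], B):
                assert nm([X, B], A)   # Cautious Monotonicity
                assert nm([X], A & B)  # AND
print("CM and AND verified on the two-atom space")
```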

Another property of some NMLs is constructive dilemma: given a fixed context represented by S, if C is both a consequence of A and of B, it should also be a consequence of A∨B.

Constructive Dilemma (OR)

If S∪{A} |~ C and S∪{B} |~ C then S∪{A∨B} |~ C.

Cumulative consequence relations that also satisfy OR are called preferential (Kraus et al., 1990). We show some derived principles for preferential consequence relations.

Proposition 2.2. Every preferential consequence relation |~ also satisfies:

  1. Reasoning by Cases (RbC). If S∪{A} |~ B and S∪{¬A} |~ B then S |~ B.

  2. Resolution. If S∪{A} |~ B then S |~ A→B.

Proof. (RbC). Suppose S∪{A} |~ B and S∪{¬A} |~ B. By OR, S∪{A∨¬A} |~ B and by LLE, S |~ B. (Resolution). Suppose now that S∪{A} |~ B. By RW, (a), S∪{A} |~ A→B. By Ref, S∪{¬A} |~ ¬A and by RW, (b), S∪{¬A} |~ A→B. By RbC, (a) and (b), S |~ A→B. □

A more controversial property than CM is rational monotonicity (RM).Footnote 17 The basic intuition is similar to CM: given an assumption set S, we are interested in securing a safe set of sentences under the addition of which |~ is monotonic. While for CM this was the set of the |~-consequences of S, RM considers the set of all sentences that are consistent with the consequences of S (consistent in the sense that their negation is not a |~-consequence of S).

Rational Monotonicity (RM)

S∪{B} |~ A, if S |~ A and S ̸|~ ¬B.

One way to think about RM is as follows. Let us (i) say that B is defeating information for S if there is an A for which S |~ A, while S∪{B} ̸|~ A, and (ii) say that B is rebutted by S in case S |~ ¬B.Footnote 18 Then, when putting CM and RM in contrapositive form,

  • CM expresses that no defeating information for S is derivable from S itself: formally, if S |~ A and S∪{B} ̸|~ A then S ̸|~ B;

  • RM expresses the stronger demand that every piece of defeating information for S is rebutted by S: formally, if S |~ A and S∪{B} ̸|~ A then S |~ ¬B.

So, RM requires that reasoners take into account potentially defeating information by having rebutting counterarguments at hand. This is quite demanding, since, as we have discussed in Section 1, (a) the reasoner may not be aware of all information that possibly rebuts her previous inferences and (b) it may be counterintuitive to conclude that each and every possible defeater is false.

Poole (1991) points out another problem. Consider the statement that Tweety is a bird. Now, all bird species are exceptional with respect to some defaults about birds: penguins don’t fly, hummingbirds have an unusual size, sandpipers nest on the ground, and so on. But then RM requires us to infer that Tweety is not a penguin, not a hummingbird, not a sandpiper, and so on, and therefore does not belong to any bird species.

In this section we have seen various properties of nonmonotonic consequence relations, many of which are considered desiderata by nonmonotonic logicians. Their study is therefore of central interest in NML and we will come back to them in the context of many of the methods presented in this Element.

2.3 Plausible and Defeasible Reasoning

A fundamental question underlying the design of NMLs is whether to model defeasible reasoning

  1. by means of classical inferences based on defeasible assumptions, or

  2. by means of (genuinely) defeasible inference rules.

The former is sometimes called Plausible Reasoning, the latter Defeasible Reasoning.Footnote 19 Table 1 provides an overview of which of the two reasoning styles is modeled by the various NMLs discussed in this Element. We illustrate with an abstract example. Suppose we want to model that

  • p defeasibly implies q, and that

  • q defeasibly implies ¬r.

In the first approach we encode these two defeasible regularities in terms of classical implications. This can be realized in two ways.

Plausible Reasoning via abnormality assumptions.  One way is by formalizing the defeasible rules by

p∧¬ab1 → q   and   q∧¬ab2 → ¬r,

where ab1 and ab2 are atomic sentences that encode exceptional circumstances, that is, abnormalities, for the respective rules. These abnormalities are assumed to be false, by default. Suppose that p is true. Then, by also assuming the falsity of ab1 and ab2 we can apply modus ponens to both material implications and conclude ¬r.

Table 1 Reasoning styles modelled by various logics discussed in this Element. (✓*) An NML with genuine defaults, such as Reiter’s default logic, can “simulate” Plausible Reasoning by encoding defeasible assumptions A as defaults ⇒ A with empty bodies.

NML                         Defeasible Reasoning   Plausible Reasoning   Section
ASPIC+                      ✓                      ✓                     8
Logic-based argumentation                          ✓                     9
Rescher and Manor                                  ✓                     11.3.1
Default Assumptions                                ✓                     11.3.1
Adaptive Logic                                     ✓                     11.3.1
Input-Output Logic          ✓                      ✓*                    11.3.2
Reiter Default Logic        ✓                      ✓*                    12
Logic Programming                                  ✓                     16

Let us see how retraction works in this approach by supposing r. In this case we can classically derive ab1∨ab2, but neither ab1 nor ab2. Note that contraposition of defeasible rules is available in this approach. For instance, q∧¬ab2 → ¬r is CL-equivalent to q∧r → ab2.Footnote 20 So, we know (at least) one of the assumptions must be false, but we don’t know which. Absent any other reason to prefer one over the other, we can’t rely on ¬ab1 to derive q. In view of this, q should not be considered a nonmonotonic consequence of the given information.
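This retraction behavior can be replayed with a brute-force classical entailment check. The encoding below, with literals represented as Python predicates over valuations, is an illustrative setup of ours; it confirms that ¬r follows once both abnormality assumptions are added, and that after learning r only the disjunction ab1∨ab2 follows, not either disjunct alone.

```python
from itertools import combinations

ATOMS = ("p", "q", "r", "ab1", "ab2")

def entails(premises, conclusion):
    # classical entailment by brute force over all valuations
    vals = [frozenset(c) for n in range(len(ATOMS) + 1)
            for c in combinations(ATOMS, n)]
    return all(conclusion(v) for v in vals if all(f(v) for f in premises))

# the encoded defaults: p & ~ab1 -> q  and  q & ~ab2 -> ~r
rule1 = lambda v: not ("p" in v and "ab1" not in v) or "q" in v
rule2 = lambda v: not ("q" in v and "ab2" not in v) or "r" not in v
p     = lambda v: "p" in v
r     = lambda v: "r" in v
not_r = lambda v: "r" not in v
no_ab = [lambda v: "ab1" not in v, lambda v: "ab2" not in v]

base = [rule1, rule2, p]
# assuming both abnormalities false, ~r follows classically
print(entails(base + no_ab, not_r))                             # True
# after learning r: ab1 v ab2 follows, but neither disjunct alone
print(entails(base + [r], lambda v: "ab1" in v or "ab2" in v))  # True
print(entails(base + [r], lambda v: "ab1" in v))                # False
print(entails(base + [r], lambda v: "ab2" in v))                # False
```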

Plausible Reasoning via naming of defaults. Another way to proceed is by naming defaults (see, e.g., Poole (1988)). Here, we make use of defeasible assumptions r1 and r2, which name defeasible inference rules and which are assumed to be true, by default. In our example, we add

r1 → (p→q)   and   r2 → (q→¬r)

to the (nondefeasible) assumptions. Note that r1 → (p→q) (resp. r2 → (q→¬r)) is classically equivalent to r1∧p → q (resp. r2∧q → ¬r). So, when substituting r1 for ¬ab1 and r2 for ¬ab2 the approach based on naming defaults and the approach based on abnormality assumptions boil down to the same.

Defeasible Reasoning. In this approach, regularities are expressed as genuinely defeasible rules (written with ⇒), that is, without additional and explicit defeasible assumptions that are part of the antecedent of the rule. We encode our preceding example by

p ⇒ q   and   q ⇒ ¬r.

Note that ⇒ is not classical implication; in particular, A∧B ⇒ C does, in general, not follow from A ⇒ C in this approach. In the first scenario, where only p is given, we apply a defeasible modus ponens rule to obtain q and then again to obtain ¬r. Many NMLs implement a greedy style of reasoning, according to which defeasible modus ponens is applied as much as possible. Now, if r is also part of the assumptions, we derive q from p and p ⇒ q, but then stop, since inferring ¬r from q ⇒ ¬r and q would result in inconsistency.

Example 8. For a more general context, we consider an example with defaults p1 ⇒ p2, …, pn−1 ⇒ pn and the (certain) information p1 and ¬pn depicted in Figure 5. In the greedy style of reasoning underlying Defeasible Reasoning we will be able to apply defeasible modus ponens to derive p2, p3, …, pn−1. Only the last application, resulting in pn, is blocked by the defeating information ¬pn. The situation is different for Plausible Reasoning. Since contraposition is available, for each argument p1 → p2 → ⋯ → pi (where each pj ⇒ pj+1 is modeled by pj∧¬abj → pj+1) there is a defeating argument ¬pn → ¬pn−1 → ⋯ → ¬pi. Altogether, we obtain

{pi∧¬abi → pi+1 ∣ 1 ≤ i < n} ∪ {p1, ¬pn} ⊢ ab1 ∨ ⋯ ∨ abn−1.

This means that at least one abi cannot be assumed to be false, but we don’t know which one. Thus, no pi (for i ∈ {2,…,n}) is derivable according to Plausible Reasoning.
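The greedy half of the example can be sketched as a simple forward chainer that fires defeasible modus ponens unless the conclusion is contradicted in the current store. The string-based literal encoding (with "~" for negation) is an illustrative assumption of ours.

```python
def neg(lit):
    # complement of a propositional literal, written with a "~" prefix
    return lit[1:] if lit.startswith("~") else "~" + lit

def greedy_closure(facts, defaults):
    """Fire defeasible modus ponens greedily: apply a default (A, B),
    read as A => B, whenever A is derived and adding B would not put
    complementary literals into the store."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in defaults:
            if body in derived and head not in derived and neg(head) not in derived:
                derived.add(head)
                changed = True
    return derived

# the chain of Example 8 with n = 5: p1 => p2, ..., p4 => p5
defaults = [(f"p{i}", f"p{i+1}") for i in range(1, 5)]
print(sorted(greedy_closure({"p1", "~p5"}, defaults)))
# ['p1', 'p2', 'p3', 'p4', '~p5']: only the last application is blocked
```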

Figure 5 Top: Defeasible Reasoning giving rise to a greedy reasoning style. Bottom: Plausible Reasoning giving rise to contrapositions of defeasible rules.

3 From Knowledge Bases to Consequences and NMLs

Nonmonotonic logics represent the information relevant for the reasoning process (knowledge representation) and determine what follows defeasibly from the given information (nonmonotonic consequence, see Fig. 6).

Figure 6 The workings of NMLs: from an input knowledge base, via knowledge representation (extensions, models, arguments, etc.), to output consequences; the flow as a whole constitutes the consequence relation |~.

The task of knowledge representation concerns, for instance, the structuring of the starting point of defeasible reasoning processes in terms of knowledge bases (Section 4) in which different types of information are distinguished, such as different types of assumptions and inference rules. Another task is to organize the given information in a way that is conducive to determining its defeasible consequences. As we have seen, this is challenging since the given information may give rise to conflicts and inconsistencies. NMLs provide methods for generating coherent chunks of information. We will highlight several ways of doing so, most roughly distinguished into syntactic and semantic approaches. The following three concepts play essential roles in the ways knowledge is represented in these approaches:Footnote 21

Extensions

In syntactic approaches, coherent units of information are typically called extensions. What exactly extensions are differs across NMLs. They may, for example, be sets of defeasible information from the knowledge base, sets of arguments (given an underlying notion of argument), or sets of sentences. In Sections 5.1 and 5.2 we will introduce two major families of syntactic approaches: argumentation and consistent accumulation. In Parts II and III they will be studied in more detail.

Arguments

In syntactic approaches, arguments (or proofs) play a central role when building extensions. Arguments are obtained by applications of the given inference rules to the assumptions provided in the knowledge base.

Models

In semantic approaches the focus is on classes of models provided by a given base logic. In Section 5.3 we will introduce semantic approaches and study some of them in more detail in Part IV.

The attentive reader will have noticed that we have not yet defined what exactly NMLs are. In the narrow sense, one may consider them as nonmonotonic consequence relations (see Section 2.2), so a theory of what sentences follow defeasibly in the context of some knowledge base. In the wider sense they are methods both for knowledge representation and for providing nonmonotonic consequence relation(s).

In this Element we minimally assume that every NML nmL comes with a formal language L including a notion of what counts as a formula or sentence (written sentL), an associated class of knowledge bases KnmL (see Section 4 for details), at least one consequence relation, and one of the following two:

  • in syntactic approaches: a notion of (in)consistent sets of sentences, of argument or proof, and a method to generate extensions (see Sections 5.1 and 5.2 and Parts II and III);

  • in semantic approaches: a notion of model and a method to select models (see Section 5.3 and Part IV).

4 Defeasible Knowledge Bases

Reasoning never starts in a void but is initiated in a given context. For instance, some information will be factually given and some assumptions may hold by default. Moreover, when we reason we make use of inference rules. Some of these may be truth-preservational (such as the rules provided by CL), others defeasible, allowing for exceptional circumstances. Defeasible knowledge bases structure reasoning contexts into different types of constituents, such as different types of assumptions and inference rules. Most broadly conceived they are tuples of the form:

K = ⟨As, Ad, Rs, Rd, Rm, ≤⟩,   (4.0.1)

where As and Ad are the strict and defeasible assumptions, Rs, Rd, and Rm are the strict, defeasible, and meta rules, and ≤ encodes preferences among defeasible elements.

We let Def(K) =df Ad ∪ Rd be the defeasible part of K, consisting of its defeasible assumptions and rules.

A concrete nmL has an associated fixed class of knowledge bases KnmL. Its underlying consequence relation(s) |~ are relations between KnmL and sentL.Footnote 22

In concrete NMLs, usually not all components of (4.0.1) are utilized or explicitly listed. For example, some NMLs do not consider defeasible rules, some come without defeasible assumptions, some without priorities, many without metarules. Take, for instance, NMLs that model Plausible Reasoning. Here, we omit Rd since such NMLs do not work with defeasible rules. Moreover, specific components of the knowledge base are fixed for many NMLs, or they are constrained. For instance, only specific preference relations may be allowed for, such as transitive ones. Or, often the strict rules are induced by classical logic. (In such cases, the strict rules are often omitted from KnmL.) In some NMLs the strict rules vary over different applications (e.g., in logic programming where strict rules usually represent domain-specific knowledge such as “penguins are birds”). In Table 2 we provide an overview for NMLs presented in this Element.

Table 2The class of associated knowledge bases for specific NMLs. In gray the nonfixed parts. For example, for specific input–output logics the set of metarules Rm is fixed, while the strict assumptions and defeasible rules vary in their applications. RL is the class of strict rules induced by a logic L, where CL is classical logic.
NML                         As   Ad   Rs    Rd   Rm   Section
ASPIC+                      ✓    ✓    ✓     ✓         8
Logic-based argumentation   ✓    ✓    RL              9
Rescher and Manor                ✓    RCL             11.3.1
Default Assumptions         ✓    ✓    RCL             11.3.1
Input-output logics         ✓         RCL   ✓    ✓    11.3.2
Reiter Default Logic        ✓         RCL   ✓         12
Logic Programming                     ✓               16

We now explain in more detail the components of K.

Strict assumptions

As is a set of sentences expressing information that is taken as indisputable or certain.

Defeasible assumptions

Ad is a set of sentences that are assumed to hold normally/typically/and so on but which may be retracted in case of conflicts.

Strict rules

Rs is a set of truth-preservational inference rules or relations, written A1,…,An → B.Footnote 23 There are two types of such rules. On the one hand, we have material inferences, such as “If it is a penguin, it is a bird”, which may be encoded by p → b. On the other hand, we have inferences that are valid with respect to an underlying logic L, such as classical logic. If such inferences are considered, we let A1,…,An → B ∈ Rs if {A1,…,An} ⊢L B. If Rs consists exclusively of such rules, we say that it is induced by the logic L and write RL for the set containing them. All logics L considered in this Element will be Tarski logics. If Rs is induced by a logic (with an implication →) one may model the former class of material inferences simply by means of →. For example, in our example one may add p → b to the strict assumptions As. Sometimes we find strict assumptions A being modeled as strict rules → A with empty bodies.

Given a set of strict rules Rs and a set of sentences S∪{A}, we write A ∈ CnRs(S) to indicate that there is a deduction of A based on Rs and S. This means that there is a sequence A1,…,An where A = An and for each Ai+1 (with 0 ≤ i < n), either Ai+1 ∈ S or there are j1,…,jm ≤ i for which Aj1,…,Ajm → Ai+1 ∈ Rs.
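Such deductions can be computed by a simple fixpoint construction. The following sketch encodes each rule A1,…,An → B as a premise-conclusion pair (an illustrative convention of ours) and computes CnRs(S) for finite rule sets.

```python
def cn(strict_rules, S):
    """Closure of S under strict rules; a rule A1,...,An -> B is
    encoded as the pair ((A1, ..., An), B)."""
    closed = set(S)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in strict_rules:
            if set(premises) <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

# hypothetical rules: "penguins are birds", "winged birds fly"
rules = [(("p",), "b"), (("b", "w"), "f")]
print(sorted(cn(rules, {"p", "w"})))  # ['b', 'f', 'p', 'w']
```

A sentence A is in CnRs(S) exactly when it ends up in the computed closure, which corresponds to the existence of a deduction sequence as defined above.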

Defeasible rules

Rd is a set of defeasible inference rules, written A1,…,An ⇒ B, often just called defaults. As discussed in Section 2.3, defeasible rules are sometimes “indirectly” modeled as strict rules with defeasible assumptions. In NMLs that adopt this method of Plausible Reasoning, Rd may be empty. In such cases we are typically dealing with a logic-induced set of strict rules RL and defaults are sentences of the type A1∧⋯∧An∧¬ab → B in As where ¬ab ∈ Ad. Defeasible assumptions A ∈ Ad may also be considered as defaults ⇒ A with empty bodies.

For reasons of simplicity and following the tradition of many central NMLs, we do not consider ⇒ as a defeasible conditional operator in the object language L, that is, an operator that can be nested within Boolean connectives. Rather, we model A ⇒ B as representing a defeasible rule that prima facie justifies detaching B, given A. However, it should be noted that this does impose a limitation on our expressive capabilities. For instance, we cannot “directly” express canceling in the context of specificity, such as penguin → ¬(bird ⇒ fly). Many systems have been developed to overcome this limitation, such as Delgrande (1987) or conditional logics of normality (Boutilier, 1994a).Footnote 24

Example 9. In Section 2.3 we presented two ways to model a scenario in which p defeasibly implies q and q defeasibly implies ¬r. Suppose now additionally that r and ¬r both strictly imply s and that p defeasibly implies r.

In Defeasible Reasoning we may work with the knowledge base K = ⟨As, Ad, Rs, Rd⟩ consisting of As = {p}, Ad = ∅, Rd = {p⇒q, p⇒r, q⇒¬r}, and Rs = {r→s, ¬r→s}. Alternatively, one may use the strict assumptions r→s and ¬r→s and RCL as strict rules.

In Plausible Reasoning we may utilize K = ⟨As, Ad, RCL⟩, where As = {p, p∧¬ab1→q, p∧¬ab2→r, q∧¬ab3→¬r, r→s, ¬r→s} and Ad = {¬ab1, ¬ab2, ¬ab3}.

Metarules

Rm is a set of metarules, written R1,…,Rn / R (where R1,…,Rn are strict and defeasible rules and R is a defeasible rule), that allow one to infer new defeasible rules from those in Rd and Rs. For example, metarules implementing reasoning-by-cases and right weakening are:

OR

(A⇒B), (C⇒B) / ((A∨C)⇒B)

RW

(B→C), (A⇒B) / (A⇒C)

Given a set R ⊆ Rd, we write CnRm(R) for the set of defeasible rules that are Rm-deducible from R ∪ Rs by the metarules in Rm (where deductions are defined as in the context of the strict rules Rs).Footnote 25

Preferences

≤ is an order on the defeasible elements Def(K) of K. It encodes that some sources of defeasible information may be more reliable or have more authority than others. This information can be utilized for the purpose of resolving conflicts between defeasible arguments of different strengths. Typically ≤ is reflexive and transitive, but it may allow for incomparabilities and for equally strong but different defeasible elements. We write < for the strict version of ≤, that is, X < X′ iff X ≤ X′ and X′ ≰ X.

Example 10.

Consider K = ⟨As, Rd⟩ with As = {p} and Rd = {p⇒q, p⇒¬q}. There is a conflict between the arguments p⇒q and p⇒¬q. Absent priorities, there is no way to resolve the conflict on the basis of K. If we enhance K to K′ = ⟨As, Rd, ≤⟩, where (p⇒q) < (p⇒¬q), it seems reasonable to resolve the conflict in favor of p⇒¬q.

The situation can get more involved, as the following example shows.

Example 11 (Example 9 cont.). We may extend our knowledge base to K′ = ⟨As, Ad, Rd, Rs, ≤⟩ by adding the preference order (p⇒q) < (p⇒r) < (q⇒¬r) (assuming transitivity).Footnote 26 In this case we have two conflicting arguments, p ⇒ q ⇒ ¬r and p ⇒ r. Comparing their strengths is no longer straightforward, since the former involves both a stronger and a weaker default than the latter. In Part III (Examples 28 and 29) we will see that different methods give rise to different conclusions for K′ (see also Liao et al. (2016)).

5 Methodologies for Nonmonotonic Logics

We now introduce three central methodologies to obtain nonmonotonic consequence relations and to represent defeasible knowledge, namely: formal argumentation (Section 5.1), consistent accumulation (Section 5.2), and semantic methods (Section 5.3). In this part we explain the basic ideas underlying each method in simplified settings (e.g., without metarules and preferences). More details are presented in the dedicated Parts II to IV.

5.1 The Argumentation Method

The possibility of inconsistency complicates the question as to what follows from a knowledge base K. As described earlier, the idea is to generate coherent sets of information from K and to reason on the basis of these. For this, arguments and attacks between them play a key role. Arguments are obtained from K by chaining strict and defeasible inference rules. We can define the set of arguments ArgK induced by K, their conclusions, subarguments, and defeasible elements (written Con(a), Sub(a), resp. Def(a) for a ∈ ArgK) in a bottom-up way.Footnote 27

Definition 5.1 (Arguments). Where K = ⟨As, Ad, Rs, Rd⟩ is a knowledge base we let a ∈ ArgK iff

  • a = A, where A ∈ As ∪ Ad.

    We let Con(a) = A, Sub(a) = {a}, Rd(a) = ∅, Ad(a) = {A} ∩ Ad.

  • a = a1,…,an ↝ A where ↝ ∈ {→, ⇒}, a1,…,an ∈ ArgK and r = Con(a1),…,Con(an) ↝ A is a rule in Rs ∪ Rd.

    We let Con(a) = A, Sub(a) = {a} ∪ Sub(a1) ∪ ⋯ ∪ Sub(an), Ad(a) = Ad(a1) ∪ ⋯ ∪ Ad(an), Rd(a) = Rd(a1) ∪ ⋯ ∪ Rd(an) ∪ ({r} ∩ Rd).

Where a ∈ ArgK we let Def(a) = Ad(a) ∪ Rd(a). Where D ⊆ Def(K), we let ArgK(D) be the set of all a ∈ ArgK for which Def(a) ⊆ D.

Example 12 (Example 9 cont.). Given our knowledge base K in Example 9 we obtain the arguments depicted in Fig. 7 (left). We have, for instance, Ad(a5) = ∅, Rd(a5) = Def(a5) = {p⇒q, q⇒¬r}, and Sub(a5) = {a1, a2, a3, a5}.
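The bottom-up construction of Definition 5.1 can be sketched for the knowledge base of Example 9. The code below abbreviates an argument by its conclusion together with the set of defeasible rules it uses, leaving subargument structure implicit; this flattened representation is an illustrative simplification of ours.

```python
from itertools import product

# the knowledge base K of Example 9 (Defeasible Reasoning variant)
strict_assumptions = {"p"}
strict_rules       = [(("r",), "s"), (("~r",), "s")]
defeasible_rules   = [(("p",), "q"), (("p",), "r"), (("q",), "~r")]

def build_arguments():
    # an argument is abbreviated as (conclusion, frozenset of defeasible rules used)
    args = {(a, frozenset()) for a in strict_assumptions}
    changed = True
    while changed:
        changed = False
        for rules, defeasible in ((strict_rules, False), (defeasible_rules, True)):
            for body, head in rules:
                # pick, for every body member, some argument concluding it
                supports = [[a for a in args if a[0] == b] for b in body]
                for support in product(*supports):
                    used = frozenset.union(frozenset(), *(a[1] for a in support))
                    if defeasible:
                        used |= {(body, head)}
                    if (head, used) not in args:
                        args.add((head, used))
                        changed = True
    return args

arguments = build_arguments()
print(len(arguments))  # 6, matching the arguments a1-a6 of Example 12
```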

Figure 7 The arguments and the argumentation framework for Example 13 (omitting the nonattacked and nonattacking a1 and a2). We explain the shading in Example 14.

There are many ways to define argumentative attacks, and subtlety is required to avoid problems with consistency in the context of selecting arguments. We will go into more detail in Part II. For now we simply suppose there to be a relation att ⊆ ArgK × ArgK that determines when one argument attacks another. We end up with a directed graph ⟨ArgK, att⟩, a so-called argumentation framework (Dung, 1995).

Example 13 (Example 12 cont.). One way to define attacks in our example is to let a ∈ ArgK attack b ∈ ArgK if for some c ∈ Sub(b) of the form c = a1,…,an ⇒ C, Con(a) = ¬C or ¬Con(a) = C. For instance, a3 and a4 attack each other. In Fig. 7 (right) we find the underlying argumentation framework.

Argumentation frameworks allow us to select coherent sets of arguments X, which we will call A-extensions (for argumentative extensions). The latter represent argumentative stances of rational reasoners equipped with the knowledge base K. For this we utilize a number of constraints which represent rational desiderata on these stances. Two such desiderata on sets of arguments X are, for instance (we refer to Part II for a more comprehensive overview):

Conflict-freeness:

avoid argumentative conflicts, that is, for all a, b ∈ X, (a,b) ∉ att; and

Stability:

additionally, be able to attack arguments that you don’t commit to, that is, for all c ∈ ArgK∖X there is an a ∈ X for which (a,c) ∈ att.

Such sets of constraints give rise to so-called argumentation semantics, which determine the A-extensions of a given argumentation framework (Dung, 1995). For instance, according to the stable semantics the set of A-extensions is the set of all sets of arguments that satisfy stability. Once we have settled on an argumentation semantics s (such as the stable semantics) we denote the set of A-extensions of K relative to s by AExts(K).

Example 14 (Example 13 cont.). We have two stable A-extensions, that is, sets of arguments that satisfy the stability requirement (see the shaded sets in the argumentation framework of Fig. 7):

X1 = {a1, a2, a3, a5}   and   X2 = {a1, a2, a4, a6}.
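These two extensions can be computed by brute force directly from the stability condition. The attack relation below is hand-coded from Example 13 (a3 and a4 attack each other, and each attacks the argument built on top of the other); for a framework of this size, enumerating all subsets is feasible, though of course not efficient in general.

```python
from itertools import chain, combinations

def stable_extensions(args, attacks):
    """All subsets X of args that are conflict-free and attack
    every argument outside of X (the stable semantics)."""
    exts = []
    for X in map(set, chain.from_iterable(
            combinations(args, n) for n in range(len(args) + 1))):
        conflict_free = not any((a, b) in attacks for a in X for b in X)
        attacks_rest = all(any((a, c) in attacks for a in X)
                           for c in set(args) - X)
        if conflict_free and attacks_rest:
            exts.append(frozenset(X))
    return exts

args = ["a1", "a2", "a3", "a4", "a5", "a6"]
# attack relation of Example 13: a3 <-> a4, a3 -> a6, a4 -> a5
attacks = {("a3", "a4"), ("a4", "a3"), ("a3", "a6"), ("a4", "a5")}
for ext in sorted(map(sorted, stable_extensions(args, attacks))):
    print(ext)  # ['a1', 'a2', 'a3', 'a5'] and ['a1', 'a2', 'a4', 'a6']
```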

Suppose we select an A-extension X. We then commit to all of the conclusions of the arguments in X, that is, to Con[X], where Con[X] = {Con(a) ∣ a ∈ X}. This induces another notion of extension, which we dub P-extensions (propositional extensions): sets of conclusions associated with A-extensions. We write PExts(K) for the set of P-extensions of K (relative to a given argumentation semantics s).

Example 15 (Example 14 cont.). The following P-extensions are associated with our A-extensions:

E1 = {p, q, ¬r, s}   and   E2 = {p, q, r, s}.

Once an argumentation semantics is fixed and the A- and corresponding P-extensions are generated, we can define three different consequence relations for two underlying reasoning styles (see Fig. 8): skeptical and credulous reasoning.

Figure 8 The skeptical and the credulous reasoning style. (From a defeasible knowledge base one builds extensions. The skeptical approach infers A if it is supported in every extension, either by some argument (conclusion-focused, |~∩PExt) or by the same argument (argument-focused, |~∩AExt); the credulous approach infers A if it is supported in some extension (|~∪Ext). The two skeptical variants differ on floating conclusions.)

Definition 5.2. Where K is a knowledge base, A a sentence, and s is an argumentation semantics, we define the consequence relations in Table 3.

Table 3 Three types of nonmonotonic consequence relations.

Skeptical 1:  K |~^s_∩PExt A  iff  A ∈ ⋂PExts(K)   (A is a member of every extension in PExts(K))
Skeptical 2:  K |~^s_∩AExt A  iff  there is an a ∈ ⋂AExts(K) s.t. Con(a) = A   (there is an argument a with Con(a) = A that is contained in every A-extension of K)
Credulous:    K |~^s_∪Ext A   iff  A ∈ ⋃PExts(K)   (A is a member of some extension in PExts(K))

To avoid clutter in notation, we will omit the super- and subscripts whenever the context disambiguates or the strategy is not essential to a given claim. Note that the definitions of the three consequence relations impose a hierarchy in terms of strength, namely:

K |~∩AExt A implies K |~∩PExt A implies K |~∪Ext A.

Example 16 (Example 15 cont.). Based on our extensions, we have the following consequences:

            p   q   ¬r   r   s
|~∩PExt     ✓   ✓             ✓
|~∩AExt     ✓   ✓
|~∪Ext      ✓   ✓   ✓    ✓   ✓

The example illustrates that a floating conclusion such as s follows by |~∩PExt but not by the more cautious |~∩AExt.
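For the conclusion-based relations, the skeptical and credulous consequences can be read off the P-extensions by plain intersection and union. The sketch below computes |~∩PExt and |~∪Ext for the two P-extensions of Example 15 (the argument-focused |~∩AExt additionally needs the arguments themselves, not just their conclusions); the string encoding of sentences is an illustrative choice of ours.

```python
def skeptical(p_extensions):
    # |~ ∩PExt: accept what is in every P-extension
    return set.intersection(*map(set, p_extensions))

def credulous(p_extensions):
    # |~ ∪Ext: accept what is in some P-extension
    return set.union(*map(set, p_extensions))

E1 = {"p", "q", "~r", "s"}
E2 = {"p", "q", "r", "s"}
print(sorted(skeptical([E1, E2])))   # ['p', 'q', 's']: the floating conclusion s survives
print(sorted(credulous([E1, E2])))   # ['p', 'q', 'r', 's', '~r']
```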

5.2 Methods based on Consistent Accumulation

Given a knowledge base K, the basic idea behind the accumulation methods is to iteratively build coherent sets of defeasible elements from Def(K).Footnote 28 We will call such sets D-extensions (extensions consisting of defeasible elements). Below we identify two central methods of building D-extensions: the greedy and the temperate method. Once D-extensions have been generated by one of these methods, we can associate each D-extension D with an A-extension ArgK(D) consisting of all the arguments based on elements in D. Moreover, each A-extension X has the corresponding P-extension Con[X], as discussed in Section 5.1. Once A- and P-extensions are obtained, we define consequence relations just as in Definition 5.2 (see the overview in Fig. 9). We now discuss the two types of accumulation methods.

Figure 9 Types of nonmonotonic consequence based on syntactic approaches. A knowledge base K is processed either by (a) an accumulation method (temperate or greedy) or (b) the argumentation method. Both yield extensions, on which 1. skeptical consequence (|~∩PExt and |~∩AExt) and 2. credulous consequence (|~∪Ext) are based.

5.2.1 The Greedy Method

Given a knowledge base K = ⟨As, Ad, Rs, Rd⟩, methods based on consistent accumulation iteratively build sets of defeasible elements from Def(K). One may think of a rational agent who extends her commitment store Def*, consisting of elements of Def(K), in a stepwise manner. She starts off with the empty set, and in each step she either adds an element of Def(K)∖Def* to Def* or stops the procedure. She stops when adding any new element d would lead to inconsistency, that is, in case she would be able to construct conflicting arguments on the basis of Def* ∪ {d}.

According to the greedy method, she will only consider adding elements of Def(K)∖Def* to her commitment store that (a) give rise to new arguments (that is the greedy part) and (b) do not give rise to conflicting arguments. We will make this formally precise with the algorithm GreedyAcc in what follows, but first we need to introduce some concepts. Where Def* ⊆ Def(K), we say that a default r = A1,…,An ⇒ B

  • is triggered by Def*, if A1,…,An ∈ Con[ArgK(Def*)],Footnote 29

  • is consistent with Def*, if ¬B ∉ Con[ArgK(Def* ∪ {r})].

If r is triggered by Def*, adding r to Def* gives rise to new arguments in ArgK(Def* ∪ {r}). The reason is that for each Ai (with i = 1,…,n) there is an argument ai ∈ ArgK(Def*) with conclusion Ai, and ⟨a1,…,an ⇒ B⟩ ∈ ArgK(Def* ∪ {r}) ∖ ArgK(Def*). We treat defeasible assumptions B ∈ Ad like defaults with empty left-hand sides: they are always triggered, and consistent with Def* only if ¬B ∉ Con[ArgK(Def* ∪ {B})].

The algorithm GreedyAcc implements the greedy accumulation method. We note that the element d ∈ Def(K)∖Def* in lines 3 and 4 is chosen nondeterministically.

Algorithm 1 Greedy accumulation

1: procedure GreedyAcc(K)                                          ▷ K = ⟨As, Ad, Rs, Rd⟩
2:   Def* ← ∅                                                      ▷ init scenario
3:   while ∃ d ∈ Def(K)∖Def* triggered by and consistent with Def* do
4:     Def* ← Def* ∪ {d}                                           ▷ update scenario
5:   end while                                   ▷ no more triggered and consistent defaults
6:   return Def*                                                   ▷ return D-extension
7: end procedure
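To make the procedure concrete, here is a brute-force Python sketch of GreedyAcc under strong simplifications (the encoding and all names are ours, not part of any standard implementation): formulas are literals encoded as strings with "~" for negation, strict rules and defaults are (premises, conclusion) pairs, consistency is checked on the set of derivable literals, and the toy knowledge base only loosely mirrors the running example.

```python
def neg(lit):
    """Negation on literals; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def closure(facts, strict, chosen):
    """Literals derivable from the facts via the strict rules plus the chosen defaults."""
    concs, changed = set(facts), True
    while changed:
        changed = False
        for prem, head in list(strict) + list(chosen):
            if set(prem) <= concs and head not in concs:
                concs.add(head)
                changed = True
    return concs

def consistent(concs):
    return all(neg(c) not in concs for c in concs)

def greedy_extensions(facts, strict, defaults):
    """D-extensions produced by all nondeterministic runs of GreedyAcc."""
    found = set()

    def run(chosen):
        concs = closure(facts, strict, chosen)
        candidates = [
            d for d in defaults
            if d not in chosen
            and set(d[0]) <= concs                                # d is triggered
            and consistent(closure(facts, strict, chosen + [d]))  # d is consistent
        ]
        if not candidates:  # no default is both triggered and consistent: stop
            found.add(frozenset(chosen))
        for d in candidates:
            run(chosen + [d])

    run([])
    return found

# Toy knowledge base: As = {p}, strict rules r -> s and ~r -> s,
# defaults p => q, p => r, q => ~r.
FACTS = {"p"}
STRICT = [(("r",), "s"), (("~r",), "s")]
DEFAULTS = [(("p",), "q"), (("p",), "r"), (("q",), "~r")]
```

On this toy knowledge base the sketch yields two D-extensions, {p ⇒ q, p ⇒ r} and {p ⇒ q, q ⇒ ¬r}; intersecting the associated P-extensions gives {p, q, s}, which contains the floating conclusion s.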

GreedyAcc takes as input a knowledge base K and outputs a D-extension D. Its associated A-extension is given by X=ArgK(D) and its associated P-extension by Con[X]. The latter can be used to determine our three consequence relations from Definition 5.2. We write DExtgr(K) [resp.  AExtgr(K), PExtgr(K)] for the set of D-[resp. A-, P-]extensions of K (gr for greedy accumulation). We are now in a position to define three consequence relations analogous to Definition 5.2 (see Table 3), for example, by:

K |~∩PExtgr A iff A ∈ ∩PExtgr(K).

Example 17 (Example 12 cont.). We apply GreedyAcc to the given knowledge base K. There are three different runs (due to the nondeterministic nature of the algorithm):

Table 01. The three runs of GreedyAcc:

              Run 1             Run 2             Run 3
Round 1       p ⇒ r             p ⇒ q             p ⇒ q
Round 2       p ⇒ q             p ⇒ r             q ⇒ ¬r
P-extension   {p, r, q, s}      {p, r, q, s}      {p, ¬r, q, s}
A-extension   {a1, a2, a4, a6}  {a1, a2, a4, a6}  {a1, a2, a3, a5}

Next we list consequences according to the three different consequence relations:

Table 02. Consequences according to the three consequence relations:

          p   q   ¬r   r   s
|~∩PExt   ✓   ✓             ✓
|~∩AExt   ✓   ✓
|~∪Ext    ✓   ✓   ✓    ✓   ✓

Note that for |~∩PExt we have to consider the intersection of all P-extensions, {p, q, s}, and so we get the floating conclusion s (just like in Example 16). For |~∩AExt we consider the intersection of the A-extensions, {a1, a2}: while p, q ∈ Con[{a1, a2}], the floating conclusion s is not in Con[{a1, a2}]. Finally, for |~∪Ext we consider the union of all P-extensions, {p, q, r, ¬r, s}.

5.2.2 Temperate Accumulation

Our second accumulation method is nongreedy (or temperate) in that the defeasible elements from Def(K) that may be added to the commitment store Def* in each step of the algorithm need not give rise to new arguments. In more technical terms, our agent may also add defeasible rules that are not triggered by Def*. This is described in Algorithm 2, TemAcc. We use the same notation as before: DExttem(K) is the set of D-extensions generated by TemAcc(K), and AExttem(K) =df {ArgK(D) | D ∈ DExttem(K)} resp. PExttem(K) =df {Con[A] | A ∈ AExttem(K)} is the corresponding set of A- resp. P-extensions. The three types of consequence relations |~∩PExttem, |~∩AExttem, and |~∪Exttem are defined analogously to the greedy versions (see Table 3).

Algorithm 2 Temperate accumulation

1: procedure TemAcc(K)                                             ▷ K = ⟨As, Ad, Rs, Rd⟩
2:   Def* ← ∅                                                      ▷ init scenario
3:   while ∃ d ∈ Def(K)∖Def* consistent with Def* do
4:     Def* ← Def* ∪ {d}                                           ▷ update scenario
5:   end while                                                     ▷ no more consistent defaults
6:   return Def*                                                   ▷ return D-extension
7: end procedure
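Compared with GreedyAcc, TemAcc simply drops the triggered check. A self-contained Python sketch under toy simplifications (the encoding and names are ours: literals as strings with "~" for negation, rules as (premises, conclusion) pairs, consistency checked on derivable literals):

```python
def neg(lit):
    """Negation on literals; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def closure(facts, strict, chosen):
    """Literals derivable from the facts via the strict rules plus the chosen defaults."""
    concs, changed = set(facts), True
    while changed:
        changed = False
        for prem, head in list(strict) + list(chosen):
            if set(prem) <= concs and head not in concs:
                concs.add(head)
                changed = True
    return concs

def consistent(concs):
    return all(neg(c) not in concs for c in concs)

def temperate_extensions(facts, strict, defaults):
    """D-extensions produced by all nondeterministic runs of TemAcc."""
    found = set()

    def run(chosen):
        candidates = [
            d for d in defaults
            if d not in chosen
            and consistent(closure(facts, strict, chosen + [d]))  # only consistency is checked
        ]
        if not candidates:  # no consistent default left: stop
            found.add(frozenset(chosen))
        for d in candidates:
            run(chosen + [d])

    run([])
    return found

# Toy knowledge base: As = {p}, strict rules r -> s and ~r -> s,
# defaults p => q, p => r, q => ~r.
FACTS = {"p"}
STRICT = [(("r",), "s"), (("~r",), "s")]
DEFAULTS = [(("p",), "q"), (("p",), "r"), (("q",), "~r")]
```

On this knowledge base TemAcc yields a third D-extension, {p ⇒ r, q ⇒ ¬r}, in addition to the two greedy ones, and the intersection of the P-extensions shrinks to {p, s}: q is no longer a skeptical consequence.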

Remark 1. Let us make two immediate observations to better understand how the greedy approach relates to the temperate approach. First, since defeasible assumptions are always triggered, the greedy and the temperate accumulation methods coincide for knowledge bases without defeasible rules (where Rd = ∅). Second, every run of GreedyAcc corresponds to an initial segment of some run of TemAcc.

Example 18 (Example 17 cont.). We apply TemAcc to our knowledge base. There are six possible runs; we omit runs 1–3, which are analogous to Example 17:

Table 03. Runs 4–6 of TemAcc:

              Run 4             Run 5         Run 6
Round 1       q ⇒ ¬r            q ⇒ ¬r        p ⇒ r
Round 2       p ⇒ q             p ⇒ r         q ⇒ ¬r
P-extension   {p, ¬r, q, s}     {p, r, s}     {p, r, s}
A-extension   {a1, a2, a3, a5}  {a1, a4, a6}  {a1, a4, a6}

In comparison with GreedyAcc we get three additional runs, namely 4–6. While run 4 is just a permutation of run 3, runs 5 and 6 give rise to new D-extensions. They show the nongreedy character of TemAcc. Consider, for instance, run 6: although in round 2 the default p ⇒ q is both triggered by and consistent with {p ⇒ r}, the algorithm chooses the nontriggered q ⇒ ¬r.

We list consequences according to the different notions of consequence, marking differences to GreedyAcc with [!]:

Table 04. Consequences, with differences to GreedyAcc marked [!]:

          p   q    ¬r   r   s
|~∩PExt   ✓   [!]           ✓
|~∩AExt   ✓   [!]
|~∪Ext    ✓   ✓    ✓    ✓   ✓

We see that q no longer follows by |~∩PExt and |~∩AExt.

While in our example every D-extension based on greedy accumulation is also one based on temperate accumulation, the example demonstrates that the converse typically fails. As a consequence, temperate accumulation gives rise to a more cautious style of reasoning than the greedy approach, at least in terms of the skeptical consequence relations and when no preferences are involved (see Example 29 for a counterexample with preferences).

Figure 10 gives an overview on NMLs discussed in this Element and where they fall in terms of our classification.

Figure 10 The syntactic approach and NMLs discussed in this Element. Extension generation proceeds either via 1. argumentation (Part II), covering ASPIC+ and logic-based argumentation, or 2. accumulation (Part III), which branches into temperate accumulation (Section 11), covering MCS-based reasoning and input/output logic, and greedy accumulation (Section 12), covering Default Logic.

5.2.3 Temperate Accumulation and Maxicon Sets

Alternatively to the iterative procedure TemAcc, the D-extensions of temperate accumulation can also be characterized in terms of maxicon sets (short for maximally consistent sets).

Definition 5.3. Given a knowledge base K, a set D ⊆ Def(K) is a maxicon set of K (in signs, D ∈ maxcon(K)) iff (i) D is consistent in K (i.e., Con[ArgK(D)] is consistent) and (ii) for all D′ ⊆ Def(K), if D ⊊ D′ then D′ is inconsistent in K.

Proposition 5.1. Let K be a knowledge base and D ⊆ Def(K). D is a D-extension generated by TemAcc iff D ∈ maxcon(K).

Proof. Suppose D = {d1,…,dn} ∈ maxcon(K). We consider a run of TemAcc in which in the i-th round of the loop di is added to Def*. We note that since D is consistent in K, so is each of its subsets. Thus, the while loop is not exited before the n-th round. When the condition of the loop is checked for the (n+1)-th time, Def* = D. By the maximal consistency of D in K, there is no d ∈ Def(K)∖D left for which D ∪ {d} is consistent in K. So, TemAcc terminates and returns D. The other direction is similar. □

Example 19 (Example 18 cont.). Our knowledge base K has the maxicon sets D1 = {p ⇒ q, q ⇒ ¬r}, D2 = {p ⇒ q, p ⇒ r}, and D3 = {q ⇒ ¬r, p ⇒ r}. These exactly correspond to the D-extensions of temperate accumulation.
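Proposition 5.1 suggests computing D-extensions without any stepwise procedure: enumerate the subsets of Def(K) and keep the maximal consistent ones. A Python sketch under toy simplifications (the encoding and names are ours: literals as strings with "~" for negation, rules as (premises, conclusion) pairs):

```python
from itertools import combinations

def neg(lit):
    """Negation on literals; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def closure(facts, strict, chosen):
    """Literals derivable from the facts via the strict rules plus the chosen defaults."""
    concs, changed = set(facts), True
    while changed:
        changed = False
        for prem, head in list(strict) + list(chosen):
            if set(prem) <= concs and head not in concs:
                concs.add(head)
                changed = True
    return concs

def consistent(concs):
    return all(neg(c) not in concs for c in concs)

def maxicon(facts, strict, defaults):
    """All subset-maximal D of the defaults that are consistent in K."""
    cons = [frozenset(c)
            for r in range(len(defaults) + 1)
            for c in combinations(defaults, r)
            if consistent(closure(facts, strict, c))]
    return {D for D in cons if not any(D < E for E in cons)}

# Toy knowledge base: As = {p}, strict rules r -> s and ~r -> s,
# defaults p => q, p => r, q => ~r.
FACTS = {"p"}
STRICT = [(("r",), "s"), (("~r",), "s")]
DEFAULTS = [(("p",), "q"), (("p",), "r"), (("q",), "~r")]
```

For the toy knowledge base this returns three maxicon sets, mirroring D1, D2, and D3 of Example 19.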

As a consequence of Proposition 5.1 we obtain an alternative characterization of the nonmonotonic consequence relations |~∩PExttem, |~∩AExttem, and |~∪Exttem.

Corollary 5.1. Let K be a knowledge base and S ∪ {A} a set of sentences.

  1. S |~∩PExttem A iff for every D ∈ maxcon(K), A ∈ Con[ArgK(D)].

  2. S |~∩AExttem A iff A ∈ Con[∩{ArgK(D) | D ∈ maxcon(K)}].

  3. S |~∪Exttem A iff for some D ∈ maxcon(K), A ∈ Con[ArgK(D)].

The consequence relation |~∩AExttem can be equivalently characterized by means of minimal conflict sets:

Definition 5.4. D ⊆ Def(K) is a minimal conflict set for K iff D is inconsistent in K but every D′ ⊊ D is consistent in K. The set of innocent bystanders in K, IB(K), consists of all members of Def(K) that are not members of minimal conflict sets for K.

Example 20 (Example 9 cont.). For our knowledge base K we have IB(K) = ∅ since every defeasible element is part of a minimal conflict. Were we to add, for instance, p ⇒ u to Rd, resulting in K′, we would have IB(K′) = {p ⇒ u}.

Proposition 5.2. Let K be a knowledge base. Then, (i) IB(K) = ∩maxcon(K) and (ii) K |~∩AExttem A iff A ∈ Con[ArgK(IB(K))].

Proof. We show (i); (ii) then follows immediately by Corollary 5.1. Suppose d ∉ IB(K). Thus, there is a minimal conflict set D in K with d ∈ D. So, D∖{d} is consistent and there is a D′ ∈ maxcon(K) with D∖{d} ⊆ D′ and d ∉ D′. So d ∉ ∩maxcon(K). The other direction is similar and left to the reader. □
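Minimal conflict sets and innocent bystanders can be computed by the same brute force. A Python sketch under toy assumptions (the encoding and names are ours), which also checks Proposition 5.2(i) on a small example:

```python
from itertools import combinations

def neg(lit):
    """Negation on literals; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def closure(facts, strict, chosen):
    """Literals derivable from the facts via the strict rules plus the chosen defaults."""
    concs, changed = set(facts), True
    while changed:
        changed = False
        for prem, head in list(strict) + list(chosen):
            if set(prem) <= concs and head not in concs:
                concs.add(head)
                changed = True
    return concs

def consistent(concs):
    return all(neg(c) not in concs for c in concs)

def maxicon(facts, strict, defaults):
    """All subset-maximal sets of defaults that are consistent in K."""
    cons = [frozenset(c)
            for r in range(len(defaults) + 1)
            for c in combinations(defaults, r)
            if consistent(closure(facts, strict, c))]
    return {D for D in cons if not any(D < E for E in cons)}

def minimal_conflict_sets(facts, strict, defaults):
    """All subset-minimal sets of defaults that are inconsistent in K."""
    incons = [frozenset(c)
              for r in range(len(defaults) + 1)
              for c in combinations(defaults, r)
              if not consistent(closure(facts, strict, c))]
    return {D for D in incons if not any(E < D for E in incons)}

def innocent_bystanders(facts, strict, defaults):
    mcs = minimal_conflict_sets(facts, strict, defaults)
    return {d for d in defaults if not any(d in D for D in mcs)}

# Toy knowledge base: As = {p}, strict rules r -> s and ~r -> s,
# defaults p => q, p => r, q => ~r (and optionally p => u).
FACTS = {"p"}
STRICT = [(("r",), "s"), (("~r",), "s")]
DEFAULTS = [(("p",), "q"), (("p",), "r"), (("q",), "~r")]
```

On the three original defaults the only minimal conflict set is the whole set, so IB is empty; adding a default p ⇒ u leaves it an innocent bystander, and IB coincides with the intersection of the maxicon sets.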

5.3 Semantic Methods

Let us suppose a knowledge base of the form K = ⟨As, Ad, RL⟩ for a Tarski logic L (such as CL; see Section 4). A natural interpretation of K |~ A is that A holds in the most normal situations that are consistent with the strict assumptions As in K, where the standard of normality is contributed by the defeasible elements Ad of K.

In many NMLs this idea is realized in terms of semantic selections.Footnote 30 Supposing that L provides a model semantics to interpret formulas in As ∪ Ad, we consider the models of As, written M(As). We write M ⊨ A if A is interpreted as true in M. On these models an order ⪯ is imposed, where M ⪯ M′ in case M is at least as normal as M′. What it means to be more normal is determined by the defeasible information in K (a concrete example is given in the next paragraph). The entailment relation is then defined by:

K |~ A iff M ⊨ A for all M ∈ min⪯(M(As)),

that is, the most normal models of As validate A.

To make this idea more concrete we return to the system of Plausible Reasoning of Section 2.3. There we modeled defeasible inferences A1,…,An ⇒ B in terms of implications A1 ∧ … ∧ An ∧ ¬ab → B supplemented with normality assumptions ¬ab ∈ Ad. The strict rules RCL are contributed by classical logic. So the knowledge base has the form ⟨As, Ad, RCL⟩, or in short ⟨As, Ad⟩. We additionally assume that As is classically satisfiable (so it has a model). According to the rationale stated earlier, ⟨As, Ad⟩ |~ A means that A holds in all situations in which the assumptions of As are true and which are most normal relative to the defeasible assumptions in Ad.

Where M is a classical model of As, let NK(M) =df {A ∈ Ad | M ⊨ A} be the normal part of M. We can then order the models by ⪯ ⊆ M(As) × M(As) as follows:

M ⪯ M′ (M is at least as normal as M′) iff NK(M) ⊇ NK(M′).

In other words, the more defeasible assumptions a model verifies, the more normal it is. The most normal models are then those in min⪯(M(As)). See Fig. 11 for an illustration.


Figure 11 Nonmonotonic entailment by semantic selections.

Example 21 (Example 9 cont.). We take another look at K from Example 9. Among others, we have the classical models of As listed in Fig. 12 (left), whose ordering is illustrated on the right. The minimal models are M1, M2, and M3. We therefore have, for instance, K |~ p and K |~ r ∨ q.

Figure 12 The order on the models of Example 21. Highlighted are the ⪯-minimal models. The atoms p and s are true in every model of As.

            q   r   ab1  ab2  ab3
M1 (min):   1   1    0    0    1
M2 (min):   1   0    0    1    0
M3 (min):   0   1    1    0    0
M4:         1   0    1    1    0
M4^i:       0   i    1    1    0    (i ∈ {0,1})
M5^i:       1   i    0    1    1
M6^i:       i   1    1    0    1
M7^{i,j}:   i   j    1    1    1    (i, j ∈ {0,1})

The ordering (arrows pointing from less to more normal models): M7^{i,j} → M6^i, M5^i, M4^i, M4; M6^i → M1, M3; M5^i → M1, M2; M4^i → M2, M3; M4 → M2, M3.

Semantic selections have also been used as a model of the closed-world assumption in McCarthy’s circumscription (McCarthy, Reference McCarthy1980).Footnote 31 In our presentation this is realized by letting Ad be a set of negated atoms.

Example 22. Suppose Anne checks the online menu of the university canteen and finds the information that fries are served and that either pizza or burger is available. Consider the knowledge base Kcan = ⟨As, Ad, RCL⟩, where As = {fries, pizza ∨ burger}, Ad = {¬A | A ∈ Atoms}, and Atoms = {fries, pizza, burger, soup}. In Fig. 13 we find the ⪯-ordering of the models of As. With |~, Anne concludes, for instance, ¬soup and ¬pizza ∨ ¬burger. This is in accordance with the closed-world assumption: what is not listed in the menu is assumed not to be offered.

Figure 13 Models of As in Example 22 with highlighted ⪯-minimal models.

            fries  pizza  burger  soup
M1 (min):     1      1      0      0
M2 (min):     1      0      1      0
M3:           1      1      1      0
M4:           1      1      0      1
M5:           1      0      1      1
M6:           1      1      1      1

The ordering (arrows pointing from less to more normal models): M6 → M3, M4, M5; M3, M4 → M1; M3, M5 → M2.
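Example 22 is small enough to check mechanically. Here is a Python sketch (the encoding is ours) that enumerates the models of As, orders them by their normal part, and tests entailment in the ⪯-minimal models:

```python
from itertools import product

ATOMS = ["fries", "pizza", "burger", "soup"]

def satisfies_strict(v):
    # As = {fries, pizza OR burger}
    return v["fries"] and (v["pizza"] or v["burger"])

def normal_part(v):
    # Ad = {~A | A an atom}: the negated atoms that v verifies
    return frozenset(a for a in ATOMS if not v[a])

# All classical models of As
MODELS = []
for bits in product([True, False], repeat=len(ATOMS)):
    v = dict(zip(ATOMS, bits))
    if satisfies_strict(v):
        MODELS.append(v)

def most_normal(models):
    """Models whose normal part is not properly extended by any other model."""
    return [m for m in models
            if not any(normal_part(n) > normal_part(m) for n in models)]

def entails(formula):
    """K |~ A iff A holds in all most normal models of As."""
    return all(formula(m) for m in most_normal(MODELS))
```

The two most normal models correspond to M1 and M2 of Fig. 13; ¬soup and ¬pizza ∨ ¬burger are entailed, while pizza, for instance, is not.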

6 A Roadmap

In this introduction we have explained the main ideas and concepts behind several core methods of NML. In what follows we will deepen our understanding of

  • the argumentation method in which a reasoner analyzes the interplay between arguments and their counterarguments to determine coherent sets of arguments (Part II);

  • the methods based on consistent accumulation, temperate and greedy, in which a reasoner gradually commits to more and more defeasible information from the given knowledge base (Part III); and

  • the semantic method in which a reasoner determines the most normal interpretations of the given knowledge base (Part IV).

We will study metatheoretic properties that come with these methods and discuss central logics from the literature that implement them.

Given that the field of NML comes with such a variety of systems and methods, it will also be our task to provide links between the methods. As we will see, several classes of logics belonging to different methods give rise to the same class of nonmonotonic consequence relations (see Fig. 14 for an overview).

Figure 14 Links between the various methods studied in this Element. Semantic methods are connected to greedy accumulation (Thm. 16.1), argumentation (Thm. 16.2), temperate accumulation (Cor. 15.1; Thm. 15.2), and fixpoints (Def. 16.4). Greedy accumulation is connected to argumentation (Prop. 12.1), temperate accumulation (Rem. 1), and fixpoints (Prop. 12.1; Thm. 10.1). Argumentation is connected to temperate accumulation (Thm. 11.3; Thm. 9.1) and fixpoints (Def. 7.1). Temperate accumulation is connected to fixpoints (Thm. 10.1).

Part II Formal Argumentation

Argumentation theory as a study of defeasible reasoning goes back at least to Toulmin (Reference Toulmin1958). His book provides a critique of formal logic as a model of the defeasible nature of commonsense reasoning. While many NMLs were proposed in the early 1980s, the most influential pioneering works in formal argumentation, such as Pollock (Reference Pollock1991, Reference Pollock1995) and Dung (Reference Dung1995), appeared only in the 1990s. What distinguishes these approaches from earlier NMLs is the prominent status of arguments and defeat. The ambition is to provide both an intuitive and unifying account of defeasible reasoning. Recently, Mercier and Sperber (Reference Mercier and Sperber2017) have made a strong case for the argumentative nature of human reasoning. Together with the rich tradition in informal argumentation theory (e.g., Eemeren & Grootendorst, Reference Eemeren and Grootendorst2004; Walton et al., Reference Walton, Reed and Macagno2008) this strongly motivates formal argumentation as an account of defeasible reasoning which is close to human reasoning practices.

In this part we deepen our understanding of formal argumentation theory. In Section 7 we explain how Dung's abstract perspective provides a way to select arguments from an argumentation framework. In Sections 8 and 9 we present two ways of equipping arguments with logical structure.

7 Abstract Argumentation

In formal argumentation the question as to what follows from a given defeasible knowledge base K is answered by means of an argumentative analysis. It is the essential idea behind abstract argumentation (introduced by Dung, Reference Dung1995) that as soon as the arguments induced by K are generated and collected in the set ArgK, and as soon as the attacks between them are determined and collected in the relation att ⊆ ArgK × ArgK, we can abstract from the concrete content of those arguments, focus on the directed graph given by ⟨ArgK, att⟩, and select arguments simply by means of analyzing this graph.Footnote 32 The latter is called the argumentation framework for K. The argumentation semantics defined in the following definition offer criteria to select arguments that form a defendable and consistent stance. We call the selected sets of arguments A-extensions of K. A-extensions form the basis of three types of nonmonotonic consequence relations: |~∩AExt, |~∩PExt, and |~∪Ext (see Table 3). Due to its strict division of labor between argument and attack generation, on the one hand, and argument selection with its induced notion of nonmonotonic consequence, on the other hand, formal argumentation offers a transparent and clean methodology.

Definition 7.1 (Argumentation Semantics, Dung (1995)). Let ⟨Arg, att⟩ be an argumentation framework and X ⊆ Arg a set of arguments. We say that X defends a ∈ Arg if for all b ∈ Arg, if (b, a) ∈ att then there is a c ∈ X such that (c, b) ∈ att. We write defended(X) for the set of arguments that are defended by X. In Table 4 we list several types of A-extensions.

Table 4 Argumentation semantics.

X is            iff
conflict-free   for all a, b ∈ X, (a, b) ∉ att
admissible      X is conflict-free and X ⊆ defended(X)
complete        X is conflict-free and X = defended(X)
grounded        X is the unique ⊆-minimal complete set
preferred       X is a ⊆-maximal admissible set
stable          X is conflict-free and for all a ∈ Arg∖X there is a b ∈ X such that (b, a) ∈ att

In Fig. 15 we see the logical connections between the different argumentation semantics, all of which have been shown in Dung (Reference Dung1995). Dung also showed that extensions of all types except stable always exist (they may be empty, though) and that the grounded extension consists exactly of those arguments that are contained in every complete extension. Stable extensions often do not exist in frameworks that give rise to odd cycles: consider, for instance, AF = ⟨{a}, {(a, a)}⟩, in which neither ∅ nor {a} is stable. In Fig. 16 we find an argumentation framework with five arguments. Depicted are some of its extensions.

Figure 15 Relations between argumentation semantics. Every extension of the type left of an arrow is also an extension of the type to its right: stable → preferred → complete → admissible → conflict-free; grounded → complete.


Figure 16 Left: An argumentation framework composed of five arguments. Highlighted in the center and on the right are its two preferred extensions. The extension in the center is the only stable extension. The grounded extension in this example is ∅.
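For small frameworks, the semantics of Table 4 can be computed by brute-force enumeration of subsets. A Python sketch (the function names are ours):

```python
from itertools import combinations

def subsets(args):
    """All subsets of the set of arguments, as frozensets."""
    for r in range(len(args) + 1):
        for c in combinations(sorted(args), r):
            yield frozenset(c)

def conflict_free(X, att):
    return not any((a, b) in att for a in X for b in X)

def defended_by(X, args, att):
    """Arguments each of whose attackers is counterattacked by some member of X."""
    return frozenset(a for a in args
                     if all(any((c, b) in att for c in X)
                            for b in args if (b, a) in att))

def semantics(args, att):
    adm = [X for X in subsets(args)
           if conflict_free(X, att) and X <= defended_by(X, args, att)]
    complete = [X for X in adm if X == defended_by(X, args, att)]
    grounded = min(complete, key=len)  # the unique subset-minimal complete extension
    preferred = [X for X in adm if not any(X < Y for Y in adm)]
    stable = [X for X in subsets(args)
              if conflict_free(X, att)
              and all(any((b, a) in att for b in X) for a in set(args) - X)]
    return {"admissible": adm, "complete": complete, "grounded": grounded,
            "preferred": preferred, "stable": stable}
```

On a mutual attack between a and b this yields the grounded extension ∅ and the two preferred (and stable) extensions {a} and {b}; on a three-cycle the only preferred extension is ∅ and no stable extension exists, illustrating the remark about odd cycles.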

8 ASPIC+

We now move from abstract to structured argumentation.Footnote 33 This means that our arguments will now get a logical form and attacks will be defined in terms of logical relations between arguments. ASPIC+ is one of the most prominent and most expressive frameworks in formal argumentation (Modgil & Prakken, Reference Modgil and Prakken2013). Arguments are generated on the basis of the inference rules and assumptions in a given knowledge base K of the form ⟨As, Ad, Rs, Rd, ⩽⟩ (see Definition 5.1). We let ArgK denote the set of all arguments induced by K. In the context of ASPIC+ we frequently find three types of attacks. In order to define them, we need to enhance knowledge bases with two elements. (a) A contrariness function maps each formula A to a set of contraries, for example, the contraries of A may be {¬A} or {B | B → ¬A ∈ Rs}. (b) A naming function name allows us to refer to defeasible rules r ∈ Rd in the object language by name(r). So our knowledge bases will have the extended form K = ⟨As, Ad, Rs, Rd, ⩽, contrariness, name⟩.

Definition 8.1. Where a, b ∈ ArgK, we define three types of attacks:

Rebut:

a rebuts b in b′ ∈ Sub(b) iff b′ is of the form b1,…,bn ⇒ B and Con(a) is a contrary of B.

Undercut:

a undercuts b in b′ ∈ Sub(b) iff b′ is of the form b1,…,bn ⇒ B, where the top rule is r ∈ Rd, and Con(a) is a contrary of name(r).

Undermining:

a undermines b in a defeasible assumption B ∈ Ad in case Con(a) is a contrary of B and B ∈ Sub(b).

An informal example of a rebut is one where Peter calls upon weather report 1 to argue that it will rain, while Anne counters by calling upon weather report 2 that predicts the opposite. An undercut may occur in a case of specificity: while Peter argues that Tweety can fly based on the fact that Tweety is a bird and birds usually fly, Anne counters that the default “Birds fly” is not applicable to Tweety since Tweety is a penguin and, as such, Tweety is exceptional to “Birds fly.” Undermining happens if Anne argues against one of Peter’s basic (defeasible) assumptions: Peter may argue that they should go and buy groceries, since the shop is open, when Anne reminds him of the fact that it is a public holiday and therefore shops are closed.

Whenever the defeasible elements of a knowledge base differ in strength, not every attack may be successful. In the context of ASPIC+ we refer to successful attacks as defeats. There are various ways defeats can be defined, but they are all based on a lifting of ⩽ to the level of arguments (recall that ⩽ ⊆ Def(K) × Def(K) orders the defeasible elements of our knowledge base K). We present here the most common approach, called weakest link. To simplify things, we also suppose that ⩽ is a total preorder (so it is reflexive, transitive, and total). Where D1, D2 ⊆ Def(K), we let D1 ⩽ D2 if there is a d1 ∈ D1 such that for all d2 ∈ D2, d1 ⩽ d2. Then, for two arguments a, b ∈ ArgK, we let a ⩽ b iff Def(a) ⩽ Def(b).Footnote 34 We now say that a defeats b iff a attacks b (Definition 8.1) and (i) b ⩽ a or (ii) the attack is an undercut.Footnote 35

In the special case in which no preference order is specified in the knowledge base, a defeats b iff a attacks b. If the naming function is left unspecified in K, undercuts are omitted.
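With numeric ranks (higher = stronger) standing in for the total preorder, the weakest-link lifting and the defeat check can be sketched as follows (the helper names and rank encoding are ours; arguments without defeasible elements are treated as maximally strong):

```python
def weakest_link(ranks, def_elems):
    """Strength of an argument: the rank of its weakest defeasible element.
    Arguments without defeasible elements count as maximally strong."""
    return min((ranks[d] for d in def_elems), default=float("inf"))

def defeats(attacker_defs, target_defs, ranks, undercut=False):
    """a defeats b iff a attacks b and either the attack is an undercut
    or b is at most as strong as a (condition (i): b <= a)."""
    return undercut or weakest_link(ranks, target_defs) <= weakest_link(ranks, attacker_defs)
```

For instance, with ranks 7 for a rule r1 and 3 for a rule yielding s, a rebuttal by the weaker argument fails, but an undercut with the same strengths succeeds, mirroring the clause by which attacker strength plays no role for undercuts.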

Definition 8.2. Let K = ⟨As, Ad, Rs, Rd, ⩽, contrariness, name⟩ be a knowledge base. AFK = ⟨ArgK, ⇝⟩ is an ASPIC+-based argumentation framework, where for a, b ∈ ArgK, a ⇝ b iff a defeats b.

A-extensions obtained via the different argumentation semantics s (grounded, preferred, stable, etc.) in Definition 7.1 can serve as a basis for the three types of consequences, defined exactly as in Definition 5.2 and Table 3 in Section 5.1.

Example 23. We consider the knowledge base K = ⟨As, Ad, Rs, Rd, ⩽, contrariness, name⟩, where As = {p}, Ad = {q}, Rs = {¬q → u, v → u},

Rd = {r1: p ⇒⁷ ¬t, p ⇒³ s, s ⇒⁵ ¬q, q ⇒³ t, t ⇒² v, r2: p ⇒⁹ t},

the contrary of name(r1) is s, and the contrary of name(r2) is t. In order to define ⩽ we "rank" the members of Def(K) as indicated in the superscripts of the defaults and let the rank of the defeasible assumption q be 3. Where d1, d2 ∈ Def(K), we then let d1 ⩽ d2 iff rank(d1) ⩽ rank(d2).

The arguments induced by K and the corresponding argumentation framework are depicted in Fig. 17. We note that a1 is defeated by b1 despite the fact that b1 is weaker than a1 (comparing their weakest links), since the attack is an undercut, for which the strength of the attacker plays no role. The defeat between b2 and c0 is mutual: we have an undermining attack from b2 to c0, while the other way around it is a rebuttal. In Table 5 we list the different argumentation extensions and the corresponding consequence relations.


Figure 17 The argumentation framework for Example 23. Solid arrows represent rebuttals, dashed arrows undermining, and dotted arrows undercuts.

Table 5 The various extensions and consequences for Example 23.

           Complete                        Grounded  Preferred  Stable
           X0 = {a0, b1}                   X0        -          -
           X1 = {a0, b1, b2, b3}           -         X1         -
           X2 = {a0, b1, c0, c1, c2, c3}   -         X2         X2
|~∩PExt    {p, s}                          {p, s}    {p, s, u}  {p, q, u, s, t, v}
|~∩AExt    {p, s}                          {p, s}    {p, s}     {p, q, u, s, t, v}
|~∪Ext     S = {p, q, ¬q, t, s, ¬s, u, v}  {p, s}    S          {p, q, u, s, t, v}

Example 24. Consider the knowledge base K = ⟨As, RCL, Rd⟩ (without preferences), where Rd = {p ⇒ q, p ⇒ s, p ⇒ ¬(q ∧ s)} and As = {p}. We have, for instance, the following arguments:

a1 = p ⇒ q
a2 = p ⇒ s
a3 = p ⇒ ¬(q ∧ s)
a4 = a1, a2 → q ∧ s
a5 = a1, a3 → ¬s
a6 = a2, a3 → ¬q

The reader may be puzzled by an odd restriction in Definition 8.1: when attacking an argument in which inference rules have been applied, only attacks in the heads of defeasible rules are allowed. Why did we not simply define: a attacks b iff Con(a) ⊢CL ¬Con(b)? Figure 18 features the resulting argumentation framework. We observe that there is now a preferred (and stable) extension with the conclusions q, s, and ¬(q ∧ s). This may be considered unwanted if we want our A-extensions to represent rational and therefore consistent stances of debaters.


Figure 18 Example 24 with the inconsistent preferred and stable extension {a1,a2,a3}.

Problems such as the one highlighted in our previous example show the need for a set of design desiderata, or rationality postulates, that argumentation-based NMLs should fulfill. The following have become standard in the literature (Caminada & Amgoud, Reference Caminada and Amgoud2007). Given a standard of consistency, a knowledge base K = ⟨As, Ad, Rs, Rd, ⩽, contrariness, name⟩, an argumentation semantics, an A-extension X based on it, and the argumentation framework ⟨ArgK, ⇝⟩, we define

Direct consistency.

For all a, b ∈ X, {Con(a), Con(b)} is consistent.

Indirect consistency.

Con[X] is consistent.

Strict closure.

Where a1, …, an ∈ X and Con(a1), …, Con(an) → A ∈ Rs, also a1, …, an → A ∈ X.

In Example 24 we have seen that allowing for "unrestricted" rebut in our simple framework results in a violation of indirect consistency and strict closure,Footnote 36 unlike the restricted rebut of Definition 8.1.

Another rationality property has to do with syntactic relevance. We give an example to motivate it.

Example 25. Consider the knowledge base K1 = ⟨As, RCL, Rd1, k⟩, where As = {t}, Rd1 = {t ⇒ s}, and the contrariness function is given by Ā = {B ∣ ⊢CL A ≡ ¬B}. Clearly, the grounded extension will contain the argument a : t ⇒ s, and therefore both t and s follow with |∼∩AExt and |∼∩PExt.

We now extend our knowledge base to K2 = ⟨As2, RCL, Rd2, k⟩, where As2 = {t, p} and Rd2 = {p ⇒ q, p ⇒ ¬q, t ⇒ s}. Figure 19 shows a relevant excerpt of the argumentation framework for K2. Argument c is obtained by the rule q, ¬q → ¬s, which holds due to the classical explosion principle.

Note that we only added information to K1 that is syntactically irrelevant to both t and s. Nevertheless, the grounded extension of K2 only consists of arguments that do not involve defeasible rules (such as b0 or b0=b0pq). Therefore, a is not part of it. As a consequence, |∼∩PExt and |∼∩AExt will deliver only classical consequences of {p, t}, but no longer s.

Content of image described in text.

Figure 19 Excerpt of the argumentation framework of Example 25.

The rationality property noninterference (Caminada et al., 2012) expresses, informally, that adding syntactically irrelevant information to a knowledge base should not lead to the loss of consequences. Our example shows that this property does not hold for grounded extensions.

9 Logic-Based Argumentation

Another line of research within structured argumentation is logic-based (or deductive) argumentation. In what follows we will show that it has close connections to temperate accumulation and that, just as in the case of ASPIC+, ill-conceived combinations of attack forms and argumentation semantics can lead to undesired metatheoretic behavior.

Logic-based argumentation has been proposed, for instance, in Arieli and Straßer (2015) and Besnard and Hunter (2001). Our presentation follows the approach in Arieli et al. (2023), but simplifies it in some respects.Footnote 37 Knowledge bases have the form K = ⟨Ad, As, RL⟩, where the set of strict rules RL is induced by an underlying Tarski logic L.

In Definition 5.1, arguments are proof trees. In the context of knowledge bases without defeasible rules and for which the strict rules are induced by a base logic L, arguments are often modeled more abstractly simply as premise-conclusion pairs.

Definition 9.1. Where K = ⟨As, Ad, RL⟩, we let ArgK = {(S, A) ∣ S ⊆ As ∪ Ad is finite and S ⊢L A}. Where a = (S, A) is an argument in ArgK, Con(a) = A and Def(a) = S ∩ Ad. Where A ⊆ Ad, ArgK(A) = {a ∈ ArgK ∣ Def(a) ⊆ A}.

Attacks between arguments can be defined in various ways. Some examples are given in Table 6.Footnote 38

Table 6Attack types in logic-based argumentation.
The table consists of four columns: Type, Attacker, Attacked, and Conditions.

Defeat: attacker (A1, ¬⋀A2), attacked (A2 ∪ A2′, C); conditions: A2 ≠ ∅, A2 ⊆ Ad.
DirDefeat: attacker (A1, ¬A), attacked (A2 ∪ {A}, C); condition: A ∈ Ad.
ConDefeat: attacker (A1, ¬⋀(A2 \ As)), attacked (A2, C); conditions: A1 ⊆ As, A2 ∩ Ad ≠ ∅.
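To make Definition 9.1 and one of the attack types of Table 6 concrete, here is a small Python sketch of our own (not from the Element): formulas are encoded as nested tuples, CL-entailment is checked by truth tables over a fixed atom list, and only the DirDefeat attack is implemented. All function names are assumptions of this sketch.

```python
from itertools import product

def holds(f, v):
    """Evaluate a formula f under a valuation v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not holds(f[1], v)
    if f[0] == 'imp':
        return (not holds(f[1], v)) or holds(f[2], v)
    if f[0] == 'and':
        return holds(f[1], v) and holds(f[2], v)
    raise ValueError(f[0])

def entails(premises, concl, atoms):
    """S |-CL A, checked by truth tables over the listed atoms."""
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(holds(p, v) for p in premises) and not holds(concl, v):
            return False
    return True

def dir_defeats(a, b, Ad):
    """(S1, not A) DirDefeats (S2, C) iff the defeasible premise A occurs in S2."""
    (_, c1), (s2, _) = a, b
    return isinstance(c1, tuple) and c1[0] == 'not' and c1[1] in s2 and c1[1] in Ad

atoms = ['p', 'u', 'q', 's']
# Example 26: As = {s}, Ad = {p & u, ~p & u, q, ~s}.
Ad = [('and', 'p', 'u'), ('and', ('not', 'p'), 'u'), 'q', ('not', 's')]
a = ([('and', 'p', 'u')], 'u')           # a premise-conclusion pair (S, A) with S |- A
b = (['s', ('not', 's')], ('not', 'q'))  # explosion-based argument
c = (['q'], 'q')
print(entails(a[0], a[1], atoms))  # True: a is an argument
print(entails(b[0], b[1], atoms))  # True: inconsistent premises entail anything
print(dir_defeats(b, c, Ad))       # True: b DirDefeats c on the defeasible premise q
```

The explosion-based attacker b illustrates why combinations of attack types and semantics have to be chosen with care, as discussed for Example 25.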

Definition 9.2. Let α be a nonempty set of attack types from Table 6 based on the knowledge base K = ⟨As, Ad, RL⟩, let arguments be defined as in Definition 9.1, and let att ⊆ ArgK × ArgK be defined by (a, b) ∈ att iff a attacks b in view of an attack type in α. We let AFα(K) = ⟨ArgK, att⟩ be the argumentation framework induced by K and α. For a given argumentation semantics s ∈ {grounded, preferred, stable} (see Table 4) and a set of attack types α, we denote the corresponding set of A-extensions by AExt_{s,α}(K) and the underlying nonmonotonic consequences analogous to Table 3. For instance,

  • K |∼∩PExt_{s,α} A iff in every s-extension X ∈ AExt_{s,α}(K) there is an argument (S, A).

Let in the following AttDir = {{DirDefeat}, {DirDefeat, ConDefeat}} and AttSet = {{Defeat}, {Defeat, ConDefeat}}.

Example 26. We let K = ⟨As, Ad, RCL⟩, where As = {s} and Ad = {p ∧ u, ¬p ∧ u, q, ¬s}. In Fig. 20 we see (a fragment of) the argumentation framework AFα(K). We note that for α ∈ AttSet the grounded extension concludes q, but not for α = {DirDefeat}. The latter is counterintuitive since q is syntactically unrelated to the conflict between p ∧ u and ¬p ∧ u and the conflict between s and ¬s. On the right (center and bottom) we see the two stable resp. preferred extensions for this example. In both cases we can conclude q and the floating conclusion u.

We also note a correspondence between the argumentative extensions and selections based on maxicon sets of K (see Section 5.2.3). We have maxcon(K) = {{p ∧ u, q}, {¬p ∧ u, q}} and ⋂maxcon(K) = {q}. So, in our example, the grounded semantics induces the same consequence relations |∼∩AExt_{grounded,α} and |∼∩PExt_{grounded,α} as |∼∩AExt^tem for α ∈ AttSet, while the stable and preferred semantics (s ∈ {stable, preferred}) induce the same consequence relations |∼∩PExt_{s,α} as |∼∩PExt^tem for any α ∈ AttDir (recall Section 5.2.3 and Corollary 5.1). This is not coincidental, as we see with Theorem 9.1.

Content of image described in text.

Figure 20 Example 26. Left: We let A = {p ∧ u, ¬p ∧ u}. The black nodes represent the grounded extension. Dashed arrows correspond to those Defeats and ConDefeats that are not DirDefeats, while solid arrows are (also) DirDefeats. Right top: The grounded extension for α = {DirDefeat}. Right center and bottom: the two stable resp. preferred extensions.

In fact, there is a close relation between logic-based argumentation and reasoning based on temperate accumulation.Footnote 39

Theorem 9.1 (Arieli et al., 2021b). Let K = ⟨As, Ad, RL⟩ be a knowledge base. We have:

  1. AExt_{s,α}(K) = {ArgK(T) ∣ T ∈ maxcon(K)} and |∼∩PExt^tem = |∼∩PExt_{s,α}, for α ∈ AttDir and s ∈ {stable, preferred}.

  2. AExt_{s,α}(K) = ArgK(⋂maxcon(K)) and |∼∩AExt^tem = |∼∩AExt_{s,α} = |∼∩PExt_{s,α}, for α ∈ AttSet and s = grounded.

While Theorem 9.1 identifies well-behaved combinations of attack types and argumentation semantics, the following two examples show that one has to be careful in order to avoid counter-intuitive behavior. (Recall similar problems in the context of ASPIC+ in Section 8.)

Example 27. We consider the knowledge base

K = ⟨As : ∅, Ad : {p, q, ¬(p ∧ q)}, RCL⟩.

In Fig. 21 we see that with α ∈ AttSet we obtain a problematic stable and preferred extension X featuring the inconsistent set of conclusions {p, q, ¬(p ∧ q)}, violating the indirect consistency property (see Section 8). On the right we find the argumentation framework with α ∈ AttDir, where X is no longer preferred (and therefore also not stable).

Content of image described in text.

Figure 21 Example 27. Left: α ∈ AttSet. Right: α ∈ AttDir.

Selected Further Readings

An excellent overview of the state of the art in formal argumentation is provided by the handbook series Handbook of Formal Argumentation (Baroni et al., 2018; Gabbay et al., 2021). Volume 5 of Argument & Computation contains several tutorials on central approaches, such as Modgil and Prakken (2014) and Toni (2014).

Already in the seminal Dung (1995) several embeddings of NMLs in abstract argumentation were provided, including default logic. A recent overview on structured argumentation can be found in Arieli et al. (2021a). Links to default logic with a special emphasis on preferences are established in, for example, Liao et al. (2018), Straßer and Pardo (2021), and Young et al. (2016); connections to maxicon sets are numerous (Arieli et al., 2019; Cayrol, 1995; Heyninck & Straßer, 2021b; Vesic, 2013); links to adaptive logics are to be found in Borg (2020), Heyninck and Straßer (2016), and Straßer and Seselja (2010); and to logic programming in Caminada and Schulz (2017), Heyninck and Arieli (2019), and Schulz and Toni (2016). Nonmonotonic reasoning properties of several systems of structured argumentation are studied in Borg and Straßer (2018), Čyras and Toni (2015), and Li et al. (2018). Probabilistic approaches can be found, for instance, in Haenni (2009), Hunter and Thimm (2017), and Straßer and Michajlova (2023). The Handbook of Formal Argumentation offers an excellent overview and detailed surveys of central topics in the area (Gabbay et al., 2021).

Part III Consistently Accumulating Defeasible Information

10 Consistent Accumulation: General Setting

In this section we study in a systematic way the two variants of the consistent accumulation method: greedy and temperate accumulation. First, in Section 10.1 we present the algorithms GreedyAcc and TemAcc in the settings of knowledge bases in the general form of Section 4 (including preferences). Then, in Section 10.2 we present alternative characterizations in terms of fixed points. In Section 10.3 we study metatheoretic properties of extensions and nonmonotonic consequences. While this section provides a general perspective, we dive into particularities and concrete systems in Sections 11 and 12.

10.1 Greedy and Temperate Accumulation

We now consider knowledge bases with all components

K = ⟨As, Ad, Rs, Rd, Rm⟩,

as introduced in Section 4, with the only restriction that the set of defeasible elements in K, Def(K) (= Ad ∪ Rd), is finite. As compared to Part I, we slightly generalize our two accumulation methods, greedy and temperate accumulation, by taking into account preferences among elements in Def(K). For this, we suppose there to be a reflexive and transitive order ⪯ on Def(K).

In the following we suppose, for any given nmL, a formal language L, a class of associated knowledge bases KnmL, a notion of what it means for a set of sentences S ⊆ sentL to be (in)consistent, for each K ∈ KnmL a set ArgK of arguments based on K, and for each a ∈ ArgK a notion Con(a) of conclusion and Def(a) of the defeasible part of a (e.g., Definitions 5.1, 9.1 and 11.3). Moreover, where D ⊆ Def(K), ArgK(D) =df {a ∈ ArgK ∣ Def(a) ⊆ D}. Many of the results presented in this part of the Element (e.g., the metatheoretic insights in Sections 10.3 and 11.1) will not rely on a specific underlying notion of argument, but apply to many concrete logics from the literature (such as the ones presented in Sections 11.3.1 and 11.3.2).

We first discuss greedy accumulation. As explained in Section 5.2, the main idea behind the algorithm is to build a D-extension by accumulating (1) triggered and (2) consistent defeasible information d ∈ Def(K). Since we now consider prioritized defeasible information, we add the requirement (3) that d is ⪯-maximal with properties (1) and (2). Let us make this precise.

Definition 10.1. For a defeasible rule r = A1, …, An ⇒ B ∈ Rd, we let Body(r) =df {A1, …, An} and Con(r) =df B. Similarly, for any A ∈ Ad, we let Body(A) =df ∅ and Con(A) =df A. Then, where D ⊆ Def(K) and d ∈ Def(K), we say that

  • d is triggered by D iff Body(d) ⊆ Con[ArgK(D)].Footnote 40

  • d is consistent with D iff Con[ArgK(D ∪ {d})] is consistent.

  • d ∈ max⪯(D) iff d ∈ D and for all d′ ∈ D, d ⊀ d′.

We write ConsK(D) for the set of all elements in Def(K) that are consistent with D, TrigK(D) for the set of all elements in Def(K) triggered by D, and Trig⊤K(D) for the set of all elements in ConsK(D) that are triggered by D.

Note that our definition implies that defeasible assumptions are automatically triggered. Algorithm GreedyAcc generates D-extensions for the greedy accumulation method. The A- resp. P-extension associated with a D-extension D is defined by ArgK(D) resp. by Con[ArgK(D)]. We write AExt^gr(K) resp. PExt^gr(K) for the sets of A- resp. P-extensions for K. In this way we obtain the consequence relations |∼∩AExt^gr, |∼∩PExt^gr, and |∼∪Ext^gr (see Table 3), where the superscript indicates that the underlying extensions have been obtained via greedy accumulation.

Example 28 (The order puzzle, Example 11 cont.). We recall the knowledge base K containing the preference order (p ⇒ q) ⪯ (p ⇒ r) ⪯ (q ⇒ ¬r) (supposing reflexivity and transitivity). Our algorithm GreedyAcc has exactly one run, in which in the first round of the loop p ⇒ r is added to Def*, since it is the ⪯-preferred one among the two triggered and consistent defaults p ⇒ r and p ⇒ q. In the second round only p ⇒ q is triggered and consistent. So we end up with Def* = {p ⇒ r, p ⇒ q}, and GreedyAcc terminates since the remaining default q ⇒ ¬r is inconsistent with the set of the already selected ones. This implies that K |∼ q for all |∼ ∈ {|∼∩PExt^gr, |∼∩AExt^gr, |∼∪Ext^gr}.

Algorithm 3 Greedy accumulation (general version)

1: procedure GreedyAcc(K)                                    ▷ K = ⟨As, Ad, Rs, Rd, Rm, ⪯⟩
2:   Def* ← ∅                                                ▷ init D-extension
3:   while Trig⊤K(Def*) \ Def* ≠ ∅ do
4:     (nondeterministically) choose d ∈ max⪯(Trig⊤K(Def*) \ Def*)
5:     Def* ← Def* ∪ {d}                                     ▷ update D-extension
6:   end while                                               ▷ no more triggered and consistent defaults
7:   return Def*                                             ▷ return D-extension
8: end procedure
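As an illustration, the following Python sketch instantiates GreedyAcc in a toy setting where all sentences are literals, consistency means that no literal occurs together with its negation, Con[ArgK(D)] is computed by forward chaining, and the (total) preference order is encoded by list position (most preferred first). The helper names are our own, not the Element's.

```python
def neg(lit):
    """The complement of a literal, e.g. 'p' <-> '~p'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def concl(strict, rules):
    """Con[Arg_K(D)]: forward-chain the chosen rules over the strict assumptions."""
    out, changed = set(strict), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= out and head not in out:
                out.add(head)
                changed = True
    return out

def consistent(strict, rules):
    c = concl(strict, rules)
    return all(neg(l) not in c for l in c)

def greedy_acc(strict, defaults):
    """defaults: (body, head) pairs, listed most preferred first."""
    chosen = []
    while True:
        c = concl(strict, chosen)
        cands = [d for d in defaults if d not in chosen
                 and d[0] <= c                           # triggered
                 and consistent(strict, chosen + [d])]   # consistent
        if not cands:
            return chosen                                # D-extension
        chosen.append(cands[0])                          # the preferred candidate

# The order puzzle (Example 28): (p => q) below (p => r) below (q => ~r)
defaults = [(frozenset({'q'}), '~r'),   # most preferred
            (frozenset({'p'}), 'r'),
            (frozenset({'p'}), 'q')]
ext = greedy_acc({'p'}, defaults)
print(sorted(concl({'p'}, ext)))  # ['p', 'q', 'r']: q is concluded
```

The run mirrors Example 28: q ⇒ ¬r is never triggered before p ⇒ r and p ⇒ q have been accumulated, and by then it is inconsistent with them.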

We now move to temperate accumulation, which is characterized by the algorithm TemAcc. Recall that the main difference from greedy accumulation is that, when building D-extensions, temperate accumulation also considers nontriggered defaults that are consistent with the already accumulated defeasible elements. The sets of D-, A-, and P-extensions of K (denoted by DExt^tem(K), AExt^tem(K), and PExt^tem(K)) and the consequence relations |∼∩AExt^tem, |∼∩PExt^tem, and |∼∪Ext^tem are defined in analogy to the greedy case.

Algorithm 4 Temperate accumulation (general version)

1: procedure TemAcc(K)                                       ▷ K = ⟨As, Ad, Rs, Rd, Rm, ⪯⟩
2:   Def* ← ∅                                                ▷ init D-extension
3:   while ConsK(Def*) \ Def* ≠ ∅ do
4:     (nondeterministically) choose d ∈ max⪯(ConsK(Def*) \ Def*)
5:     Def* ← Def* ∪ {d}                                     ▷ update D-extension
6:   end while                                               ▷ no more consistent defaults
7:   return Def*                                             ▷ return D-extension
8: end procedure

Example 29 (Example 28 cont.). We now apply TemAcc to K. There is again only one possible run: in the first round we choose (the nontriggered) q ⇒ ¬r, as it is ⪯-preferred over the other two defaults. In the second round we choose p ⇒ r, as it is preferred over p ⇒ q. This is when TemAcc terminates, since the only remaining default p ⇒ q is not consistent with {p ⇒ r, q ⇒ ¬r}. This implies that K |̸∼ q for all |∼ ∈ {|∼∩PExt^tem, |∼∩AExt^tem, |∼∪Ext^tem}.

This shows that unlike in the nonprioritized setting, for knowledge bases with preferences there may be D-extensions for temperate accumulation that do not correspond to D-extensions for greedy accumulation.
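The temperate variant differs from a greedy implementation only in its candidate test: consistency is required, but being triggered is not. The following self-contained Python sketch (a toy literal setting of our own: consistency means no literal occurs together with its negation, the preference order is the list order with the most preferred default first, and all names are illustrative) reproduces the outcome of Example 29.

```python
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def concl(strict, rules):
    """Con[Arg_K(D)] by forward chaining over (body, head) rules."""
    out, changed = set(strict), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= out and head not in out:
                out.add(head)
                changed = True
    return out

def consistent(strict, rules):
    c = concl(strict, rules)
    return all(neg(l) not in c for l in c)

def tem_acc(strict, defaults):
    """Temperate accumulation: nontriggered defaults may also be chosen."""
    chosen = []
    while True:
        cands = [d for d in defaults if d not in chosen
                 and consistent(strict, chosen + [d])]  # consistency only
        if not cands:
            return chosen
        chosen.append(cands[0])                         # the preferred candidate

# The order puzzle again: (p => q) below (p => r) below (q => ~r)
defaults = [(frozenset({'q'}), '~r'),   # most preferred
            (frozenset({'p'}), 'r'),
            (frozenset({'p'}), 'q')]
ext = tem_acc({'p'}, defaults)
print(sorted(concl({'p'}, ext)))  # ['p', 'r']: q is no longer concluded
```

The nontriggered q ⇒ ¬r is selected first, which later blocks p ⇒ q, exactly as in Example 29.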

10.2 Accumulation and Fixed Points

In this section we consider alternative characterizations of our two accumulation methods. Instead of using iterative algorithms such as TemAcc and GreedyAcc, we now describe these reasoning styles, that is, the D-extensions they characterize, as fixed points of specific operations Π : ℘(Def(K)) → ℘(Def(K)). The underlying idea is that the possible final products of the reasoning process of a rational agent can be characterized as equilibrium states based on the given knowledge base K. In what follows, we only consider knowledge bases without preferences.

Lemma 10.1. Let K be a knowledge base. Then, D ∈ maxcon(K) iff D = ConsK(D).

Proof. Let D ∈ maxcon(K). By Definition 5.3(i) and Definition 10.1, D ⊆ ConsK(D). If d ∈ ConsK(D), then d ∈ D by Definition 5.3(ii), and so ConsK(D) ⊆ D. The other direction is similar. □
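In a setting with only defeasible assumptions and no rules, Lemma 10.1 can be checked by brute force. The following Python sketch (our own simplification: the assumptions are literals, and a set is consistent iff it contains no literal together with its negation; the function names are illustrative) enumerates the maxicon sets and verifies that each is a fixed point of ConsK.

```python
from itertools import combinations

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def consistent(lits):
    return not any(neg(l) in lits for l in lits)

def maxcon(defaults):
    """All subset-maximal consistent subsets of the defeasible assumptions."""
    cons = [frozenset(c) for n in range(len(defaults) + 1)
            for c in combinations(sorted(defaults), n)
            if consistent(frozenset(c))]
    return [d for d in cons if not any(d < e for e in cons)]

def cons_k(defaults, d):
    """Cons_K(D) of Definition 10.1 for this assumption-only setting."""
    return frozenset(x for x in defaults if consistent(d | {x}))

Ad = frozenset({'p', '~p', 'q'})
for d in maxcon(Ad):
    # Lemma 10.1: D is in maxcon(K) iff D = Cons_K(D)
    print(sorted(d), cons_k(Ad, d) == d)
```

Here maxcon yields {p, q} and {~p, q}, and both are fixed points, while the non-maximal set {q} is not (Cons_K({q}) also contains p and ~p).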

Theorem 10.1. Let K be a knowledge base and D ⊆ Def(K).

  1. D is a D-extension generated by TemAcc iff D = ConsK(D).

  2. D is a D-extension generated by GreedyAcc iff D = Trig⊤K(D).

Proof. Item 1 follows with Proposition 5.1 and Lemma 10.1.

Consider Item 2. (⇒) Let D = D0 ∪ … ∪ Dn be produced by GreedyAcc such that Def* = Di in round i and {di+1} = Di+1 \ Di for 0 ≤ i < n.

"⊆". Let d ∈ D. So d = di+1 for some 0 ≤ i < n. We have to show that d ∈ Trig⊤K(D). Since d ∈ Trig⊤K(Di), d ∈ TrigK(D). Assume for a contradiction that d ∉ ConsK(D). So, there is a ⊆-minimal D′ ⊆ D such that D′ ∪ {d} is inconsistent in K. Let dj be the element in D′ with maximal index. If j > i+1, dj ∉ ConsK(Dj−1) ⊇ Trig⊤K(Dj−1). If j < i+1, di+1 ∉ ConsK(Di) ⊇ Trig⊤K(Di). Each case is a contradiction. So, d ∈ ConsK(D) and thus d ∈ Trig⊤K(D).

"⊇". Let d ∈ Def(K) \ D. By the guard of the while-loop (line 3), d ∉ Trig⊤K(D).

(⇐) Let now D = Trig⊤K(D). It can easily be seen that D can be enumerated as ⟨di⟩ (i = 1, …, n) in such a way that D0 = ∅, d1 ∈ Trig⊤K(D0), D1 = {d1}, di+1 ∈ Trig⊤K(Di) \ Di, and Di+1 = Di ∪ {di+1}. Moreover, there is a run of GreedyAcc in which each di is added to the scenario at round i for each i = 1, …, n. Note that the algorithm terminates after round n since Trig⊤K(D) \ D = ∅. □

An advantage of the characterization of D-extensions in terms of fixed points as in Theorem 10.1 or with maxicon-sets as in Proposition 5.1 is that the restriction to finite sets of defeasible information Def(K) in our knowledge bases can be lifted. The restriction was necessary to warrant the termination of GreedyAcc and TemAcc.
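The fixed-point characterization of Theorem 10.1(2) can likewise be checked by brute force on small, nonprioritized knowledge bases. The sketch below (our own toy encoding: literals, defaults as body-head pairs, forward chaining for Con[ArgK(D)]; all names are illustrative) enumerates every nondeterministic run of GreedyAcc on the unprioritized order puzzle and compares the resulting D-extensions with the fixed points of Trig⊤K.

```python
from itertools import combinations

def neg(l):
    return l[1:] if l.startswith('~') else '~' + l

def concl(strict, rules):
    out, changed = set(strict), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= out and head not in out:
                out.add(head)
                changed = True
    return out

def consistent(strict, rules):
    c = concl(strict, rules)
    return not any(neg(l) in c for l in c)

def trig_top(strict, defaults, d):
    """Trig_K^T(D): defaults triggered by D and consistent with D."""
    c = concl(strict, list(d))
    return frozenset(x for x in defaults
                     if x[0] <= c and consistent(strict, list(d) + [x]))

def greedy_runs(strict, defaults):
    """All D-extensions reachable by some nondeterministic run of GreedyAcc."""
    exts = set()
    def go(chosen):
        cands = trig_top(strict, defaults, chosen) - chosen
        if not cands:
            exts.add(chosen)
            return
        for d in cands:
            go(chosen | {d})
    go(frozenset())
    return exts

strict = {'p'}
defaults = [(frozenset({'p'}), 'q'), (frozenset({'p'}), 'r'),
            (frozenset({'q'}), '~r')]
fixed = {frozenset(c) for n in range(len(defaults) + 1)
         for c in combinations(defaults, n)
         if trig_top(strict, defaults, frozenset(c)) == frozenset(c)}
print(greedy_runs(strict, defaults) == fixed)  # True
```

Without preferences there are two D-extensions, {p ⇒ q, p ⇒ r} and {p ⇒ q, q ⇒ ¬r}, and these are exactly the fixed points of Trig⊤K.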

10.3 More on Nonmonotonic Reasoning Properties

In this section we take another, more detailed look at abstract properties of nonmonotonic consequence relations (see Section 2.2). To simplify things, we will study them in a nonprioritized setting.

10.3.1 Knowledge Bases and Abstract Properties of Consequence Relations

Now that we have a better understanding of knowledge bases, let us have another look at the properties introduced in Section 2.2. Recall that consequence relations are used to study the question of what follows from a given defeasible knowledge base. An nmL gives an answer to this question on the basis of the coherent units of information provided by its underlying model of knowledge representation.Footnote 41 It gives rise to nonmonotonic consequence relations |∼ that hold between knowledge bases (in its associated class KnmL) and sentences in its object language L. In proof-theoretic approaches consequences will be determined by the given extensions of the knowledge base, while in semantic approaches they will be based on (typically a selection of) its models.

In the remainder of the Element it will be our task to explain different central methods of knowledge representation and consequence underlying NMLs. Before doing so, we have to comment on what the introduction of knowledge bases means for the abstract study of nonmonotonic consequence presented in Section 2.2. There, the left-hand side of | merely consisted of sets of sentences, but defeasible knowledge bases typically come with more structure. That means that the reasoning principles discussed in Section 2.2 need to be disambiguated. For example, one may distinguish between a strict and a defeasible form of cautious (or rational) monotonicity (see Fig. 22). Where

  • ⟨As, Ad, Rs, Rd, Rm⟩ ⊕s A =df ⟨As ∪ {A}, Ad, Rs, Rd, Rm⟩,

  • ⟨As, Ad, Rs, Rd, Rm⟩ ⊕d A =df ⟨As, Ad ∪ {A}, Rs, Rd, Rm⟩,

A diagram showing the following sequence: 1. Assumptions include strict A subscript s and defeasible A subscript d. 2. Inference Engine includes strict R subscript s and defeasible R subscript d. 3. Conclusions. See long description.

Figure 22 Versions of cautious monotonicity with defeasible knowledge bases

Figure 22Long description

Conclusions lead to strict A subscript s via C M subscript s. Conclusions lead to defeasible A subscript d via C M subscript d. Inference engine loops back to inference engine via meta R subscript m.

and where i{s,d}, we define:

CM_i(|∼): If K |∼ A and K |∼ B, then K ⊕i B |∼ A.
CT_i(|∼): If K |∼ A and K ⊕i A |∼ B, then K |∼ B.
C_i(|∼): CM_i(|∼) and CT_i(|∼) hold.
M_i(|∼): If K |∼ A, then K ⊕i B |∼ A.
OR_i(|∼): If K ⊕i A |∼ C and K ⊕i B |∼ C, then K ⊕i (A ∨ B) |∼ C.
LLE_i(|∼): If A ∈ CnRs({B}), B ∈ CnRs({A}), and K ⊕i A |∼ C, then K ⊕i B |∼ C.
Ref(|∼): K ⊕s A |∼ A.
RW(|∼): If K |∼ A and B ∈ CnRs({A}), then K |∼ B.

Since it seems undesirable to expect defeasible assumptions to be derivable in just any given context, we did not include reflexivity under the addition of defeasible assumptions (K ⊕d A |∼ A). Similarly, we stated RW and LLE only in the less demanding version relative to strict rules (as opposed to defeasible rules).

Definition 10.2. Let i ∈ {d, s}. A nonmonotonic consequence relation |∼ is i-cumulative if it satisfies RW(|∼), LLE_i(|∼), Ref(|∼), and C_i(|∼). It is i-preferential if it additionally satisfies OR_i(|∼).

10.3.2 Nonmonotonic Reasoning Properties and Extensions

So far, we have discussed cumulativity and related properties in the context of nonmonotonic consequence relations. We now consider these and similar properties from the perspective of extensions. The shift in perspective is well-motivated since, after all, nonmonotonic consequence is determined by the given extensions (see Table 3). In view of this, nonmonotonic reasoning properties should have counterparts from a perspective more focused on knowledge representation. E.g., where cautious monotonicity and transitivity concern the robustness of the consequence set under the addition of consequences to the knowledge base, we should expect a similar robustness of the set of extensions.

In this section we show that many metatheoretic properties hold for both accumulation methods if the underlying notion of argument satisfies some basic requirements.

Given a knowledge base K = ⟨As, Ad, Rs, Rd, Rm⟩, a sentence A, and a set D ⊆ Def(K), we let D ⊕d A =df D ∪ {A}, D ⊕s A =df D, D ⊖s^K A =df D, and

D ⊖d^K A =df D \ {A} if A ∉ Ad, and D else.

Definition 10.3. Let nmL be an NML based on consistent accumulation with an associated class of knowledge bases KnmL and let i ∈ {s, d}. We define the following properties for nmL. For all K ∈ KnmL and all sentences A, if K |∼∩PExt A, then

  • CM_i(PExt) holds, if E ∈ PExt(K ⊕i A) implies E ∈ PExt(K).

  • CM_i(DExt) holds, if D ∈ DExt(K ⊕i A) implies D ⊖i^K A ∈ DExt(K).

  • CT_i(PExt) holds, if E ∈ PExt(K) implies E ∈ PExt(K ⊕i A).

  • CT_i(DExt) holds, if D ∈ DExt(K) implies D ⊕i A ∈ DExt(K ⊕i A).

Moreover,

  • C i(PExt) holds, if CT i(PExt) and CM i(PExt) hold.

  • C i(DExt) holds, if CT i(DExt) and CM i(DExt) hold.

These notions are related as in Fig. 23 (see Theorem 10.2) for D-extensions induced by greedy or temperate accumulation and for any underlying notion of argument, as long as it fulfills the following requirements.

Content of image described in text.

Figure 23 Relations between extensional and consequence-based notions of cumulativity, cautious transitivity and monotonicity (where i{s,d}).

(arg-trans) Let i ∈ {d, s} and D ⊆ Def(K). If there is an a ∈ ArgK(D) with Con(a) = A, then for all b ∈ Arg_{K⊕iA}(D ⊕i A) there is a c ∈ ArgK(D) with Con(b) = Con(c).

The criterion states that adding a conclusion A ∈ Con[ArgK(D)] to K and D does not generate new conclusions: Con[Arg_{K⊕iA}(D ⊕i A)] ⊆ Con[ArgK(D)].

(arg-mono) Let i ∈ {d, s} and D ⊆ Def(K). We have ArgK(D) ⊆ Arg_{K⊕iA}(D).

The criterion expresses that adding assumptions to a knowledge base does not result in the loss of arguments.

(arg) (arg-trans) and (arg-mono).

Since by the definition of ArgK(·) we have ArgK(D) ⊆ ArgK(D ∪ D′) for any D, D′ ⊆ Def(K), it follows by (arg-mono) that ArgK(D) ⊆ Arg_{K⊕iA}(D ⊕i A). Therefore, if (arg) holds and A ∈ Con[ArgK(D)], then Con[Arg_{K⊕iA}(D ⊕i A)] = Con[ArgK(D)].

Lemma 10.2. Definitions 5.1 and 9.1 satisfy (arg).

Proof. In the case of Definition 9.1 this follows by the monotonicity and the transitivity of L. (arg-mono) follows directly from Definition 5.1. For (arg-trans), let b ∈ Arg_{K⊕iA}(D ⊕i A), where A = Con(a) for some a ∈ ArgK(D). Let c be the result of replacing every subargument A in b by a. Clearly c ∈ ArgK(D) and Con(c) = Con(b). □

CT_i(DExt) and CM_i(DExt) (highlighted in Fig. 23) have a central place. Instead of showing the corresponding properties CT_i(|∼) and CM_i(|∼) for the nonmonotonic consequence relations directly, one can show the corresponding extensional principles.Footnote 42

Theorem 10.2.  Given (arg), the logical dependencies of Fig. 23 hold for both accumulation methods.

Moreover, both accumulation methods satisfy CT i(DExt) if (arg) holds.

Proposition 10.1. Let i{s,d}. Given (arg), CT i(DExt) holds for both accumulation methods.

Also LLE and RW hold given some intuitive requirements on the underlying notion of argument.

(arg-re) Let i ∈ {d, s}. If A ∈ CnRs(B) and B ∈ CnRs(A), then for every a ∈ Arg_{K⊕iA} there is a b ∈ Arg_{K⊕iB} with Con(a) = Con(b) and

Def(b) = Def(a), if A ∉ Ad(a) or i = s; Def(b) = (Def(a) \ {A}) ∪ {B}, else.

The criterion expresses that if assumptions in the knowledge base are replaced with equivalent ones, we can still conclude the same sentences.

(arg-strict) For all A ∈ As, (i) A ∈ ArgK(∅), and (ii) for all A1, …, An → B ∈ Rs and all D ⊆ Def(K), if A1, …, An ∈ Con[ArgK(D)], then B ∈ Con[ArgK(D)].

The criterion expresses that every strict assumption gives rise to an argument and arguments can be extended by strict rules.

Lemma 10.3. Definitions 5.1 and 9.1 satisfy (arg-re), and (arg-strict).

Proof. Consider Definition 5.1. (arg-strict) follows trivially. For (arg-re), let A ∈ CnRs(B) and B ∈ CnRs(A). So, there is a c ∈ Arg_{K⊕iB}(∅) of the form B → A. Let b be the result of replacing each subargument A in a by c. Then a and b satisfy the requirements of (arg-re). The proof for Definition 9.1 is similar, making use of the transitivity of L. □

Proposition 10.2. Let i ∈ {s, d}, τ ∈ {tem, gr}, and |∼ ∈ {|∼∩AExt^τ, |∼∩PExt^τ}. If (arg-re) holds, then LLE_i(|∼) holds.

Proposition 10.3. Let τ ∈ {tem, gr} and |∼ ∈ {|∼∩AExt^τ, |∼∩PExt^τ, |∼∪Ext^τ}. If (arg-strict) holds, then Ref(|∼) and RW(|∼) hold.

11 Temperate Accumulation: Properties and Some Concrete Systems

In this section we study temperate accumulation in more detail. We show that it gives rise to preferential consequence relations (Section 11.1), if some basic conditions are met. Moreover, by “naming” default rules the structure of knowledge bases can be simplified (Section 11.2.1). Temperate accumulation can be characterized in terms of formal argumentation (Section 11.2.2).

Finally, we present two families of systems based on temperate accumulation: reasoning with maxicon-sets (Section 11.3.1) and input–output logics (Section 11.3.2) and apply the results from Section 11.1 to them.

11.1 Cumulativity and Preferentiality

The temperate accumulation method often yields cumulative or even preferential consequence relations. Table 7 gives an overview for the following two classes of knowledge bases:

  • the “universal” class KΩ containing all knowledge bases of the form ⟨As, Ad, Rs, Rd, Rm⟩;

  • the class KAd containing all knowledge bases of the form ⟨As, Ad, RL⟩ for a Tarski logic L. In this context we suppose that arguments are defined by Definition 9.1 and fulfill the following two properties:

    (arg-ex) L is explosive in the sense that a set of sentences is inconsistent iff its consequence set is trivial; and

    (arg-or) 𝒜 ∪ {A ∨ B} ⊢L C1 ∨ C2 iff 𝒜 ∪ {A} ⊢L C1 and 𝒜 ∪ {B} ⊢L C2.Footnote 43

Table 7Two classes of knowledge bases and the properties of the associated consequence relations with the notion of argument from Definition 5.1.
A table comparing two knowledge bases across i-cumulativity and i-preferentiality using four consequence relation types, each with tick marks and references. See long description.
Table 7Long description

The table consists of five columns. The first column lists two knowledge bases: K subscript omega and K subscript A d. The next four columns are grouped under two headers: i-cumulativity and i-preferentiality. Under i-cumulativity: Column 2: Turnstile superscript t e m intersection P-extension. Column 3: Turnstile superscript t e m intersection A-extension. Under i-preferentiality: Column 4: Turnstile superscript t e m intersection P-extension. Column 5: Turnstile superscript t e m intersection A-extension. Row 1: K subscript omega, tick in all four consequence relation columns; each entry references Corollary Eleven point One. Row 2: K subscript A d, tick in the first three consequence relation columns, referencing Corollary Eleven point One; the fourth column also has a tick, referencing Theorem Eleven point One.

As the reader may expect, the results in this section depend also on the underlying notion of argument construction (see Fig. 24 for an overview). In the following we show that any NML based on temperate accumulation and on the argument construction in Definition 5.1, or another definition satisfying (arg-re), (arg-strict), and (arg), satisfies C_i(DExt) (for i ∈ {s, d}) and is therefore cumulative, that is, C_i(|∼) holds for |∼ ∈ {|∼∩AExt^tem, |∼∩PExt^tem}.

Content of image described in text.

Figure 24 Nonmonotonic reasoning properties for temperate accumulation, where i ∈ {s, d}, |∼ ∈ {|∼∩AExt^tem, |∼∩PExt^tem, |∼∪Ext^tem}, and |∼′ ∈ {|∼∩AExt^tem, |∼∩PExt^tem}.

Proposition 11.1.

Let i ∈ {s, d}. Given (arg), C_i(DExt) holds for KΩ.

With Theorem 10.2 and Propositions 10.1 to 10.3 we get:

Corollary 11.1. Let i ∈ {s, d} and |∼ ∈ {|∼∩AExt^tem, |∼∩PExt^tem}. Given (arg), (arg-re), and (arg-strict), |∼ is i-cumulative for KΩ.

In the presence of defeasible rules, OR_i(|∼) does not hold in general.

Example 30. Let |∼ ∈ {|∼∩PExt, |∼∩AExt} and K = ⟨As, RCL, Rd⟩ with As = ∅ and Rd = {p ⇒ s, q ⇒ s}. Clearly, K ⊕s (p ∨ q) |̸∼ s, while K ⊕s p |∼ s and K ⊕s q |∼ s.

There is good news, however: for knowledge bases in KAd, |∼∩PExt^tem is i-preferential for i ∈ {s, d}.

Theorem 11.1. Let i ∈ {s,d}. |∼^tem_∩PExt is i-preferential for K_Ad.

Proof. In view of Corollary 11.1 and Lemmas 10.3 and 10.2 we only have to show OR(|∼), where |∼ = |∼^tem_∩PExt. Suppose K⊕_i A |∼ C and K⊕_i B |∼ C. We show the case i = s. Suppose D ∈ DExt(K⊕_s(A∨B)) and hence, by Theorem 10.1, D = Cons_{K⊕_s(A∨B)}(D). If D is inconsistent in K⊕_s A, then C ∈ Con[Arg_{K⊕_sA}(D)] = Cn_L(As ∪ D ∪ {A}) by (arg-ex). Else, assume for a contradiction that there is a D′ with D ⊊ D′ ⊆ Ad that is consistent in K⊕_s A. So, Cn_L(As ∪ D′ ∪ {A}) is nontrivial and by (arg-or) so is Cn_L(As ∪ D′ ∪ {A∨B}). So D′ ⊆ Cons_{K⊕_s(A∨B)}(D), which is a contradiction. So, D = Cons_{K⊕_sA}(D) and hence, by Theorem 10.1, D ∈ DExt(K⊕_sA). Thus, C ∈ Con[Arg_{K⊕_sA}(D)] = Cn_L(As ∪ D ∪ {A}), since K⊕_sA |∼ C. So, in any case C ∈ Cn_L(As ∪ D ∪ {A}).

For an analogous reason, C ∈ Cn_L(As ∪ D ∪ {B}). By (arg-or), C ∈ Cn_L(As ∪ D ∪ {A∨B}) = Con[Arg_{K⊕_s(A∨B)}(D)]. Hence, K⊕_s(A∨B) |∼ C. □

The preceding result does not consider |∼^tem_∩AExt. We will show in Section 11.3.1 that OR(|∼^tem_∩AExt) does not hold even for L = CL.

As a last result in this section we show that |∼^tem_∪Ext is monotonic with respect to d-expansions.

Proposition 11.2. M_d(|∼^tem_∪Ext) (and so also CM_d(|∼^tem_∪Ext) and RM_d(|∼^tem_∪Ext)) hold for K_Ω.

Proof. Let |∼ = |∼^tem_∪Ext and suppose K |∼ A. Thus, there is a D ∈ DExt(K) for which there is an a ∈ Arg_K(D) with Con(a) = A. We have to show that K⊕_d B |∼ A, where B is an arbitrary sentence. We have Arg_K(D) = Arg_{K⊕_dB}(D). So, D is consistent in K⊕_d B. Thus, there is a maxicon set D′ ⊆ Def(K⊕_dB) for which D ⊆ D′. We have a ∈ Arg_{K⊕_dB}(D′). Thus, K⊕_d B |∼ A. □

11.2 Alternative Characterizations

In this section we present two alternative characterizations of temperate accumulation. First, in Section 11.2.1 we show that in temperate accumulation defeasible rules are dispensable: a given knowledge base featuring defeasible rules can be translated into one without them, in such a way that extensions and consequences are preserved. In Section 11.2.2 we show that temperate accumulation can be translated into formal argumentation.

11.2.1 Naming Defaults in Temperate Accumulation

We now show that, in the context of NMLs based on temperate accumulation, every knowledge base of the form K = ⟨As, Ad, Rs, Rd⟩ can be translated into a knowledge base of the form K′ = ⟨As, Ad′, Rs′⟩ which gives rise to the same D- and P-extensions (Theorem 11.2). The idea is to refer to (or "name") the defaults in Rd in the object language, to add a strict modus ponens–like rule, and to add a rule expressing that a default is defeated in case its antecedents hold but its conclusion is false. This implies that genuinely defeasible rules can be "simulated" by strict rules in systems of temperate accumulation.

Suppose in the following that nmL is an NML based on a language L with a class of associated knowledge bases K_nmL of the form of K. We assume that the notion of inconsistency underlying nmL satisfies, for any set of sentences S, the sufficient condition: A, ¬A ∈ S implies that S is inconsistent. Our translated knowledge bases K′ make use of an enriched language L′: every sentence in L is a sentence in L′; for every r = (A1,…,An ⇒ B) ∈ Rd, (A1,…,An ⇒ B) and ¬(A1,…,An ⇒ B) are sentences in L′; nothing else is a sentence in L′. Note that ⇒ is an object-level symbol in L′ but not in L. We write sent_L [resp. sent_L′] for the set of all sentences in L [resp. in L′].

Definition 11.1. Let the translation of a knowledge base K = ⟨As, Ad, Rs, Rd⟩ to K′ = ⟨As, Ad′, Rs′⟩ be given by:

Ad′ = Ad ∪ Rd and Rs′ = Rs ∪ Rs^mp ∪ Rs^cp, where
Rs^mp = {A1,…,An, r → B | r = (A1,…,An ⇒ B) ∈ Rd},
Rs^cp = {A1,…,An, ¬B → ¬r | r = (A1,…,An ⇒ B) ∈ Rd}.

Note that Def(K′) = Ad ∪ Rd = Def(K).

Example 31. Recall the knowledge base from Example 9, K = ⟨As, Ad, Rd, Rs⟩ with As = {p}, Ad = ∅, Rd = {r1: p⇒q, r2: p⇒r, r3: q⇒¬r} and Rs = {r→s, ¬r→s}. We translate it to K′ = ⟨As, Ad′, Rs′⟩ with

  • Ad′ = {r1, r2, r3} and

  • Rs′ = Rs ∪ {p,r1→q; p,r2→r; q,r3→¬r} ∪ {p,¬q→¬r1; p,¬r→¬r2; q,¬¬r→¬r3}.
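The bookkeeping of Definition 11.1 is easy to mechanize. The following Python sketch uses our own encoding, not notation from the text: a defeasible rule is a pair of a body tuple and a head, and each rule serves as its own name; the function name `translate` is likewise illustrative. It builds Ad′, Rs^mp, and Rs^cp for the knowledge base of Example 31.

```python
def neg(f):
    # ¬ as a one-place constructor on our tuple encoding of formulas
    return ("not", f)

def translate(As, Ad, Rs, Rd):
    """Naming translation of Definition 11.1 (sketch).

    A defeasible rule r = (body, head) becomes a defeasible
    assumption (its own name), plus two strict rules:
      body, r -> head        (modus ponens for r)
      body, ¬head -> ¬r      (r is defeated)
    """
    Ad_named = set(Ad) | set(Rd)                    # defaults become assumptions
    Rs_mp = {(body + (r,), head)
             for r in Rd for body, head in [r]}     # body, r |- head
    Rs_cp = {(body + (neg(head),), neg(r))
             for r in Rd for body, head in [r]}     # body, ¬head |- ¬r
    return set(As), Ad_named, set(Rs) | Rs_mp | Rs_cp

# Example 31: As = {p}, Rd = {r1: p => q, r2: p => r, r3: q => ¬r}
r1 = (("p",), "q")
r2 = (("p",), "r")
r3 = (("q",), neg("r"))
As2, Ad2, Rs2 = translate({"p"}, set(), set(), {r1, r2, r3})
```

Here `Ad2` collects the three rule names and `Rs2` contains the six strict rules listed in Example 31: three modus ponens rules and three defeat rules.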

Theorem 11.2. Let nmL be based on temperate accumulation, let K ∈ K_nmL be of the form ⟨As, Ad, Rs, Rd⟩, and let K′ be the translation defined in Definition 11.1. Then, DExt(K) = DExt(K′) and PExt(K) = PExt(K′).

11.2.2 Temperate Accumulation as a Form of Argumentation

In the following we give an elegant argumentative characterization of NMLs based on temperate accumulation and on knowledge bases of the type K = ⟨As, Ad, Rs⟩.Footnote 44 We work under the assumption that (a) Arg_K and Arg_K(·) are defined as in Definition 5.1 and (b) the inconsistency of a set of defeasible assumptions A ⊆ Ad can be argumentatively expressed by

(⋆) A is inconsistent in K iff for every A ∈ A there is an a ∈ Arg_K(A∖{A}) that concludes that the assumption A is false, that is, Con(a) = ¬A.

(⋆) holds if Rs = R_CL or, more generally, if Rs = R_L for some logic L which has the property that S is inconsistent in L iff for all A ∈ S, S∖{A} ⊢_L ¬A.

Definition 11.2. We define the argumentation framework AF_K = ⟨Arg_K, ↝⟩, where a ↝ b for a, b ∈ Arg_K iff Con(a) = ¬B for some B ∈ Ad(b). Where X ∈ {A, P}, we let, moreover, |∼^stb_∩XExt be the consequence relation induced by the X-extensions and the stable argumentation semantics (see Definition 5.2), and stable(K) be the set of stable A-extensions of K.

Example 32. We consider K = ⟨∅, {p∧q, ¬p∧q, s}, R_CL⟩. An excerpt from the argumentation framework AF_K is illustrated in Fig. 25. We note that K |∼^tem_∩PExt q and K |∼^stb_∩PExt q.

We have, on the one hand, two D-extensions according to temperate accumulation, X1 = {p∧q, s} and X2 = {¬p∧q, s}, with the corresponding A-extensions Arg_K(X1) and Arg_K(X2) and the P-extensions Cn_CL(X1) and Cn_CL(X2). On the other hand, we have two stable A-extensions of AF_K (highlighted in Fig. 25), namely Arg_K(X1) and Arg_K(X2), with the corresponding P-extensions Cn_CL(X1) and Cn_CL(X2).


Figure 25 The argumentation framework for the knowledge base of Example 32, based on the arguments to the right. The rectangular node represents a class of arguments based on the inconsistent assumption set {p∧q, ¬p∧q}. An outgoing [resp. ingoing] arrow symbolizes an attack from [resp. to] some argument in the class.
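On small frameworks the stable semantics can be checked by brute force. The Python sketch below is an abstract mirror of Fig. 25 with our own hypothetical labels (not from the text): `a` for the argument class based on {p∧q}, `b` for {¬p∧q}, `c` for {s}, and `d` for the class of arguments built on the inconsistent set {p∧q, ¬p∧q}. It enumerates the conflict-free sets that attack every outside argument.

```python
from itertools import combinations

def stable_extensions(args, att):
    """All stable extensions of the AF (args, att), by brute force:
    conflict-free sets that attack every argument outside them."""
    result = []
    for k in range(len(args) + 1):
        for cand in combinations(sorted(args), k):
            S = set(cand)
            conflict_free = not any((x, y) in att for x in S for y in S)
            attacks_rest = all(any((x, y) in att for x in S)
                               for y in set(args) - S)
            if conflict_free and attacks_rest:
                result.append(S)
    return result

# Toy mirror of Example 32: a and b attack each other and the
# "explosive" class d; d attacks everything (itself included).
args = {"a", "b", "c", "d"}
att = {("a", "b"), ("b", "a"), ("a", "d"), ("b", "d"),
       ("d", "a"), ("d", "b"), ("d", "c"), ("d", "d")}
exts = stable_extensions(args, att)
```

The two stable extensions are {a, c} and {b, c}, matching the two D-extensions X1 and X2 of Example 32.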

The following theorem shows that the observed correspondences are not coincidental.

Theorem 11.3. Let nmL be an NML based on temperate accumulation for which (⋆) holds, and let K = ⟨As, Ad, Rs⟩ ∈ K_nmL be a knowledge base.

  1. If D ∈ DExt^tem(K), then Arg_K(D) ∈ stable(K).

  2. If X ∈ stable(K), there is a D ∈ DExt^tem(K) such that X = Arg_K(D).

Proof. For Item 1 suppose D ∈ DExt^tem(K). By Proposition 5.1, D ∈ maxcon(K). Consider a, b ∈ Arg_K such that a ↝ b and a ∈ Arg_K(D). By (⋆) and the consistency of D in K, b ∉ Arg_K(D). Thus, Arg_K(D) is conflict-free.

Let now a ∈ Arg_K ∖ Arg_K(D). So, there is an A ∈ Ad(a) ∖ D. Since D ∈ maxcon(K), D ∪ {A} is inconsistent in K, and by (⋆) there is a b ∈ Arg_K(D) with Con(b) = ¬A. Thus, Arg_K(D) ∈ stable(K).

For Item 2 let X ∈ stable(AF_K) and (†) D = ⋃_{a∈X} Ad(a). Clearly, X ⊆ Arg_K(D). Assume for a contradiction that there is an a ∈ Arg_K(D) ∖ X. By stability, there is a b ∈ X such that b ↝ a, and so Con(b) = ¬A for some A ∈ Ad(a). Since A ∈ D and (†), there is a c ∈ X for which A ∈ Ad(c), and so b ↝ c, in contradiction to the conflict-freeness of X. Thus, X = Arg_K(D).

By Proposition 5.1, it remains to show that D ∈ maxcon(K). In view of (⋆) and the conflict-freeness of X, D is consistent in K. Suppose A ∈ Ad is such that D ∪ {A} is consistent. If A ∉ D, then a = ⟨A⟩ ∈ Arg_K ∖ X and, by stability, there is a b ∈ X such that b ↝ a and therefore Con(b) = ¬A. But then, by (⋆), D ∪ {A} is inconsistent in K. So, A ∈ D and therefore D ∈ maxcon(K). □

11.3 Two Families of NMLs from the Literature

In this section we will introduce two well-known families of NMLs, both based on the idea of forming maxicon sets of defeasible information from the given knowledge base.

11.3.1 Reasoning with Maxicon Sets of Sentences

A time-honored family of NMLs has been proposed by Rescher and Manor (1970). These NMLs model reasoning scenarios in which an agent is confronted with reliable but not infallible information (e.g., resulting from testimonies, weather reports, and so on) that may give rise to contradictions. Such information is encoded by sets of defeasible assumptions. Clearly, due to the possibility of logical explosion, classical logic cannot be applied to such sets, at least not naively. The basic idea behind Rescher and Manor's approach is to form (⊆-maximal) consistent sets of defeasible assumptions and to reason on their basis. In our terminology these maxicon sets of defeasible assumptions form D-extensions, and their classical closures are P-extensions induced by temperate accumulation. We obtain the three types of consequences that have been introduced in Definition 5.2.

While Rescher and Manor considered knowledge bases of the form ⟨∅, Ad, R_CL⟩, Makinson's system of Default Assumptions (Makinson (2005)) also considers strict assumptions and so generalizes the considered class of knowledge bases to those of the form ⟨As, Ad, R_CL⟩.Footnote 45 Of course, one may consider other Tarski-logics L instead of classical logic. Let the class K_mcon consist of all knowledge bases of the form ⟨As, Ad, R_L⟩. We let:Footnote 46

  1. K |∼^mcon_∩PExt A iff A ∈ Cn_L(S ∪ As) for every S ∈ maxcon(K).

  2. K |∼^mcon_∩AExt A iff A ∈ Cn_L(⋂maxcon(K) ∪ As).

  3. K |∼^mcon_∪Ext A iff there is an S ∈ maxcon(K) such that A ∈ Cn_L(S ∪ As).

In what follows we let arguments be defined as in Definition 9.1.

Example 33. Consider the knowledge base K = ⟨As, Ad, R_CL⟩ where As = {¬u} and Ad = {p∧q, ¬p∧t, s, u}. We have maxcon(K) = {X1, X2} where X1 = {p∧q, s} and X2 = {¬p∧t, s}. Note that the defeasible assumption u conflicts with the strict assumption ¬u.

We first observe that q∨t is a floating conclusion in view of the conflicting arguments ({p∧q}, q∨t) ∈ Arg(X1) and ({¬p∧t}, q∨t) ∈ Arg(X2). Indeed, K |∼^mcon_∩PExt q∨t while K |≁^mcon_∩AExt q∨t.
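Since K_mcon only involves a base logic and two sets of assumptions, the maxicon consequence relations can be prototyped directly. The sketch below recomputes Example 33 under our own encoding (not notation from the text): formulas are nested tuples, classical consequence is decided by brute-force truth tables, and all function names are illustrative.

```python
from itertools import combinations, product

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def holds(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    return holds(f[1], v) or holds(f[2], v)   # "or"

def consistent(S):
    """Classical satisfiability by enumerating all valuations."""
    S = list(S)
    if not S:
        return True
    ats = sorted(set().union(*map(atoms, S)))
    return any(all(holds(f, dict(zip(ats, row))) for f in S)
               for row in product([False, True], repeat=len(ats)))

def entails(S, A):
    return not consistent(list(S) + [("not", A)])

def maxcon(As, Ad):
    """Subset-maximal D ⊆ Ad such that As ∪ D is consistent."""
    cands = [set(D) for k in range(len(Ad) + 1)
             for D in combinations(list(Ad), k)
             if consistent(list(As) + list(D))]
    return [D for D in cands if not any(D < E for E in cands)]

def skeptical(As, Ad, A):   # A follows from every maxicon set
    return all(entails(D | set(As), A) for D in maxcon(As, Ad))

def credulous(As, Ad, A):   # A follows from some maxicon set
    return any(entails(D | set(As), A) for D in maxcon(As, Ad))

# Example 33: As = {¬u}, Ad = {p∧q, ¬p∧t, s, u}
As = {("not", "u")}
Ad = {("and", "p", "q"), ("and", ("not", "p"), "t"), "s", "u"}
```

Running this reproduces the example: there are exactly two maxicon sets, the floating conclusion q∨t is skeptically entailed, and u is not even credulously entailed.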

In view of Proposition 5.1, the three consequence relations |∼^mcon_∩PExt, |∼^mcon_∩AExt, and |∼^mcon_∪Ext are identical to |∼^tem_∩PExt, |∼^tem_∩AExt, and |∼^tem_∪Ext on the class K_mcon. Therefore, the results from Section 11.1 are applicable (Table 8).

Table 8 Overview of properties of the consequence relations based on maxicon sets. All positive results follow from the general results for NMLs based on temperate accumulation in Section 11.1 (see also Corollary 11.2 below).

                  M_d  M_s  CM_d  CM_s  CT_d  CT_s  RM_d  RM_s  OR_d  OR_s
|∼^mcon_∩PExt      –    –    ✓     ✓     ✓     ✓     –     –     ✓     ✓
|∼^mcon_∩AExt      –    –    ✓     ✓     ✓     ✓     –     –     –     –
|∼^mcon_∪Ext       ✓    –    –     –     –     –     ✓     ✓     –     –

Proposition 11.3. Let K ∈ K_mcon. Then, (i) K |∼^tem_∩PExt A iff K |∼^mcon_∩PExt A, (ii) K |∼^tem_∩AExt A iff K |∼^mcon_∩AExt A, and (iii) K |∼^tem_∪Ext A iff K |∼^mcon_∪Ext A.

Proof. We show case (ii); the others are analogous and left to the reader. K |∼^tem_∩AExt A, iff, there is an a ∈ ⋂{Arg_K(D) | D ∈ DExt(K)} with Con(a) = A, iff, A ∈ Cn_L(⋂{D | D ∈ DExt(K)} ∪ As), iff [by Proposition 5.1], A ∈ Cn_L(⋂{D | D ∈ maxcon(K)} ∪ As), iff, K |∼^mcon_∩AExt A. □

In view of Corollary 11.1, Theorem 11.1, and Lemmas 10.2 and 10.3 we therefore get:

Corollary 11.2. Let i ∈ {s,d}. |∼^mcon_∩AExt is i-cumulative. If (arg-ex) and (arg-or) hold, |∼^mcon_∩PExt is i-preferential.

Example 34. Where i ∈ {s,d}, in Table 9 we list counterexamples to (OR) and therefore to the i-preferentiality of |∼^mcon_∩AExt and |∼^mcon_∪Ext. In Table 10 we find counterexamples to RM_i(|∼) for |∼ ∈ {|∼^mcon_∩PExt, |∼^mcon_∩AExt}.

Table 9 Counterexamples to OR_i(|∼). Where j ∈ {1,2,3}, let Kj = ⟨∅, Aj, R_CL⟩, A1 = {¬p∧r, ¬q∧r}, A2 = {¬p, ¬q, ¬p∧r, ¬q∧r}, A3 = A3^u ∪ A3^¬u, A3^u = {u∧(p→r)}, A3^¬u = {¬u∧(q→r)}, and A2^A = A2 ∪ {A}.

i = s, K = K1:                K∗ = K⊕_i p          K∗ = K⊕_i q          K∗ = K⊕_i (p∨q)
  maxcon(K∗)                  {¬q∧r}               {¬p∧r}               {¬q∧r}, {¬p∧r}
  ⋂DExt(K∗)                   {¬q∧r}               {¬p∧r}               ∅
  K∗ |∼^mcon_∩AExt r?         ✓                    ✓                    ✗

i = d, K = K2:
  maxcon(K∗)                  A2^p∖{¬p,¬p∧r}, A2   A2^q∖{¬q,¬q∧r}, A2   A2^{p∨q}∖{¬p,¬p∧r}, A2^{p∨q}∖{¬q,¬q∧r}, A2
  ⋂DExt(K∗)                   {¬q, ¬q∧r}           {¬p, ¬p∧r}           ∅
  K∗ |∼^mcon_∩AExt r?         ✓                    ✓                    ✗

i ∈ {s,d}, K = K3:
  maxcon(K∗)                  A3^u⊕_i p, A3^¬u⊕_i p   A3^u⊕_i q, A3^¬u⊕_i q   A3^u⊕_i (p∨q), A3^¬u⊕_i (p∨q)
  K∗ |∼^mcon_∪Ext r?          ✓                    ✓                    ✗

Table 10 Counterexamples to RM_i(|∼) where K = ⟨∅, Ad, R_CL⟩ with Ad = {p, ¬p, q∧r}. We have: K |∼ r and K |≁ ¬¬(p∧r), while K⊕¬(p∧r) |≁ r.

                               K                    K⊕_s ¬(p∧r)      K⊕_d ¬(p∧r)
  maxcon(K∗)                   {p,q∧r}, {¬p,q∧r}    {p}, {¬p,q∧r}    {p,¬(p∧r)}, {p,q∧r}, {¬p,q∧r,¬(p∧r)}
  ⋂DExt(K∗)                    {q∧r}                ∅                ∅
  K∗ |∼^mcon_∩AExt r?          ✓                    ✗                ✗
  K∗ |≁^mcon_∩AExt ¬¬(p∧r)?    ✓
  K∗ |∼^mcon_∩PExt r?          ✓                    ✗                ✗
  K∗ |≁^mcon_∩PExt ¬¬(p∧r)?    ✓

We end this section with two simple counterexamples concerning the cautious monotonicity and cumulative transitivity of |∼^mcon_∪Ext, and a positive result concerning its rational monotonicity.

Example 35. Let |∼ = |∼^mcon_∪Ext. We first let K1 = ⟨∅, {p, ¬p}, R_CL⟩. Then, K1 |∼ p and K1 |∼ ¬p, but K1⊕_s p |≁ ¬p. Note for this that {¬p} ∈ maxcon(K1) ∖ maxcon(K1⊕_s p). This shows that CM_s(|∼^mcon_∪Ext) and M_s(|∼^mcon_∪Ext) don't hold.

Let now K2 = ⟨∅, {p∧q, ¬p, (¬p∧q)→s}, R_CL⟩ and i ∈ {s,d}. We note that K2 |∼ q (since {p∧q, (¬p∧q)→s} ∈ maxcon(K2)) and K2⊕_i q |∼ s (since {¬p, (¬p∧q)→s}⊕_i q ∈ maxcon(K2⊕_i q)). However, K2 |≁ s. This shows that CT_i(|∼^mcon_∪Ext) does not hold.

Proposition 11.4. Let K = ⟨As, Ad, R_CL⟩ ∈ K_mcon, i ∈ {s,d}, and |∼ = |∼^mcon_∪Ext. Then, RM_i(|∼) holds.

Sketch of the Proof. Suppose K |∼ A and K |≁ ¬B. In view of K |≁ ¬B, every D ∈ maxcon(K) is consistent with B. It is therefore easy to see that D ∈ maxcon(K) iff D⊕_i B ∈ maxcon(K⊕_i B), and therefore also K⊕_i B |∼ A. □

11.3.2 Reasoning with Consistent Sets of Defaults and Metarules: Input–Output Logic

Input–output logics (in short, IO-logics) have been first presented in Makinson and Van Der Torre (Reference Makinson and Van Der Torre2000) and in a nonmonotonic setting in Reference Makinson and Van Der Torre2001. We work with the class of knowledge bases Kio of the type K=As,RL,Rd,Rm, where the strict rules are provided by a Tarski base logic L (such as classical or intuitionistic logic).Footnote 47 Instead of Definitions 5.1, arguments in IO-logic are constructed according to the following “two-phase” definition in which (a) the derivation of information from strict assumptions by the strict rules and (b) the derivation of defeasible rules from strict and defeasible rules by means of metarules are separated. The detachment of argument conclusions is applied to the results of (a) and (b).

Definition 11.3 (Arguments, Consistency, and Consequences in IO-logic). Let K = ⟨As, R_L, Rd, Rm⟩ ∈ K_io. Where D ⊆ Def(K), (A,B) ∈ Arg^io_K(D) iff (a) A ∈ Cn_L(As) and (b) A⇒B ∈ Cn_Rm(D ∪ R_L). We let Con((A,B)) = B and Arg^io_K = Arg^io_K(Def(K)).

D is consistent in K iff there is no sentence B for which B, ¬B ∈ Con[Arg^io_K(D)].

We define D-, A-, and P-extensions as usual (see Section 10). Where X ∈ {∩PExt, ∩AExt, ∪Ext}, we will write |∼^io_X for the induced consequence relation (see Definition 5.2) on the class of knowledge bases K_io.

In IO-logic metarules play a central role. Paradigmatic rules are:

  • Right Weakening (RW): A→B, C⇒A ⊢ C⇒B
  • Left Strengthening (LS): A→C, C⇒B ⊢ A⇒B
  • Right Conjunction (AND): A⇒B, A⇒C ⊢ A⇒(B∧C)
  • Cumulative Transitivity (CT): A⇒B, (A∧B)⇒C ⊢ A⇒C
  • Left Disjunction (OR): A⇒C, B⇒C ⊢ (A∨B)⇒C
  • Identity (ID): ⊢ A⇒A

Depending on the underlying class of knowledge bases, we have 12 base systems, summarized in Fig. 26.


Figure 26 The basic input–output logics and their associated knowledge bases, where IO1 = {RW, LS, AND} and Rd^id = {A⇒A | A ∈ sent_L}.

Example 36. Let K = ⟨As, R_CL, Rd, Rm⟩ where Rd = {p⇒q, p⇒¬s, q⇒s}, As = {p}, and Rm = IO3 = {RW, LS, AND, CT}. We have three maxicon sets for K: X1 = {p⇒q, p⇒¬s}, X2 = {p⇒q, q⇒s}, and X3 = {p⇒¬s, q⇒s}. By Proposition 5.1, we know that these correspond to the D-extensions generated by TemAcc. We have K |≁^io_∩PExt q since X3 doesn't contain an argument for q.

Note that Rd is not consistent, since it contains the argument (p, ¬s) for ¬s and the argument (p, s) for s, the latter based on the Rm-derivation: q⇒s ⊢ (p∧q)⇒s (by LS) and p⇒q, (p∧q)⇒s ⊢ p⇒s (by CT).

IO-logics have found applications in deontic logic, where the rules in Rd are interpreted as conditional norms: A⇒B ∈ Rd is read as "A commits us/you/etc. to bring about B" (Parent & van der Torre, 2013). The right side of the consequence relation |∼ encodes the obligations derivable from a knowledge base, where the latter represents the information available about the actual situation (As) and the given conditional norms (Rd). In deontic logic, conflicts between norms can occur in various ways, for example, in terms of contrary-to-duty situations.

Example 37. Let h stand for "helping the neighbor" and n for "notifying the neighbor" (Chisholm, 1963). Consider As = {¬h}, Rd = {⇒h, h⇒n, ¬h⇒¬n} and Rm = IO3. We have three maxicon sets, namely X1 = {⇒h, h⇒n}, X2 = {⇒h, ¬h⇒¬n}, and X3 = {¬h⇒¬n, h⇒n}. One may object to ⇒h being part of the D-extensions, since our strict assumptions express that our agent has already determined the outcome ¬h ∈ As, and so h would not be action-guiding. Moreover, in X2 this leads to a pragmatic oddity according to which an agent should help and also not notify the neighbor. To deal with this problem, knowledge bases have been extended with a set of constraints (such as, here, {¬h}) on the output in Makinson and Van Der Torre (2001). In order to simplify the presentation, we have omitted constraints in this section.

We will now consider some of the properties studied in Section 10.3. We say that L has a proper conjunction ∧ iff (a) {A1,…,An} ⊢_L B iff {⋀_{i=1}^n Ai} ⊢_L B, and (b) {A1,…,An} ⊢_L B1, …, {A1,…,An} ⊢_L Bm implies {A1,…,An} ⊢_L ⋀_{j=1}^m Bj. In the following we assume that L has a proper conjunction.

Lemma 11.1. An IO-logic whose metarules include LS and CT satisfies (arg-re) and (arg).

Proof. For (arg-trans) consider a D ⊆ Rd for which there is a (B,A) ∈ Arg_K(D), and suppose (C,D′) ∈ Arg_{K⊕_sA}(D). Thus, there are proofs P1 resp. P2, based on the rules in Rm, of B⇒A resp. of C⇒D′ from D ∪ R_L. Moreover, B ∈ Cn_L(As) and C ∈ Cn_L(As ∪ {A}). So, there are A1,…,An ∈ As for which A, A1,…,An ⊢_L C. By the monotonicity of L and since ∧ is a proper conjunction, B∧⋀_{i=1}^n Ai ⊢_L B and (B∧⋀_{i=1}^n Ai)∧A ⊢_L C. On this basis, one obtains a proof based on the metarules in Rm of (B∧⋀_{i=1}^n Ai)⇒D′ from D ∪ R_L. Note that B∧⋀_{i=1}^n Ai ∈ Cn_L(As), and so (B∧⋀_{i=1}^n Ai, D′) ∈ Arg_K(D).

The simple proofs of (arg-re) and (arg-mono) are left to the reader. □

By Lemma 11.1, Propositions 10.2 and 11.1, and Theorem 10.2 we get:

Corollary 11.3. Let |∼ ∈ {|∼^io_∩AExt, |∼^io_∩PExt}. Any IO-logic whose metarules include LS and CT satisfies C_s(|∼) and LLE_s(|∼).

In view of Corollary 11.1, the logics IOi with i ∈ {3,4} from Fig. 26 are s-cumulative, since their notions of argument satisfy (arg-strict).

Lemma 11.2. Any IO-logic IOi from Fig. 26 satisfies (arg-strict).

Proof. Concerning (i) we note that if A ∈ As, since ID ∈ Rm, also (A,A) ∈ Arg_K(∅). Concerning (ii), where D ⊆ Rd, suppose A1,…,An→B ∈ R_L and there are (B1,A1),…,(Bn,An) ∈ Arg_K(D). So, Bi ∈ Cn_L(As) for each i = 1,…,n. So, C ∈ Cn_L(As), where C = ⋀_{i=1}^n Bi. By (LS), (C,Ai) ∈ Arg_K(D) for each i = 1,…,n. By AND, (C,D′) ∈ Arg_K(D), where D′ = ⋀_{i=1}^n Ai. Since D′→B ∈ R_L, by RW, (C,B) ∈ Arg_K(D). □

By Corollary 11.1 and Lemmas 11.1 and 11.2 we get:

Corollary 11.4. Let X ∈ {A,P} and i ∈ {3,4}. |∼^io_∩XExt satisfies s-cumulativity for IOi.

Example 38. s-cumulativity is not satisfied for IOi+ since Ref does not hold. Consider K = ⟨{p}, R_CL, Rd^id ∪ {⇒¬p}, IO4⟩. Clearly, there are maxicon sets including ⇒¬p in view of which K |≁ p, where |∼ ∈ {|∼^io_∩AExt, |∼^io_∩PExt}.

The situation is different when considering K = ⟨{p}, R_CL, {⇒¬p}, IO4 ∪ {ID}⟩. Now, the only D-extension is ∅ and therefore K |∼ p, since (p,p) ∈ Arg_K(∅) due to the metarule ID, which yields p⇒p.

The following example demonstrates that the OR metarule allows for a form of disjunctive reasoning that is not available in systems without it.

Example 39. Let K = ⟨As, R_CL, Rd, IO4⟩, where Rd = {p⇒q∨t, q⇒s, t⇒s} and As = {p}. Note that (p,s) ∈ Arg_K(Rd) in view of the proof: q⇒s, t⇒s ⊢ (q∨t)⇒s (by OR), then (q∨t)⇒s ⊢ (p∧(q∨t))⇒s (by LS), and p⇒q∨t, (p∧(q∨t))⇒s ⊢ p⇒s (by CT). Note also that Rd is consistent. Therefore, K |∼^io_∩PExt s and K |∼^io_∩AExt s. The situation is different for weaker logics; for example, if we let Rm = IOi where i ∈ {1,2,3} and K′ = ⟨As, R_CL, Rd, Rm⟩, then (p,s) ∉ Arg_{K′}(Rd) and so K′ |≁^io_∩PExt s and K′ |≁^io_∩AExt s.
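The metarule steps of Example 39 can be replayed mechanically. In the sketch below (our own string-based encoding of defeasible rules as (antecedent, consequent) pairs; the strict-entailment side condition of LS is assumed rather than checked), the three applications compose exactly as in the example.

```python
def OR(r1, r2):
    """Left Disjunction: A => C, B => C  |-  (A v B) => C."""
    (a, c1), (b, c2) = r1, r2
    assert c1 == c2, "OR needs a common consequent"
    return (f"({a} v {b})", c1)

def LS(new_antecedent, r):
    """Left Strengthening: A -> C, C => B  |-  A => B.
    The strict implication new_antecedent -> (antecedent of r) is
    assumed to hold (here: strengthening by an extra conjunct)."""
    _, b = r
    return (new_antecedent, b)

def CT(r1, r2):
    """Cumulative Transitivity: A => B, (A & B) => C  |-  A => C."""
    (a, b), (ab, c) = r1, r2
    assert ab == f"({a} & {b})", "CT needs the cumulated antecedent"
    return (a, c)

# Example 39: Rd = {p => (q v t), q => s, t => s}
step1 = OR(("q", "s"), ("t", "s"))       # (q v t) => s
step2 = LS("(p & (q v t))", step1)       # (p & (q v t)) => s
step3 = CT(("p", "(q v t)"), step2)      # p => s
```

`step3` is the rule p ⇒ s, which together with As = {p} yields the argument (p, s); in IO1–IO3 one of OR or CT is missing and the chain cannot be completed.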

If OR is available, we get OR_s(|∼^io_∩PExt) for base logics that satisfy (arg-or) and (arg-ex) (such as CL; see Section 11.1).

Proposition 11.5. Let i ∈ {2,4} and |∼ = |∼^io_∩PExt. If (arg-or) and (arg-ex) hold, we have OR_s(|∼) for IOi, IOi+, and IOi ∪ {ID}.

Proof. Let |∼ = |∼^io_∩PExt. Suppose K⊕_s A |∼ C and K⊕_s B |∼ C, and consider D ∈ DExt(K⊕_s(A∨B)). Assume for a contradiction that D is inconsistent in both K⊕_s A and K⊕_s B. So, there are (H,E),(H,¬E) ∈ Arg_{K⊕_sA}(D) and (F,G),(F,¬G) ∈ Arg_{K⊕_sB}(D). By (arg-ex) and LS, (G′,E),(G″,¬E) ∈ Arg_{K⊕_sB}(D) for some G′,G″ ∈ Cn_L(As ∪ {B}). By (arg-or), H∨G′, H∨G″ ∈ Cn_L(As ∪ {A∨B}). So, by OR, (H∨G′,E),(H∨G″,¬E) ∈ Arg_{K⊕_s(A∨B)}(D). This shows that D is inconsistent in K⊕_s(A∨B), which is a contradiction.

So, D is consistent in K⊕_s A or in K⊕_s B. Without loss of generality, assume the former. Thus, there is a D′ ∈ maxcon(K⊕_sA) for which D ⊆ D′. Assume for a contradiction that D′ is inconsistent in K⊕_s(A∨B). So, there are (H′,E),(H″,¬E) ∈ Arg_{K⊕_s(A∨B)}(D′). Since H′,H″ ∈ Cn_CL(As ∪ {A∨B}) and by (arg-or), also H′,H″ ∈ Cn_CL(As ∪ {A}). But then D′ is not consistent in K⊕_s A, which is a contradiction. So, D′ is consistent in K⊕_s(A∨B) and, by the ⊆-maximality of D, D = D′. Since K⊕_sA |∼ C, C ∈ Con[Arg_{K⊕_sA}(D)].

Altogether this shows that K⊕_s(A∨B) |∼ C. □

An immediate consequence of Corollary 11.4 and Proposition 11.5 is:Footnote 48

Corollary 11.5. Let i ∈ {3,4}. If (arg-or) and (arg-ex) hold, |∼^io_∩PExt satisfies s-preferentiality for IOi.

12 Greedy Accumulation: Properties and Reiter’s Default Logic

In this section we take a closer look at greedy accumulation. We start by considering some of the properties of nonmonotonic inference for greedy accumulation in Section 12.1. We then investigate Reiter’s more general formulation of default rules in Section 12.2. In Section 12.3 we show that default logic can be considered a form of formal argumentation.

12.1 Properties of Nonmonotonic Reasoning

As we have seen in Section 10.3, some properties of nonmonotonic inference (Propositions 10.1 to 10.3, in particular CT, LLE, Ref, and RW) hold for greedy accumulation. In this section we present some negative results.

Example 40 (Makinson, 2003). We consider the default theory

K = ⟨As: ∅, Ad: ∅, R_CL, Rd: {⇒q, p∨q⇒¬q}⟩.

We get one D-extension, namely {⇒q}, with corresponding P-extension Cn_CL({q}), and so K |∼_∩AExt q, K |∼_∩AExt p∨q, K |∼_∩PExt q, and K |∼_∩PExt p∨q.

When considering K⊕_s(p∨q) resp. K⊕_d(p∨q), the situation changes. We now have the additional D-extension {p∨q⇒¬q} resp. {p∨q, p∨q⇒¬q}, with the corresponding P-extension Cn_CL({p,¬q}). Thus, where i ∈ {s,d}, K⊕_i(p∨q) |≁_∩AExt q and K⊕_i(p∨q) |≁_∩PExt q.

The example shows that CM does not hold for greedy accumulation. The next example shows that OR fails as well (it is analogous to Example 30 for temperate accumulation).

Example 41. Let i ∈ {s,d}, |∼ ∈ {|∼_∩AExt, |∼_∩PExt}, and

K = ⟨As: ∅, Ad: ∅, R_CL, Rd: {p⇒r, q⇒r}⟩.

We note that K⊕_i p |∼ r and K⊕_i q |∼ r, although K⊕_i(p∨q) |≁ r (since the only D-extension of K⊕_i(p∨q) is ∅).

It is not surprising that several alternative formulations of default logic have been introduced to obtain CM or OR; a discussion of these goes beyond the scope of this Element (see Section 12.3).

12.2 Nonnormal Defaults

Reiter's default logic is one of the most prominent NMLs for reasoning with default rules such as "Birds usually fly." In Reiter's original account, defaults are more expressive in the sense that they allow one to express additional consistency assumptions. They have the following general form:

r = (A1,…,An : B1,…,Bm / C).   (12.2.1)

Besides the body Body(r) = {A1,…,An} and the head Head(r) = C, each default rule also comes with justifications Just(r) = {B1,…,Bm}. Where Rd is a set of generalized defaults, we call knowledge bases of the form ⟨As, Rs, Rd⟩ Reiter default theories.Footnote 49

Example 42. We compare the following two defaults, where m stands for "the person has a motive," s for "the person is a suspect," and g for "the person is guilty":

d1 = (m : g∧s / s)   and   d2 = (m : g∧s / g∧s).

Defaults of the form d2, for which the justification is identical to the conclusion, are called normal defaults. Both d1 and d2 have the same conditions of defeat: defeat happens if we learn that the person is not guilty or not a suspect. However, d1 has a weaker conclusion in that it only allows one to infer that the person who has a motive is a suspect; unlike d2, it does not warrant the inference to the person's guilt as well. The use of nonnormal defaults is motivated by cases in which the conclusion is logically weaker than the justification. From the perspective of argumentation, these are cases in which we do not only want to retract inferences when being rebutted, but also allow other forms of defeat, which are expressed in terms of richer justifications (one may think of these justifications as anchors for undercuts).

Definition 12.1. Let K = ⟨As, Rd, Rs⟩ be a Reiter default theory and D ⊆ Rd. We let Arg_K and Arg_K(D) be defined similarly to Definition 5.1: a ∈ Arg_K iff

  • a = ⟨A⟩, where A ∈ As. We let Con(a) = A, Sub(a) = {a}, and Rd(a) = ∅.

  • a = ⟨a1,…,an ⇒ A⟩, where a1,…,an ∈ Arg_K, D = ⋃_{i=1}^n Rd(ai) ∪ {r}, and

    r = (Con(a1),…,Con(an) : B1,…,Bm / A) ∈ Rd.

    We let Con(a) = A, Sub(a) = ⋃_{i=1}^n Sub(ai) ∪ {a}, and Rd(a) = D.

  • a = ⟨a1,…,an → A⟩, where a1,…,an ∈ Arg_K, Con(a1),…,Con(an)→A ∈ Rs, and D = ⋃_{i=1}^n Rd(ai).

    We let Con(a) = A, Sub(a) = ⋃_{i=1}^n Sub(ai) ∪ {a}, and Rd(a) = D.

Where a ∈ Arg_K and D ⊆ Rd, we let Def(a) =df Rd(a) and Arg_K(D) =df {a ∈ Arg_K | Def(a) ⊆ D}. We let Trig_K(D) be the set of all r ∈ Rd such that for all A ∈ Body(r) there is an a ∈ Arg_K(D) with Con(a) = A. Where E is a set of L-sentences, we let Cons_K(E) be the set of all r ∈ Rd for which each B ∈ Just(r) is consistent with E. We let Trig_K(E,D) = Trig_K(D) ∩ Cons_K(E).

D-extensions of Reiter default theories are generated by an algorithm similar to GreedyAcc. However, we need to adapt the consistency check (in the loop guard, line 3) to the additional ingredient of defaults, namely their justifications. Since justifications need not be implied by the heads of their respective defaults, the consistency check cannot proceed iteratively anymore: otherwise a justification of a default added early on might conflict with the head of a default added later in the procedure. Reiter solved this problem by means of a semi-inductive procedure in which the reasoner first has to guess the outcome. Consistency checks are then performed relative to the guessed set of sentences. This results in the algorithm GreedyAccGen.Footnote 50

Algorithm 5 Generalized Greedy Accumulation

procedure GreedyAccGen(K, D)                     ▷ K = ⟨As, Rd, Rs⟩ and D ⊆ Rd
1:  D∗ ← ∅                                       ▷ init
2:  E ← Con[Arg_K(D)]                            ▷ the guessed P-extension
3:  while there is an r ∈ Trig_K(E, D∗) ∖ D∗ do  ▷ scan triggered and consistent defaults
4:      D∗ ← D∗ ∪ {r}                            ▷ update scenario
5:  end while                                    ▷ no more triggered and consistent defaults
6:  if D = D∗ then
7:      return D∗                                ▷ correct guess
8:  else
9:      return failure                           ▷ incorrect guess
10: end if
end procedure

Definition 12.2. A Reiter D-extension of a Reiter default theory K = ⟨As, Rd, Rs⟩ is a set D ⊆ Rd for which D = GreedyAccGen(K, D). Its corresponding A-extension is Arg_K(D), and its corresponding P-extension is Con[Arg_K(D)]. We write again DExt(K) [resp. AExt(K), PExt(K)] for the set of Reiter D- [resp. A-, P-]extensions of K.

Example 43. We consider K = ⟨As, Rd, R_CL⟩, where

As = {t, t′}  and  Rd = {r1 = (t : p, ¬s / p), r2 = (t′ : s / s)}.

We simulate two runs of GreedyAccGen.

  1. When running GreedyAccGen(K, {r1}), E is the set {t, t′, p} closed under classical logic. In the first round of the while-loop we add r1 to D∗. In the second round we add r2, since its justification is also consistent with E. This leads to failure, since D∗ ≠ {r1}.

  2. We consider GreedyAccGen(K, {r2}). Now E is the set {t, t′, s} closed under classical logic. Since the justification ¬s of r1 is inconsistent with E, the loop terminates with D∗ = {r2} and therefore returns the only D-extension of K.
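These two runs can also be executed in code. The following Python sketch uses our own encoding, not notation from the text: defaults are (body, justifications, head) triples over formulas written as nested tuples, classical consequence is decided by brute-force truth tables, and `t2` stands in for the second strict assumption; `greedy_acc_gen` is a hypothetical name for one guess-and-verify run in the spirit of Algorithm 5.

```python
from itertools import product

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*map(atoms, f[1:]))

def holds(f, v):
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    return holds(f[1], v) or holds(f[2], v)   # "or"

def consistent(S):
    S = list(S)
    if not S:
        return True
    ats = sorted(set().union(*map(atoms, S)))
    return any(all(holds(f, dict(zip(ats, row))) for f in S)
               for row in product([False, True], repeat=len(ats)))

def entails(S, A):
    return not consistent(list(S) + [("not", A)])

def greedy_acc_gen(As, Rd, D_guess):
    """One run of generalized greedy accumulation (sketch).
    Returns the guessed set if the guess verifies, None on failure."""
    E = list(As) + [head for _, _, head in D_guess]  # guessed extension basis
    D_star, changed = [], True
    while changed:
        changed = False
        facts = list(As) + [head for _, _, head in D_star]
        for r in Rd:
            body, justs, head = r
            if r in D_star:
                continue
            triggered = all(entails(facts, b) for b in body)
            ok = all(consistent(E + [j]) for j in justs)   # vs. the guess
            if triggered and ok:
                D_star.append(r)
                changed = True
    return list(D_guess) if set(D_star) == set(D_guess) else None

# Example 43: r1 = (t : p, ¬s / p), r2 = (t' : s / s), As = {t, t'}
r1 = (("t",), ("p", ("not", "s")), "p")
r2 = (("t2",), ("s",), "s")
As = ["t", "t2"]
```

As in Example 43, the guess {r1} fails (r2 gets added as well), while the guess {r2} verifies and yields the unique Reiter D-extension.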

Reiter's format of defaults and the GreedyAccGen algorithm generalize greedy accumulation as presented in Section 5.2.1. Suppose we have a knowledge base K = ⟨As, Rd, Rs⟩ with only defaults of the form r = (A1,…,An ⇒ B). We can translate K to a Reiter default theory K′ by translating each such default r to the normal Reiter default

r′ = (A1,…,An : B / B).

Applying GreedyAcc to K and GreedyAccGen to K′ will lead to the same D-extensions (under the translation) and therefore the same P-extensions (Łukaszewicz, 1988).

Nevertheless, the introduction of generalized defaults may lead to scenarios in which no D-extension exists.

Example 44. Consider K = ⟨As, Rd, R_CL⟩ where As = {p} and Rd only contains the default r = (p : ¬q / q). We have two possible guesses with which to run GreedyAccGen: D1 = {r} and D2 = ∅. Note that with the first guess the algorithm never enters the while-loop, since the justification ¬q of r is inconsistent with Con[Arg_K(D1)] = Cn_CL({p,q}), and it therefore returns failure. With the second guess, however, since ¬q is consistent with Con[Arg_K(∅)] = Cn_CL({p}), the while-loop is entered and r is added to D∗, again leading to failure.

Similarly as described in Section 10.2 for normal default theories, Reiter D-extensions can also be characterized in terms of fixed points.

Proposition 12.1. Let K = ⟨As, Rd, Rs⟩ be a Reiter default theory, D ⊆ Rd, and E = Con[Arg_K(D)]. D is a Reiter D-extension of K iff D = Trig_K(E, D).

12.3 An Argumentative Characterization of Reiter’s Default Logic

In this section we demonstrate that there is a natural argumentative characterization of extensions of Reiter default theories.Footnote 51 For this we use a slightly enriched language: sent_L• = sent_L ∪ {•A | A ∈ sent_L}. The unary operator • will track the consistency assumptions underlying the justifications in generalized defaults. For this, it need not be equipped with logical properties, but it will be used when defining argumentative attacks.

Definition 12.3. Given a Reiter default theory K = ⟨As, Rd, Rs⟩, we define the argumentation framework AFK = ⟨Arg(K•), ⤳⟩, where K• = ⟨As, Ad, Rs ∪ Rs•⟩, Ad contains •B for every sentence B, and

r′ = A1,…,An,•B1,…,•Bm ⇒ C ∈ Rs•  iff  r = A1,…,An : B1,…,Bm / C ∈ Rd.(12.2.1)

We let a ⤳ b if there is a b′ ∈ Sub(b) such that Con(a) ∈ ⊖Con(b′), where ⊖•A = {¬A} for all sentences A.

Example 45 (Ex. 43 cont.). We recall K = ⟨As, Rd, RCL⟩ from Example 43. In Fig. 27 we see an excerpt of AFK. The P-extension induced by the only Reiter D-extension of K corresponds exactly to the consequences of the arguments in the only stable A-extension of K• (highlighted).


Figure 27 Illustration of Example 45.

The following result shows that the correspondence between Reiter extensions and stable A-extensions is not coincidental.

Theorem 12.1 Let K = ⟨As, Rd, Rs⟩ be a general default theory and K• as in Definition 12.3. Then

  1. for every Reiter P-extension E of K, there is a stable A-extension X of K• for which Con[X] ∩ sentL = E,

  2. for every stable A-extension X of K•, Con[X] ∩ sentL is a Reiter P-extension of K.

Selected Further Readings

Metatheoretic properties of reasoning on the basis of maxicon sets in the style of Rescher and Manor have been thoroughly studied in Benferhat et al. (1997). A well-known prioritized version is provided in Brewka (1989). An overview of the state of the art in input–output logic can be found in Parent and van der Torre (2013). Input–output logics have also been applied to causal and explanatory reasoning in many works by Bochman: Bochman (2005) is a good starting point. Proof theories for input–output logics that allow for Boolean combinations of defeasible conditionals are presented in Straßer et al. (2016) and van Berkel and Straßer (2022). The latter also provides a translation of input–output logic to formal argumentation. Hansen’s approach to (prioritized) deontic conditionals falls within temperate accumulation (Hansen, 2008), while Horty’s (Horty, 2012) follows the greedy approach.

An overview of many variants of default logic can be found in Antoniou and Wang (2009). Due to the problems indicated in Section 12.1, some cumulative variants have been proposed (Antonelli, 1999; Brewka, 1991), as well as disjunctive versions (Gelfond et al., 1991). In Poole (1985) special attention is paid to specificity.

Moore’s autoepistemic logic has close links to default logic (Denecker et al., 2011; Konolige, 1988) and to logic programming (Gelfond & Lifschitz, 1988). The equi-expressivity of adaptive logics and default assumptions has been shown in Van De Putte (2013). A modal selection semantics (as presented in Sections 5.3 and 15) for default logic has been studied in Lin and Shoham (1990).

Part IV Semantic Methods

In this final part of the Element we move the focus from syntax to semantics. The main underlying method will be based on imposing preference orders on interpretations and selecting specific interpretations of the given information. In Section 13 we will study a well-known semantics for defaults (Kraus et al., 1990) based on the idea of preferring more “normal” models over less “normal” ones. In particular, we investigate a sophisticated method to determine the set of defaults that are entailed by a given set of defaults, the Rational Closure (Lehmann & Magidor, 1992). Section 14 provides an overview of some quantitative methods for providing meaning to defaults, including probabilistic methods. In Section 15 we use the idea of ordering models to obtain a semantics for temperate accumulation. Finally, in Section 16 we introduce one of the central paradigms in logic programming, answer set programming, and show how it is closely related to both default logic and formal argumentation. In this way, we once more demonstrate that although the underlying formal methods of NMLs are quite diverse, they often result in the same consequence relations and extensions (recall Fig. 14).

13 A Semantics for Defaults

In Section 1.1 we proposed to interpret defaults A ⇒ B as “If A then normally/typically/usually/etc. B”. The argumentation (Part II) and accumulation methods (Part III) model reasoning with defaults by focusing on inference rules: arguments are formed by treating ⇒ as a defeasible inference rule, and the notions of consistency and conflict are used to obtain nonmonotonic consequence relations. In this way the meaning of ⇒ is characterized in a syntax-based, proof-theoretic way.

In what follows, we will interpret A ⇒ B in a semantic, model-theoretic way, by:

(⋆)

B holds under the most normal/plausible/etc. situations in which A holds.

This interpretation naturally leads to nonmonotonicity: while Anne jogs most mornings (morning ⇒ jog), rainy mornings are exceptional (morning ∧ rain ⇏ jog).

The proposed interpretation can be made precise by using models of the form M = ⟨S, ≺, v⟩ with a nonempty set of situations S which are interpreted by means of an assignment function v: Atoms → ℘(S) that associates atoms with the set of those situations (also referred to as states) in which they hold, and an order ≺ ⊆ S × S that orders situations according to their normality. We read s ≺ s′ (where s, s′ ∈ S) as expressing that s′ is less normal than s. Where s ∈ S we let

  • (M,s) ⊨ ⊤

  • (M,s) ⊨ A for an atom A iff s ∈ v(A)

  • (M,s) ⊨ ¬A iff (M,s) ⊭ A

  • (M,s) ⊨ A ∧ B iff (M,s) ⊨ A and (M,s) ⊨ B

  • (M,s) ⊨ A ∨ B iff (M,s) ⊨ A or (M,s) ⊨ B.

We let [[A]]M =df {s ∈ S | (M,s) ⊨ A} be the set of all situations which validate A and skip the subscript whenever the context disambiguates. Such sets of situations are called propositions. Following the idea expressed in (⋆), we let:Footnote 52

  • M ⊨ A ⇒ B iff (M,s) ⊨ B for all s ∈ min≺([[A]]).

Note that validity for A ⇒ B is defined globally, not relative to a given state. In the following we write ⇒M for the set of all A ⇒ B for which M ⊨ A ⇒ B, and we write A ⇒M B in case A ⇒ B ∈ ⇒M. We will call ⇒M the conditional theory induced by M.

By letting A <M B =df A ∨ B ⇒M ¬B we can express that A is “more normal” than B. Indeed, if in all minimal states of A ∨ B, A ∧ ¬B holds, then the minimal states of A are ≺-lower than those of B.

One may think of M as a model of the belief state of an agent. A ⇒M B expresses that if the agent were to learn A, she would believe B, while ⊤ ⇒M B means that, absent new information, the agent believes B. If A <M B, the agent would be less surprised when learning A than when learning B.

Example 46. Let M = ⟨S, ≺, v⟩ where S = {s1, s2, s3}, v(p) = S, v(q) = {s1, s3}, v(r) = {s1, s2}, v(s) = {s3}, and ≺ = {(s2, s3)}. We have, for instance, min≺([[p]]) = {s1, s2}, min≺([[q]]) = {s1, s3}, min≺([[p ∨ q]]) = {s1, s2}, min≺([[r ∨ s]]) = {s1, s2}, and min≺([[p ∧ q]]) = {s1, s3}. Thus,

  1. p ⇒M r and p ∨ q ⇒M r,

  2. r ∨ s ⇒M ¬s and so r <M s,

  3. but p ⇏M ¬q and p ∧ q ⇏M r.
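The semantic clauses above are easy to prototype. The following sketch is my own encoding (sentences as Python predicates on states); it computes min≺ and verifies the claims of Example 46:

```python
def min_states(S, prec, A):
    """The prec-minimal states satisfying A: s is minimal in [[A]] iff
    no A-state t with (t, s) in prec exists."""
    ext = {s for s in S if A(s)}
    return {s for s in ext if not any((t, s) in prec for t in ext)}

def entails(S, prec, A, B):
    """M |= A => B iff B holds at every prec-minimal A-state."""
    return all(B(s) for s in min_states(S, prec, A))

# Example 46: v(p) = S, v(q) = {s1, s3}, v(r) = {s1, s2}, and s2 prec s3.
S = {"s1", "s2", "s3"}
prec = {("s2", "s3")}
p = lambda s: True
q = lambda s: s in {"s1", "s3"}
r = lambda s: s in {"s1", "s2"}

assert min_states(S, prec, p) == {"s1", "s2"}            # s3 is dominated by s2
assert entails(S, prec, p, r)                            # p =>_M r
assert not entails(S, prec, lambda s: p(s) and q(s), r)  # p & q not =>_M r
assert not entails(S, prec, p, lambda s: not q(s))       # p not =>_M ~q
```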

In models with infinite sequences of more and more normal states we may face situations in which min≺([[A]]) is empty, although [[A]] is not. To exclude such scenarios, we restrict the focus to only those models for which it holds that for all sentences A and all s ∈ [[A]] ∖ min≺([[A]]) there is an s′ ∈ min≺([[A]]) such that s′ ≺ s. Models that satisfy this requirement are called smooth (Kraus et al., 1990) or stuttered (Makinson, 2003). In what follows we will discuss some other basic properties one may impose on ≺, such as transitivity (if s1 ≺ s2 and s2 ≺ s3, then also s1 ≺ s3) or irreflexivity (s ⊀ s).

In particular, we will study two classes of well-behaved models by only considering models for which the underlying order ≺ has specific properties.

13.1 Preferential Models

Let us now state properties one may expect from the conditional theory ⇒M induced by a model M. For this, we adjust the properties from Section 4 to statements of the form A ⇒ B.

The properties are to be read as closure conditions on a set of defaults D. For example, REF states that A ⇒ A ∈ D for all sentences A, and CM states that if A ⇒ B, A ⇒ C ∈ D, then also A ∧ B ⇒ C ∈ D, for all sentences A, B, C.

Table 8

LLE  If ⊢CL A ↔ B, then A ⇒ C iff B ⇒ C
RW   If A ⊢CL B and C ⇒ A, then C ⇒ B
REF  A ⇒ A
OR   If A ⇒ C and B ⇒ C, then A ∨ B ⇒ C
CM   If A ⇒ B and A ⇒ C, then A ∧ B ⇒ C
CT   If A ⇒ B and A ∧ B ⇒ C, then A ⇒ C
We call a set of defaults D a preferential theory in case it is closed under these properties.

Given the intuitive nature of the previously mentioned properties, a natural question is: what kinds of models give rise to preferential theories? With Kraus et al. (1990) we call M = ⟨S, ≺, v⟩ a preferential model in case ≺ is smooth, irreflexive, and transitive. It is an easy exercise to confirm that the preceding properties hold for ⇒M, where M is a preferential model. We paradigmatically consider CM (for which smoothness is needed) and OR.

For CM, suppose that A ⇒M B and A ⇒M C. In case min≺([[A ∧ B]]) = ∅, trivially A ∧ B ⇒M C. Otherwise, consider some s ∈ min≺([[A ∧ B]]). Assume for a contradiction that s ∉ min≺([[A]]). Then, by smoothness, there is an s′ ∈ min≺([[A]]) such that s′ ≺ s. Since s′ ∈ min≺([[A]]) and A ⇒M B, (M,s′) ⊨ B and so (M,s′) ⊨ A ∧ B. But then s was not ≺-minimal in [[A ∧ B]], a contradiction. So, s ∈ min≺([[A]]) and thus (M,s) ⊨ C since A ⇒M C. Hence, A ∧ B ⇒M C.

For OR, suppose that A ⇒M C and B ⇒M C. If min≺([[A ∨ B]]) = ∅, trivially A ∨ B ⇒M C. Otherwise consider an s ∈ min≺([[A ∨ B]]). Assume for a contradiction that s ∉ min≺([[A]]) ∪ min≺([[B]]). Since s ∈ [[A]] or s ∈ [[B]], smoothness provides an s′ ∈ min≺([[A]]) ∪ min≺([[B]]) ⊆ [[A]] ∪ [[B]] such that s′ ≺ s. Since s′ ∈ [[A ∨ B]], this contradicts the ≺-minimality of s. So, s ∈ min≺([[A]]) ∪ min≺([[B]]) and thus, by the supposition, (M,s) ⊨ C. Hence, A ∨ B ⇒M C.

Altogether, it can be shown that:

Theorem 13.1 (Kraus et al., 1990). If M = ⟨S, ≺, v⟩ is a preferential model, ⇒M is a preferential theory.

Also the converse holds, that is, any preferential theory can be characterized by a preferential model. As a result, preferential models provide an adequate semantic characterization of preferential theories.

Theorem 13.2 (Kraus et al., 1990). If D is a preferential theory, then there is a preferential model M for which ⇒M = D.

13.2 Ranked Models

Preferential models do not, in general, validate the rational monotonicity property:Footnote 53

RM  If A ⇒ B and A ⇏ ¬C, then A ∧ C ⇒ B.

RM has in A ⇏ ¬C a negative condition. A set of defaults D is closed under RM if for all sentences A, B, C: if A ⇒ B ∈ D and A ⇒ ¬C ∉ D, then A ∧ C ⇒ B ∈ D. A preferential theory D that is closed under RM is called rational.

Example 47 (Example 46 cont.). In our example we have: p ⇒M r and p ⇏M ¬q, but p ∧ q ⇏M r. So, ⇒M does not validate RM, although it is preferential.

As Example 47 shows, preferential models M do not, in general, give rise to rational theories ⇒M. What kind of models are such that their induced conditional theories are rational? The key will be to let all states be comparable.

Preferential models allow for incomparabilities of states in the following sense: there are s1, s2, s3 for which s2 ≺ s3 but s1 is comparable to neither s2 nor s3. We have such a situation in Example 46. We notice that this is responsible for a violation of RM by ⇒M.

The RM property holds if the set of minimal states of A ∧ C is contained in the set of minimal states of A, in case there are some minimal states of A that validate C. In view of A ⇏ ¬C, one may expect them to be, since there are indeed minimal states of A that do not satisfy ¬C and in which therefore C holds. In our example the outlier is s3 ∈ min≺([[p ∧ q]]) ∖ min≺([[p]]). The situation improves if s1 is comparable to s2 and/or to s3. In the rightmost model of Fig. 28 we have p ⇒M′ r, p ⇏M′ ¬q, and p ∧ q ⇒M′ r (while in the model in the center we have p ⇒M′ ¬q).


Figure 28 (Left) The preferential model M = ⟨S, ≺, v⟩ of Example 46. (Middle and Right) Ranked models M′ = ⟨S, ≺′, v⟩ based on modular extensions of ≺. The dashed arrow is optional.

A ranked model M = ⟨S, ≺, v⟩ is a preferential model for which ≺ is modular, that is, for all s1, s2, s3: if s2 ≺ s3, then s1 ≺ s3 or s2 ≺ s1. It can easily be seen that an order ≺ is modular in case its states can be “ranked” by a function r: S → T to a total order ⟨T, <⟩ in such a way that s ≺ s′ iff r(s) < r(s′). So, any states s1 and s2 in a ranked model are comparable in that r(s1) ≤ r(s2) or r(s2) < r(s1). Ranked models provide an adequate semantic characterization of rational theories.
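Modularity is easy to test directly; the following sketch also illustrates the correspondence with rank functions (the concrete rank assignment is my own choice for illustration):

```python
def is_modular(S, prec):
    """prec is modular iff whenever (s2, s3) is in prec, every s1 in S
    satisfies s1 prec s3 or s2 prec s1."""
    return all((s1, s3) in prec or (s2, s1) in prec
               for (s2, s3) in prec for s1 in S)

# An order induced by a rank function is always modular ...
rank = {"s1": 0, "s2": 0, "s3": 1}
prec = {(s, t) for s in rank for t in rank if rank[s] < rank[t]}
assert is_modular(rank, prec)

# ... while the order of Example 46, where s1 is comparable to neither
# s2 nor s3, is not modular:
assert not is_modular({"s1", "s2", "s3"}, {("s2", "s3")})
```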

Theorem 13.3 (Lehmann & Magidor, 1992). (i) If M is a ranked model, ⇒M is a rational theory. (ii) If D is a rational theory, there is a ranked model M for which ⇒M = D.

13.3 What Does a Conditional Knowledge Base Entail?

While so far our focus has been on semantic characterizations of defaults, we now turn to a different, although related, question. Given a set of defaults D (of the form A ⇒ B): what other defaults follow from them? To answer this question, one may take the principles LLE, RW, REF, OR, CM, and CT (and RM) underlying preferential (resp. rational) consequence relations and use them as metarules, just like we have seen metarules being applied to defeasible rules in the context of input–output logic (see Section 11.3.2).

The set of metarules consisting of LLE, RW, REF, OR, CM, and CT is often referred to as system P, while adding RM to P results in system R. Where S ∈ {P, R}, we write A ⇒ B ∈ CnS(D) if A ⇒ B is derivable from D by means of the metarules in S.

Example 48. Suppose we have the set of defaults D = {environmentalist ⇒ vegan; environmentalist ⇒ avoidsFlying}. Then, environmentalist ∧ vegan ⇒ avoidsFlying follows from D in both system P and system R (by means of CM).

Given the representational results Theorems 13.1 to 13.3 from Sections 13.1 and 13.2 concerning the adequacy of preferential resp. ranked models, it is easy to see that P- and R-entailment can be semantically expressed. We say that a preferential model M is a model of D iff A ⇒M B for all A ⇒ B ∈ D.

Theorem 13.4 (Lehmann & Magidor, 1992). Where D is a set of defaults, A ⇒ B ∈ CnP(D) iff for all preferential models M of D, A ⇒M B.

The preferential theory CnP(D) is called the preferential closure of D. It is interesting, and maybe somewhat disappointing, to observe that for any D, P-entailment and R-entailment are identical, that is, CnP(D) = CnR(D). This is for a rather trivial reason: the rule RM has a negated default (A ⇏ ¬C) among its conditions, no set of defaults D contains such objects, and so RM is never applied. In fact, the question is, what is a more rewarding interpretation of ⇏ in the context of RM? It seems reasonable to consider RM as a closure principle, that is, a principle that extends CnR(D) to a set RatClosure(D) for which:

  (†) if

    (α1) A ⇒ B ∈ RatClosure(D) and

    (α2) A ⇒ ¬C ∉ RatClosure(D),

  (γ) then A ∧ C ⇒ B ∈ RatClosure(D).

But, how to find such a set RatClosure(D)? A first idea could be to simply take the theory provided by the intersection of the theories induced by ranked models of D. But, as the following example shows, this does not work.

Example 49 (Example 48 cont.). We now show that, although

  (α1) in all ranked models of D, d1: environmentalist ⇒ vegan holds; and

  (α2) there are ranked models of D in which d2: environmentalist ⇒ ¬drummer doesn’t hold; but

  (γ̄) there are ranked models of D in which d3: environmentalist ∧ drummer ⇒ vegan doesn’t hold.

Given D, in view of d2, being a drummer is irrelevant to the question whether environmentalists are usually vegans (d1). If Anne is an environmentalist who happens to be a drummer, this should still allow us to infer that she (likely) is a vegan (d3). While from an intuitive point of view the rule RM seems to allow exactly for the strengthening of an antecedent with information that is not atypical for the antecedent (like being a drummer for being an environmentalist), the example demonstrates that it doesn’t fulfill this role. For this we consider the following states (where i, j, k ∈ {0,1} and in the case of s3^{i,j,k} we let (i,j,k) ∉ {(1,0,1),(1,1,1)}):

Table 9

                   s1   s2   s3^{i,j,k}   s4^{i,j,k}
environmentalist    1    1        1            0
vegan               1    1        i            i
drummer             0    1        j            j
avoidsFlying        1    1        k            k

Figure 29 shows two ranked models of D. Only M2 validates d3. Not so M1, since

s3^{0,1,1} ∈ min≺([[environmentalist ∧ drummer]]) and (M1, s3^{0,1,1}) ⊭ vegan.

The problem with M1 is that it validates d2 due to the fact that

min≺([[environmentalist]]) = {s1}

and in s1, ¬drummer is true. In contrast, in M2,

min≺([[environmentalist]]) = {s1, s2}

and since in s2 drummer holds, d2 is invalid in this model, and by means of RM it has to be the case that d3 holds. Indeed, in M2 we have:

min≺([[environmentalist ∧ drummer]]) ⊆ min≺([[environmentalist]]).

In sum, although each ranked model M satisfies RM for its induced theory ⇒M, RM as interpreted in (†) is not satisfied by the consequence relation induced by the ranked models. We need another approach.

In order to let RM fulfill this role, it would seem that we need to interpret as many sentences ¬B as possible as not being default-entailed by A (i.e., A ⇏ ¬B), in order to allow for the inference from A ⇒ C to A ∧ B ⇒ C via RM. After all, in all ranked models M of D in which d2 doesn’t hold, d3 holds (unlike in our M1). So, our strategy is to somehow trade in the invalidity of more general defaults (A ⇒ ¬B) for the validity of more specific defaults (A ∧ B ⇒ C). How to execute this tricky task?


Figure 29 Two ranked models of Example 49.

Figure 29 gives a hint at a procedure for this: when moving from model M1 to M2 we ranked one state, namely s2, as more normal. We can generalize this: our goal is to rank each state as normal as possible. This intuitive idea can be made precise in terms of imposing an order on the ranked models of a given set of defaults D and selecting the best model. The way two ranked models M and M′ of D are compared has an argumentative interpretation. Suppose there are two discussants, one arguing in favor of M, the other one in favor of M′. Each discussant produces attacks against the model favored by the other agent and defends her model against such attacks. M is preferred to M′ (written: M ⊏ M′) if the proponent of M can attack M′ such that the proponent of M′ cannot defend M′, and M can be defended from every attack by the latter. But, how are attacks and defenses supposed to work?

A proponent of M may attack the model M′ favored by the other agent by accusing it of validating too many inferential relations, that is, by pointing to a default A ⇒ B that holds in M′ but not in M. Recall that our goal is to invalidate, for arbitrary A and B, the default A ⇒ B, if possible. There is a trade-off though, since invalidating some A ⇒ ¬B may lead, via RM, to the strengthening of the antecedent of others, for example, from A ⇒ C to A ∧ B ⇒ C.

In Fig. 29, the discussant arguing in favor of M1 may attack the proponent of M2 by stating that

environmentalist ∧ drummer ⇒M2 vegan, while environmentalist ∧ drummer ⇏M1 vegan.

However, a way to defend M2 against this attack is to point out that (a)

(a) environmentalist ⇒M1 ¬drummer, while environmentalist ⇏M2 ¬drummer,

and (b) environmentalist is according to M1’s standards even more normal than environmentalist ∧ drummer (formally, environmentalist <M1 environmentalist ∧ drummer).

Altogether, our informal discussion motivates the following definition of an order between models of a set of defaults D. For this we let

⊖M,A =df {C ⇒ D ∈ ⇒M | C <M A}.

Definition 13.1. Where M and M′ are two ranked models, M ⊏ M′ iff the following two conditions hold:

(defeat)

there is an A ⇒ B ∈ (⇒M′ ∖ ⇒M) such that ⊖M,A ⊆ ⇒M′, and

(defense)

for all A ⇒ B ∈ (⇒M ∖ ⇒M′), ⊖M′,A ⊄ ⇒M.

(defeat) expresses that there is an attack from M on M′ which cannot be defended, while (defense) expresses that every attack from M′ on M can be defended. In terms of the argumentative reading described above, the two conditions describe winning conditions for the proponent of M when arguing with an opponent favoring M′.

Definition 13.2. In case there is a unique ⊏-minimal model M among all ranked models of a given set of defaults D, ⇒M is called the rational closure of D.Footnote 54

Example 50 (Example 49 cont.). We consider the models M1 and M2 in Fig. 29. We have M2 ⊏ M1. (defeat) holds since

environmentalist ⇒ ¬drummer ∈ (⇒M1 ∖ ⇒M2) and ⊖M2,environmentalist ⊆ ⇒M1.

Also (defense) holds: for example, where A = environmentalist ∧ drummer, although A ⇒ vegan ∈ ⇒M2 ∖ ⇒M1, there is

environmentalist ⇒ ¬drummer ∈ (⊖M1,A ∖ ⇒M2).

We now discuss an alternative characterization of the rational closure in terms of ranking sentences according to their normality (relative to D). This will also help in defining a significant class of sets of defaults for which the rational closure exists, so-called admissible sets (see Proposition 13.1). For this we inductively associate sentences with ordinals via a function rank. We say that a sentence A is exceptional for D in case ⊤ ⇒ ¬A ∈ CnP(D), so in case D expresses that normally A is false. Similarly, A ⇒ B ∈ D is exceptional for D if A is exceptional for D. We collect the exceptional defaults in D in the set Exc(D). Where D0 = D, we let Dτ+1 = Exc(Dτ) for all successor ordinals τ+1 and Dτ = ⋂τ′<τ Dτ′ for all limit ordinals τ. Now, a sentence A has a rank for D in case there is a least ordinal τ for which A is not exceptional for Dτ, in which case the rank of A is τ. Otherwise, A has no rank.Footnote 55

Example 51 (Example 50 cont.). For all A ∈ {environmentalist, vegan, avoidsFlying, drummer}, A and ¬A are not exceptional for D and so have rank 0. Exceptional for D are, for instance, environmentalist ∧ ¬vegan and environmentalist ∧ vegan ∧ ¬avoidsFlying. These formulas have rank 1.

We call a set D admissible if every sentence A that has no rank satisfies ⊤ ⇒ ¬A ∈ CnP(D). Examples of admissible sets are sets D based on a finite language (a language with only finitely many atoms), or sets for which the preferential closure has no infinite sequences of more and more normal sentences.

Proposition 13.1 (Lehmann & Magidor, 1990). Where D is admissible, the rational closure of D exists and it consists of all A ⇒ B for which A ∧ ¬B has no rank, or for which rank(A) < rank(A ∧ ¬B).

Example 52 (Example 51 cont.). In view of Proposition 13.1, for instance, environmentalist ∧ vegan ⇒ avoidsFlying is in the rational closure of D since environmentalist ∧ vegan has rank 0 while environmentalist ∧ vegan ∧ ¬avoidsFlying has rank 1.

However, environmentalist ∧ ¬vegan ⇒ avoidsFlying is not in the rational closure of D since

environmentalist ∧ ¬vegan and environmentalist ∧ ¬vegan ∧ ¬avoidsFlying

have the same rank, namely 1. This shows that rational closure “suffers” from the drowning problem (see Section 1.2): since nonvegans are exceptional with respect to environmentalist ⇒ vegan, they turn out exceptional also with respect to environmentalist ⇒ avoidsFlying.Footnote 56
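For a finite propositional language, exceptionality can be decided classically: A is exceptional for D just in case the material counterparts {A → B | A ⇒ B ∈ D} classically entail ¬A, a reduction standardly used in algorithms for rational closure (assumed here). Under that assumption, the ranks of Examples 51 and 52 can be computed by brute force (the predicate encoding is my own):

```python
from itertools import product

ATOMS = ["e", "v", "a"]   # environmentalist, vegan, avoidsFlying
worlds = [dict(zip(ATOMS, bits))
          for bits in product([False, True], repeat=len(ATOMS))]

def exceptional(A, D):
    """A is exceptional for D iff every world satisfying the material
    counterpart of D falsifies A."""
    material = lambda w: all((not ant(w)) or con(w) for ant, con in D)
    return all(not A(w) for w in worlds if material(w))

def rank(A, D):
    """Least i such that A is not exceptional for D_i, where D_{i+1}
    keeps the defaults of D_i with exceptional antecedent."""
    i = 0
    while exceptional(A, D):
        D2 = [d for d in D if exceptional(d[0], D)]
        if D2 == D:          # fixed point reached: A has no rank
            return None
        D, i = D2, i + 1
    return i

e = lambda w: w["e"]
v = lambda w: w["v"]
a = lambda w: w["a"]
D = [(e, v), (e, a)]  # environmentalist => vegan, environmentalist => avoidsFlying

assert rank(lambda w: e(w) and v(w), D) == 0
assert rank(lambda w: e(w) and not v(w), D) == 1
# e & v => a is in the rational closure: rank 0 < rank 1 ...
assert rank(lambda w: e(w) and v(w) and not a(w), D) == 1
# ... while e & ~v => a is not (drowning): both sides have rank 1.
assert rank(lambda w: e(w) and not v(w) and not a(w), D) == 1
```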

14 Quantitative Methods

So far, we have interpreted A ⇒ B as: B holds in the most normal situations in which A holds (recall (⋆)). According to a similar idea:

(⋆⋆)

A ⇒ B holds if, given A, B is more normal/plausible/etc. than ¬B.

The notion of normality was rendered precise in terms of a preference order on the logically possible situations. Instead of this qualitative approach, one may follow the idea behind (⋆⋆) but proceed quantitatively and interpret A ⇒ B in terms of probabilities: given A, B is more probable than ¬B. In what follows we introduce the central approach to probabilistic semantics by Adams (1975), which corresponds to system P.Footnote 57

14.1 Adams’ Approach: ϵ-Semantics

We again consider a set of situations S interpreted via an assignment v: Atoms → ℘(S).Footnote 58

We now equip ℘(S) with a probability function P which maps sets of situations into [0,1] such that (1) P(S) = 1 and (2) for any pairwise disjoint S1,…,Sn ∈ ℘(S), P(S1 ∪ … ∪ Sn) = Σᵢ P(Si). We call each M = ⟨S, P, v⟩ a probabilistic model. For every formula A, a probabilistic model provides information on how probable it is to be in a situation consistent with A. Where [[A]]M = {s ∈ S | (M,s) ⊨ A} (we skip the subscript when the context disambiguates), we will write P(A) instead of P([[A]]) for the formal expression of this information. The conditional probability P(A|B) is, as usual, defined by P(A ∧ B)/P(B) in case P(B) > 0 (otherwise, it is undefined). It expresses the probability of being in a situation in which A holds, given that B holds.

Example 53. Let Atoms = {b, f, w}, where b stands for ‘being a bird’, f for ‘flying’, and w for ‘having wings’. The probabilistic models M1 = ⟨S, P1, v1⟩ and M2 = ⟨S, P2, v2⟩ are given by Table 11.

We have, for instance, P1(b) = P1({s1,…,s4}) = 1, P1(b ∧ f) = P1({s3, s4}) = .6 and therefore P1(f|b) = .6/1 = .6, while P2(b) = .6, P2(b ∧ f) = .4 and P2(f|b) = .4/.6 = 2/3.

Table 11 The states and probability functions for Example 53.

       s ⊨ b   s ⊨ f   s ⊨ w   P1   P2
s1       ✓                     .2   .1
s2       ✓               ✓     .2   .1
s3       ✓       ✓             .2   .2
s4       ✓       ✓       ✓     .4   .2
s1′                            0    .1
s2′                      ✓     0    .1
s3′              ✓             0    .1
s4′              ✓       ✓     0    .1

We now define when a default A ⇒ B holds in a given probabilistic model M = ⟨S, P, v⟩ (in signs, A ⇒M B). A consequence relation can then be defined as follows. Where D is a set of defaults: A ⇒ B ∈ Cn(D) iff for all probabilistic models M of D, A ⇒M B.Footnote 59 Before moving to Adams’ approach, we state three naive ideas. Let M = ⟨S, P, v⟩.

  1. Naive 1: A ⇒¹M B iff P(A ∧ B) > P(A ∧ ¬B) or if P(A) = 0.Footnote 60

  2. Naive 2: A ⇒²M B iff P(B|A) > P(¬B|A) or if P(A) = 0.

  3. Naive 3: A ⇒³M B iff P(B|A) > τ for some threshold value τ (such as τ = .5), or if P(A) = 0.Footnote 61

We note that approaches Naive 1 and Naive 2 are equivalent since, in case P(A) > 0: P(B|A) > P(¬B|A) iff P(A ∧ B)/P(A) > P(A ∧ ¬B)/P(A) iff P(A ∧ B) > P(A ∧ ¬B). The weakness of the preceding naive approaches can be illustrated by applying them to our example.

Example 54 (Example 53 cont.). Let i ∈ {1,2,3}. In model M1 we have P1(b ∧ w) = P1({s2, s4}) = .6 > P1(b ∧ ¬w) = P1({s1, s3}) = .4, which is why b ⇒ⁱM1 w. Similarly, P1(b ∧ f) = P1({s3, s4}) = .6 > P1(b ∧ ¬f) = P1({s1, s2}) = .4, which is why b ⇒ⁱM1 f. However, since P1(b ∧ f ∧ w) = P1({s4}) = .4 < P1(b ∧ ¬(f ∧ w)) = P1({s1, s2, s3}) = .6, we also have b ⇏ⁱM1 f ∧ w, indeed even b ⇒ⁱM1 ¬(f ∧ w).

This means that AND is violated for the induced consequence relation. The model allows for a situation where b ⇒ⁱM1 f, b ⇒ⁱM1 w, and b ⇒ⁱM1 ¬(f ∧ w), albeit {f, w, ¬(f ∧ w)} is an inconsistent set. Similarly, other central properties of nonmonotonic entailment, such as CT, fail in our naive approaches.
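The computations of Example 54 can be replayed directly; the sketch below is my own encoding of the states of Table 11 as (b, f, w)-triples with their P1-weights (zero-weight states omitted):

```python
# States of Table 11 with probability function P1.
P1 = {(1, 0, 0): .2, (1, 0, 1): .2, (1, 1, 0): .2, (1, 1, 1): .4}

def prob(P, A):
    """P(A): total weight of the states satisfying predicate A."""
    return sum(weight for s, weight in P.items() if A(s))

b = lambda s: s[0] == 1
f = lambda s: s[1] == 1
w = lambda s: s[2] == 1
fw = lambda s: f(s) and w(s)

# b => w and b => f under Naive 1 ...
assert prob(P1, lambda s: b(s) and w(s)) > prob(P1, lambda s: b(s) and not w(s))
assert prob(P1, lambda s: b(s) and f(s)) > prob(P1, lambda s: b(s) and not f(s))
# ... but b => f & w fails, so AND is violated:
assert prob(P1, lambda s: b(s) and fw(s)) < prob(P1, lambda s: b(s) and not fw(s))
```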

In view of these weaknesses, Adams introduced another approach. In his semantics the degree of assertability of A ⇒ B is modelled by the conditional probability P(B|A). In a nutshell, the central idea is that A ⇒ B is entailed by a set of defaults D in case its assertability approximates 1 when the elements of D are interpreted as increasingly assertable. In formal terms, we call P a proper probability function for D in case P(B|A) > 0 for all A ⇒ B ∈ D. We define:

Definition 14.1 (ϵ-entailment, Adams, 1975; Pearl, 1989). Let D ∪ {A ⇒ B} be a set of defaults. We define: A ⇒ B ∈ Cnϵ(D) iff, for any ϵ ∈ (0,1], there is a δ ∈ (0,1] such that for all proper probability functions P for D: if P(E|C) ≥ 1 − δ for all C ⇒ E ∈ D, then P(B|A) ≥ 1 − ϵ.

Does this approach lead to a more well-behaved entailment relation and what are characteristic properties of ϵ-entailment? Let us have another look at our example.

Example 55 (Example 54 cont.). Where D = {b ⇒ ¬f, b ⇒ w}, we have, for instance, b ⇒ ¬f ∧ w ∈ Cnϵ(D). In order to show this, let ϵ ∈ (0,1] be arbitrary and consider a probability function P. We need to find a δ ∈ (0,1] such that if P(¬f|b), P(w|b) ≥ 1 − δ, then P(¬f ∧ w|b) ≥ 1 − ϵ. Let δ = ϵ/2 and suppose that P(¬f|b), P(w|b) ≥ 1 − δ. Then,

P(¬f ∧ w|b) = 1 − P(f ∨ ¬w|b) = 1 − P((f ∨ ¬w) ∧ b)/P(b) ≥ 1 − (P(f ∧ b)/P(b) + P(¬w ∧ b)/P(b)) = 1 − (P(f|b) + P(¬w|b)) = 1 − ((1 − P(¬f|b)) + (1 − P(w|b))) = P(¬f|b) + P(w|b) − 1 ≥ 2(1 − δ) − 1 = 1 − 2δ = 1 − ϵ.

What we have just shown in the context of our example is not coincidental. Indeed, the proof of AND for ϵ-entailment follows exactly the structure of the proof in Example 55. What is more, ϵ-entailment can be shown to coincide with system P for finite knowledge bases (Geffner, 1992; Lehmann & Magidor, 1990): a remarkable correspondence between two rather different perspectives on the meaning of ⇒.
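The crucial step in Example 55 is the conditional union bound P(¬f ∧ w | b) ≥ P(¬f | b) + P(w | b) − 1. The following randomized spot check (a sanity check of the inequality, not a proof) probes it on arbitrary probability functions over the eight (b, f, w)-states:

```python
import random

random.seed(0)
states = [(bb, ff, ww) for bb in (0, 1) for ff in (0, 1) for ww in (0, 1)]

for _ in range(1000):
    # A random strictly positive probability distribution over the states.
    weights = [random.random() + 1e-6 for _ in states]
    total = sum(weights)
    P = {s: x / total for s, x in zip(states, weights)}

    def prob(A):
        return sum(p for s, p in P.items() if A(s))

    pb = prob(lambda s: s[0])

    def cond(A):  # P(A | b)
        return prob(lambda s: s[0] and A(s)) / pb

    lhs = cond(lambda s: not s[1] and s[2])             # P(~f & w | b)
    rhs = cond(lambda s: not s[1]) + cond(lambda s: s[2]) - 1
    assert lhs >= rhs - 1e-12                           # the union bound holds
```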

14.2 Other Quantitative Approaches

We close this section with some pointers to related approaches. While in Adams’ approach we find a probabilistic characterization of P-entailment, the reader may wonder whether R-entailment can also be represented by a quantitative approach. Indeed, utilizing a nonstandard probabilistic approach including infinitesimal values, Lehmann and Magidor (1992) present a variant of Adams’ system that characterizes rational entailment.

Instead of probability measures, other quantitative measures have been utilized in the literature to give meaning to defaults. We let S again be a finite set of situations and v: Atoms → ℘(S) an assignment function.

  • A possibility measure (Dubois & Prade, 1990) Poss: ℘(S) → [0,1] determines the possibility of a set of situations, from impossible (0) to maximally possible (1).Footnote 62 It is required that Poss(∅) = 0, Poss(S) = 1, and for any S′ ⊆ S, Poss(S′) = max_{s∈S′} Poss({s}). A possibilistic model M is of the form ⟨S, Poss, v⟩.

  • An ordinal ranking function (Goldszmidt & Pearl, 1992; Spohn, 1988) κ: ℘(S) → {0, 1, …, ∞} associates each set of situations with a level of surprise, from unsurprising (0) to shocking (∞). It is required that κ(S) = 0, κ(∅) = ∞, and, for any S′ ⊆ S, κ(S′) = min_{s∈S′} κ({s}). An ordinal ranking model M is of the form ⟨S, κ, v⟩.

It is easy to see that letting s ≺ s′ iff Poss({s}) > Poss({s′}) in the context of a possibilistic model M = ⟨S, Poss, v⟩ [resp. iff κ({s}) < κ({s′}) in the context of an ordinal ranking model M = ⟨S, κ, v⟩] gives rise to a ranked model M′ = ⟨S, ≺, v⟩.

In each of these approaches the meaning of defaults in a given model is defined analogously to the underlying idea of (⋆⋆):

  • Where M = ⟨S, Poss, v⟩ is a possibilistic model, we let A ⇒M B iff Poss([[A]]) = 0 or Poss([[A ∧ B]]) > Poss([[A ∧ ¬B]]).

  • Where M = ⟨S, κ, v⟩ is an ordinal ranking model, we let A ⇒M B iff κ([[A]]) = ∞ or κ([[A ∧ ¬B]]) > κ([[A ∧ B]]).

We say that a possibilistic model [resp. an ordinal ranking model] M is a model of a set of defaults D in case A ⇒M B for all A ⇒ B ∈ D.

For instance, a possibilistic model verifies A ⇒ B just in case A is impossible, or if A ∧ B is strictly more possible than A ∧ ¬B. Since ranking functions model the level of surprise an agent would face when learning that some A is true, according to a ranking function A ⇒ B is valid in case A would cause maximal surprise or if learning about A ∧ B would cause strictly less surprise than learning about A ∧ ¬B.

Example 56 (Ex. 53 cont.). We consider the set of states S in Example 53. Fig. 30 shows a cardinal ranking function κ and a possibility function Poss. Recall that for SS,

κ(S)=minsS(κ({s}))andPoss(S)=maxsS(Poss({s})),

which is why the figure fully characterizes κ and Poss by illustrating what values are assigned to single states. Where Mκ=⟨S,κ,v⟩ and MPoss=⟨S,Poss,v⟩, we have:

  • b ⇒_{Mκ} f∨w since κ([[b∧(f∨w)]]) = κ({s4}) = 0 < 1 = min_{s∈{s1,s2,s3}}(κ({s})) = κ([[b∧¬(f∨w)]]),

  • b ⇒_{MPoss} f∨w since Poss([[b∧(f∨w)]]) = Poss({s4}) = 1 > .9 = max_{s∈{s1,s2,s3}}(Poss({s})) = Poss([[b∧¬(f∨w)]]),

  • ¬w ⇏_{Mκ} ¬f since κ([[¬w∧¬f]]) = 1 ≮ 1 = κ([[¬w∧f]]), and

  • ¬w ⇏_{MPoss} ¬f since Poss([[¬w∧¬f]]) = .9 ≯ .9 = Poss([[¬w∧f]]).


Figure 30 Example 56. (Left) The cardinal ranking κ. (Right) The possibility function Poss.

Entailment relations are induced in the usual way. Given a set of defaults D, we let

  • A⇒B ∈ Cnposs(D) iff for all possibilistic models M=⟨S,Poss,v⟩ of D, A ⇒_M B,

  • A⇒B ∈ Cnrank(D) iff for all ordinal ranking models M=⟨S,κ,v⟩ of D, A ⇒_M B.

It is a most astonishing result in NML that all these different perspectives lead exactly to a characterization of P-entailment, a result that strongly underlines the central character of its underlying reasoning principles.

Theorem 14.1 (Dubois and Prade, 1991; Geffner, 1992; Lehmann and Magidor, 1992). Let D be a finite set of defaults. We have: A⇒B ∈ CnP(D) iff A⇒B ∈ Cnϵ(D) iff A⇒B ∈ Cnposs(D) iff A⇒B ∈ Cnrank(D) iff A⇒B ∈ CnR(D).

15 A Preferential Semantics for Some NMLs

In this section we present a preferential semantics for logics based on temperate accumulation and knowledge bases of the type ⟨As,Ad,RL⟩,Footnote 63 such as Rescher and Manor’s logics based on maxicon sets and Makinson’s default assumptions (see Section 11.3.1).

In fact, this is exactly the semantics we introduced in Section 5.3, so our main aim in this section is to show its adequacy for temperate accumulation. We refer to Examples 21 and 22 in Section 5.3 for an illustration of this idea.

Let us briefly recall the general setup. We work in the context of a Tarski logic L (such as classical logic) which has an adequate model-theoretic semantic representation: for any set of L-sentences S∪{A} it holds: S⊢A iff for all L-models M of S (i.e., M⊨B for all B∈S) it is the case that M⊨A. In particular, we assume that the consistency of a set of sentences S can be expressed by ML(S) ≠ ∅.

In order to determine whether A defeasibly follows from K, we compare the L-models of the strict assumptions As in terms of how normally they interpret the defeasible assumptions in Ad. For this, we consider the normal part of a given model M, which is simply the subset of defeasible assumptions it validates: NK(M) =df {A∈Ad ∣ M⊨A}. Now we define an order ⪯K on the L-models of As by:

M ⪯K M′ iff NK(M) ⊇ NK(M′).

We select the most normal models of K and define a consequence relation |∼⪯ by:

K |∼⪯ A iff for all M ∈ min⪯K(M(As)), M⊨A.

In the following we show how |∼PExttem and |∼AExttem can be characterized by a semantics based on ⪯.Footnote 64 For this we make use of the characterization of temperate accumulation in terms of maxicon sets (see Lemma 10.1 and Theorem 10.1).

Theorem 15.1. Let K=⟨As,Ad,RL⟩ be a knowledge base. Then, K |∼PExttem A iff K |∼⪯ A.

The theorem follows in view of the following lemmas.

Lemma 15.1. For every M ∈ min⪯(M(As)), NK(M) ∈ maxcon(K).

Proof. Suppose that M ∈ min⪯(M(As)) and let D = NK(M). Thus, D∪As is consistent. Consider some D′⊆Ad for which D′∪As is consistent and D′⊇D. Then there is an M′ ∈ M(As∪D′). Since D′ ⊆ NK(M′), NK(M′) ⊇ NK(M), and by the ⪯-minimality of M, NK(M′) = D′ = D. Thus, D ∈ maxcon(K). □

Lemma 15.2. For every D ∈ maxcon(K) there is an M ∈ min⪯(M(As)) for which NK(M) = D.

Proof. Suppose D ∈ maxcon(K). By the consistency of D∪As, there is an M ∈ M(As∪D); since D ⊆ NK(M) and D is maximal, NK(M) = D. Consider an M′ ∈ M(As) for which NK(M′) ⊇ NK(M). Then NK(M′)∪As is consistent, and since D ⊆ NK(M′), the maximality of D yields NK(M′) = D. So NK(M′) = NK(M) and thus M ∈ min⪯(M(As)). □

Proof of Theorem 15.1. K |∼PExttem A, iff [by Proposition 11.3] for all D ∈ maxcon(K), A ∈ CnL(As∪D), iff [by Lemmas 15.1 and 15.2] for all M ∈ min⪯(M(As)), M⊨A, iff K |∼⪯ A. □

We now move on to characterize |∼AExttem semantically. We can capture this consequence relation by defining a threshold function τ on the degree of normality a selected model is allowed to have.

τ(K) = ⋂{NK(M) ∣ M ∈ min⪯(M(As))}    core(K) =df {M ∈ M(As) ∣ NK(M) ⊇ τ(K)}.

So the core of K consists of those models whose normal part contains at least all those sentences that are part of the normal parts of every ⪯-minimal model. Clearly, each ⪯-minimal model belongs to the core, but possibly also other models do. Let, moreover,

K |∼core A iff for all M ∈ core(K), M⊨A.

Given that core(K) ⊇ min⪯(M(As)), the consequence relation |∼core will typically give rise to a more cautious reasoning style than |∼⪯.

Example 57 (Example 21 cont.). For our K=⟨As,Ad,RCL⟩ with

As = {p, p∧¬ab1→q, p∧¬ab2→r, q∧¬ab3→¬r, r→s, ¬r→s},

and Ad = {¬ab1,¬ab2,¬ab3} we have three minimal models: M1 with NK(M1)={¬ab1,¬ab2}, M2 with NK(M2)={¬ab1,¬ab3}, and M3 with NK(M3)={¬ab2,¬ab3} (see Fig. 12). So, τ(K)=∅ and therefore core(K)=M(As), and K |∼core A iff As ⊢CL A. This highlights the fact that |∼core leads to a more cautious reasoning style than |∼⪯.

We now consider K2 = ⟨As2,Ad,RCL⟩, where As2 = As∪{q}. In Fig. 31 we highlight the models in core(K2). In this case min⪯(M(As2)) ⊊ core(K2) ⊊ M(As2). This is reflected, for instance, in the consequences K2 |∼core ¬ab1 while As2 ⊬CL ¬ab1, and K2 |∼⪯ ¬(ab2∧ab3) while K2 ⊮core ¬(ab2∧ab3).


Figure 31 The order ⪯ on the models of K2 in Example 57. Highlighted are the models in core(K2). The atoms p, q, and s are true in every model of As2.
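The claims about K in Example 57 can be checked by brute force. The sketch below enumerates all classical valuations, keeps those satisfying As, collects their normal parts, and selects the ⪯-minimal models as those with ⊆-maximal normal parts; the encoding of Ad by the atom names ab1, ab2, ab3 follows the example, while the Python representation is of course hypothetical.

```python
from itertools import product

atoms = ("p", "q", "r", "s", "ab1", "ab2", "ab3")

def satisfies_As(v):
    """The strict assumptions As of Example 57."""
    return (v["p"]
            and (not (v["p"] and not v["ab1"]) or v["q"])       # p & ~ab1 -> q
            and (not (v["p"] and not v["ab2"]) or v["r"])       # p & ~ab2 -> r
            and (not (v["q"] and not v["ab3"]) or not v["r"])   # q & ~ab3 -> ~r
            and (not v["r"] or v["s"])                          # r -> s
            and (v["r"] or v["s"]))                             # ~r -> s

def normal_part(v):
    """N_K(v): the defeasible assumptions ~ab_i that v validates."""
    return frozenset(a for a in ("ab1", "ab2", "ab3") if not v[a])

parts = set()
for bits in product((False, True), repeat=len(atoms)):
    v = dict(zip(atoms, bits))
    if satisfies_As(v):
        parts.add(normal_part(v))

# a model is minimal wrt the order iff its normal part is subset-maximal
minimal = {N for N in parts if not any(N < N2 for N2 in parts)}

print(sorted(sorted(N) for N in minimal))        # the three two-element normal parts
print(sorted(frozenset.intersection(*minimal)))  # tau(K) is empty
```

Running this confirms that exactly the three pairwise combinations of the ¬ab_i survive as maximal normal parts, so their intersection τ(K) is empty and core(K) = M(As).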

With Lemmas 15.1 and 15.2 we immediately get:

Corollary 15.1. Let K=⟨As,Ad,RL⟩ be a knowledge base and M ∈ M(As). Then, M ∈ core(K) iff NK(M) ⊇ ⋂maxcon(K).

Theorem 15.2. Let K=⟨As,Ad,RL⟩ be a knowledge base. Then, K |∼core A iff K |∼AExttem A.

Proof. Suppose K |∼AExttem A. Thus, by Proposition 11.3, ⋂maxcon(K) ∪ As ⊢L A. Let M ∈ core(K). By Corollary 15.1, NK(M) ⊇ ⋂maxcon(K) and so M ∈ M(⋂maxcon(K) ∪ As). Thus, M⊨A. So, K |∼core A.

Suppose K ⊮AExttem A. Thus, by Proposition 11.3, ⋂maxcon(K) ∪ As ⊬L A. So, there is an M ∈ M(As ∪ ⋂maxcon(K)) such that M⊭A. So, NK(M) ⊇ ⋂maxcon(K). By Corollary 15.1, M ∈ core(K). So, K ⊮core A. □

Combining our previous results with Proposition 11.3, we get:

Corollary 15.2. Let K=⟨As,Ad,RL⟩ be a knowledge base. Then,

  1. K |∼PExttem A iff K |∼Pextmcon A iff K |∼⪯ A.

  2. K |∼AExttem A iff K |∼Aextmcon A iff K |∼core A.

16 Logic Programming

Logic programming is a declarative approach to problem solving. The idea is that a user describes a given reasoning problem by means of a so-called logic program in a simple syntax, without the need of encoding an algorithm to solve the problem. Automated proof procedures or semantic methods are then used to provide answers to queries. With the addition of negation-as-failure(-to-prove) (Section 16.1) or default negation, logic programming became a key paradigm in NML. It gave rise to a rich variety of applications, from legal reasoning (Sergot et al., Reference Sergot, Sadri, Kowalski, Kriwaczek, Hammond and Cory1986), to planning (including applications for the Space Shuttle program in Nogueira et al. (Reference Nogueira, Balduccini, Gelfond, Watson and Barry2001)), to cognitive science (Stenning & Van Lambalgen, Reference Stenning and Van Lambalgen2008), and others. In this section we will introduce one of the central semantical approaches based on stable models (Section 16.2), which under the addition of classical negation became known as answer set programming (in short: ASP, Section 16.3). In Section 16.4 we note that ASP and default logic coincide under a translation and that ASP can be considered a form of formal argumentation.

16.1 Normal Logic Programs and Default Negation

A logic program in its simplest form is a collection of strict inference rules of the form

B1,…,Bn → A    (16.1.1)

where A,B1,…,Bn are atomic formulas (incl. ⊤ or ⊥).Footnote 65 These rules are called the clauses of the program. Factual information is represented by rules with empty bodies, such as →A. We reason with such programs as one would expect: C follows from a program Π={R1,…,Rn} just in case there is an argument based on R1,…,Rn with the conclusion C (recall Definition 5.1).

Similar to default logic, logic programming also accommodates defeasible assumptions in the body of rules such as:

On Sunday mornings Jane goes jogging, except it is stormy.

In logic programming the “except …” part is expressed with a dedicated negation ∼ whose exact interpretation we discuss in what follows:

sundayMorning, ∼stormy → jogging    (16.1.2)

More generally, we are now dealing with rules of the form

B1,…,Bn,∼C1,…,∼Cm → A    (16.1.3)

where A,B1,…,Bn,C1,…,Cm are atomic formulas. Sets of rules Π of the form (16.1.3) are called normal logic programs. The technical term for ∼ is “negation-as-failure(-to-prove)” or simply “default negation.” The basic idea is that ∼A is considered true in the context of Π unless there is an argument for A (based on Π). So, jogging is entailed by the program consisting only of the rule (16.1.2), but if we add →stormy it should not be entailed.

How can we define a nonmonotonic consequence relation for negation-as-failure? Prima facie, the following simple (but ultimately flawed) idea seems to be in its spirit. We consider arguments that can be built with the rules in the given program Π and that are based on defeasible assumptions of the type ∼A. Let for this ∼Π be the set of all formulas of the type ∼A, where A occurs in some rule in Π, and let KΠ be the knowledge base consisting of the defeasible assumptions ∼Π and the strict rules in Π. So KΠ is of the form ⟨As:∅, Ad:∼Π, Rs:Π, Rd:∅⟩, or in shorter notation ⟨∼Π,Π⟩. We then let an atom A be entailed by Π just in case the following two criteria are fulfilled:

  1. there is an argument a for A in ArgKΠ (recall Definition 5.1), and

  2. there is no argument for C in ArgKΠ for any ∼C occurring in a.

This would allow us to conclude jogging from

Π1 = {→sundayMorning, (sundayMorning,∼stormy→jogging)}.

The reason is that, where K1 = KΠ1 = ⟨∼Π1,Π1⟩, there is an argument a in ArgK1 for jogging, namely a = (→sundayMorning, ∼stormy → jogging), and there is no argument for stormy in ArgK1. At the same time, this approach blocks the conclusion jogging from Π1′ = Π1∪{→stormy} since now there is an argument b = →stormy for stormy in ArgK1′, where K1′ = KΠ1′.

However, we run quickly into problems with our naive approach once the logic programs are slightly more involved.

Example 58. Consider, for instance, the following logic program:

Π2 = {→s, ∼s→q, ∼q→r}

In this case, although it seems reasonable to infer r, our naive approach doesn’t permit it. To see this, we observe that the argument ar = (∼q→r) for r relies on the assumption ∼q. Although q can be concluded in view of a (counter-)argument aq = (∼s→q) based on the assumption ∼s, the latter is problematic since s follows strictly in Π2 by the argument as = (→s). This kind of reinstatement, in which an attacked argument is successfully defended by a nonattacked argument, cannot be handled by our naive approach.Footnote 66

Several ways that deal with such and similar problems have been proposed in various semantics for logic programming (see e.g., Eiter et al. (Reference Eiter, Ianni and Krennwallner2009)). In the following we will focus paradigmatically on one of the central approaches based on so-called stable models (Gelfond & Lifschitz, Reference Gelfond and Lifschitz1988).

16.2 Stable Models

A way to tackle the problem of reasoning with logic programs that contain default negation is by considering interpretations of programs. Let us start with the simple case of a ∼-free logic program Π consisting of rules of the form (16.1.1). A model M of Π is a function that associates each atom C occurring in a rule in Π with true (written M⊨C) or false (written M⊭C). As usual, we let M⊨⊤ and M⊭⊥. Where r = B1,…,Bn→A, we write M⊨r (“M validates r”) in case M⊨B1, …, M⊨Bn implies M⊨A. A compact representation is obtained by letting M be the set of those atoms in Π that it interprets as true (and so C∈M iff M⊨C).Footnote 67

Example 59. Let Π = {→p, p→q} and consider M1=∅, M2={p}, M3={q}, and M4={p,q}. Then, M1, M2, and M3 are not models of Π: M1 and M3 violate the first rule (since p∉M1 and p∉M3) and M2 violates the second rule (since q∉M2 although p∈M2). The only model of Π is M4.

It is easy to see that a ∼-free program Π has a minimal model, that is, a model M of Π such that M ⊆ M′ for all other models M′ of Π. In fact, as the reader can easily verify, the minimal model will consist exactly of the conclusions of arguments based on the rules in Π.

Fact 16.1. Let Π be a ∼-free program. Then, Con[ArgKΠ] is the minimal model of Π.
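For ∼-free programs the minimal model can also be computed directly, by iterating forward rule application until a fixpoint is reached. The rule representation below (a pair of body atoms and head) is a hypothetical encoding.

```python
def minimal_model(rules):
    """Least fixpoint of forward application of rules (body_atoms, head)."""
    model, changed = set(), True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

# Example 59 without the violating candidates: Pi = { ->p, p->q }
print(sorted(minimal_model([((), "p"), (("p",), "q")])))  # ['p', 'q']
```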

As we have seen in the previous section, things get more interesting when we also consider default negation ∼. For this we adjust the notion of validity in a model.

Definition 16.1. Let M∪{C} be a set of atoms. We let

  • M⊨C, iff, C∈M or C=⊤, and

  • M⊨∼C, iff, M⊭C.

Where r is a rule of type (16.1.3),

  • M⊨r, iff, M⊨B1, …, M⊨Bn, M⊨∼C1, …, M⊨∼Cm implies M⊨A.

We write M∼Π for the set {∼C ∈ ∼Π ∣ M⊨∼C} and Atoms(Π) for the set of atoms occurring in Π.

Let Π be a normal logic program and M ⊆ Atoms(Π). We say that M is a model of Π in case M⊨r for all r∈Π. We write M(Π) for the set of all models of Π.
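Definition 16.1 is straightforward to implement. In the sketch below a rule is a hypothetical triple (positive body, ∼-negated body, head), and a candidate interpretation is simply a set of atoms.

```python
def validates(M, rule):
    """M validates r: if all positive body atoms are in M and no
    ~-negated body atom is in M, then the head must be in M."""
    pos, neg, head = rule
    body_true = all(a in M for a in pos) and all(c not in M for c in neg)
    return (not body_true) or head in M

def is_model(M, program):
    return all(validates(M, r) for r in program)

# Pi2 = { ->s, ~s->q, ~q->r } from Example 58
pi2 = [((), (), "s"), ((), ("s",), "q"), ((), ("q",), "r")]
print(is_model({"s", "r"}, pi2), is_model({"s"}, pi2))  # M1 is a model, M3 is not
```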

By having another look at Π2 we note that not all models of a given program are equally representative of a rational reasoner.

Example 60 (Example 58 cont.). We consider the following candidates for models of Π2:

M1 = {s,r}    M2 = {s,q,r}    M3 = {s}

We have Mi ⊨ ∼q for i∈{1,3} and M2 ⊭ ∼q. M3 is not a model of Π2 since M3 ⊭ ∼q→r. M1 and M2 are models of Π2.

However, we also notice problems with M2. In particular we have M2⊨q, although the only argument for q based on Π2 is (∼s→q), while M2 ⊭ ∼s. So, q is “unfounded” in M2: it is valid but not supported by an argument in M2. A desideratum for us will thus be that models M of a program Π are founded in these programs in the sense that every atom contained in M can be inferred by means of Π and the defeasible assumptions M∼Π in M. Let us make this precise.

Definition 16.2. Let Π be a normal logic program and M ⊆ Atoms(Π) be a model. We let KΠM =df ⟨M∼Π,Π⟩ be the knowledge base consisting of the defeasible assumptions in M∼Π and the rules in Π. A model M of Π is founded (in Π) if for each A∈M there is an argument a ∈ ArgKΠM with conclusion A (so, M = Con[ArgKΠM] ∩ Atoms(Π)).

In order to filter out unfounded models, Gelfond and Lifschitz (Reference Gelfond and Lifschitz1988) have proposed the concept of a reduction program.

Definition 16.3. Given a model M of Π, we let the reduction of Π by M, written ΠM, be the result of (i) replacing each occurrence of a ∼-negated formula ∼C in Π by ⊤ in case M⊨∼C and by ⊥ else, and of (ii) adding the rule →⊤.

Definition 16.4. Let Π be a normal logic program. M is a stable model of Π in case it is identical to the minimal model of ΠM. We write stable(Π) for the set of stable models of Π.

It is reassuring to note that (a) ΠM is a ∼-free program and therefore has a minimal model (see Fact 16.1), and (b) if M is a model of ΠM, then it is also a model of Π.

Lemma 16.1. Let M ⊆ Atoms(Π) and M ∈ M(ΠM). Then, M ∈ M(Π).

Proof. Let (A1,…,An,∼C1,…,∼Cm→B) ∈ Π be such that A1,…,An ∈ M and M⊨∼Ci for each i∈{1,…,m}. Thus, (A1,…,An,⊤,…,⊤→B) ∈ ΠM. Since M ∈ M(ΠM), B∈M. □

Example 61 (Example 60 cont.). Let us put this idea to a test with Π2 and the two models M1={s,r} and M2={s,q,r}.

Π2       Π2M1     Π2M2
∼q→r     ⊤→r      ⊥→r
∼s→q     ⊥→q      ⊥→q
→s       →s       →s
         →⊤       →⊤

The minimal model of Π2M1 is M1; the minimal model of Π2M2 is {s} ≠ M2. So, as expected, while M1 is a stable model of Π2, M2 is not.
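Stable models can be found by guess-and-check: guess a candidate M, build the reduct, and compare M with the reduct's minimal model. Instead of replacing ∼-literals by ⊤ or ⊥ as in Definition 16.3, the sketch below equivalently drops every rule containing a ∼-literal falsified by M and strips the remaining ∼-literals; the rule encoding as triples is hypothetical.

```python
from itertools import chain, combinations

def reduct(rules, M):
    """Keep a rule (pos, neg, head) iff no ~C in it has C in M; strip its ~-part.
    (Equivalent to the T/F replacement of Definition 16.3.)"""
    return [(pos, head) for pos, neg, head in rules if not (set(neg) & M)]

def minimal_model(defrules):
    """Least fixpoint of forward application of ~-free rules (body, head)."""
    model, changed = set(), True
    while changed:
        changed = False
        for body, head in defrules:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    atoms = sorted({a for pos, neg, head in rules for a in (*pos, *neg, head)})
    candidates = chain.from_iterable(
        combinations(atoms, k) for k in range(len(atoms) + 1))
    return [set(M) for M in map(set, candidates)
            if minimal_model(reduct(rules, M)) == M]

# Pi2 = { ->s, ~s->q, ~q->r }: the unique stable model is {s, r}
pi2 = [((), (), "s"), ((), ("s",), "q"), ((), ("q",), "r")]
print(stable_models(pi2))
```

On Πconf of Example 63 the same procedure yields the two stable models {p} and {q}.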

Stable models do not exist for every program. Indeed, for some logic programs the only existing models are unfounded ones.

Example 62. A case in point is Π = {∼p→p}. Note that M0=∅ is not a model of Π since M0⊨∼p and so p would have to be true in M0 for it to be a model of Π. So we are left with M1={p}. But this model is not founded.Footnote 68

Programs containing conflicts may give rise to several stable models.

Example 63. As a simple example, consider Πconf = {∼q→p, ∼p→q}. M0=∅ is not a model of Πconf, since both rules are applicable in M0, but p,q∉M0. On the other hand, Mpq={p,q} is not minimal (and hence unfounded), since neither rule is applicable in Mpq. We are left with Mp={p} and Mq={q}. As the reader can easily verify, these two models are stable.

16.3 Extended Logic Programs and Answer Sets

So far we have limited our attention to a rather weak language, consisting only of atoms and their default negations. We will now add another negation ¬ to the mix, which behaves more similarly to classical negation. This puts us in the realm of extended logic programs, which are sets of rules of the form

ℓ1,…,ℓn,∼ℓ1′,…,∼ℓm′ → ℓ    (16.3.1)

where ℓ,ℓ1,…,ℓn,ℓ1′,…,ℓm′ are ¬-literals, that is, atoms or ¬-negated atoms. Lit¬(Π) denotes the set of all ¬-literals occurring in an extended program Π.

It is our task now to enhance the notion of a model to extended programs. A simple way is by means of a translation τ of a given extended program Π to a normal program τ(Π) (Gelfond & Lifschitz, Reference Gelfond and Lifschitz1991) in which each occurrence of some ℓ = ¬p is replaced by a new atom p′ (not occurring in Π). We then consider only those models M of τ(Π) for which

  • p∉M or p′∉M for all atoms p, or

  • M contains all p and p′ for the atoms p in Π.Footnote 69

We then translate M back by considering τ−1(M) = {A∈M ∣ A∈Atoms} ∪ {¬A ∣ A′∈M}, replacing atoms of the form p′ by ¬p. If M is a stable model of τ(Π) then we define τ−1(M) to be a stable model of Π. The latter are also known as answer sets of Π.
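The translation τ and its inverse are simple syntactic transformations. In the sketch below a ¬-literal is a string optionally prefixed with "¬", the fresh atom p′ is modeled by the string "p'", and the rule encoding as a triple (positive body, ∼-negated body, head) is hypothetical.

```python
def tau_lit(l):
    """Map a ¬-literal to an atom of the normal program: ¬p becomes p'."""
    return l[1:] + "'" if l.startswith("¬") else l

def tau_rule(rule):
    pos, neg, head = rule
    return (tuple(map(tau_lit, pos)), tuple(map(tau_lit, neg)), tau_lit(head))

def tau_inv(M):
    """Translate a model of tau(Pi) back: p' becomes ¬p again."""
    return {"¬" + a[:-1] if a.endswith("'") else a for a in M}

# a rule in the style of r2 below: working, ~jogging -> ¬jogging
r2 = (("working",), ("jogging",), "¬jogging")
print(tau_rule(r2))
print(sorted(tau_inv({"working", "jogging'"})))
```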

Of course, we can also define a nonmonotonic consequence relation based on answer sets: where A is a ¬-literal or a ∼-negated ¬-literal and Π is an extended logic program, we let:

Π |∼asp A iff for all answer sets M of Π, M⊨A.

Example 64. We consider the extended logic program Π3 consisting of:

r1 = sundayMorning, ∼stormy, ∼¬jogging → jogging
r2 = working, ∼jogging → ¬jogging
r3 = →sundayMorning
r4 = →working

In the translated program τ(Π3), rules r1 and r2 are replaced by:

r1′ = sundayMorning, ∼stormy, ∼jogging′ → jogging
r2′ = working, ∼jogging → jogging′

We have two stable models of τ(Π3), as the reader can easily verify by inspecting Table 12.

M1 = {sundayMorning, working, jogging} and M2 = {sundayMorning, working, jogging′}.

So, the answer sets of Π3 are M1 and

τ−1(M2) = {sundayMorning, working, ¬jogging}.

We note that ¬stormy ∉ M1∪M2, but M1 ⊨ ∼stormy and M2 ⊨ ∼stormy. This shows that negation-as-failure as interpreted in answer set programming does not realize a closed-world assumption in the strong sense that every atom A that is not derivable is interpreted as strongly negated, ¬A.Footnote 70

If we add the additional rule →stormy to Π3, resulting in Π3′, we end up with one answer set, namely

M3 = {sundayMorning, working, stormy, ¬jogging}.

In terms of nonmonotonic consequence we have

  • Π3 |∼asp ∼stormy, while Π3′ |∼asp stormy, and

  • Π3′ |∼asp ¬jogging, while Π3 ⊮asp jogging and Π3 ⊮asp ¬jogging.

Table 12 Models of τ(Π3) (Example 64)

                 M1   M2   M3   M4   M5
working          ✓    ✓    ✓    ✓    ✓
sundayMorning    ✓    ✓    ✓    ✓    ✓
jogging          ✓              ✓    ✓
jogging′              ✓    ✓         ✓
stormy                     ✓    ✓    ✓
M⊨r1′            ✓    ✓    ✓    ✓    ✓
M⊨r2′            ✓    ✓    ✓    ✓    ✓

Example 65. There are extended programs with only inconsistent (stable) models, for example, Π = {→p, →¬p}. The only model of τ(Π) = {→p, →p′} is M = {p,p′}. So the only model of Π is τ−1(M) = {p,¬p}.

16.4 Answer Sets, Defaults, and Argumentation

Answer sets are closely related to the extensions of Reiter’s default logic (Section 12.2).Footnote 71 We can translate a clause r of the form (16.3.1) to a (possibly) nonnormal default by

τrei(r) = ℓ1 ∧ … ∧ ℓn : ℓ̅1′, …, ℓ̅m′ / ℓ, where ℓ̅ denotes the complement of a ¬-literal ℓ: for an atom A, A̅ =df ¬A and ¬A̅ =df A. Let the resulting translation of an extended program Π be

Krei(Π) = ⟨As:∅, RCL, Rd:{τrei(r) ∣ r∈Π}⟩.

Example 66. We have Krei(Π3) = ⟨∅, RCL, Rd⟩, where Rd consists of the following general default rules:

d1 = sundayMorning : ¬stormy, jogging / jogging
d2 = working : ¬jogging / ¬jogging
d3 = : / sundayMorning
d4 = : / working

Theorem 16.1 (Gelfond and Lifschitz, 1991). Let Π be an extended program and M ⊆ Lit¬(Π). Then,

  1. if M ∈ stable(Π), then CnCL(M) ∈ PExtgr(Krei(Π)), and

  2. for every E ∈ PExtgr(Krei(Π)) there is exactly one M ∈ stable(Π) for which E = CnCL(M).

Given this result the metatheoretic results for Reiter’s greedy approach immediately apply (see Section 12.1), such as cautious transitivity for |asp.Footnote 72

In the following we show that answer sets can also be expressed in logical argumentation.Footnote 73 We will improve on our previous naive attempt (see Section 16.1 and recall the problematic Example 58) by allowing for reinstatement.

Definition 16.5. Let Π be an extended logic program. We let AFΠ = ⟨ArgKΠ, ⇝⟩, where KΠ = ⟨∼Π,Π⟩ and for a,b ∈ ArgKΠ, we let a attack b (in signs a⇝b) if there is a sub-argument ∼ℓ of b such that Con(a) = ℓ.

Example 67 (Example 64 cont.). We consider Π3 and list arguments in ArgKΠ3 that give rise to the argumentation framework in Fig. 32 (left).

a0 = →working          b1 = ∼jogging     c1 = (a1, b2, b3 → jogging)
a1 = →sundayMorning    b2 = ∼¬jogging    c2 = (a0, b1 → ¬jogging)
                       b3 = ∼stormy

We obtain two stable extensions of AFΠ3:

E1={a0,a1,b2,b3,c1}  and  E2={a0,a1,b1,b3,c2}.

The sets of conclusions (in Lit¬(Π3)) of the arguments in the two stable extensions correspond to the two answer sets of Π3, namely:

M1 = {working, sundayMorning, jogging} and M2 = {working, sundayMorning, ¬jogging}.

Figure 32 Argumentation framework for Examples 67 (left) and 68 (right). We omit nonattacked arguments.

Example 68 (Example 58 cont.). We consider the problematic example for our naive argumentation-based account, Π2. We have the following arguments in ArgKΠ2, giving rise to the argumentation framework in Fig. 32 (right).

The unique stable extension of AFΠ2 is E={b1,a2,b3} (highlighted). The set of atoms in Con[E] is identical to the only stable model of Π2, namely {s,r}.

a1 = ∼s    b1 = →s    b2 = (a1 → q)
a2 = ∼q               b3 = (a2 → r)

The correspondence is not coincidental. For a given extended logic program Π, let stablec(Π) be the set of consistent answer sets of Π, that is, those stable models of Π that do not contain contradictory literals.Footnote 74

Theorem 16.2. Let Π be an extended logic program.

  1. If M ∈ stablec(Π), then ArgKΠ(M∼Π) ∈ stable(AFΠ).

  2. If E ∈ stable(AFΠ) and M = Con[E] ∩ Lit¬, then M ∈ stablec(Π).

Selected Further Readings

Friedman and Halpern (Reference Friedman and Halpern1996) provided a unifying approach to default reasoning based on plausibility orders covering many of the previously mentioned NMLs, such as the preferential semantics of Kraus et al. (Reference Kraus, Lehman and Magidor1990), the possibilistic approach by Benferhat et al. (Reference Benferhat, Dubois, Prade, Nebel, Rich and Swartout1992), ordinal rankings by Spohn (Reference Spohn, Harper and Skyrms1988), and ϵ-semantics (Adams, Reference Adams1975; Pearl, Reference Pearl1989). Another generalization is provided in Arieli and Avron (Reference Arieli and Avron2000), who go beyond a classical base logic. Preferential conditionals have been embedded in the scope of a full logical language (so that they are allowed to occur in the scope of logical connectives such as ∧,∨,¬) in conditional logics (Asher & Morreau, Reference Asher, Morreau and van Eijck1991; Boutilier, Reference Boutilier1994a; Friedman & Halpern, Reference Friedman and Halpern1996). First-order versions of preferential consequence relations and conditional logics have been investigated, for example, in Delgrande (Reference Delgrande1998), Friedman et al. (Reference Friedman, Halpern and Koller2000), and Lehmann and Magidor (Reference Lehmann and Magidor1990). Proof theories for conditional logics in the style of Kraus et al. (Reference Kraus, Lehman and Magidor1990) can be found in Giordano et al. (Reference Giordano, Gliozzi, Olivetti and Pozzato2009), and for rational closure in Straßer (Reference Straßer, Carnielli and D’Ottaviano2009b) in terms of adaptive logics. Deep connections between preferential approaches and belief revision have been observed in many places, for example, Boutilier (Reference Boutilier1994b), Gärdenfors (Reference Gärdenfors1990), and Rott (Reference Rott2021).

Logics based on preferential semantics and logic programming have been characterized in terms of artificial neural nets; see for example Besold et al. (Reference Besold, d’Avila Garcez, Stenning, van der Torre and van Lambalgen2017), Hölldobler and Kalinke (Reference Hölldobler and Kalinke1994), and Leitgeb (Reference Leitgeb2018).

An overview and introduction to logic programming with an emphasis on answer sets is, in book form, Lifschitz (Reference Lifschitz2019), and more compactly, Eiter et al. (Reference Eiter, Ianni and Krennwallner2009). As the reader will expect, many variants of logic programming exist, including disjunctions (Minker, Reference Minker1994), preferences (Schaub & Wang, Reference Schaub and Wang2001), probabilities (Ng & Subrahmanian, Reference Ng and Subrahmanian1992) with connections to deep learning (Manhaeve et al., Reference Manhaeve, Dumanéiæ, Kimmig, Demeester and De Raedt2021), and so on. Logic programming has been successfully applied in the psychology of reasoning (Saldanha, Reference Saldanha2018; Stenning & Van Lambalgen, Reference Stenning and Van Lambalgen2008).

Acknowledgments

I want to thank Kees van Berkel, Matthis Hesse, Jessica Krumhus, and Dunja Šešelja for their highly valuable feedback on previous drafts. I am also much obliged to Joke Meheus and Diderik Batens for introducing me to the wonderful world of NMLs. Finally, not to forget Brad and Fred: thanks to my editors, Brad Armour-Garb and Fred Kroon, for their support, trust, and good mood throughout the whole process.

Philosophy and Logic

  • Bradley Armour-Garb

  • SUNY Albany

  • Bradley Armour-Garb is chair and Professor of Philosophy at SUNY Albany. His books include The Law of Non-Contradiction (co-edited with Graham Priest and J. C. Beall, 2004), Deflationary Truth and Deflationism and Paradox (both co-edited with J. C. Beall, 2005), Pretense and Pathology (with James Woodbridge, Cambridge University Press, 2015), Reflections on the Liar (2017), and Fictionalism in Philosophy (co-edited with Fred Kroon, 2020).

  • Frederick Kroon

  • The University of Auckland

  • Frederick Kroon is Emeritus Professor of Philosophy at the University of Auckland. He has authored numerous papers in formal and philosophical logic, ethics, philosophy of language, and metaphysics, and is the author of A Critical Introduction to Fictionalism (with Stuart Brock and Jonathan McKeown-Green, 2018).

About the Series

  • This Cambridge Elements series provides an extensive overview of the many and varied connections between philosophy and logic. Distinguished authors provide an up-to-date summary of the results of current research in their fields and give their own take on what they believe are the most significant debates influencing research, drawing original conclusions.


Footnotes

1 The term “defeasibility” entered philosophy with Hart’s discussion of legal contract as a defeasible concept (Hart, Reference Hart1948). Applied to duties, the idea occurs even earlier in Ross (Reference Ross1930), albeit under a different name, when elaborating on the prima facie character of duties. The defeasibility of arguments has been central to argumentation theory, starting from the writings of its pioneers such as Aristotle in his Topics (Aristotle, 1984) to modern classics such as Toulmin (Reference Toulmin1958) and Perelman and Olbrechts-Tyteca (Reference Perelman and Olbrechts-Tyteca1969) (for more on the history of the concept see Loui (Reference Loui1995)). In recent years, the notion of defeat has also gained significant attention in epistemology; see e.g., Moretti and Piazza (Reference Moretti and Piazza2017), Sudduth (Reference Sudduth, Fieser and Dowden2017), and Brown and Simion (Reference Brown and Simion2021).

2 The example is inspired by Byrne (Reference Byrne1989).

3 Modus ponens is the classically valid inference that sanctions the conclusion B in view of the information that A, and that A implies B.

4 We find a funny twist on defeasible reasoning with generics and a bird named Tweety in the cartoon world of Birdy and the Beast (Warner Bros., 1944). In the heat of hunting the canary Tweety, the cat Sylvester begins to fly and just after being reminded by Tweety that cats don’t fly, he loses this ability – mid air – and crashes. Much to the amusement of Tweety, this shows that defeasible argumentation can save lives.

5 Since the main aim of this Element is to introduce the central methods driving NMLs, we will not discuss specific applications such as abductive inferences or inductive generalizations in any further detail. For an introduction to inductive logic we refer to the Element by Eagle (Reference Eagle2024).

6 For a list of benchmark examples, see Lifschitz (Reference Lifschitz, Reinfrank, de Kleer, Ginsberg and Sandewall1989).

7 Developing semantics for generics is notoriously difficult. There are many approaches, from normality-based (e.g., Asher and Pelletier (Reference Asher, Pelletier, Mari, Beyssade and Prete2012)), to prototype-based (e.g., Heyer (Reference Heyer1990)), to Bayesian accounts (e.g., Tessler and Goodman (Reference Tessler and Goodman2019)). See Leslie and Lerner (Reference Leslie, Lerner, Zalta and Nodelman2022) for an overview.

8 Elio and Pelletier forcefully argue for a closer orientation on human reasoning practices (Elio & Pelletier, Reference Elio and Pelletier1994; Pelletier & Elio, Reference Pelletier and Elio1997). Empirical studies on the acceptance of central principles of NML can be found, e.g., in Benferhat et al. (Reference Benferhat, Bonnefon and da Silva Neves2005), Pfeifer and Kleiter (Reference Pfeifer and Kleiter2005), Saldanha (Reference Saldanha2018), and Schurz (Reference Schurz2005).

9 In order to simplify the technical depth in this Element, we will use the language of propositional/sentential logic, rather than that of predicate logic. This will sometimes lead to less elegant translations of natural language sentences than would be possible in predicate logic (e.g., Nixon→Dove instead of Dove(Nixon)).

10 An overview on discussions surrounding floating conclusions can be found in Horty (Reference Horty2002).

11 Formal languages underlying specific NMLs are often richer. For instance, they may contain predicate symbols, quantifiers, modal operators or (nonmonotonic) conditionals. Nevertheless, for the introduction in this Element, a purely propositional language will suffice.

12 We suppose that ∧ and ∨ are commutative, associative, and idempotent.

13 Seminal studies of these properties can be found in Gabbay (1985) and Kraus et al. (1990).

14 We silently interpret this and the following properties under universal quantification over sets of sentences S, sentences A, B, etc.

15 S and T are classically equivalent, iff, for all A∈T, S⊢CLA iff T⊢CLA.

16 The original discussion in Kraus et al. (1990) concerns the case in which the left-hand side of |∼ is a single sentence.

17 See, for instance, Kelly and Lin (2021) and R. Stalnaker (1994) for critical views on RM.

18 The notions of defeat and rebuttal will be discussed in more detail in the context of formal argumentation (Part II).

19 See Prakken (2012), Rescher (1976), and Vreeswijk (1993). We capitalize these technical terms to distinguish them from their more general informal usage, i.e., “defeasible reasoning” refers to the general phenomenon as described in Section 1, while Defeasible Reasoning is the technical term described in this section.

20 A conditional is contrapositable in case ¬B→¬A follows from A→B or, more generally, if C∧¬B→¬A follows from C∧A→B.
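As an aside illustrating the definition (this code sketch is ours, not part of the Element; the helper name `entails` is hypothetical), classical contraposition in the generalized form can be verified by enumerating truth-value assignments:

```python
from itertools import product

def entails(premises, conclusion, n_atoms):
    """Classical entailment via truth tables: every valuation satisfying
    all premises must satisfy the conclusion."""
    return all(
        conclusion(v)
        for v in product([False, True], repeat=n_atoms)
        if all(p(v) for p in premises)
    )

# Atoms v = (C, A, B): the conditional C∧A→B and its contraposed form C∧¬B→¬A.
cond = lambda v: not (v[0] and v[1]) or v[2]
contra = lambda v: not (v[0] and not v[2]) or not v[1]

print(entails([cond], contra, 3))  # True: the conditional is contrapositable
```

Classically, the entailment also holds in the converse direction, so the two forms are equivalent; the point of the note is that many defeasible conditionals fail this pattern.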

21 One should not put too much philosophical emphasis on the term knowledge in the context of knowledge representation (e.g., knowledge bases may contain defeasible assumptions which do not have the status of true and justified beliefs). From a Brandomian perspective one may think of knowledge bases as representing base commitments of a reasoner, where the defeasible inference rules open an argumentative space of prima facie entitlements (Brandom, 2009).

22 In Section 2.2 we considered properties for consequence relations of the type |∼⊆℘(sentL)×sentL. A generalized study for |∼⊆KnmL×sentL is presented in Section 10.3.

23 In many NMLs the deductive base system is not modeled as part of a knowledge base, but simply presupposed and provided by classical logic. Other systems, such as logic programming (see Section 16.4) or some systems of structured argumentation theory (see Section 8), explicitly model the strict rule base as part of the knowledge base. In any case, most well-known NMLs come with a deductive base system and a defeasible rule base. For reasons of generality, we model both as part of the knowledge base.

24 Pioneering research on deontic logic and subjunctive conditionals gave rise to many systems that incorporate nonmonotonic conditionals in the object language, for example, Van Fraassen (1972) and Hansson (1969) in deontic logic, logics by Lewis for deontic reasoning (Lewis, 1974) and counterfactuals (Lewis, 1973), R. F. Stalnaker (1968) on counterfactuals, etc.

25 In this Element we will mostly consider knowledge bases for which there are no metarules, i.e., Rm=∅ (with the exception of temperate accumulation in Section 11, in particular input–output logics in Section 11.3.2).

26 Under the following interpretation, this example is known as the order puzzle in deontic logic (Horty, 2012): it is winter (p), open the window (q), turn on the heating (r).

27 In Sections 9 and 11.3.2 we also discuss a different take on arguments, not as proof trees but as premise-conclusion pairs.

28 Recall that (a) Def(K) consists of the defeasible assumptions Ad and rules Rd in K, and that (b) some NMLs come with associated knowledge bases that only contain one of the two types of defeasible elements. We use d as a metavariable for members of Def(K).

29 Recall that Con[ArgK(Def⋆)]={Con(a)∣a∈ArgK(Def⋆)}.

30 The idea has been proposed by various scholars (Kraus et al., 1990; Shoham, 1987). In circumscription (McCarthy, 1980) (for propositional versions see also Gelfond et al. (1989) and Satoh (1989)) and adaptive logics (Batens, 2007) we proceed in an inverted manner: instead of interpreting the information such that as many defeasible assumptions in Ad as possible are true, one works with a set of negative assumptions which are interpreted as false as much as possible.

31 See Moinard and Rolland (1998) for an overview of circumscription.

32 The reader is referred back to Section 5.1 where we introduced basic definitions such as the notion of an argument and the set ArgK.

33 For an overview of the state of the art in structured argumentation, see Arieli et al. (2021a).

34 See Beirlaen et al. (2018) for an overview of different accounts of argument strength.

35 Our definition follows Modgil and Prakken (2013, p. 364), according to whom preferences do not matter for defeats based on undercuts. See also Baroni et al. (2001) for more discussion.

36 Standard ASPIC+ therefore disallows rebuttals in heads of strict rules. For alternative approaches to ASPIC+ that lift this restriction, see Caminada et al. (2014) and Heyninck and Straßer (2019).

37 There are some differences between these accounts; e.g., in Besnard and Hunter (2001) the premise set of an argument is supposed to be minimal and consistent, strict assumptions are not considered, and the approach is based on classical logic, whereas Arieli and Straßer (2015) allow for any Tarskian base logic. Like Arieli et al. (2023) we here include defeasible assumptions, but we simplify the presentation in that we don’t rely on an underlying sequent calculus.

38 The terminology for attack forms in logical argumentation is inconsistent with the one used in ASPIC+. In order not to confuse readers familiar with logical argumentation, we don’t unify the terminology in this section.

39 For an overview of the relations between methods based on maxicon sets and structured argumentation, see Arieli et al. (2019).

40 We again use the notation with square brackets to denote the lifting of a function to sets of elements of its domain. E.g., where A is a set of arguments, Con[A]={Con(a)∣a∈A}.

41 Since the organization of conflicting knowledge bases into coherent units essentially underlies the reasoning process, one should consider knowledge representation, reasoning, and the study of consequences as deeply interwoven. For analytic purposes we nevertheless present these aspects separately in this Element.

42 Note that results marked with an asterisk are proven in the technical appendices.

43 In particular we have: A∪{A∨B}⊢LC iff A∪{A}⊢LC and A∪{B}⊢LC.
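To illustrate on an instance (our sketch, not the Element’s code; writing S for the premise set to avoid overloading A): with S={p}, disjuncts q and r, and conclusion C=p∧(q∨r), both sides of the equivalence come out true:

```python
from itertools import product

ATOMS = ("p", "q", "r")

def entails(premises, conclusion):
    """Classical entailment by truth tables; formulas map a valuation
    dict to a Boolean."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(f(v) for f in premises) and not conclusion(v):
            return False
    return True

S = [lambda v: v["p"]]
q_or_r = lambda v: v["q"] or v["r"]
C = lambda v: v["p"] and (v["q"] or v["r"])

lhs = entails(S + [q_or_r], C)                      # S∪{q∨r} ⊢ C
rhs = (entails(S + [lambda v: v["q"]], C)
       and entails(S + [lambda v: v["r"]], C))      # S∪{q} ⊢ C and S∪{r} ⊢ C
print(lhs, rhs)  # True True
```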

44 In view of Section 11.2.1 (Theorem 11.2) this characterization, by translation, also covers knowledge bases of the form ⟨As,Ad,Rs,Rd⟩.

45 A related family of logics is Adaptive Logics (Batens, 2007). They have been shown to give rise to the same consequence relations as Makinson’s default assumptions (Van De Putte, 2013). Instead of working with “positive” assumptions, the knowledge bases of adaptive logics consider negative assumptions, so-called “abnormalities” (see Section 2.3). Adaptive logics have found many applications, from abductive reasoning (Beirlaen & Aliseda, 2014) and inductive generalizations (Batens, 2011) to normative reasoning (Van De Putte et al., 2019) and default logic (Straßer, 2009a). For a book-length introduction see Straßer (2014).

46 The consequence relation |∼∩extmcon is also called the strong or universal entailment, |∼∩argmcon the free consequence, and |∼∪mcon the existential consequence. See Benferhat et al. (1997) for a detailed study of these consequence relations.

47 Input–output logics have two characterizations: a semantic one and a syntactic, proof-theoretic one. We focus here on the latter, since it coheres better with our overall presentation.

48 The situation is different for |∼∩AExtio and the systems IOi⋆ (for i∈{2,4}). For instance, the knowledge base K2 in Table 9 can easily be adjusted for input–output logics to serve as a counterexample.

49 In order to simplify things, we omit, e.g., priorities from the presentation in this section.

50 An alternative algorithm without the need to guess has been proposed in Łukaszewicz (1988). It also avoids the problem of nonexistent extensions in Example 44.

51 A similar translation is proposed in Bondarenko et al. (1997). The translation presented here is generalized to prioritized default theories in Straßer and Pardo (2021).

52 The idea of using semantic selections to give meaning to conditionals predates NMLs. Stalnaker used a selection function to give meaning to counterfactuals (R. F. Stalnaker, 1968), while Lewis used semantic spheres (Lewis, 1973).

53 RM has a corresponding principle CV in Lewis’s logic VC, as studied in the context of counterfactuals (Lewis, 1973).

54 An equivalent approach to Rational Closure has been defined on the basis of ϵ-semantics (see Section 14.1) under the name of system Z (Goldszmidt & Pearl, 1990; Pearl, 1990).

55 In case our language contains only finitely many atoms, say n many, there is an upper limit to the rank, namely 2^n.

56 Some follow-up work tackles this problem by penalizing models for violations of defaults in the model comparison (where M violates A⇒B if A holds in M but B does not), such that the penalty is higher the more specific the violated default is (Goldszmidt et al., 1993; Lehmann, 1995).

57 In Eagle (2024) the reader finds an introduction to inductive logics, which are used to probabilistically study the support that some evidence provides for a claim.

58 In order to slightly simplify the discussion, we stick to a finite language.

59 Adams considers knowledge bases of the form ⟨Ad,Rd⟩. Our restriction to knowledge bases consisting of sets of defaults D is without loss of generality since, in Adams’ system, factual assumptions A are equivalent to ⊤⇒A.

60 In Section 14.2 we show, however, that when utilizing other types of quantitative measures, this idea does not suffer from the problems discussed later for probability measures.

61 By using big-stepped probabilities, the problems indicated later for approaches based on probabilistic thresholds can be avoided (see Benferhat et al., 1999).

62 Necessity is a separate technical notion in possibility theory, defined by Nec(S′)=1−Poss(S∖S′).
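For concreteness (an illustrative sketch with a made-up possibility distribution; all names are ours), the duality between possibility and necessity can be computed directly:

```python
# Hypothetical possibility distribution over a set of worlds S = {w1, w2, w3}.
poss = {"w1": 1.0, "w2": 0.7, "w3": 0.3}

def Poss(worlds):
    # The possibility of a set of worlds is the maximum over its members.
    return max((poss[w] for w in worlds), default=0.0)

def Nec(worlds):
    # Nec(S') = 1 - Poss(S \ S'): the duality stated in the note.
    return 1.0 - Poss(set(poss) - set(worlds))

print(Poss({"w2", "w3"}))  # 0.7
print(Nec({"w1", "w2"}))   # 1 - Poss({"w3"}) = 0.7
```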

63 In Section 11.3.1 we have denoted this class of knowledge bases by Kmcon.

64 Also |∼∪exttem can be characterized by a similar semantics; for details, see how this is achieved in adaptive logics in, e.g., Batens (2007) and Straßer (2014). In adaptive logics the semantic selection for |∼∩PExttem corresponds to the so-called minimal abnormality strategy, while the semantic selection for |∼∩AExttem corresponds to the reliability strategy. Adaptive logics offer adequate dynamic proof theories for each of these semantic methods.

65 These rules are often written “A←B1,…,Bn”, but in order to keep our presentation coherent, we write them in the same way as the strict rules in previous sections. Since we don’t consider rules with empty conclusions in this Element, we (somewhat unorthodoxically) express clauses such as B1,…,Bn→ equivalently by B1,…,Bn→⊥. This is inconsequential for what follows.

66 We give an adequate argumentative characterization in Section 16.4.

67 The set of atoms that occur in a program Π is called its Herbrand base, and sets of atoms in the Herbrand base of Π are called Herbrand interpretations of Π.
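As an illustrative sketch (ours, not the Element’s; all function names are hypothetical), stable models of a small propositional normal program can be computed by testing each Herbrand interpretation against the Gelfond–Lifschitz reduct:

```python
from itertools import chain, combinations

def stable_models(atoms, rules):
    """Brute-force stable-model computation.
    rules: triples (head, pos_body, neg_body) of an atom and two sets of
    atoms, encoding head ← pos_body, ∼neg_body."""
    def closure(definite):
        # Least model of a definite program via forward chaining.
        m, changed = set(), True
        while changed:
            changed = False
            for head, body in definite:
                if body <= m and head not in m:
                    m.add(head)
                    changed = True
        return m

    candidates = chain.from_iterable(
        combinations(sorted(atoms), k) for k in range(len(atoms) + 1))
    models = []
    for cand in map(set, candidates):
        # Gelfond–Lifschitz reduct: drop rules whose negative body meets
        # cand, then strip negative literals from the remaining rules.
        reduct = [(h, set(pb)) for h, pb, nb in rules if not (set(nb) & cand)]
        if closure(reduct) == cand:
            models.append(frozenset(cand))
    return models

# p ← ∼q ; q ← ∼p has exactly the two stable models {p} and {q}.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(stable_models({"p", "q"}, rules))
```

This brute-force check is exponential in the number of atoms and is meant only to make the fixed-point definition concrete.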

68 There are three-valued semantics for stable models in which a stable model exists that assigns to p a third truth value, undecided (Przymusinski, 1990).

69 This condition makes sure that from an inconsistent program anything is derivable (see Example 65).

70 Alternative interpretations of ∼ are offered, e.g., by the completion semantics (Clark, 1977) and its weak variant, which led to applications in the psychology of reasoning (Stenning & Van Lambalgen, 2008).

71 There are also close relations to temperate accumulation. E.g., in Besold et al. (2017) we find a characterization of input–output logic in logic programming.

72 For this one has to define Π⊕sA by Π∪{→A}.

73 Close connections between various semantics of logic programming and structured argumentation have been observed in Caminada and Schulz (2017).

74 Recall that ArgKΠ(MΠ∼) denotes those arguments in ArgKΠ which only make use of defeasible assumptions in MΠ∼.

References

Adams, E. W. (1975). The logic of conditionals. D. Reidel Publishing Co.
Antonelli, G. A. (1999). A directly cautious theory of defeasible consequence for default logic via the notion of general extension. Artificial Intelligence, 109(1), 71–109.
Antoniou, G., & Wang, K. (2007). Default logic. In Gabbay, D. & Woods, J. (Eds.), Handbook of the history of logic (pp. 517–556, Vol. 8). North-Holland.
Arieli, O., & Avron, A. (2000). General patterns for nonmonotonic reasoning: From basic entailments to plausible relations. Logic Journal of the IGPL, 8, 119–148.
Arieli, O., Borg, A., & Heyninck, J. (2019). A review of the relations between logical argumentation and reasoning with maximal consistency. Annals of Mathematics and Artificial Intelligence, 87(3), 187–226.
Arieli, O., Borg, A., Heyninck, J., & Straßer, C. (2021a). Logic-based approaches to formal argumentation. Journal of Applied Logics – IfCoLog Journal, 8(6), 1793–1898.
Arieli, O., Borg, A., & Straßer, C. (2021b). Characterizations and classifications of argumentative entailments. In Bienvenu, M., Lakemeyer, G., & Erdem, E. (Eds.), Proceedings of the 18th International Conference on Principles of Knowledge Representation and Reasoning, 52–62. Curran Associates, Inc.
Arieli, O., Borg, A., & Straßer, C. (2023). A postulate-driven study of logical argumentation. Artificial Intelligence, 103966.
Arieli, O., & Straßer, C. (2015). Sequent-based logical argumentation. Argument and Computation, 6(1), 73–99.
Aristotle. (1984). The complete works of Aristotle. The revised Oxford translation. One volume digital edition. Princeton University Press.
Asher, N., & Morreau, M. (1991). Commonsense entailment: A modal theory of nonmonotonic reasoning. In van Eijck, J. (Ed.), Logics in AI. Lecture Notes in Computer Science, vol. 478. Springer. DOI: https://doi.org/10.1007/BFb0018430.
Asher, N., & Pelletier, F. J. (2012). More truths about generic truth. In Mari, A., Beyssade, C., & Del Prete, F. (Eds.), Genericity, 312–333. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199691807.003.0012.
Baroni, P., Gabbay, D., & Giacomin, M. (Eds.). (2018). Handbook of formal argumentation. College Publications.
Baroni, P., Giacomin, M., & Guida, G. (2001). On the notion of strength in argumentation: Overcoming the epistemic/practical dichotomy. ECSQARU Workshop Adventures in Argumentation, 1–8. IRIS Institutional Research Information System: OPENBS Open Archive UniBS. https://iris.unibs.it/handle/11379/159298.
Batens, D. (1986). Dialectical dynamics within formal logics. Logique et Analyse, 114, 161–173.
Batens, D. (2007). A universal logic approach to adaptive logics. Logica Universalis, 1(1), 221–242.
Batens, D. (2011). Logics for qualitative inductive generalization. Studia Logica, 97(1), 61–80.
Beirlaen, M., & Aliseda, A. (2014). A conditional logic for abduction. Synthese, 191(15), 3733–3758.
Beirlaen, M., Heyninck, J., Pardo, P., & Straßer, C. (2018). Argument strength in formal argumentation. Journal of Applied Logics – IfCoLog Journal, 5(3), 629–675.
Benferhat, S., Bonnefon, J. F., & da Silva Neves, R. (2005). An overview of possibilistic handling of default reasoning, with experimental studies. Synthese, 146(1–2), 53–70.
Benferhat, S., Cayrol, C., Dubois, D., Lang, J., & Prade, H. (1993). Inconsistency management and prioritized syntax-based entailment. International Joint Conference on Artificial Intelligence, 93, 640–645.
Benferhat, S., Dubois, D., & Prade, H. (1992). Representing default rules in possibilistic logic. In Nebel, B., Rich, C., & Swartout, W. R. (Eds.), Proceedings of the Third International Conference on the Principles of Knowledge Representation and Reasoning, 673–684. Morgan Kaufmann Publishers.
Benferhat, S., Dubois, D., & Prade, H. (1997). Some syntactic approaches to the handling of inconsistent knowledge bases: A comparative study. Part 1: The flat case. Studia Logica, 58, 17–45.
Benferhat, S., Dubois, D., & Prade, H. (1999). Possibilistic and standard probabilistic semantics of conditional knowledge bases. Journal of Logic and Computation, 9(6), 873–895.
Besnard, P., & Hunter, A. (2001). A logic-based theory of deductive arguments. Artificial Intelligence, 128(1), 203–235.
Besold, T. R., d’Avila Garcez, A., Stenning, K., van der Torre, L., & van Lambalgen, M. (2017). Reasoning in non-probabilistic uncertainty: Logic programming and neural-symbolic computing as examples. Minds and Machines, 27(1), 37–77.
Bochman, A. (2005). Explanatory nonmonotonic reasoning. World Scientific Publishing.
Bondarenko, A., Dung, P. M., Kowalski, R. A., & Toni, F. (1997). An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence, 93, 63–101.
Borg, A. (2020). Assumptive sequent-based argumentation. Journal of Applied Logics, 7(3), 227–294.
Borg, A., & Straßer, C. (2018). Relevance in structured argumentation. In Lang, J. (Ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 1753–1759.
Boutilier, C. (1994a). Conditional logics of normality: A modal approach. Artificial Intelligence, 68(1), 87–154.
Boutilier, C. (1994b). Unifying default reasoning and belief revision in a modal framework. Artificial Intelligence, 68(1), 33–85.
Brandom, R. (2009). Articulating reasons: An introduction to inferentialism. Harvard University Press.
Brewka, G. (1989). Preferred subtheories: An extended logical framework for default reasoning. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (II), 89, 1043–1048.
Brewka, G. (1991). Cumulative default logic: In defense of nonmonotonic inference rules. Artificial Intelligence, 50(2), 183–205.
Brown, J., & Simion, M. (2021). Reasons, justification, and defeat. Oxford University Press.
Byrne, R. M. (1989). Suppressing valid inferences with conditionals. Cognition, 31(1), 61–83.
Caminada, M., & Amgoud, L. (2007). On the evaluation of argumentation formalisms. Artificial Intelligence, 171, 286–310.
Caminada, M., Carnielli, W. A., & Dunne, P. E. (2012). Semi-stable semantics. Journal of Logic and Computation, 22(5), 1207–1254.
Caminada, M., Modgil, S., & Oren, N. (2014). Preferences and unrestricted rebut. Computational Models of Argument: Proceedings of COMMA 2014, 209–220.
Caminada, M., & Schulz, C. (2017). On the equivalence between assumption-based argumentation and logic programming. Journal of Artificial Intelligence Research, 60, 779–825.
Cayrol, C. (1995). On the relation between argumentation and non-monotonic coherence-based entailment. Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 95, 1443–1448.
Chisholm, R. M. (1963). Contrary-to-duty imperatives and deontic logic. Analysis, 24, 33–36.
Clark, K. L. (1977). Negation as failure. In Gallaire, H., & Minker, J. (Eds.), Logic and Data Bases, 293–322. Springer. DOI: https://doi.org/10.1007/978-1-4684-3384-5_11.
Čyras, K., & Toni, F. (2015). Non-monotonic inference properties for assumption-based argumentation. In Black, E., Modgil, S., & Oren, N. (Eds.), Theory and Applications of Formal Argumentation, vol. 9524, 92–111. Springer. DOI: https://doi.org/10.1007/978-3-319-28460-6_6.
Delgrande, J. P. (1987). A first-order conditional logic for prototypical properties. Artificial Intelligence, 33(1), 105–130.
Delgrande, J. P. (1998). On first-order conditional logics. Artificial Intelligence, 105(1), 105–137.
Denecker, M., Marek, V. W., & Truszczynski, M. (2011). Reiter’s default logic is a logic of autoepistemic reasoning and a good one, too. arXiv preprint arXiv:1108.3278.
Doyle, J., & McDermott, D. (1980). Non-monotonic logic I. Artificial Intelligence, 13(1–2), 41–72.
Dubois, D., & Prade, H. (1990). An introduction to possibilistic and fuzzy logics. In Shafer, G. & Pearl, J. (Eds.), Readings in Uncertain Reasoning (pp. 742–761). Morgan Kaufmann.
Dubois, D., & Prade, H. (1991). Possibilistic logic, preferential models, nonmonotonicity and related issues. In Proceedings Twelfth International Joint Conference on Artificial Intelligence, 419–424.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77, 321–358.
Eagle, A. (2024). Probability and inductive logic. Cambridge University Press.
Eemeren, F., & Grootendorst, R. (2004). A systematic theory of argumentation: The pragma-dialectical approach. Cambridge University Press.
Eiter, T., Ianni, G., & Krennwallner, T. (2009). Answer set programming: A primer. Reasoning Web International Summer School, 40–110.
Elio, R., & Pelletier, F. J. (1994). On relevance in non-monotonic reasoning: Some empirical studies. Relevance: American Association for Artificial Intelligence 1994 Fall Symposium Series, 64–67.
Friedman, N., & Halpern, J. Y. (1996). Plausibility measures and default reasoning. Journal of the ACM, 48(4), 1297–1304.
Friedman, N., Halpern, J. Y., & Koller, D. (2000). First-order conditional logic for default reasoning revisited. ACM Transactions on Computational Logic, 1(2), 175–207.
Gabbay, D. M. (1985). Theoretical foundations for non-monotonic reasoning in expert systems. In Apt, K. R. (Ed.), Logics and models of concurrent systems (pp. 439–457). Springer.
Gabbay, D., Giacomin, M., Simari, G., & Thimm, M. (Eds.). (2021). Handbook of formal argumentation. College Publications.
Gärdenfors, P. (1990). Belief revision and nonmonotonic logic: Two sides of the same coin? European Workshop on Logics in Artificial Intelligence, 52–54.
Geffner, H. (1992). High-probabilities, model-preference and default arguments. Minds and Machines, 2, 51–70.
Gelfond, M., & Lifschitz, V. (1988). The stable model semantics for logic programming. ICLP/SLP, 88, 1070–1080.
Gelfond, M., & Lifschitz, V. (1991). Classical negation in logic programs and disjunctive databases. New Generation Computing, 9(3–4), 365–385.
Gelfond, M., Lifschitz, V., Przymusinska, H., & Truszczynski, M. (1991). Disjunctive defaults. Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, 230–237.
Gelfond, M., Przymusinska, H., & Przymusinski, T. (1989). On the relationship between circumscription and negation as failure. Artificial Intelligence, 38(1), 75–94.
Giordano, L., Gliozzi, V., Olivetti, N., & Pozzato, G. L. (2009). Analytic tableaux calculi for KLM logics of nonmonotonic reasoning. ACM Transactions on Computational Logic (TOCL), 10(3), 18.
Goldszmidt, M., & Pearl, J. (1990). On the relation between rational closure and system Z. Third International Workshop on Nonmonotonic Reasoning (South Lake Tahoe), 130–140.
Goldszmidt, M., & Pearl, J. (1992). Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. Proceedings of the Third International Conference on Knowledge Representation and Reasoning, 661–672.
Goldszmidt, M., Morris, P., & Pearl, J. (1993). A maximum entropy approach to nonmonotonic reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(3), 220–232.
Haenni, R. (2009). Probabilistic argumentation [Special issue: Combining Probability and Logic]. Journal of Applied Logics, 7(2), 155–176.
Hansen, J. (2008). Prioritized conditional imperatives: Problems and a new proposal. Autonomous Agents and Multi-Agent Systems, 17(1), 11–35.
Hansson, B. (1969). An analysis of some deontic logics. Noûs, 373–398.
Hart, H. L. (1948). The ascription of responsibility and rights. Proceedings of the Aristotelian Society, 49, 171–194.
Heyer, G. (1990). Semantics and knowledge representation in the analysis of generic descriptions. Journal of Semantics, 7(1), 93–110.
Heyninck, J., & Arieli, O. (2019). An argumentative characterization of disjunctive logic programming. EPIA Conference on Artificial Intelligence, 526–538.
Heyninck, J., & Straßer, C. (2016). Relations between assumption-based approaches in nonmonotonic logic and formal argumentation. In Kern-Isberner, G. & Wassermann, R. (Eds.), Proceedings of NMR2016 (pp. 65–76).
Heyninck, J., & Straßer, C. (2019). A fully rational argumentation system for preordered defeasible rules. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (pp. 1704–1712).
Heyninck, J., & Straßer, C. (2021a). A comparative study of assumption-based argumentative approaches to reasoning with priorities. Journal of Applied Logics – IfCoLog Journal of Logics and Their Applications, 8(3), 737–808.
Heyninck, J., & Straßer, C. (2021b). Rationality and maximal consistent sets for a fragment of ASPIC+ without undercut. Argument & Computation, (1), 3–47.
Hölldobler, S., & Kalinke, Y. (1994). Towards a new massively parallel computational model for logic programming. Proceedings of the Workshop on Combining Symbolic and Connectionist Processing, ECCAI, 68–77.
Horty, J. F. (2002). Skepticism and floating conclusions. Artificial Intelligence, 135(1–2), 55–72.
Horty, J. F. (2012). Reasons as defaults. Oxford University Press.
Hunter, A., & Thimm, M. (2017). Probabilistic reasoning with abstract argumentation frameworks. Journal of Artificial Intelligence Research, 59, 565–611.
Kelly, K. T., & Lin, H. (2021). Beliefs, probabilities, and their coherent correspondence. In Lotteries, Knowledge, and Rational Belief: Essays on the Lottery Paradox (pp. 185–222). Cambridge University Press.
Konolige, K. (1988). On the relation between default and autoepistemic logic. Artificial Intelligence, 35(3), 343–382.
Koons, R. (2017). Defeasible reasoning. In Zalta, E. N. (Ed.), The Stanford encyclopedia of philosophy (Winter 2017). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/reasoning-defeasible/.
Kraus, S., Lehman, D., & Magidor, M. (1990). Nonmonotonic reasoning, preferential models and cumulative logics. Artificial Intelligence, 44, 167–207.
Kyburg, H. E. (2001). Real logic is nonmonotonic. Minds and Machines, 11(4), 577–595.
Lehmann, D. J. (1995). Another perspective on default reasoning. Annals of Mathematics and Artificial Intelligence, 15(1), 61–82.
Lehmann, D. J., & Magidor, M. (1990). Preferential logics: The predicate calculus case. Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, 57–72.
Lehmann, D. J., & Magidor, M. (1992). What does a conditional knowledge base entail? Artificial Intelligence, 55(1), 1–60.
Leitgeb, H. (2018). Neural network models of conditionals. In Introduction to formal philosophy (pp. 147–176). Springer.
Leslie, S.-J., & Lerner, A. (2022). Generic generalizations. In Zalta, E. N. & Nodelman, U. (Eds.), The Stanford encyclopedia of philosophy (Fall 2022). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/entries/generics/.
Lewis, D. (1973). Counterfactuals. Harvard University Press.
Lewis, D. (1974). Semantic analyses for dyadic deontic logic. In Logical theory and semantic analysis: Essays dedicated to Stig Kanger on his fiftieth birthday (pp. 1–14). Springer.
Li, Z., Oren, N., & Parsons, S. (2018). On the links between argumentation-based reasoning and nonmonotonic reasoning. Lecture Notes in Computer Science, vol. 10757 (pp. 67–85). Springer.
Liao, B., Oren, N., van der Torre, L., & Villata, S. (2016). Prioritized norms and defaults in formal argumentation. In Cariani, F., Grossi, D., Meheus, J., & Parent, X. (Eds.), Deontic Logic and Normative Systems: 12th International Conference, DEON 2014, Ghent, Belgium, July 12–15, 2014, Proceedings (pp. 139–154). Springer.
Liao, B., Oren, N., van der Torre, L., & Villata, S. (2018). Prioritized norms in formal argumentation. Journal of Logic and Computation, 29(2), 215–240.
Lifschitz, V. (1989). Benchmark problems for formal nonmonotonic reasoning. In Reinfrank, M., de Kleer, J., Ginsberg, M. L., & Sandewall, E. (Eds.), Non-Monotonic Reasoning, Lecture Notes in Computer Science, vol. 346 (pp. 202–219). Springer.
Lifschitz, V. (2019). Answer set programming. Springer.
Lin, F., & Shoham, Y. (1990). Epistemic semantics for fixed-points non-monotonic logics. Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, 111–120.
Loui, R. P. (1995). Hart’s critics on defeasible concepts and ascriptivism. Proceedings of the 5th International Conference on Artificial Intelligence and Law, 21–30.
Łukaszewicz, W. (1988). Considerations on default logic: An alternative approach. Computational Intelligence, 4(1), 1–16.
Makinson, D. (2003). Bridges between classical and nonmonotonic logic. Logic Journal of IGPL, 11(1), 6996.CrossRefGoogle Scholar
Makinson, D. (2005). Bridges from classical to nonmonotonic logic (Vol. 5). King’s College Publications.Google Scholar
Makinson, D., & Van Der Torre, L. (2000). Input/Output logics. Journal of Philosophical Logic, 29, 383408.CrossRefGoogle Scholar
Makinson, D., & Van Der Torre, L. (2001). Constraints for Input/Output logics. Journal of Philosophical Logic, 30(2), 155185.CrossRefGoogle Scholar
Manhaeve, R., Dumanéiæ, S., Kimmig, A., Demeester, T., & De Raedt, L. (2021). Neural probabilistic logic programming in deepproblog. Artificial Intelligence, 298, 103504.CrossRefGoogle Scholar
McCarthy, J. (1980). Circumscription: A form of non-monotonic reasoning. Artificial Intelligence, 13(1–2), 27–39.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.
Minker, J. (1994). Overview of disjunctive logic programming. Annals of Mathematics and Artificial Intelligence, 12(1), 1–24.
Modgil, S., & Prakken, H. (2013). A general account of argumentation with preferences. Artificial Intelligence, 195, 361–397.
Modgil, S., & Prakken, H. (2014). The ASPIC+ framework for structured argumentation: A tutorial. Argument & Computation, 5(1), 31–62.
Moinard, Y., & Rolland, R. (1998). Propositional circumscriptions [Research report]. INRIA-00073147. https://inria.hal.science/inria-00073147.
Moretti, L., & Piazza, T. (2017). Defeaters in current epistemology: Introduction to the special issue. Synthese, 195(7), 2845–2854.
Ng, R., & Subrahmanian, V. S. (1992). Probabilistic logic programming. Information and Computation, 101(2), 150–201.
Nogueira, M., Balduccini, M., Gelfond, M., Watson, R., & Barry, M. (2001). An A-Prolog decision support system for the space shuttle. Proceedings of the Third International Symposium on Practical Aspects of Declarative Languages, 169–183.
Parent, X., & van der Torre, L. (2013). Input/output logic. In Gabbay, D., Horty, J., Parent, X., van der Meyden, R., & van der Torre, L. (Eds.), Handbook of deontic logic (Vol. 1, pp. 499–544). College Publications.
Pearl, J. (1989). Probabilistic semantics for nonmonotonic reasoning: A survey. Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, 505–516.
Pearl, J. (1990). System Z: A natural ordering of defaults with tractable applications to nonmonotonic reasoning. TARK ’90: Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning about Knowledge, 121–135.
Pelletier, F. J., & Elio, R. (1997). What should default reasoning be, by default? Computational Intelligence, 13(2), 165–187.
Perelman, C., & Olbrechts-Tyteca, L. (1969). The new rhetoric: A treatise on argumentation. University of Notre Dame Press.
Pfeifer, N., & Kleiter, G. D. (2005). Coherence and nonmonotonicity in human reasoning. Synthese, 146(1–2), 93–109.
Pollock, J. (1991). A theory of defeasible reasoning. International Journal of Intelligent Systems, 6, 33–54.
Pollock, J. (1995). Cognitive carpentry. Bradford/MIT Press.
Poole, D. (1985). On the comparison of theories: Preferring the most specific explanation. IJCAI, 85, 144–147.
Poole, D. (1988). A logical framework for default reasoning. Artificial Intelligence, 36(1), 27–47.
Poole, D. (1991). The effect of knowledge on belief: Conditioning, specificity and the lottery paradox in default reasoning. Artificial Intelligence, 49(1–3), 281–307.
Prakken, H. (2012). Some reflections on two current trends in formal argumentation. Logic Programs, Norms and Action, 249–272.
Przymusinski, T. C. (1990). The well-founded semantics coincides with the three-valued stable semantics. Fundamenta Informaticae, 13(4), 445–463.
Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13(1–2), 81–132.
Reiter, R. (1981). On closed world data bases. In Readings in artificial intelligence (pp. 119–140). Elsevier.
Reiter, R., & Criscuolo, G. (1981). On interacting defaults. IJCAI, 81, 270–276.
Rescher, N. (1976). Plausible reasoning: An introduction to the theory and practice of plausibilistic inference. Van Gorcum.
Rescher, N., & Manor, R. (1970). On inference from inconsistent premises. Theory and Decision, 1, 179–217.
Ross, W. D. (1930). The right and the good. Oxford University Press.
Rott, H. (2001). Change, choice and inference: A study of belief revision and nonmonotonic reasoning. Clarendon Press.
Saldanha, E.-A. D. (2018). From logic programming to human reasoning: How to be artificially human. KI – Künstliche Intelligenz, 32(4), 283–286.
Satoh, K. (1989). A probabilistic interpretation for lazy nonmonotonic reasoning. Institute for New Generation Computer Technology.
Schaub, T., & Wang, K. (2001). A comparative study of logic programs with preference. IJCAI, 597–602.
Schulz, C., & Toni, F. (2016). Justifying answer sets using argumentation. Theory and Practice of Logic Programming, 16(1), 59–110.
Schurz, G. (2005). Non-monotonic reasoning from an evolution-theoretic perspective: Ontic, logical and cognitive foundations. Synthese, 146(1–2), 37–51.
Sergot, M. J., Sadri, F., Kowalski, R. A., Kriwaczek, F., Hammond, P., & Cory, H. T. (1986). The British Nationality Act as a logic program. Communications of the ACM, 29(5), 370–386.
Shoham, Y. (1987). A semantical approach to nonmonotonic logics. In Ginsberg, M. L. (Ed.), Readings in non-monotonic reasoning (pp. 227–249). Morgan Kaufmann.
Spohn, W. (1988). Ordinal conditional functions: A dynamic theory of epistemic states. In Harper, W. L., & Skyrms, B. (Eds.), Causation in decision, belief change and statistics (pp. 105–134). Springer.
Stalnaker, R. (1994). What is a nonmonotonic consequence relation? Fundamenta Informaticae, 21(1), 7–21.
Stalnaker, R. F. (1968). A theory of conditionals. In Rescher, N. (Ed.), Studies in logical theory. Basil Blackwell.
Stenning, K., & Van Lambalgen, M. (2008). Human reasoning and cognitive science. MIT Press.
Straßer, C. (2009a). An adaptive logic for rational closure. The many sides of logic, 47–67.
Straßer, C. (2009b). In Carnielli, W., Coniglio, M. E., & D’Ottaviano, I. M. L. (Eds.), The many sides of logic. College Publications.
Straßer, C. (2014). Adaptive logic and defeasible reasoning: Applications in argumentation, normative reasoning and default reasoning (Trends in Logic, Vol. 38). Springer.
Straßer, C., Beirlaen, M., & Van De Putte, F. (2016). Adaptive logic characterizations of input/output logic. Studia Logica, 104(5), 869–916.
Straßer, C., & Michajlova, L. (2023). Evaluating and selecting arguments in the context of higher order uncertainty. Frontiers in Artificial Intelligence, 6, 1133998.
Straßer, C., & Pardo, P. (2021). Prioritized defaults and formal argumentation. In Liu, F., Marra, A., Portner, P., & Van de Putte, F. (Eds.), Proceedings of DEON2020/2021 (pp. 427–446). College Publications.
Straßer, C., & Seselja, D. (2010). Towards the proof-theoretic unification of Dung’s argumentation framework: An adaptive logic approach. Journal of Logic and Computation, 21(2), 133–156.
Sudduth, M. (2017). Defeaters in epistemology. In Fieser, J., & Dowden, B. (Eds.), Internet encyclopedia of philosophy. https://iep.utm.edu/defeaters-in-epistemology/.
Tessler, M. H., & Goodman, N. D. (2019). The language of generalization. Psychological Review, 126(3), 395.
Toni, F. (2014). A tutorial on assumption-based argumentation. Argument & Computation, 5(1), 89–117.
Toulmin, S. E. (1958). The uses of argument. Cambridge University Press.
van Berkel, K., & Straßer, C. (2022). Reasoning with and about norms in logical argumentation. In Toni, F., Polberg, S., Booth, R., Caminada, M., & Kido, H. (Eds.), Frontiers in artificial intelligence and applications: Computational models of argument, proceedings (COMMA22) (Vol. 353, pp. 332–343). IOS Press.
Van De Putte, F. (2013). Default assumptions and selection functions: A generic framework for non-monotonic logics. In Advances in artificial intelligence and its applications, Lecture Notes in Computer Science, vol. 8265 (pp. 54–67). Springer.
Van De Putte, F., Beirlaen, M., & Meheus, J. (2019). Adaptive deontic logics. Handbook of Deontic Logic and Normative Systems, 2, 367–459. College Publications.
Van Fraassen, B. C. (1972). The logic of conditional obligation. Journal of Philosophical Logic, 1, 417–438.
Vesic, S. (2013). Identifying the class of maxi-consistent operators in argumentation. Journal of Artificial Intelligence Research, 47, 71–93.
Vreeswijk, G. A. W. (1993). Studies in defeasible argumentation [Doctoral dissertation, Free University Amsterdam, Department of Computer Science].
Walton, D., Reed, C., & Macagno, F. (2008). Argumentation schemes. Cambridge University Press.
Young, A. P., Modgil, S., & Rodrigues, O. (2016). Prioritised default logic as rational argumentation. Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 626–634.
Figure 1 The Nixon Diamond from Example 2. Double arrows symbolize defeasible rules, single arrows strict rules, and wavy arrows conflicts. Black nodes represent unproblematic conclusions, while light nodes represent problematic conclusions. Rectangular nodes represent the starting point of the reasoning process. We use the same symbolism in the following figures.

Figure 2 Tweety and specificity, Example 3.

Figure 3 A drowning scenario.

Figure 4 A scenario with the floating conclusion C.

Table 1 Reasoning styles modelled by various logics discussed in this Element. (⋆) An NML with genuine defaults, such as Reiter’s default logic, can “simulate” Plausible Reasoning by encoding defeasible assumptions A as defaults ⇒A with empty bodies.

Figure 5 Top: Defeasible Reasoning giving rise to a greedy reasoning style. Bottom: Plausible Reasoning giving rise to contrapositions of defeasible rules.

Figure 6 The workings of NMLs.

Table 2 The class of associated knowledge bases for specific NMLs. The nonfixed parts are shown in gray. For example, for specific input–output logics the set of metarules Rm is fixed, while the strict assumptions and defeasible rules vary in their applications. RL is the class of strict rules induced by a logic L, where CL is classical logic.

Figure 7 The arguments and the argumentation framework for Example 13 (omitting the nonattacked and nonattacking a1 and a2). We explain the shading in Example 14.

Figure 8 The skeptical and the credulous reasoning style.

Table 3 Three types of nonmonotonic consequence relations.

Figure 9 Types of nonmonotonic consequence based on syntactic approaches.

Algorithm 1 Greedy accumulation
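The idea behind greedy accumulation can be sketched in a few lines: walk through the defeasible assumptions in a fixed order and commit to each one whose addition keeps the accumulated set consistent. The sketch below is illustrative only (it is not a transcription of Algorithm 1 from the Element); to stay self-contained it simplifies formulas to literals such as "p" and "~p", so that consistency reduces to the absence of a complementary pair.

```python
# Illustrative sketch of greedy accumulation over defeasible assumptions.
# Formulas are simplified to literals ("p", "~p"); a set of literals is
# consistent iff it contains no complementary pair.

def consistent(literals):
    """Check that no literal occurs together with its negation."""
    return not any(
        (l[1:] if l.startswith("~") else "~" + l) in literals
        for l in literals
    )

def greedy_accumulation(strict, defaults):
    """Add defaults one by one, in the given order, keeping only those
    whose addition preserves consistency with what was accepted so far."""
    accepted = set(strict)
    for d in defaults:
        if consistent(accepted | {d}):
            accepted.add(d)
    return accepted

# With strict knowledge {"~p"} the default "p" is rejected, while "q"
# is still accepted afterwards.
print(greedy_accumulation(["~p"], ["p", "q"]))
```

Note the greedy character: once an assumption is committed, it is never retracted, so the processing order of the defaults can influence the final outcome.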

Algorithm 2 Temperate accumulation
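Temperate accumulation, in contrast to the greedy style, does not commit to a single order-dependent outcome: it considers all maximal consistent subsets of the defeasible assumptions. The following sketch is illustrative (not a transcription of Algorithm 2); as above, formulas are simplified to literals so consistency checking stays trivial, and maximal consistent ("maxicon") sets are computed by brute force.

```python
from itertools import combinations

# Illustrative sketch of temperate accumulation: collect all maximal
# consistent subsets of the defeasible assumptions (here literals such
# as "p", "~p") together with the strict assumptions.

def consistent(literals):
    """A set of literals is consistent iff no complementary pair occurs."""
    return not any(
        (l[1:] if l.startswith("~") else "~" + l) in literals
        for l in literals
    )

def maxicon_sets(strict, defaults):
    strict, defaults = set(strict), list(defaults)
    candidates = [
        strict | set(subset)
        for r in range(len(defaults), -1, -1)
        for subset in combinations(defaults, r)
        if consistent(strict | set(subset))
    ]
    # keep only the subset-maximal consistent candidates
    return [c for c in candidates
            if not any(c < d for d in candidates)]

# With strict {"r"} and the conflicting defaults p, ~p plus q, there are
# two maximal consistent sets; q belongs to both, so a skeptical reasoner
# accepts q while suspending judgment on p.
for s in maxicon_sets(["r"], ["p", "~p", "q"]):
    print(sorted(s))
```

Reasoning skeptically from the intersection of these sets, or credulously from any one of them, corresponds to the different consequence relations discussed for temperate accumulation.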

Figure 10 The syntactic approach and NMLs discussed in this Element.

Figure 11 Nonmonotonic entailment by semantic selections.

Figure 12 The order on the models of Example 21. Highlighted are the ⪯-minimal models. The atoms p and s are true in every model of As.

Figure 13 Models of As in Example 22 with highlighted ⪯-minimal models.
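The semantic selection pictured in Figures 12 and 13 can be mimicked computationally for finitely many atoms: enumerate the valuations satisfying the assumptions and keep those that are minimal with respect to their set of true atoms, a minimization order in the spirit of circumscription. The assumptions in the sketch below are hypothetical, not those of Examples 21 and 22.

```python
from itertools import product

# Illustrative sketch of nonmonotonic entailment by semantic selection:
# among the models of the assumptions, keep those whose set of true
# atoms is subset-minimal.

ATOMS = ["p", "q", "s"]

def models_of(assumptions):
    """Enumerate all valuations (dict atom -> bool) satisfying every
    assumption, where assumptions are predicates over valuations."""
    for values in product([False, True], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, values))
        if all(a(m) for a in assumptions):
            yield m

def minimal_models(assumptions):
    ms = list(models_of(assumptions))
    def true_atoms(m):
        return {a for a in ATOMS if m[a]}
    return [m for m in ms
            if not any(true_atoms(n) < true_atoms(m) for n in ms)]

# Hypothetical assumptions: p holds, and q or s holds. Minimization
# rules out the models making both q and s true.
assumptions = [lambda m: m["p"], lambda m: m["q"] or m["s"]]
for m in minimal_models(assumptions):
    print(m)
```

Nonmonotonic entailment then amounts to checking truth in all selected (here: minimal) models rather than in all models.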

Figure 14 Links between the various methods studied in this Element.

Table 4 Argumentation semantics.

Figure 15 Relations between argumentation semantics. Every extension of the type left of an arrow is also an extension of the type to its right.

Figure 16 Left: An argumentation framework composed of five arguments. Highlighted in the center and on the right are its two preferred extensions. The extension in the center is the only stable extension. The grounded extension in this example is .
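The grounded extension mentioned in the caption can be computed as the least fixed point of the characteristic function: starting from the empty set, repeatedly add every argument all of whose attackers are counter-attacked. The framework in the sketch below is a made-up three-argument example, not the five-argument framework of Figure 16.

```python
# Illustrative sketch: the grounded extension of an abstract
# argumentation framework, computed as the least fixed point of the
# characteristic function.

def defended(arg, ext, attacks):
    """arg is defended by ext iff every attacker of arg is itself
    attacked by some member of ext."""
    attackers = {a for (a, b) in attacks if b == arg}
    return all(any((c, a) in attacks for c in ext) for a in attackers)

def grounded_extension(args, attacks):
    ext = set()
    while True:
        new = {a for a in args if defended(a, ext, attacks)}
        if new == ext:
            return ext
        ext = new

# a and b attack each other (a mutual conflict), and b attacks c:
# nothing is unattacked, so the grounded extension is empty.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
print(grounded_extension(args, attacks))  # prints set()
```

Removing the attack from b back to a makes a unattacked; a then defends c against b, so the grounded extension grows to {a, c}, illustrating how the fixed-point iteration propagates defense.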

Figure 17 The argumentation framework for Example 23. Solid arrows represent rebuttals, dashed arrows undermining, and dotted arrows undercuts.

Table 5 The various extensions and consequences for Example 23.

Figure 18 Example 24 with the inconsistent preferred and stable extension {a1,a2,a3}.

Figure 19 Excerpt of the argumentation framework of Example 25.

Table 6 Attack types in logic-based argumentation.

Figure 20 Example 26. Left: We let A={p∧u,¬p∧u}. The black nodes represent the grounded extension. Dashed arrows correspond to those Defeats and ConDefeats that are not DirDefeats, while solid arrows are (also) DirDefeats. Right top: The grounded extension for α={DirDefeat}. Right center and bottom: the two stable resp. preferred extensions.

Figure 21 Example 27. Left: α∈AttSet. Right: α∈AttDir.

Algorithm 3 Greedy accumulation (general version)

Algorithm 4 Temperate accumulation (general version)

Figure 22 Versions of cautious monotonicity with defeasible knowledge bases.

Figure 23 Relations between extensional and consequence-based notions of cumulativity, cautious transitivity and monotonicity (where i∈{s,d}).

Table 7 Two classes of knowledge bases and the properties of the associated consequence relations with the notion of argument from Definition 5.1.

Figure 24 Nonmonotonic reasoning properties for temperate accumulation, where i∈{s,d}, |∼∈{|∼∩AExttem,|∼∩PExttem,|∼∪Exttem}, and |∼∩∈{|∼∩AExttem,|∼∩PExttem}.

Figure 25 The argumentation framework for the knowledge base of Example 32 based on the arguments to the right. The rectangular node represents a class of arguments based on the inconsistent assumption set {p∧q,¬p∧q}. An outgoing [resp. ingoing] arrow symbolizes an attack from [resp. to] some argument in the class.

Table 8 Overview of properties of the consequence relations based on maxicon sets. All positive results follow from the general results for NMLs based on temperate accumulation in Section 11.1 (see also Corollary 11.2 below).

Table 9 Counterexamples to OR i(|∼). Where j∈{1,2,3} let Kj=⟨∅,Aj,RCL⟩, A1={¬p∧r,¬q∧r}, A2={¬p,¬q,¬p⊃r,¬q⊃r}, A3=A3u∪A3¬u, A3u={u∧(p⊃r)}, A3¬u={¬u∧(q⊃r)}, and A2A=A2∪{A}.

Table 10 Counterexamples to RM i(|∼) where K=⟨∅,Ad,RCL⟩ with Ad={p,¬p,q∧r}. We have: K|∼r and K/|∼¬¬(p∧r), while K⊕¬(p∧r)/|∼r.

Figure 26 The basic input–output logics and their associated knowledge bases where IO1={RW,LS,AND} and Rdid={A⇒A∣A∈sentL}.

Algorithm 5 Generalized Greedy Accumulation

Figure 27 Illustration of Example 45.

Figure 28 (Left) The preferential model M=⟨S,≺,v⟩ of Example 46. (Middle and Right) Ranked models M′=⟨S,≺′,v⟩ based on a modular extension ≺′ of ≺. The dashed arrow is optional.

Figure 29 Two ranked models of Example 49.

Table 11 The states and probability functions for Example 53.

Figure 30 Example 56. (Left) The cardinal ranking κ. (Right) The possibility function Poss.

Figure 31 The order on the models of K2 in Example 57. Highlighted are the models in core⪯(K2). The atoms p,q, and s are true in every model of As2.

Table 12 Models of τ(Π3) (Example 64).

Figure 32 Argumentation framework for Examples 67 (left) and 68 (right). We omit nonattacked arguments.
