With an increasing number of applications in the context of multi-agent systems, automated negotiation is a rapidly growing area. Written by top researchers in the field, this state-of-the-art treatment of the subject explores key issues involved in the design of negotiating agents, covering strategic, heuristic, and axiomatic approaches. The authors discuss the potential benefits of automated negotiation as well as the unique challenges it poses for computer scientists and for researchers in artificial intelligence. They also consider possible applications and give readers a feel for the types of domains where automated negotiation is already being deployed. This book is ideal for graduate students and researchers in computer science who are interested in multi-agent systems. It will also appeal to negotiation researchers from disciplines such as management and business studies, psychology and economics.
Writing in language tests is regarded as an important indicator of test takers' language skills. As Chinese language tests become popular, scoring a large number of essays becomes a heavy and expensive task for the organizers of these tests. In the past several years, some efforts have been made to develop automated simplified Chinese essay scoring systems, reducing both costs and evaluation time. In this paper, we introduce a system called SCESS (automated Simplified Chinese Essay Scoring System) based on Weighted Finite State Automata (WFSA) and using Incremental Latent Semantic Analysis (ILSA) to deal with a large number of essays. First, SCESS uses an n-gram language model to construct a WFSA to perform text pre-processing. At this stage, the system integrates a Confusing-Character Table, a Part-Of-Speech Table, beam search and heuristic search to perform automated word segmentation and correction of essays. Experimental results show that this pre-processing procedure is effective, with a Recall Rate of 88.50%, a Detection Precision of 92.31% and a Correction Precision of 88.46%. After text pre-processing, SCESS uses ILSA to perform automated essay scoring. We have carried out experiments to compare the ILSA method with the traditional LSA method on the corpora of essays from the MHK test (the Chinese proficiency test for minorities). Experimental results indicate that ILSA has a significant advantage over LSA, in terms of both running time and memory usage. Furthermore, experimental results also show that SCESS is quite effective with a scoring performance of 89.50%.
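As a rough illustration of the correction step the abstract describes (a confusion table searched with an n-gram language model and beam search), here is a minimal Python sketch. The confusion table, bigram probabilities, beam width and test string are toy assumptions for illustration only, not SCESS's actual data or parameters.

```python
# Minimal sketch: beam-search character correction driven by a confusion
# table and a bigram language model, in the spirit of the WFSA pre-processing
# step. All data below is toy data, not the system's real tables.

from math import log

# Toy confusion table: each character maps to itself plus similar candidates.
CONFUSION = {
    "a": ["a"],
    "b": ["b", "d"],
    "d": ["d", "b"],
    "c": ["c"],
}

# Toy bigram log-probabilities; "<s>" marks the sentence start.
BIGRAM = {
    ("<s>", "a"): log(0.9), ("<s>", "b"): log(0.05),
    ("a", "b"): log(0.8), ("a", "d"): log(0.1),
    ("b", "c"): log(0.7), ("d", "c"): log(0.1),
}
FLOOR = log(1e-6)  # back-off score for unseen bigrams

def correct(chars, beam_width=3):
    """Return the highest-scoring correction of `chars` under the bigram model."""
    beam = [(0.0, ["<s>"])]                      # (log-prob, path so far)
    for ch in chars:
        candidates = CONFUSION.get(ch, [ch])     # possible replacements
        expanded = []
        for score, path in beam:
            for cand in candidates:
                s = score + BIGRAM.get((path[-1], cand), FLOOR)
                expanded.append((s, path + [cand]))
        # Keep only the top `beam_width` hypotheses.
        beam = sorted(expanded, key=lambda x: x[0], reverse=True)[:beam_width]
    best_score, best_path = beam[0]
    return "".join(best_path[1:])

print(correct("adc"))  # the model prefers "abc": "d" is corrected to "b"
```

In the full system the search space would be compiled into a weighted automaton and combined with segmentation; this sketch only shows the scoring-and-pruning idea.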
We address the problem of unsupervised and semi-supervised SMS (Short Message Service) text message SPAM detection. We develop a content-based Bayesian classification approach which is a modest extension of the technique discussed by Resnik and Hardisty in 2010. The approach assumes that the bodies of the SMS messages arise from a probabilistic generative model and estimates the model parameters by Gibbs sampling using an unlabeled, or partially labeled, SMS training message corpus. The approach classifies new SMS messages as SPAM or HAM (non-SPAM) by zero-thresholding their logit estimates. We tested the approach on a publicly available SMS corpus collected in the UK. Used in semi-supervised fashion, the approach clearly outperformed a competing algorithm, Semi-Boost. Used in unsupervised fashion, the approach outperformed a fully supervised classifier, an SVM (Support Vector Machine), when the number of training messages used by the SVM was small and performed comparably otherwise. We believe the approach works well and is a useful tool for SMS SPAM detection.
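The final classification step the abstract describes, zero-thresholding a logit, can be sketched as follows. A Gibbs-sampled generative model is beyond the scope of a short example, so the per-word log-likelihood ratios and prior odds below are hypothetical values standing in for the estimated model parameters.

```python
# Minimal sketch: classify a message as SPAM or HAM by zero-thresholding
# its logit. The word scores and prior below are toy stand-ins for
# parameters that would really be estimated from a training corpus.

from math import log

# Toy per-word log-likelihood ratios: log P(w|SPAM) - log P(w|HAM).
LLR = {"win": log(5.0), "free": log(4.0), "prize": log(6.0),
       "meeting": log(0.2), "lunch": log(0.25)}

PRIOR_LOGIT = log(0.3 / 0.7)  # toy prior log-odds of SPAM

def logit(message):
    """Sum the prior log-odds and the per-word log-likelihood ratios."""
    return PRIOR_LOGIT + sum(LLR.get(w, 0.0) for w in message.lower().split())

def classify(message):
    """Zero-threshold the logit: positive means SPAM, otherwise HAM."""
    return "SPAM" if logit(message) > 0 else "HAM"

print(classify("win a free prize"))     # words with high SPAM evidence
print(classify("lunch meeting today"))  # words with high HAM evidence
```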
Rule-based information extraction is an important approach for processing the increasingly available amount of unstructured data. The manual creation of rule-based applications is a time-consuming and tedious task, which requires qualified knowledge engineers. The costs of this process can be reduced by providing a suitable rule language and extensive tooling support. This paper presents UIMA Ruta, a tool for rule-based information extraction and text processing applications. The system was designed with a focus on rapid development. The rule language and its matching paradigm facilitate the quick specification of comprehensible extraction knowledge. They support a compact representation while still providing a high level of expressiveness. These advantages are supplemented by the development environment UIMA Ruta Workbench. It provides, in addition to extensive editing support, essential assistance for explanation of rule execution, introspection, automatic validation, and rule induction. UIMA Ruta is a useful tool for academia and industry due to its open source license. We compare UIMA Ruta to related rule-based systems especially concerning the compactness of the rule representation, the expressiveness, and the provided tooling support. The competitiveness of the runtime performance is shown in relation to a popular and freely available system. A selection of case studies implemented with UIMA Ruta illustrates the usefulness of the system in real-world scenarios.
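To give a feel for the general paradigm (rules that create typed annotations over text spans), here is a toy rule matcher in plain Python. This is not Ruta syntax, and the types and rules below are invented purely for illustration.

```python
# Toy sketch of annotation-based rule matching: each rule is a (type, regex)
# pair, and applying the rules yields typed annotations over spans of text.
# Invented example rules; real Ruta rules are written in its own language.

import re

def annotate(text, rules):
    """Run (type_name, regex) rules in order; return typed, located spans."""
    annotations = []
    for type_name, pattern in rules:
        for m in re.finditer(pattern, text):
            annotations.append((type_name, m.start(), m.end(), m.group()))
    return annotations

rules = [
    ("Year", r"\b(?:19|20)\d{2}\b"),      # four-digit years
    ("Money", r"\$\d+(?:\.\d{2})?"),      # dollar amounts
]
text = "The project started in 2014 with a budget of $500."
for ann in annotate(text, rules):
    print(ann)
```

A real rule language adds much more on top of this sketch: rules can refer to annotations created by earlier rules, quantify over sequences, and attach features, which is where the compactness and expressiveness discussed in the paper come from.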
Keyphrases are the most important phrases of a document, and they are useful for improving natural language processing tasks, including information retrieval, document classification, document visualization, summarization and categorization. Here, we propose a supervised framework augmented by novel extra-textual information derived primarily from Wikipedia. Wikipedia is utilized in such an advantageous way that – unlike most other methods relying on Wikipedia – a full textual index of all the Wikipedia articles is not required by our approach, as we only exploit the category hierarchy and a list of multiword expressions derived from Wikipedia. This approach is not only less resource intensive, but also produces comparable or superior results compared to previous similar works. Our thorough evaluations also suggest that the proposed framework performs consistently well on multiple datasets, being competitive or even outperforming the results obtained by other state-of-the-art methods. Besides introducing features that incorporate extra-textual information, we also experimented with a novel way of representing features that are derived from the POS tagging of the keyphrase candidates.
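One ingredient the abstract mentions is deriving information from the POS tagging of keyphrase candidates. A common first step in such pipelines is collecting candidates that match a POS pattern; the sketch below does this for the pattern (ADJ)* (NOUN)+. The tag set and pattern are illustrative assumptions, not the paper's actual candidate-selection rule.

```python
# Minimal sketch: collect keyphrase candidates as maximal token runs
# matching (ADJ)* (NOUN)+. The coarse ADJ/NOUN/VERB tags and the pattern
# itself are assumptions made for this illustration.

def candidates(tagged_tokens):
    """Return phrases of optional adjectives followed by one or more nouns."""
    phrases, current, has_noun = [], [], False
    for word, tag in tagged_tokens:
        starts_new = tag == "ADJ" and has_noun  # ADJ after nouns opens a new phrase
        if tag in ("ADJ", "NOUN") and not starts_new:
            current.append(word)
            has_noun = has_noun or tag == "NOUN"
        else:
            if has_noun:                        # close a phrase that reached a noun
                phrases.append(" ".join(current))
            current = [word] if tag == "ADJ" else []
            has_noun = False
    if has_noun:
        phrases.append(" ".join(current))
    return phrases

tagged = [("supervised", "ADJ"), ("keyphrase", "NOUN"), ("extraction", "NOUN"),
          ("improves", "VERB"), ("document", "NOUN"), ("retrieval", "NOUN")]
print(candidates(tagged))
```

In a supervised framework like the one described, such candidates would then be scored by a classifier using textual and extra-textual features (e.g. the Wikipedia-derived ones).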
Knowledge representation and reasoning is the foundation of artificial intelligence, declarative programming, and the design of knowledge-intensive software systems capable of performing intelligent tasks. Using logical and probabilistic formalisms based on answer set programming (ASP) and action languages, this book shows how knowledge-intensive systems can be given knowledge about the world and how it can be used to solve non-trivial computational problems. The authors maintain a balance between mathematical analysis and practical design of intelligent agents. All the concepts, such as answering queries, planning, diagnostics, and probabilistic reasoning, are illustrated by programs of ASP. The text can be used for AI-related undergraduate and graduate classes and by researchers who would like to learn more about ASP and knowledge representation.
An informative and comprehensive overview of the state-of-the-art in natural language generation (NLG) for interactive systems, this guide serves to introduce graduate students and new researchers to the field of natural language processing and artificial intelligence, while inspiring them with ideas for future research. Detailing the techniques and challenges of NLG for interactive applications, it focuses on the research into systems that model collaborativity and uncertainty, are capable of being scaled incrementally, and can engage with the user effectively. A range of real-world case studies is also included. The book and the accompanying website feature a comprehensive bibliography, and refer the reader to corpora, data, software and other resources for pursuing research on natural language generation and interactive systems, including dialog systems, multimodal interfaces and assistive technologies. It is an ideal resource for students and researchers in computational linguistics, natural language processing and related fields.
The notion of burden of proof and its companion notion of presumption are central to argumentation studies. This book argues that we can learn a lot from how the courts have developed procedures over the years for allocating and reasoning with presumptions and burdens of proof, and from how artificial intelligence has built precise formal and computational systems to represent this kind of reasoning. The book provides a model of reasoning with burden of proof and presumption, based on analyses of many clearly explained legal and non-legal examples. The model is shown to fit cases of everyday conversational argumentation as well as argumentation in legal cases. Burden of proof determines (1) under what conditions an arguer is obliged to support a claim with an argument that backs it up and (2) how strong that argument needs to be to prove the claim in question.
In the previous chapters we have discussed how to represent the operation of critical questions in a formal and computational model that can incorporate argumentation schemes as well as their accompanying critical questions. In order to illustrate how this works, the example of the scheme for argument from expert opinion has been used. The problem is to classify the critical questions as assumptions or exceptions in order to properly reflect the distribution of the burden of proof between the party who put forward the argument and the other party, the respondent who is raising critical questions about the argument. Is this problem merely a technical problem of how to model argumentation by the use of defeasible argumentation schemes? Or is it a problem that could arise in a real case of argumentation? In Chapter 4, a legal case concerning how to logically represent the critical questions appropriate for argument from witness testimony is studied; it illustrates the problem of how to arrive at a decision that properly assigns the burden of proof to one side or the other.
In this case, the Oregon Supreme Court overturned the previous procedures for determining the admissibility of eyewitness identification evidence. The decision to change the law was based on recent research in the social sciences concerning the reliability of eyewitness identification, and on considerations put to the court by the Innocence Network, an organization dedicated to the study of unjust convictions. In some cases it can be quite difficult for the courts to make a decision on burden of proof, and in some of these cases a ruling is made that can act as a precedent when the same kind of decision about burden of proof arises in a comparable case. In Chapter 4, a more challenging kind of case is studied in which a change was made in the normal way of dealing with burden of proof in criminal trials. This change was prompted by a gradually growing body of scientific evidence suggesting that witness testimony evidence is much more fallible in certain respects than was previously thought.
In law, there is a fundamental distinction between two main types of burden of proof (Prakken and Sartor, 2009). One is the setting of the global burden of proof before the trial begins, which is called the burden of persuasion. It does not change during the argumentation stage, and it is the device used to determine which side has won at the closing stage. The other is the local setting of burden of proof at the argumentation stage, often called the burden of production (or the evidential burden, or the burden of going forward with evidence) in law. This burden can shift back and forth as the argumentation proceeds. For example, if one side puts forward a strong argument, the other side must meet the local burden of responding to that argument by criticizing it or presenting a counterargument; otherwise the strong argument will hold, and it will fulfill its proponent's burden of persuasion unless the respondent puts forward an equally strong objection or counterargument. Failing that, the respondent will lose the trial at that point, and the judge can declare that the trial is over.
According to Williams (2003, 166), considerable confusion has arisen from a failure to distinguish between two distinct kinds of burdens of proof, especially by appeal courts that discuss questions of burden of proof without making it clear whether they are talking about burden of persuasion or evidential burden. Recent ground-breaking work in AI shows great promise for helping law to work toward a more systematic conceptual grasp of the notion of burden of proof by seeing how to model it in a precise way.
In his book on fallacies, Hamblin (1970) built a simple system for argumentation in dialogue he called the Why-Because System with Questions. In his discussion of this system, he replaced the concept of burden of proof with a simpler concept of initiative, which could be described as something like getting the upper hand as the argumentation moves back and forth in the dialogue between the one party and the other. No doubt he realized that the concept of burden of proof was too complex a matter to be dealt with in the limited scope of his chapter on formal dialogue systems. In this chapter, it is shown how an extended version of Hamblin’s dialogue system provides a nice way of modeling the phenomenon of shifting of burden of proof in a dialogue, yielding a precise way of distinguishing between different kinds of burden of proof, and dealing with fallacies like the argumentum ad ignorantiam (argument from negative evidence).
Over forty years have passed since the publication of Hamblin’s book Fallacies (1970), and there has been much written on the subject of argumentation since that time. One might think that such a book would have long ago ceased to have much value in contributing to the latest research. Such is not the case, however, especially with regard to Hamblin’s remarkably innovative Chapter 8 on formal dialogue systems, a chapter that provided the basis for much subsequent work. To give an example of a formal dialogue system of the kind he recommended in Chapter 8, he built a Why-Because System with Questions. A leading feature of this system is that it has a speech act representing a move in a dialogue in which one party asks the other party to prove, or give an argument to support, a claim that the other party has made. The Hamblin system has several rules for managing dialogues in which such support request questions are asked and need to be responded to. It is shown in Chapter 5 how these rules are fundamentally important in attempting to build any formal dialogue system designed to be a framework modeling the operation of burden of proof in rational argumentation.
The notions of burden of proof and presumption are central to law, but as we noted in Chapter 1, they are also said to be the slipperiest of any of the family of legal terms employed in legal reasoning. However, as shown in Chapter 2, recent studies of burden of proof and presumption (Prakken, Reed and Walton, 2005; Prakken and Sartor, 2006; Gordon, Prakken and Walton, 2007; Prakken and Sartor, 2007) offer formal models that can render them into precise tools useful for legal reasoning. In this chapter, the various theories and formal models are comparatively evaluated with the aim of working out a more comprehensive theory that can integrate the components of the argumentation structure on which they are based. It is shown that the notion of presumption has both a logical component and a dialectical component, and the new theory of presumption developed in the chapter, called the dialogical theory, combines these two components. Thus, the aim of Chapter 3 is to build on the clarification of the notion of burden of proof achieved in Chapter 2, and to move forward to show how presumption is related to burden of proof. By this means, the goal is to achieve a better theory of presumption.
According to Ashford and Risinger (1969) there is no agreement among legal writers on the questions of exactly what a presumption is and how presumptions operate. However, they think that there is some general agreement on at least a minimal account of what a presumption is: “Most are agreed that a presumption is a legal mechanism which, unless sufficient evidence is introduced to render the presumption inoperative, deems one fact to be true when the truth of another fact has been established” (165). According to legal terminology, the fact to be proved is called “the fact presumed,” and the fact to be established before this other fact is to be deemed true is called “the fact proved” (Ashford and Risinger, 1969). The analysis of presumption put forward in this chapter takes this minimal account as its basic structure.
Erik Krabbe’s pioneering article on metadialogues (dialogues about dialogues) opened up an important new avenue of research in the field, largely unexplored up to that point. His modest conclusion was that it was too early for conclusions (Krabbe, 2003, 89). Even so, by posing a number of problems along with tentative solutions, his article was a very important advance in the field. Hamblin (1970) was the first to suggest the usefulness of metadialogues in the study of fallacies. He proposed (1970, 283–284) that disputes that can arise about allegations that the fallacy of equivocation has been committed could be resolved by redirecting the dispute to a procedural level. This procedural level would correspond to what Krabbe calls a metadialogue (Krabbe, 2003). Other writers on argumentation (Mackenzie, 1979, 1981; Finocchiaro, 1980, 2005; van Eemeren and Grootendorst, 1992), as noted by Krabbe (2003, 86–87), have tacitly recognized the need to move to a metalevel dialogue framework, but none provided a metadialogue system. The study of metadialogues is turning out to be very important in argumentation theory and in computer science (Wooldridge, McBurney and Parsons, 2005).
In Chapter 6, it is shown how analyzing disputes about burden of proof is an important topic for investigation in the field of metadialogue theory. It has recently been shown (Prakken, Reed and Walton, 2005) that legal disputes about burden of proof can be formally modeled by using the device of a formal dialogue protocol for embedding a metadialogue about the issue of burden of proof into an ongoing dialogue about some prior issue. In Chapter 8, a general solution to the problem of how to analyze burden of proof is developed by building on this framework, using three key examples from Chapter 1 to show how disputes about burden of proof can arise. These three examples were presented in Chapter 1 as classic cases of burden of proof disputes, and now in Chapter 6 it is shown how current tools from argumentation theory and artificial intelligence based on metadialogues can be applied to the problems they pose.