Psychophysics is concerned with measuring how external physical stimuli cause internal psychological sensations. In a typical psychophysical experiment, subjects are repeatedly confronted with two similar stimuli, such as two sounds, two weights, two smells, two lines, or two time intervals. One stimulus—the standard stimulus—always has the same intensity, whereas the other stimulus—the test stimulus—varies in intensity from trial to trial. On each trial of the experiment, the subject's task is to detect which of the two stimuli is more intense: louder, heavier, stronger-smelling, more tilted, or longer lasting. The more similar the stimuli, the more difficult it is for the subject to discriminate between them.
The relation between task difficulty and performance usually follows a sigmoid or S-shaped curve, as shown in Figure 12.1. The x-axis represents differences in stimulus intensity between the test stimulus and the standard. The y-axis represents the probability of a response indicating that the test stimulus has the higher intensity. The curve linking these physical and psychological measures is known as a psychophysical function, and is used to define several values of interest. The “point of subjective equality” (PSE) is the difference in intensity at which the participant judges the test stimulus to be more intense 50% of the time, so that the two stimuli are perceived as equally intense; this is not necessarily the point where the two stimuli are physically equal. The “just noticeable difference” (JND) is the smallest change in intensity at which the subject “just” notices a difference between the two stimuli.
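To make these definitions concrete, the sketch below assumes a logistic form for the psychophysical function, p(x) = 1 / (1 + exp(-(alpha + beta * x))), with x the intensity difference between test and standard. The parameter values and the 84% convention for the JND are illustrative assumptions, not values taken from the text.

```r
# Minimal sketch of a logistic psychophysical function (an assumed form).
psychometric <- function(x, alpha, beta) {
  1 / (1 + exp(-(alpha + beta * x)))
}

# PSE: the difference x at which p(x) = 0.5, which solves to -alpha / beta.
pse <- function(alpha, beta) {
  -alpha / beta
}

# JND, under one common convention: the extra intensity difference needed
# to move from p = 0.5 at the PSE up to p = 0.84.
jnd <- function(beta) {
  qlogis(0.84) / beta
}

# Hypothetical parameter values, for illustration only:
alpha <- -2
beta  <- 0.05
pse(alpha, beta)  # 40: the test is judged more intense half the time here
jnd(beta)         # about 33.2: units of difference needed to "just" notice
```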
The take-the-best (TTB) model of decision-making (Gigerenzer & Goldstein, 1996) is a simple but influential account of how people choose between two stimuli on some criterion, and a good example of the general class of heuristic decision-making models (e.g., Gigerenzer & Todd, 1999; Gigerenzer & Gaissmaier, 2011; Payne, Bettman, & Johnson, 1993). TTB addresses decision tasks like “which of Frankfurt or Munich has the larger population?”, “which of a catfish and a herring is more fertile?”, and “which of these two professors has the higher salary?”.
TTB assumes that all stimuli are represented in terms of the presence or absence of a common set of cues. In the well-studied German cities data set, this means cities are represented in terms of nine cues, including whether or not they have an international airport, whether they have hosted the Olympics, and whether they have a football team in the Bundesliga. Associated with each cue in TTB is a “cue validity.” This validity measures the proportion of times that, for those pairs of stimuli where one has the cue and the other does not, the cue belongs to the stimulus that has the greater criterion value. For example, the cue “Is the city the national capital?” is highly valid because the capital city, Berlin, is also the most populous city.
The TTB model assumes that, when people decide which is the larger of two cities, they search the cues from highest to lowest validity, stopping as soon as a cue is found that one city has but the other does not. At this point, TTB says simply that people choose the city that has the cue. If all of the cues are exhausted, TTB assumes people guess.
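The search, stopping, and decision rules are simple enough to state in a few lines of code. Below is a minimal sketch in R; the cue coding (binary vectors sorted from highest to lowest validity) and all example values are assumptions for illustration.

```r
# Minimal sketch of the TTB decision rule. Stimuli a and b are binary
# cue vectors, assumed to be ordered from highest to lowest cue validity.
ttb_choose <- function(a, b) {
  for (k in seq_along(a)) {
    if (a[k] == 1 && b[k] == 0) return("a")  # first discriminating cue decides
    if (a[k] == 0 && b[k] == 1) return("b")
  }
  sample(c("a", "b"), size = 1)              # no cue discriminates: guess
}

# Cue validity: among pairs where exactly one stimulus has the cue, the
# proportion in which the stimulus with the cue has the larger criterion.
cue_validity <- function(cue, criterion) {
  pairs  <- combn(length(cue), 2)
  i <- pairs[1, ]; j <- pairs[2, ]
  disc   <- cue[i] != cue[j]                 # pairs the cue discriminates
  winner <- ifelse(criterion[i] > criterion[j], cue[i], cue[j])
  mean(winner[disc] == 1)
}

# Hypothetical example: two cities described on three cues, already
# sorted by validity (say: national capital, exposition site, Bundesliga team).
ttb_choose(a = c(0, 1, 1), b = c(0, 1, 0))   # "a": the third cue decides
```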
Brown, Neath, and Chater (2007) proposed the SIMPLE (Scale-Invariant Memory, Perception, and LEarning) model, which has been applied to a variety of phenomena, including the basic memory phenomenon of free recall. In this application, the SIMPLE model assumes memories are encoded by the time at which they were presented, but that these representations are logarithmically compressed, so that more temporally distant memories are more similar to one another. It also assumes that distinctiveness plays a central role in performance on memory tasks, and that interference rather than decay is responsible for forgetting. Perhaps most importantly, the SIMPLE model assumes that the same memory processes operate at all time scales, unlike theories and models that assume different mechanisms for short-term and long-term memory.
The first application considered by Brown et al. (2007) involves seminal immediate free recall data reported by Murdock (1962). The data give the proportion of words correctly recalled averaged across participants, for lists of 10, 15, and 20 words presented at a rate of 2 seconds per word, and lists of 20, 30, and 40 words presented at a rate of 1 second per word.
Brown et al. (2007) make some reasonable assumptions about undocumented aspects of the task (e.g., the mean time between the end of list presentation and recall) to set the time T_i between the learning and retrieval of the ith item. With these times established, the application of the SIMPLE model to the free recall data involves five stages, which are clearly described in Brown et al. (2007, Appendix).
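As a rough guide to what those stages compute, the sketch below follows the standard formalization of SIMPLE: log-compressed encoding, exponentially decaying pairwise similarity, normalization into discriminability, a thresholded retrieval probability, and a summed overall recall probability. The free parameters c, s, and t control similarity scaling, threshold noise, and the threshold itself; the parameter values and retention intervals here are hypothetical.

```r
# Minimal sketch of the five SIMPLE stages for free recall, following the
# standard formalization of Brown et al. (2007). T is the vector of times
# between learning and retrieval; c, s, and t are free parameters.
simple_recall <- function(T, c, s, t) {
  M   <- log(T)                            # 1. log-compressed temporal encoding
  eta <- exp(-c * abs(outer(M, M, "-")))   # 2. pairwise similarity
  D   <- eta / rowSums(eta)                # 3. discriminability (row-normalized)
  r   <- 1 / (1 + exp(-s * (D - t)))       # 4. thresholded retrieval probability
  pmin(1, rowSums(r))                      # 5. overall recall probability per item
}

# Hypothetical retention intervals for a 10-word list at 2 s per word,
# with recall assumed to begin 5 s after the last word:
T <- 2 * (10:1) + 5
round(simple_recall(T, c = 12, s = 10, t = 0.5), 2)
```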
This book, together with the code, answers to questions, and other material at www.bayesmodels.com, teaches you how to do Bayesian modeling. With modern computer software, and the WinBUGS program in particular, this turns out to be surprisingly straightforward. After working through the examples provided in this book, you should be able to build your own models, apply them to your own data, and draw your own conclusions.
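As a taste of that workflow, here is a minimal sketch using JAGS through the rjags R package rather than WinBUGS itself (the two share essentially the same model language, and JAGS must be installed for this to run). The model, inferring a binomial rate from 7 successes in 10 trials, is a standard introductory example, and the data are hypothetical.

```r
# Minimal sketch of the modeling workflow with JAGS via the rjags package
# (assumes JAGS is installed; WinBUGS scripts follow the same pattern).
library(rjags)

model_string <- "
model {
  theta ~ dbeta(1, 1)   # uniform prior on the rate
  k ~ dbin(theta, n)    # k successes observed out of n trials
}"

data_list <- list(k = 7, n = 10)   # hypothetical data

model <- jags.model(textConnection(model_string), data = data_list,
                    n.chains = 2, quiet = TRUE)
update(model, 1000)                                    # burn-in
samples <- coda.samples(model, "theta", n.iter = 5000)
summary(samples)   # posterior mean of theta near 0.67
```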
This book is based on three principles. The first is that of accessibility: the book's only prerequisite is that you know how to operate a computer; you do not need any advanced knowledge of statistics or mathematics. The second principle is that of applicability: the examples in this book are meant to illustrate how Bayesian modeling can be useful for problems that people in cognitive science care about. The third principle is that of practicality: this book offers a hands-on, “just do it” approach that we feel keeps students interested and motivated.
In line with these three principles, this book has little content that is purely theoretical. Hence, you will not learn from this book why the Bayesian approach to inference is as compelling as it is; neither will you learn much about the intricate details of modern sampling algorithms such as Markov chain Monte Carlo, even though this book could not exist without them.