In the example from Pearl (above), mud and rain are correlated, but the relationship between them is not symmetric. Creating mud (e.g., by pouring water on dirt) does not make rain. However, if you were to cause rain (e.g., by seeding clouds), mud would result. There is a causal relationship between mud and rain: rain causes mud, but mud does not cause rain.
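The asymmetry can be illustrated with a toy structural causal model, a minimal sketch (not from the chapter) in which mud is determined by rain, and interventions override a variable's mechanism; the 0.3 rain probability is an arbitrary assumption:

```python
import random

def sample(do_rain=None, do_mud=None):
    """Sample from a toy causal model rain -> mud.
    do_rain / do_mud are interventions that override the usual mechanisms."""
    rain = do_rain if do_rain is not None else (random.random() < 0.3)
    mud = do_mud if do_mud is not None else rain  # mud is caused by rain
    return rain, mud

random.seed(0)
n = 10000
# Intervening on rain changes mud (causing rain makes mud):
p_mud_given_do_rain = sum(sample(do_rain=True)[1] for _ in range(n)) / n
# Intervening on mud leaves rain at its base rate (making mud does not make rain):
p_rain_given_do_mud = sum(sample(do_mud=True)[0] for _ in range(n)) / n
```

Although mud and rain are perfectly correlated observationally, only the intervention on rain propagates: `p_mud_given_do_rain` is 1, while `p_rain_given_do_mud` stays near the base rate of 0.3.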
What should an agent do when there are other agents, with their own goals and preferences, who are also reasoning about what to do? An intelligent agent should not ignore other agents or treat them as noise in the environment. This chapter considers the problems of determining what an agent should do in an environment that includes other agents who have their own utilities.
This chapter is about how to represent individuals (things, entities, objects) and relationships among them. As Baum suggests in the quote above, the real world contains objects, and compact representations of those objects and their relationships can make reasoning about them tractable. Such representations can be much more compact than representations in terms of features alone.
In the machine learning and probabilistic models presented in earlier chapters, the world is made up of features and random variables. As Pinker points out, we generally reason about things. Things are not features or random variables; it doesn’t make sense to talk about the probability of an individual animal, but you could reason about the probability that it is sick, based on its symptoms.
Instead of reasoning explicitly in terms of states, it is typically better to describe states in terms of features and to reason in terms of these features, where a feature is a function on states. Features are described using variables. Often features are not independent, and there are hard constraints that specify the legal combinations of assignments of values to variables.
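A minimal sketch of this idea, with hypothetical variables and constraints chosen purely for illustration: three variables each with a small domain, and hard constraints that rule out illegal combinations. Generate-and-test enumeration finds the legal assignments:

```python
from itertools import product

# Hypothetical variables with small domains (illustrative only):
domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}

# Hard constraints specifying which combinations of values are legal:
constraints = [
    lambda asg: asg["A"] < asg["B"],   # A must be less than B
    lambda asg: asg["B"] != asg["C"],  # B and C must differ
]

# Generate-and-test: enumerate all assignments, keep the legal ones.
names = list(domains)
solutions = [
    dict(zip(names, values))
    for values in product(*(domains[n] for n in names))
    if all(c(dict(zip(names, values))) for c in constraints)
]
```

Enumerating all 27 assignments and filtering leaves 6 legal ones; practical constraint solvers avoid this exhaustive enumeration, but the example shows how constraints carve legal states out of the space of feature-value combinations.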
In Chapters 7 and 8, learning was divorced from reasoning. An alternative is to explicitly use probabilistic reasoning, as in Chapter 9, with data providing evidence that can be conditioned on. This provides a theoretical basis for much of machine learning, including regularization and measures of simplicity.
This chapter starts with the state of the art in deploying AI for applications, then looks at the big picture in terms of the agent design space (page 21), and speculates on the future of AI. Placing many of the representations covered in this book within the agent design space makes the relationships among them more apparent.
This book is about artificial intelligence (AI), a field built on centuries of thought, which has been a recognized discipline for over 60 years. As well as solving practical tasks, AI provides tools to test hypotheses about the nature of thought itself. Deep scientific and engineering problems have already been solved and many more are waiting to be solved.
Artificial intelligence is a transformational set of ideas, algorithms, and tools. AI systems are now increasingly deployed at scale in the real world [Littman et al., 2021; Zhang et al., 2022a]. They have significant impact across almost all forms of human activity, including the economic, social, psychological, healthcare, legal, political, government, scientific, technological, manufacturing, military, media, educational, artistic, transportation, agricultural, environmental, and philosophical spheres.
How do you represent knowledge about a world to make it easy to acquire, debug, maintain, communicate, share, and reason with that knowledge? This chapter explores flexible methods for storing and reasoning with facts, and knowledge and data sharing using ontologies. As Smith points out, the problems of ontology are central for building intelligent computational agents.
A reinforcement learning (RL) agent acts in an environment, observing its state and receiving rewards. From this experience, a stream of actions followed by observations of the resulting state and reward, it must determine what to do, given its goal of maximizing accumulated reward. This chapter considers fully observable (page 29), single-agent reinforcement learning.
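As a minimal sketch of learning from such a stream (the two-state environment and all parameter values here are assumptions for illustration, not the chapter's examples), tabular Q-learning updates an action-value estimate after each act-then-observe step:

```python
import random

# A toy two-state environment: in each state the agent can "stay" or "move";
# moving from state 0 to state 1 yields reward 1, everything else yields 0.
def step(state, action):
    if action == "move":
        return 1 - state, (1 if state == 0 else 0)
    return state, 0

random.seed(1)
Q = {(s, a): 0.0 for s in (0, 1) for a in ("stay", "move")}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed learning parameters
state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(("stay", "move"))
    else:
        action = max(("stay", "move"), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update from one act-then-observe experience:
    best_next = max(Q[(next_state, a)] for a in ("stay", "move"))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

After training, the learned values prefer "move" in state 0 (it alternates the agent between states, collecting the reward every other step), which is the behavior that maximizes accumulated discounted reward in this toy environment.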
This chapter considers simple forms of reasoning in terms of propositions – statements that can be true or false. Such reasoning includes model finding, determining logical consequences, and various forms of hypothetical reasoning. Semantics forms the foundation for specifying facts, for reasoning, and for debugging.
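A minimal sketch of model finding and logical consequence by truth-table enumeration (the atoms and knowledge base here are made-up examples, not the chapter's): an interpretation is a model if it satisfies every formula in the knowledge base, and a proposition is a logical consequence if it holds in all models.

```python
from itertools import product

# Atoms and a knowledge base, each formula encoded as a Python predicate:
atoms = ["p", "q", "r"]
kb = [
    lambda m: (not m["p"]) or m["q"],  # p -> q
    lambda m: m["p"] or m["r"],        # p or r
]

# Model finding: enumerate all interpretations, keep those satisfying the KB.
models = [
    dict(zip(atoms, vals))
    for vals in product([False, True], repeat=len(atoms))
    if all(c(dict(zip(atoms, vals))) for c in kb)
]

# q is a logical consequence of the KB iff q is true in every model.
entails_q = all(m["q"] for m in models)
```

Here the KB has four models, and q fails in the model where p is false, q is false, and r is true, so q is not a logical consequence. Enumeration is exponential in the number of atoms; it illustrates the semantics rather than a practical algorithm.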