In this chapter we review different "coping strategies" that cognitive scientists have adopted for dealing with the intractability\sind{intractability} of computational-level theories\sind{computational-level theories} and reflect on their validity and usefulness. We then turn to the question of how the concepts and techniques from Chapters 2–7 can serve as tools for cognitive scientists to deal with the intractability of computational-level theories of cognition in a constructive and conceptually coherent way.
In this chapter, we consider a computational-level theory of analogy derivation as structure mapping. We again illustrate the use of classical complexity analysis to assess the theory's intractability. In addition, we show how parameterized complexity analysis can be used to formally assess intuitive conjectures about possible sources of this intractability. This illustration demonstrates that such intuitions can often be wrong, underscoring the importance of formal analyses.
In this chapter we introduce the parameterized analogues of \pt\ and \np, namely, \fpt\ and other complexity classes such as \wone, \wtwo, and \xp, which together with \fpt\ constitute the \whi-hierarchy, as well as the formal notions of \whi-hardness and \whi-completeness. We describe why \whi-hard parameterized problems are considered to be fixed-parameter intractable (i.e., not computable in fixed-parameter tractable time). We explain how one can prove that a problem is a member of a class in the \whi-hierarchy, and how one can use the technique of parameterized reduction, introduced and practiced in Chapter 6, to prove \whi-hardness and \whi-completeness.
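
For orientation, these classes are related by a chain of inclusions: \fpt\ $\subseteq$ \wone\ $\subseteq$ \wtwo\ $\subseteq \cdots \subseteq$ \xp. It is known that \fpt\ $\neq$ \xp, and the intermediate inclusions are conjectured (though not proven) to be strict; this is why \whi-hardness is taken as strong evidence that a parameterized problem is not in \fpt.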
In this chapter, we consider a computational-level theory of coherence as constraint satisfaction. We illustrate the use of classical and parameterized complexity analysis to assess the theory's (in)tractability and identify some of its sources of intractability.
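
To make the constraint-satisfaction formulation concrete, here is a minimal sketch (in Python, with hypothetical belief names; it is not the book's own code) of the standard decision version of Coherence, solved by brute force over all accept/reject partitions, an approach whose running time grows exponentially with the number of elements.
\begin{verbatim}
from itertools import product

def coherence(elements, pos, neg, threshold):
    """Decision version of Coherence: is there a partition of the elements
    into accepted and rejected sets such that the total weight of the
    satisfied constraints is at least `threshold`?

    pos and neg map unordered pairs frozenset({p, q}) to positive weights.
    A positive constraint is satisfied when p and q end up on the same
    side; a negative constraint when they end up on opposite sides."""
    elements = list(elements)
    # Brute force: try all 2^n accept/reject assignments.
    for bits in product([True, False], repeat=len(elements)):
        accepted = {e for e, b in zip(elements, bits) if b}
        weight = 0
        for pair, w in pos.items():
            p, q = tuple(pair)
            if (p in accepted) == (q in accepted):
                weight += w
        for pair, w in neg.items():
            p, q = tuple(pair)
            if (p in accepted) != (q in accepted):
                weight += w
        if weight >= threshold:
            return True
    return False

# Hypothetical toy instance with three beliefs.
pos = {frozenset({"b1", "b2"}): 2}
neg = {frozenset({"b2", "b3"}): 1}
print(coherence({"b1", "b2", "b3"}, pos, neg, threshold=3))  # True
\end{verbatim}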
In this chapter, we introduce the concepts of parameterized problem and fixed-parameter tractability. Using these concepts, one can show that problems that are polynomial-time intractable in general (i.e., \np-hard) may yet be practically solvable provided only that certain input parameters are constrained in terms of their values. This conception of tractability underlies the FPT-Cognition Thesis introduced in Chapter 1. We illustrate three techniques for showing that a parameterized problem is fixed-parameter tractable, namely, brute-force combinatorics, bounded search trees, and reduction to a problem kernel. We also include several exercises for practicing these techniques.
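
As a preview of the bounded search tree technique, the following sketch (a standard textbook illustration using Vertex Cover, not necessarily one of the chapter's own examples) decides whether a graph has a vertex cover of size at most $k$ by branching on the endpoints of an uncovered edge; the search tree has at most $2^k$ leaves, so the running time is fixed-parameter tractable in $k$.
\begin{verbatim}
def has_vertex_cover(edges, k):
    """Bounded search tree for Vertex Cover.

    edges: iterable of 2-element tuples (u, v); k: budget.
    Some endpoint of any uncovered edge must be in the cover, so we
    branch on the two endpoints. The recursion depth is at most k,
    giving a search tree with O(2^k) nodes."""
    edges = [tuple(e) for e in edges]
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # edges remain but no budget left
    u, v = edges[0]
    # Branch 1: put u in the cover; Branch 2: put v in the cover.
    remaining_u = [e for e in edges if u not in e]
    remaining_v = [e for e in edges if v not in e]
    return has_vertex_cover(remaining_u, k - 1) or \
           has_vertex_cover(remaining_v, k - 1)

# Hypothetical example: a 4-cycle has a vertex cover of size 2 but not 1.
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(cycle, 2))  # True
print(has_vertex_cover(cycle, 1))  # False
\end{verbatim}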
In this chapter we introduce the notion of polynomial-time reductions. We explain how this technique can be used to transform an input for problem $A$ into an input for problem $B$ such that the transformed input is a yes-instance of $B$ if and only if the original input is a yes-instance of $A$. If this transformation can be done in polynomial time, this implies that if $B$ is polynomial-time computable, then so is $A$; and, by contraposition, if $A$ has an exponential-time lower bound, then so must $B$. Polynomial-time reductions are thus a powerful technique for relating problems to each other. We will demonstrate several reduction strategies, namely, reduction by restriction, by local replacement, and by component design. We include several exercises for practicing this technique.
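
As a minimal illustration of a polynomial-time reduction (a standard example, not necessarily one used in the chapter), the sketch below transforms an instance of Independent Set into an instance of Vertex Cover: a set $S$ of vertices is independent exactly when its complement $V \setminus S$ is a vertex cover, so only the target size needs to be adjusted.
\begin{verbatim}
def independent_set_to_vertex_cover(vertices, edges, k):
    """Polynomial-time reduction from Independent Set to Vertex Cover.

    A set S of vertices is independent in G exactly when V \ S is a
    vertex cover of G, so (G, k) is a yes-instance of Independent Set
    iff (G, |V| - k) is a yes-instance of Vertex Cover.  Only the target
    size is recomputed, so the transformation runs in polynomial time."""
    return vertices, edges, len(vertices) - k

# Hypothetical example: a triangle has an independent set of size 1,
# matching a vertex cover of size 3 - 1 = 2.
print(independent_set_to_vertex_cover([1, 2, 3],
                                      [(1, 2), (2, 3), (1, 3)], 1))
\end{verbatim}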
In this chapter we introduce the notion of parameterized reductions. We explain how this technique can be used to transform an input for a parameterized problem $K$-$A$ into an input for a parameterized problem $K$-$B$ such that the transformed input is a yes-instance of $K$-$B$ if and only if the original input is a yes-instance of $K$-$A$. If this transformation can be done in fixed-parameter tractable time, this implies that if $K$-$B$ is fixed-parameter tractable, then so is $K$-$A$; and, by contraposition, if $K$-$A$ is not fixed-parameter tractable, then neither is $K$-$B$. Like the polynomial-time reductions introduced in Chapter 3, parameterized reductions are a powerful technique for relating problems to each other. We will demonstrate parameterized analogues of each of the reduction strategies described in Chapter 3. We also include several exercises for practicing this technique.
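
Continuing the hypothetical example above: the Independent Set-to-Vertex Cover transformation is not a parameterized reduction, because its new parameter $|V| - k$ depends on the input size rather than on $k$ alone. The sketch below gives a transformation that does qualify, reducing $k$-Independent Set to $k$-Clique by complementing the graph while keeping $k' = k$.
\begin{verbatim}
from itertools import combinations

def independent_set_to_clique(vertices, edges, k):
    """Parameterized reduction from k-Independent Set to k-Clique.

    A vertex set is pairwise non-adjacent in G exactly when it is
    pairwise adjacent in the complement of G, so yes- and no-instances
    are preserved.  The complement is computable in polynomial (hence
    fpt) time, and the new parameter is a function of the old parameter
    alone: k' = k."""
    edge_set = {frozenset(e) for e in edges}
    complement_edges = [(u, v) for u, v in combinations(vertices, 2)
                        if frozenset((u, v)) not in edge_set]
    return vertices, complement_edges, k  # parameter preserved: k' = k

# Hypothetical example: in the path 1-2-3-4, {1, 3} is an independent
# set of size 2 and a clique of size 2 in the complement graph.
print(independent_set_to_clique([1, 2, 3, 4],
                                [(1, 2), (2, 3), (3, 4)], 2))
\end{verbatim}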
In this chapter we explain how to analyze the time an algorithm takes to solve a given problem, and specifically whether it takes polynomial or exponential time. We consider a variety of well-known problems from computer science to illustrate polynomial-time versus exponential-time algorithms. In these analyses, we build on a common distinction between three types of problems: optimization problems, search problems, and decision problems. While the first two are most commonly adopted in cognitive science, the last is most widely used in computational complexity analyses. As we will explain in subsequent chapters, this distinction poses no obstacle for complexity analyses, because the three problem types are closely related. We will also see that while it is possible to "prove by example" that a problem is of polynomial-time complexity (viz., by giving a polynomial-time algorithm that solves the problem), proving that a problem does not allow for any such polynomial-time algorithm requires different proof methods.
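
As a toy contrast (hypothetical examples, not the chapter's own), the first function below runs in polynomial time because it inspects only the $O(n^2)$ pairs of numbers, whereas the naive algorithm for the second problem enumerates all $2^n$ subsets and therefore takes exponential time.
\begin{verbatim}
from itertools import combinations

def pair_sum(numbers, target):
    """Polynomial time: at most O(n^2) pairs need to be checked."""
    return any(a + b == target for a, b in combinations(numbers, 2))

def subset_sum(numbers, target):
    """Exponential time: the naive algorithm tries all 2^n subsets."""
    return any(sum(subset) == target
               for r in range(len(numbers) + 1)
               for subset in combinations(numbers, r))

print(pair_sum([3, 5, 8, 13], 13))    # True: 5 + 8
print(subset_sum([3, 5, 8, 13], 16))  # True: 3 + 5 + 8
\end{verbatim}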
In this chapter we introduce the motivation for studying "cognition and intractability." We provide an intuitive introduction to the problem of intractability as it arises for models of cognition, using an illustrative everyday problem as a running example: selecting toppings on a pizza. Next, we review relevant background on the conceptual foundations of cognitive explanation, computability, and tractability. By the end of this chapter the reader should have a good understanding of the conceptual foundations of the Tractable Cognition thesis, including its variants, the P-Cognition thesis and the FPT-Cognition thesis, which motivate diving into the technical concepts and proof techniques covered in Chapters 2–7.