Words, like biological species, are born and then, someday, they die. The half-life of a word is roughly 2,000 years, meaning that in that interval about half of all words are replaced with an unrelated (noncognate) word. Where do the new words come from? There are numerous dimensions along which new words could vary from old words, so it may not be easy to see how to approach this problem. However, extending our small worlds metaphor and the observation of clusters in language, we tell a simple story that mirrors biological theories about the origin of species. Language has urban centers with well-populated and well-connected meanings (like *food* and *red*). It also has rural fringes, where words live more isolated lives as hermits with limited connections to other words (like *twang* and *ohm*). Are new words more likely to be born in urban centers or in the rural fringes?
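As a rough back-of-the-envelope check on what a 2,000-year half-life implies (an illustrative calculation, not an analysis from the chapter), the expected surviving fraction of an original word list can be computed directly:

```r
# Illustrative only: fraction of an original word list expected to remain
# (i.e., not yet replaced by a noncognate word) after t years,
# assuming a constant half-life of roughly 2,000 years.
half_life <- 2000
surviving <- function(t) 0.5^(t / half_life)

surviving(2000)  # 0.5   -- about half the words replaced
surviving(6000)  # 0.125 -- roughly one word in eight still survives
```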
How can we get data into a network format? This chapter briefly describes the basics and introduces the main kinds of networks we encounter in network science. It also shows how to take data that may not obviously present itself as a network and transform it into a network format.
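As a minimal sketch of that first step (using the igraph package and made-up word pairs, not one of the book's datasets), an ordinary table of pairwise relations can be converted into a network object in a few lines:

```r
library(igraph)

# Hypothetical data: pairs of words judged to be associated
edges <- data.frame(
  from = c("dog", "dog",  "ball", "red"),
  to   = c("cat", "ball", "red",  "apple")
)

# Build an undirected network from the edge list
g <- graph_from_data_frame(edges, directed = FALSE)

degree(g)        # number of connections per word
edge_density(g)  # fraction of possible links that are present
```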
How can groups best coordinate to solve problems? The answer touches on cultural innovation, including the trajectory of science, technology, and art. If everyone acts independently, different people will explore different solutions, but there is no way to leverage good solutions across the community. If everyone acts in concert, early successes can lead the group down dead ends and stifle exploration. The challenge is one of maintaining innovation but also communicating effective solutions once they are found. When solution spaces are smooth – that is, easy – communication is good. But when solution spaces are rugged – that is, hard – the balance should tilt toward exploration. How can we best achieve this? One answer is to place people in social structures that reduce communication, but maintain connectivity. But there are other solutions that might work better. Algorithms, like simulated annealing, are designed to deal with such problems by adjusting collective focus over time, allowing systems to “cool off” slowly as they home in on solutions. Network science allows us to explore the performance of such solutions on smooth and rugged landscapes, and provides numerous avenues for innovation of its own.
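As a loose illustration of the cooling idea (a toy sketch of simulated annealing on a one-dimensional rugged landscape, not the chapter's own simulation):

```r
set.seed(1)

# A toy rugged landscape with many local optima
score <- function(x) sin(5 * x) + 0.3 * cos(17 * x) - (x - 0.6)^2

x <- runif(1)      # current solution in [0, 1]
temp <- 1          # initial "temperature"
for (step in 1:5000) {
  candidate <- min(max(x + rnorm(1, sd = 0.05), 0), 1)
  delta <- score(candidate) - score(x)
  # Always accept improvements; accept worse moves with a probability
  # that shrinks as the system cools
  if (delta > 0 || runif(1) < exp(delta / temp)) x <- candidate
  temp <- temp * 0.999  # slow cooling
}
print(c(solution = x, quality = score(x)))
```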
There are two contrasting views of aging. One sees age as a process of cognitive decline, a natural consequence of biological aging. The other sees aging as a process of lifelong learning: Older adults show conspicuous improvements in vocabulary across the lifespan as well as in many other knowledge-related domains. Of these two views, one is based on an underlying process of decay. The other is based on enrichment. Here we will investigate how understanding the nature of structural changes across the lifespan can help align these views, demonstrating how age-related cognitive decline can be explained as a process of network enrichment caused by lifelong learning.
Compared to people who are rated as less creative, more creative people tend to produce ideas more quickly and with more novelty, and they more actively engage regions of the brain associated with cognitive control. Both inside and outside the laboratory, the evidence is clear: the creative mind is a productive mind. Structural analysis of what more creative people produce has led to two different proposals for how this is achieved. One is based on differences in the underlying knowledge representation – the structure of semantic memory – called the associative theory of creativity. The other is based on more effortful cognitive control – how semantic memory is accessed – called the executive theory of creativity. Evidence supports both, but there are few models integrating these two ideas. Network analysis offers some inroads into how to tackle this problem and invites some creativity of its own.
The Brexit referendum and the US presidential election of Donald Trump were a surprise to many, on both sides of the fence. In this information age, it is useful to ask how so many people could be so wrong about issues of so much importance. If we all knew what everyone else was thinking, there could be no surprise. We are, in fact, wrong about many things when it comes to estimating the beliefs of the majority. Many of our errors arise not because our brains are tricking us, but because our appreciation of structure is underdeveloped. For example, for the majority of people, the places they vacation to are not average destinations, just as the traffic they experience is not average traffic, nor are the classes they sit through of average attendance. This is because, by definition, the most crowded places are attended by the most people. Similar illusions lead us to overestimate how many friends the average person has, confuse us into making backwards inferences about class and gender divisions, and allow politicians to misrepresent their populations. All of these are guaranteed outcomes of certain kinds of structure, which an understanding of networks demystifies.
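One of these structural guarantees, the so-called friendship paradox, can be checked directly; the sketch below uses igraph and a random network as a stand-in for real friendship data (illustrative, not from the chapter):

```r
library(igraph)
set.seed(2)

# A random network standing in for a friendship network
g <- sample_gnp(n = 1000, p = 0.01)
deg <- degree(g)

mean(deg)                  # average number of friends a person has
# Sampling people through their friendships over-weights popular people,
# so a person's friends have more friends than the person does, on average
mean(deg[ends(g, E(g))])
```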
Some people appear to learn more slowly. Could they just be learning different things? Suppose two groups of children are learning words – they have growing vocabularies – but one group acquires the list more slowly than the other. Can we use the structure of the information they learn to gain insight into whether or not they are learning different information? Small worlds are one way of measuring the structure of a community. When quantitatively defined, small worlds have a number of useful properties, including that they compare the structure of a network relative to different versions of itself, thereby providing a kind of ‘control’ network against which to benchmark a measurement. In this chapter, I discuss small worlds and several ways to evaluate them, and then use them to answer a simple question: Are children who learn to talk late just slow versions of early talkers? Or are they learning something different about the world? Along the way, I will enumerate three different approaches to explaining where structure comes from: function, formation, and emulation.
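A common way to quantify this (a minimal sketch with igraph, using a generated network rather than child vocabulary data) compares clustering and path length against a degree-preserving randomized 'control' network:

```r
library(igraph)
set.seed(3)

# A Watts-Strogatz network standing in for a semantic network
g <- sample_smallworld(dim = 1, size = 100, nei = 4, p = 0.05)

# A 'control' network: same degrees, randomly rewired connections
r <- rewire(g, keeping_degseq(niter = 10 * ecount(g)))

C <- transitivity(g); C_rand <- transitivity(r)     # clustering
L <- mean_distance(g); L_rand <- mean_distance(r)   # average path length

# Small-world index: clustering like a lattice, path lengths like a random graph
(C / C_rand) / (L / L_rand)
```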
What are the contexts that give rise to cooperation as opposed to conflict? When should we love thy neighbor, turn the other cheek, or escalate? Game theory is an effort to formalize this problem such that we can ask what decisions we should make when the consequences of those decisions depend on the decisions of others. By this accounting, war is a game, as is negotiation, rock-paper-scissors, and lovemaking. The difference is in the payoffs. Being rational is about making decisions that lead to the best outcomes. The hawks and doves of political foreign policy – who advocate for more or less aggressive military intervention – are rational beings in this world, because the nature of the payoffs demands certain kinds of responses. The pragmatics of high-stakes games underpin quotes like John F. Kennedy’s "We can secure peace only by preparing for war." But is Kennedy’s statement rational by the logic of the game in which it is embedded? Is it rational when games of conflict are played repeatedly in a network of global interactions? By combining game theory with network science, we can make some progress toward understanding these issues.
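To make the payoff logic concrete, here is a toy hawk-dove payoff matrix with illustrative numbers (not taken from the chapter); when the cost of fighting exceeds the value of the resource, neither pure strategy is best against every opponent:

```r
# Hawk-Dove game: V = value of the contested resource, C = cost of a fight
V <- 4; C <- 6
payoff <- matrix(c((V - C) / 2, V,      # Hawk meeting (Hawk, Dove)
                   0,           V / 2), # Dove meeting (Hawk, Dove)
                 nrow = 2, byrow = TRUE,
                 dimnames = list(me = c("Hawk", "Dove"),
                                 opponent = c("Hawk", "Dove")))
payoff
# Against a Hawk, Dove pays better (0 > -1); against a Dove, Hawk pays better
# (4 > 2), so rational play depends on what others are doing
```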
Behavioral Network Science explains how and why structure matters in the behavioral sciences. Exploring open questions in language evolution, child language learning, memory search, age-related cognitive decline, creativity, group problem solving, opinion dynamics, conspiracies, and conflict, readers will learn essential behavioral science theory alongside novel network science applications. This book also contains an introductory guide to network science, demonstrating how to turn data into networks, quantify network structure across scales, and hone one's intuition for how structure arises and evolves. Online R code allows readers to explore the data and reproduce all the visualizations and simulations for themselves, empowering them to make contributions of their own. For data scientists interested in gaining a professional understanding of how the behavioral sciences inform network science, or behavioral scientists interested in learning how to apply network science from the ground up, this book is an essential guide.
The search for the 'furniture of the mind' has acquired added impetus with the rise of new technologies to study the brain and identify its main structures and processes. Philosophers and scientists are increasingly concerned to understand the ways in which psychological functions relate to brain structures. Meanwhile, the taxonomic practices of cognitive scientists are coming under increased scrutiny, as researchers ask which of them identify the real kinds of cognition and which are mere vestiges of folk psychology. Muhammad Ali Khalidi presents a naturalistic account of 'real kinds' to validate some central taxonomic categories in the cognitive domain, including concepts, episodic memory, innateness, domain specificity, and cognitive bias. He argues that cognitive kinds are often individuated relationally, with reference to the environment and etiology of the thinking subject, whereas neural kinds tend to be individuated intrinsically, resulting in crosscutting relationships among cognitive and neural categories.
This chapter discusses the kind, episodic memory, which has recently garnered a great deal of attention from philosophers. In light of current empirical work, it has become increasingly challenging to accept an influential and intuitively plausible philosophical account of memory, namely the “causal theory of memory.” It is unlikely that each episodic memory can be associated with a trace or “engram” that can be shown to be linked by an uninterrupted causal chain to an episode in the thinker’s past. Some philosophers and psychologists have responded by effectively abandoning the category of episodic memory and assimilating memory to imagination or hypothetical thinking. But I argue that there is still room for a distinct cognitive kind, episodic memory: a cognitive capacity whose function is to generate representational states that are connected to past episodes in the thinker’s experience and that bear traces of those episodes, individuated not at the neural level but at the “computational level.”
This chapter is about the category of innateness, which is a feature often associated with a range of cognitive phenomena, including concepts, cognitive capacities, behavioral dispositions, and mental states. Arguing against a number of recent critiques of the notion, this chapter tries to show that innateness can be identified with a cluster of properties that are causally interrelated in various ways and proposes a tentative causal model of the kind. In individuating innateness, it is important to distinguish proximal from distal causation. Some of the causal properties associated with innateness are involved in individuating innate cognitive capacities synchronically, while others are etiological in nature, responsible for making those capacities innate in the first place. This complex causal network is robust enough to warrant considering innateness to be a real kind as used in contemporary cognitive science.
This chapter tackles a psychiatric kind that does not pertain to cognitive science narrowly conceived, though it is strongly rooted in cognition. It concerns Body Dysmorphic Disorder (BDD), a condition that involves persistent and intrusive thoughts about a perceived bodily flaw that is not observable or appears slight to others, leading to repetitive behaviors and tending to result in significant distress or functional impairment. The chapter argues that the disorder has an important cognitive component involving certain deficits in visual processing, in interpreting the mental states of others, and in assessing evidence for and against one’s beliefs. A causal model of BDD is proposed that aims to show how its main features fit together. Based on this causal model, there are strong grounds for considering it a distinct psychiatric kind. This model implies a revision of the standard psychiatric taxonomy based on an analysis of the underlying causes of the disorder as opposed to its superficial symptoms. It also suggests the feasibility of constructing cognitive causal models of other psychiatric disorders.
This chapter sets out the broad metaphysical picture that guides the inquiry. I derive a naturalist notion of kinds from the nineteenth-century discussion of classification and kinds initiated by Whewell, Mill, and Venn, rather than the more recent essentialist view of natural kinds suggested by Kripke and Putnam. I go on to defend a “simple causal theory” of cognitive kinds, which conceives of them as “nodes in causal networks” in the cognitive domain. In addition, I argue against the layer-cake picture of scientific domains and put forward some reasons to resist reductionism when it comes to cognitive categories, given that cognitive and neural categories are individuated on different bases. Finally, I respond to some concerns that the resulting ontological picture is not a realist one, on the grounds that it countenances the existence of cognitive kinds that are mind-dependent and self-reflexive.