This chapter focuses on aspects of the philosophy of science, in particular the twentieth-century views of Karl Popper and Thomas Kuhn. It briefly covers earlier contributions, including those of Francis Bacon and William Whewell, who highlighted the need for, and influence of, subjective factors in science. In discussing Popper, it considers inductive and deductive reasoning and his falsification approach, while the discussion of Kuhn focuses on his view of scientific paradigms, normal science, anomalies and crises, and paradigm shifts and scientific revolutions. It illustrates both Popper’s and Kuhn’s views using neuroscience examples, including chemical synaptic transmission, animal electricity and adult neurogenesis. The conclusion is that there is no formal scientific method, no formula for discovery: scientists use, and need to use, a diversity of approaches.
This chapter considers induction, deduction and abduction as methods of obtaining scientific knowledge. The introductory section again ends by highlighting that there is no single method, and refers to claims that scientific reasoning uses various heuristics or rules of thumb based on the specific approach and the background information we have, and that we should recognise that this can introduce various errors of reasoning: by being aware of the potential for making these errors, we are better able to guard against making them. The bulk of the chapter then looks at specific logical fallacies, using neuroscience examples to illustrate them. These include ad hoc reasoning; begging the question; confusing correlation with causation; confirmation and disconfirmation biases; false dichotomies; false metaphors; the appeal to authority, tradition and emotion; the mereological fallacy; the naturalistic fallacy; and straw man arguments.
Developed specifically for students in the behavioral and brain sciences, this textbook provides a practical overview of human neuroimaging. The fully updated second edition covers all major methods including functional and structural magnetic resonance imaging, positron emission tomography, electroencephalography, magnetoencephalography, multimodal imaging, and brain stimulation methods. Two new chapters have been added covering computational imaging as well as a discussion of the potential and limitations of neuroimaging in research. Experimental design, image processing, and statistical inference are addressed, with chapters for both basic and more advanced data analyses. Key concepts are illustrated through research studies on the relationship between brain and behavior, and review questions are included throughout to test knowledge and aid self-study. Combining wide coverage with detail, this is an essential text for advanced undergraduate and graduate students in psychology, neuroscience, and cognitive science programs taking introductory courses on human neuroimaging.
While most programmes in neuroscience are understandably built around imparting foundational knowledge of cell biology, neurons, networks and physiology, less attention is paid to critical perspectives on methods. This book addresses this gap by covering a broad array of topics including the philosophy of science, challenges of terminology and language, reductionism, and social aspects of science to challenge claims to explanation and understanding in neuroscience. Using examples from dominant areas of neuroscience research alongside novel material from systems that are less often presented, it promotes the need for scientists (and non-scientists) to think critically. Chapters also explore translations between neuroscience and technology, artificial intelligence, education, and criminology. Featuring accessible material alongside further resources for deeper study, this work serves as an essential resource for undergraduate and graduate courses in psychology, neuroscience, and biological sciences, while also supporting researchers in exploring philosophical and methodological challenges in contemporary research.
Computational neuroimaging is defined broadly as the use of neuroimaging to investigate the localization and representation of parameters in formal mathematical models. We focus on models of behavior and neural processing that have been adopted widely in behavioral sciences and cognitive neuroscience, including reinforcement learning, predictive coding, decision theory (drift diffusion and evidence accumulation), population receptive field models, and encoding models (including artificial neural networks). The aim is not to explain all the technical details of the various models, but to illustrate and discuss the added value of combining such models with neuroimaging.
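As a minimal illustration of one such model, the sketch below simulates a single trial of a basic drift-diffusion process, in which noisy evidence accumulates toward one of two decision bounds. All parameter values and function names are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def simulate_ddm(drift=0.3, threshold=1.0, noise_sd=1.0,
                 dt=0.001, max_t=5.0, seed=0):
    """Simulate one trial of a basic drift-diffusion process.

    Evidence starts at 0 and accumulates toward +threshold (choice "A")
    or -threshold (choice "B"). Returns (choice, reaction_time).
    Parameter values here are illustrative, not from the chapter.
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "A" if x >= threshold else "B"
    return choice, t
```

Running many trials with a strong positive drift yields mostly "A" responses, with reaction times that shorten as drift increases; fitting such parameters to behavior, and then localizing them in imaging data, is the kind of model-based analysis the chapter discusses.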
Chapter 11 introduces basic EEG and MEG data analysis methods. It begins with an explanation of the noise components in EEG and MEG signals and discusses various methods of noise reduction, including filtering and independent component analysis (ICA). Spectral analysis, event-related response (ERR) analysis, and steady-state evoked response (ssER) analysis are then introduced. Each method is explained in plain language, followed by more detailed explanations to meet the different needs of beginners and advanced readers. Relevant statistical methods and data presentation formats are also introduced, using various data analysis platforms.
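For concreteness, a minimal spectral-analysis sketch (illustrative, not code from the chapter) estimates the power spectrum of a simulated EEG-like signal with NumPy's FFT and locates its dominant frequency; the sampling rate and signal parameters are assumed values.

```python
import numpy as np

fs = 250.0                        # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)      # 10 s of data
rng = np.random.default_rng(42)
# Simulated signal: a 10 Hz "alpha" rhythm buried in white noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(signal)) ** 2       # power spectrum
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)  # matching frequency axis
peak_hz = freqs[np.argmax(power[1:]) + 1]       # dominant frequency, skipping DC
```

Here `peak_hz` recovers the simulated 10 Hz rhythm. A real pipeline would precede this with the noise-reduction steps the chapter covers (filtering, ICA) and typically use an averaged estimator such as Welch's method.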
We present the main methods that are used to modulate brain activity directly. These methods are often used in combination with, or as a follow-up to, neuroimaging experiments, as a means to test causal hypotheses. We include microstimulation, deep brain stimulation, focused ultrasound stimulation (FUS), and transcranial magnetic stimulation (TMS) and its sub-types, such as single-pulse, double-pulse and repetitive TMS. We end with transcranial current stimulation (TCS), also known as transcranial electric stimulation (TES), which comes in several variants such as transcranial direct current stimulation (TDCS) and transcranial alternating current stimulation (TACS).
Neurons generate electromagnetic fields as they communicate with each other. Chapter 9 introduces the electromagnetic field as a key concept overarching different electrophysiological brain activities. The concept corrects common misconceptions (e.g. "EEG is the sum of action potentials") and provides a common basis for data analysis of field signals. Basic properties of the field signal (amplitude, phase and frequency) are explained in plain language.
In addition, two major noninvasive techniques for measuring field activity, electroencephalography (EEG) and magnetoencephalography (MEG), are introduced. The advantages and disadvantages of the methods are discussed with a brief history of the techniques.
This chapter explains in an accessible way the underlying principles of magnetic resonance imaging, which underlie all the structural imaging methods described in the next chapter.
This chapter sets the stage. We provide several examples of how neuroimaging findings have been covered in popular media, and the criticism that such coverage has elicited. This book intends to provide the knowledge and facts needed to understand the potential, proper use, and misuse of neuroimaging methods. We end with a brief overview of the different types of neuroimaging methods, and how they are organized with respect to spatial resolution, temporal resolution, and level of invasiveness.
We provide an introduction to the main pre-processing steps involved in analyzing imaging data: slice-timing correction, motion correction, coregistration, normalization, and spatial smoothing.
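As a sketch of the last of these steps: spatial smoothing is usually specified by a Gaussian kernel's full width at half maximum (FWHM). The example below (with assumed kernel and voxel sizes) converts FWHM to a standard deviation via sigma = FWHM / (2 * sqrt(2 * ln 2)) and applies a separable Gaussian filter with NumPy only.

```python
import numpy as np

def gaussian_kernel_1d(fwhm_mm, voxel_mm):
    """Normalized 1-D Gaussian kernel for a given FWHM and voxel size."""
    sigma = (fwhm_mm / voxel_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    radius = int(np.ceil(3 * sigma))          # truncate at ~3 sigma
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()                        # sums to 1, preserving intensity

def smooth_volume(vol, fwhm_mm=6.0, voxel_mm=3.0):
    """Separable smoothing: convolve the same 1-D kernel along each axis.

    fwhm_mm and voxel_mm are assumed example values (isotropic voxels).
    """
    k = gaussian_kernel_1d(fwhm_mm, voxel_mm)
    out = vol.astype(float)
    for axis in range(vol.ndim):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out
```

Because the kernel is normalized, smoothing a single-voxel impulse spreads its intensity over neighbors while the total signal is preserved (away from the volume edges).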
We discuss three more advanced statistical analysis approaches. First, the analysis of functional connectivity, including topics like directional and effective functional connectivity, modulations of connectivity by task (psychophysiological interactions), and resting-state fMRI. Second, we cover multivariate analyses and multi-voxel pattern analyses, and we discuss their potential and limitations for understanding information processing in the brain. Third, we introduce the use of functional MRI adaptation as a means to measure neural selectivity.
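A minimal functional-connectivity sketch on simulated data (illustrative only): with regional time series stacked as rows, the simplest connectivity matrix is their pairwise Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 4, 200

# Simulated regional time series; regions 0 and 1 share a common
# "network" signal, regions 2 and 3 are independent noise.
shared = rng.standard_normal(n_timepoints)
ts = rng.standard_normal((n_regions, n_timepoints))
ts[0] += 2 * shared
ts[1] += 2 * shared

fc = np.corrcoef(ts)   # 4 x 4 functional connectivity matrix
```

Regions sharing the common signal show a high correlation, while unrelated regions hover near zero. Directional and effective connectivity, and task modulations such as psychophysiological interactions, require more elaborate models than this plain correlation.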