The foundations of the General Theory are described at a conceptual level, and the basic terminology of memory research is presented. The main focus is on the proposed memory structures and the control processes that guide the flow of information through them. The assumed memory structures are sensory registers, the short-term store, and the long-term store. Control processes are models of the flow of information through these structures that supports the performance of tasks leading to the achievement of a subjective goal. Empirical support for the fundamental assumptions of the General Theory is provided.
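To make the assumed flow concrete, here is a minimal sketch in Python of the structures named above: items pass through a sensory register into a limited-capacity short-term store, and rehearsal probabilistically copies them into the long-term store. This is an illustrative toy, not the General Theory itself; the buffer size, transfer probability, and replacement rule are all invented parameters.

```python
import random

# Toy sketch of the modal-model flow (illustrative, not the General Theory):
# sensory register -> short-term store (STS) -> long-term store (LTS).
STS_CAPACITY = 4      # assumed size of the rehearsal buffer
TRANSFER_PROB = 0.3   # assumed per-rehearsal chance of copying an item to LTS

def study(items, seed=0):
    rng = random.Random(seed)
    sts, lts = [], set()
    for item in items:                 # each item arrives via the sensory register
        if len(sts) == STS_CAPACITY:   # buffer full: a control process drops one item
            sts.pop(rng.randrange(STS_CAPACITY))
        sts.append(item)
        for held in sts:               # rehearsal transfers information to LTS
            if rng.random() < TRANSFER_PROB:
                lts.add(held)
    return sts, lts

sts, lts = study([f"word{i}" for i in range(12)])
print("still in the short-term store:", sts)
print("transferred to the long-term store:", sorted(lts))
```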
Forgetting is familiar to everyone and is among the most extensively investigated phenomena in psychological science. It is therefore quite surprising that forgetting is widely misunderstood by laypeople and even by researchers. Evidence for the permanence of long-term memories is presented, and the distinction between the accessibility and the availability of memories is discussed. The search of associative memory (SAM) and retrieving effectively from memory (REM) models of forgetting are described and extended into a proposal for everyday forgetting.
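The accessibility/availability distinction can be illustrated with the sampling rule at the heart of SAM: a trace may be available (stored in memory) yet inaccessible because the current retrieval cues sample it with very low probability. The sketch below is a simplified rendering of that rule, in which the probability of sampling a trace is its cue-weighted strength product divided by the sum of those products over all traces; the strengths and weights are invented.

```python
from math import prod

# Simplified SAM sampling rule (strengths and weights below are invented):
# P(sample trace i | cues) = prod_j S(i, cue_j)**w_j / sum_k prod_j S(k, cue_j)**w_j
def sampling_probs(strengths, weights):
    # strengths[i][j]: association of trace i to cue j; weights[j]: cue weight
    scores = [prod(s ** w for s, w in zip(trace, weights)) for trace in strengths]
    total = sum(scores)
    return [score / total for score in scores]

strengths = [
    [3.0, 2.0],   # trace 0: strongly associated to both retrieval cues
    [0.2, 0.1],   # trace 1: stored (available) but only weakly cued
]
print(sampling_probs(strengths, weights=[1.0, 1.0]))
```

In this toy example the weakly cued trace is almost never sampled, so it behaves as forgotten even though it remains available in storage.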
In the 1980s there was a surge of interest in modeling human memory. One of the most successful lines of research investigated recognition memory, and several important models were developed. Testing of these models led to a consensus that all extant models had problems, which prompted the development of more complex models assuming that differentiation is an important process in human memory. The findings that challenged the then-extant models are presented, and the retrieving effectively from memory (REM) framework describing differentiation is discussed in detail.
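To give a flavor of the calculation, the sketch below implements a simplified REM old/new decision: the odds that a test probe is old are the average, across stored traces, of feature-by-feature likelihood ratios. All parameter values are illustrative. Differentiation falls out of this arithmetic: the more completely and accurately an item is stored, the more poorly its trace matches other test items, so strengthening storage can reduce false alarms.

```python
import random

# Simplified REM sketch (illustrative parameter values):
G, C, U = 0.4, 0.7, 0.5   # feature base rate, copy accuracy, storage probability

def geom(g, rng):
    # feature values are geometrically distributed with parameter g
    v = 1
    while rng.random() > g:
        v += 1
    return v

def store(item, rng):
    trace = []
    for v in item:
        if rng.random() < U:                     # was the feature stored at all?
            trace.append(v if rng.random() < C   # stored correctly...
                         else geom(G, rng))      # ...or as a random error
        else:
            trace.append(0)                      # 0 = nothing stored
    return trace

def likelihood_ratio(probe, trace):
    lr = 1.0
    for p, t in zip(probe, trace):
        if t == 0:
            continue                             # unstored features are uninformative
        if t == p:
            base = G * (1 - G) ** (p - 1)        # chance of matching by accident
            lr *= (C + (1 - C) * base) / base
        else:
            lr *= 1 - C                          # mismatching stored feature
    return lr

rng = random.Random(1)
items = [[geom(G, rng) for _ in range(20)] for _ in range(10)]
traces = [store(it, rng) for it in items]
odds_old = sum(likelihood_ratio(items[0], t) for t in traces) / len(traces)
foil = [geom(G, rng) for _ in range(20)]
odds_new = sum(likelihood_ratio(foil, t) for t in traces) / len(traces)
print(f"odds for a studied item: {odds_old:.2f} (call 'old' if > 1)")
print(f"odds for an unstudied foil: {odds_new:.2f}")
```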
With the development of the retrieving effectively from memory (REM) modeling framework, research on the interaction of experience and knowledge has taken off. Although this work is in its infancy, several models of lexical access have been developed, and these models have more recently been extended to describe how experience leads to knowledge. Both the strengths and the limitations of the current models are described.
Investigations of the contributions of controlled versus automatic information processing are presented using two superficially different procedures as examples: visual search and episodic recognition memory. Whereas most frameworks allow that a task may be performed either in a controlled fashion or automatically, the General Theory assumes that both types of information processing may contribute to its performance. The empirical question is thus the extent to which each type of information processing contributes to task performance, and under what conditions.
Sequential effects are among the most robust phenomena observed in psychological experiments: judgments that could be made independently are, when made in a sequence, influenced by prior judgments, even when that influence is suboptimal. Over the years, models of the sequential effects observed in categorization, recognition, absolute identification, and short-term priming experiments were developed by different researchers at different times, each apparently unaware of the others. Nevertheless, all models of sequential effects developed within the framework of the General Theory converged on the same assumption: information used to make one judgment carries over to influence the judgment made on a subsequent trial. These models and the relevant data are presented.
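That shared assumption can be written compactly: the evidence entering the judgment on trial t is a mixture of the current trial's evidence and residual evidence from trial t-1. The sketch below is a generic illustration of this carryover idea, not a rendering of any one published model; the mixing weight and evidence values are invented.

```python
# Generic carryover sketch (illustrative; not a specific published model):
# judged evidence on trial t blends current evidence with the previous trial's.
CARRYOVER = 0.25   # assumed weight on the previous trial's evidence

def judged_evidence(evidence_sequence, carryover=CARRYOVER):
    judged, previous = [], 0.0
    for e in evidence_sequence:
        judged.append((1 - carryover) * e + carryover * previous)
        previous = e   # what lingers is the prior trial's own evidence
    return judged

# identical stimuli (trials 2 and 4) yield different judged evidence
# depending on what preceded them
print(judged_evidence([1.0, 1.0, -1.0, 1.0]))
```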
This innovative text introduces neuroscience students to the visual language of scientific publications, teaching scientific literacy, research methods, and graphical literacy in an engaging way. Employing a 'pictures first' pedagogical approach, it walks the reader step by step through the interpretation of neuroscience figures and explains the principles of experimental design. The major research techniques – from neuroimaging to behavioral methods to genetics and comparative approaches – are explored, illuminating how they are represented graphically in journal articles and their strengths and limitations as research tools. More than 130 example figures provide experimental paradigms for the more difficult-to-visualize methods and depict actual results taken from the recently published scientific literature. Data from several study designs are discussed, including clinical case studies, meta-analyses, and experiments ranging from behavior to molecular genetics. Concrete examples of experiments are provided along with each method, helping students design their own research questions.
This chapter describes techniques related to genetics, at both a molecular and an organismal level. The introduction explains the principles of base pairing and single nucleotide polymorphisms. The molecular techniques and figures explained include transgene construct schematics, Manhattan plots generated from genome-wide association studies, and different analysis methods for studying the microbiome. Organismal-level techniques described include family trees and dendrograms.
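As a concrete example of one figure type named above, a Manhattan plot places each tested variant on the x-axis ordered by chromosome and position and plots the negative log10 of its association p-value on the y-axis, so stronger associations rise higher; points are conventionally colored in alternating blocks by chromosome, with a dashed line marking the genome-wide significance threshold (p = 5e-8). The sketch below draws one from simulated null p-values; the data are invented.

```python
import math
import random
import matplotlib.pyplot as plt

# Sketch of a Manhattan plot from simulated GWAS p-values (data are invented).
rng = random.Random(0)
fig, ax = plt.subplots()
offset, centers = 0, []
for chrom in range(1, 23):                       # autosomes 1-22
    n = 400                                      # variants per chromosome (made up)
    xs = [offset + i for i in range(n)]
    pvals = [rng.random() for _ in range(n)]     # null p-values are uniform
    ax.scatter(xs, [-math.log10(p) for p in pvals], s=2,
               color="steelblue" if chrom % 2 else "gray")  # alternate colors
    centers.append(offset + n / 2)
    offset += n
ax.axhline(-math.log10(5e-8), linestyle="--", color="red")  # genome-wide threshold
ax.set_xticks(centers)
ax.set_xticklabels([str(c) for c in range(1, 23)])
ax.set_xlabel("chromosome")
ax.set_ylabel("-log10(p)")
plt.show()
```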
This chapter describes the basics of scientific figures. It provides tips for identifying different types of figures, such as experimental protocol figures, data figures, and summary figures. There is a description of ways to compare groups and of different types of variables. A short discussion of statistics is included, covering elements such as central tendency, dispersion, uncertainty, outliers, distributions, and statistical tests used to assess differences. Following that is a short overview of a few of the more common graph types, such as bar graphs, boxplots, violin plots, and raincloud plots, describing the advantages of each. The chapter ends with an “Understanding Graphs at a Glance” section, which gives the reader a step-by-step outline for interpreting many of the graphs commonly used in neuroscience research, applicable regardless of the methodology used to collect the data.
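For readers who want to connect those statistical elements to the graphs, the short sketch below computes central tendency, dispersion, and uncertainty for a small invented sample and draws two of the plot types mentioned: a bar graph of the mean with its standard error, and a boxplot of the full distribution.

```python
import statistics
import matplotlib.pyplot as plt

# Central tendency, dispersion, and uncertainty for a made-up sample.
sample = [4.1, 5.3, 4.8, 6.0, 5.1, 4.4, 7.2, 5.0]
mean = statistics.mean(sample)           # central tendency
sd = statistics.stdev(sample)            # dispersion
sem = sd / len(sample) ** 0.5            # uncertainty about the mean
print(f"mean={mean:.2f}, sd={sd:.2f}, sem={sem:.2f}")

# Two of the graph types discussed: a bar of the mean +/- SEM, and a boxplot
# showing the median, quartiles, and potential outliers.
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.bar([0], [mean], yerr=[sem])
ax1.set_ylabel("value")
ax2.boxplot(sample)
plt.show()
```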
This chapter describes methods for analyzing neuroscience questions at the molecular level. The introduction defines the central dogma of molecular biology and the four levels of protein structure. The chapter then describes techniques and figures including in situ hybridization; RNA sequencing; immunochemistry and some of its applications, such as Western blotting and affinity capture; ribbon diagrams; a variety of genetically encoded fluorescent biosensors; and receptor binding assays.
This chapter highlights some of the tools used for imaging features of the nervous system. The introduction defines the concepts of temporal and spatial resolution, the anatomical language used to describe structures in relation to one another, and the planes of imaging, all essential background for understanding imaging figures. The chapter then describes both structural and functional imaging techniques and the figures that may accompany them, including dissection; CT scans; PET scans; various applications of MRI, including arterial spin labeling, functional MRI, and diffusion tensor imaging for tract tracing; SPECT scans; and electroencephalography, including a description of event-related potentials.
This chapter discusses methods used to study the nervous system at the level of cells. The introduction defines and describes the microanatomy of neurons and populations of glia and gives an overview of organelles. Next is a discussion of microscopy techniques and images, including light microscopy (bright-field and fluorescence) and electron microscopy. Other techniques that rely on microscopy are then described, including unbiased stereology, fluorescence recovery after photobleaching, and flow cytometry. The chapter concludes with a description of a variety of stains, dyes, and anterograde and retrograde tracers, as well as the interpretation of Sholl analysis figures and dendritic spine quantification.
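To make the Sholl analysis figure concrete: the method counts how many times a neuron's dendrites cross concentric circles of increasing radius centered on the soma, producing a curve of intersection counts against distance. The sketch below implements that counting step for dendrites idealized as straight line segments; the coordinates are invented, and, as a simplification, a crossing is detected only when a segment's endpoints fall on opposite sides of a circle.

```python
import math

# Sketch of the counting step in a Sholl analysis (coordinates invented).
SOMA = (0.0, 0.0)
SEGMENTS = [((0, 0), (30, 5)), ((30, 5), (55, 20)), ((30, 5), (45, -25)),
            ((0, 0), (-20, 15)), ((-20, 15), (-60, 30))]

def dist(p):
    # distance of a point from the soma
    return math.hypot(p[0] - SOMA[0], p[1] - SOMA[1])

def sholl_counts(segments, radii):
    counts = []
    for r in radii:
        # a segment crosses the circle when its endpoints straddle radius r
        crossings = sum(1 for a, b in segments if (dist(a) < r) != (dist(b) < r))
        counts.append(crossings)
    return counts

radii = [10, 20, 30, 40, 50, 60]
for r, n in zip(radii, sholl_counts(SEGMENTS, radii)):
    print(f"r={r} um: {n} intersection(s)")
```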
This chapter describes the techniques used in electrophysiology and electrochemistry and explains the figures derived from these methods. The introduction describes how neurons can be modeled as electrical circuits and explains different preparations of electrophysiological samples, the common recording configurations, and the equipment used with these techniques. The techniques are divided into a few major categories: passive neuronal properties, action potential analysis, synaptic events (including paired-pulse ratios and long-term potentiation), current-voltage plots, and electrochemistry techniques such as fast-scan cyclic voltammetry and amperometry.
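The circuit analogy mentioned in the introduction has a standard quantitative form: a passive membrane behaves like a resistor and capacitor in parallel, so a step of injected current charges the membrane voltage toward V_rest + IR with time constant tau = RC. The sketch below integrates that equation numerically; the parameter values are illustrative round numbers.

```python
# Passive membrane as an RC circuit (illustrative round-number parameters):
# C * dV/dt = -(V - V_rest)/R + I, so V relaxes toward V_rest + I*R with tau = R*C.
R = 100e6       # membrane resistance: 100 megaohms
C = 100e-12     # membrane capacitance: 100 pF, giving tau = 10 ms
V_REST = -70e-3 # resting potential: -70 mV
I = 0.1e-9      # injected current step: 0.1 nA, a 10 mV steady-state deflection

dt, v, trace = 1e-4, V_REST, []
for step in range(1000):                  # 100 ms of simulated time
    dvdt = (-(v - V_REST) / R + I) / C
    v += dt * dvdt                        # forward-Euler integration
    trace.append(v)

tau = R * C
print(f"tau = {tau * 1e3:.0f} ms, steady-state V = {(V_REST + I * R) * 1e3:.0f} mV")
print(f"V at t = tau: {trace[int(tau / dt)] * 1e3:.1f} mV (~63% of the 10 mV step)")
```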