Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
Meteorology increasingly relies on visualizations, but the particular contributions that multimedia's visual components make to learning are relatively unexplored. This chapter examines the basis for comprehending weather maps and how learners extract information from static and animated depictions. Deficiencies in meteorological knowledge hamper learners' processing: the information extracted is superficial and fragmentary, with key information in the animation neglected despite being explicitly depicted and flexibly available via user control. This inadequate processing stems from the perceptual characteristics of the display. For such specialized visualizations to be effective in multimedia learning materials, they may need extensive support. Implications for multimedia learning theory and instructional design practice are discussed.
What Is Multimedia Learning of Meteorology?
Multimedia approaches to the learning of meteorology are well established within the field and widely accepted internationally. This acceptance is reflected in the large-scale instructional initiatives in both the United States (the COMET Program; http://www.comet.ucar.edu/) and Europe (the EUMETCAL Program; http://eumetcal.meteo.fr/) that for some years have provided computer-based professional education and training in this domain. The multimedia materials produced under these programs combine a diverse range of visual and verbal components (including written text, narrations, static pictures, animations, and video). In recent years, technological advances in delivery systems such as webcasting have led to an increasing emphasis on dynamic and interactive forms of presentation in these materials.
The split-attention principle states that when designing instruction, including multimedia instruction, it is important to avoid formats that require learners to split their attention between, and mentally integrate, multiple sources of information. Instead, materials should be formatted so that disparate sources of information are physically and temporally integrated, thus obviating the need for learners to engage in mental integration. By eliminating the need to mentally integrate multiple sources of information, extraneous working memory load is reduced, freeing resources for learning. This chapter provides the theoretical rationale, based on cognitive load theory, for the split-attention principle, describes the major experiments that establish the validity of the principle, and indicates the instructional design implications when dealing with multimedia materials.
Definition of Split-Attention
Instructional split-attention occurs when learners are required to split their attention between and mentally integrate several sources of physically or temporally disparate information, where each source of information is essential for understanding the material. Cognitive load is increased by the need to mentally integrate the multiple sources of information. This increase in extraneous cognitive load (see chapter 2) is likely to have a negative impact on learning compared to conditions where the information has been restructured to eliminate the need to split attention. Restructuring occurs by physically or temporally integrating disparate sources of information to eliminate the need for mental integration. The split-attention effect occurs when learners studying integrated information outperform learners studying the same information presented in split-attention format. The split-attention principle flows from the split-attention effect.
This chapter proposes the use of a “situative” theory to complement the cognitive theory of multimedia learning (CTML) of chemistry. The chapter applies situative theory to examine the practices of chemists and to derive implications for the use of various kinds of representations in chemistry education. The two theories have implications for different but complementary educational goals – cognitive theory focusing on the learning of scientific concepts and situative theory focusing on learning science as an investigative process. We go on to present and contrast several examples of multimedia in chemistry that address each goal. We critically review the current state of research on multimedia in chemistry and derive implications for theory development, instructional design and classroom practice, and future research in the area.
What Is the Multimedia Learning of Chemistry?
Multimedia to Support Cognition
Richard Mayer (chapter 3; 2001, 2002, 2003) and others (Schnotz, chapter 4; Sweller, chapter 2) describe an information-processing, cognitive theory of learning. There are three tenets at the base of this theory: dual-channel input, limited memory capacity, and active processing. Mayer draws on this theory to develop a series of design principles for multimedia presentations that use both auditory–verbal and visual–pictorial channels; address limited cognitive capacity for storing and processing information from these channels; and support students' active selection, organization, and integration of information from both auditory and visual inputs.
When a concise narrated animation containing complicated material is presented at a fast rate, the result can be a form of cognitive overload called essential overload. Essential overload occurs when the amount of essential cognitive processing (similar to intrinsic cognitive load) required to understand the multimedia instructional message exceeds the learner's cognitive capacity. Three multimedia design methods intended to minimize essential overload are the segmenting, pretraining, and modality principles. The segmenting principle is that people learn more deeply when a multimedia message is presented in learner-paced segments rather than as a continuous unit. This principle was supported in three out of three experimental tests, yielding a median effect size of 0.98. The pretraining principle is that people learn more deeply from a multimedia message when they know the names and characteristics of the main concepts. This principle was supported in seven out of seven experimental tests, yielding a median effect size of 0.92. The modality principle is that people learn more deeply from a multimedia message when the words are spoken rather than printed. This principle was supported in 21 out of 21 experimental tests, yielding a median effect size of 0.97.
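For readers unfamiliar with the metric, these medians are standardized mean differences. Assuming they are computed as Cohen's d (a common convention in this literature, though the exact formula used for these comparisons is not stated here), each value expresses the advantage of the treatment group over the comparison group in pooled standard deviation units:

\[
d \;=\; \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}},
\qquad
SD_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,SD_1^{2} + (n_2 - 1)\,SD_2^{2}}{n_1 + n_2 - 2}}
\]

By conventional benchmarks (Cohen, 1988), d of about 0.2 is a small effect, 0.5 a medium effect, and 0.8 a large effect, so the medians reported above for the segmenting, pretraining, and modality principles would all fall in the large range.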
The capacity limitations of working memory are a major impediment when students are required to learn new material. Furthermore, those limitations are relatively inflexible. Nevertheless, in this chapter we explore one technique that can effectively expand working memory capacity. Under certain, well-defined conditions, presenting some information in visual mode and other information in auditory mode can expand effective working memory capacity and so reduce the effects of an excessive cognitive load. This effect is called the modality effect or modality principle. It is an instructional principle that can substantially increase learning. This chapter discusses the theory and data that underpin the principle and the instructional implications that flow from the principle.
Introduction
There is evidence to indicate that the manner in which information is presented will affect how well it is learned and remembered (e.g., Mayer, Bove, Bryman, Mars, & Tapangco, 1996). This chapter deals with evidence documenting the importance of presentation modes, specifically the modality effect, which occurs when information presented in a mixed mode (partly visual and partly auditory) is more effective than the same information presented in a single mode (either visual or auditory alone). The instructional version of the modality effect derives from the split-attention effect (see chapter 8), a phenomenon explicable by cognitive load theory (see chapter 2). It occurs when, among multiple sources of information that must be mentally integrated before they can be understood, written (and therefore visual) information is instead presented in spoken (and therefore auditory) form.
The redundancy principle suggests that redundant material interferes with rather than facilitates learning. Redundancy occurs when the same information is presented in multiple forms or is unnecessarily elaborated. In this chapter, the long, but until recently unknown, history of the principle is traced. In addition, an explanation of the principle using cognitive load theory is provided. The theory suggests that coordinating redundant information with essential information increases working memory load, which interferes with the transfer of information to long-term memory. Eliminating redundant information eliminates the requirement to coordinate multiple sources of information. Accordingly, instructional designs that eliminate redundant material can be superior to those that include redundancy.
Introduction
The history of the redundancy effect or principle is a history of academic amnesia. The effect has been discovered, forgotten, and rediscovered many times over many decades. This unusual history probably has two related causes: first, the effect is seen as counterintuitive by many researchers and practitioners, and second, until recently, there has not been a clear theoretical explanation to place it into context. As a consequence of these two factors, demonstrations of the effect have tended to be treated as isolated peculiarities unconnected to any mainstream work. Memories of each demonstration have faded with the passage of time until the next demonstration has appeared. Worse, each demonstration has tended to be unconnected to the previous one. Hopefully, current explanations of the effect can alter this lamentable state of affairs.
Social cues may prime social responses in learners that lead to deeper cognitive processing during learning and hence better test performance. The personalization principle is that people learn more deeply when the words in a multimedia presentation are in conversational style rather than formal style. This principle was supported in 10 out of 10 experimental tests, yielding a median effect size of 1.3. The voice principle is that people learn more deeply when the words in a multimedia message are spoken in a standard-accented human voice rather than in a machine voice or foreign-accented human voice. This principle was supported in four out of four experimental comparisons, with a median effect size of 0.8. The image principle is that people do not necessarily learn more deeply from a multimedia presentation when the speaker's image is on the screen rather than not on the screen. This principle was based on nine experimental tests with mixed results, yielding a median effect size of 0.2.
What Are the Personalization, Voice, and Image Principles?
Definitions
The goal of this chapter is to examine the research evidence concerning three principles for multimedia design that are based on social cues – personalization, voice, and image principles. The personalization principle is that people learn more deeply when the words in a multimedia presentation are in conversational style rather than formal style.
The multimedia principle states that people learn better from words and pictures than from words alone. It is supported by empirically derived theory suggesting that words and images evoke different conceptual processes and that perception and learning are active, constructive processes. It is further supported by research studies that have found superior retention and transfer of learning from words augmented by pictures compared to words presented alone and superior transfer when narration is accompanied by animation compared to narration or animation presented alone. Research has also found that the effectiveness of combining imagery with text varies with the content to be learned, the conditions under which performance is measured, and individual differences in spatial ability, prior knowledge, and general learning ability. Cognitive theory derived from these findings posits interactions between three stages of memory – sensory, working, and long term – that are connected by cooperative, additive channels used to process information arriving from different sensory modalities.
The Multimedia Principle
It is commonly assumed that adding pictures to words, rather than presenting text alone, makes it easier for people to understand and learn. The proverb that a picture is worth a thousand words attests to the popularity and acceptance of this assumption. The assumption leads to what may be called the multimedia principle. This principle, as stated by Mayer (2001), is that people learn better from words and pictures than from words alone, or, more specifically, that people learn more or more deeply when appropriate pictures are added to text (Mayer, in press).
Based on sociocultural and social cognitive theory, computer support for collaborative learning (CSCL) has emerged as a new research and development subdiscipline of computer-mediated communication. The emphasis of CSCL is on supporting collaborative learning activities in online multimedia environments. In this chapter, we review research on the nature of the technology used, how the learning groups are composed (e.g., group size, learner characteristics), the learning outcome engaged by the task, the role of the tutor, the effects of community-building activities, the nature of the learning or communication assessment, and the effects of scaffolds or discussion constraints on learning. Based on this research, we make a variety of recommendations for the design and implementation of learning environments.
Introduction to the Collaboration Principle
In the past decade, the study of learning has been influenced increasingly by constructivism and social theories. Not only have the epistemological and ontological assumptions about the nature of learning changed as a result of constructivist influences, but the nature of instructional and learning activities has changed dramatically. At the risk of oversimplification, the most obvious effect of this influence has been a shift from emphasis on instructional communication systems to an emphasis on practice-based, collaborative learning systems. The goal of instructional systems, informed by objectivist assumptions, was to effectively design messages to support the efficient transmission of knowledge about the world.
We show that age-related cognitive changes necessitate specific considerations in the design of multimedia learning environments. These considerations mainly relate to the cognitive aging principle, which states that limited working memory may be effectively expanded by using more than one sensory modality, and that some instructional materials with dual-mode presentation may be more efficient than equivalent single-modality formats, especially for older adults. The principle is based on the modality effect and the multimedia effect, which have been researched extensively in the context of Sweller's (1999) cognitive load theory (CLT) and Mayer's (2001) cognitive theory of multimedia learning (CTML). Research on cognitive aging in relation to multimedia processing is reviewed to explore the current understanding of age-related design principles for multimedia learning environments. The potential implications of age-related cognitive changes for the design of multimedia learning environments are highlighted and complemented with important future directions in multimedia learning. The role of CLT and CTML as versatile frameworks for the design of multimedia learning environments for the elderly is discussed.
The Cognitive Aging Principle in the Design of Multimedia Learning
Demographic and technological developments will lead to a growing proportion of independent, active, and eager-to-learn elderly adults who, in their everyday lives, are increasingly confronted with multimedia applications such as learning environments. Generally, these learning environments consist of many relevant and irrelevant information elements, which are presented together at a fast pace and through different sensory modalities.
Principle: A basic generalization that is accepted as true and that can be used as a basis for reasoning or conduct.
OneLook.com Dictionary
Abstract
This chapter describes five commonly held principles about multimedia learning that are not supported by research and suggests alternative generalizations that are more firmly based on existing studies. The questionable beliefs include the expectations that multimedia instruction: (1) yields more learning than live instruction or older media; (2) is more motivating than other instructional delivery options; (3) provides animated pedagogical agents that aid learning; (4) accommodates different learning styles and so maximizes learning for more students; and (5) facilitates student-managed constructivist and discovery approaches that are beneficial to learning.
Introduction
Multimedia instruction is one of the current examples of a new area of instructional research and practice that has generated a considerable amount of excitement. As with other new areas, its early advocates began with a set of assumptions about the learning and access problems it would solve and the opportunities it would afford (see, e.g., a report by the American Society for Training and Development, 2001). The goal of this chapter is to examine the early expectations about multimedia benefits that seem so intuitively correct that advocates may not have carefully examined the research evidence for them. If these implicit assumptions are incorrect, we may unintentionally be using them as the basis for designing multimedia instruction that does not support learning or enhance motivation.
Hypermedia proponents suggest that its ability to make information available in a multitude of formats, provide individual control, engage the learner, and cater to various learning styles and needs makes it the harbinger of a new learning revolution. However, despite nearly two decades of research on hypermedia in education, researchers have not yet solved some of the basic issues raised by this technology. In this chapter, we review empirical studies performed since Dillon and Gabbard's (1998) landmark review in an attempt to analyze and draw conclusions from this diverse and extensive literature.
Introduction
Since Vannevar Bush's ground-breaking article As We May Think (Bush, 1945), the idea of using technology to link the world's information resources in new ways has been heralded by some as a revolutionary opportunity to design new instructional media. The term hypermedia is commonly used to refer to this type of information resource and is based on the term hypertext, coined by Ted Nelson around 1965 to refer to “nonsequential” or “nonlinear” text in which authors and readers were free to explore and to link information in ways that made personal sense to them (Nelson, 1965). In general usage, the terms are often used interchangeably, but to be strictly accurate, hypermedia consists of more than linked texts; it includes other forms of media as well, such as images, video, and sound.
This chapter reviews research on individual differences in spatial cognition from a somewhat historical perspective. It commences with a review of the factor analysis literature, which dominated early research on spatial abilities. Then, the chapter considers research on the analysis of spatial abilities from the perspective of cognitive psychology. Individual differences in large-scale or environmental spatial abilities, such as wayfinding and navigation, are examined. Finally, it considers some of the functions of spatial ability in occupational and academic performance. The research reviewed in this chapter provides strong evidence that spatial ability is differentiated from general intelligence. It shows that spatial ability is not a single, undifferentiated construct, but is composed of several separate abilities, such as spatial visualization, flexibility of closure, spatial memory, and perceptual speed. Recent research has also begun to analyze the complex tasks involved in such occupations in terms of their demands on spatial skills.
This chapter discusses sex differences that are found in a variety of tests of visuospatial abilities, ranging from standardized paper-and-pencil or computerized tasks to tests of way-finding ability and geographical knowledge. Visuospatial information processing involves the interplay of multiple cognitive processes, including visual and spatial sensation and perception, a limited-capacity visuospatial working memory, and longer-term memories where visual and spatial information may be encoded in many ways. Certain visuospatial and mathematical abilities are related, and visuospatial sex differences have been suggested to contribute to observed sex differences in mathematics performance. Many cultures show similar patterns of visuospatial sex differences, a finding that seems to support theories based on the principles of evolutionary psychology. The chapter explores how factors rooted in biology, specifically the what-where visual systems, hemispheric lateralization, and exposure to sex steroid hormones, may relate to visuospatial skill and to sex differences in those abilities.