Mixed reality simulations such as TeachLivE and Mursion have been increasingly utilised to prepare teachers for inclusive classrooms. The use of mixed reality simulations, which combine elements of both virtual and augmented reality, offers immersive and interactive experiences that can enhance teacher training in various ways. These simulations provide preservice teachers with realistic and safe spaces to practise inclusive communication, pedagogy, and classroom management. Each scenario can be tailored to provide practice in specific skills and support preservice teachers in meeting the Australian Institute for Teaching and School Leadership standards. This is especially helpful in view of today’s inclusive classes, as avatars in the simulations are neurodiverse, representing students of various abilities and personalities. The authors define mixed reality simulations, describe various ways that simulations have been used to support students in special and inclusive education, and describe a case study of simulations used for parent–teacher meetings and for inclusive classroom management in an Australian university. Lastly, they suggest directions for future research and practice.
We analyze the effects of different pay-as-you-go public pension systems on financial imbalance, rate of return, and inequality across generations that are heterogeneous in gender and education. We include aspects relevant to developing countries, such as labor informality and the payment of an old-age social benefit. We introduce a new mixed system that combines components of the defined benefit (DB) and defined contribution (DC) systems. Results show that the new mixed system represents a compromise between the DB and DC systems, and that a scheme inspired by the German system exhibits the highest rates of return and horizontal equity.
What is the nature of discovery? As a human being and a physicist, I can only observe one mind at work first-hand. This mind accepts every new scrap as a discovery, whether it originates in the external world of knowledge or springs from an internal process. To explore the nature of discovery from the point of view of the inner observer, I chose to turn over past experiences, fitting together days on which years of determined and dogged plodding resulted in a finished equation, or finally, a coherent assembly of disparate ideas gave the clue to why storm damage in breakwaters is like a phase change in liquid crystals. Old, unfashionable methods are suddenly useful with new computer architectures. Theory, experiment, observation, and simulation fit together as aids to thinking. The properties of complex systems can provide intuitive insight into the social science of science. We will need every ability to assemble the puzzle pieces in the coming years to discover how to extricate the planet from the difficulties in which we have placed it: as an observer and actor, I suggest that the evolution of human thinking, and of aids to thinking, are critical.
Rising sustainability expectations require support for the design of systems that are reactive in minimizing potential negative impact and proactive in guiding engineering decision-making toward more value-robust long-term decisions. This article identifies a gap in the methodological support for the design of circular systems, building on the hypothesis that computer-based simulation models will drive the development of more value-robust systems designed to behave positively in a changeable operational environment during the whole lifecycle. The article presents a framework for value-robust circular systems design, complementing current approaches for circular design by increasing decision-makers' awareness of the complexity of the circular systems to be designed. The framework is theoretically described and demonstrated through its application in four case studies in the field of construction machinery, investigating new circular solutions for the future of mining, quarrying, and road construction. The framework supports the development of more resilient and sustainable systems, strengthening the feedback loop between exploring new technologies, proposing innovative concepts, and evaluating system performance.
In small meta-analyses (e.g., up to 20 studies), the best-performing frequentist methods can yield very wide confidence intervals for the meta-analytic mean, as well as biased and imprecise estimates of the heterogeneity. We investigate the frequentist performance of alternative Bayesian methods that use the invariant Jeffreys prior. This prior has the usual Bayesian motivation, but also has a purely frequentist motivation: the resulting posterior modes correspond to the established Firth bias correction of the maximum likelihood estimator. We consider two forms of the Jeffreys prior for random-effects meta-analysis: the previously established “Jeffreys1” prior treats the heterogeneity as a nuisance parameter, whereas the “Jeffreys2” prior treats both the mean and the heterogeneity as estimands of interest. In a large simulation study, we assess the performance of both Jeffreys priors, considering different types of Bayesian estimates and intervals. We assess point and interval estimation for both the mean and the heterogeneity parameters, comparing to the best-performing frequentist methods. For small meta-analyses of binary outcomes, the Jeffreys2 prior may offer advantages over standard frequentist methods for point and interval estimation of the mean parameter. In these cases, Jeffreys2 can substantially improve efficiency while more often showing nominal frequentist coverage. However, for small meta-analyses of continuous outcomes, standard frequentist methods seem to remain the best choices. The best-performing method for estimating the heterogeneity varied according to the heterogeneity itself. Röver & Friede’s R package bayesmeta implements both Jeffreys priors. We also generalize the Jeffreys2 prior to the case of meta-regression.
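The Jeffreys2 prior described above can be made concrete with a small numerical sketch. The code below is a minimal grid approximation in Python for the normal random-effects model, not the bayesmeta implementation; the toy effect sizes, within-study variances, and grid ranges are invented for illustration:

```python
import numpy as np

def jeffreys2_posterior(y, s2, mu_grid, tau_grid):
    """Grid approximation to the joint posterior under the Jeffreys2 prior
    for the normal random-effects model y_i ~ N(mu, s_i^2 + tau^2).
    A sketch only -- not the bayesmeta implementation."""
    logpost = np.empty((len(mu_grid), len(tau_grid)))
    for j, tau in enumerate(tau_grid):
        v = s2 + tau**2                        # marginal variances
        i_mu = np.sum(1.0 / v)                 # Fisher information for mu
        i_tau = np.sum(2.0 * tau**2 / v**2)    # Fisher information for tau
        logprior = 0.5 * np.log(i_mu * i_tau)  # Jeffreys2: sqrt(det I)
        for i, mu in enumerate(mu_grid):
            loglik = -0.5 * np.sum(np.log(v) + (y - mu) ** 2 / v)
            logpost[i, j] = loglik + logprior
    post = np.exp(logpost - logpost.max())
    return post / post.sum()

# invented toy data: five study effects and within-study variances
y = np.array([0.2, 0.5, -0.1, 0.4, 0.3])
s2 = np.array([0.04, 0.09, 0.05, 0.16, 0.08])
mu_grid = np.linspace(-1.0, 1.5, 201)
tau_grid = np.linspace(1e-4, 1.5, 150)
post = jeffreys2_posterior(y, s2, mu_grid, tau_grid)
mu_mean = float(np.sum(post.sum(axis=1) * mu_grid))  # posterior mean of mu
```

Because the Fisher information matrix is diagonal in (mu, tau) for this model, the Jeffreys2 prior factorizes into the `i_mu` and `i_tau` terms computed in the loop; treating tau as a nuisance (Jeffreys1) would instead use only the tau component.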
A procedure for generating multivariate nonnormal distributions is proposed. Our procedure generates average values of intercorrelations much closer to the population parameters than competing procedures for skewed and/or heavy-tailed distributions and for small sample sizes. It also eliminates the need to conduct a factorization procedure on the population correlation matrix that underlies the random deviates, and it is simpler to code in a programming language (e.g., FORTRAN). Numerical examples demonstrating the procedure are given. Monte Carlo results indicate that our procedure yields excellent agreement between the population parameters and the average values of intercorrelation, skew, and kurtosis.
Under a formula scoring instruction (FSI), test takers omit items. If students are instead encouraged to answer every item (under a rights-only scoring instruction, ROI), the score distribution will be different. In this paper, we formulate a simple statistical model to predict the ROI score distribution from the FSI data. Estimation error is also provided. In addition, a preliminary investigation of the probability of guessing correctly on omitted items, and of its sensitivity, is presented. Based on the data used in this paper, the probability of guessing correctly may be close to, or slightly greater than, the chance score.
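As a toy illustration of the kind of prediction described, one can simulate ROI scores from FSI data by assuming each omitted item is answered correctly with a fixed guessing probability, here set to the chance level 1/K. The examinee counts and K = 5 are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                 # assumed number of options per item
P_GUESS = 1.0 / K     # assumed probability of a correct guess on an omitted item

def predict_roi_scores(rights, omits, p=P_GUESS, n_rep=1000):
    """Simulate rights-only (ROI) scores from formula-scoring (FSI) data:
    each omitted item is answered correctly with probability p."""
    omits = np.asarray(omits)
    guessed = rng.binomial(omits, p, size=(n_rep, len(omits)))
    return np.asarray(rights) + guessed   # n_rep simulated ROI score vectors

# toy FSI data: number right and number omitted for six examinees
rights = [30, 25, 40, 18, 33, 27]
omits = [5, 10, 0, 12, 4, 8]
sims = predict_roi_scores(rights, omits)
mean_roi = sims.mean(axis=0)   # predicted expected ROI score per examinee
```

Replacing the constant `P_GUESS` with other values is one way to probe the sensitivity question the abstract raises.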
This chapter brings together the argument so far, showing how nonignorable nonresponse may manifest itself and how the various models perform across these contexts, including how they may fail. It also highlights the ideal response to potential nonignorable nonresponse, which involves (1) creating randomized instruments, (2) using the randomized instrument to diagnose nonignorable nonresponse, (3) moving to conventional weights if there is no evidence of nonignorable nonresponse, but (4) using selection models explained here when there is evidence of nonignorable nonresponse. Section 11.1 simulates and analyzes data across a range of scenarios using multiple methods. Section 11.2 discusses how to diagnose whether nonresponse is nonignorable. Section 11.3 integrates the approaches with a decision tree based on properties of the data. Section 11.4 discusses how selection models can fail.
Nancy Cartwright's 1983 book How the Laws of Physics Lie argued that theories of physics often make use of idealisations, and that as a result many of these theories were not true. The present paper looks at idealisation in logic and argues that, at least sometimes, the laws of logic fail to be true. That might be taken as a kind of skepticism, but I argue rather that idealisation is a legitimate tool in logic, just as in physics, and recognising this frees logicians up to use false laws where these are helpful.
Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.
This chapter details the practical, theoretical, and philosophical aspects of experimental science. It discusses how one chooses a project, performs experiments, interprets the resulting data, makes inferences, and develops and tests theories. It then asks the question, "are our theories accurate representations of the natural world, that is, do they reflect reality?" Surprisingly, this is not an easy question to answer. Scientists assume so, but are they warranted in this assumption? Realists say "yes," but anti-realists argue that realism is simply a mental representation of the world as we perceive it, that is, metaphysical in nature. Regardless of one's sense of reality, the fact remains that science has been and continues to be of tremendous practical value. It would have to be a miracle if our knowledge and manipulation of nature were not real. And even if they are, how do we know they are true in an absolute sense, and not just relative to our own experience? This is a thorny philosophical question, the answer to which depends on the context in which it is asked. The take-home message for the practicing scientist is "never assume your results are true."
This innovation in simulation evaluated the effectiveness of a time-sensible, low-cost simulation on prelicensure nursing students’ knowledge of and confidence in responding to public health emergencies.
Method:
One hundred eighty-two nursing students, in groups of 5, participated in a 75-min emergency preparedness disaster simulation. A mixed methods design was used to evaluate students’ knowledge and confidence in disaster preparedness, and satisfaction with the simulation.
Results:
Students reported an increase in knowledge and confidence following the disaster simulation and satisfaction with the experience.
Conclusions:
Prelicensure nursing programs can replicate this low-cost, time-sensible disaster simulation to effectively educate students in emergency preparedness.
For international lawyers seeking to promote compliance with international humanitarian law (IHL), some level of affective awareness is essential – but just where one might cultivate an understanding of emotions, and at which juncture of one's career, remains a mystery. This article proposes that what the IHL lawyers and advocates of the future need is an affect-based education. More than a simple mastery of a technical set of emotional intelligence skills, what we are interested in here is the refinement of a disposition or sensibility – a way of engaging with the world, with IHL, and with humanitarianism. In this article, we consider the potential for the Jean-Pictet Competition to provide this education. Drawing on our observations of the competition and a survey of 231 former participants, the discussion examines the legal and affective dimensions of the competition, identifies the precise moments of the competition in which emotional processes take place, and probes the role of emotions in role-plays and simulations. Presenting the Jean-Pictet Competition as a form of interaction ritual, we propose that high “emotional energy” promotes a humanitarian sensibility; indeed, participant interactions have the potential to re-constitute the very concept of humanitarianism. We ultimately argue that a more conscious engagement with emotions at competitions like Pictet has the potential to strengthen IHL training, to further IHL compliance and the development of IHL rules, and to enhance legal education more generally.
The Markov True and Error (MARTER) model (Birnbaum & Wan, 2020) has three components: a risky decision making model with one or more parameters, a Markov model that describes stochastic variation of parameters over time, and a true and error (TE) model that describes probabilistic relations between true preferences and overt responses. In this study, we simulated data according to 57 generating models that either did or did not satisfy the assumptions of the TE fitting model, that either did or did not satisfy the error independence assumptions, that either did or did not satisfy transitivity, and that had various patterns of error rates. A key assumption in the TE fitting model is that a person’s true preferences do not change in the short time within a session; that is, preference reversals between two responses by the same person to two presentations of the same choice problem in the same brief session are due to random error. In a set of 48 simulations, the data generating models either satisfied this assumption or implemented a systematic violation, in which true preferences could change within sessions. We used the TE fitting model to analyze the simulated data, and we found that it did a good job of distinguishing transitive from intransitive models and of estimating parameters, not only when the generating model satisfied the model assumptions, but also when the assumptions were violated in this way. When the generating model violated the assumptions, statistical tests of the TE fitting models correctly detected the violations. Even when the data contained violations of the TE model, the parameter estimates representing probabilities of true preference patterns were surprisingly accurate, except for error rates, which were inflated by model violations.
In a second set of simulations, the generating model either had error rates that were or were not independent of true preferences and transitivity either was or was not satisfied. It was found again that the TE analysis was able to detect the violations of the fitting model, and the analysis correctly identified whether the data had been generated by a transitive or intransitive process; however, in this case, estimated incidence of a preference pattern was reduced if that preference pattern had a higher error rate. Overall, the violations could be detected and did not affect the ability of the TE analysis to discriminate between transitive and intransitive processes.
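The within-session error structure assumed by the TE fitting model can be made concrete for a single choice problem presented twice per session: true preference is fixed within the session, and each response errs independently with probability e. The sketch below, with illustrative values p = 0.7 and e = 0.1 (not taken from the study), computes the predicted response-pattern probabilities and checks them against simulated sessions:

```python
import numpy as np

def te_pattern_probs(p, e):
    """Response-pattern probabilities under the TE model for one choice
    problem shown twice in a session: p = probability the true preference
    is A; e = error rate per response."""
    return np.array([
        p * (1 - e) ** 2 + (1 - p) * e ** 2,   # AA
        e * (1 - e),                           # AB (within-session reversal)
        e * (1 - e),                           # BA (within-session reversal)
        p * e ** 2 + (1 - p) * (1 - e) ** 2,   # BB
    ])

def simulate_sessions(p, e, n, rng):
    """Simulate n sessions from a generating model that satisfies the
    TE assumption (true preference fixed within the session)."""
    true_a = rng.random(n) < p                 # true preference per session
    r1 = np.where(rng.random(n) < e, ~true_a, true_a)
    r2 = np.where(rng.random(n) < e, ~true_a, true_a)
    patterns = 2 * (~r1).astype(int) + (~r2).astype(int)  # 0=AA ... 3=BB
    return np.bincount(patterns, minlength=4) / n

rng = np.random.default_rng(1)
obs = simulate_sessions(p=0.7, e=0.1, n=200_000, rng=rng)
pred = te_pattern_probs(0.7, 0.1)
```

Note that the two reversal patterns (AB and BA) have equal predicted probability e(1 - e) regardless of p; it is this signature that lets the fitting model attribute within-session reversals to random error.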
Chapter 11 begins with a discussion of the constraints of language and how metaphor helps to overcome these constraints and expand the expressive power of language. It briefly summarizes traditional theories of metaphor comprehension, followed by conceptual metaphor theory and perceptual simulations. Extensions of conceptual metaphor theory that imply a code model are discussed and critiqued. Then I introduce and illustrate an approach to metaphor analysis that begins with the speaker’s experience as expressed in the text. The chapter discusses grammatical metaphors, metaphorical stories, playful metaphors, and multimodal metaphors. It discusses the processing and comprehension of metaphors, and closes with a discussion of the contribution of metaphor to social structure and personal identity.
There has been an explosion in the uses of multimedia and their various platforms. The proliferation of different types of technology inclusion in education has become even greater due to the increased need for remote platforms for education globally. My focus in this chapter is on providing a definition of multimedia learning with simulations. There are many types of simulations, and this chapter presents a framework for understanding this diversity. In particular, I discuss the multimedia principles that inform the design of simulations, along with research evidence of how simulations support learning. Future directions for this research are discussed.
Collective Defined Contribution (CDC) pension schemes are a variant of collective pension plans that are present in many countries and especially common in the Netherlands. CDC schemes are based on the pooled management of the retirement savings of all members, thereby incorporating inter-generational risk-sharing features. Employers are not subject to investment and longevity risks as these are transferred to plan members collectively. In this paper, we discuss policy related to the proposed introduction of CDC schemes to the UK. By means of a simulation-based study, we compare the performance of CDC schemes vis-à-vis typical Defined Contribution schemes under different investment strategies. We find that CDC schemes may provide retirees with a higher income replacement rate on average, together with less uncertainty.
In Spain, the epidemic curve caused by COVID-19 reached its peak in the last days of March. The lockdown that followed the declaration of the state of alarm on 14 March has raised a discussion of how and when to ease it. In this paper, we aim to add information that may help by using epidemic simulation techniques based on stochastic individual contact models and several extensions.
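A minimal stochastic individual contact model can be sketched as a discrete-day SIR process in which infectious individuals draw random contacts each day. This is only in the spirit of the ICM class the paper uses; all parameter values are illustrative, not fitted to Spanish data:

```python
import numpy as np

rng = np.random.default_rng(7)

def icm_sir(n=10_000, i0=10, act_rate=0.6, inf_prob=0.2,
            rec_rate=0.1, days=120):
    """Minimal stochastic individual-contact SIR model (illustrative
    parameters): each infectious person draws a Poisson number of daily
    contacts; transmission occurs per contact with a susceptible."""
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(days):
        contacts = rng.poisson(act_rate, size=i).sum()
        exposed_frac = s / n                  # chance a contact is susceptible
        new_inf = min(s, rng.binomial(contacts, inf_prob * exposed_frac))
        new_rec = rng.binomial(i, rec_rate)   # stochastic recoveries
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return np.array(history)

traj = icm_sir()
peak_day = int(traj[:, 1].argmax())   # day of the epidemic peak
```

Intervention scenarios such as a lockdown can be explored by lowering `act_rate` from a given day onward and observing how the peak shifts.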
The regression discontinuity design (RDD) is a valuable tool for identifying causal effects with observational data. However, applying the traditional electoral RDD to the study of divided government is challenging. Because assignment to treatment in this case is the result of elections to multiple institutions, there is no obvious single forcing variable. Here, we use simulations in which we apply shocks to real-world election results in order to generate two measures of the likelihood of divided government, both of which can be used for causal analysis. The first captures the electoral distance to divided government and can easily be utilized in conjunction with the standard sharp RDD toolkit. The second is a simulated probability of divided government. This measure does not easily fit into a sharp RDD framework, so we develop a probability restricted design (PRD) which relies upon the underlying logic of an RDD. This design incorporates common regression techniques but limits the sample to those observations for which assignment to treatment approaches “as-if random.” To illustrate both of our approaches, we reevaluate the link between divided government and the size of budget deficits.
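The simulated-probability measure can be illustrated with a toy two-institution example: apply i.i.d. normal shocks to the observed executive and legislative margins, then count the share of simulations in which control splits. The margins and shock standard deviation below are invented for the sketch, not drawn from the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_divided(gov_margin, seat_margins, n_sim=10_000, shock_sd=0.03):
    """Simulated probability of divided government: perturb the
    governor's margin and each legislative seat margin (positive =
    party A) with normal shocks, then check whether control splits."""
    seat_margins = np.asarray(seat_margins)
    gov_a = (gov_margin + rng.normal(0, shock_sd, n_sim)) > 0
    shocks = rng.normal(0, shock_sd, size=(n_sim, len(seat_margins)))
    seats_a = (seat_margins + shocks > 0).sum(axis=1)
    leg_a = seats_a > len(seat_margins) / 2    # party A legislative majority
    return float(np.mean(gov_a != leg_a))

# toy example: razor-thin governor's race, solidly party-A legislature
p = prob_divided(gov_margin=0.002,
                 seat_margins=[0.08, 0.05, 0.11, -0.02, 0.06, 0.09, 0.04])
```

Restricting the sample to observations where this probability is near one half is the intuition behind the probability restricted design (PRD): there, assignment to divided government approaches "as-if random."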