Measurement is an aspect of the mathematics curriculum that has wide usage in everyday life. A basic level of knowledge, skills and confidence in measurement is very much part of being numerate. An analysis of the measuring process suggests that children learn to measure first by becoming aware of the physical attributes of objects and how they compare with other objects. Estimation is a significant aspect of measurement and should be seen as an integral part of the measurement process. The ability to estimate is enhanced when students have strong spatial awareness and are able to visualise and represent measurement situations in their heads. Students therefore need to be given plenty of opportunities to engage in measurement activities that focus on developing a sound understanding of the attribute being measured, along with the act of measuring.
The chapter discusses three assessment topics: assessing the impact of experimental manipulations, assessing the clinical significance of change, and the use of ongoing assessment during the course of interventions. Each has direct implications for what one can conclude from a study. Assessing the impact of the experimental manipulation is a check on the independent variable and how it was manipulated or received on the part of the participant. Assessment of clinical significance reflects concern about the importance of the therapeutic changes that were achieved. Statistical significance on the usual measures and indices of effect size do not necessarily reflect whether the treatment had any practical or palpable value in the lives of the clients. Several indices of clinical significance are discussed. Finally, the chapter elaborates the value of ongoing assessment in intervention studies. In randomized controlled trials, the usual assessment consists of pre- and post-treatment assessment to evaluate improvement. If the goal is to understand the course or mediators of treatment, or to see whether the client is improving while the intervention is in effect, ongoing assessment (on multiple occasions over the course of treatment) can be very valuable. The course of change of both mediators and symptoms is likely to vary among individuals. Currently, when studies evaluate mediators, one or two assessment occasions are used during treatment to assess the mediator (e.g., cognitive processes, alliance). Ongoing assessment on multiple occasions provides a better way to capture whether the proposed mediators and outcomes in fact are related.
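One widely cited clinical-significance index is the Jacobson–Truax Reliable Change Index (RCI), which scales a client's pre–post change by the measurement error of the instrument. A minimal Python sketch with illustrative numbers (the chapter may discuss other indices as well):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax Reliable Change Index: pre-post change scaled by
    the measurement error of the instrument."""
    se = sd_pre * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * se ** 2)            # SE of a difference score
    return (post - pre) / s_diff

# Illustrative values (not from the chapter): |RCI| > 1.96 suggests change
# beyond what measurement error alone would be expected to produce.
rci = reliable_change_index(pre=30, post=18, sd_pre=7.5, reliability=0.85)
```

Here a drop of 12 points yields an RCI of about −2.9, i.e., a change unlikely to be mere measurement noise.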
Qualitative research is designed to describe, interpret, and understand human experience and to elaborate the meaning that the experience has for the participants. The data usually derive from in-depth interviews of individuals to capture their experiences. A key feature of the approach is obtaining detailed description without presupposing specific measures, categories, or a narrow range of constructs at the outset of the study. Rather, from the data, interpretations, overarching constructs, and theory are generated to better explain and understand how the participants experience the phenomenon of interest. Mixed-methods research, which combines quantitative and qualitative research within the same study, is also discussed. Among the obvious benefits is bringing to bear multiple levels of understanding of a phenomenon of interest. The methods also allow opportunities for each part of the investigation to influence the other (e.g., qualitative findings can be used to develop measures that will be used later in quantitative research). It is important to be familiar with qualitative and mixed-methods research because of the rich opportunities they provide and because these methodological approaches are much less frequently taught and used in psychological research in comparison to quantitative methods.
Most biological ideas can be viewed as models of nature we create to explain phenomena and predict outcomes in new situations. We use data to determine these models’ credibility. We translate our biological models into statistical ones, then confront those models with data. A mismatch suggests the biological model needs refinement. A biological idea can also be considered a signal that appears in the data among the background noise. Fitting the model to the data lets us see if such a signal exists and, importantly, measure its strength. This approach only works well if our biological hypotheses are clear, the statistical models match the biology, and we collect the data appropriately. This clarity is the starting point for any biological research program.
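The idea of "measuring the strength of a signal" can be made concrete with R², the proportion of variation in the data that a fitted model explains. A minimal Python sketch (the function name and data are illustrative, not from the chapter):

```python
def r_squared(y, y_hat):
    """Proportion of variation explained by a fitted model --
    one simple measure of 'signal strength' against background noise."""
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # unexplained
    ss_tot = sum((yi - ybar) ** 2 for yi in y)                # total
    return 1 - ss_res / ss_tot

# Fitted values very close to the observations -> strong signal
strength = r_squared([1, 2, 3], [1.1, 2.0, 2.9])
```

A value near 1 indicates the model captures most of the variation; a value near 0 indicates the "signal" is indistinguishable from noise.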
Control groups rule out or weaken rival hypotheses or alternative explanations of the results. The control group appropriate for an experiment depends upon precisely what the investigator is interested in concluding at the end of the investigation. No-treatment, wait-list, treatment-as-usual, nonspecific control, and yoked control conditions were discussed. The progression of research and the different control and comparison groups that are used were illustrated in the context of psychotherapy research. Several different treatment evaluation strategies were discussed that focused on the treatment package, extensions of the treatment, dismantling or constructing treatments, comparisons of different treatments, noninferiority of treatments, and the evaluation of moderators and mediators. These strategies convey the rich opportunities and range of questions that can guide intervention research.
The chapter discusses entitlement to POW status in IACs on the part of combatants, the special categories also entitled to the status, unlawful combatants, the relationship between IHL and IHRL, and the determination of POW status. The chapter then goes on to discuss the enemy state's responsibility for POWs, their treatment, internment in POW camps, and the punishment of POWs. It then considers termination of captivity and repatriation and, finally, internment in NIAC.
In discussing the results of one’s study, small inferential leaps are often made that can misrepresent or overinterpret what actually was found in the data analyses. Common examples from clinical research were mentioned, such as slippage in the use and meaning of “significant” and “predictor.” Another topic critical to data interpretation is the notion of negative results, a concept that has come to mean that no statistically significant differences were found in the experiment. The publication bias favoring statistically significant effects can greatly distort our understanding of the relations in the world and capitalizes on chance findings as well as the use of questionable research practices (e.g., searching for significant effects in the data, whether predicted or not). Replication is a critical topic in all of science because it is the most reliable test of whether a finding is veridical. The logic of statistical analyses suggests that occasionally statistical significance will be achieved even when there are no group differences in the population. Since these findings are likely to be published because of the bias, there could well be a great many findings that would not stand up under any replication effort. Thus, distinguishing those findings in the field that have a sound basis requires replication research. Replication is facilitated by making available information about how the study was done. Open science practices aim to improve quality and foster greater transparency, which includes access to procedures, data, and facets of decision making.
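The point that statistical significance occasionally occurs even when there is no true group difference can be demonstrated with a short simulation. In the Python sketch below (sample sizes and the use of a z-test with known variance are illustrative assumptions), both groups are drawn from the same population, yet roughly 5% of comparisons still come out "significant" at α = .05:

```python
import math
import random

random.seed(42)  # reproducible illustration

def false_positive_rate(trials=2000, n=20):
    """Simulate two groups drawn from the SAME normal population and count
    how often a two-sided z-test (known sd = 1) reaches p < .05."""
    hits = 0
    for _ in range(trials):
        g1 = [random.gauss(0, 1) for _ in range(n)]
        g2 = [random.gauss(0, 1) for _ in range(n)]
        z = (sum(g1) / n - sum(g2) / n) / math.sqrt(2 / n)
        if abs(z) > 1.96:      # "statistically significant"
            hits += 1
    return hits / trials

rate = false_positive_rate()   # close to the nominal 0.05
```

If only these chance "hits" were submitted and published, the literature would systematically overrepresent spurious effects, which is exactly why replication matters.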
Uncertainty is a part of everyday life. We live with a range of situations that inherently have an element of uncertainty in them – for example, crossing the road, going on holiday, or making a major purchase – but we often ignore the embedded chance in these activities. Risk is acknowledged in many activities, and much effort is expended in identifying these risks and minimising any potential negative outcomes. In schools, for example, a risk assessment is required prior to any excursion with children. Probability is the strand of mathematics that addresses uncertainty. This chapter explores ideas relating to probability in the mathematics classroom.
For this book, we assume you’ve had an introductory statistics or experimental design class already! This chapter is a mini refresher of some critical concepts we’ll be using and lets you check that you understand them. The topics include predictor and response variables; the common probability distributions that biologists encounter in their data; and the common techniques, particularly ordinary least squares (OLS) and maximum likelihood (ML), for fitting models to data and estimating effects, including their uncertainty. You should be familiar with confidence intervals and understand what hypothesis tests and P-values do and don’t mean. You should recognize that we use data to make decisions, but these decisions can be wrong, so you need to understand the risk of missing important effects and the risk of falsely claiming an effect. Decisions about what constitutes an “important” effect are central.
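As a quick reminder of what OLS fitting does, the slope and intercept of a simple linear model have a closed form; for normally distributed errors, ML gives the same estimates. A minimal Python sketch with made-up data:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = b0 + b1*x (closed-form solution).
    With normal errors, maximum likelihood yields the same b0 and b1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx           # slope: covariance / variance of the predictor
    b0 = my - b1 * mx        # intercept: line passes through the means
    return b0, b1

b0, b1 = ols_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

The slope estimate is the effect size of interest; its uncertainty (standard error, confidence interval) is what hypothesis tests and P-values are built on.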
Multiple predictors can all be continuous, or they can be mixtures of continuous and categorical. A common biological situation is a substantial number of continuous predictors, and fitting these models is commonly labeled multiple regression. We might also mix continuous and categorical predictors, and these have been called analyses of covariance. We show how these two analyses are closely related and how to fit and interpret these models. This chapter introduces the complication of correlated predictors (collinearity) and describes ways of detecting and dealing with the problem. This chapter also introduces measures of influence and leverage as part of checking assumptions.
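A standard diagnostic for collinearity is the variance inflation factor (VIF), with values above about 10 commonly taken as a warning sign. For two predictors it reduces to 1/(1 − r²), where r is their correlation, sketched here in Python with illustrative data:

```python
import math

def vif_two_predictors(x1, x2):
    """Variance inflation factor for the two-predictor case: 1 / (1 - r^2),
    where r is the correlation between the predictors."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    r = cov / math.sqrt(sum((a - m1) ** 2 for a in x1)
                        * sum((b - m2) ** 2 for b in x2))
    return 1 / (1 - r ** 2)

# Two nearly collinear predictors -> VIF far above the rule-of-thumb cutoff
vif = vif_two_predictors([1, 2, 3, 4], [1.1, 2.0, 2.9, 4.2])
```

With more than two predictors, the r² in the formula becomes the R² from regressing each predictor on all the others.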
Making repeated observations through time adds complications, but it’s a common way to deal with limited research resources and reduce the use of experimental animals. A consequence of this design is that observations fall into clusters, often corresponding to individual organisms or “subjects.” We need to incorporate these relationships into statistical models and consider the additional complication where observations closer together in time may be more similar than those further apart. These designs were traditionally analyzed with repeated measures ANOVA, fitted by OLS. We illustrate this traditional approach but recommend the alternative linear mixed models approach. Mixed models offer better ways to deal with correlations within the data by specifying the clusters as random effects and modeling the correlations explicitly. When the repeated measures form a sequence (e.g. time), mixed models also offer a way to deal with occasional missing observations without omitting the whole subject from the model.
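The degree of clustering that motivates mixed models can be quantified with the intraclass correlation (ICC): the share of total variation attributable to differences among subjects. A pure-Python sketch for balanced data (illustrative; this is a diagnostic, not the mixed-model fitting machinery itself):

```python
def icc_oneway(groups):
    """One-way intraclass correlation for balanced clustered data:
    the proportion of variance due to differences among subjects."""
    k = len(groups)                       # number of subjects (clusters)
    n = len(groups[0])                    # observations per subject
    grand = sum(sum(g) for g in groups) / (k * n)
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - sum(g) / n) ** 2
              for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three subjects measured three times each; values cluster tightly
# within subjects, so the ICC is close to 1.
icc = icc_oneway([[10, 11, 10], [20, 21, 19], [30, 29, 31]])
```

An ICC near zero means observations are effectively independent; an ICC near one means ignoring the clustering would badly misstate the effective sample size.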
This chapter addresses the challenges associated with moving from being a pre-service teacher into being a member of the teaching profession. It explores ideas associated with testing and the pressures this can place on students and teachers; collecting evidence of teaching effectiveness for both accreditation and personal development; keeping a professional portfolio; and becoming a full member of a professional learning community. At all times mathematics, and its teaching and learning, is at the centre of the discussion.
The components or functions derived from an eigenanalysis are linear combinations of the original variables. Principal components analysis (PCA) is a very common method that uses these components to examine patterns among the objects, often in a plot termed an ordination, and to identify which variables are driving those patterns. Correspondence analysis (CA) is a related method used when the variables represent counts or abundances. Redundancy analysis and canonical CA are constrained versions of PCA and CA, respectively, where the components are derived after taking into account the relationships with additional explanatory variables. Finally, we introduce linear discriminant function analysis as a way of identifying and predicting the membership of objects in predefined groups.
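For the simplest case of two variables, the first principal component can be computed in closed form from the 2×2 covariance matrix. This Python sketch (with made-up data) extracts the leading eigenvalue and the linear combination of the variables it corresponds to:

```python
import math

def pca_2d(x, y):
    """First principal component of two variables from the 2x2 covariance
    matrix [[sxx, sxy], [sxy, syy]]. Assumes the variables are correlated
    (sxy != 0), so the leading eigenvector is well defined."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
    syy = sum((b - my) ** 2 for b in y) / (n - 1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam1 = tr / 2 + math.sqrt(tr ** 2 / 4 - det)   # largest eigenvalue
    vx, vy = sxy, lam1 - sxx                       # its eigenvector
    norm = math.hypot(vx, vy)
    return lam1, (vx / norm, vy / norm)            # variance, loadings

lam1, loadings = pca_2d([1, 2, 3, 4], [1.2, 1.9, 3.1, 4.0])
```

Here the first component captures almost all of the total variance, and the two roughly equal loadings show both variables contribute about equally to it; with many variables the same eigenanalysis is done on the full covariance (or correlation) matrix.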
Developing computational skills and the concepts that underpin proportional reasoning is a large component of the primary mathematics curriculum. Moving children’s thinking towards proportional reasoning will be covered in more detail in Chapter 6. The research base about the development of number concepts in individual students goes back many years and is too large to address in detail (e.g. Björklund et al., 2020; Whitacre et al., 2020). In this chapter, the focus is on effective teaching to enhance learning: developing computational skills, the relationships between different operations and moving from additive to multiplicative thinking.