It is commonly held that even where questionnaire response is poor, correlational studies are affected only by loss of degrees of freedom or precision. We show that this supposition is not true. If the decision to respond is correlated with a substantive variable of interest, then regression or analysis of variance methods based upon the questionnaire results may be adversely affected by self-selection bias. Moreover, such bias may arise even where response is 100%. The problem in both cases arises where selection information is passed to the score indirectly via the disturbance or individual effects, rather than entirely via the observable explanatory variables. We suggest tests for the ensuing self-selection bias and possible ways of handling the resulting problems of inference.
An alternative selection of subsample sizes for the Box-Scheffé test is compared to the single m used in Levy, “An empirical comparison of the Z-variance and Box-Scheffé tests for homogeneity of variance.” Use of the alternative subsample sizes is shown to yield greater power for the Box-Scheffé test, although the test would still be less powerful than the Z-variance test. Because of the nonrobustness of the Z-variance test and the robustness of the Box-Scheffé test, the latter is recommended as a general technique unless an experimenter has assurance that all populations in the experiment are normal. Alternative techniques are also considered.
With random assignment to treatments and standard assumptions, either a one-way ANOVA of post-test scores or a two-way, repeated measures ANOVA of pre- and post-test scores provides a legitimate test of the equal treatment effect null hypothesis for latent variable Θ. In an ANCOVA for pre- and post-test variables X and Y which are ordinal measures of η and Θ, respectively, random assignment and standard assumptions ensure the legitimacy of inferences about the equality of treatment effects on latent variable Θ. Sample estimates of adjusted Y treatment means are ordinal estimators of adjusted post-test means on latent variable Θ.
Four approximate tests are considered for repeated measurement designs in which observations are multivariate normal with arbitrary covariance matrices. In these tests, traditional within-subject mean square ratios are compared with critical values derived from F distributions with adjusted degrees of freedom. Two of them, the ε approximate and the improved general approximate (IGA) tests, behave adequately in terms of Type I error. Generally, the IGA test performs better than the ε approximate test, although the latter requires less computation. With regard to power, the IGA test may compete with one multivariate procedure when the assumptions of the latter are tenable.
Consider a repeated measurements study in which a single observation is taken at each of t (t > 2) equally spaced time points for each of n subjects. This paper uses a result of Box to find the appropriate multiplicative correction term for the degrees of freedom in the analysis of variance table for the case when there is equal variability per time point, and when the correlation between observations u time units apart is equal to ρ^u. For large values of t, the multiplicative correction has a lower bound approximately equal to 2.5/(t − 1).
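The covariance structure described here (equal variances per time point, correlation ρ^u at lag u) makes the correction easy to compute numerically. Below is a minimal dependency-free sketch of Box's multiplicative correction ε, computed from the double-centered covariance matrix via traces; the function name and the trace-based formula are implementation choices of this sketch, not taken from the paper.

```python
def box_epsilon(t, rho):
    """Box's multiplicative df correction for a t-point repeated
    measurements design with equal variances per time point and
    correlation rho**u between observations u time units apart.
    Uses eps = (tr A)^2 / ((t-1) * tr(A^2)), where A = H S H is the
    double-centered covariance matrix (H = I - J/t)."""
    S = [[rho ** abs(i - j) for j in range(t)] for i in range(t)]
    row = [sum(r) / t for r in S]        # row means (= column means; S symmetric)
    grand = sum(row) / t                 # grand mean
    A = [[S[i][j] - row[i] - row[j] + grand for j in range(t)]
         for i in range(t)]
    tr = sum(A[i][i] for i in range(t))
    # tr(A^2) = sum of squared entries, since A is symmetric
    tr2 = sum(A[i][j] ** 2 for i in range(t) for j in range(t))
    return tr * tr / ((t - 1) * tr2)

# Uncorrelated observations satisfy sphericity, so eps = 1; serial
# correlation pushes eps below 1, and eps always lies in [1/(t-1), 1].
print(box_epsilon(10, 0.0))
print(box_epsilon(10, 0.8))
```

The adjusted within-subject degrees of freedom are ε(t − 1) and ε(t − 1)(n − 1); the abstract's result says that under this lag structure, ε stays above roughly 2.5/(t − 1) for large t, well clear of the worst-case bound 1/(t − 1).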
Several ways of using the traditional analysis of variance to test heterogeneity of spread in factorial designs with equal or unequal n are compared using both theoretical and Monte Carlo results. Two types of spread variables, (1) the jackknife pseudovalues of s² and (2) the absolute deviations from the cell median, are shown to be robust and relatively powerful. These variables seem to be generally superior to the Z-variance and Box-Scheffé procedures.
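Both spread variables can be written down directly: each cell of the factorial design is transformed into spread scores, which are then fed to an ordinary ANOVA. A minimal sketch (function names are illustrative, not from the paper; cells need at least three observations for the leave-one-out variances):

```python
from statistics import median, variance

def abs_dev_from_median(cell):
    """Spread variable (2): absolute deviations from the cell median
    (the Brown-Forsythe-style transformation)."""
    m = median(cell)
    return [abs(x - m) for x in cell]

def jackknife_pseudovalues_s2(cell):
    """Spread variable (1): jackknife pseudovalues of the cell variance,
    p_i = n*s2 - (n-1)*s2_without_i, with s2 the unbiased sample variance."""
    n = len(cell)
    s2 = variance(cell)
    return [n * s2 - (n - 1) * variance(cell[:i] + cell[i + 1:])
            for i in range(n)]

cell = [3.1, 4.0, 5.2, 7.7]
print(abs_dev_from_median(cell))
print(jackknife_pseudovalues_s2(cell))
```

Because the sample variance is a quadratic statistic, its jackknife pseudovalues average back exactly to s², so a standard ANOVA on the pseudovalues is effectively a test of equality of cell variances.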
A method is developed to investigate the additive structure of data that (a) may be measured at the nominal, ordinal or cardinal levels, (b) may be obtained from either a discrete or continuous source, (c) may have known degrees of imprecision, or (d) may be obtained in unbalanced designs. The method also permits experimental variables to be measured at the ordinal level. It is shown that the method is convergent, and includes several previously proposed methods as special cases. Both Monte Carlo and empirical evaluations indicate that the method is robust.
Drowning remains a significant cause of mortality among children world-wide, making prevention strategies crucial. The World Health Organization (WHO) recommends training children in safe rescue techniques, including the use of basic skills such as throwing floating objects. This study aims to address a knowledge gap regarding the throwing capabilities of children aged six to twelve using conventional and alternative water rescue materials.
Method:
A total of 374 children aged six to twelve years participated in the study, including both males and females. A randomized crossover approach was used to compare throws with conventional rescue material (ring buoy and rescue tube) to an alternative material (polyethylene terephthalate [PET]-bottle). Throwing distance and accuracy were assessed based on age, sex, and the type of rescue tools used.
Results:
Children of all ages were able to throw the PET-bottle significantly farther than both the ring buoy (P < .001; d = 1.19) and the rescue tube (P < .001; d = 0.60). There were no significant differences (P = .414) in the percentage of children who managed to throw each object accurately.
Conclusion:
Conventional rescue materials, particularly the ring buoy, may not be well-suited for long-distance throws by children. In contrast, lighter and smaller alternatives, such as PET-bottles, prove to be more adaptable to children’s characteristics, enabling them to achieve greater throwing distances. The emphasis on cost-effective and easily accessible alternatives should be implemented in drowning prevention programs or life-saving courses delivered to children.
Cognitive impairment constitutes a prevailing issue in the schizophrenia spectrum, severely impacting patients' functional outcomes. A global cognitive score, sensitive to the stages of the spectrum, would benefit the exploration of potential factors involved in cognitive decline.
Methods
First, we performed principal component analysis on cognitive scores from 768 individuals across the schizophrenia spectrum, including first-degree relatives of patients, individuals at ultra-high risk, individuals with a first-episode psychosis, and patients with chronic schizophrenia, alongside 124 healthy controls. The analysis provided 10 g-factors as global cognitive scores, validated through correlations with intelligence quotient and assessed for their sensitivity to the stages of the spectrum using analyses of variance. Second, using the g-factors, we explored potential mechanisms underlying cognitive impairment in the schizophrenia spectrum using correlations with sociodemographic, clinical, and developmental data, and linear regressions with genotypic data, pooled through meta-analyses.
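As a concrete illustration of the extraction step, here is a dependency-free sketch of pulling a first principal component (the usual g-factor candidate) out of a subjects-by-subtests score matrix via power iteration. All names are illustrative, and the power-iteration detail is an assumption of this sketch; a real analysis would standardize the subtests and use a linear-algebra library.

```python
def first_pc_scores(X, iters=200):
    """First principal component of the rows of X (subjects x subtests).
    Returns (loadings, subject scores). Columns are mean-centered; the
    leading eigenvector of the covariance matrix is found by power iteration."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    # sample covariance matrix of the subtests
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]
    return v, scores

# Toy data: two perfectly correlated subtests; the first PC captures
# their shared variance, and the scores order the subjects.
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]]
loadings, g = first_pc_scores(X)
print(loadings)
print(g)
```

With positively correlated subtests, the loadings share one sign, so the component scores behave like a global "g" score rather than a contrast between subtests.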
Results
The g-factors were highly correlated with intelligence quotient and with each other, confirming their validity. They presented significant differences between subgroups along the schizophrenia spectrum. They were positively correlated with educational attainment and the polygenic risk score (PRS) for cognitive performance, and negatively correlated with general psychopathology of schizophrenia, neurodevelopmental load, and the PRS for schizophrenia.
Conclusions
The g-factors appeared to be valid estimators of global cognition, making it possible to discern cognitive states within the schizophrenia spectrum. Educational attainment and genetics related to cognitive performance may have a positive influence on cognitive functioning, while general psychopathology of schizophrenia, neurodevelopmental load, and genetic liability to schizophrenia may have an adverse impact.
Chapter 9 introduces students to one-way or one-factor analysis of variance (ANOVA) and factorial designs involving two or more factors. Step-by-step calculation demonstrations are provided. However, a greater emphasis is placed on conceptual understanding than on computation, especially for factorial designs or multifactor ANOVA. The first part of the chapter explores the logic of ANOVA, the steps required to understand and apply it, and what follows after rejecting a null hypothesis. The second part introduces factorial designs involving multiple factors.
Suppose you are running a company that provides proofreading services to publishers. You employ people who sit in front of screens, correcting written text. Spelling errors are the most frequent problem, so you are motivated to hire proofreaders who are excellent spellers. Therefore, you decide to give your job applicants a spelling test. It isn’t hard: throw together 25 words, and score everyone on a scale of 0–25. You are now a social scientist, a specialist called a psychometrician, measuring “spelling ability.”
The reader should be officially informed that in this chapter I take leave of the widely accepted consensus about nature–nurture. This is not a textbook, and everything that I have said up to now has been very much my own take on things, but for the most part I have not strayed far from what most scientists would say about the intellectual history of nature and nurture. Not everyone perhaps, but most people agree that Galton was a racist, eugenics a moral and scientific failure, heritability of behavioral differences nearly universal, heritability a less than useful explanatory concept, twin studies an interesting but ultimately limited research paradigm, and linkage and candidate gene analysis of human behavior decisive failures.
Has it always been the case that living people must struggle with the moral failings of their dead ancestors, or is that a special burden that has been placed on the shoulders of citizens and scientists living in contemporary Europe and North America? Recently, the culture feels as though it is being torn apart by this question. I was taught in grade school that the United States is the greatest country in the world, the land of the free and the home of the brave, where anyone could be a millionaire or president if they put in the effort. It is hardly radical to recognize that this is less than true today and isn’t even close to true historically, especially if one is not white, Christian, and male.
Notwithstanding Galton’s admonition to count everything, counting is just a tool; it is no more science than hammering is architecture. One hundred years after Galton, Robert Hutchins remarked, contemptuously, that a social scientist is a person who counts telephone poles. The obvious way to turn counting into science is by conducting experiments, that is by manipulating nature and observing what the consequences are for whatever one is counting. Gregor Mendel, for example, was certainly a counter – he counted the mixtures of smooth and wrinkled peas in the progeny of the pea plants he intentionally crossed. What made Mendel’s work science was the intentional crossing of the plants, not the counting itself. It would have been much more difficult – perhaps impossible – to observe the segregation and independent assortment of traits by counting smooth and wrinkled peas in the wild.
Why is divorce heritable? It’s clear that it is heritable, in the rMZ > rDZ sense. I hope I have convinced you that the heritability of divorce doesn’t mean that there are “divorce genes,” or that divorce is passed down genetically from parents to children, but seriously: how does something like that happen? I am aware that my constant minimizing of the implications of heritability can seem as though I am keeping my finger in the dike against an inevitable onslaught of scientifically based genetic determinism, the final Plominesque realization that our genes make us who we are, the apotheosis of Galton’s proclamation in 1869: “I propose to show … that a man’s natural abilities are derived by inheritance, under exactly the same limitations as are the form and physical features of the whole organic world” (Hereditary Genius, p. 1).
Robert Plomin, whose name has come up a few times already, is unquestionably the most important psychological geneticist of our time. Trained in social and personality psychology at the University of Texas at Austin in the 1970s (my graduate alma mater, though we didn’t overlap), he went on to faculty positions at the University of Colorado and the Pennsylvania State University (both major American centers for behavior genetics) before moving to London to take a position at the Institute of Psychiatry. Plomin’s career has embodied the integration of behavioral genetics into mainstream social science and psychology. Everywhere Plomin has been, he has initiated twin and adoption studies, many of which continue to make contributions today. Although genetics has always played a central role in Plomin’s research, you would never mistake his work for that of a biologist or quantitative geneticist: he (like me) has always been first and foremost a psychologist.
The Second World War marked a turning point for what was considered acceptable in genetics and its implications for eugenic and racially motivated social policies. To be sure, the change in attitude was not quick or decisive. Tens of thousands of Americans were sterilized involuntarily after the war. Anti-black racism, antisemitism, and anti-immigrant sentiment, needless to say, persisted for a long while and have not yet been eliminated; interracial marriage was still illegal in much of the country during my lifetime. But – and despite the foot-dragging, I think this needs to be recognized as an advance – it slowly became less and less acceptable to adopt openly eugenic or racist opinions in public or to justify them based on science. Retrograde attitudes about such things persist to this day, but they have mostly been relegated to the fringes of scientific discourse.
Many people outside of psychology and biology come to the subject of nature–nurture because of an interest in race. That is unfortunate, but I get it. People, especially in the United States, are obsessed with race, for obvious reasons: American history is indelibly steeped in racial categories. The two foundational failures of the American experience – genocide of Indigenous Americans and enslavement of Africans – happened because of race and racism. Even today in the United States, people of all persuasions think about race all the time, whether as hereditarian racists convinced that there are essential biological differences among ancestral groups, progressives fascinated by personal identity and the degradations that non-white people still experience, or the dozens of racial and ethnic categories obsessively collected by the U.S. census.
Let’s summarize where the nature–nurture debate stood as the twentieth century drew to a close. When the century began, thinkers were faced for the first time with the hard evolutionary fact that human beings were not fundamentally different biologically than other evolved organisms. Galton and his eugenic followers concluded that even those parts of human experience that seemed to be unique – social, class, and cultural differences; abilities, attitudes, and personal struggles – were likewise subsumed by evolution and the mammalian biology it produced. People and societies could therefore be treated like herds of animals, rated on their superior and inferior qualities, bred to maintain them, treated to fix them, and culled as necessary for the good of the herd. Not every mid-century moral disaster that followed resulted from their misinterpretation of human evolution, but it played a role. Society has been trying to recover from biologically justified racism, eugenics, and genocide ever since.
The theory of evolution, as espoused by Charles Darwin in The Origin of Species in 1859, was difficult to accept for religious believers whose assumptions about the world were shattered by it, but Darwin’s The Descent of Man, published 12 years later, posed even greater challenges to people who did accept it, and those challenges continue today. It has often been noted that a disorienting consequence of the Enlightenment was to force people to recognize that humans were not created at the center of the universe in the image of God, but instead on a remote dust-speck of a planet, in the image of mold, rats, dogs, and chimps. For the entirety of recorded history, moral beliefs about humans had been based on the idea that people were in some fundamental sense apart from the rest of nature. Darwin disabused us of that notion once and for all. The scientific and social upheaval that has occurred since Darwin has been an extended process of coming to terms with a unification of humans and the rest of the natural world.