Punctuated equilibria argue for intervals of long-term net stasis and comparatively abrupt change in the morphology of individual species lineages resulting from the process of allopatric speciation as recorded in the stratigraphic and fossil record. The concept of coordinated stasis extends punctuated equilibria to posit that not only individual species, but groups of coexisting lineages within a basin, display concurrent morphological and ecological stability over the same extended intervals of geologic time (10⁵ to 10⁶ yr). These blocks of stability, termed ecological–evolutionary subunits (EESUs), are separated by shorter-lived (on the order of 10³ to 10⁴ yr) episodes of change characterized by varying combinations of speciation, extinction, immigration, and emigration. The result is a pattern of evolutionary and ecological stasis and change that is coincident and highly punctuational.
Here, we assess the connections among environment, evolution, and ecology by documenting patterns of stability, geographic extent, and synchronous turnover during medium-scale bioevents in the Middle Devonian of the eastern United States, and we briefly compare these with patterns of EESUs across the Late Ordovician mass extinction (LOME) based on ongoing work. We quantify the geographic extent and stability of faunas originally documented in the Appalachian Basin and identify their likely places of origin and refugia during turnovers. Faunas are geographically widespread during times of stability and border comparably stable faunas in adjacent provinces. During geologically brief intervals, assemblages display near-synchronous shifts involving local extirpation/extinction and coordinated migration of biogeographic boundaries over very long distances. Allopatric speciation in small, locally isolated populations along the edges of basins during brief windows of dramatically altered environmental conditions is more consistent with the geological record, emphasizes the role of environment and biogeography in driving evolutionary change, and confirms the prevalence of punctuated equilibria.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now emphasis to expand beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
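The discrimination analysis described above can be sketched in code. As an illustration only (the probabilities below are hypothetical, not study data), the AUC from an ROC analysis equals the probability that a randomly chosen impaired case receives a higher predicted probability from the logistic model than a randomly chosen unimpaired one:

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen case scores higher than a randomly chosen control,
    counting ties as one half."""
    scores_pos = np.asarray(scores_pos, dtype=float)
    scores_neg = np.asarray(scores_neg, dtype=float)
    wins = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted probabilities (not study data)
impaired   = [0.9, 0.8, 0.7, 0.4]
unimpaired = [0.6, 0.3, 0.2, 0.1]
print(auc_from_scores(impaired, unimpaired))  # 0.9375
```

This rank-based formulation gives the same value as integrating the empirical ROC curve, which is why AUC is often computed directly from predicted probabilities.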
Results:
The mean (SD) age of the sample was 74.34 (7.54) years; 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses that included GFAP and the above covariates showed plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as with higher CDR Sum of Boxes scores (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Parkinsonism and Parkinson's disease (PD) have been described as consequences of repetitive head impacts (RHI) from boxing since 1928. Autopsy studies have shown that RHI from other contact sports can also increase risk for neurodegenerative diseases, including chronic traumatic encephalopathy (CTE) and Lewy body pathology. In vivo research on the relationship between American football play and PD is scarce, based on small samples, and equivocal in its findings. This study leveraged the Fox Insight study to evaluate the association between American football play and parkinsonism and/or PD diagnosis and related clinical outcomes.
Participants and Methods:
Fox Insight is an online study of people with and without PD aged 18 years or older (>50,000 enrolled). Participants complete online questionnaires on motor function, cognitive function, and general health behaviors. Participants self-reported whether they "currently have a diagnosis of Parkinson's disease, or parkinsonism, by a physician or other health care professional." In November 2020, the Boston University Head Impact Exposure Assessment was launched in Fox Insight for large-scale data collection on exposure to RHI from contact sports and other sources. Data used in this abstract were obtained from the Fox Insight database (https://foxinsight-info.michaeljfox.org/insight/explore/insight.jsp) on 01/06/2022. The sample includes 2018 men who endorsed playing an organized sport. Because only 1.6% of football players were women, analyses were limited to men. Responses to questions regarding history of participation in organized football were examined. Other contact and/or non-contact sports served as the referent group. Outcomes included PD status (absence/presence of parkinsonism or PD) and the Penn Parkinson's Daily Activities Questionnaire-15 (PDAQ-15) for assessment of cognitive symptoms. Binary logistic regression tested associations between history and years of football play and PD status, controlling for age, education, current heart disease or diabetes, and family history of PD. Linear regressions, controlling for these variables, were used for the PDAQ-15.
Results:
Of the 2018 men (mean age=67.67, SD=9.84; 10 [0.5%] Black), 788 (39%) played football (mean years of play=4.29, SD=2.88), including 122 (16.3%) who played at the youth level, 494 (66.0%) at the high school level, 128 (17.1%) at the college level, and 5 (0.7%) at the semi-professional or professional level. In total, 1738 (86.1%) reported being diagnosed with parkinsonism/PD, and 707 (40.7%) of these were football players. History of playing any level of football was associated with increased odds of having a reported parkinsonism or PD diagnosis (OR=1.52, 95% CI=1.14-2.03, p=0.004). The OR remained similar among those aged <69 (sample median age) (OR=1.45, 95% CI=0.97-2.17, p=0.07) and 69+ (OR=1.45, 95% CI=0.95-2.22, p=0.09). Among the football players, there was not a significant association between years of play and PD status (OR=1.09, 95% CI=1.00-1.20, p=0.063). History of football play was not associated with PDAQ-15 scores (n=1980) (beta=-0.78, 95% CI=-1.59 to 0.03, p=0.059) among the entire sample.
Conclusions:
Among 2018 men from a data set enriched for PD, playing organized football was associated with increased odds of having a reported parkinsonism/PD diagnosis. Next steps include examination of the contribution of traumatic brain injury and other sources of RHI (e.g., soccer, military service).
White matter hyperintensity (WMH) burden is greater, has a frontal-temporal distribution, and is associated with proxies of exposure to repetitive head impacts (RHI) in former American football players. These findings suggest that in the context of RHI, WMH might have unique etiologies that extend beyond those of vascular risk factors and normal aging processes. The objective of this study was to evaluate the correlates of WMH in former elite American football players. We examined markers of amyloid, tau, neurodegeneration, inflammation, axonal injury, and vascular health and their relationships to WMH. A group of age-matched asymptomatic men without a history of RHI was included to determine the specificity of the relationships observed in the former football players.
Participants and Methods:
240 male participants aged 45-74 (60 unexposed asymptomatic men, 60 male former college football players, 120 male former professional football players) underwent semi-structured clinical interviews, magnetic resonance imaging (structural T1, T2 FLAIR, and diffusion tensor imaging), and lumbar puncture to collect cerebrospinal fluid (CSF) biomarkers as part of the DIAGNOSE CTE Research Project. Total WMH lesion volumes (TLV) were estimated using the Lesion Prediction Algorithm from the Lesion Segmentation Toolbox. Structural equation modeling, using Full-Information Maximum Likelihood (FIML) to account for missing values, examined the associations between log-TLV and the following variables: total cortical thickness, whole-brain average fractional anisotropy (FA), CSF amyloid β42, CSF p-tau181, CSF sTREM2 (a marker of microglial activation), CSF neurofilament light (NfL), and the modified Framingham stroke risk profile (rFSRP). Covariates included age, race, education, APOE e4 carrier status, and evaluation site. Bootstrapped 95% confidence intervals assessed statistical significance. Models were performed separately for football players (college and professional players pooled; n=180) and the unexposed men (n=60). Due to differences in sample size, estimates were compared and were considered different if the percent change in the estimates exceeded 10%.
Results:
In the former football players (mean age=57.2, 34% Black, 29% APOE e4 carriers), reduced cortical thickness (B=-0.25, 95% CI [-0.45, -0.08]), lower average FA (B=-0.27, 95% CI [-0.41, -0.12]), higher p-tau181 (B=0.17, 95% CI [0.02, 0.43]), and higher rFSRP score (B=0.27, 95% CI [0.08, 0.42]) were associated with greater log-TLV. Compared to the unexposed men, substantial differences in estimates were observed for rFSRP (Bcontrol=0.02, Bfootball=0.27, 994% difference), average FA (Bcontrol=-0.03, Bfootball=-0.27, 802% difference), and p-tau181 (Bcontrol=-0.31, Bfootball=0.17, -155% difference). In the former football players, rFSRP showed a stronger positive association and average FA a stronger negative association with WMH compared to the unexposed men. The effect of WMH on cortical thickness was similar between the two groups (Bcontrol=-0.27, Bfootball=-0.25, 7% difference).
Conclusions:
These results suggest that the risk factor and biological correlates of WMH differ between former American football players and asymptomatic individuals unexposed to RHI. In addition to vascular risk factors, white matter integrity on DTI showed a stronger relationship with WMH burden in the former football players. FLAIR WMH serves as a promising measure to further investigate the late multifactorial pathologies of RHI.
Blood-based biomarkers offer a more feasible approach to Alzheimer’s disease (AD) detection, management, and study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown the utility of plasma markers within the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed in any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Chronic traumatic encephalopathy (CTE) is a neurodegenerative disease that can only be diagnosed at post-mortem. Revised criteria for the clinical syndrome of CTE, known as traumatic encephalopathy syndrome (TES), include impairments in episodic memory and/or executive function as core clinical features. These criteria were informed by retrospective interviews with next-of-kin and the presence and rates of objective impairments in memory and executive functions in CTE are unknown. Here, we characterized antemortem neuropsychological test performance in episodic memory and executive functions among deceased contact sport athletes neuropathologically diagnosed with CTE.
Participants and Methods:
The sample included 80 deceased male contact sport athletes from the UNITE brain bank who had autopsy-confirmed CTE (and no other neurodegenerative diseases). Published criteria were used for the autopsy diagnosis of CTE. Neuropsychological test reports (raw scores) were acquired through medical record requests. Raw scores were converted to z-scores using the same age-, sex-, and education-adjusted normative data. Tests of memory included long delay trials from the Rey Complex Figure, CVLT-II, HVLT-R, RBANS, and BVMT-R. Tests of executive functions included Trail Making Test-B (TMT-B), the Controlled Oral Word Association Test, WAIS-III Picture Arrangement, and various WAIS-IV subtests. Not all brain donors had the same tests, and sample sizes varied across tests, with 33 donors having tests from both domains. Twenty-eight had 1 test in memory and 3 had 2+. Eight had 1 test of executive function and 46 had 2+. A z-score 1.5 standard deviations or more below the normative mean was considered impaired. Interpretation of test performance followed the American Academy of Clinical Neuropsychology guidelines (Guilmette et al., 2020). Bivariate correlations assessed associations between cumulative p-tau burden (summary semiquantitative ratings of p-tau severity across 11 brain regions) and performance on TMT-B (n=34) and CVLT-II (n=14), the most commonly available tests.
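The scoring rule described above (normative z-conversion with a 1.5 SD impairment threshold) can be sketched as follows; the normative mean and SD below are illustrative values, not the study's actual normative data:

```python
def to_z(raw, norm_mean, norm_sd):
    # Convert a raw test score to a z-score against age-, sex-, and
    # education-adjusted normative data (illustrative values only)
    return (raw - norm_mean) / norm_sd

def is_impaired(z, cutoff=-1.5):
    # A score 1.5 SD or more below the normative mean counts as impaired
    return z <= cutoff

z = to_z(raw=30, norm_mean=45, norm_sd=10)
print(z, is_impaired(z))  # -1.5 True
```

Using a common normative reference across donors is what makes z-scores from different tests and batteries comparable within each cognitive domain.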
Results:
Of the 80 donors (mean age=59.9, SD=18.0 years; 13 [16.3%] Black), 72 played football, 4 played ice hockey, and 4 played other contact sports. Most played at the professional level (57, 71.3%). Mean time between neuropsychological testing and death was 3.9 (SD=4.5) years. The most common reason for testing was dementia-related (43, 53.8%). Mean z-scores fell in the average psychometric range (mean z=-0.52, SD=1.5, range=-6.0 to 3.0) for executive function and the low average range for memory (mean z=-1.3, SD=1.1, range=-4.0 to 2.0). Eleven (20.4%) had impairment on 1 test and 3 (5.6%) on 2+ tests of executive functions. The most common impairment was on TMT-B (mean z=-1.77, 13 [38.2%] impaired). For memory, 13 (41.9%) had impairment on 1 test. Of the 14 who had the CVLT-II, 7 were impaired (mean z=-1.33). Greater p-tau burden was associated with worse performance on the CVLT-II (r=-.653, p=.02), but not TMT-B (r=.187, p>.05).
Conclusions:
This study provides the first evidence for objectively-measured impairments in executive functions and memory in a sample with known, autopsy-confirmed CTE. Furthermore, p-tau burden corresponded to worse memory test performance. Examination of neuropsychological tests from medical records has limitations but can overcome shortcomings of retrospective informant reports to provide insight into the cognitive profiles associated with CTE.
OBJECTIVES/GOALS: The trabecular meshwork (TM) and Schlemm’s canal (SC), located within the iridocorneal angle (ICA), form the main outflow pathway and a major target for glaucoma treatments. We characterized the human ICA in vivo with Optical Coherence Tomography (OCT) imaging using a customized goniolens and a commercial OCT device (Heidelberg Spectralis). METHODS/STUDY POPULATION: Imaging these structures is difficult due to the optical limitations of imaging through the cornea at high angles. Therefore, a clinical gonioscopy lens was modified with a 12 mm plano-convex lens placed on its anterior surface to focus light on the ICA structures and capture returning light. Each subject’s eye was anesthetized with 1 drop of Proparacaine 0.5%. The goniolens was coupled to the eye with gonio-gel and held by a 3D adjustable mount. OCT volume scans were acquired on 10 healthy subjects. The linear polarization of the OCT was rotated with a half-waveplate to measure the dependence of the ICA landmarks on polarization orientation. RESULTS/ANTICIPATED RESULTS: The TM was seen in 9 of 10 subjects. Polarization rotation modified the brightness of the band of extracanalicular limbal lamina (BELL) and corneoscleral bands due to the birefringent nature of the collagenous structures, increasing the contrast of SC. SC width was 99 ± 20 µm, varying in size over space, including in one subject with SC narrowing in the inferior-temporal quadrant. DISCUSSION/SIGNIFICANCE: This clinically suitable gonioscopic OCT approach has successfully been used to image the human ICA in 3D in vivo, providing detailed characterization of the TM and SC as well as enhancing their contrast against their birefringent backgrounds by rotating the polarization of the imaging beam.
Objectives: We estimated the change to health-service costs and the health benefits resulting from a decision to adopt temporary isolation rooms, which are effective at isolating the patient within a general ward environment. We assessed the cost-effectiveness of the decision to adopt temporary isolation rooms in a Singapore hospital. Methods: Existing data were used to update a model of the impact of adopting temporary isolation rooms on healthcare-associated infections (HAIs). We predicted the expected change to health-service costs and health benefits, measured in life years gained. Uncertainty was addressed using probabilistic sensitivity analysis, and the findings were tested under plausible scenarios for the effectiveness of the intervention. Results: We predicted 478 fewer HAIs per 100,000 occupied bed days resulting from a decision to adopt temporary isolation rooms. This decrease would result in cost savings of SGD $329,432 (US $247,302) and 1,754 life years gained. When the effectiveness of the intervention was set at 1% of HAI cases prevented, the incremental cost per life year saved was SGD $16,519 (US $12,400), indicating that this would be a cost-effective measure in Singapore. Conclusions: We have provided evidence that adoption of temporary isolation rooms would be cost-effective for Singapore acute-care hospitals. Using temporary isolation rooms may be a positive decision for other countries in the region with fewer resources for infection prevention and control.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
Methods
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
Results
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
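The cutoff comparison above reduces to computing sensitivity and specificity at a score threshold, where a score at or above the cutoff counts as a positive screen. A minimal sketch with hypothetical scores and diagnoses (not study data):

```python
def sens_spec(scores, has_depression, cutoff=10):
    """Sensitivity and specificity of a questionnaire score at a cutoff:
    a score >= cutoff counts as a positive screen."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, has_depression))
    fn = sum(s < cutoff and d for s, d in zip(scores, has_depression))
    tn = sum(s < cutoff and not d for s, d in zip(scores, has_depression))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, has_depression))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical questionnaire scores and diagnostic-interview results
scores = [12, 9, 15, 4, 11, 7, 18, 3]
dx     = [True, True, True, False, False, False, True, False]
sens, spec = sens_spec(scores, dx)
print(sens, spec)  # 0.75 0.75
```

Sweeping the cutoff and choosing the value that maximizes sensitivity + specificity is the standard way such "optimal" questionnaire cutoffs are derived.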
Conclusions
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
Reduction of the pulse width has been reported to improve ECT outcomes with unilateral ECT (similar efficacy, fewer cognitive side effects), but has been minimally studied for bitemporal ECT. The only study comparing brief and ultrabrief pulse bitemporal ECT found reduced efficacy for bitemporal ultrabrief compared to bitemporal brief pulse stimulation. This randomised controlled trial (RCT) aimed to test if ultrabrief pulse bitemporal ECT results in fewer cognitive side effects than brief pulse bitemporal ECT, when given at doses adjusted with the aim of achieving comparable efficacy.
Methods
Thirty-six participants were randomly assigned to receive ultrabrief (at 3 times seizure threshold) or brief (at 1.5 times seizure threshold) pulse bitemporal ECT given 3 times a week in a double-blind, controlled proof-of-concept trial. Blinded raters assessed mood and cognitive functioning over the ECT course.
Results
Efficacy and cognitive outcomes did not differ significantly between the two treatment groups over the ECT course. The ultrabrief pulse group performed better on a test of visual memory assessed acutely after an ECT treatment.
Conclusions
This study suggests there may be a small cognitive advantage in giving bitemporal ECT with an ultrabrief pulse when dosage is increased to match the efficacy of brief pulse bitemporal ECT, but the study was underpowered to fully examine this issue.
Composing Apartheid is the first book ever to chart the musical world of a notorious period in world history, apartheid South Africa. It explores how music was produced through, and was productive of, key features of apartheid’s social and political topography, as well as how music and musicians contested and even helped to conquer apartheid. The collection of essays is intentionally broad, and the contributors include historians, sociologists and anthropologists, as well as ethnomusicologists, music theorists and historical musicologists. The essays focus on a variety of music (jazz, music in the Western art tradition, popular music) and on major composers (such as Kevin Volans) and works (Handel’s Messiah). Musical institutions and previously little-researched performers (such as the African National Congress’s troupe-in-exile, Amandla) are explored. The writers move well beyond their subject matter, intervening in debates on race, historiography, and postcolonial epistemologies and pedagogies.
To examine tweeting activity, networks, and common topics mentioned on Twitter at 4 international infection control and infectious disease conferences.
DESIGN
A cross-sectional study.
METHODS
An independent company was commissioned to undertake a Twitter ‘trawl’ each month between July 1, 2016, and November 30, 2016. The trawl identified any tweets that contained the official hashtags of the conferences for (1) the UK Infection Prevention Society, (2) IDWeek 2016, (3) the Federation of Infectious Society/Hospital Infection Society, and (4) the Australasian College for Infection Prevention and Control. Topics from each tweet were identified, and an examination of the frequency and timing of tweets was performed. A social network analysis was performed to illustrate connections between users. A multivariate binary logistic regression model was developed to explore the predictors of ‘retweets.’
RESULTS
In total, 23,718 tweets were identified as using 1 of the 4 hashtags of interest. The results demonstrated that most tweets were posted during the conferences. Network analysis demonstrated a diversity of Twitter networks. A link to a web address was a significant predictor of whether a tweet would be retweeted (odds ratio [OR], 2.0; 95% confidence interval [CI], 1.9–2.1). Other significant factors predicting a retweet included tweeting on topics such as Clostridium difficile (OR, 2.0; 95% CI, 1.7–2.4) and the media (OR, 1.8; 95% CI, 1.6–2.0). Tweets that contained a picture were significantly less likely to be retweeted (OR, 0.06; 95% CI, 0.05–0.08).
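The odds ratios reported here are exponentiated logistic regression coefficients, with the CI bounds exponentiated in the same way. A minimal sketch; the coefficient values below are illustrative, chosen only to match the scale of the reported web-link OR, not taken from the study:

```python
import math

def odds_ratio(beta, ci_lower_beta, ci_upper_beta):
    # An OR and its CI are the exponentiated logit coefficient and the
    # exponentiated bounds of its confidence interval on the log scale
    return tuple(round(math.exp(b), 2)
                 for b in (beta, ci_lower_beta, ci_upper_beta))

# Illustrative coefficient for a "contains a web link" predictor
print(odds_ratio(0.693, 0.642, 0.742))  # (2.0, 1.9, 2.1)
```

Working on the log-odds scale and exponentiating at the end is why OR confidence intervals are asymmetric around the point estimate.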
CONCLUSION
Twitter is a useful tool for information sharing and networking at infection control conferences.
Although evidence shows that attachment insecurity and disorganization increase risk for the development of psychopathology (Fearon, Bakermans-Kranenburg, van IJzendoorn, Lapsley, & Roisman, 2010; Groh, Roisman, van IJzendoorn, Bakermans-Kranenburg, & Fearon, 2012), implementation challenges have precluded dissemination of attachment interventions on the broad scale at which they are needed. The Circle of Security–Parenting Intervention (COS-P; Cooper, Hoffman, & Powell, 2009), designed with broad implementation in mind, addresses this gap by training community service providers to use a manualized, video-based program to help caregivers provide a secure base and a safe haven for their children. The present study is a randomized controlled trial of COS-P in a low-income sample of Head Start enrolled children and their mothers. Mothers (N = 141; 75 intervention, 66 waitlist control) completed a baseline assessment and returned with their children after the 10-week intervention for the outcome assessment, which included the Strange Situation. Intent to treat analyses revealed a main effect for maternal response to child distress, with mothers assigned to COS-P reporting fewer unsupportive (but not more supportive) responses to distress than control group mothers, and a main effect for one dimension of child executive functioning (inhibitory control but not cognitive flexibility when maternal age and marital status were controlled), with intervention group children showing greater control. There were, however, no main effects of intervention for child attachment or behavior problems. Exploratory follow-up analyses suggested intervention effects were moderated by maternal attachment style or depressive symptoms, with moderated intervention effects emerging for child attachment security and disorganization, but not avoidance; for inhibitory control but not cognitive flexibility; and for child internalizing but not externalizing behavior problems. 
This initial randomized controlled trial of the efficacy of COS-P sets the stage for further exploration of “what works for whom” in attachment intervention.
Determining the best strategy for allocating weed management resources across and between landscapes is challenging because of the uncertainties and large temporal and spatial scales involved. Ecological models of invasive plant spread and control provide a practical tool with which to evaluate alternative management strategies at landscape scales. We developed a spatially explicit model for the spread and control of spotted knapweed and leafy spurge across three Montana landscapes. The objective of the model was to determine the ecological and economic costs and benefits of alternative strategies across landscapes of varying size and stages of infestation. Our results indicate that (1) in the absence of management, the area infested will continue to increase exponentially, leading to a substantial cost in foregone grazing revenues; (2) even though the costs of management actions are substantial, there is a net economic benefit associated with a broad range of management strategies; (3) strategies that prioritize targeting small new infestations consistently outperform strategies that target large established patches; and (4) inconsistent treatment and short-term delays can greatly reduce the economic and ecological benefits of management.
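Finding (1) can be illustrated with a toy calculation: under unmanaged exponential spread, the infested area and the cumulative foregone grazing revenue both compound year over year. All parameter values below are hypothetical and chosen only for illustration, not taken from the authors' model:

```python
def foregone_revenue(a0, growth_rate, revenue_per_ha, years):
    """Cumulative grazing revenue lost to an exponentially expanding
    infestation with no management (hypothetical parameters)."""
    area, lost = a0, 0.0
    for _ in range(years):
        area *= (1 + growth_rate)          # exponential spread
        lost += area * revenue_per_ha      # revenue foregone this year
    return area, lost

area, lost = foregone_revenue(a0=100, growth_rate=0.15,
                              revenue_per_ha=20.0, years=10)
print(round(area), round(lost))
```

Even a modest annual growth rate roughly quadruples the infested area within a decade in this sketch, which is why early treatment of small new infestations dominates in the model's results.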
The efficacy of prolonged single sessions of live graded exposure (LGE) and computer-aided vicarious exposure (CAVE) for spider phobia was examined in a single-blind, controlled trial. Forty participants diagnosed with specific phobia (spiders) received a prolonged single-session treatment of either therapist-aided LGE comprising exposure only or CAVE, or were assigned to a waiting list. Phobic symptomatology was measured at pre- and post-treatment, and at 1-month follow-up on a range of behavioural and subjective assessments. The results showed that the single-session therapist-aided LGE was superior to both CAVE and the waiting-list control. In contrast with previous findings of comparability between LGE and CAVE, and superiority of CAVE over placebo, the present study found no significant differences between the CAVE and waiting-list groups, with the exception of subjective units of distress, providing little support for single-session CAVE treatment.
Behavioural avoidance tests (BATs) are a cornerstone of objective assessment of phobias. However, live BATs have several disadvantages. They are practically difficult and time-consuming to set up and are not standardised. This study examined two computer-delivered BATs (using slide and video presentations of phobic stimuli respectively): first, in respect to their ability to discriminate fearfuls from nonfearfuls, and second, in terms of convergent validity with a live BAT and the Spider Phobia Questionnaire (SPQ). Sixty-four low (n = 32) and high (n = 32) spider-fearful undergraduate participants were administered the three BATs in counterbalanced order. Results showed that subjective anxiety on all BATs was highly discriminative of low and high spider-fearfuls. The number of steps completed did not discriminate between phobics and nonphobics on the computer BATs. However, there was good convergent validity between the live BAT, the SPQ and both computer-delivered BATs on subjective anxiety. Overall, the live BAT gives a clearer indication of avoidance behaviour while the video BAT assesses subjective anxiety across a wider range of steps. The development of computer-delivered BATs that reliably measure avoidance is necessary before contemplating them as an alternative to a live BAT.