Paleontology provides insights into the history of the planet, from the origins of life billions of years ago to the biotic changes of the Recent. The scope of paleontological research is as vast as it is varied, and the field is constantly evolving. In an effort to identify “Big Questions” in paleontology, experts from around the world came together to build a list of priority questions the field can address in the years ahead. The 89 questions presented herein (grouped within 11 themes) represent contributions from nearly 200 international scientists. These questions touch on common themes including biodiversity drivers and patterns, integrating data types across spatiotemporal scales, applying paleontological data to contemporary biodiversity and climate issues, and effectively utilizing innovative methods and technology for new paleontological insights. In addition to these theoretical questions, discussions touch upon structural concerns within the field, advocating for an increased valuation of specimen-based research, protection of natural heritage sites, and the importance of collections infrastructure, along with a stronger emphasis on human diversity, equity, and inclusion. These questions offer a starting point—an initial nucleus of consensus that paleontologists can expand on—for engaging in discussions, securing funding, advocating for museums, and fostering continued growth in shared research directions.
Patients with posttraumatic stress disorder (PTSD) exhibit smaller regional brain volumes in commonly reported regions including the amygdala and hippocampus, regions associated with fear and memory processing. In the current study, we have conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
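The voxel-wise comparison described above reduces, at each voxel, to a standardized mean difference between patients and controls. A minimal sketch of bias-corrected Hedges’ g (the group means and SDs below are invented for illustration, not study data):

```python
import numpy as np

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation of the two groups
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp              # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)   # small-sample correction factor
    return j * d

# Hypothetical voxel: controls (n = 2198) vs. PTSD patients (n = 1309)
g = hedges_g(0.52, 0.10, 2198, 0.50, 0.10, 1309)
```

In a meta-analysis, per-cohort g values at each voxel would then be combined, typically with inverse-variance weights.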
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, corrected p = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, corrected p = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed PTSD severity was negatively associated with GM volumes within the cerebellum (corrected p = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (corrected p = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
DSM-5 specifies bulimia nervosa (BN) severity based on specific thresholds of compensatory behavior frequency. There is limited empirical support for such severity groupings. Limited support could be because the DSM-5’s compensatory behavior frequency cutpoints are inaccurate or because compensatory behavior frequency does not capture true underlying differences in severity. In support of the latter possibility, some work has suggested shape/weight overvaluation or use of single versus multiple purging methods may be better severity indicators. We used structural equation modeling (SEM) Trees to empirically determine the ideal variables and cutpoints for differentiating BN severity, and compared the SEM Tree groupings to alternate severity classifiers: the DSM-5 indicators, single versus multiple purging methods, and a binary indicator of shape/weight overvaluation.
Methods
Treatment-seeking adolescents and adults with BN (N = 1017) completed self-report measures assessing BN and comorbid symptoms. SEM Trees specified an outcome model of BN severity and recursively partitioned this model into subgroups based on shape/weight overvaluation and compensatory behaviors. We then compared groups on clinical characteristics (eating disorder symptoms, depression, anxiety, and binge eating frequency).
Results
SEM Tree analyses resulted in five severity subgroups, all based on shape/weight overvaluation: overvaluation <1.25; overvaluation 1.25–3.74; overvaluation 3.75–4.74; overvaluation 4.75–5.74; and overvaluation ≥5.75. SEM Tree groups explained 1.63–6.41 times the variance explained by other severity schemes.
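The "times the variance explained" comparison above corresponds to the ratio of eta-squared values from one-way models under competing grouping schemes; a dependency-light sketch with synthetic data (the scores and groupings below are made up, not patient data):

```python
import numpy as np

def eta_squared(scores, groups):
    """Proportion of variance in a severity score explained by group membership."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(groups):
        members = scores[groups == g]
        ss_between += len(members) * (members.mean() - grand) ** 2
    return ss_between / ss_total

# Synthetic severity scores under two hypothetical grouping schemes
scores = [1, 2, 1, 2, 5, 6, 5, 6]
fine   = [0, 0, 0, 0, 1, 1, 1, 1]   # separates low vs. high severity well
coarse = [0, 1, 0, 1, 0, 1, 0, 1]   # largely uninformative split
ratio = eta_squared(scores, fine) / eta_squared(scores, coarse)
```

A ratio above 1 means the first scheme explains more variance in the outcome than the second, which is the sense in which the SEM Tree groups outperformed the alternatives.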
Conclusions
Shape/weight overvaluation outperformed the DSM-5 severity scheme and single versus multiple purging methods, suggesting the DSM-5 severity scheme should be reevaluated. Future research should examine the predictive utility of this severity scheme.
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
The Society for Healthcare Epidemiology of America, the Association of Professionals in Infection Control and Epidemiology, the Infectious Diseases Society of America, and the Pediatric Infectious Diseases Society represent the core expertise regarding healthcare infection prevention and infectious diseases and have written a multisociety statement for healthcare facility leaders, regulatory agencies, payors, and patients to strengthen requirements and expectations around facility infection prevention and control (IPC) programs. Based on a systematic literature search and formal consensus process, the authors advocate raising the expectations for facility IPC programs, moving to effective programs that are:
• Foundational and influential parts of the facility’s operational structure
• Resourced with the correct expertise and leadership
• Prioritized to address all potential infectious harms
This document discusses the IPC program’s leadership—a dyad model that includes both physician and infection preventionist leaders—its reporting structure, expertise, and competencies of its members, and the roles and accountability of partnering groups within the healthcare facility. The document outlines a process for identifying minimum IPC program medical director support. It applies to all types of healthcare settings except post-acute long-term care and focuses on resources for the IPC program. Long-term acute care hospital (LTACH) staffing and antimicrobial stewardship programs will be discussed in subsequent documents.
Inadequate recruitment and retention impede clinical trial goals. Emerging decentralized clinical trials (DCTs) leveraging digital health technologies (DHTs) for remote recruitment and data collection aim to address barriers to participation in traditional trials. The ACTIV-6 trial is a DCT using DHTs, but participants’ experiences of such trials remain largely unknown. This study explored participants’ perspectives of the ACTIV-6 DCT that tested outpatient COVID-19 therapeutics.
Methods:
Participants in the ACTIV-6 study were recruited via email to share their day-to-day trial experiences during 1-hour virtual focus groups. Two human factors researchers guided group discussions through a semi-structured script that probed expectations and perceptions of study activities. Qualitative data analysis was conducted using a grounded theory approach with open coding to identify key themes.
Results:
Twenty-eight ACTIV-6 study participants aged 30+ years completed virtual focus groups of 1–4 participants each. Analysis yielded three major themes: perceptions of the DCT experience, study activity engagement, and trust. Participants perceived the use of remote DCT procedures supported by DHTs as an acceptable and efficient method of organizing and tracking study activities, communicating with study personnel, and managing study medications at home. Use of social media was effective in supporting geographically dispersed participant recruitment but also raised issues with trust and study legitimacy.
Conclusions:
While participants in this qualitative study viewed the DCT-with-DHT approach as reasonably efficient and engaging, they also identified challenges to address. Understanding facilitators and barriers to DCT participation and DHT interaction can help improve future research design.
Commercial targeted sprayer systems allow producers to reduce herbicide inputs but risk leaving emerging weeds untreated. Currently, targeted applications with the John Deere system have five spray sensitivity settings, and no published literature discusses the effects of these settings on detecting and spraying weeds of varying species, sizes, and positions in crops. Research was conducted in Arkansas, Illinois, Indiana, Mississippi, and North Carolina on plantings of corn, cotton, and soybean to determine how various factors might influence the ability of targeted applications to treat weeds. These data included 21 weed species aggregated into six classes, with heights, widths, and densities ranging from 25 to 0.25 cm, 25 to 0.25 cm, and 14.3 to 0.04 plants m−2, respectively. Crop and weed density did not influence the likelihood of treating the weeds. As expected, the sensitivity setting altered the ability to treat weeds. Targeted applications (across sensitivity settings, median weed height and width, and a density of 2.4 plants m−2) resulted in a treatment success of 99.6% to 84.4% for Convolvulaceae, 99.1% to 68.8% for decumbent broadleaf weeds, 98.9% to 62.9% for Malvaceae, 99.1% to 70.3% for Poaceae, 98.0% to 48.3% for Amaranthaceae, and 98.5% to 55.8% for yellow nutsedge. Reducing the sensitivity setting reduced the ability to treat weeds. Larger weeds were more readily detected and therefore more readily treated. Based on these findings, various conditions can affect the outcome of targeted multinozzle applications, and the analyses highlight some of the parameters to consider when using these technologies.
Objectives/Goals: To identify the cascade of molecular and cellular events occurring during the progression of focal segmental glomerulosclerosis (FSGS) in human kidney biopsies from kidney transplant (KTx) recipients (KTR) with normal function or recurrent FSGS, and to determine potential targets of intervention and therapy. Methods/Study Population: In this study, we evaluated the molecular and cellular events associated with primary FSGS in both native and transplant kidneys. We collected biopsy samples from native normal kidneys (nNK, n = 3), normal functioning allografts (NKTx, n = 3), primary FSGS in the native kidney (nFSGS, n = 1), and recurrent FSGS (KTxFSGS, n = 5). KTxFSGS comprises a collection of longitudinal samples, with biopsies also collected at subsequent recurrence. Blood samples were collected at the time of biopsy. Biopsies were preserved in RNAlater at the time of collection. 10x Genomics Chromium single-nuclei RNA sequencing (snRNAseq) was performed using isolated nuclei. Data were analyzed using Seurat in R. Conditionally immortalized podocytes were treated with patient serum to determine whether the expression changes observed in the snRNAseq data could be reproduced. Results/Anticipated Results: Recurrence rates of primary FSGS in kidney allograft recipients are high, up to 25–50% in first and up to 80% in second transplants, often leading to graft loss. Our findings reveal that podocyte detachment is driven by metabolic and structural dysregulation rather than cell death, increasing VEGFA expression and disrupting glomerular endothelial cell growth and permeability. Parietal epithelial cells initially compensate by dedifferentiating toward podocytes but later increase collagen deposition, contributing to glomerular sclerosis. Increased interactions of glomerular cells with B cells exacerbate extracellular matrix deposition and scarring. We also observed tubular sclerosis and disruption of the regenerative potential of proximal tubular cells, with increased interaction with T cells. Discussion/Significance of Impact: These findings offer new insights into the pathogenesis of recurrent FSGS, suggest potential therapeutic targets, and establish a foundation for future studies to further evaluate the role of metabolic dysfunction as a cause of podocyte injury and loss.
Planktonic foraminifera are extremely well suited to studying evolutionary change in the fossil record due to their abundant deposits and global distribution. Species are typically conservative in their shell morphology, with the same geometric shapes appearing repeatedly through iterative evolution, but the mechanisms behind the architectural limits on foraminiferal shell shape are still not well understood. To determine how these developmental constraints arise, we study morphological change leading up to the origination of the unusually ornate species Globigerinoidesella fistulosa. We measured the size and circularity of more than 900 specimens of G. fistulosa, its ancestor the Trilobatus sacculifer plexus, and intermediate forms from a site in the western equatorial Pacific. Our results show that the origination of G. fistulosa from the T. sacculifer plexus involved a combination of two heterochronic expressions: earlier onset of protuberances (pre-displacement) and a steeper allometric slope (acceleration) as compared with its ancestor. Our work provides a case study of the complex morphological and developmental changes required to produce unusual shell shapes and highlights the importance of developmental deviations in evolutionary origination.
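Allometric slopes of the kind compared above (ancestor vs. descendant) are conventionally estimated as the slope of a log–log regression of trait size on overall size; a sketch with synthetic measurements (the power-law parameters below are invented for illustration):

```python
import numpy as np

# Synthetic specimens: overall shell size vs. protuberance size (arbitrary units)
size = np.array([100.0, 150.0, 200.0, 300.0, 450.0])
trait = 0.05 * size ** 1.4   # trait grows faster than isometry (slope > 1)

# Allometric slope = slope of log(trait) regressed on log(size)
slope, intercept = np.polyfit(np.log(size), np.log(trait), 1)
```

Under the heterochrony framework used above, "acceleration" in a descendant would appear as a steeper slope than the ancestor's, and "pre-displacement" as the trait appearing at a smaller size.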
A concept for a femtosecond pulse compressor based on underdense plasma prisms is presented. An analytical model is developed to calculate the spectral phase incurred and the expected pulse compression. A 2D particle-in-cell simulation verifies the analytical model. Simulated intensities ($\sim 10^{16}$ W/cm$^2$) were orders of magnitude higher than the damage threshold of conventional gratings used in chirped pulse amplification. Theoretical geometries for compact (tens-of-cm scale) compressors for 1, 10 and 100 PW power levels are proposed.
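For a Gaussian pulse, the duration after acquiring a quadratic spectral phase (group-delay dispersion, GDD) has a standard closed form, and undoing that phase is what any compressor must accomplish; a sketch (the parameter values are illustrative, not taken from the simulations above):

```python
import numpy as np

def chirped_duration(tau0_fs, gdd_fs2):
    """FWHM duration (fs) of a Gaussian pulse with transform-limited duration
    tau0_fs after acquiring group-delay dispersion gdd_fs2 (fs^2)."""
    return tau0_fs * np.sqrt(1.0 + (4.0 * np.log(2.0) * gdd_fs2 / tau0_fs**2) ** 2)

tau0 = 30.0                                   # fs, transform-limited duration
stretched = chirped_duration(tau0, 5000.0)    # heavily chirped input pulse
recompressed = chirped_duration(tau0, 0.0)    # ideal compressor cancels the GDD
```

An ideal compressor applies GDD equal and opposite to that of the input pulse, recovering the transform limit; residual uncompensated phase lengthens the output accordingly.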
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
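A polygenic risk score of the kind evaluated here is, at its core, a weighted count of risk alleles; a minimal sketch (the genotypes and effect weights below are made up for illustration):

```python
import numpy as np

# Rows = individuals, columns = variants; entries are risk-allele counts (0/1/2)
genotypes = np.array([
    [0, 1, 2],
    [2, 2, 1],
    [0, 0, 0],
])
# Per-variant effect sizes (e.g. log odds ratios from a discovery GWAS)
weights = np.array([0.10, -0.05, 0.20])

prs = genotypes @ weights   # one score per individual
# Scores are typically standardized before comparing diagnostic groups
prs_z = (prs - prs.mean()) / prs.std()
```

A case–case PRS trained to separate BPD from MDD would use weights from a BPD-vs-MDD GWAS rather than a case–control one, but the scoring step is the same.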
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
In response to the COVID-19 pandemic, we implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center’s design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
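Randomizing assignments while keeping arms balanced across a distributed network is commonly done with permuted blocks; a hedged sketch (the block size and arm labels are illustrative, not the trials’ actual scheme):

```python
import random

def permuted_block_sequence(n_participants, arms=("convalescent", "control"),
                            block_size=4, seed=0):
    """Assignment list in which every complete block contains each arm equally,
    so arm counts can never drift far apart at any point in enrollment."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)   # fixed seed for a reproducible sequence
    sequence = []
    while len(sequence) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)      # order within each block is random
        sequence.extend(block)
    return sequence[:n_participants]

assignments = permuted_block_sequence(8)
```

In practice the sequence would also be stratified (e.g. by site or blood group) and held server-side so staff cannot predict upcoming assignments.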
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system, and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain restraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Prenatal glucocorticoid exposure has been negatively associated with infant neurocognitive outcomes. However, questions about developmental timing effects across gestation remain. Participants were 253 mother-child dyads in a prospective cohort study recruited in the first trimester of pregnancy. Diurnal cortisol was measured in maternal saliva samples collected across a single day within each trimester of pregnancy. Children (49.8% female) completed the Bayley Mental Development Scales, Third Edition at 6, 12, and 24 months and three observational executive function tasks at 24 months. Structural equation models adjusting for sociodemographic covariates were used to test study hypotheses. There was significant evidence for timing sensitivity. First-trimester diurnal cortisol (area under the curve) was negatively associated with cognitive and language development at 12 months and predicted poorer inhibition at 24 months. Second-trimester cortisol exposure was negatively associated with language scores at 24 months. Third-trimester cortisol positively predicted performance in shifting between task rules (set shifting) at 24 months. Associations were not reliably moderated by child sex. Findings suggest that neurocognitive development is sensitive to prenatal glucocorticoid exposure as early as the first trimester and underscore the importance of assessing developmental timing in research on prenatal exposures and child health outcomes.
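The diurnal cortisol area under the curve used above is typically computed with the trapezoidal rule over the day's sampling times; a sketch with hypothetical values (sampling times and concentrations are invented):

```python
import numpy as np

# Hypothetical sampling times (hours after waking) and cortisol levels (nmol/L)
times = np.array([0.0, 0.5, 4.0, 9.0, 14.0])
cortisol = np.array([12.0, 18.0, 9.0, 6.0, 4.0])

# AUC with respect to ground: a trapezoid on each sampling interval,
# so unevenly spaced samples are weighted by the time they span
auc_g = float((((cortisol[1:] + cortisol[:-1]) / 2.0) * np.diff(times)).sum())
```

Because sampling is uneven (dense near waking to capture the awakening response), interval-weighted integration matters more than the raw sample mean.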
Clinical outcomes of repetitive transcranial magnetic stimulation (rTMS) for treatment-resistant depression (TRD) vary widely, and there is no mood rating scale that is standard for assessing rTMS outcome. It remains unclear whether rTMS is as efficacious in older adults with late-life depression (LLD) as in younger adults with major depressive disorder (MDD). This study examined the effect of age on outcomes of rTMS treatment of adults with TRD. Self-report and observer mood ratings were measured weekly in 687 subjects ages 16–100 years undergoing rTMS treatment using the Inventory of Depressive Symptomatology 30-item Self-Report (IDS-SR), Patient Health Questionnaire 9-item (PHQ), Profile of Mood States 30-item, and Hamilton Depression Rating Scale 17-item (HDRS). All rating scales detected significant improvement with treatment; response and remission rates varied by scale but not by age (response/remission ≥ 60: 38%–57%/25%–33%; <60: 32%–49%/18%–25%). Proportional hazards models showed early improvement predicted later improvement across ages, though early improvements in PHQ and HDRS were more predictive of remission in those < 60 years (relative to those ≥ 60) and greater baseline IDS burden was more predictive of non-remission in those ≥ 60 years (relative to those < 60). These results indicate there is no significant effect of age on treatment outcomes in rTMS for TRD, though rating instruments may differ in assessment of symptom burden between younger and older adults during treatment.
n-3 fatty acid consumption during pregnancy is recommended for optimal pregnancy outcomes and offspring health. We examined characteristics associated with self-reported fish or n-3 supplement intake.
Design:
Pooled pregnancy cohort studies.
Setting:
Cohorts participating in the Environmental influences on Child Health Outcomes (ECHO) consortium with births from 1999 to 2020.
Participants:
A total of 10 800 pregnant women in twenty-three cohorts with food frequency data on fish consumption; 12 646 from thirty-five cohorts with information on supplement use.
Results:
Overall, 24·6 % reported consuming fish never or less than once per month, 40·1 % less than once a week, 22·1 % 1–2 times per week and 13·2 % more than twice per week. The relative risk (RR) of ever (v. never) consuming fish was higher in participants who were older (1·14, 95 % CI 1·10, 1·18 for 35–40 v. <29 years), were other than non-Hispanic White (1·13, 95 % CI 1·08, 1·18 for non-Hispanic Black; 1·05, 95 % CI 1·01, 1·10 for non-Hispanic Asian; 1·06, 95 % CI 1·02, 1·10 for Hispanic) or used tobacco (1·04, 95 % CI 1·01, 1·08). The RR was lower in those with overweight v. healthy weight (0·97, 95 % CI 0·95, 1·0). Only 16·2 % reported n-3 supplement use, which was more common among individuals with a higher age and education, a lower BMI, and fish consumption (RR 1·5, 95 % CI 1·23, 1·82 for twice-weekly v. never).
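Relative risks like those reported above come from 2×2 counts, with the confidence interval computed on the log scale; a sketch using made-up counts (not the cohort's actual data):

```python
import math

def relative_risk(a, b, c, d):
    """RR of the exposed group (a events out of a+b) vs. the reference group
    (c events out of c+d), with a 95% CI computed on the log scale."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR) for independent binomial proportions
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical: 90/100 older participants ever ate fish vs. 80/100 younger
rr, lo, hi = relative_risk(90, 10, 80, 20)
```

In the pooled analysis above the RRs are adjusted (e.g. via regression), but the unadjusted 2×2 form shows where the point estimate and interval come from.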
Conclusions:
One-quarter of participants in this large nationwide dataset rarely or never consumed fish during pregnancy, and n-3 supplement use was uncommon, even among those who did not consume fish.
As part of the Research Domain Criteria (RDoC) initiative, the NIMH seeks to improve experimental measures of cognitive and positive valence systems for use in intervention research. However, many RDoC tasks have not been psychometrically evaluated as a battery of measures. Our aim was to examine the factor structure of 7 such tasks chosen for their relevance to schizophrenia and other forms of serious mental illness. These include the n-back, Sternberg, and self-ordered pointing tasks (measures of the RDoC cognitive systems working memory construct); flanker and continuous performance tasks (measures of the RDoC cognitive systems cognitive control construct); and probabilistic learning and effort expenditure for reward tasks (measures of reward learning and reward valuation constructs).
Participants and Methods:
The sample comprised 286 cognitively healthy participants who completed novel versions of all 7 tasks via an online recruitment platform, Prolific, in the summer of 2022. The mean age of participants was 38.6 years (SD = 14.5, range 18-74), 52% identified as female, and stratified recruitment ensured an ethnoracially diverse sample. Excluding time for instructions and practice, each task lasted approximately 6 minutes. Task order was randomized. We estimated optimal scores from each task including signal detection d-prime measures for the n-back, Sternberg, and continuous performance task, mean accuracy for the flanker task, win-stay to win-shift ratio for the probabilistic learning task, and trials completed for the effort expenditure for reward task. We used parallel analysis and a scree plot to determine the number of latent factors measured by the 7 task scores. Exploratory factor analysis with oblimin (oblique) rotation was used to examine the factor loading matrix.
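The signal-detection d-prime scores used for the n-back, Sternberg, and continuous performance tasks are the difference of z-transformed hit and false-alarm rates; a sketch (the correction for extreme rates below is one standard choice, an assumption rather than necessarily the authors' exact method):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a +0.5 count correction
    so rates of exactly 0 or 1 stay finite (a common, hedged choice)."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hr) - z(far)

# Hypothetical participant: many hits, few false alarms -> high sensitivity
dp = d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

A participant responding at chance yields d' near 0, so the score separates discrimination ability from response bias.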
Results:
The scree plot and parallel analyses of the 7 task scores suggested three primary factors. The flanker and continuous performance task both strongly loaded onto the first factor, suggesting that these measures are strong indicators of cognitive control. The n-back, Sternberg, and self-ordered pointing tasks strongly loaded onto the second factor, suggesting that these measures are strong indicators of working memory. The probabilistic learning task solely loaded onto the third factor, suggesting that it is an independent indicator of reinforcement learning. Finally, the effort expenditure for reward task modestly loaded onto the second but not the first and third factors, suggesting that effort is most strongly related to working memory.
Conclusions:
Our aim was to examine the factor structure of 7 RDoC tasks. Results support the RDoC formulation of separable cognitive control, working memory, and reinforcement learning constructs. However, effort is a factorially complex construct that is not uniquely, or even most strongly, related to positive valence. Thus, there is reason to believe that at least 6 of these tasks are appropriate measures of constructs such as working memory, reinforcement learning, and cognitive control.
Blood-based biomarkers offer a more feasible alternative to Alzheimer’s disease (AD) detection, management, and study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework, however recent studies have stressed the importance of expanding this framework to include other pathways. There is promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
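The discrimination (AUC) analysis above can be reproduced in miniature from predicted probabilities alone: AUC equals the probability that a randomly chosen case scores higher than a randomly chosen non-case. A dependency-free sketch with toy scores (not the study's predicted probabilities):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney form of ROC AUC: the fraction of (case, non-case) pairs
    in which the case has the higher score (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy predicted probabilities for autopsy-confirmed AD vs. not
cases    = [0.9, 0.8, 0.6, 0.55]
controls = [0.7, 0.4, 0.3, 0.2]
a = auc(cases, controls)
```

An AUC of 0.5 is chance-level discrimination and 1.0 is perfect separation, which is the scale on which the reported 0.75 and 0.81 should be read.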
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) participants were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.