This study evaluated the impact of four cover crop species and their termination timings on cover crop biomass, weed control, and corn yield. A field experiment was arranged in a split-plot design in which cover crop species (wheat, cereal rye, hairy vetch, and rapeseed) were the main-plot factor, and termination timing [4, 2, 1, and 0 wk before planting corn (WBP)] was the subplot factor. In both years (2021 and 2022), hairy vetch produced the most biomass (5,021 kg ha–1) among cover crop species, followed by cereal rye (4,387 kg ha–1), wheat (3,876 kg ha–1), and rapeseed (2,575 kg ha–1). Regression analysis of cover crop biomass against accumulated growing degree days (AGDDs) indicated that for every 100-AGDD increase, the biomass of cereal rye, wheat, hairy vetch, and rapeseed increased by 880, 670, 780, and 620 kg ha–1, respectively. The density of grass and small-seeded broadleaf (SSB) weeds at 4 wk after preemergence herbicide (WAPR) application varied significantly across termination timings. Grass and SSB weed densities were 56% and 36% lower at 0 WBP than at 2 WBP, and 67% and 61% lower than at 4 WBP. The sole use of a roller-crimper was not effective at terminating rapeseed at 0 WBP and resulted in the lowest corn yield (3,046 kg ha–1), whereas several other combinations of cover crop and termination timing resulted in greater corn yield. In conclusion, allowing cover crops to grow longer in the spring provides more biomass for weed suppression and affects corn yield.
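The reported slopes come from a linear regression of biomass on AGDD. As an illustration only, a least-squares fit can be sketched in Python; the data below are synthetic, and the only value taken from the abstract is the cereal rye slope of roughly 880 kg ha–1 per 100 AGDD.

```python
import numpy as np

# Illustrative only: a least-squares fit of cover crop biomass on
# accumulated growing degree days (AGDD). The data are synthetic;
# the target slope (~880 kg/ha per 100 AGDD, as reported for cereal
# rye) is the only value taken from the abstract.
rng = np.random.default_rng(42)
agdd = np.linspace(200, 1200, 25)                     # AGDD at sampling dates
biomass = 8.8 * agdd + rng.normal(0, 300, agdd.size)  # kg/ha, true slope 8.8 per AGDD

# Fit biomass = b0 + b1 * AGDD; np.polyfit returns [b1, b0]
b1, b0 = np.polyfit(agdd, biomass, 1)
print(f"Estimated gain per 100 AGDD: {100 * b1:.0f} kg/ha")
```

With enough sampling dates, the recovered slope closely tracks the true per-AGDD growth rate, which is how per-100-AGDD biomass gains of the kind reported here are derived.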
Biostatisticians increasingly use large language models (LLMs) to enhance efficiency, yet practical guidance on responsible integration is limited. This study explores current LLM usage, challenges, and training needs to support biostatisticians.
Methods:
A cross-sectional survey was conducted across three biostatistics units at two academic medical centers. The survey assessed LLM usage across three key professional activities: communication and leadership, clinical and domain knowledge, and quantitative expertise. Responses were analyzed using descriptive statistics, while free-text responses underwent thematic analysis.
Results:
Of 208 eligible biostatisticians (162 staff and 46 faculty), 69 (33.2%) responded. Among them, 44 (63.8%) reported using LLMs; of the 43 who answered the frequency question, 20 (46.5%) used them daily and 16 (37.2%) weekly. LLMs improved productivity in coding, writing, and literature review; however, 29 of 41 respondents (70.7%) reported significant errors, including incorrect code, statistical misinterpretations, and hallucinated functions. Key verification strategies included expertise, external validation, debugging, and manual inspection. Among 58 respondents providing training feedback, 44 (75.9%) requested case studies, 40 (69.0%) sought interactive tutorials, and 37 (63.8%) desired structured training.
Conclusions:
LLM usage is notable among respondents at two academic medical centers, though response patterns likely reflect early adopters. While LLMs enhance productivity, challenges like errors and reliability concerns highlight the need for verification strategies and systematic validation. The strong interest in training underscores the need for structured guidance. As an initial step, we propose eight core principles for responsible LLM integration, offering a preliminary framework for structured usage, validation, and ethical considerations.
It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect-modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
Results
IPD of 10 671 individuals (50 studies) could be included. We found significant effects on depressive symptom severity up to 12 months (standardised mean-difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
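The study pools individual participant data in a Bayesian one-stage model, which is beyond a short sketch. As a simplified, hypothetical stand-in, a classical DerSimonian–Laird random-effects pooling of per-study standardised mean differences conveys the basic idea; every number below is synthetic.

```python
import numpy as np

# Simplified stand-in for the pooling step: DerSimonian-Laird
# random-effects meta-analysis of per-study standardised mean
# differences (SMDs). This is NOT the Bayesian one-stage IPD model
# used in the study; all effect sizes and variances are synthetic.
smd = np.array([-0.51, -0.30, -0.45, -0.22, -0.60, -0.35])  # per-study SMDs
var = np.array([0.02, 0.03, 0.025, 0.04, 0.05, 0.03])       # their variances

w = 1 / var                                  # fixed-effect weights
mu_fe = np.sum(w * smd) / np.sum(w)          # fixed-effect pooled SMD
q = np.sum(w * (smd - mu_fe) ** 2)           # Cochran's Q heterogeneity statistic
df = len(smd) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                # between-study variance estimate
w_re = 1 / (var + tau2)                      # random-effects weights
mu_re = np.sum(w_re * smd) / np.sum(w_re)    # random-effects pooled SMD
se_re = np.sqrt(1 / np.sum(w_re))
print(f"Pooled SMD = {mu_re:.2f} (SE {se_re:.2f})")
```

A one-stage IPD model instead fits all participants jointly with study-level random effects, which is what allows the participant-level moderator analyses reported above.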
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
Negative symptoms are a key feature of several psychiatric disorders. Difficulty identifying common neurobiological mechanisms that cut across diagnostic boundaries might result from equifinality (i.e., multiple mechanistic pathways to the same clinical profile), both within and across disorders. This study used a data-driven approach to identify unique subgroups of participants with distinct reward processing profiles to determine which profiles predicted negative symptoms.
Methods
Participants were a transdiagnostic sample of youth from a multisite study of psychosis risk, including 110 individuals at clinical high-risk for psychosis (CHR; meeting psychosis-risk syndrome criteria), 88 help-seeking participants who failed to meet CHR criteria and/or who presented with other psychiatric diagnoses, and a reference group of 66 healthy controls. Participants completed clinical interviews and behavioral tasks assessing four reward processing constructs indexed by the RDoC Positive Valence Systems: hedonic reactivity, reinforcement learning, value representation, and effort–cost computation.
Results
k-means cluster analysis of clinical participants identified three subgroups with distinct reward processing profiles, primarily characterized by: a value representation deficit (54%), a generalized reward processing deficit (17%), and a hedonic reactivity deficit (29%). Clusters did not differ in rates of clinical group membership or psychiatric diagnoses. Elevated negative symptoms were only present in the generalized deficit cluster, which also displayed greater functional impairment and higher psychosis conversion probability scores.
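The clustering step can be sketched in a dependency-free way. The sample size (110 CHR plus 88 help-seeking participants, 198 total) and the four reward constructs come from the Methods; the scores and this minimal Lloyd's-algorithm k-means implementation are illustrative.

```python
import numpy as np

# Hypothetical sketch of the k-means step. The 198 clinical participants
# and four reward constructs come from the Methods; the scores below are
# synthetic and the implementation is a minimal Lloyd's algorithm.
rng = np.random.default_rng(0)
X = rng.normal(size=(198, 4))  # synthetic z-scores on the four constructs

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each participant to the nearest profile center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

labels, centers = kmeans(X, k=3)
print(np.bincount(labels, minlength=3))  # subgroup sizes
```

In practice the number of clusters (here fixed at three) would be selected with fit indices such as silhouette scores, and cluster centroids would be inspected to name the profiles.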
Conclusions
Contrary to the equifinality hypothesis, results suggested one global reward processing deficit pathway to negative symptoms independent of diagnostic classification. Assessment of reward processing profiles may have utility for individualized clinical prediction and treatment.
Ambulatory antimicrobial stewardship can be challenging due to disparities in resource allocation across the care continuum, competing priorities for ambulatory prescribers, ineffective communication strategies, and lack of incentive to prioritize antimicrobial stewardship program (ASP) initiatives. Efforts to monitor and compare outpatient antibiotic usage metrics have been implemented through quality measures (QM). The Healthcare Effectiveness Data and Information Set (HEDIS®) comprises standardized measures that examine the quality of antibiotic prescribing by region and across insurance health plans. Health systems with affiliated emergency departments and ambulatory clinics contribute patient data for HEDIS® measure assessment, and these measures are directly tied to value-based reimbursement, pay-for-performance, patient satisfaction measures, and payor incentives and rewards. There are four HEDIS® measures related to optimal antibiotic prescribing in upper respiratory tract diseases that ambulatory ASPs can leverage to develop and measure effective interventions while maintaining buy-in from providers: avoidance of antibiotic treatment for acute bronchitis/bronchiolitis, appropriate treatment for upper respiratory infection, appropriate testing for pharyngitis, and antibiotic utilization for respiratory conditions. Additionally, other QM are assessed by the Centers for Medicare and Medicaid Services (CMS), including overuse of antibiotics for adult sinusitis. Ambulatory ASPs with limited resources should leverage HEDIS® measures to implement and evaluate successful interventions, given their pay-for-performance nature. The purpose of this review is to outline the HEDIS® measures related to infectious diseases in ambulatory care settings. This review also examines the barriers and enablers in ambulatory ASPs, which play a crucial role in promoting responsible antibiotic use and optimizing patient outcomes.
The COVID-19 pandemic highlighted gaps in infection control knowledge and practice across health settings nationwide. The Centers for Disease Control and Prevention, with funding through the American Rescue Plan, developed Project Firstline, a national collaborative aiming to reach all aspects of the health care frontline. The American Medical Association recruited eight physicians and one medical student to join its director of infectious diseases to develop educational programs targeting knowledge gaps. The group has identified five critical areas requiring national attention.
A 54-question survey about System Healthcare Infection Prevention Programs (SHIPPs) was sent out to SHEA Research Network participants in August 2023. Thirty-eight United States-based institutions responded (38/93, 41%), of which 23 have SHIPPs. We found heterogeneity in the structure, staffing, and resources for system infection prevention (IP) programs.
NASA’s all-sky survey mission, the Transiting Exoplanet Survey Satellite (TESS), is specifically engineered to detect exoplanets that transit bright stars. Thus far, TESS has successfully identified approximately 400 transiting exoplanets, in addition to roughly 6 000 candidate exoplanets pending confirmation. In this study, we present the results of our ongoing project, the Validation of Transiting Exoplanets using Statistical Tools (VaTEST). Our dedicated effort is focused on the confirmation and characterisation of new exoplanets through the application of statistical validation tools. Through a combination of ground-based telescope data, high-resolution imaging, and the utilisation of the statistical validation tool known as TRICERATOPS, we have successfully discovered eight potential super-Earths. These planets bear the designations: TOI-238b (1.61$^{+0.09} _{-0.10}$ R$_\oplus$), TOI-771b (1.42$^{+0.11} _{-0.09}$ R$_\oplus$), TOI-871b (1.66$^{+0.11} _{-0.11}$ R$_\oplus$), TOI-1467b (1.83$^{+0.16} _{-0.15}$ R$_\oplus$), TOI-1739b (1.69$^{+0.10} _{-0.08}$ R$_\oplus$), TOI-2068b (1.82$^{+0.16} _{-0.15}$ R$_\oplus$), TOI-4559b (1.42$^{+0.13} _{-0.11}$ R$_\oplus$), and TOI-5799b (1.62$^{+0.19} _{-0.13}$ R$_\oplus$). Among all these planets, six of them fall within the region known as ‘keystone planets’, which makes them particularly interesting for study. Based on the location of TOI-771b and TOI-4559b below the radius valley we characterised them as likely super-Earths, though radial velocity mass measurements for these planets will provide more details about their characterisation. It is noteworthy that planets within the size range investigated herein are absent from our own solar system, making their study crucial for gaining insights into the evolutionary stages between Earth and Neptune.
In an identified quality improvement effort, nurses were observed regarding their workflow while in contact precaution rooms. Multiple opportunities for hand hygiene were missed while nurses were in gloves, predominantly while moving between “dirty” and “clean” tasks. An education initiative afterward did not show improvement in hand hygiene rates.
Children with CHD or born very preterm are at risk for brain dysmaturation and poor neurodevelopmental outcomes. Yet, studies have primarily investigated neurodevelopmental outcomes of these groups separately.
Objective:
To compare neurodevelopmental outcomes and parent behaviour ratings of children born term with CHD to children born very preterm.
Methods:
A clinical research sample of 181 children (CHD [n = 81]; very preterm [≤32 weeks; n = 100]) was assessed at 18 months.
Results:
Children with CHD and born very preterm did not differ on Bayley-III cognitive, language, or motor composite scores, or on expressive or receptive language, or on fine motor scaled scores. Children with CHD had lower gross motor scaled scores compared to children born very preterm (p = 0.047). More children with CHD had impaired scores (<70 SS) on language composite (17%), expressive language (16%), and gross motor (14%) indices compared to children born very preterm (6%; 7%; 3%; ps < 0.05). No group differences were found on behaviours rated by parents on the Child Behaviour Checklist (1.5–5 years) or the proportion of children with scores above the clinical cutoff. English as a first language was associated with higher cognitive (p = 0.004) and language composite scores (p < 0.001). Lower median household income and English as a second language were associated with higher total behaviour problems (ps < 0.05).
Conclusions:
Children with CHD were more likely to display language and motor impairment compared to children born very preterm at 18 months. Outcomes were associated with language spoken in the home and household income.
Fetal alcohol spectrum disorder (FASD) is a life-long condition, and few interventions have been developed to improve the neurodevelopmental course in this population. Early interventions targeting core neurocognitive deficits have the potential to confer long-term neurodevelopmental benefits. Time-targeted choline supplementation is one such intervention that has been shown to provide neurodevelopmental benefits that emerge with age during childhood. We present a long-term follow-up study evaluating the neurodevelopmental effects of early choline supplementation in children with FASD approximately 7 years on average after an initial efficacy trial. In this study, we examine treatment group differences in executive function (EF) outcomes and diffusion MRI of the corpus callosum using the Neurite Orientation Dispersion and Density Index (NODDI) biophysical model.
Participants and Methods:
The initial study was a randomized, double-blind, placebo-controlled trial of choline vs. placebo in 2.5- to 5-year-olds with FASD. Participants in this long-term follow-up study included 18 children (9 placebo; 9 choline) seen 7 years on average following initial trial completion. The mean age at follow-up was 11 years old. Diagnoses were 28% fetal alcohol syndrome (FAS), 28% partial FAS, and 44% alcohol-related neurodevelopmental disorder. The follow-up evaluation included measures of executive functioning (WISC-V Picture Span and Digit Span; DKEFS subtests) and diffusion MRI (NODDI).
Results:
Children who received choline early in development outperformed those in the placebo group across a majority of EF tasks at long-term follow-up (effect sizes ranged from -0.09 to 1.27). Children in the choline group demonstrated significantly better performance on several tasks of lower-order executive function skills (i.e., DKEFS Color Naming [Cohen's d = 1.27], DKEFS Word Reading [Cohen's d = 1.13]) and showed potentially better white matter microstructure organization (as indicated by lower orientation dispersion; Cohen's d = -1.26) in the splenium of the corpus callosum compared to the placebo group. In addition, when collapsing across treatment groups, higher white matter microstructural organization was associated with better performance on several EF tasks (WISC-V Digit Span; DKEFS Number Sequencing and DKEFS Word Reading).
Conclusions:
These findings highlight long-term benefits of choline as a neurodevelopmental intervention for FASD and suggest that changes in white matter organization may represent an important target of choline in this population. Unique to this study is the use of contemporary biophysical modeling of diffusion MRI data in youth with FASD. Findings suggest this neuroimaging approach may be particularly useful for identifying subtle white matter differences in FASD as well as neurobiological responses to early intervention associated with important cognitive functions.
An accurate accounting of prior sport-related concussion (SRC) is critical to optimizing the clinical care of athletes with SRC. Yet, obtaining such a history via medical records or lifetime monitoring is often not feasible, necessitating the use of self-report histories. The primary objective of the current project is to determine the degree to which athletes consistently report their SRC history on serial assessments throughout their collegiate athletic career.
Participants and Methods:
Data were obtained from the NCAA-DoD CARE Consortium and included 1621 athletes (914 male) from a single Division 1 university who participated in athletics during the 2014-2017 academic years. From this initial cohort, 752 athletes completed a second-year assessment and 332 completed a third-year assessment. Yearly assessments included a brief self-report survey that queried SRC history of the previous year. Consistency of self-reported SRC history was defined as reporting the same number of SRC on subsequent yearly evaluation as had been reported the previous year.
For every year of participation, the number of SRC reported on the baseline exam (Reported) and the number of SRC recorded by athletes and medical staff during the ensuing season (Recorded) were tabulated. In a subsequent year, the expected number of SRC (Expected) was computed as the sum of Reported and Recorded. For participation years in which Expected could be computed, the reporting deviation (RepDev) was defined as the difference between the number of SRC expected to be reported at a baseline exam, based on the previous participation year's data, and the number actually reported by the athlete or medical record during that exam. A second deviation (RepDevSO) was computed only for those SRC that occurred while the participant was enrolled in the current study. One-way intraclass correlations (ICC) were computed between the expected and reported numbers of SRC.
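The one-way ICC used here can be computed directly from the between- and within-subject mean squares: ICC(1) = (MSB − MSW) / (MSB + (k − 1) · MSW). A minimal sketch with synthetic agreement data (the formula is standard; the counts are illustrative):

```python
import numpy as np

# Minimal one-way random-effects ICC, i.e. ICC(1) =
# (MSB - MSW) / (MSB + (k - 1) * MSW). The SRC counts below are
# synthetic; only the use of a one-way ICC is taken from the abstract.
def icc_oneway(ratings):
    """ratings: (n_subjects, k_ratings) array of repeated counts."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject MS
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between expected and reported counts gives ICC = 1
expected = np.array([0, 1, 2, 1, 0, 3], dtype=float)
agree = np.column_stack([expected, expected])
print(round(icc_oneway(agree), 3))
```

Inconsistent reporting inflates the within-subject mean square and pulls the ICC below 1, which is the sense in which the ICCs reported below quantify self-report consistency.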
Results:
341 athletes had a history of at least one SRC, and 206 of those (60.4%) had a RepDev of 0. The overall ICC for RepDev was 0.761 (95% CI 0.73-0.79). The presence of depression (ICC 0.87, 95% CI 0.79-0.92) and loss of consciousness (ICC 0.80, 95% CI 0.72-0.86) were associated with higher ICCs compared to athletes without these variables. Female athletes demonstrated higher self-report consistency (ICC 0.82, 95% CI 0.79-0.85) compared to male athletes (ICC 0.72, 95% CI 0.68-0.76). Differences in the classification of RepDev according to sex and sport were significant (χ2 = 77.6, df = 56, p = 0.03). The sports with the highest consistency were Women’s Tennis, Men’s Diving, and Men’s Tennis, each with 100% consistency between academic years. Sports with the lowest consistency were Women’s Gymnastics (69%), Men’s Lacrosse (70%), and Football (72%). 96 athletes had at least one study-only SRC in the previous year, and 69 of those (71.9%) had a RepDevSO of 0 (ICC 0.673, 95% CI 0.64-0.71).
Conclusions:
Approximately 40% of athletes do not consistently report their SRC history, potentially further complicating the clinical management of SRC. These findings encourage clinicians to be aware of factors which could influence the reliability of self-reported SRC history.
Cost-effective treatments are needed to reduce the burden of depression. One way to improve the cost-effectiveness of psychotherapy might be to increase session frequency, but keep the total number of sessions constant.
Aim
To evaluate the cost-effectiveness of twice-weekly compared with once-weekly psychotherapy sessions after 12 months, from a societal perspective.
Method
An economic evaluation was conducted alongside a randomised controlled trial comparing twice-weekly versus once-weekly sessions of psychotherapy (cognitive–behavioural therapy or interpersonal psychotherapy) for depression. Missing data were handled by multiple imputation. Statistical uncertainty was estimated with bootstrapping and presented with cost-effectiveness acceptability curves.
Results
Differences between the two groups in depressive symptoms, physical and social functioning, and quality-adjusted life-years (QALY) at 12-month follow-up were small and not statistically significant. Total societal costs in the twice-weekly session group were higher, albeit not statistically significantly so, than in the once-weekly session group (mean difference €2065, 95% CI −686 to 5146). The probability that twice-weekly sessions are cost-effective compared with once-weekly sessions was 0.40 at a ceiling ratio of €1000 per point improvement in Beck Depression Inventory-II score, 0.32 at a ceiling ratio of €50 000 per QALY gained, 0.23 at a ceiling ratio of €1000 per point improvement in physical functioning score and 0.62 at a ceiling ratio of €1000 per point improvement in social functioning score.
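The acceptability probabilities above come from a cost-effectiveness acceptability curve: bootstrap the cost and effect differences, then at each ceiling ratio record the share of replicates with positive net monetary benefit. A hedged sketch, with entirely synthetic data and parameters:

```python
import numpy as np

# Hypothetical sketch of one point on a cost-effectiveness acceptability
# curve (CEAC). At ceiling ratio lambda, a bootstrap replicate counts as
# cost-effective when lambda * mean(d_effect) - mean(d_cost) > 0.
# All data below are synthetic.
rng = np.random.default_rng(1)
n = 100
d_cost = rng.normal(2065, 1500, n)   # per-patient cost difference (EUR)
d_qaly = rng.normal(0.00, 0.02, n)   # per-patient QALY difference

def ceac_probability(d_cost, d_effect, ceiling, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(d_cost)
    idx = rng.integers(0, n, size=(n_boot, n))     # bootstrap resamples
    nmb = ceiling * d_effect[idx].mean(axis=1) - d_cost[idx].mean(axis=1)
    return (nmb > 0).mean()                        # share cost-effective

p = ceac_probability(d_cost, d_qaly, ceiling=50_000)
print(f"P(cost-effective at EUR 50,000/QALY) = {p:.2f}")
```

Sweeping the ceiling ratio over a grid and plotting the resulting probabilities yields the full acceptability curve reported in such evaluations.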
Conclusions
Based on the current results, twice-weekly sessions of psychotherapy for depression are not cost-effective over the long term compared with once-weekly sessions.
Twice weekly sessions of cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT) for major depressive disorder (MDD) lead to less drop-out and quicker and better response compared to once weekly sessions at posttreatment, but it is unclear whether these effects hold over the long run.
Aims
Compare the effects of twice weekly v. weekly sessions of CBT and IPT for depression up to 24 months since the start of treatment.
Methods
Using a 2 × 2 factorial design, this multicentre study randomized 200 adults with MDD to once or twice weekly sessions of CBT or IPT over 16–24 weeks, up to a maximum of 20 sessions. The main outcome, depression severity, was measured with the Beck Depression Inventory-II and the Longitudinal Interval Follow-up Evaluation. Intention-to-treat analyses were conducted.
Results
Compared with patients who received once weekly sessions, patients who received twice weekly sessions showed a significant decrease in depressive symptoms up through month 9, but this effect was no longer apparent at month 24. Patients who received CBT showed a significantly larger decrease in depressive symptoms up to month 24 compared to patients who received IPT, but the between-group effect size at month 24 was small. No differential effects between session frequencies or treatment modalities were found in response or relapse rates.
Conclusions
Although a higher session frequency leads to better outcomes in the acute phase of treatment, the difference in depression severity dissipated over time and there was no significant difference in relapse.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. 
Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
A survey of academic medical-center hospital epidemiologists indicated substantial deviation from Centers for Disease Control and Prevention guidance regarding healthcare providers (HCPs) recovering from coronavirus disease 2019 (COVID-19) returning to work. Many hospitals continue to operate under contingency status and have HCPs return to work earlier than recommended.
Several hypotheses may explain the association between substance use, posttraumatic stress disorder (PTSD), and depression. However, few studies have utilized a large multisite dataset to understand this complex relationship. Our study assessed the relationship between alcohol and cannabis use trajectories and PTSD and depression symptoms across 3 months in recently trauma-exposed civilians.
Methods
In total, 1618 (1037 female) participants provided self-report data on past 30-day alcohol and cannabis use and PTSD and depression symptoms during their emergency department (baseline) visit. We reassessed participants' substance use and clinical symptoms 2, 8, and 12 weeks posttrauma. Latent class mixture modeling determined alcohol and cannabis use trajectories in the sample. Changes in PTSD and depression symptoms were assessed across alcohol and cannabis use trajectories via a mixed-model repeated-measures analysis of variance.
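Latent class mixture modeling of full trajectories requires specialized software. As a simplified, hypothetical stand-in, an EM fit of a two-component Gaussian mixture to per-participant use slopes illustrates the core idea of recovering latent trajectory classes; all data below are synthetic, and the real analysis identified three classes over repeated measures.

```python
import numpy as np

# Simplified stand-in for the latent class step: a 1-D EM algorithm for
# a two-component Gaussian mixture, applied to synthetic per-participant
# use slopes ("stable" vs "increasing"). This is not the model used in
# the study, which fit three classes to repeated-measures trajectories.
rng = np.random.default_rng(3)
slopes = np.concatenate([rng.normal(0.0, 0.5, 120),   # stable users
                         rng.normal(2.0, 0.5, 40)])   # increasing users

def em_gmm_1d(x, iters=200):
    mu = np.array([x.min(), x.max()])        # spread-out initial means
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])                # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return r.argmax(axis=1), mu

labels, mu = em_gmm_1d(slopes)
print(np.sort(np.round(mu, 1)))  # recovered class means
```

Once class membership is estimated, symptom change can be compared across classes, as done here with a mixed-model repeated-measures ANOVA.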
Results
Three trajectory classes (low, high, and increasing use) provided the best model fit for both alcohol and cannabis use. The low alcohol use class exhibited lower PTSD symptoms at baseline than the high use class; the low cannabis use class exhibited lower PTSD and depression symptoms at baseline than the high and increasing use classes; these symptoms increased sharply at week 8 and declined at week 12. Participants who already used alcohol and cannabis exhibited greater PTSD and depression symptoms at baseline that increased at week 8, with a decrease in symptoms at week 12.
Conclusions
Our findings suggest that alcohol and cannabis use trajectories are associated with the intensity of posttrauma psychopathology. These findings could potentially inform the timing of therapeutic strategies.