This article discusses recent moves in political science that emphasise predicting future events rather than theoretically explaining past ones or understanding empirical generalisations. Two types of prediction are defined: pragmatic and scientific. The main aim of political science is explanation, which requires scientific prediction. Scientific prediction does not necessarily entail pragmatic prediction, nor does it necessarily refer to the future, though both are desiderata for political science. Pragmatic prediction is not necessarily explanatory, and emphasising it will lead to disappointment: it will not always show how to intervene to change future outcomes, and policy makers are likely to find its time-scale frustrating.
Older adults with treatment-resistant depression (TRD) benefit more from treatment augmentation than from switching antidepressants. Identifying moderators of this differential benefit would support personalised medicine.
Aims
Our objective was to test whether age, executive dysfunction, comorbid medical burden, comorbid anxiety or the number of previous adequate antidepressant trials could moderate the superiority of augmentation over switching. A significant moderator would influence the differential effect of augmentation versus switching on treatment outcomes.
Method
We performed a preplanned moderation analysis of data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) randomised controlled trial (N = 742). Participants were 60 years of age or older with TRD. They were randomised either to (a) antidepressant augmentation with aripiprazole (2.5–15 mg), bupropion (150–450 mg) or lithium (target serum drug level 0.6 mmol/L) or to (b) a switch to bupropion (150–450 mg) or nortriptyline (target serum drug level 80–120 ng/mL). Treatment duration was 10 weeks. The two main outcomes of this analysis were (a) symptom improvement, defined as the change in Montgomery–Åsberg Depression Rating Scale (MADRS) score from baseline to week 10, and (b) remission, defined as a MADRS score of 10 or less at week 10.
Results
Of the 742 participants, 480 were randomised to augmentation and 262 to switching. The number of adequate previous antidepressant trials was a significant moderator of depression symptom improvement (b = −1.6, t = −2.1, P = 0.033, 95% CI [−3.0, −0.1], where b is the interaction coefficient (i.e. effect size) and t is its t-statistic). The effect was similar across all augmentation strategies. No other putative moderator was significant.
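In regression terms, the moderation test described above reduces to an interaction between treatment arm and the putative moderator. The following is a minimal sketch, not the trial's actual analysis code; the file and column names (optimum_outcomes.csv, madrs_change, augment, n_prior_trials) are hypothetical.

```python
# Minimal sketch of the moderation test: does the number of previous
# adequate antidepressant trials moderate the augmentation-vs-switch
# effect on MADRS change? File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("optimum_outcomes.csv")  # hypothetical data file

# madrs_change: week-10 minus baseline MADRS score (negative = improvement)
# augment:      1 = augmentation arm, 0 = switch arm
# n_prior_trials: number of previous adequate antidepressant trials
model = smf.ols("madrs_change ~ augment * n_prior_trials", data=df).fit()

# The augment:n_prior_trials coefficient corresponds to the moderator
# effect the abstract reports as b = -1.6 (t = -2.1, P = 0.033).
print(model.summary().tables[1])
```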
Conclusions
Augmenting was superior to switching antidepressants only in older patients with fewer than three previous antidepressant trials. This suggests that other intervention strategies should be considered following three or more trials.
This chapter discusses the growing body of research on the social cognitive processes jurors use when making verdict or sentencing decisions, including empirical findings on the priming of ideas and attitudes and on impression formation. The chapter then discusses heuristics, or cognitive “shortcuts,” that jurors employ during decision-making in trials and deliberations; for instance, jurors tend to over-rely on dispositional attributions, stereotypes, and schemas. Cognitive biases to which jurors are prone, such as hindsight bias, outcome bias, and counterfactual thinking, are also discussed in the context of evaluating evidence and reaching verdicts, along with the potential of debiasing techniques. Finally, the chapter examines jurors’ biases and prejudices regarding factors such as race, gender, and religion, and how these relate to decision-making. It also addresses areas of social cognition that have not yet been explored in current research and provides recommendations for future directions.
Translation is the process of turning observations in the research laboratory, clinic, and community into interventions that improve people’s health. The Clinical and Translational Science Awards (CTSA) program is a National Center for Advancing Translational Sciences (NCATS) initiative to advance translational science and research. Currently, 64 “CTSA hubs” exist across the nation. Since 2006, the Houston-based Center for Clinical Translational Sciences (CCTS) has assembled a well-integrated, high-impact hub in Texas that includes six partner institutions within the state, encompassing ∼23,000 sq. miles and over 16 million residents. To achieve the NCATS goal of “more treatments for all people more quickly,” the CCTS promotes diversity and inclusion by integrating underrepresented populations into clinical studies, workforce training, and career development. In May 2023, we submitted the UM1 application and six “companion” proposals: K12, R25, T32-Predoctoral, T32-Postdoctoral, and RC2 (two applications). In October 2023, we received priority scores for the UM1 (22), K12 (25), T32-Predoctoral (20), and T32-Postdoctoral (23), which historically fall within the NCATS funding range. This report describes the grant preparation and submission approach, coupled with data from an internal survey designed to assimilate feedback from principal investigators, writers, reviewers, and administrative specialists. Herein, we share the challenges faced, the approaches developed, and the lessons learned.
Former professional American football players have a high relative risk for neurodegenerative diseases such as chronic traumatic encephalopathy (CTE). Interpreting low cognitive test scores in this population is occasionally complicated by performance on validity testing. Neuroimaging biomarkers may help inform whether a neurodegenerative disease is present in these situations. We report three cases of retired professional American football players who completed comprehensive neuropsychological testing but “failed” performance validity tests, and who underwent multimodal neuroimaging (structural MRI, Aβ-PET, and tau-PET).
Participants and Methods:
Three cases were identified from the Focused Neuroimaging for the Neurodegenerative Disease Chronic Traumatic Encephalopathy (FIND-CTE) study, an ongoing multimodal imaging study of retired National Football League players with complaints of progressive cognitive decline, conducted at Boston University and the UCSF Memory and Aging Center. Participants were relatively young (age range 55–65), had 16 or more years of education, and two identified as Black/African American. Raw neuropsychological test scores were converted to demographically adjusted z-scores. Testing included standalone (Test of Memory Malingering; TOMM) and embedded (reliable digit span; RDS) performance validity measures. Validity cutoffs were TOMM Trial 2 < 45 and RDS < 7. Structural MRIs were interpreted by trained neurologists. Aβ-PET with florbetapir was used to quantify cortical Aβ deposition as global Centiloids (0 = mean cortical signal for a young, cognitively normal, Aβ-negative individual in their 20s; 100 = mean cortical signal for a patient with mild-to-moderate Alzheimer’s disease dementia). Tau-PET was performed with MK-6240 and first quantified as a standardized uptake value ratio (SUVR) map. The SUVR map was then converted to a w-score map representing signal intensity relative to a sample of demographically matched healthy controls.
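For readers unfamiliar with w-scores, the conversion standardizes a patient's signal against control data, and percentiles follow from the standard normal CDF, which matches the values reported in the Results below. This is a minimal sketch assuming plain standardization; published pipelines typically use residuals from a covariate-adjusted regression instead.

```python
# Sketch of the w-score conversion: how far a patient's tau-PET SUVR
# sits from demographically matched controls, in control-SD units.
# Plain standardization is assumed; covariate-adjusted residuals are
# the more common choice in practice.
import numpy as np
from scipy.stats import norm

def w_score_map(patient_suvr, control_suvr):
    """patient_suvr: 1-D array of ROI/voxel SUVRs for one patient.
    control_suvr: 2-D array (controls x ROIs/voxels) of control SUVRs."""
    mu = control_suvr.mean(axis=0)
    sd = control_suvr.std(axis=0, ddof=1)
    return (patient_suvr - mu) / sd

# A w-score maps to a percentile through the standard normal CDF,
# matching the reported values (w = 0.59 -> ~72nd percentile;
# w = 1.90 -> ~97th percentile).
print(round(norm.cdf(0.59) * 100))  # 72
print(round(norm.cdf(1.90) * 100))  # 97
```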
Results:
All three cases performed in the average range on a word-reading-based estimate of premorbid intellect. A contribution of Alzheimer’s disease pathology was ruled out in each case based on Centiloid quantifications < 0. All cases scored below cutoff on TOMM Trial 2 (Case #1 = 43, Case #2 = 42, Case #3 = 19), and Case #3 also scored well below the RDS cutoff (2). Each case had multiple cognitive scores below expectations (z < −2.0), most consistently in the memory, executive function, and processing speed domains. For Case #1, MRI revealed mild atrophy in dorsal fronto-parietal and medial temporal lobe (MTL) regions and mild periventricular white matter disease; tau-PET showed MTL tau burden modestly elevated relative to controls (regional w-score = 0.59, 72nd percentile). For Case #2, MRI revealed cortical atrophy, mild hippocampal atrophy, and a microhemorrhage, with no evidence of meaningful tau-PET signal. For Case #3, MRI showed cortical atrophy and severe white matter disease, and tau-PET revealed significantly elevated MTL tau burden relative to controls (w-score = 1.90, 97th percentile) as well as focal high signal in the dorsal frontal lobe (overall frontal region w-score = 0.64, 74th percentile).
Conclusions:
Low scores on performance validity tests complicate the interpretation of the severity of cognitive deficits, but do not negate the presence of true cognitive impairment or an underlying neurodegenerative disease. In the rapidly developing era of biomarkers, neuroimaging tools can supplement neuropsychological testing to help inform whether cognitive or behavioral changes are related to a neurodegenerative disease.
Individuals living with HIV may experience cognitive difficulties or marked declines known as HIV-Associated Neurocognitive Disorder (HAND). Because cognitive difficulties have been associated with worse outcomes for people living with HIV, accurate cognitive screening and identification are critical. One potentially sensitive but underutilized marker of cognitive impairment is intra-individual variability (IIV): the dispersion of an individual's scores across tasks in neuropsychological assessment. In individuals living with HIV, greater cognitive IIV has been associated with cortical atrophy, poorer cognitive functioning, more rapid decline, and greater difficulties in daily functioning. Studies examining the use of IIV in clinical neuropsychological testing are limited, and few have examined IIV in the context of a single neuropsychological battery designed for culturally diverse or at-risk populations. To address these gaps, this study aimed to examine the IIV profiles of individuals living with HIV and/or who inject drugs, using the Neuropsi, a standardized neuropsychological instrument for Spanish-speaking populations.
Participants and Methods:
Spanish-speaking adults residing in Puerto Rico (n = 90) who are HIV positive and inject drugs (HIV+I), HIV negative and inject drugs (HIV-I), HIV positive and do not inject drugs (HIV+), or healthy controls (HC) completed the Neuropsi battery as part of a larger research protocol. The Neuropsi produces three index scores representing the cognitive domains of memory, attention/memory, and attention/executive functioning. Total-battery and within-index IIV were calculated by dividing the standard deviation of T-scores by mean performance, yielding a coefficient of variation (CoV). Group differences in overall test battery mean CoV (OTBMCoV) were investigated. To examine unique profiles of index-specific IIV, a cluster analysis was performed for each group.
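The CoV computation itself is simple; a minimal sketch with hypothetical score profiles illustrates how greater scatter across subtests yields a higher CoV.

```python
# Sketch of the IIV metric: the standard deviation of a participant's
# T-scores divided by their mean performance (coefficient of variation).
# The example profiles are hypothetical.
import numpy as np

def cov_iiv(t_scores):
    """t_scores: T-scores across subtests for one participant."""
    t = np.asarray(t_scores, dtype=float)
    return t.std(ddof=1) / t.mean()

# Greater scatter across subtests yields a higher CoV.
flat_profile = [48, 50, 52, 49, 51]
scattered_profile = [30, 62, 45, 70, 38]
print(round(cov_iiv(flat_profile), 2))       # 0.03
print(round(cov_iiv(scattered_profile), 2))  # 0.34
```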
Results:
Results of a one-way ANOVA indicated significant between-group differences on OTBMCoV (F[3,86]=6.54, p<.001). Post-hoc analyses revealed that the HIV+I (M=.55, SE=.07, p=.003), HIV-I (M=.50, SE=.03, p=.001), and HIV+ (M=.48, SE=.02, p=.002) groups had greater OTBMCoV than the HC group (M=.30, SE=.02). To better understand sources of IIV within each group, cluster analysis of index-specific IIV was conducted. For the HIV+ group, 3 distinct clusters were extracted: 1. high IIV in attention/memory and attention/executive functioning (n=3, 8%); 2. elevated memory IIV (n=21, 52%); 3. low IIV across all indices (n=16, 40%). For the HIV-I group, 2 distinct clusters were extracted: 1. high IIV across all 3 indices (n=7, 24%) and 2. low IIV across all 3 indices (n=22, 76%). For the HC group, 3 distinct clusters were extracted: 1. very low IIV across all 3 indices (n=5, 36%); 2. elevated memory IIV (n=6, 43%); 3. elevated attention/executive functioning IIV with very low attention/memory and memory IIV (n=3, 21%). The sample size of the HIV+I group was insufficient to extract clusters.
Conclusions:
Current findings support IIV on the Neuropsi test battery as a clinically sensitive marker of cognitive impairment in Spanish-speaking individuals living with HIV or who inject drugs. Furthermore, the distinct IIV cluster types identified across groups can help to clarify specific sources of variability. Implications for clinical assessment, prognosis, and etiological considerations are discussed.
Injection drug use is a significant public health crisis with adverse health outcomes, including increased risk of human immunodeficiency virus (HIV) infection. Comorbidity of HIV and injection drug use is highly prevalent in the United States and disproportionately elevated in United States territories such as Puerto Rico. While both HIV status and injection drug use are independently known to be associated with cognitive deficits, the interaction of these effects remains largely unknown. The aim of this study was to determine how HIV status and injection drug use are related to cognitive functioning in a group of Puerto Rican participants. Additionally, we investigated the degree to which type and frequency of substance use predict cognitive abilities.
Participants and Methods:
Ninety-six Puerto Rican adults completed the Neuropsi Attention and Memory-3rd Edition battery for Spanish-speaking participants. Injection substance use over the previous 12 months was also assessed via clinical interview. Participants were categorized into four groups based on HIV status and injection substance use in the last 30 days (HIV+/injector, HIV+/non-injector, HIV−/injector, HIV−/non-injector). One-way analysis of variance (ANOVA) was conducted to determine differences between groups on each index of the Neuropsi battery (Attention and Executive Function; Memory; Attention and Memory). Multiple linear regression was used to determine whether type and frequency of substance use predicted performance on these indices while accounting for HIV status.
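A minimal sketch of these two analyses follows; the data file and column names (neuropsi_scores.csv, attention_memory, memory, group, heroin_freq, hiv_status) are hypothetical stand-ins rather than the study's actual code.

```python
# Sketch of the analysis pipeline: a one-way ANOVA across the four
# HIV-by-injection groups, then a regression of an index score on
# substance-use frequency with HIV status as a potential moderator.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("neuropsi_scores.csv")  # hypothetical data file

# One-way ANOVA: does the Attention and Memory index differ by group?
aov = anova_lm(smf.ols("attention_memory ~ C(group)", data=df).fit())
print(aov)

# Regression: heroin-use frequency as a categorical predictor, with an
# interaction term to test whether HIV status moderates the effect.
reg = smf.ols("memory ~ C(heroin_freq) * C(hiv_status)", data=df).fit()
print(reg.summary().tables[1])
```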
Results:
The one-way ANOVAs revealed significant differences (p’s < 0.01) between the healthy control group and all other groups across all indices; no significant differences were observed among the other groups. Injection drug use, regardless of substance, was associated with lower combined attention and memory performance relative to injecting less than monthly (monthly: p = 0.04; 2–3x daily: p < 0.01; 4–7x daily: p = 0.02; 8+ times daily: p < 0.01). Both minimal and heavy daily use predicted poorer memory performance (p = 0.02 and p = 0.01, respectively). Heavy heroin use predicted poorer attention and executive functioning (p = 0.04). Heroin use also predicted lower memory performance when used monthly (p = 0.049) and daily or almost daily (2–6x weekly: p = 0.04; 4–7x daily: p = 0.04). Finally, moderate injection of heroin predicted lower attention and memory scores (weekly: p = 0.04; 2–6x weekly: p = 0.048). Heavy combined heroin and cocaine use predicted worse memory performance (p = 0.03) and combined attention and memory (p = 0.046). HIV status was not a moderating factor in any circumstance.
Conclusions:
As predicted, residents of Puerto Rico who do not inject substances and are HIV-negative performed better in the domains of memory, attention, and executive function than those living with HIV and/or injecting substances. There were no significant differences among the affected groups in cognitive ability. As expected, daily injection of substances predicted worse performance on memory tasks. Heavy heroin use predicted worse performance on executive function and memory tasks, while heroin-only and combined heroin and cocaine use predicted worse memory performance. Overall, the type and frequency of substance use were more predictive of cognitive functioning than HIV status.
Sleep problems, which are associated with poor mental health and academic outcomes, may have been exacerbated by the COVID-19 pandemic.
Aims
To describe sleep in undergraduate students during the COVID-19 pandemic.
Method
This longitudinal analysis included data from 9523 students over 4 years (2018–2022), associated with different pandemic phases. Students completed a biannual survey assessing risk factors, mental health symptoms and lifestyle, using validated measures. Sleep was assessed with the Sleep Condition Indicator (SCI-8). Propensity weights and multivariable log-binomial regressions were used to compare sleep in four successive first-year cohorts. Linear mixed-effects models were used to examine changes in sleep over academic semesters and years.
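A minimal sketch of the two models named above follows; the file and column names (student_sleep_survey.csv, probable_insomnia, entry_cohort, sci8, semester, student_id, propensity_weight) are hypothetical, and this is illustrative rather than the study's analysis code.

```python
# Sketch of the cohort comparison and the longitudinal model.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("student_sleep_survey.csv")  # hypothetical data file

# Log-binomial regression: prevalence ratios for probable insomnia
# across first-year entry cohorts, carrying the propensity weights.
# (Log-link binomial models can fail to converge; a Poisson model with
# robust errors is a common fallback.)
logbin = smf.glm(
    "probable_insomnia ~ C(entry_cohort)",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
    var_weights=df["propensity_weight"],
).fit()
print(logbin.summary())

# Linear mixed-effects model: SCI-8 across semesters with random
# intercepts per student to handle repeated measures.
lmm = smf.mixedlm("sci8 ~ semester", df, groups=df["student_id"]).fit()
print(lmm.summary())
```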
Results
There was an overall decrease in average SCI-8 scores, indicating worsening sleep across academic years (average change −0.42 per year; P-trend < 0.001), and an increase in probable insomnia at university entry (range 18.1–29.7%; P-trend < 0.001) before and up to the peak of the pandemic. Sleep improved somewhat in autumn 2021, when restrictions loosened. Students commonly reported daytime consequences of poor sleep, including problems with mood, energy, and relationships (36–48%) and with concentration, productivity, and daytime sleepiness (54–66%). There was a consistent pattern of worsening sleep over the academic year. Probable insomnia was associated with increased cannabis use and passive screen time, and with reduced recreation and exercise.
Conclusions
Sleep difficulties are common and persistent in students, were amplified by the pandemic, and worsen over the academic year. Given the importance of sleep for well-being and academic success, a preventive focus on sleep hygiene, healthy lifestyle and low-intensity sleep interventions seems justified.
Two independent temporal-spatial clusters of hospital-onset Rhizopus infections were evaluated using whole-genome sequencing (WGS). Phylogenetic analysis confirmed that isolates within each cluster were unrelated despite epidemiological suspicion of outbreaks. The ITS1 region alone was insufficient for accurate analysis. WGS has utility for rapid rule-out of suspected nosocomial Rhizopus outbreaks.
This paper explores the relationship between unionism and quits. Three channels of influence are investigated: unions → collective voice → quits; unions → training → quits; unions → job dissatisfaction → quits. Estimates of each model, using data from the Australian Longitudinal Survey, indicate that unions reduce the probability of quitting by 0.5 percentage points via the training effect and by 4 percentage points via the collective-voice effect, while increasing it by 1.2 percentage points via the job-dissatisfaction effect. The net effect of unions is therefore to reduce the probability of quitting by around 3 percentage points.
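The net figure is simply the sum of the three channel effects; stated as a worked sum (negative values indicate a reduction in the quit probability, in percentage points):

```latex
\[
\Delta\Pr(\text{quit})
  = \underbrace{-0.5}_{\text{training}}
    \underbrace{-\,4.0}_{\text{collective voice}}
    \underbrace{+\,1.2}_{\text{job dissatisfaction}}
  = -3.3 \approx -3 \text{ percentage points.}
\]
```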
During the past decade—more precisely, during the last five to seven years—the increased use of urban guerrilla warfare and terrorism has characterized the activities of many revolutionary groups in the less developed world. Highlighted by the Olympic assassinations of 1972, this phenomenon has also been evident in various African and Asian states. It is in Latin America, however, that the change from the traditional rural base for guerrilla operations to an urban environment has been most pronounced. The years from 1962 to 1967 saw many Latin American insurgents copying the Cuban revolutionary model, with its emphasis on rural guerrilla operations and the peasantry as the ultimate motive force, but recent years have seen an equally strong pull toward either purely urban insurgency or a more balanced strategy according equal importance to both rural and urban activities. In either case, the identifiable shift away from a totally rural guerrilla strategy for most Latin American revolutionary groups seems an established fact.
Urban insurgency has been used with increasing frequency and effectiveness in many areas of the developed and less developed world during the past decade. In Latin America, this trend toward expanded urban guerrilla warfare has been most pronounced in Brazil, Uruguay, and Argentina. In these three nations, revolutionary forces have completely rejected the concept of the primacy of guerrilla activities based in the countryside, a theory adapted to the Latin American environment by Cuba's Ernesto “Che” Guevara and the French Marxist Régis Debray. Instead, attention has been focused on organizing and developing guerrilla and terrorist operations in such population centers as Rio de Janeiro, São Paulo, Montevideo, Buenos Aires, Rosario, and Córdoba. (For a discussion of factors leading to the development of urban insurgency in Latin America, see “The Urban Guerrilla in Latin America: A Select Bibliography,” LARR 9:1.)
Gastropods are an important component of subtidal Antarctic communities, including in their common associations with macroalgae. Nonetheless, limited data exist detailing their abundance and distribution on macroalgal species. This study documents the abundance and species composition of gastropod assemblages on the two largest, blade-forming Antarctic macroalgae, Himantothallus grandifolius and Sarcopeltis antarctica, sampled across two depths (9 and 18 m) at four sites for each species off Anvers Island, Antarctica. Gastropods were also enumerated on Desmarestia anceps, Desmarestia antarctica and Plocamium sp. but were not included in the main analyses because of small sample sizes. There were major differences between the gastropod assemblages on deep vs shallow H. grandifolius and S. antarctica, with much higher numbers of individuals and greater numbers of gastropod species at the greater depth. Differences between the gastropod assemblages on H. grandifolius and S. antarctica across sampling sites were apparent in non-parametric, multivariate analyses, although depth contributed more than site to these differences. Within common sites, assemblages on H. grandifolius were significantly different from those on S. antarctica at 18 m depth but not at 9 m depth, indicating that the host species can be, but is not always, more important than site in influencing the gastropod assemblages.
Methicillin-resistant Staphylococcus aureus (MRSA) is an important pathogen in neonatal intensive care units (NICUs), conferring significant morbidity and mortality.
Objective:
Improving our understanding of MRSA transmission dynamics, especially among high-risk patients, is an infection prevention priority.
Methods:
We investigated a cluster of clinical MRSA cases in the NICU using a combination of epidemiologic review and whole-genome sequencing (WGS) of isolates from clinical and surveillance cultures obtained from patients and healthcare personnel (HCP).
Results:
Phylogenetic analysis identified 2 genetically distinct clades and revealed multiple silent-transmission events between HCP and infants. The predominant outbreak strain harbored multiple virulence factors. Epidemiologic investigation and genomic analysis identified an HCP colonized with the dominant MRSA outbreak strain who had cared for most of the NICU patients infected or colonized with the same strain, including 1 NICU patient with severe infection 7 months before the described outbreak. These results guided implementation of infection prevention interventions that prevented further transmission events.
Conclusions:
Silent transmission of MRSA between HCP and NICU patients likely contributed to a NICU outbreak involving a virulent MRSA strain. WGS enabled data-driven decision making to inform implementation of infection control policies that mitigated the outbreak. Prospective WGS coupled with epidemiologic analysis can be used to detect transmission events and prompt early implementation of control strategies.
Synthetic peptide and peptido-mimetic filaments and tubes represent a diverse class of nanomaterials with a broad range of potential applications, such as drug delivery, vaccine development, synthetic catalyst design, encapsulation, and energy transduction. The structures of these filaments comprise supramolecular polymers based on helical arrangements of subunits that can be derived from self-assembly of monomers based on diverse structural motifs. In recent years, structural analyses of these materials at near-atomic resolution (NAR) have yielded critical insights into the relationship between sequence, local conformation, and higher-order structure and morphology. This structural information offers the opportunity to develop new tools that facilitate the predictable and reproducible de novo design of synthetic helical filaments. However, these studies have also revealed several significant impediments to such design – most notably, the common occurrence of structural polymorphism arising from the lability of helical symmetry in structural space. This article summarizes the current state of knowledge on the structures of designed peptide and peptido-mimetic filamentous assemblies, with a focus on structures that have been solved to NAR for which reliable atomic models are available.
The aim of the study was to assess occupational health effects among first responders 1 month after a natural gas pipeline explosion.
Methods:
First responders to a pipeline explosion in Kentucky were interviewed about pre- and post-response health symptoms, post-response health care, and physical exertion and personal protective equipment (PPE) use during the response. Logistic regression was used to examine associations between several risk factors and development of post-response symptoms.
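The adjusted odds ratios reported in the Results below are exponentiated logistic-regression coefficients. A minimal sketch with hypothetical file and column names (responder_survey.csv, upper_resp_symptoms, hard_exertion, used_ppe); the study's actual covariate set is not specified here.

```python
# Sketch of the logistic-regression analysis: odds of new or worsening
# upper respiratory symptoms by exertion level, adjusted for PPE use.
# File, column, and covariate names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responder_survey.csv")  # hypothetical data file

model = smf.logit(
    "upper_resp_symptoms ~ hard_exertion + used_ppe", data=df
).fit()

# Adjusted odds ratios and 95% CIs are the exponentiated coefficients;
# the Results report aOR 2.99 (95% CI: 1.25-7.50) for hard exertion.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```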
Results:
Among 173 first responders involved, 105 were interviewed (firefighters [58%], emergency medical services [19%], law enforcement [10%], and others [12%]). Half (53%) reported at least 1 new or worsening symptom, including upper respiratory symptoms (39%), headache (18%), eye irritation (17%), and lower respiratory symptoms (16%). The majority (79%) of symptomatic responders did not seek post-response care. Compared with light-exertion responders, hard-exertion responders (48%) had significantly greater odds of upper respiratory symptoms (aOR: 2.99, 95% CI: 1.25–7.50). Forty-four percent of responders, and 77% of non-firefighter responders, reported not using any PPE.
Conclusions:
Upper respiratory symptoms were common among first responders of a natural gas pipeline explosion and associated with hard-exertion activity. Emergency managers should ensure responders are trained in, equipped with, and properly use PPE during these incidents and encourage responders to seek post-response health care when needed.
Objective:
To describe epidemiologic and genomic characteristics of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak in a large skilled-nursing facility (SNF), and the strategies that controlled transmission.
Design, setting, and participants:
This cohort study was conducted during March 22–May 4, 2020, among all staff and residents at a 780-bed SNF in San Francisco, California.
Methods:
Contact tracing and symptom screening guided targeted testing of staff and residents; respiratory specimens were also collected through serial point-prevalence surveys (PPSs) in units with confirmed cases. Cases were confirmed by real-time reverse transcription–polymerase chain reaction testing for SARS-CoV-2, and whole-genome sequencing (WGS) was used to characterize viral isolate lineages and relatedness. Infection prevention and control (IPC) interventions included restricting from work any staff who had close contact with a confirmed case, restricting movement between units, implementing surgical face masking facility-wide, and using recommended PPE (ie, isolation gown, gloves, N95 respirator, and eye protection) for clinical interactions in units with confirmed cases.
Results:
Of 725 staff and residents tested through targeted testing and serial PPSs, 21 (3%) were SARS-CoV-2 positive: 16 (76%) staff and 5 (24%) residents. Fifteen cases (71%) were linked to a single unit. Targeted testing identified 17 cases (81%), and PPSs identified 4 cases (19%). Most cases (71%) were identified before IPC interventions could be implemented. WGS was performed on SARS-CoV-2 isolates from 4 staff and 4 residents: 5 were of Santa Clara County lineage and the other 3 were of distinct lineages.
Conclusions:
Early implementation of targeted testing, serial PPSs, and multimodal IPC interventions limited SARS-CoV-2 transmission within the SNF.
Magnetic-resonance-guided focused ultrasound surgery (MRgFUS) is the only truly non-invasive procedure for the treatment of uterine fibroids. In MRgFUS, high-frequency ultrasound beams target specific tissue, in this case fibroids, raising its temperature and destroying it by coagulative necrosis. The resultant fibroid shrinkage occurs under the guidance of real-time magnetic resonance imaging (MRI), ensuring accuracy and preservation of the healthy surrounding myometrium [1]. This chapter begins with an overview of the history of MRgFUS, then examines the technical parameters of the procedure and the measurement of success, discusses patient eligibility and selection, preparation for the procedure, recovery, and potential complications, and finally considers the outlook for fertility and other outcomes.
One of the risks of electronic power morcellation, central to the morcellation debate, is the potential spread of malignant uterine tissue. Uterine cancer is the most common gynaecologic cancer in the United States, with an estimated 49,560 cases and 8,190 deaths in 2013. Uterine sarcomas arise from the mesodermal tissues of the uterine body, account for 3% of all uterine cancers, and occur in 3.3 cases per 100,000 women [1]. Leiomyosarcoma (LMS) represents 40% of all uterine sarcomas and 2% of all uterine malignancies, with an estimated annual incidence of 0.64 per 100,000 women [2]. It can present at any age, but most commonly between 45 and 55 years, and its prevalence increases by 10% in patients over 60 years old.
This cross-sectional study aimed to characterize the early features and context of bone health impairment in clinically well children with Fontan circulation.
Methods:
We enrolled 10 clinically well children with Fontan palliation (operated >5 years before study entry, Tanner stage ≤3, age 12.1 ± 1.77 years, 7 males) and 11 healthy controls (age 12.0 ± 1.45 years, 9 males) at two children’s hospitals. All participants underwent peripheral quantitative CT. For the Fontan group, we obtained clinical characteristics, NYHA class, cardiac index by MRI, dual-energy x-ray absorptiometry, and biochemical studies. Linear regression was used to compare radius and tibia peripheral quantitative CT measures between Fontan patients and controls.
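A minimal sketch of the adjusted comparison described above follows; the file, column, and covariate names (pqct_measures.csv, fontan, trabecular_bmd, cortical_thickness, bone_strength_index, age, sex) are hypothetical assumptions, not the study's actual code.

```python
# Sketch of the adjusted group comparison: regress each radial pQCT
# measure on group (Fontan vs control) plus covariates. File, column,
# and covariate names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pqct_measures.csv")  # hypothetical data file

# fontan: 1 = Fontan patient, 0 = healthy control
for measure in ["trabecular_bmd", "cortical_thickness", "bone_strength_index"]:
    fit = smf.ols(f"{measure} ~ fontan + age + C(sex)", data=df).fit()
    # The fontan coefficient is the adjusted mean difference, as in the
    # reported -30.13 mg/cm3 for radial trabecular bone mineral density.
    print(measure, round(fit.params["fontan"], 2), round(fit.pvalues["fontan"], 3))
```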
Results:
All Fontan patients were clinically well (NYHA class 1 or 2, cardiac index 4.85 ± 1.51 L/min/m²) and without significant comorbidities. Adjusted trabecular bone mineral density, cortical thickness, and bone strength index at the radius were significantly decreased in Fontan patients compared with controls, with mean differences of −30.13 mg/cm³ (p = 0.041), −0.31 mm (p = 0.043), and −6.65 mg²/mm⁴ (p = 0.036), respectively. No differences were found for tibial measures. In Fontan patients, the mean height-adjusted lumbar bone mineral density and total-body-less-head z-scores were −0.46 ± 1.1 and −0.63 ± 1.1, respectively: below average but within the normal range for age and sex.
Conclusions:
In a clinically well Fontan cohort, we found significant bone deficits by peripheral quantitative CT in the radius but not the tibia, suggesting non-weight-bearing bones may be more vulnerable to the unique haemodynamics of the Fontan circulation.