Three decades after the initial five-year deadline for compliance, federal agencies and museums have once more been called to account for their failure to return Ancestors and cultural items to Tribal Nations under the Native American Graves Protection and Repatriation Act of 1990 (NAGPRA). In April 2024 more than 70 practitioners collaborated in forums and paper and poster sessions to produce the first ever “Day of NAGPRA” at the 89th Annual Meeting of the Society for American Archaeology in New Orleans. The overwhelming success of this effort is as clear a barometer as any for the current need in the discipline for more conversation, better resources, increased opportunities, and—above all—the chance at a truly collaborative push for a complete return of all Ancestors and their belongings to their communities. In this article, we set up our thematic issue by introducing readers to the various contributions concerning duty of care, education, and policy implementation inspired and informed by the “Day of NAGPRA.”
The Hector Galaxy Survey is a new optical integral field spectroscopy (IFS) survey currently using the Anglo-Australian Telescope to observe up to 15 000 galaxies at low redshift ($z \lt 0.1$). The Hector instrument employs 21 optical fibre bundles feeding into two double-beam spectrographs, AAOmega and the new Spector spectrograph, to enable wide-field multi-object IFS observations of galaxies. To efficiently process the survey data, we adopt the data reduction pipeline developed for the SAMI Galaxy Survey, with significant updates to accommodate Hector’s dual-spectrograph system. These enhancements address key differences in spectral resolution and other instrumental characteristics relative to SAMI and are specifically optimised for Hector’s unique configuration. We introduce a two-dimensional arc fitting approach that reduces the root-mean-square (RMS) velocity scatter by a factor of 1.2–3.4 compared to fitting arc lines independently for each fibre. The pipeline also incorporates detailed modelling of chromatic optical distortion in the wide-field corrector, to account for wavelength-dependent spatial shifts across the focal plane. We assess data quality through a series of validation tests, including wavelength solution accuracy (1.2–2.7 km s$^{-1}$ RMS), spectral resolution (FWHM of 1.2–1.4 Å for Spector), throughput characterisation, astrometric precision ($\lesssim$ 0.03 arcsec median offset), sky subtraction residuals (1–1.6% median continuum residual), and flux calibration stability (4% systematic offset when compared to Legacy Survey fluxes). We demonstrate, with examples, that Hector delivers high-fidelity, science-ready datasets supporting robust measurements of galaxy kinematics, stellar populations, and emission-line properties. Additionally, we address systematic uncertainties identified during the data processing and propose future improvements to enhance the precision and reliability of upcoming data releases.
This work establishes a robust data reduction framework for Hector, delivering high-quality data products that support a broad range of extragalactic studies.
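The benefit of the joint two-dimensional arc fit can be illustrated with a toy sketch: rather than solving an independent wavelength polynomial for each fibre, a single smooth surface λ(pixel, fibre) is fit to all arc-line centroids at once, so each fibre borrows strength from its neighbours. The basis, polynomial degrees, and simulated data below are illustrative assumptions, not the Hector pipeline's actual model.

```python
import numpy as np

def design(pix, fib, deg_x, deg_y):
    # Polynomial cross-terms pix**i * fib**j on coordinates normalised to [-1, 1]
    return np.column_stack([pix**i * fib**j
                            for i in range(deg_x + 1)
                            for j in range(deg_y + 1)])

# Toy arc data: 13 fibres x 8 arc lines, wavelength solution varying smoothly with fibre
rng = np.random.default_rng(0)
fib = np.repeat(np.linspace(-1, 1, 13), 8)      # fibre position (normalised)
pix = np.tile(np.linspace(-1, 1, 8), 13)        # arc-line pixel position (normalised)
true = 6500 + 400 * pix + 5 * pix**2 + 2.0 * fib * pix
wave = true + rng.normal(0, 0.05, true.size)    # measured centroids with noise

# One joint least-squares fit over all fibres instead of 13 independent 1D fits
A = design(pix, fib, deg_x=2, deg_y=1)
coef, *_ = np.linalg.lstsq(A, wave, rcond=None)
rms = np.sqrt(np.mean((wave - A @ coef) ** 2))
print(f"joint 2D fit RMS residual: {rms:.3f} A")
```

With only 8 arc lines per fibre, per-fibre fits are noise-limited; the joint fit constrains a handful of surface coefficients with all 104 centroids, which is the intuition behind the reported factor of 1.2–3.4 reduction in RMS velocity scatter.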
Can observing opposing partisans engage in dialogue depolarize Americans at scale? Partisan animosity poses a challenge to democracy in the United States. Direct intergroup contact interventions have shown promise in reducing partisan polarization, but are costly, time-consuming, and sensitive to subtle changes in implementation. Vicarious intergroup contact—observing co-partisans engage with outparty members—offers a possible solution to the drawbacks of direct contact, and could potentially depolarize Americans quickly and at scale. We test this proposition using a pre-registered, placebo-controlled trial with a nationally representative sample of Americans. Using both attitudinal and behavioral measures, we find that a 50-minute documentary showing an intergroup contact workshop reduces polarization and increases interest but not investment in depolarization activities. While we find no evidence that the film mitigates anti-democratic attitudes, it does increase optimism about the survival of democratic institutions. Our findings suggest that vicarious intergroup contact delivered via mass media can be an effective, inexpensive, and scalable way to promote depolarization among Americans.
Diagnosing HIV-Associated Neurocognitive Disorders (HAND) requires attributing neurocognitive impairment and functional decline at least partly to HIV-related brain effects. Depressive symptom severity, whether attributable to HIV or not, may influence self-reported functioning. We examined longitudinal relationships among objective global cognition, depressive symptom severity, and self-reported everyday functioning in people with HIV (PWH).
Methods:
Longitudinal data from 894 PWH were collected at a university-based research center (2002–2016). Participants completed self-report measures of everyday functioning to assess both dependence in instrumental activities of daily living (IADL) and subjective cognitive difficulties at each visit, along with depressive symptom severity (BDI-II). Multilevel modeling examined within- and between-person predictors of self-reported everyday functioning outcomes.
Results:
Participants averaged 6 visits over 5 years. Multilevel regression showed a significant interaction between visit-specific global cognitive performance and mean depressive symptom severity on the likelihood of dependence in IADL (p = 0.04), such that the within-person association between worse cognition and greater likelihood of IADL dependence was strongest among individuals with lower mean depressive symptom severity. In contrast, participants with higher mean depressive symptom severity had a higher likelihood of IADL dependence regardless of cognition. Multilevel modeling of subjective cognitive difficulties showed no significant interaction between global cognition and mean depressive symptom severity (p > 0.05).
Conclusions:
The findings indicate a link between cognitive abilities and IADL dependence in PWH with low to moderate depressive symptoms. However, those with higher depressive symptom severity report IADL dependence regardless of cognitive status. This is clinically significant because everyday functioning is measured through self-report rather than performance-based assessments.
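The within-person/between-person decomposition behind such a multilevel model can be sketched with person-mean centering. The simulated data, variable names, and the linear (rather than logistic) outcome below are illustrative assumptions; the study itself modeled a binary IADL-dependence outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated toy data (not the study data): 100 persons x 6 visits
rng = np.random.default_rng(1)
n, visits = 100, 6
pid = np.repeat(np.arange(n), visits)
cog = rng.normal(50, 10, n)[pid] + rng.normal(0, 3, n * visits)  # visit-level cognition
dep = np.repeat(rng.uniform(0, 30, n), visits)                   # person-mean depression severity
df = pd.DataFrame({"pid": pid, "cog": cog, "dep": dep})

# Person-mean centering separates between-person and within-person cognition effects
df["cog_between"] = df.groupby("pid")["cog"].transform("mean")
df["cog_within"] = df["cog"] - df["cog_between"]

# Continuous stand-in outcome with a built-in cross-level interaction
df["func"] = (0.3 * df["cog_within"] + 0.1 * df["cog_between"] - 0.2 * df["dep"]
              - 0.01 * df["cog_within"] * df["dep"] + rng.normal(0, 1, len(df)))

# Random-intercept model; cog_within:dep is the cross-level interaction of interest
m = smf.mixedlm("func ~ cog_within * dep + cog_between", df, groups=df["pid"]).fit()
print(m.params[["cog_within", "cog_within:dep"]])
```

A negative `cog_within:dep` coefficient mirrors the reported pattern: the within-person cognition slope weakens as mean depressive symptom severity rises.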
Guideline-based tobacco treatment is infrequently offered. Electronic health record-enabled patient-generated health data (PGHD) has the potential to increase patient treatment engagement and satisfaction.
Methods:
We evaluated outcomes of a strategy to enable PGHD in a medical oncology clinic from July 1, 2021 to December 31, 2022. Among 12,777 patients, 82.1% received a tobacco screener about use and interest in treatment as part of eCheck-in via the patient portal.
Results:
We attained a broad reach (82.1%) and moderate response rate (30.9%) for this low-burden PGHD strategy. Patients reporting current smoking (n = 240) expressed interest in smoking cessation medication (47.9%) and counseling (35.8%). Most tobacco treatment requests made by patients via PGHD were addressed by their providers (40.6–80.3%). Among patients with active smoking, those who received/answered the screener (n = 309) were more likely to receive tobacco treatment compared with usual care patients who did not have the patient portal (n = 323) (OR = 2.72, 95% CI = 1.93–3.82, P < 0.0001), using propensity scores to adjust for the effects of age, sex, race, insurance, and comorbidity. Patients who received yet ignored the screener (n = 1024), compared with usual care, were also more likely to receive tobacco treatment, but to a lesser extent (OR = 2.20, 95% CI = 1.68–2.86, P < 0.0001). We mapped observed and potential benefits to the Translational Science Benefits Model (TSBM).
Discussion:
PGHD via patient portal appears to be a feasible, acceptable, scalable, and cost-effective approach to promote patient-centered care and tobacco treatment in cancer patients. Importantly, the PGHD approach serves as a real world example of cancer prevention leveraging the TSBM.
Objectives/Goals: Clinical trial success requires recruiting and retaining diverse participants. The ER&R Certificate Program trains clinical research professionals (CRPs) in equity, diversity, and inclusion (EDI), addressing biases, and integrating regulatory knowledge with practical skills to foster inclusive research practices. Methods/Study Population: An interdisciplinary Steering Committee, supported by Duke CTSI and DOCR, developed and implemented an engagement, recruitment, and retention certificate program (ER&R) for CRPs. With expert-led instruction, including e-learning, group sessions, and hands-on activities, ER&R integrates EDI into participant engagement practices. Participants complete 7 core courses and at least 3 elective courses, reflecting their unique responsibilities. Program evaluation uses the Kirkpatrick model to assess participant learning, competency, and EDI integration into clinical research. Since launch, the program has expanded to include clinical research trainees from Durham Technical Community College. All elements of the program were designed to allow for sharing across academic medical institutions. Results/Anticipated Results: A total of 202 CRPs and trainees have participated since launch (2020), including 17 trainee participants from Durham Technical Community College (2022–2024). Post-program evaluations showed significant growth in recruitment and retention self-efficacy. An early evaluation of the first 2 cohorts (n = 59) included a self-assessment across defined competencies showing marked increases in comfort across all learning objectives, with notable gains in: Community and Stakeholder Engagement, Recruitment on a Shoestring Budget, Community-Engaged Research Initiatives, and Social Marketing. Participants valued the program’s focus on EDI and sought more practical strategies and peer collaboration. Fifty additional institutions have engaged with our implementation consultations and program repository.
Discussion/Significance of Impact: Barriers to equitable ER&R exist at the individual, study, and system levels. Addressing these requires more intentional engagement practices. The ER&R certificate program is an innovative model for integrating equity principles with practical and required knowledge and skills training for participant-facing research professionals.
To quantify the impact of patient- and unit-level risk adjustment on infant hospital-onset bacteremia (HOB) standardized infection ratio (SIR) ranking.
Design:
A retrospective, multicenter cohort study.
Setting and participants:
Infants admitted to 284 neonatal intensive care units (NICUs) in the United States between 2016 and 2021.
Methods:
Expected HOB rates and SIRs were calculated using four adjustment strategies: birthweight (model 1), birthweight and postnatal age (model 2), birthweight and NICU complexity (model 3), and birthweight, postnatal age, and NICU complexity (model 4). Sites were ranked according to the unadjusted HOB rate, and these rankings were compared to rankings based on the four adjusted SIR models.
Results:
Compared to unadjusted HOB rate ranking (smallest to largest), the number and proportion of NICUs that left the fourth quartile (worst-performing) following adjustments were as follows: adjusted for birthweight (16, 22.5%), birthweight and postnatal age (19, 26.8%), birthweight and NICU complexity (22, 31.0%), birthweight, postnatal age and NICU complexity (23, 32.4%). Comparing NICUs that moved into the better-performing quartiles after birthweight adjustment to those that remained in the better-performing quartiles regardless of adjustment, the median percentage of low birthweight infants was 17.1% (Interquartile Range (IQR): 15.8, 19.2) vs 8.7% (IQR: 4.8, 12.6); and the median percentage of infants who died was 2.2% (IQR: 1.8, 3.1) vs 0.5% (IQR: 0.01, 12.0), respectively.
Conclusion:
Adjusting for patient and unit-level complexity moved one-third of NICUs in the worst-performing quartile into a better-performing quartile. Risk adjustment may allow for a more accurate comparison across units with varying levels of patient acuity and complexity.
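The core arithmetic of an SIR and the quartile re-ranking it induces can be sketched as follows. The toy risk model (a single birthweight-driven case-mix factor and an assumed baseline rate) is an illustrative stand-in for the study's models 1–4, which adjust for birthweight, postnatal age, and NICU complexity.

```python
import numpy as np
import pandas as pd

# Toy NICU cohort: expected HOB counts rise with the low-birthweight fraction
rng = np.random.default_rng(3)
n = 40
df = pd.DataFrame({
    "patient_days": rng.integers(2000, 8000, n),
    "frac_low_bw": rng.uniform(0.05, 0.25, n),
})
base_rate = 1.5 / 1000  # assumed baseline infections per patient-day
df["expected"] = df["patient_days"] * base_rate * (1 + 4 * df["frac_low_bw"])
df["observed"] = rng.poisson(df["expected"])

# Standardized infection ratio = observed / risk-model-expected events
df["sir"] = df["observed"] / df["expected"]
df["crude_rate"] = df["observed"] / df["patient_days"]

# How many worst-quartile units under the crude rate leave it after adjustment?
df["q_crude"] = pd.qcut(df["crude_rate"].rank(method="first"), 4, labels=False)
df["q_sir"] = pd.qcut(df["sir"].rank(method="first"), 4, labels=False)
moved = int(((df["q_crude"] == 3) & (df["q_sir"] < 3)).sum())
print(f"{moved} of 10 worst-quartile NICUs moved to a better quartile after adjustment")
```

Units with high crude rates but high expected counts (heavy low-birthweight case mix) drop out of the worst quartile once ranking is done on the SIR, mirroring the roughly one-third movement reported above.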
Diagnosis in psychiatry faces familiar challenges. Validity and utility remain elusive, and confusion regarding the fluid and arbitrary border between mental health and illness is increasing. The mainstream strategy has been conservative and iterative, retaining current nosology until something better emerges. However, this has led to stagnation. New conceptual frameworks are urgently required to catalyze a genuine paradigm shift.
Methods
We outline candidate strategies that could pave the way for such a paradigm shift. These include the Research Domain Criteria (RDoC), the Hierarchical Taxonomy of Psychopathology (HiTOP), and Clinical Staging, which all promote a blend of dimensional and categorical approaches.
Results
These alternative, still-heuristic transdiagnostic models provide varying levels of clinical and research utility. RDoC was intended to provide a framework to reorient research beyond the constraints of the DSM. HiTOP began as a nosology derived from statistical methods and is now pursuing clinical utility. Clinical Staging aims to both expand the scope and refine the utility of diagnosis by including the dimension of timing. None is yet fit for purpose. Yet they are relatively complementary, and it may be possible for them to operate as an ecosystem. Time will tell whether they have the capacity, singly or jointly, to deliver a paradigm shift.
Conclusions
Several heuristic models have been developed that separately or synergistically build infrastructure to enable new transdiagnostic research to define the structure, development, and mechanisms of mental disorders, to guide treatment and better meet the needs of patients, policymakers, and society.
Differences in social behaviours are common in young people with neurodevelopmental conditions (NDCs). Recent research challenges the long-standing hypothesis that difficulties in social cognition explain social behaviour differences.
Aims
We examined how difficulties regulating one's behaviour, emotions and thoughts to adapt to environmental demands (i.e. dysregulation), alongside social cognition, explain social behaviours across neurodiverse young people.
Method
We analysed cross-sectional behavioural and cognitive data of 646 6- to 18-year-old typically developing young people and those with NDCs from the Province of Ontario Neurodevelopmental Network. Social behaviours and dysregulation were measured by the caregiver-reported Adaptive Behavior Assessment System Social domain and Child Behavior Checklist Dysregulation Profile, respectively. Social cognition was assessed by the Neuropsychological Assessment Affect-Recognition and Theory-of-Mind, Reading the Mind in the Eyes Test, and Sandbox continuous false-belief task scores. We split the sample into training (n = 324) and test (n = 322) sets. We investigated how social cognition and dysregulation explained social behaviours through principal component regression and hierarchical regression in the training set. We tested social cognition-by-dysregulation interactions, and whether dysregulation mediated the social cognition–social behaviours association. We assessed model fits in the test set.
Results
Two social cognition components adequately explained social behaviours (13.88%). Lower dysregulation further explained better social behaviours (β = −0.163, 95% CI −0.191 to −0.134). Social cognition-by-dysregulation interaction was non-significant (β = −0.001, 95% CI −0.023 to 0.021). Dysregulation partially mediated the social cognition–social behaviours association (total effect: 0.544, 95% CI 0.370–0.695). Findings were replicated in the test set.
Conclusions
Self-regulation, beyond social cognition, substantially explains social behaviours across neurodiverse young people.
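Principal component regression, as used above, first compresses correlated cognitive measures into orthogonal components and then regresses the outcome on those components. The simulated scores and component count below are illustrative assumptions; the train/test split sizes mirror the abstract's 324/322.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy data (not the study data): 4 correlated social-cognition scores, one outcome
rng = np.random.default_rng(4)
n = 646
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(0, 0.7, n) for _ in range(4)])
y = 0.5 * latent + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=324, random_state=0)

# PCR: project onto the top principal components, then fit a linear model
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X_tr, y_tr)
print(f"train R^2 = {pcr.score(X_tr, y_tr):.3f}, test R^2 = {pcr.score(X_te, y_te):.3f}")
```

Checking the fit in a held-out test set, as the study does, guards against the components and coefficients overfitting the training sample.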
In the course of the EU funded Pandemic Preparedness and Response (PANDEM-2) project, a functional exercise (FX) was conducted to train the coordinated response to a large-scale pandemic event in Europe by using new IT solutions developed by the project. This report provides an overview of the steps involved in planning, conducting, and evaluating the FX.
Methods
The FX design was based on the European Centre for Disease Prevention and Control (ECDC) simulation exercise cycle for public health settings and was carried out over 2 days in the German and Dutch national public health institutes (PHI), with support from other consortium PHIs. The planning team devised an inject list based on a scenario script describing the emergence of an influenza pandemic from a novel H5N1 pathogen.
Results
The multi-disciplinary participant teams included 11 Dutch and 6 German participants. The FX was supported by 9 international project partners from 8 countries. Overall, participants and observers agreed that the FX goals were achieved.
Conclusions
The FX was a suitable format to test the PANDEM-2 solutions in 2 different country set-ups. It demonstrated the benefit of regular simulation exercises at member state level to test and practice public health emergency responses to be better prepared for real-life events.
Since a lack of culture-specific foods in dietary assessment methods may bias reported dietary intake, we identified foods and dishes consumed by residents not born in Sweden and describe the consequences for reported food and nutrient intake using a culturally adapted dietary assessment method. The design consisted of cross-sectional data collection using (semi-)qualitative dietary assessment methods (and the national diet survey instrument RiksmatenFlex), with subsequent longitudinal data collection using quantitative methods for method comparison (December 2020–January 2023). Three community-based research groups were recruited, consisting of mothers born in Sweden, Syria/Iraq, and Somalia, with median ages of 34, 37, and 36 years, respectively. Women born in Syria/Iraq and Somalia, who had lived in Sweden for approximately 10 years, reported 78 foods to be added to RiksmatenFlex. In a subsequent study phase, 69% of these foods were reported by around 90% of the ethnic minority groups and contributed 17% of their reported energy intake. However, differences between the three study groups in median self-reported energy intake remained (Sweden 7.19 MJ, Syria/Iraq 5.54 MJ, and Somalia 5.69 MJ). The groups also showed differences in the relative energy contribution from fats and carbohydrates, as well as differences in energy intake from food groups such as bread and sweet snacks. We conclude that a dietary assessment instrument containing culture-specific foods could not resolve group differences in reported energy intake, although these foods provided content validity and contributed 17% of energy intake. The dietary habits collected in this study serve to develop new dietary assessment instruments.
The New Jersey Kids Study (NJKS) is a transdisciplinary statewide initiative to understand influences on child health, development, and disease. We conducted a mixed-methods study of project planning teams to investigate team effectiveness and relationships between team dynamics and quality of deliverables.
Methods:
Ten theme-based working groups (WGs) (e.g., Neurodevelopment, Nutrition) informed protocol development and submitted final reports. WG members (n = 79, 75%) completed questionnaires including de-identified demographic and professional information and a modified TeamSTEPPS Team Assessment Questionnaire (TAQ). Reviewers independently evaluated final reports using a standardized tool. We analyzed questionnaire results and final report assessments using linear regression and performed constant comparative qualitative analysis to identify central themes.
Results:
WG-level factors associated with greater team effectiveness included proportion of full professors (β = 31.24, 95% CI 27.65–34.82), team size (β = 0.81, 95% CI 0.70–0.92), and percent dedicated research effort (β = 0.11, 95% CI 0.09–0.13); age distribution (β = −2.67, 95% CI –3.00 to –2.38) and diversity of school affiliations (β = –33.32, 95% CI –36.84 to –29.80) were inversely associated with team effectiveness. No factors were associated with final report assessments. Perceptions of overall initiative leadership were associated with expressed enthusiasm for future NJKS participation. Qualitative analyses of final reports yielded four themes related to team science practices: organization and process, collaboration, task delegation, and decision-making patterns.
Conclusions:
We identified several correlates of team effectiveness in a team science initiative's early planning phase. Extra effort may be needed to bridge differences in team members' backgrounds to enhance the effectiveness of diverse teams. This work also highlights leadership as an important component in future investigator engagement.
Background: Automated sepsis alerts have become a widely implemented screening tool aimed at early detection of clinically unstable patients. Prior research has shown mixed results depending on the type of screening tool used and the patient population studied. This study aimed to evaluate the predictive value of an alert system created for identifying patients with sepsis to determine its utility in clinical practice prior to implementation. Additionally, clinical management of those with and without sepsis was compared to measure the potential added benefit of this system in clinical decision making. Methods: A TheraDoc® software sepsis alert was generated for non-ICU patients meeting >2 SIRS criteria within a 24-hour period (temperature >38°C or <36°C, heart rate >90, respiratory rate >20 or partial pressure CO2 <32 mmHg, white blood cell count >12,000 or <4,000 or >10% bands/immature cells) during March 2023. Alerts were excluded if they were duplicates (using identical criteria or a second alert within 24 hours), if they were triggered by labs collected >48 hours prior, or if death or discharge occurred before the time of the alert. The primary outcome was the positive predictive value (PPV) of sepsis identification, confirmed by ICD-10 codes and diagnostic studies (cultures, imaging). Secondary outcomes included clinical management (antibiotic utilization [AU] and choice, infectious disease [ID] consultations, and culture collection). Antibiotics were categorized as broad-spectrum using National Healthcare Safety Network (NHSN) criteria. Secondary outcomes were compared between the sepsis and SIRS-without-infection (SIRS) groups by chi-square analysis. Results: After applying exclusion criteria, 116 of 166 alerts were analyzed; 55 of 116 alerts had confirmed sepsis (PPV 47.4%). Patients with sepsis were more likely to have an ID consult (16% [9/55] vs 7% [4/61]) and cultures collected (70.9% [39/55] vs 39.3% [24/61]) compared to SIRS patients; however, these differences were not statistically significant.
AU was higher with confirmed infections compared to SIRS patients (94.5% [52/55] vs 32.8% [20/61], p < 0.05), along with use of broad-spectrum antibiotics (73% [38/52] vs 40% [8/20], p < 0.05). Conclusions: While automated alerts may enable early identification of sepsis, use of SIRS criteria alone has poor specificity, which was borne out by the low PPV in this study. Our study found that management of sepsis patients (as measured by AU and culture ordering) was better than expected, which, combined with the low PPV of this alert system, resulted in our team rejecting widespread adoption of SIRS-based sepsis alerts.
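The headline numbers can be reproduced from the counts given in the abstract; the 2×2 chi-square below uses the reported antibiotic-use counts (52/55 sepsis vs 20/61 SIRS).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Positive predictive value from the reported alert counts
confirmed, total_alerts = 55, 116
ppv = confirmed / total_alerts
print(f"PPV = {ppv:.1%}")

# Chi-square test of antibiotic use: rows = antibiotics yes/no, cols = sepsis/SIRS
table = np.array([[52, 20],
                  [3, 41]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.1e}")
```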
Although behavioral mechanisms in the association among depression, anxiety, and cancer are plausible, few studies have empirically studied mediation by health behaviors. We aimed to examine the mediating role of several health behaviors in the associations among depression, anxiety, and the incidence of various cancer types (overall, breast, prostate, lung, colorectal, smoking-related, and alcohol-related cancers).
Methods
Two-stage individual participant data meta-analyses were performed based on 18 cohorts within the Psychosocial Factors and Cancer Incidence consortium that had a measure of depression or anxiety (N = 319 613, cancer incidence = 25 803). Health behaviors included smoking, physical inactivity, alcohol use, body mass index (BMI), sedentary behavior, and sleep duration and quality. In stage one, path-specific regression estimates were obtained in each cohort. In stage two, cohort-specific estimates were pooled using random-effects multivariate meta-analysis, and natural indirect effects (i.e. mediating effects) were calculated as hazard ratios (HRs).
Results
Smoking (HRs range 1.04–1.10) and physical inactivity (HRs range 1.01–1.02) significantly mediated the associations among depression, anxiety, and lung cancer. Smoking was also a mediator for smoking-related cancers (HRs range 1.03–1.06). There was mediation by health behaviors, especially smoking, physical inactivity, alcohol use, and a higher BMI, in the associations among depression, anxiety, and overall cancer or other types of cancer, but effects were small (HRs generally below 1.01).
Conclusions
Smoking constitutes a mediating pathway linking depression and anxiety to lung cancer and smoking-related cancers. Our findings underline the importance of smoking cessation interventions for persons with depression or anxiety.
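The stage-two pooling step can be sketched with a DerSimonian–Laird random-effects estimator over cohort-specific log-hazard-ratio estimates. The six input estimates below are made-up illustrative numbers, not the consortium's results, and the sketch is univariate rather than the multivariate pooling the study used.

```python
import numpy as np

def dersimonian_laird(log_hr, se):
    """Pool per-cohort log-HR estimates with random-effects weights."""
    w = 1 / se**2
    fixed = np.sum(w * log_hr) / np.sum(w)
    q = np.sum(w * (log_hr - fixed) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_hr) - 1)) / c)    # between-cohort variance estimate
    w_re = 1 / (se**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * log_hr) / np.sum(w_re)
    return np.exp(pooled), np.sqrt(1 / np.sum(w_re))

# Hypothetical stage-one estimates: indirect (mediated) effects as log(HR) per cohort
log_hr = np.array([0.05, 0.08, 0.03, 0.10, 0.06, 0.04])
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])
hr, pooled_se = dersimonian_laird(log_hr, se)
print(f"pooled indirect-effect HR = {hr:.3f}")
```

Exponentiating the pooled log estimate yields the kind of small mediating hazard ratios (e.g. 1.01–1.10) reported above.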
The knowledge, skills, and abilities needed for clinical research professionals (CRPs) are described in the Joint Task Force (JTF) for Clinical Trial Competencies Framework as a basis for leveled educational programs, training curricula, and certification. There is a paucity of literature addressing team science competencies tailored to CRPs. Gaps in training, research, and education can restrict their capability to effectively contribute to team science.
Materials/Methods:
The CRP Team Science team consisted of 18 members from 7 Clinical and Translational Science Award (CTSA) institutions. We employed a multi-stage, modified Delphi approach to define “Smart Skills” and leveled team science skills examples, using the individual and team science competencies identified by Lotrecchiano et al.
Results:
Overall, 59 team science Smart Skills were identified resulting in 177 skills examples across three levels: fundamental, skilled, and advanced. Two examples of the leveled skillsets for individual and team competencies are illustrated. Two vignettes were created to illustrate application for training.
Discussion:
This work provides a first-ever application of team science for CRPs by defining specific individual and team science competencies for each level of the CRP career life course. This work will enhance JTF Domains 7 (Leadership and Professionalism) and 8 (Communication and Teamwork), which are often lacking in CRP training programs. The supplement provides a full set of skills and examples from this work.
Conclusion:
Developing team science skills for CRPs may contribute to more effective collaborations across interdisciplinary clinical research teams. These skills may also improve research outcomes and stabilize the CRP workforce.
We evaluated whether universal chlorhexidine bathing (decolonization) with or without COVID-19 intensive training impacted COVID-19 rates in 63 nursing homes (NHs) during the 2020–2021 Fall/Winter surge. Decolonization was associated with a 43% lesser rise in staff case-rates (P < .001) and a 52% lesser rise in resident case-rates (P < .001) versus control.
This article takes stock of the 2030 Agenda and focuses on five governance areas. In a nutshell, we see a quite patchy and often primarily symbolic uptake of the global goals. Although some studies highlight individual success stories of actors and institutions implementing the goals, it remains unclear how such cases can be scaled up and develop a broader political impact to accelerate the global endeavor to achieve sustainable development. We hence raise concerns about the overall effectiveness of governance by goal-setting and ask how this mode of governance can be made more effective.
Technical Summary
A recent meta-analysis on the political impact of the Sustainable Development Goals (SDGs) has shown that these global goals are moving political processes forward only incrementally, with much variation across countries, sectors, and governance levels. Consequently, the realization of the 2030 Agenda for Sustainable Development remains uncertain. Against this backdrop, this article explores where and how incremental political changes are taking place due to the SDGs, and under what conditions these developments can bolster sustainability transformations up to 2030 and beyond. Our scoping review builds upon an online expert survey directed at the scholarly community of the ‘Earth System Governance Project’ and structured dialogues within the ‘Taskforce on the SDGs’ under this project. We identified five governance areas where some effects of the SDGs have been observable: (1) global governance, (2) national policy integration, (3) subnational initiatives, (4) private governance, and (5) education and learning for sustainable development. This article delves deeper into these governance areas and draws lessons to guide empirical research on the promises and pitfalls of accelerating SDG implementation.
Social Media Summary
As SDG implementation lags behind, this article explores 5 governance areas asking how to strengthen the global goals.
This article introduces and demonstrates the utility of a new event dataset on democratic erosion around the world. Through case studies of Turkey and Brazil, we show that our Democratic Erosion Event Dataset (DEED) can help to resolve debates about the extent to which democracy is backsliding based on prominent cross-national indicators, focusing in particular on the Varieties of Democracy (V-Dem) and Little and Meng (L&M) indices. V-Dem suggests that democracies are deteriorating worldwide; L&M argue that this may be an artifact of subjectivity and coder bias and that more “objective” indicators reveal little to no global democratic backsliding in recent years. Using DEED, we show that—at least in these cases—objective indices may underestimate the extent of democratic erosion whereas subjective indices may overestimate it. Our analyses illustrate the ways in which DEED can complement existing indices by illuminating the nature and dynamics of democratic erosion as it occurs on the ground.