Early career researchers face unique demands, many of which contribute to increased stress, decreased professional fulfillment, and burnout. Consequently, academic institutions and government organizations, such as the National Institutes of Health, are beginning to embrace structured coaching as a tool to support physician wellbeing. To date, such coaching programs have demonstrated promising results, but little is known about whether early career research faculty find coaching feasible, accessible, or helpful. To explore this question further, we developed a novel group coaching intervention for clinician researchers and scientific faculty at the University of Texas Southwestern Medical Center based on the concept of appreciative inquiry, grounding the program in a positive and hopeful approach to the challenges faced by clinicians and researchers. Results from our program indicate this intervention is feasible, satisfactory, and helpful, with participants reporting enhanced self-reflection and empowerment. Effective for a wide array of research faculty, our program brought together diverse faculty, fostered connections, and encouraged future collaborations among this translational group. This suggests that our program provides a foundational blueprint for other academic medical centers that aim to develop group coaching efforts.
Biostatisticians increasingly use large language models (LLMs) to enhance efficiency, yet practical guidance on responsible integration is limited. This study explores current LLM usage, challenges, and training needs to support biostatisticians.
Methods:
A cross-sectional survey was conducted across three biostatistics units at two academic medical centers. The survey assessed LLM usage across three key professional activities: communication and leadership, clinical and domain knowledge, and quantitative expertise. Responses were analyzed using descriptive statistics, while free-text responses underwent thematic analysis.
Results:
Of 208 eligible biostatisticians (162 staff and 46 faculty), 69 (33.2%) responded. Among them, 44 (63.8%) reported using LLMs; of the 43 who answered the frequency question, 20 (46.5%) used them daily and 16 (37.2%) weekly. LLMs improved productivity in coding, writing, and literature review; however, 29 of 41 respondents (70.7%) reported significant errors, including incorrect code, statistical misinterpretations, and hallucinated functions. Key verification strategies included expertise, external validation, debugging, and manual inspection. Among 58 respondents providing training feedback, 44 (75.9%) requested case studies, 40 (69.0%) sought interactive tutorials, and 37 (63.8%) desired structured training.
Conclusions:
LLM usage is notable among respondents at two academic medical centers, though response patterns likely reflect early adopters. While LLMs enhance productivity, challenges like errors and reliability concerns highlight the need for verification strategies and systematic validation. The strong interest in training underscores the need for structured guidance. As an initial step, we propose eight core principles for responsible LLM integration, offering a preliminary framework for structured usage, validation, and ethical considerations.
This book is about the science and ethics of clinical research and healthcare. We provide an overview of each chapter across the book's three sections. The first section reviews foundational knowledge about clinical research. The second section provides background and critique on key components and issues in clinical research, ranging from how research questions are formulated to how the resulting research is found and synthesized. The third section comprises four case studies of widely used evaluations and treatments. These case examples are exercises in critical thinking, applying the questions and methods outlined in other sections of the book. Each chapter suggests strategies to help make clinical research more useful for clinicians and more relevant for patients.
This book provides a comprehensive analysis of biases inherent in contemporary clinical research, challenging traditional methodologies and assumptions. Aimed at students, professionals, and science enthusiasts, the book delves into fundamental principles, research tools, and ethics. It is organized into three sections: The first section covers fundamentals, including framing clinical research questions, core research tools, and clinical research ethics. The second section discusses topics relevant to clinical research, organized according to their relevance in the development of a clinical study. Chapters within this section examine the strengths and limitations of traditional and alternative methods, ethical issues, and patient-centered consequences. The third section presents four in-depth case examples illustrating issues across diverse health conditions and treatments: gastroesophageal reflux disease, hypercholesterolemia, screening for breast cancer, and depression. This examination encourages readers to critically evaluate the methodologies and assumptions underlying clinical research, promoting a nuanced understanding of evidence production in the health sciences.
The potential for physicians, clinicians, and health professionals to contribute to the advancement of medical therapies through clinical research is significant. Yet, a lack of exposure to, or practical training in, the conduct of clinical research can inhibit health profession trainees from considering research careers, thus perpetuating the already limited influx of new talent. To enhance the sustainability of career pathways into research for all trainees, including those from traditionally underrepresented communities, trainees must experience early exposure to research concepts through robust training and hands-on opportunities. In 2015, the Duke Office of Clinical Research created a Research Immersion elective for Duke’s Master in Biomedical Sciences program, which prepares students for additional health professional training. The course trained students through didactic and practical experiences, with a unique interprofessional mentorship team including both principal investigator and clinical research professional mentors. Following eight cohorts of iterative course optimization, students’ confidence increased in all 24 research competencies assessed. A cross-sectional analysis of post-course outcomes in May 2024 revealed that 40.4% of students had continued in research after the program and 60.6% had continued their health professions education. We attribute this success to applied learning and to clear expectations and guidelines that support the mentor-student relationship.
Children continue to be an underrepresented population in research and clinical trials due to difficulties encountered in recruitment, assent, and retention processes. “Sofia Learns About Research” is a children’s activity book that introduces youth to clinical research and basic elements of clinical trials.
Methods:
Development of the activity book began in 2016, with publication of the first paper version in 2017 and an online version adapted for computer and tablet users in 2019. In 2019, we developed institutional review board-approved pre/post surveys with five statements (written at ≤ 3rd-grade level) reflecting key concepts covered in the book. Participants were asked to indicate whether they agreed, disagreed, or were not sure about each of the statements and whether they would ever want to be part of a research study. Preliminary analyses included descriptive statistics and cross-tabulations with chi-square tests.
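As a rough illustration of the pre/post cross-tabulation analysis described above, the sketch below tallies hypothetical agree/disagree/not-sure responses by timepoint and runs a chi-square test of independence. The data frame, column names, and values are invented for illustration and are not study data or the authors' code.

```python
# Illustrative sketch (not the study's analysis code): cross-tabulate
# pre/post survey answers and test for independence with a chi-square test.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical responses: timepoint (pre/post) vs. agreement with a statement.
responses = pd.DataFrame({
    "timepoint": ["pre", "pre", "pre", "post", "post", "post"],
    "answer": ["not sure", "disagree", "agree", "agree", "agree", "not sure"],
})

# Build the contingency table and run the chi-square test of independence.
table = pd.crosstab(responses["timepoint"], responses["answer"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```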
Results:
Despite delays in dissemination and outreach due to the COVID-19 pandemic, we obtained feedback from over 170 diverse persons across a spectrum of communities and community partners. After book exposure, more participants knew that both children and parents have to assent/consent and that participants can withdraw from a study at any time.
Conclusions:
The book is an important advocacy tool with a long-term aim of increasing children’s knowledge and awareness about clinical research, ultimately leading to enhanced participation in clinical research and trials.
Clinical research professionals (CRPs) are integral to the academic medical center workforce, research operations, and daily clinical research tasks; however, due to inconsistent training, there is a shortage of qualified CRPs. The Joint Task Force for Clinical Trial Competency created a competency framework for CRPs, which has demonstrated positive results at various institutions, but training programs have been limited in standardization, replicability, and dissemination. To improve this, we designed the University of Texas Southwestern (UTSW) Medical Center Clinical Research Foundations (CRF) training program, a competency-based, self-paced online CRP training curriculum hosted via the Collaborative Institutional Training Initiative (CITI) portal. We examined the feasibility, acceptability, and uptake of the UTSW CRF training on an institutional scale and found the curriculum to be not only feasible but also highly acceptable. Furthermore, faculty, clinicians, and trainees voluntarily completed the training program, indicating utility across diverse groups. The UTSW CRF combines existing CITI training modules with UTSW-created material, providing an optimal balance between generalized clinical research education and institutionally tailored content. We believe the UTSW CRF curriculum could serve as a plug-and-play foundational model for other research centers to tailor to their audience and institutional needs.
Participant representation in the design and execution of clinical research, as promoted by the Good Participatory Practice guidelines, can profoundly affect research structure and process. Early in the COVID-19 pandemic, an online registry called the Healthcare Worker Exposure Response and Outcomes (HERO) Registry was launched to capture the experiences of healthcare workers (HCWs) on the pandemic frontlines. It evolved into a program that distributed COVID-19-related information and connected participants with COVID-19-related research opportunities. Furthermore, a subcommittee of HCWs was created to inform the COVID-19-related clinical research, engagement, and communication efforts. This paper, coauthored by the HERO HCW subcommittee, describes how the subcommittee was formed, the impact of community participation on the HERO Registry and Research Program, reflections on lessons learned, and implications for future research. Engagement of the HCW subcommittee brought members’ lived experience to bear and ensured that their perspectives as HCWs were incorporated into the HERO research. These strategies not only supported recruitment and retention efforts but also influenced the HERO research team in framing research questions and data collection pertinent to the participant community. This experience demonstrated the importance of participants’ input as expert advisors to an investigative team during a global health emergency.
There is a growing trend for studies run by academic and nonprofit organizations to have regulatory submission requirements. As a result, there is greater reliance on REDCap, an electronic data capture (EDC) system widely used by researchers in these organizations. This paper discusses the development and implementation of the Rapid Validation Process (RVP) developed by the REDCap Consortium, aimed at enhancing regulatory compliance and operational efficiency in response to the dynamic demands of modern clinical research. The RVP introduces a structured validation approach that categorizes REDCap functionalities, develops targeted validation tests, and applies structured, standardized testing syntax. This approach ensures that REDCap can meet regulatory standards while maintaining the flexibility to adapt to new challenges. Results from applying the RVP to recent successive REDCap software version releases illustrate significant improvements in testing efficiency and process optimization, demonstrating the project’s success in setting new benchmarks for EDC system validation. The project’s community-driven responsibility model fosters collaboration and knowledge sharing and enhances the overall resilience and adaptability of REDCap. As REDCap continues to evolve based on feedback from clinical trialists, the RVP ensures that it remains a reliable and compliant tool, ready to meet regulatory and future operational challenges.
There has been an erosion of trust in medical care and clinical research, raising questions about whether the institutions and investigators conducting clinical research are worthy of trust. We review recent research on trust and trustworthiness in the clinical research enterprise and identify opportunities to enhance trustworthiness, which will likely increase participant trust in clinical research. In addition, we review the results of national polls on the public’s trust in different occupations. The literature on trustworthiness and trust is complex and suffers from a lack of agreement on definitions of trust and trustworthiness and on actions to enhance trustworthiness. Nonetheless, institutions need to take action to address the many elements that contribute to being perceived as trustworthy. As a complementary approach, since nurses have ranked highest in public trust for twenty-two straight years, we analyze the features that likely account for the public’s uniformly high regard for nurses. We propose specific actions, which we have adopted at Rockefeller University, to enhance the role of research nurses in the research enterprise without compromising their primary role as participant advocates, thereby drawing on the public’s trust in nurses to build trustworthiness.
Adrenal vein sampling (AVS) is a complicated procedure requiring clinical expertise, collaboration, and patient involvement to be performed successfully. Implementation science offers unique insights into the barriers and enablers of AVS service delivery. The primary aim of this review was to identify implementation components, as described within clinical studies, that contribute to a successful AVS procedure. The secondary aim was to inform practice considerations to support the scale-up of AVS. A scoping review was conducted of clinical papers that discussed factors contributing to effective AVS implementation. A phased approach was employed to extract implementation science data from these clinical studies. Implementation strategies were named and defined, allowing implementation learnings to be synthesized from clinical findings alone, in the absence of dedicated research examining the implementation process. Ten implementation components reported as contributing to a successful AVS procedure were identified. These components were categorized according to actions required pre-AVS, during AVS, and post-AVS. Using an implementation science approach, the findings of this review and analysis provide practical considerations to facilitate the design of AVS service delivery. Extracting implementation science information from clinical research provides a mechanism to accelerate the translation of evidence into practice where implementation research is not yet available.
Clinical research is critical for healthcare advancement, but participant recruitment remains challenging. Clinical research professionals (CRPs; e.g., clinical research coordinators, research assistants) perform eligibility prescreening, ensuring adherence to study criteria while upholding scientific and ethical standards. This study investigates the key information CRPs prioritize during eligibility prescreening, providing insights to optimize data standardization and recruitment approaches.
Methods:
We conducted a freelisting survey targeting 150 CRPs from diverse domains (i.e., neurological disorders, rare diseases, and other diseases), in which they listed the essential information they seek from medical records, participant/caregiver inquiries, and discussions with principal investigators to determine a potential participant’s research eligibility. We calculated salience scores for the listed items using Anthropac, followed by a two-level analytic procedure to classify and thematically categorize the data.
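For readers unfamiliar with freelist salience, the sketch below shows one common formulation, Smith's salience score (the statistic Anthropac typically reports), assuming each respondent provides a rank-ordered list. The function and the example lists are illustrative assumptions, not study data or the authors' procedure.

```python
# Minimal sketch of Smith's salience score for freelist data: items named
# earlier in a respondent's list get more weight, and scores are averaged
# over all respondents (including those who omitted the item).
from collections import defaultdict

def smiths_salience(freelists):
    """freelists: list of ordered item lists, one per respondent."""
    totals = defaultdict(float)
    for items in freelists:
        length = len(items)
        for rank, item in enumerate(items, start=1):
            totals[item] += (length - rank + 1) / length
    return {item: s / len(freelists) for item, s in totals.items()}

# Hypothetical example lists from three respondents.
example = [
    ["age", "medication list", "medical history"],
    ["medical history", "age"],
    ["age", "diagnosis"],
]
print(smiths_salience(example))
```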
Results:
Most participants were female (81%); 44% identified as White and 64.5% as non-Hispanic. The first-level analysis showed that age, medication list, and medical history were emphasized universally across all domains. The second-level analysis illuminated domain-specific approaches to information retrieval: for instance, history of present illness was notably significant in neurological disorders during participant and principal investigator inquiries, while research participation was distinctly salient in potential participant inquiries within the rare disease domain.
Conclusion:
This study unveils the intricacies of eligibility prescreening, with both universal and domain-specific methods observed. Variations in data use across domains suggest the need for tailored prescreening in clinical research. Incorporating these insights into CRP training and refining prescreening tools, combined with an ethical, participant-focused approach, can advance eligibility prescreening practices.
Community inclusion in research may increase the quality and relevance of research, but doing so in an equitable way is complex. Novel approaches used to build engagement with historically marginalized communities in other sectors may have relevance in the clinical research sector.
Method:
To address long-standing gaps and challenges, a stakeholder group was convened to develop a theory of change (ToC), a structured method for obtaining input from stakeholders to enhance the design, conduct, and dissemination of research. The stakeholder group, composed of Black residents of a metropolitan area, followed a structured monthly meeting schedule for 12 months to produce an outcome map, a model that formally defines aspects of research and engagement for this community.
Results:
Stakeholders reported significant improvements in trust in and engagement with research over the 12-month period, but no changes in health empowerment at the individual, organizational, or community level. Through this convening process, a ToC and outcome map were developed with a focus on building bidirectional relationships between groups identifying as Black, Indigenous, and People of Color (BIPOC) and researchers in Boston, MA. Additionally, the group developed a community ownership model and guidelines for researchers to follow when using the ToC and outcome map with BIPOC communities.
Conclusion:
Co-ownership of models for developing bidirectional relationships between researchers and community members, such as the ToC and outcome map, may advance the value and reach of community-based participatory research while increasing trust and engagement in research.
The University of California (UC) Davis Clinical and Translational Science Center has established the “Join the Team” model, a Clinical Research Coordinator workforce pipeline built on a community-based approach. The model has been extensively tested at UC Davis and shown to create a viable pathway to employment in clinical research for qualified candidates. The model combines the following elements: community outreach; professional training materials created by the Association of Clinical Research Professionals and adapted to the local environment; financial support for trainees to encourage ethnic and socioeconomic diversity; and internship/shadowing opportunities. The program is tailored for academic medical centers (AMCs) in recognition of administrative barriers specific to AMCs. UC Davis’s model can be replicated at other institutions using the information in this article, including key program features and the barriers faced and surmounted. We also discuss innovative ideas for future program iterations.
The value of Source Data Verification (SDV) has been a common theme in the applied clinical and translational science literature. Yet few published assessments of SDV quality exist, even though such assessments are needed to design risk-based and reduced monitoring schemes. This review was conducted to identify reports of SDV quality, with a specific focus on accuracy.
Methods:
A scoping review was conducted of the SDV and clinical trial monitoring literature to identify articles addressing SDV quality. Articles were systematically screened and summarized in terms of research design, SDV context, and reported measures.
Results:
The review found significant heterogeneity in underlying SDV methods, domains of SDV quality measured, the outcomes assessed, and the levels at which they were reported. This variability precluded comparison or pooling of results across the articles. No absolute measures of SDV accuracy were identified.
Conclusions:
A definitive and comprehensive characterization of SDV process accuracy was not found. Reducing SDV without understanding the risk of critical findings going undetected, i.e., SDV sensitivity, runs counter to recommendations in Good Clinical Practice and the principles of Quality by Design. Reference estimates (or methods to obtain estimates) of SDV accuracy are needed to confidently design risk-based, reduced SDV processes for clinical studies.
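To make the sensitivity notion above concrete, a minimal sketch follows, taking SDV sensitivity to mean the proportion of true source-data discrepancies that the SDV process actually detects. The function name and example counts are illustrative assumptions, not values from the reviewed literature.

```python
# Illustrative only: SDV sensitivity as the share of true discrepancies
# that source data verification detects: detected / (detected + missed).
def sdv_sensitivity(detected: int, missed: int) -> float:
    return detected / (detected + missed)

# Hypothetical audit: SDV caught 45 discrepancies; a later full re-review
# uncovered 5 more that SDV had missed.
print(sdv_sensitivity(45, 5))  # 0.9
```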
Research study complexity refers to the variables that contribute to the difficulty of a clinical trial or study, such as intervention type, design, sample, and data management. High complexity often requires more resources, advanced planning, and specialized expertise to execute studies effectively. However, few instruments scale study complexity across research designs. The purpose of this study was to develop and establish the initial psychometric properties of an instrument that scales research study complexity.
Methods:
Technical and grammatical principles were followed to produce clear, concise items using language familiar to researchers. Items underwent face, content, and cognitive validity testing through quantitative surveys and qualitative interviews. Content validity indices were calculated, and iterative scale revision was performed. The instrument underwent pilot testing using 2 exemplar protocols, asking participants (n = 31) to score 25 items (e.g., study arms, data collection procedures).
Results:
The instrument (Research Complexity Index) demonstrated face, content, and cognitive validity. Item means and standard deviations ranged from 1.0 to 2.75 (Protocol 1) and from 1.31 to 2.86 (Protocol 2). Corrected item-total correlations ranged from 0.030 to 0.618, and eight elements appeared to be undercorrelated with the other elements. Cronbach’s alpha was 0.586 (Protocol 1) and 0.764 (Protocol 2). Inter-rater reliability was fair (kappa = 0.338).
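As a rough illustration of the reliability statistics reported above, the sketch below computes Cronbach's alpha, a corrected item-total correlation, and Cohen's kappa on a small hypothetical rating matrix. The numbers are invented and the routines are generic textbook formulations, not the authors' analysis code.

```python
# Generic formulations of the reliability measures named in the results.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(scores):
    """scores: respondents x items matrix of numeric ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(scores, item):
    """Correlation of one item with the total of the remaining items."""
    scores = np.asarray(scores, dtype=float)
    rest = np.delete(scores, item, axis=1).sum(axis=1)
    return np.corrcoef(scores[:, item], rest)[0, 1]

# Hypothetical 4 respondents x 4 items rating matrix.
ratings = np.array([[1, 2, 2, 3], [2, 2, 3, 3], [1, 1, 2, 2], [3, 3, 3, 4]])
print(round(cronbach_alpha(ratings), 3))
print(round(corrected_item_total(ratings, 0), 3))
# Inter-rater agreement between two hypothetical raters scoring one protocol.
print(round(cohen_kappa_score([1, 2, 2, 3], [1, 2, 3, 3]), 3))
```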
Conclusion:
Initial pilot testing demonstrated face, content, and cognitive validity, moderate internal consistency reliability, and fair inter-rater reliability. Further refinement of the instrument may increase reliability, thus providing a comprehensive method for assessing study complexity and quantifying related resources (e.g., staffing requirements).
A gap exists in the literature regarding the global research nurse/research midwife resources and communication skill sets necessary to engage participants of diverse populations and geographic regions in the community- or home-based conduct of decentralized clinical trials.
Aims:
An embedded mixed methods study was conducted to examine research nurse/research midwife knowledge base, experiences, and communication skill sets pertaining to decentralized trials across global regions engaged in remote research: the USA, Republic of Ireland, United Kingdom, and Australia.
Methods:
An online survey was deployed across international research nurse/research midwife stakeholder groups, collecting demographics, decentralized trial experience, barriers and facilitators to optimal trial conduct, and responses to the self-perceived communication competence (SPCC) and interpersonal communication competence (IPCC) instruments.
Results:
Eighty-six research nurses and research midwives completed the survey across all countries. The SPCC and IPCC results indicated that greater clinical research experience was significantly correlated with higher SPCC scores (p < 0.05). Qualitative content analysis revealed five themes: (1) Implications for Role, (2) Safety and Wellbeing, (3) Training and Education, (4) Implications for Participants, and (5) Barriers and Facilitators.
Conclusions:
Common trends and observations across the global sample can inform decentralized trial resource allocation and policy pertaining to the research nurse/research midwife workforce. This study demonstrates shared cultural norms of research nursing and midwifery across varied regional clinical trial ecosystems.
Telemedicine enables critical human communication and interaction between researchers and participants in decentralized research studies. There is a need to better understand the overall scope of telemedicine applications in clinical research as a basis for further research. This narrative, nonsystematic review examined applications of telemedicine, in the form of synchronous videoconferencing, in clinical research. We searched PubMed to identify relevant literature published between January 1, 2013, and June 30, 2023. Two independent screeners assessed titles and abstracts for inclusion, followed by single-reviewer full-text screening, and we organized the literature into core themes through consensus discussion. We screened 1044 publications, of which 48 met our inclusion and exclusion criteria. We identified six core themes to structure the narrative review: infrastructure and training, recruitment, informed consent, assessment, monitoring, and engagement. Telemedicine applications span all stages of clinical research, from initial planning and recruitment to informed consent and data collection. While the evidence base for using telemedicine in clinical research is not well developed, existing evidence suggests that telemedicine is a potentially powerful tool for clinical research.
To compare how clinical researchers generate data-driven hypotheses using VIADS (a visual interactive analysis tool for filtering and summarizing large datasets coded with hierarchical terminologies) versus other tools.
Methods:
We recruited clinical researchers and separated them into “experienced” and “inexperienced” groups. Within each group, participants were randomly assigned to the VIADS or control group. Each participant completed a remote 2-hour study session for hypothesis generation with the same study facilitator and the same datasets, following a think-aloud protocol. Screen activities and audio were recorded, transcribed, coded, and analyzed. Hypotheses were evaluated by seven experts on their validity, significance, and feasibility. We conducted multilevel random-effects modeling for statistical tests.
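As an illustration of the multilevel modeling step, the sketch below fits a random-intercept model of time-to-hypothesis with hypotheses nested within participants. The data frame, variable names, and values are hypothetical, and the model specification is an assumed, simplified analogue of the analysis described, not the authors' code.

```python
# Rough sketch of a multilevel (random-effect) model: seconds per hypothesis
# regressed on study group, with a random intercept for each participant.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "participant": ["p1"] * 3 + ["p2"] * 3 + ["p3"] * 3 + ["p4"] * 3,
    "group": ["VIADS"] * 3 + ["control"] * 3 + ["VIADS"] * 3 + ["control"] * 3,
    "seconds": [250, 270, 255, 360, 400, 380, 240, 265, 260, 370, 390, 365],
})

# The random intercept accounts for repeated hypotheses from the same person;
# the fixed effect of group estimates the VIADS-versus-control difference.
model = smf.mixedlm("seconds ~ group", data, groups=data["participant"])
result = model.fit()
print(result.summary())
```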
Results:
Eighteen participants generated 227 hypotheses, of which 147 (65%) were valid. The VIADS and control groups generated a similar number of hypotheses. The VIADS group took a significantly shorter time to generate one hypothesis (e.g., among inexperienced clinical researchers, 258 s versus 379 s, p = 0.046, power = 0.437, ICC = 0.15). The VIADS group received significantly lower ratings than the control group on feasibility and the combination rating of validity, significance, and feasibility.
Conclusion:
The role of VIADS in hypothesis generation seems inconclusive. The VIADS group took a significantly shorter time to generate each hypothesis. However, the combined validity, significance, and feasibility ratings of their hypotheses were significantly lower. Further characterization of hypotheses, including specifics on how they might be improved, could guide future tool development.
In 2016, Duke reconfigured its clinical research job descriptions and workforce to be competency-based, modeled around the Joint Task Force for Clinical Trial Competency framework. To ensure consistency in job classification among new hires in the clinical research workforce, Duke subsequently implemented a Title Picker tool. The tool compares a research unit’s description of its job responsibility needs against the standardized job descriptions used to map incumbents in 2016. Duke worked with human resources to evaluate the impact on their process as well as on the broader community of staff who hire clinical research professionals. Implementation of the tool has enabled Duke to create consistent job classifications for its workforce and to better understand who composes the clinical research professional workforce. The tool has provided valuable workforce metrics, such as attrition and hiring, and strengthened our collaboration with Human Resources.