This chapter outlines practical ways emergency risk communicators can use evaluation throughout a health emergency to inform and improve emergency risk communication messaging strategies and activities. The chapter starts with a basic orientation to program evaluation and its relevance to emergency risk communication. Next, the chapter provides an in-depth look at 16 communication evaluation activities that emergency risk communicators can use throughout a health emergency. Then the chapter describes how organizations learn after health emergencies and how organizational learning can inform community resilience and public education. Next, the chapter outlines current theoretical research approaches to evaluating emergency risk communication and practical ways to apply this research during a health emergency. The chapter highlights the ADKAR model for organizational change management, and a student case study uses the Crisis and Emergency Risk Communication framework to analyze how the Georgia Department of Health communicated during the e-cigarette or vaping product use-associated lung injury (EVALI) outbreak. End-of-chapter reflection questions are included.
In 2009, the University of Michigan’s Michigan Institute for Clinical and Health Research developed a three-session K Writing workshop. Beginning in 2016, we implemented a non-attendance fee to encourage attendance across the three sessions. We examined whether this fee improved attendance, increased submission of an NIH K or R grant proposal, and improved success rates. Between 2012 and 2021, 373 participants attended the workshop. After the non-attendance fee was implemented, significantly more participants attended all three sessions of the workshop, and there was a statistical trend suggesting an increase in the success rate, while submission rates remained constant.
While organizations leading community initiatives play a crucial role in tackling public health challenges, their difficulties in designing rigorous evaluations often undermine the strength of their proposals and diminish their chances of securing funding. To bridge these gaps, we developed a matching service funded by the Robert Wood Johnson Foundation’s Evidence for Action program. The service matched applicants involved in community-engaged research with evaluation experts to provide complementary expertise, strengthen evaluation capacity, and enhance participants’ ability to secure funding.
Methods:
We conducted a mixed-methods evaluation of the pilot phase of the Accelerating Collaborations for Evaluation Matching Service from August 2018 to February 2021. Data sources included program records, participant surveys administered at 3, 6, and 12 months post-match, and semi-structured interviews conducted at 12–18 months post-match. We assessed outcomes such as match success, resubmissions, funding rates, and participant satisfaction.
Results:
Over the 2.5-year pilot period, the matching service successfully matched 20 of 24 referred applicants. Among these, 50% submitted revised proposals, and a third secured funding. Survey results indicated widespread satisfaction with the partnerships. One-year interviews highlighted complementary expertise, bidirectional learning, and capacity-building as key benefits of these partnerships.
Conclusion:
This pilot demonstrated the feasibility, acceptability, and impact of the matching service in creating rewarding collaborations for community-engaged researchers. Beyond funding outcomes, participants uniformly valued the partnerships and described them as mutually satisfying. This model offers a scalable approach to creating research partnerships to build capacity for the evaluation of community initiatives.
The boundaries of psychology are expanding as growing numbers of psychological scientists, educators, and clinicians take a preventive approach to social and mental health challenges. Offering a broad introduction to prevention in psychology, this book provides readers with the tools, resources, and knowledge to develop and implement evidence-based prevention programs. Each chapter features key points, a list of helpful resources for creating successful intervention programs, and culturally informed case examples from across the lifespan, including childhood, school, college, family, adult, and community settings. An important resource for students, researchers, and practitioners in counseling, clinical, health, and educational psychology, social justice and diversity, social work, and public health.
Childhood bullying is a public health priority. We evaluated the effectiveness and costs of KiVa, a whole-school anti-bullying program that targets the peer context.
Methods
A two-arm pragmatic multicenter cluster randomized controlled trial with embedded economic evaluation. Schools were randomized to KiVa-intervention or usual practice (UP), stratified on school size and Free School Meals eligibility. KiVa was delivered by trained teachers across one school year. Follow-up was at 12 months post randomization. Primary outcome: student-reported bullying-victimization; secondary outcomes: self-reported bullying-perpetration, participant roles in bullying, empathy and teacher-reported Strengths and Difficulties Questionnaire. Outcomes were analyzed using multilevel linear and logistic regression models.
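For readers who want a concrete picture of the analytic setup, the sketch below is a minimal, hypothetical illustration of a school-clustered (multilevel) linear model for a continuous secondary outcome such as empathy, written with Python's statsmodels. The data file, variable names, and covariates are assumptions rather than the trial's actual code, and the primary binary outcome (bullying victimization) would instead use an analogous multilevel logistic model.

```python
# Illustrative sketch only -- not the trial's analysis code.
# Assumes a hypothetical per-student file with treatment arm, baseline score,
# stratification variables, and a school identifier for the random intercept.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("kiva_trial_students.csv")  # hypothetical data file

# Multilevel linear model: students nested within schools (the randomized clusters).
model = smf.mixedlm(
    "empathy_12m ~ arm + empathy_baseline + school_size + fsm_eligibility",
    data=df,
    groups="school_id",  # random intercept for each school
)
result = model.fit()
print(result.summary())  # the coefficient on `arm` estimates the intervention effect
```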
Findings
Between 8/11/2019 and 12/02/2021, 118 primary schools were recruited across four trial sites, with 11,111 students in the primary analysis (KiVa: n = 5,944, 49.6% female; UP: n = 5,167, 49.0% female). At baseline, 21.6% of students reported being bullied in the UP group and 20.3% in the KiVa group; at follow-up these figures were 20.7% in the UP group and 17.7% in the KiVa group (odds ratio 0.87; 95% confidence interval 0.78 to 0.97; p = 0.009). Students in the KiVa group had significantly higher empathy and fewer peer problems. We found no differences in bullying perpetration, school wellbeing, or emotional or behavioral problems. A priori subgroup analyses revealed no differences in effectiveness by socioeconomic gradient or by gender. KiVa cost £20.78 more per pupil than usual practice in the first year, and £1.65 more per pupil in subsequent years.
Interpretation
The KiVa anti-bullying program is effective at reducing bullying victimization, with small-to-moderate effects of public health importance.
Funding
The study was funded by the UK National Institute for Health and Care Research (NIHR) Public Health Research program (17-92-11). Intervention costs were funded by the Rayne Foundation, GwE North Wales Regional School Improvement Service, Children's Services, Devon County Council and HSBC Global Services (UK) Ltd.
Self-efficacy (or the belief in one’s ability to effect change) often moderates the relationship between education, interest, and actions in evaluations of training programs that prepare community-based investigators in the clinical and translational sciences workforce. Such evaluations, however, tend to emphasize individual-level attitudes even when community- or organizational-level outcomes are also affected.
Methods:
This study uses a novel sequential, explanatory mixed-methods design to explore multiple levels of self-efficacy (or self-awareness of personal growth in leadership) in the Clinical Scholars program, an equity-centered leadership development program for mid- to later-career healthcare professionals. Our design involves: (1) bivariate correlations and confirmatory factor analysis of self-assessed competencies across all program participants to identify emergent combinations of competencies, which informed (2) more nuanced thematic coding of participants’ stories of most significant change in their personal and professional lives as a result of the program.
Results:
In unpacking participants’ accounts of their personal leadership styles (which aligned with our quantitative analyses of competencies), we found that they demonstrated multiple competencies simultaneously. Specifically, they employed emotionally intelligent learning and consensus-building dialogue to manage conflict for interpersonal impact. They also used this combination of skills to unite diverse stakeholders under a shared vision in order to lead and manage organizational change in which all colleagues’ contributions were valued.
Conclusion:
Together, these methods extend our understanding of personal growth in leadership as an outcome of the program in terms of individual- and organizational-level impacts, using representative quantitative self-assessments to categorize rich qualitative descriptions.
The University of Michigan created the Practice-Oriented Research Training (PORT) program and implemented it between 2008 and 2018. The PORT program provided research training and funding opportunities for allied healthcare professionals. The program consisted of weekly didactics and group discussion related to topics relevant to developing specific research ideas into projects and funding for a mentored research project for those who submitted a competitive grant application. The goal of this evaluation was to assess the long-term impact of the PORT program on the research careers of the participants. Ninety-two participants (74 staff and 18 faculty) participated in both phases of the program. A mixed-methods approach to evaluation was used; 25 participants who received funding for their research completed surveys, and semi-structured interviews were conducted with eight program participants. In addition, data were collected on participants’ publication history. Fifteen out of the 74 staff participants published 31 first-authored papers after participating in PORT. Twelve out of 15 staff participants who published first-authored papers did so for the first time after participating in the PORT program. Results of quantitative and qualitative analyses suggest that the PORT program had positive impacts on both participants and the research community.
The clinical and translational research workforce involved in social and behavioral research (SBR) needs to keep pace with clinical research guidance and regulations. Updated information and a new module on community and stakeholder engagement were added to an existing SBR training course. This article presents evaluation findings of the updated course for the Social and Behavioral Workforce.
Methods and Materials:
Participants working across one university were recruited. Course completers were sent an online survey to evaluate the training. Some participants were invited to join a focus group to discuss the application of the training to their work. We computed descriptive statistics and conducted a qualitative analysis of the focus group data.
Results:
There were 99 participants from diverse backgrounds who completed the survey. Most reported the training was relevant to their work or that of the study teams they worked with. Almost half (46%) indicated they would work differently after participating. Respondents with community- or stakeholder-engaged research experience were more likely than those without to report that the new module was relevant to the study teams they worked with (t = 5.61, p = 0.001) and that they would work differently following the training (t = 2.63, p = 0.01). Open-ended survey responses (n = 99) and focus group (n = 12) data showed how participants felt their work would be affected by the training.
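The group comparisons reported above are simple two-sample tests; a minimal, hypothetical sketch of such a comparison in Python is shown below, with the survey file and column names invented for illustration rather than taken from the study.

```python
# Hypothetical sketch of a two-sample t-test like those reported above.
# The file and column names are illustrative assumptions, not the study's data.
import pandas as pd
from scipy import stats

df = pd.read_csv("course_survey.csv")  # hypothetical survey export

with_cenr = df.loc[df["cenr_experience"] == 1, "module_relevance_rating"]
without_cenr = df.loc[df["cenr_experience"] == 0, "module_relevance_rating"]

t_stat, p_value = stats.ttest_ind(with_cenr, without_cenr)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```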
Conclusion:
The updated course was rated highly, particularly by those whose work was related to the new course content. The course provides an up-to-date resource for the training and development of the Social and Behavioral Workforce.
Health technology assessment (HTA) programs inform decision making about the value and reimbursement of new and existing health technologies; however, they are under increasing pressure to demonstrate that they are themselves a cost-effective use of finite healthcare resources. The 2023 HTAi Global Policy Forum (GPF) discussed the value and impact of HTA, including how it is assessed and communicated and how it could be enhanced in the future. This article summarizes the discussions held at the 2023 HTAi GPF, where the challenges and opportunities related to the value and impact of HTA were debated. Core themes and recommendations were that defining the purpose of value and impact assessment is an essential first step prior to undertaking it, and that such assessment can be done through the use and expansion of existing tools. Further work on aligning HTA programs with underlying societal values is needed to ensure the long-term value and impact of HTA. HTA could also have a role in assessing the efficiency of the wider health system by applying HTA methods or concepts to broader budgetary allocations and organizational aspects of health care. Stakeholders (particularly patients, industry, and clinicians, but also payers, wider society, and the media) should ideally be actively engaged when undertaking the value and impact assessment of HTA. More concerted efforts in communicating the role and remit of HTA bodies would also help stakeholders to better understand the value and impact of HTA, which in turn could improve the implementation of HTA recommendations and their application to future actions in the lifecycle of technologies.
Clinical and translational research relies on a well-trained workforce, but mentorship programs designed expressly for this workforce are lacking. This paper presents the development of a mentoring program for research staff and identifies key programmatic outcomes. Research staff participating in this program were matched with a senior mentor. Focus groups were conducted to identify key program outcomes. Surveys were administered throughout the program period to assess participants’ experience, gains in skill, and subsequent careers. Analysis of the resultant qualitative and quantitative data is used to characterize the implementation and impact of the program. A total of 47 mentees and 30 mentors participated in the program between 2018 and 2023. A comprehensive logic model of short-, intermediate-, and long-term outcomes was developed. Participants reported positive valuations of every programmatic outcome assessed, including their program experience, learning, and research careers. The pool of available mentors also grew as new mentors were successfully recruited for each cohort. This mentorship program, developed and implemented by senior research staff, successfully provided junior research staff with mentorship and professional development support and opportunities. Junior and senior health research staff built mentoring relationships that advanced their clinical and translational research careers.
The coronavirus disease (COVID-19) pandemic produced swift, extensive changes in daily life, including for first-episode psychosis (FEP) clients. This study examined pandemic-related psychosocial impacts to clients while engaged in Coordinated Specialty Care (CSC). We also examined FEP client vaccination rates, as vaccinations can reduce hospitalizations/deaths, and related worries.
Methods:
Thirty-one clients (45% female; ages 13-39; 26% black, 61% white) from Pennsylvania (PA) CSC outpatient programs completed an online survey evaluating exposure to COVID-19, associated worries, coping, and safety strategies. Descriptive statistics characterized responses and demographic group differences. Additional program evaluation data informed vaccination rates for PA FEP clients.
Results:
Participants reported substantial pandemic-related impacts to daily life. Many clients reported improved safety measures to protect themselves/others from COVID-19. Clients largely denied substantial worries about infection for themselves, reporting greater concern for loved ones. Multiple coping strategies were endorsed, which, with few exceptions, did not differ among demographic groups. FEP clients had a low reported rate of vaccination (28.6%) as of September 2021.
Conclusions:
Observed prolonged pandemic effects may alter FEP client progress in CSC. Stakeholders should be prepared to adjust FEP treatment accordingly in the event of a similar disaster. Concentrated vaccination efforts may be necessary for this population.
The Michigan Integrative Well-Being and Inequality (MIWI) Training Program aims to provide state-of-the-art, interdisciplinary training to enhance the methodological skills of early-career scientists interested in integrative approaches to understanding health disparities. The goals of this paper are to describe the scientific rationale and core design elements of MIWI, and to conduct a process evaluation of the first cohort of trainees (called “scholars”) to complete this program.
Methods:
Mixed methods process evaluation of program components and assessment of trainee skills and network development of the first cohort (n = 15 scholars).
Results:
The program drew 57 applicants from a wide range of disciplines. Of the 15 scholars in the first cohort, 53% (n = 8) identified as an underrepresented minority, 60% (n = 9) were within 2 years of completing their terminal degree, and most (n = 11, 73%) were from a social/behavioral science discipline (e.g., social work, public health). In the post-program evaluation, scholars rated their improvement in a variety of skills on a one (not at all) to five (greatly improved) scale. Areas of greatest growth included being an interdisciplinary researcher (mean = 4.47), developing new research collaborations (mean = 4.53), and designing a research study related to integrative health (mean = 4.27). The qualitative process evaluation indicated that scholars reported a strong sense of community and that the program broadened their research networks.
Conclusions:
These findings have implications for National Institutes of Health (NIH) efforts to train early-career scientists, particularly from underrepresented groups, working at the intersection of multiple disciplines and efforts to support the formation of research networks.
In 2017, the Michigan Institute for Clinical and Health Research (MICHR) and community partners in Flint, Michigan collaborated to launch a research funding program and evaluate the dynamics of those research partnerships receiving funding. While validated assessments for community-engaged research (CEnR) partnerships were available, the study team found none sufficiently relevant to conducting CEnR in the context of the work. MICHR faculty and staff along with community partners living and working in Flint used a community-based participatory research (CBPR) approach to develop and administer a locally relevant assessment of CEnR partnerships that were active in Flint in 2019 and 2021.
Methods:
Surveys were administered each year to over a dozen partnerships funded by MICHR to evaluate how community and academic partners assessed the dynamics and impact of their study teams over time.
Results:
The results suggest that partners believed that their partnerships were engaging and highly impactful. Although many substantive differences between community and academic partners’ perceptions over time were identified, the most notable regarded the financial management of the partnerships.
Conclusion:
This work contributes to the field of translational science by evaluating how the financial management of community-engaged health research partnerships, in the locally relevant context of Flint, can be associated with these teams’ scientific productivity and impact, with national implications for CEnR. It also presents evaluation methods that can be used by clinical and translational research centers striving to implement and measure their use of CBPR approaches.
To evaluate an enrichment training program targeted at home palliative care professionals in terms of its effects and participants’ satisfaction. The program had 2 main aims: give voice to professionals’ emotional fatigue and promote their personal resources.
Methods
One hundred twenty-three home palliative care professionals participated in 12 parallel training courses; each course consisted of four 3-hour meetings led by 2 trainers and involved about 10–15 participants. The program adopted the method and tools typical of the enrichment approach, with the insertion of an art therapy exercise in the central meetings. The topics addressed were the following: emotional awareness in care relationship; the recognition of the needs of the patient, the family, and the professional himself; the inevitability of the death of the patient; and the challenges and resources of the multidisciplinary care team. At the first (T1) and last (T2) meetings, participants filled in a self-report questionnaire assessing work emotional fatigue, empowerment, generativity, and satisfaction with the course.
Results
Participants were highly satisfied with the course. They reported a higher level of work emotional fatigue and a higher perception of personal resources, in terms of empowerment (both individual-oriented and relationship-oriented) and generativity at the end of the program than before.
Significance of results
Results confirm the need to provide home palliative care professionals with trainings in which they can express, share, and deal with personal and professional needs. This course gave voice to professionals’ work emotional fatigue and promoted their personal resources, while enhancing collaboration in the multidisciplinary team.
Optimizing the effectiveness of a team-based approach to unite multiple disciplines in advancing specific translational areas of research is foundational to improving clinical practice. The current study was undertaken to examine investigators’ experiences of participation in transdisciplinary team science initiatives, with a focus on challenges and recommendations for improving effectiveness.
Methods:
Qualitative interviews were conducted with investigators from twelve multidisciplinary teams awarded pilot research funding by the University of Kentucky College of Medicine to better understand the barriers and facilitators to effective team science within an academic medical center. An experienced qualitative researcher facilitated one-on-one interviews, which lasted about one hour. Structured consensus coding and thematic analysis were conducted.
Results:
The sample was balanced by gender, career stage (five were assistant professors at the time of the award; seven were senior faculty), and training (six were PhDs; six were MD physicians). Key themes at the team level centered on the tension between clinical commitments and research pursuits and on limitations to effective team functioning. Access to tangible support from home departments and key university centers was identified as a critical organizational facilitator of successful project completion. Organizational barriers centered on operationalizing protected time for physicians, gaps in effective mentoring, and limitations in operational support.
Conclusions:
Prioritizing tailored mentoring and career development support for early career faculty, and particularly physician faculty, emerged as a key recommendation for improving team science in academic medical centers. The findings contribute to establishing best practices and policies for team science in academic medical centers.
This study proposes a new practical approach for tracking institutional changes in research teamwork and productivity using commonly available institutional electronic databases such as eCV and grant management systems. We tested several definitions of interdisciplinary collaboration based on the number of collaborations and their fields of discipline. We demonstrated that the extent of interdisciplinary collaboration varies significantly by academic unit, faculty appointment, and seniority. Interdisciplinary grants constitute 24% of all grants, and this share has increased significantly over the last five years. Departments with more interdisciplinary grants receive more research funding. More research is needed to improve the efficiency of interdisciplinary collaborations.
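As a rough illustration of how such a definition might be operationalized against a grant management export, the sketch below flags a grant as interdisciplinary when its investigators span more than one field of discipline. The file and column names are assumptions for the example, not the study's actual database schema.

```python
# Hedged sketch: one possible operationalization of "interdisciplinary grant".
# Assumes a hypothetical export with one row per grant-investigator pair.
import pandas as pd

grants = pd.read_csv("grant_investigators.csv")  # hypothetical export

# A grant is flagged interdisciplinary if its investigators come from >1 field.
fields_per_grant = grants.groupby("grant_id")["investigator_field"].nunique()
summary = fields_per_grant.to_frame("n_fields")
summary["interdisciplinary"] = summary["n_fields"] > 1

print(f"Share of interdisciplinary grants: {summary['interdisciplinary'].mean():.1%}")
```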
Clinicians who are interested in becoming principal investigators struggle to find and complete training that adequately prepares them to conduct safe, well-designed clinical and translational research. Degree programs covering these skills require a significant time investment, while online trainings lack engagement and may not be specific to local research contexts. Staff at Tufts Clinical and Translational Science Institute sought to fill the gap in junior investigator training by designing an eight-module, noncredit certificate program to teach aspiring clinician-investigators about good clinical practice, clinical research processes, and federal and local regulatory requirements. The first iteration of this program was evaluated using pre- and posttest questionnaires and by gathering clinician learner feedback in a focus group. Based on the pre- and posttest questionnaires, learners experienced an increase in self-efficacy and confidence related to clinical research competencies. Feedback from learners also highlighted important program strengths, including an engaging program format, a manageable time commitment, and an emphasis on identifying crucial research resources. This article describes one approach to creating a meaningful and efficient clinical trial training program for clinicians.
The Clinical and Translational Science Award (CTSA) program aims to enhance the quality, efficiency, and impact of translation from discovery to interventions that improve human health. CTSA program hubs at medical research institutions across the United States develop and test innovative tools, methods, and processes, offering core resources and training for the clinical and translational research (CTR) workforce. Hubs have developed services across different domains, such as informatics and pilot studies, to provide ad hoc expertise and staffing for local research teams. Although these services can provide efficient, cost-effective ways to cover skills gaps and implement rigorous studies, three CTSAs of varying size found that the majority of investigators were single-domain service users, likely missing opportunities to further enhance their work.
Methods:
Through interviews with CTSA service users and a survey of CTSA service managers, this exploratory study aims to identify barriers to using services from multiple modules and solutions to overcome those barriers.
Results:
Barriers include challenges in finding information about services, unclear or unknown user needs, and users’ lack of funding to engage in services. More issues were identified for the largest CTSA.
Conclusions:
Although this study represents a small subset of CTSA hubs, we anticipate that our findings and proposed solutions will be relevant to the broader CTSA community. The study provides foundational information that hubs can use in their own efforts to increase service utilization, as well as methods that can be applied in more comprehensive studies focused on explaining the relationship between CTSA features and rates of single- versus cross-module service use.
This special communication provides an approach for applying implementation science frameworks to a Clinical and Translational Science Institute (CTSI) community engagement (CE) program, measuring the use of implementation strategies and outcomes that promote the uptake of CE in research. Using an iterative multidisciplinary group process, we executed a four-phased approach to developing an evaluation plan: 1) creating an evaluation model adapted from Proctor’s conceptual model of implementation research; 2) mapping implementation strategies to CTSI CE program interventions that support change in research practice; 3) identifying and operationalizing measures for each strategy; and 4) conducting an evaluation. Phase 2 drew on the 73 implementation strategies across nine domains generated by the Expert Recommendations for Implementing Change project; the nine domains were used to classify each CE program implementation strategy. In Phase 3, the group used the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework to define measures for each individual strategy. Phase 4 demonstrates the application of this framework and measures Year 1 outcomes for the strategy of providing interactive assistance, which we implemented using a centralized consultation model. This approach can support the CTSA program in operationalizing CE program measurement to demonstrate which activities and strategies may lead to benefits for the program, institution, and community.
This chapter reviews assessment research with the goal of helping all readers understand how to design and use effective assessments. The chapter begins by introducing the purposes and contexts of educational assessment. It then presents four related frameworks to guide work on assessment: (1) assessment as a process of reasoning from evidence, (2) assessment driven by models of learning expressed as learning progressions, (3) the use of an evidence-centered design process to develop and interpret assessments, and (4) the centrality of the concept of validity in the design, use, and interpretation of any assessment. The chapter then explores the implications of these frameworks for real-world assessments and for learning sciences research. Most learning sciences research studies deeper learning that goes beyond traditional student assessment, and the field can contribute its insights to help shape the future of educational assessment.