
Evaluation as a translational resource: Bridging social sciences with clinical and translational sciences to advance university, health system, and community initiatives

Published online by Cambridge University Press:  22 August 2025

Jessica Sperling*
Affiliation: Duke University, Clinical and Translational Science Institute, Durham, NC, USA; Duke University, Social Science Research Institute, Durham, NC, USA
Perusi B. Muhigaba
Affiliation: Duke University, Clinical and Translational Science Institute, Durham, NC, USA
Stella Quenstedt
Affiliation: Duke University, Clinical and Translational Science Institute, Durham, NC, USA
Noelle Wyman Roth
Affiliation: Duke University, Nicholas School of the Environment, Durham, NC, USA
Adrian Brown
Affiliation: Duke University, Social Science Research Institute, Durham, NC, USA
F. Joseph McClernon
Affiliation: Duke University, Clinical and Translational Science Institute, Durham, NC, USA

*Corresponding author: J. Sperling; Email: Jessica.sperling@duke.edu

Abstract

Evaluation supports the translation of knowledge into practice by systematically assessing what works, for whom, and under what conditions. It generates evidence to guide improvements, inform decision-making, and identify how programs, research studies, or interventions should be scaled. Within a Clinical and Translational Science Award (CTSA) hub, evaluation typically focuses on internal assessment and administrative functions. However, expanding evaluation to also support efforts based outside of a CTSA hub (i.e., to the larger institution and community), akin to other CTSA cores and services, can support overarching translational goals. This paper outlines the process and benefits of institutionalizing a partnership between clinical and translational science and social science to provide expertise, resulting in evaluation as a translational resource. Herein, we describe developing the Duke Office of Evaluation and Applied Research Partnership, an organizational unit that, by bridging a university’s CTSA hub and interdisciplinary social science institute, expanded the scope and capacity of evaluation to advance clinical and translational science. We outline the specific activities supported by this initiative, facilitators involved in its establishment, and barriers to implementation and success. This model and lessons learned can inform broader opportunities to leverage multidisciplinary evaluation expertise to support clinical and translational science.

Information

Type
Special Communication
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

Evaluation is a structured way to assess the value and effectiveness of programs, projects, or initiatives. It helps determine what is working, what needs improvement, and how efforts can be more impactful, using quantitative, qualitative, or mixed methods. By identifying strengths, gaps, and results, evaluation supports better decision-making and helps move ideas into action [1]. Because evaluation draws from and is used across fields like education, public health, social services, and business, it is highly adaptable and widely applicable. It can be used to improve teaching programs, assess health interventions, strengthen organizational practices, or understand how social services impact lives [2]. This ability to generate practical, actionable insights makes evaluation especially valuable in translational research, which focuses on turning scientific findings into real-world solutions that improve health [3]. Evaluation can also support translational science by helping develop and test general strategies for applying research in practice more effectively and efficiently [4].

Infrastructural entities such as the NIH-supported Clinical and Translational Science Award (CTSA) hubs, which provide support across disciplines and content areas to improve research processes and advance positive health outcomes, are well positioned to provide evaluative partnership and service to health-related initiatives and research programs. CTSA hubs provide knowledge, services, and resources that apply across translational stages and disease states. Yet, within CTSA Funding Opportunity Announcement guidelines, evaluation is discussed as an administrative function, which includes examining CTSA hubs’ internal activities with a focus on needs assessment, process tracking, determining outcomes and impact, and continuous improvement inside the CTSA hub [5]. Current research addressing organizational attributes of CTSA evaluation, most notably a recurring survey examining the structure and capacity of evaluation within CTSA hubs [6], has provided insights into internal hub functions but has not examined applications of evaluation expertise beyond the hub. While dedicated attention to enhancing the operational efficiency and impact of hubs is critical, we contend that expanding CTSA evaluation to an external, partnership/service-oriented model can support the translation of research into real-world applications. Applied this way, evaluation can amplify the translational impact of CTSA hubs, ultimately supporting CTSA goals of improved public health outcomes.

Evaluation conducted in clinical and health science settings can benefit from direct interdisciplinary engagement with the social sciences. Social science-rooted study design and research methods can help assess the effectiveness and impact of programs, policies, and interventions. These methods also support the development of measurement tools and enable findings to be interpreted within broader social contexts using theories from fields like sociology, psychology, and economics. Many health initiatives are themselves cross-disciplinary, such as a health education program that considers cultural context or a social psychology-based health behavior intervention. In these cases, involving a team with social science expertise aligns with the intervention’s design and goals.

CTSA hubs are well positioned to leverage social science partnership to advance evaluation, as they already engage with the wide range of research fields and domains of translational research. Hubs have an explicit focus on enabling interdisciplinary engagement (e.g., via team science) and have historically engaged the social sciences in many ways, including addressing social determinants of health [7]. Interdisciplinary social science engagement via evaluation thus builds on this base and expands the benefits CTSA hubs can offer. Yet, there is limited documentation of institutionalized social science partnership in CTSA-provided evaluation support for clinical and translational researchers [6].

This paper describes Duke University’s institutionalization of an evaluation partnership between social science and clinical and translational science. The Duke Office of Evaluation and Applied Research Partnership (O-EARP) was created as a joint initiative of Duke’s Clinical and Translational Science Institute (CTSI, home to Duke’s CTSA hub) and the Applied Research, Evaluation, and Engagement (AREE) team within Duke’s Social Science Research Institute (SSRI). O-EARP formalized evaluation partnership, leadership, and service to institutional initiatives, research studies, and community organizations. We describe O-EARP’s development, the functions of the resultant evaluation-focused entity, and contextual factors supporting its establishment and implementation. Documenting the evolution of O-EARP can support and guide other institutions seeking to build similar infrastructure.

Development and organization

Duke’s CTSI is based in Duke University’s School of Medicine and works to accelerate translational research and translational science at Duke through funding, innovative resources, and nationwide collaborations. Duke’s CTSI, like all CTSA hubs, has historically focused on internal evaluation, specifically continuous improvement and assessing impact within CTSI. Duke’s SSRI, an interdisciplinary hub under the Vice Provost for Interdisciplinary Studies, aims to bring together researchers with interests in various social and behavioral sciences and to promote multidisciplinary collaboration on important and complex social issues. Within SSRI, the Applied Research, Evaluation, and Engagement (SSRI-AREE) team focused primarily on evaluation and applied social science research partnership and capacity-building.

The Duke School of Medicine historically lacked an entity dedicated to evaluation, but in 2021, CTSI leadership expressed a desire to spearhead efforts in that space, increasing capacity for evaluation-related work and expanding beyond internal assessment to engage with other initiatives and studies via evaluation partnership. Collaboration with SSRI was discussed early, as the evaluation lead at SSRI had assumed an evaluation role at CTSI the year prior; a joint CTSI/SSRI entity could expand upon existing partnerships and capacity-building efforts already in place at SSRI-AREE, mitigate potential issues of CTSI/SSRI overlap in evaluation leadership, and further university-wide interests in bridging the biomedical and social sciences. Initial conversations included two CTSI leadership members (the CTSI Director and an additional lead focused on strategic partnership and integration), SSRI’s Director, and the lead of the CTSI and SSRI evaluation teams to be brought together via this Office. By Spring 2022, CTSI and SSRI leadership had decided to move ahead with a joint office addressing evaluation partnership and capacity-building. Subsequent processes included: (1) meetings with potentially related or aligned entities across the institution (e.g., those focused on evaluation, implementation science, and/or community engagement) to inform structure and approach and avoid duplication of effort, facilitated by the lead of the CTSI and SSRI evaluation teams; and (2) administrative processes to formalize the joint CTSI/SSRI evaluation entity (O-EARP), with formalization indicated by the establishment and signing of a memorandum of understanding (MOU) between SSRI and CTSI for O-EARP and a stated launch date. Figure 1 further depicts development progression and foci, and a Supplement provides additional documentation (e.g., the MOU and an organizational chart from O-EARP’s outset).

Figure 1. Organization and Timing of Development. This figure provides (1) the foci of the SSRI- and CTSI-based entities joining to form O-EARP and (2) the operational steps and timing of O-EARP’s development trajectory.

Organization

O-EARP bridges SSRI and CTSI, connecting Duke’s “campus” (nonmedical research and teaching units) and School of Medicine, which have distinct administrative features. It is led by a faculty director with appointments in the School of Medicine and SSRI. SSRI-AREE and CTSI’s evaluation team staff also became O-EARP personnel while remaining administratively housed within their original units. The team has included 12 full-time staff with graduate training in fields including sociology, economics, psychology, social work, environmental management, and public administration, all with evaluation and/or applied social science experience. The SSRI-AREE team and CTSI’s evaluation team collaborate through joint projects, O-EARP meetings, and team-building activities. O-EARP also integrates graduate and undergraduate research assistants and interns on an ad hoc, project-based basis. An advisory group, which includes leadership from Duke Centers and Offices addressing research, research innovation, community engagement, and other O-EARP-aligned areas, meets approximately twice per year, informs O-EARP’s work, and facilitates connections across Duke. O-EARP’s communication needs are addressed jointly by the CTSI and SSRI communication teams, and discussion with these teams prior to O-EARP’s launch informed specific processes (e.g., communication strategies for O-EARP and the division of labor between these teams for specific O-EARP needs such as the development of a webpage).

O-EARP is funded through multiple mechanisms. All team members are supported by the project partnerships in which they are involved (described below). CTSI-based personnel are also funded by CTSI to engage in internal evaluation and in other strategic priorities that utilize evaluation team capacities (e.g., involvement in Project ENTRUST, a Duke Health initiative using a mixed-methods approach to enhance trustworthiness in Duke Health care and research, where CTSI has played a key development role). The SSRI-AREE team is also funded by SSRI to engage in related SSRI program and priority initiatives (e.g., SSRI consultation and student-support priorities; SSRI strategic priorities such as the American South).

Activities and Results

O-EARP has focused its activities in two key areas: evaluation implementation partnerships and capacity-building.

Evaluation implementation partnerships

Evaluation implementation partnerships entail direct collaboration with initiatives and entities to conduct evaluation and to increase those partners’ evaluation capacity. O-EARP team members focus much of their time on this work, including with university and health system programs (e.g., educational and researcher development programs); university and health system researchers and their teams; and community-based organizations. O-EARP uses a cost-recovery model in project partnership, meaning that a project’s funding source supports O-EARP involvement (i.e., personnel effort).

Evaluation partnerships involve designing and implementing evaluation activities, including developing logic models, collecting and analyzing data, and presenting findings to project stakeholders. Determination of partnership products (e.g., logic models, memos, reports, presentations, and academic papers) is informed by the partner and the intended project goal. Since 2023, O-EARP has engaged in over 40 partnerships across School of Medicine departments/divisions (e.g., Pediatrics, Psychiatry and Behavioral Sciences, Biostatistics and Bioinformatics, Obstetrics and Gynecology, Orthopedic Surgery); departments in other Schools including Arts and Sciences, Engineering, and Public Policy; and community organizations focused on areas such as education, STEM workforce development, and social and health equity.

These partnerships most directly prioritize enabling translational research, or the process of turning observations and knowledge into interventions that have direct benefit to individuals and the public. Some partnerships, depending on their focus, also have implications for advancing translational science, or increasing the efficiency of translation. For example, we partnered on an Ethics Supplement to an NIH-funded R01 (R01DK123062; “Ethical considerations of mortality clinical prediction model for patients undergoing hemodialysis project” [8]). We examined how machine-learning (ML)-based clinical prediction models estimating mortality for individuals with kidney disease could be developed in a trustworthy, interpretable, and usable way, informed by perspectives from providers, patients, and caregivers [9]. This formative evaluation guided planning for the clinical application of the ML model and also has translational value, offering insights that can inform ML tool development in other domains (e.g., supporting the design and implementation of an ML-based clinical prediction model for early childhood autism screenings, P50HD093074 [10]). As another example, our partnership with the Pediatrics Supporting Parents Initiative, a community-led project with central engagement from Duke Pediatrics, examines the implementation and effectiveness of changes in clinic practices and parent/caregiver education material to promote and improve early relational health (the quality of the emotional and social connections between young children and their caregivers) and social emotional development. Within this initiative, data collected from its Parent Advisory Team (akin to a community advisory board) provided information on what factors enable effective community engagement [11], which could inform implementation of community advisory boards in research spanning domain areas. See Table 1 for additional example partnership descriptions.

Table 1. Example partnerships

*Indicates non-Duke community entity.

** Relevance to translational science is addressed as alignment with one or more translational science principles, most often boundary-crossing partnerships, addressing efficiency, and meeting unmet needs [12].

Projects begin in varied ways, including O-EARP involvement in proposal development or a partner identifying, after project launch, that evaluation would support project goals. Outreach efforts include presentations to varied university entities, including research and research development units, and inclusion in a Duke-wide research resource-identification and -navigation portal. Most partnerships come through word of mouth, including referrals from prior partners or university research development/support units, and ongoing interest following consultations (see Capacity-building below). In select cases, O-EARP responds to RFPs for community partnerships that would advance O-EARP’s mission or extend existing efforts. Partnership fit is evaluated using the criteria outlined in Table 2; all criteria must be met for a partnership to proceed.

Table 2. Prospective partnership fit

Dissemination products are focused on knowledge intended for use. They are shared with partners in multiple formats, including presentations, memos, reports, and participatory processes such as data walks [14,15]. For projects with partners who have academic publication aims, partnerships have also led to collaboratively developed academic publications in an array of journals including the Journal of Clinical and Translational Science [16], JAMA Internal Medicine [17], JAMA Cardiology [18], and the Journal of the American Medical Informatics Association (JAMIA) [9].

Capacity-building

O-EARP also provides capacity-building in evaluation to Duke and community-based groups and individuals. This accounts for a notably smaller share of the O-EARP team’s time relative to implementation partnerships, and these efforts are generally funded by O-EARP’s host institutions based on training- and education-related institutional priorities.

Workshops and training

Building on existing efforts led by the AREE team, O-EARP conducts free workshops for the Duke community in evaluation, qualitative data collection, qualitative analysis, mixed-methods research, and community-engaged research and supports additional workshops on survey design (conducted by a separate SSRI group) via integrated registration and outreach. Workshops are provided semesterly or annually, depending on the topic, and are open to all Duke students, faculty, and staff. They are advertised via SSRI and CTSI communication channels and listed in a Duke-wide events calendar. Between Spring 2023 and Spring 2025, there were 1,593 total recorded registrants for O-EARP-facilitated workshops, with representation across Duke University schools; this includes workshops facilitated by SSRI’s Duke Initiative on Survey Methodology (DISM), as DISM workshop implementation was collaborative with O-EARP (O-EARP personnel managed registration processes but determined not to host separate survey-focused workshops to avoid duplication). The highest-reported affiliation was the Sanford School of Public Policy (20%), followed by the Graduate School (17%), Arts and Sciences (17%), and the School of Medicine (13%). The highest-reported registrant role was graduate student (46%), followed by research staff (18%), faculty member (10%), and nonresearch staff (10%). Postsession surveys tailored to each session have shown positive results. For example, nearly all evaluation workshop respondents (95%) rated the session as either good or excellent; the most common intended uses were to inform ongoing/future evaluation efforts and/or to inform program design or program direction (74% each). In 2024, O-EARP also conducted a free virtual workshop on evaluation for community entities, focused on North Carolina-based or public-serving nonprofit and grassroots organizations, with 95 registrants from a range of organizations and role types. All postsession survey respondents reported value (agreeing somewhat or strongly that the session was valuable) and planned to use knowledge gained from the session in varied ways (e.g., to build evaluation plans for their programs/organizations, to develop new logic models for strategy and/or communication purposes, to improve evaluation recommendations, and to advocate for evaluation as a strategic imperative).

O-EARP does not directly host curricular courses, as neither of O-EARP’s parent entities (SSRI and CTSI) is structured as an academic department with curricular course offerings as a core function. Yet, O-EARP has utilized and developed partnerships to host curricular offerings for students, including summer mini-courses through the Duke Graduate Academy on topics of evaluation, qualitative and mixed-methods research, and community-engaged research. O-EARP personnel have also taught evaluation and methodology courses offered in graduate programs within Duke, including the Master of Public Policy program and the Master of Environmental Management program. O-EARP personnel have led credit-bearing, student-engaged research teams as part of the Bass Connections program, a university-wide initiative to develop interdisciplinary research through year-long research teams composed of students across levels and disciplines and often including community partners [19]. For example, a 2023–24 project partnered with a systems-change organization focused on equity in the American South and included community-participatory interpretation of equity-oriented data [15,20]. Finally, O-EARP personnel have provided guest lectures in existing courses.

Consultations and advising

O-EARP personnel provide consultations for Duke affiliates and community entities. Consultations provide an opportunity for individuals and teams to meet with an O-EARP staff member and discuss their applied research and evaluation questions and needs. Consultations are usually virtual and last approximately 45 minutes. Requests are assigned to O-EARP staff via an intake process matching the need/question with staff members’ expertise. Between January 2023 and January 2025, O-EARP staff conducted 185 consultations. Consultations were provided across disciplines/schools, with the largest representation from the School of Medicine (32%), followed by the Trinity College of Arts and Sciences (21%) and the Nicholas School of the Environment (20%). Graduate students (30%) and faculty members (28%) were the highest-reported consultees, followed by research staff (17%). Consultees receive a follow-up survey sent via email, with 91% of respondents reporting high satisfaction with their consultation (being extremely or very satisfied). Most consultees shared that they would use consultation information to inform ongoing research (73%), inform ongoing evaluation (29%), and/or develop a research product (26%). More than two-fifths (41%) of consultees reported that they did not have, or were not sure if they had, another available resource to address their question or need, further underscoring the value of this service.

O-EARP provides ongoing/sustained mentorship or coaching for learners and researchers (distinct from curricular offerings or consultations), though much less frequently than consultations. This support extends in duration beyond short-term engagements, such as one-time consultations and workshops, and has generally focused on a specific learner/trainee project. O-EARP mentorship prioritizes building the knowledge and capacity of a partner/trainee but moves closer to implementation partnership than do other capacity-building efforts and can include a cost-recovery structure, particularly if the project is not part of a curricular degree program requirement. Communication processes are typical of mentorship or coaching, such as recurring meetings and communication focused on progress toward a project goal and support to reach that goal. Examples include supporting (1) a postdoctoral scholar in Duke Ob/Gyn’s Division of Women’s Community and Population Health who collected and analyzed data on reproductive health counseling experiences of women with sickle cell disease and (2) a joint MD and Master of Public Policy candidate who developed evaluative processes addressing criminal justice system-involved individuals’ experiences with mental health resources. O-EARP personnel have also been part of trainees’ formal mentorship teams in grant proposals, such as NIH K awards.

Discussion and Conclusion

The establishment of O-EARP at Duke University represents an expansion in the scope of evaluation and leverages interdisciplinary collaboration to enhance evaluation within clinical and translational science. O-EARP extended the reach and impact of CTSI by providing robust evaluation that can increase the effectiveness of various programs and initiatives. Rather than maintaining a solely internal evaluative focus, CTSI evaluation now synergizes with other hub components to directly support researcher and research development, facilitate research translation across individual studies/projects, and build community capacity. The interdisciplinary nature of O-EARP enabled more effective leveraging of evaluative expertise across social science and healthcare contexts to enhance responsiveness in evaluation for diverse constituents. O-EARP’s formalization and related outreach supported institutional visibility for the entities comprising this office, creating clearer pathways for new partnerships.

O-EARP engagement is focused most overtly on advancing translational research (i.e., supporting the knowledge-to-practice trajectory) but also has numerous additional implications for advancing or employing translational science. O-EARP engagement with project partners and consultees is based upon translational science principles such as boundary-crossing and meeting unmet needs. Selected partnered projects or consultations have direct implications for increasing the efficiency of translational processes. The formation of O-EARP itself can also be seen as enacting or advancing translational science, as it directly joined two distinct areas (social science, and clinical and translational science) to support more efficient engagement across these areas and to utilize that joint knowledge and background to enhance evaluation partnerships.

The factors and strategies that contributed to O-EARP’s successful establishment and achievements can inform identification of other contexts where similar efforts may succeed and of processes that can facilitate success. The flexible structures of CTSI and SSRI, as opposed to traditional academic departments, facilitated the development of joint evaluation functions. Leadership in both CTSI and SSRI strongly supported interdisciplinary collaboration, and SSRI’s preexisting evaluation program, which focused on partnership, training, and capacity-building, provided a valuable foundation on which O-EARP could build. As Duke University does not offer many of the academic programs that often focus on evaluation (e.g., social work, public health, education research), O-EARP was able to fulfill a need that may otherwise have been met by a school or department with evaluation expertise. A Duke office overseeing interdisciplinary studies, under which SSRI sits, provided valuable guidance in managing boundary-crossing entities, and development of a formal MOU between CTSI and SSRI outlined shared responsibilities and operational planning, ensuring clarity in roles and functions.

Even with these facilitators, certain aspects of O-EARP implementation have proved challenging or required careful consideration. For instance, administrative coordination across the SSRI and CTSI host units, the associated “campus” and “School of Medicine” Duke entities, and with implementation partners requires time and infrastructure; additional administrative capacity could offer value. In addition, balancing visibility efforts with available resources is an ongoing challenge. While outreach increases awareness of and interest in O-EARP, staff may already be working at capacity; this can hinder the ability to engage new partners and requires decisions on whether to expand capacity via hiring students, engaging personnel from other units with needed skills (e.g., specific methodological expertise), or hiring additional staff. Finally, the balance between serving the CTSI/CTSA hub’s internal evaluation needs and engaging with other partners has not been a challenge but has required careful consideration. O-EARP has prioritized service to the CTSI for CTSI-based team members, and CTSI-based team members’ engagement in implementation partnerships occurs when it would not inhibit the completion of key CTSI/CTSA needs (e.g., considering CTSI timelines; ensuring a majority of time on CTSI/CTSA needs) and where projects align with CTSI/CTSA topical interests (e.g., biomedical research, health improvement, health equity). Recurring discussion with CTSI leadership ensures alignment on parameters for CTSI-based team members’ project partnership engagement.

By integrating evaluation expertise across clinical, translational, and social sciences, O-EARP has demonstrated the potential for interdisciplinary partnerships to enhance translational research. O-EARP created a model for evaluation partnership that can inform and advance broader efforts to support translational research, translational science, community partnership, and capacity-building. Future work systematically assessing CTSA evaluation programs’ external-to-hub work, or their utilization of interdisciplinary and social science connections, could provide a broader knowledge base of hubs’ work and of other potential models; this could be integrated into efforts already in place to understand the landscape of CTSA evaluation [6].

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2025.10141.

Acknowledgements

Ebony Boulware, Don Taylor, Susanna Naggie, and Diane Uzarski provided critical support for the development of this Office given their CTSI and SSRI leadership roles. All O-EARP personnel, as well as the individuals who have been engaged part-time or as trainees, have played essential roles in O-EARP activities and/or specific projects. The Duke Office of Interdisciplinary Studies provided valuable guidance in the formation of O-EARP, and partnership from numerous entities, including evaluation partners, institutional development and research navigation staff, and university Schools/programs hosting evaluation courses, has enabled and supported O-EARP’s work.

Author contributions

Jessica Sperling: Conceptualization, Funding acquisition, Methodology, Project administration, Writing-original draft, Writing-review and editing; Perusi Muhigaba: Project administration, Writing-original draft, Writing-review and editing; Stella Quenstedt: Project administration, Writing-original draft, Writing-review and editing; Noelle Wyman Roth: Project administration, Writing-review and editing; Adrian Brown: Project administration, Writing-original draft, Writing-review and editing; Joseph McClernon: Conceptualization, Funding acquisition, Project administration, Writing-review and editing.

Funding statement

This work was supported by Duke CTSI, which is supported by the Duke Clinical and Translational Science Award (NIH Award UL1TR002553) and by the Duke University School of Medicine, and by the Duke University Social Science Research Institute.

Competing interests

The authors have no conflicts of interest to declare.

References

1. Patton, MQ. Utilization-Focused Evaluation. 4th ed. Sage Publications, 2008.
2. Rossi, PH, Lipsey, MW, Freeman, HE. Evaluation: A Systematic Approach. 7th ed. Sage Publications, 2004.
3. Fort, DG, Herr, TM, Shaw, PL, Gutzman, KE, Starren, JB. Mapping the evolving definitions of translational research. J Clin Transl Sci. 2017;1:60–66. doi: 10.1017/cts.2016.10.
4. Austin, CP. Opportunities and challenges in translational science. Clin Transl Sci. 2021;14:1629–1647. doi: 10.1111/cts.13055.
5. National Center for Advancing Translational Sciences (NCATS). PAR-24-272: Clinical and Translational Science Award (UM1 Clinical Trial Optional), September 4, 2024. Department of Health and Human Services (HHS), National Institutes of Health (NIH). (https://grants.nih.gov/grants/guide/pa-files/PAR-24-272.html) Accessed March 26, 2025.
6. Hoyo, V, Nehl, E, Dozier, A, et al. A landscape assessment of CTSA evaluators and their work in the CTSA consortium, 2021 survey findings. J Clin Transl Sci. 2024;8:e79. doi: 10.1017/cts.2024.526.
7. Craven, CK, Highfield, L, Basit, M, et al. Toward standardization, harmonization, and integration of social determinants of health data: A Texas Clinical and Translational Science Award institutions collaboration. J Clin Transl Sci. 2024;8:e17. doi: 10.1017/cts.2024.2.
8. Predictive Analytics in Hemodialysis: Enabling Precision Care for Patients with ESKD. NIH RePORTER. (https://reporter.nih.gov/search/ugUOIBEIREa3lMouGbDTqQ/project-details/10605248) Accessed June 22, 2025.
9. Sperling, J, Welsh, W, Haseley, E, et al. Machine learning-based prediction models in medical decision-making in kidney disease: Patient, caregiver, and clinician perspectives on trust and appropriate use. J Am Med Inform Assoc. 2025;32:51–62. doi: 10.1093/jamia/ocae255.
10. Duke Autism Center of Excellence: A translational digital health and computational approach to early identification, outcome monitoring, and biomarker discovery in autism. NIH RePORTER. (https://reporter.nih.gov/search/iqFfSvcwUEKuKIASEYUlVw/project-details/11085484#similar-Projects) Accessed June 22, 2025.
11. Little, D, Barak, M, Best, D, Soloman, T. Pediatrics supporting parents: Caregiver-clinic-community engagement and co-design. Presented at: Smart Start 2025 Conference, Greensboro, NC, May 1, 2025.
12. National Institutes of Health (NIH), National Center for Advancing Translational Sciences (NCATS). About Translational Science Principles. (https://ncats.nih.gov/about/about-translational-science/principles) Accessed April 26, 2024.
13. American Evaluation Association. Guiding principles for evaluators. 2018. (https://www.eval.org/About/Guiding-Principles) Accessed March 10, 2025.
14. Murray, B, Falkenburger, E, Saxena, P. Data Walks: An Innovative Way to Share Data with Communities. Urban Institute. 2016. (https://coilink.org/20.500.12592/m91qr1) Accessed March 26, 2025.
15. Sperling, J, Brown, A, Townsley, B, Tigranyan, G, O’Sullivan, M. Data walks as a participatory research process: Examining social issues in the American South. J Particip Res Methods. 2025;6:288–309. doi: 10.35844/001c.133648.
16. Sperling, J, Roth, NEW, Welsh, WE, McElvaine, AT, Permar, SR, Gbadegesin, RA. Supporting students from underrepresented minority backgrounds in graduate school: A mixed-methods formative study to inform post-baccalaureate design. J Clin Transl Sci. 2024;8:e124. doi: 10.1017/cts.2024.590.
17. Ashana, DC, Welsh, W, Preiss, D, et al. Racial differences in shared decision-making about critical illness. JAMA Intern Med. 2024;184:424–432. doi: 10.1001/jamainternmed.2023.8433.
18. Girotra, S, Dukes, KC, Sperling, J, et al. Emergency medical service agency practices and cardiac arrest survival. JAMA Cardiol. 2024;9:683–691. doi: 10.1001/jamacardio.2024.1189.
19. Balleisen, EJ, Howes, L. Evaluating and scaling best practices in interdisciplinary, project-based learning. In: Nicewonger, T, Amelink, C, eds. Global Insights into Transdisciplinary Higher Education Initiatives. Virginia Tech Publishing, 2025:59–80.
20. State of the South: Advancing equity through participatory data (2023–2024). Bass Connections, Duke University. (https://bassconnections.duke.edu/project/state-south-advancing-equity-through-participatory-data-2023-2024/) Accessed March 28, 2025.