Introduction
The advancement of translational science requires cultivating and improving effective mentorship practices among translational scientists. Mentorship has been studied extensively in many different contexts, including health care delivery [Reference Burgess, van Diggele and Mellis1], corporate settings [Reference Underhill2], not-for-profit settings [Reference Bortnowska and Seiler3], and government agencies [Reference Ehrich and Hansford4]. In higher education, mentorship has been studied across different career stages (undergraduate, graduate, postdoc, faculty) and disciplines (biomedical science, engineering, social sciences, physical sciences, allied health professions, and humanities) [Reference Dahlberg and Byars-Winston5–Reference Hernandez8]. However, relatively little research has been devoted to understanding the mentoring practices and needs that are distinctive to the clinical and translational science context. This review establishes a foundation for studying mentorship in translational science by analyzing findings and identifying research gaps in studies supported by Clinical and Translational Science Awards (CTSA) from the National Center for Advancing Translational Sciences (NCATS).
Mentorship in the context of translational science is distinct from mentorship in other STEM fields in several ways. First, the context and the activities required for effective translational science differ from those in basic science research. Unlike basic science researchers, translational science researchers focus on developing practical solutions for specific health-related problems [Reference Rubio, Schoenbaum and Lee9]. Second, translational science is distinct in its goal of testing and disseminating tools and practices to enhance the clinical and translational research enterprise. While testing methodologies may not be unique, the process of dissemination and translation into practice requires specialized skills and knowledge, highlighting the need for mentorship. Third, beyond distinguishing translational science from basic research, we also emphasize the importance of understanding effective mentorship behaviors [Reference Kraiger, Finkelstein and Varghese10], and we believe that addressing this need can fill a gap in the mentoring literature with applications across many fields of science.
Translational scientists require training in a wide range of competencies [Reference Gilliland, White and Gee11]. Their mentors play a key role in this training process. A strong, active mentoring relationship serves both a career function and a psychosocial function and is one of the best predictors of academic success. However, as noted in a previous review of research mentorship in clinical and translational science, the mentorship experience is often difficult to define and measure [Reference Meagher, Taylor, Probsfield and Fleming12].
In addition, recent T32 and K12 notices of funding opportunity released by the NIH [13,14] place a strong emphasis on mentorship. As of 2025, all training grant applications must include a Mentor/Trainee Assessment Plan, which requires that institutions specify how their programs will monitor mentoring relationships, which approaches and tools will be used to assess both mentors’ and mentees’ perceptions of the mentoring relationship, and how the program leadership will handle major discrepancies. For career development programs, institutions are now required to describe how participating faculty will be trained to use evidence-informed mentorship practices, how gains in perceived skill will be measured, how changes in mentoring behaviors and effectiveness as a result of mentor training will be measured, how the research training environment will be monitored and assessed, and how outstanding mentors will be recognized and rewarded.
To address these challenges, NCATS has developed programs and training resources to promote the development of effective mentors. Mentored research training programs, such as the T32 and the K12, aim to give trainees and scholars the opportunity to work on a research project with a faculty mentor. Many CTSA hubs have also developed mentor training programs [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15–Reference Pfund, House and Spencer20] that are designed to support and improve the culture of mentoring by giving mentors needed skills. However, identification of mentorship practices and principles that are distinct to the field of clinical and translational science is still needed.
While rigorous evaluations of CTSA mentorship programs have been conducted [Reference Pfund, House and Asquith21,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22], there is little understanding of how and why they work, a challenge that can be described as the “black box” problem in program evaluation [Reference Solmeyer and Constance23]. As stated in the NASEM report [Reference Dahlberg and Byars-Winston5], “To fully understand mentorship, evaluation measures would ideally address both mentorship processes and mentorship outcomes and the system factors that can profoundly shape it.” (p. 127). To gain an understanding of the mechanisms of mentorship within the context of clinical and translational science, we examined the current state of the literature about mentorship provided by, or supported within, CTSA-funded research centers, identified gaps in evaluation, and articulated future directions for research that could address these gaps. Based on these findings, we propose a model of CTSA mentorship.
Methods
Study design
This study followed a scoping review methodology, guided by the framework proposed by Arksey and O’Malley [Reference Arksey and O’Malley24] and refined by Levac et al. [Reference Levac, Colquhoun and O’Brien25]. This approach was selected to examine the extent, range, and nature of the existing literature on evaluation research for mentorship programs at CTSA hubs and to identify key concepts, gaps in the research, and avenues for future inquiry. The reporting of this review adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines [Reference Tricco, Lillie and Zarin26].
Inclusion/exclusion criteria
The inclusion criteria for our review were based on the Population, Concept, and Context (PCC) framework as recommended by the Joanna Briggs Institute (JBI) methodology for scoping reviews [Reference Peters, Godfrey, McInerney, Soares, Khalil and Parker27]. Studies were eligible for inclusion if they:
1. Were conducted at a CTSA-funded mentored research program.
2. Included evaluation results (qualitative or quantitative).
3. Included measurable metrics of mentoring inputs, activities, outputs, or outcomes.
4. Examined faculty or postdoctoral participants (not CRPs, undergraduate students, etc.).
Studies were excluded if they were not directly related to the core concept, were not published in English, or if the full text was unavailable.
Search strategy
A comprehensive literature search was conducted using PubMed to identify relevant studies. The search strategy combined keywords and Medical Subject Headings (MeSH) related to mentor programs. We used the following search terms: “mentor* AND program AND research AND (KL2 OR TL1 OR T32 OR K12 OR postdoctoral) AND (CTSA OR NCATS) AND (Clinical OR translational).” In addition to the database search, a secondary search of the reference lists of included articles and key journals was conducted to capture additional relevant studies. The literature search was conducted in 2024. No limits were placed on year of publication; the articles included in this review were published between 2009 and 2022.
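For transparency, the search can also be run programmatically against PubMed’s E-utilities. The following is a minimal sketch using Biopython’s Entrez module, assuming Biopython is installed; the email address and the retmax cap are illustrative placeholders, not part of our protocol.

```python
# Minimal sketch: running the review's PubMed search via NCBI E-utilities.
# Assumes Biopython is installed (pip install biopython).
from Bio import Entrez

Entrez.email = "reviewer@example.edu"  # placeholder; NCBI requires a contact address

query = (
    "mentor* AND program AND research "
    "AND (KL2 OR TL1 OR T32 OR K12 OR postdoctoral) "
    "AND (CTSA OR NCATS) AND (Clinical OR translational)"
)

# esearch returns the PubMed IDs matching the query; retmax caps the result count.
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PMIDs
```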
Study selection
All search results were imported into Covidence, a web-based collaboration software platform that streamlines the production of systematic and other literature reviews [28]. The review process had three steps. In the first step, three of the authors screened the titles and abstracts of the studies against the predefined eligibility criteria. In the second step, full-text screening was conducted by two separate authors on studies that met the initial inclusion criteria. Any disagreements between reviewers were resolved through discussion or by consulting the entire research team. In the third step, relevant data were extracted from the selected papers by three separate authors. We repeated this process for the articles identified through the secondary search of the reference lists of included papers. A diagram of this process is shown in Figure 1.

Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram.
Results
Study types, designs, and methods
The 25 studies summarized in Table 1 share clear commonalities, notably in study types, designs, and methods. Typically, these studies involved mixed-methods research, often using surveys, interviews, and/or focus groups in a pre-post design.
Table 1. Summary of key study variables

However, there was also considerable variation across studies. For example, while most (60%) of these studies focused on a single CTSA hub, some were far broader in scope, with six studies evaluating mentoring activity in at least 45 hubs each. The number of participants in these studies also varied considerably, ranging from 6 to 1362, with an average of 177 participants per study (SD = 288).
As shown in Table 2, there are clear similarities in the designs and methods of the studies evaluated. Most (16, 64%) of these studies used mixed methods, all but one of which used surveys; other evaluation methods included interviews (4), focus groups (1), secondary data collection (3), and qualitative coding (2). Almost a quarter (24%) of the included studies used only qualitative methods, including interviews (5), focus groups (2), and secondary data collection (2). Three studies used solely quantitative methods, all of which involved surveys.
Table 2. Study type, design, and methods extracted from the articles evaluated

Roughly half of the studies (48%) had a pre- and post-test design to measure change over the course of the intervention, and three of these also measured outcomes against a meaningfully comparable group of individuals. Five studies (20%) had a cross-sectional design, and the remaining nine studies (32%) had other study designs.
Types of interventions
Most of the studies (n = 16, 64%) examined a K Scholars program, with four of the 16 studies examining both T trainee and K Scholars programs. Six studies examined other mentoring programs designed for health sciences faculty overall [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Spence, Buddenbaum, Bice, Welch and Carroll17,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Bonilha, Hyer and Krug29], junior faculty [Reference Spence, Buddenbaum, Bice, Welch and Carroll17], or physician-scientists [Reference Stefely, Theisen and Hanewall30]. Regarding the source of the data collected, ten studies (40%) gathered data from both mentors and mentees, and over half (n = 13, 52%) only collected data from mentors or program administrators. Two studies collected data only from mentees [Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Smyth, Coller and Jackson32].
Statistical tests used and interview/survey questions asked
Roughly half of the studies evaluated (n = 13, 52%) used inferential statistics, such as t-tests, chi-square tests, logistic regression, or MANOVAs, to support conclusions or predictions about mentoring activities and impacts. Nine (36%) of the evaluated studies used validated survey measures of mentoring, with survey questions typically derived from the Mentoring Competency Assessment (MCA) [Reference Fleming, House and Hanson33]. Some studies used other validated scales of mentoring activity, either alone [Reference Feldman, Huang and Guglielmo18,Reference Feldman, Steinauer and Khalili34] or in combination with the MCA [Reference Trejo, Wingard and Hazen35]. Seven studies (28%) employed both inferential statistics and validated survey measures.
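To make the most common analytic pattern concrete, the sketch below illustrates a paired pre/post comparison of MCA-style composite scores of the kind these studies report. All scores are hypothetical illustration data, not data from any reviewed study.

```python
# Minimal sketch of the pre/post comparison reported in many of these
# studies: a paired t-test on mentors' MCA-style composite scores.
# All values are hypothetical illustration data.
import numpy as np
from scipy import stats

pre = np.array([4.1, 4.8, 5.2, 4.5, 5.0, 4.3, 4.9, 5.1])   # before mentor training
post = np.array([5.0, 5.4, 5.6, 5.2, 5.5, 4.9, 5.3, 5.8])  # after mentor training

t_stat, p_value = stats.ttest_rel(post, pre)        # paired-samples t-test
d = (post - pre).mean() / (post - pre).std(ddof=1)  # Cohen's d for paired data

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```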
Considered as a whole, the open-ended survey questions and focus group or interview protocols used in these studies were diverse. Many of the open-ended survey questions solicited information about the mentoring experience, gains in mentoring knowledge and skills, the quality of training programs, examples of valuable mentoring interactions, and opportunities for developmental or programmatic improvements that CTSAs could make. The impact of mentoring on mentees’ research studies and research careers was also the subject of many of these survey questions, as well as of the focus group and interview protocols. The qualitative data collected through these means were typically coded by multiple members of the research team and grouped into thematic categories; only occasionally were these qualitative data grouped into themes without methodical coding carried out by multiple raters.
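Multi-rater coding of this kind is usually accompanied by a formal agreement check. As an illustration only (the reviewed studies did not necessarily use this exact procedure), the sketch below computes Cohen’s kappa for two hypothetical coders assigning themes to the same ten excerpts.

```python
# Minimal sketch of an inter-rater agreement check for two-coder
# thematic coding. All codes are hypothetical illustration data.
from sklearn.metrics import cohen_kappa_score

# Theme assigned by each coder to the same ten interview excerpts.
coder_a = ["career", "psychosocial", "career", "skills", "career",
           "psychosocial", "skills", "career", "psychosocial", "skills"]
coder_b = ["career", "psychosocial", "skills", "skills", "career",
           "psychosocial", "skills", "career", "career", "skills"]

kappa = cohen_kappa_score(coder_a, coder_b)  # chance-corrected agreement
print(f"Cohen's kappa = {kappa:.2f}")        # ~0.6+ is often read as substantial agreement
```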
Inputs measured
Only five studies used inferential statistical tests that included participant characteristics as independent variables [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Pfund, House and Asquith21,Reference Bonilha, Hyer and Krug29,Reference Smyth, Coller and Jackson32,Reference Trejo, Wingard and Hazen35]. All of these studies analyzed differences in outcomes across groups defined by sex or faculty rank. Some also tested for and found statistically significant differences across other participant characteristics, including race, age, tenure track, and years of experience, or between mentors and mentees [Reference Pfund, House and Asquith21,Reference Bonilha, Hyer and Krug29]. The remaining studies (n = 20, 80%) did not test for statistically significant differences between subgroups of participants, although many tested for significant differences in participants’ scores before and after a programmatic intervention, such as mentorship training. The following summary of the results addresses these differences in the context of the overall findings of the studies reviewed.
Data collected in articles reviewed
We identified three types of studies in our review. Several studies focused largely on evaluating training programs across the whole CTSA Consortium [Reference Abedin, Rebello, Richards and Pincus36–Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. A few studies focused on evaluating specific mentored research training programs within the CTSA Consortium [Reference Stefely, Theisen and Hanewall30,Reference Smyth, Coller and Jackson32,Reference Huskins, Silet and Weber-Main38,Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. Finally, the largest proportion of studies focused on evaluating mentor training programs [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15–Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Bonilha, Hyer and Krug29,Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Feldman, Steinauer and Khalili34,Reference Trejo, Wingard and Hazen35,Reference Martina, Mutrie, Ward and Lewis43–Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46].
We have structured the review according to these three types of papers; the findings of each are reviewed below.
Mentorship programs across the CTSA consortium
To understand the current state of mentoring in the CTSA Consortium, we examined the findings of several manuscripts that evaluated K and T mentoring training programs across the Consortium [Reference Abedin, Rebello, Richards and Pincus36–Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. These works demonstrate that CTSA K and T programs share many common mentoring training requirements, training opportunities, resources, and outcomes [Reference Abedin, Rebello, Richards and Pincus36,Reference Burnham, Schiro and Fleming37,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. However, they also show considerable differences across both K and T programs, notably in the mechanisms for evaluating mentoring performance [Reference Sancheznieto, Sorkness and Attia39–Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].
The most common form of institutional support for mentors was mentor training, and its mode of delivery varied greatly. Mentor training was usually given as a one-time orientation (51%), but informal training (33%), Web-based training (28%), and face-to-face training (23%) were other common formats [Reference Abedin, Rebello, Richards and Pincus36]. Mentor training was typically offered by both KL2 and TL1 programs but was often not mandatory. Thirty-four (64%) KL2 programs reported offering mentor training, but only 56% of those programs required mentors to attend [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. Similarly, 90% of TL1 predoctoral training programs offered mentor training, but it was mandatory at only 20% of hubs [Reference Sancheznieto, Sorkness and Attia39]. For TL1 postdoctoral training programs, 84% of hubs offered mentor training, but only 30% required it [Reference Sancheznieto, Sorkness and Attia39]. Most TL1 programs also provided training to their mentees, with training available for both predoctoral (78%) and postdoctoral (76%) trainees. However, as with mentor training, mentee training was rarely required (22% of predoctoral programs and 32% of postdoctoral programs) [Reference Sancheznieto, Sorkness and Attia39].
KL2 programs often used multiple strategies to facilitate KL2 Scholars’ mentoring experience. In addition to mentor training, these strategies included aligning mentor and mentee expectations, mentor evaluations, mentor awards, and subsidized membership in mentoring academies [Reference Burnham, Schiro and Fleming37,Reference Silet, Asquith and Fleming40]. Fifty-three percent of KL2 programs reported offering incentives, such as consideration in annual evaluation or promotion [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. KL2 programs often drew on existing institutional mentoring policies, training, and resources [Reference Abedin, Rebello, Richards and Pincus36,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].
A majority of KL2 programs had mechanisms to communicate the programmatic expectations for the mentoring relationship to mentors (52%) and scholars (54%), such as contracts, agreements, signed letters, orientation meetings, handbooks, oversight committees, and initial meetings with the program director [Reference Huskins, Silet and Weber-Main38]. However, only 28%–38% of KL2 programs had written mentor-mentee agreements [Reference Silet, Asquith and Fleming40,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41] and 21% of KL2 programs had written policies to manage conflicts [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].
To evaluate mentors, most KL2 programs reported having some kind of formal evaluation, with 79% reporting one or more evaluation processes [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. Thirty-seven percent of KL2 programs reported conducting formal evaluations of the mentoring relationship, with 11% of programs developing their own survey instrument. Typically, mentees rated their mentors using annual or semi-annual surveys [Reference Silet, Asquith and Fleming40,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].
While the broader mentorship literature has examined specific mentoring activities, such as Individual Development Plans (IDPs) [Reference Vanderford, Evans, Weiss, Bira and Beltran-Gastelum47–Reference Thompson, Santacroce and Pickler49], there has been little research in the CTSA literature on their influence on mentee outcomes. One recent survey [Reference Sancheznieto, Sorkness and Attia39] of the mentoring activities of TL1 predoctoral and postdoctoral programs examined the strategic use of IDPs, which are currently required for NIH-supported research training programs. It found that IDPs were primarily used to track trainee progress, set milestones, and provide opportunities for “midcourse” corrections in the training. The use of IDPs varied, however, with 7% of TL1 predoctoral programs and 10% of TL1 postdoctoral programs not using IDPs for any evident purpose; concerningly, some CTSA hubs reported that mentors and program directors contributed to trainees’ IDPs without trainee input [Reference Sancheznieto, Sorkness and Attia39]. As far as we are aware, no studies have examined how IDP adherence affects mentee productivity. Lastly, several barriers to mentoring were reported, including a lack of knowledge of or appreciation for the importance of mentor training, a lack of resources to provide training, and a lack of accountability for organizing mentor training activities [Reference Abedin, Rebello, Richards and Pincus36,Reference Silet, Asquith and Fleming40]. There was also a noted inconsistency in how mentor training is conducted across institutions [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].
Mentored research programs
To understand how mentoring impacted mentees, we examined the results of manuscripts on mentored research programs. Across all four studies we reviewed, mentees reported that their mentors were critical to their success. Specifically, mentees reported that their mentors helped them generate ideas for research studies, plan and design studies, and review drafts of grants and manuscripts, and that their mentors served as a resource for advice, encouragement, and feedback [Reference Stefely, Theisen and Hanewall30,Reference Smyth, Coller and Jackson32,Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. Additionally, mentees emphasized the importance of identifying and aligning expectations with their mentors [Reference Huskins, Silet and Weber-Main38].
Several other programmatic outcomes were reported in these articles; however, we chose not to report those findings here because we could not determine whether they were specifically attributable to the mentoring participants received. Mentored research programs often include a range of didactic elements, including grant and scientific writing, team science, entrepreneurship, leadership, community engagement, and health disparities research [Reference Huskins, Silet and Weber-Main38,Reference Sorkness, Scholl, Fair and Umans50].
Mentor training programs
To examine whether mentor training programs had an impact on mentors and mentees, we reviewed several studies. Most of the manuscripts evaluating mentor training programs demonstrated its impact on one or more levels of evaluation outcomes, including participant experience, learning or skill, behavior, programmatic results, or organizational impact [Reference Kirkpatrick and Kirkpatrick51,Reference Yardley and Dornan52]. Most of these manuscripts measured impact on both the training experience and learning outcomes [Reference Feldman, Huang and Guglielmo18,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Martina, Mutrie, Ward and Lewis43–Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. The types of mentor training programs, inputs, outcomes, and statistical tests used in these papers are shown in Table 3.
Table 3. Type of mentor training, outcomes, inputs, and statistical tests

*MCA = Mentoring Competency Assessment.
Overall, quantitative data analysis provided evidence for the effectiveness of mentor training. Most studies found that mentor training programs increased mentors’ self-reported confidence in their mentoring skills [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Feldman, Huang and Guglielmo18,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Trejo, Wingard and Hazen35,Reference McGee, Blumberg and Ziegler44–Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. There is evidence that this increase in confidence was durable [Reference Feldman, Steinauer and Khalili34] and significantly greater than in a control group that did not receive training [Reference Pfund, House and Asquith21]. This finding is consistent with previous research on mentor training programs in medicine [Reference Sheri, Too, Chuah, Toh, Mason and Radha Krishna53]. There is also evidence that mentor training programs have similar benefits for mentees, with mentees feeling more confident in their ability to connect with potential and future mentors, know what characteristics to look for in current and future mentors, and manage the work environment [Reference Nearing, Nuechterlein, Tan, Zerzan, Libby and Austin45].
There appear to be several other benefits of mentor training. In one study, as a result of mentor training, mentors reported statistically significant increases in the overall quality of their mentoring, their perception that they were meeting their mentees’ expectations, and their ability to set clear expectations for mentees [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15]. However, these benefits may differ by faculty rank: while assistant and associate professors’ perceived ability to help mentees acquire resources and set clear expectations significantly increased from pre-test to post-test, full professors’ scores on these items did not change significantly [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15]. Mentor training has also been found to significantly increase the percentage of faculty familiar with their departmental mentoring plans, the percentage of faculty (instructors, assistant professors, and associate professors) with a mentor, the percentage of full professors in a mentoring role, the percentage of faculty familiar with their college’s promotion criteria, and the percentage of faculty satisfied with departmental support, and to significantly decrease the number of mentees perceiving mentoring resources to be insufficient [Reference Bonilha, Hyer and Krug29].
There has been little research in the CTSA context on the effects of demographic characteristics on the mentorship process. Evidence from two studies suggests that mentor training may be more beneficial for male mentors than for female mentors. Interaction effects showed that, as a result of training, male mentors provided more constructive feedback and helped mentees develop strategies to meet goals, whereas no significant increase was found for female mentors [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15]. However, female gender was significantly associated with satisfaction with departmental support among mentees [Reference Bonilha, Hyer and Krug29]. We caution that these two studies are narrowly focused and so cannot provide meaningful insights into the effects of other demographic characteristics on the mentorship process.
A number of studies also evaluated the impact of mentor training on participant behavior [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15–Reference Spence, Buddenbaum, Bice, Welch and Carroll17,Reference Patino, Kubicek, Robles, Kiger and Dzekov19,Reference Pfund, House and Spencer20,Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Feldman, Steinauer and Khalili34]. In an experimental study, Pfund and colleagues [Reference Pfund, House and Spencer20,Reference Pfund, House and Asquith21] used the MCA to compare the impact of mentor training on participants’ experience and learning against a control group, and also evaluated participants’ actual mentoring practices over the course of a multi-session training. They found that, compared to the control group, mentors in the intervention group reported a significantly greater increase in MCA composite scores and more changes in their mentoring practices.
Two studies focused on evaluating the impact of mentor training at organizational or systemic levels. Trejo and colleagues [Reference Trejo, Wingard and Hazen35] found that participants in a faculty mentor training program reported improved overall morale and an increased perception of a supportive organizational environment. Mentor training can also improve rates of faculty satisfaction and retention [Reference Bonilha, Hyer and Krug29].
Discussion
Summary of findings
The results of nationwide surveys show a wide variety of mentoring practices across the CTSA Consortium, including the mode of delivery of mentor training, whether that training is mandatory or optional, strategies to facilitate KL2 Scholars’ mentorship experience, mechanisms to communicate programmatic expectations for the mentoring relationship, and methods to evaluate mentors. Studies of mentored research training programs demonstrated that mentors can have a substantive impact on their mentees’ professional work. The findings of mentor training programs in a CTSA context are consistent with the body of literature on mentor training programs in medicine more broadly [Reference Sheri, Too, Chuah, Toh, Mason and Radha Krishna53]. Specifically, mentor training was consistently found to boost mentors’ confidence across several domains of mentoring competence.
While there is an ample body of research on mentorship supported by CTSA institutions, there are also key gaps in the current research. Most notably, many studies of mentorship use small sample sizes, self-report measures, and correlational designs, which have limited effectiveness for evaluating student-faculty mentorship programs [Reference Campbell, Allen and Eby54]. There is a need for more rigorous designs, such as longitudinal and quasi-experimental methods, that enable researchers to better understand the dynamics of mentoring relationships over the course of a mentee’s research career. It is reasonable to speculate that this gap exists because cross-sectional evaluations of mentoring practices, programs, and training are far less time- and resource-intensive than longitudinal studies of mentoring relationships. Outside the CTSA context, a recent review [Reference Gangrade, Samuels and Attar55] found that only 24 of 109 reviewed articles on mentoring in medical and STEM settings used a longitudinal design, suggesting that such designs remain rare in mentoring research. The implications of this general trend are discussed further under directions for future research.
There are also clear gaps in the literature regarding mentorship teams, processes, and quality improvement initiatives. Few of the papers reviewed here included process measures of mentorship in their evaluations. As Steiner [Reference Steiner56] writes: “Such structural tools will not improve mentorship outcomes unless they are consistently adopted into the day-to-day process of mentorship…since all good mentorships are journeys, training programs should monitor and evaluate mentoring relationships continuously—similarly to the existing strategies for evaluating clinical or classroom teaching. Systematic assessments of mentorship are surprisingly rare.” (p. 3–4). Potentially measurable mentorship processes identified in the STEM literature include career support, psychosocial support, role modeling, and negative mentoring experiences [Reference Dahlberg and Byars-Winston5,Reference Hernandez8]. As far as we are aware, these processes have not yet been investigated in a translational science mentoring context.
There were several limitations of this review. First, it is possible that we could have missed relevant articles in our literature search because of our choice of search terms, which were intentionally designed to be limited and focused. In addition, at the time of this writing, some of the studies (n = 10) included in this review are over 10 years old and may not reflect the current state of mentor activity and training across the CTSA Consortium. Lastly, publication bias likely excluded findings that were not significant. Because of this, the studies reviewed may not give an accurate picture of the impact of mentor training programs [Reference Torgerson57].
Directions for future research
Future research on CTSA mentored research programs should use rigorous designs and statistical methods involving comparison groups, power analysis, quasi-experimental designs, and propensity score matching [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Pfund, House and Spencer20,Reference Samuels, Ianni, Eakin, Champagne and Ellingrod58]. Such methods can answer important empirical questions about causality, providing program directors with useful, in-depth knowledge of how their programs work. While there have been a few rigorous evaluations that statistically model the impact of a mentorship program on mentee outcomes in clinical and translational science [Reference Spence, Buddenbaum, Bice, Welch and Carroll17], much more of this research is needed. Rigorous studies of the impact of mentorship in STEM fields could provide a guide for evaluating mentorship programs in clinical and translational science [Reference Kuchynka, Gates and Rivera59–Reference Estrada, Hernandez and Schultz61]. In addition, CTSA institutions should use standardized methodologies to evaluate their programs [Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. To facilitate this process, software programs such as Flight Tracker [Reference Helton, Pearson and Hartmann62] can be used to extract data required for standardized evaluations of mentorship programs, including mentee grant success and counts of papers co-authored by mentors and mentees [Reference Steiner56].
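As one concrete instance of the recommended rigor, the sketch below shows a prospective power analysis for a two-group evaluation (for example, trained versus untrained mentors) using statsmodels; the assumed effect size is a hypothetical planning value, not an estimate from this review.

```python
# Minimal sketch of a prospective power analysis for a two-group
# evaluation (e.g., trained vs. untrained mentors). The effect size
# is a hypothetical planning value, not an estimate from this review.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium standardized difference (Cohen's d)
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired probability of detecting the effect
)
print(f"Approximately {n_per_group:.0f} participants needed per group")
```

Under these assumptions, roughly 64 participants per group would be required, a useful benchmark given the small samples common in this literature.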
Future research should also evaluate the criteria and processes used to match mentees with appropriate mentors and mentorship teams, how mentor and mentee expectations are aligned, and how mentor training affects long-term career outcomes [Reference Behar-Horenstein and Prikhidko16,Reference Huskins, Silet and Weber-Main38,Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. This includes research on whether mentee demographics (e.g., race, ethnicity, gender, sex, and other commonly used measures of demographic representation) affect mentoring practices or outcomes. The paucity of research on demographic effects on mentorship in the CTSA context stands in contrast to the broader mentorship literature, in which ample research exists. For example, previous research has found that women perceived mentorship to be more valuable to their career development and reported receiving more psychosocial support [Reference Shen, Tzioumis and Andersen63–Reference O’Brien, Biga, Kessler and Allen65]. More research is needed to establish whether similar relationships exist in a CTSA context.
There is also a need to understand how mentees cultivate relationships with multiple mentors as part of a mentoring team, and how mentors help mentees launch and sustain an independent scientific career [Reference Behar-Horenstein and Prikhidko16]. Mentorship teams are particularly important for K Scholars, and more research is needed to understand how mentorship networks function.
More research is needed on how specific mentorship activities influence mentee outcomes. As noted in the NASEM report [Reference Dahlberg and Byars-Winston5], “there is little consensus on how to determine either the most essential specific forms of mentorship support or the programmatic or institutional structures that could enhance, incentivize, or reward mentorship support.” (p. 138). Prior research in academic medicine suggests that adequate institutional support, allowing mentees to choose their mentors, giving mentors and mentees protected time for mentorship, and using written statements to set boundaries and provide accountability all contribute to successful mentorship [Reference Kashiwagi, Varkey and Cook66]. The CTSA context could provide a good testing ground for groundbreaking research that could inform the broader body of knowledge on mentorship. One possible avenue of research would be to examine the effects of IDP adherence on mentee productivity. While we predict that mentees who adhere to their IDPs will be more productive, to our knowledge no research supporting this hypothesis currently exists, even outside the CTSA context.
Finally, future research should utilize conceptual models of mentorship. An early review of evaluations of clinical and translational research mentors [Reference Meagher, Taylor, Probsfield and Fleming12] proposed a mentorship evaluation model that measured outcomes as a function of individual (e.g., demographic factors, education, personality) and environmental (e.g., institutional resources, institutional attitudes, and institutional policies) factors. We build upon this model by adding indicators of mediating processes (Figure 2).

Figure 2. Proposed model for evaluating clinical and translational science awards (CTSA) mentoring.
The simple model shown in Figure 2 is intended to help guide future research on mentorship and mentored research training programs in the CTSA Consortium. Our goal in developing this model was to address a specific gap: the lack of a comprehensive framework explicitly designed to guide the evaluation of mentoring relationships, particularly within the complex environment of CTSA programs. We were informed by established conceptual frameworks depicting the functions of mentorship (e.g., psychosocial and career support) [Reference Kram67] and by overarching theories that explain mentoring processes, such as ecological systems theory and social capital theory [Reference Chandler, Kram and Yip68–Reference Aikens, Sadselia, Watkins, Evans, Eby and Dolan70]. These foundational theoretical contributions shaped our understanding of mentoring dynamics.
Moreover, we considered process-oriented models of mentoring, notably the seminal work by Eby et al. [Reference Eby, Allen and Hoffman7]. This model’s emphasis on inputs, processes, and outcomes directly informed our conceptualization of how an effective evaluation framework should function to better understand mentoring relationships. While existing models describe what mentoring is or how it functions, our aim is to provide a framework that translates these understandings into actionable components for rigorous evaluation efforts. We identified a distinct need for a practical model that helps researchers and practitioners systematically assess the effectiveness and impact of mentoring relationships in a structured way, rather than solely describing mentoring processes or functions.
The first column of the model includes several key inputs: the mentors’ and mentees’ prior experiences, similarities and differences between the mentor and mentee, mentorship training, and mentorship structure. The second column includes mediating indicators, such as the quality of the mentor-mentee relationship and the frequency of mentor-mentee interaction. The third column includes subjective and objective outputs, including mentee career-related performance, attitudinal outcomes, persistence, and learning outcomes. Lastly, several contextual factors representing institutional and systemic influences (such as organizational support and culture, funding availability, and institutional priorities) are hypothesized to influence inputs, mediating indicators, and outputs.
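To show how the model might translate into the actionable evaluation components described above, the sketch below structures a single evaluation record around the model’s four groups of constructs. All field names are illustrative assumptions, not a prescribed instrument.

```python
# Minimal sketch: one evaluation record structured around the model in
# Figure 2. All field names are illustrative, not a prescribed instrument.
from dataclasses import dataclass

@dataclass
class MentorshipEvaluationRecord:
    # Inputs
    mentor_prior_training: bool  # has the mentor completed mentor training?
    mentee_career_stage: str     # e.g., "K scholar", "T trainee"
    mentorship_structure: str    # e.g., "dyad", "team"
    # Mediating indicators
    relationship_quality: float  # e.g., score on a validated relationship scale
    meetings_per_month: float    # frequency of mentor-mentee interaction
    # Outputs
    publications: int            # objective career-related performance
    grants_submitted: int
    career_satisfaction: float   # subjective, attitudinal outcome
    # Contextual factors
    institutional_support: str   # e.g., "protected time", "mentoring academy"

record = MentorshipEvaluationRecord(
    mentor_prior_training=True, mentee_career_stage="K scholar",
    mentorship_structure="team", relationship_quality=4.2,
    meetings_per_month=2.0, publications=3, grants_submitted=1,
    career_satisfaction=4.5, institutional_support="protected time",
)
print(record)
```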
Conclusion
This scoping review shows that the CTSA Consortium is producing a growing body of research on mentorship and mentor training. Surveys conducted across the CTSA Consortium show that there is great diversity in scholars’ and trainees’ mentoring experiences and outcomes. Evaluations of mentored research training programs demonstrate their contributions to scholars’ and trainees’ professional careers, and there is ample evidence demonstrating that mentor training is an effective strategy to build the mentorship skills and competencies needed to cultivate the clinical and translational science workforce. However, there is a clear need for more practice-based investigation, especially to identify potential inputs and processes. The current literature provides a solid foundation on which future research should build.
Author contributions
Phillip Ianni: Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing-original draft, Writing-review & editing; Elias Samuels: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing-original draft, Writing-review & editing; Ellen Champagne: Data curation, Formal analysis, Visualization, Writing-original draft, Writing-review & editing; Eric Nehl: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing; Deborah DiazGranados: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing.
Funding statement
This research was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Numbers KL2TR002381, K12TR004374, R25TR004776, TL1TR002382, T32TR004371, T32TR004764, UL1TR002378, UM1TR004360, and UM1TR004404.
Competing interests
The authors declare no competing interests.