
A scoping review of mentorship in a CTSA context: A summary of past work and an agenda for future research

Published online by Cambridge University Press:  21 July 2025

Phillip Ianni*
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Elias Samuels
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Ellen Champagne
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Eric Nehl
Affiliation:
Georgia Clinical and Translational Science Alliance, Emory University School of Medicine, Atlanta, GA, USA Emory University Rollins School of Public Health, Atlanta, GA, USA
Deborah DiazGranados
Affiliation:
Wright Regional Center for Clinical and Translational Science, Virginia Commonwealth University, Richmond, VA, USA Department of Psychiatry, School of Medicine, Virginia Commonwealth University, Richmond, VA, USA
*
Corresponding author: P. Ianni; Email: pianni@med.umich.edu

Abstract

Mentorship is a vital part of the training provided in the K and T programs funded by the Clinical and Translational Science Awards (CTSA). However, the inputs, indicators, and outcomes associated with a successful mentoring relationship remain poorly understood. In this review, we critically examine the current body of literature on mentorship in a CTSA context. We conducted a comprehensive search of the literature for relevant research articles. We included articles that were contextualized within a CTSA hub, examined a mentorship program, and conducted evaluation research. Through an initial search of online databases and by reviewing reference sections of relevant articles, we identified 141 potentially relevant articles. Twenty-five of these articles met our inclusion criteria. We identified three categories of research: nationwide institutional surveys of CTSA mentorship programs, mentored research training programs, and mentor training programs. While the findings highlighted the effectiveness of mentor training and mentored research training programs, there is a notable lack of assessment of mentoring inputs and indicators. Based on our review, we propose a model for the evaluation of CTSA mentorship that includes measurable inputs, indicators, and outcomes. This model provides a holistic framework for evaluators and CTSA program directors to better understand their mentorship programs.

Information

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

The advancement of translational science requires cultivating and improving effective mentorship practices among translational scientists. Mentorship has been studied extensively in many different contexts, including health care delivery [Reference Burgess, van Diggele and Mellis1], corporate settings [Reference Underhill2], not-for-profit settings [Reference Bortnowska and Seiler3], and government agencies [Reference Ehrich and Hansford4]. In higher education, mentorship has been studied across different career stages (undergraduate, graduate, postdoc, faculty) and disciplines (biomedical science, engineering, social sciences, physical sciences, allied health professions, and humanities) [Reference Dahlberg and Byars-Winston5Reference Hernandez8]. However, relatively little research has been devoted to understanding the mentoring practices and needs distinctive to the clinical and translational science context. This review establishes a foundation for studying mentorship in translational science by analyzing findings and identifying research gaps in studies supported by Clinical and Translational Science Awards (CTSA) from the National Center for Advancing Translational Sciences (NCATS).

Mentorship in the context of translational science is distinct from other STEM fields in a few different ways. First, the context and the activities required for effective translational science differ from those in basic science research. Unlike basic science researchers, translational science researchers focus on developing practical solutions for specific health-related problems [Reference Rubio, Schoenbaum and Lee9]. Second, translational science is distinct in its goal of testing and disseminating tools and practices to enhance the clinical and translational research enterprise. While its testing methodologies may not be unique, the process of dissemination and translation into practice requires specialized skills and knowledge, highlighting the need for mentorship. Third, beyond distinguishing translational science from basic research, we also emphasize the importance of understanding effective mentorship behaviors [Reference Kraiger, Finkelstein and Varghese10]; we believe that meeting this need can fill a gap in the mentoring literature and yield insights applicable across many fields of science.

Translational scientists require training in a wide range of competencies [Reference Gilliland, White and Gee11]. Their mentors play a key role in this training process. A strong, active mentoring relationship serves both a career function and a psychosocial function and is one of the best predictors of academic success. However, as noted in a previous review of research mentorship in clinical and translational science, the mentorship experience is often difficult to define and measure [Reference Meagher, Taylor, Probsfield and Fleming12].

In addition, recent T32 and K12 notices of funding opportunity released by the NIH [13,14] put a strong emphasis on mentorship. As of 2025, all training grant applications must include a Mentor/Trainee Assessment Plan, which requires that institutions specify how their programs will monitor mentoring relationships, which approaches and tools will be used to assess both mentors’ and mentees’ perceptions of the mentoring relationship, and how program leadership will handle major discrepancies. For career development programs, institutions are now required to describe how participating faculty will be trained to use evidence-informed mentorship practices, how gains in perceived skill will be measured, how changes in mentoring behaviors and effectiveness resulting from mentor training will be measured, how the research training environment will be monitored and assessed, and how outstanding mentors will be recognized and rewarded.

To address these challenges, NCATS has developed programs and training resources to promote the development of effective mentors. Mentored research training programs, such as the T32 and the K12, aim to give trainees and scholars the opportunity to work on a research project with a faculty mentor. Many CTSA hubs have also developed mentor training programs [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15Reference Pfund, House and Spencer20] that are designed to support and improve the culture of mentoring by giving mentors needed skills. However, identification of mentorship practices and principles that are distinct to the field of clinical and translational science is still needed.

While rigorous evaluations of CTSA mentorship programs have been conducted [Reference Pfund, House and Asquith21,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22], there is little understanding of how and why they work, a challenge that can be described as the “black box” problem in program evaluation [Reference Solmeyer and Constance23]. As stated in the NASEM report [Reference Dahlberg and Byars-Winston5], “To fully understand mentorship, evaluation measures would ideally address both mentorship processes and mentorship outcomes and the system factors that can profoundly shape it.” (p. 127). To gain an understanding of the mechanisms of mentorship within the context of clinical and translational science, we examined the current state of the literature about mentorship provided by, or supported within, CTSA-funded research centers, identified gaps in evaluation, and articulated future directions for research that could address these gaps. Based on these findings, we propose a model of CTSA mentorship.

Methods

Study design

This study followed a scoping review methodology, guided by the framework proposed by Arksey and O’Malley [Reference Arksey and O’Malley24] and refined by Levac et al. [Reference Levac, Colquhoun and O’Brien25]. This approach was selected to examine the extent, range, and nature of the existing literature on evaluation research for mentorship programs at CTSA hubs and to identify key concepts, gaps in the research, and avenues for future inquiry. The reporting of this review adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines [Reference Tricco, Lillie and Zarin26].

Inclusion/exclusion criteria

The inclusion criteria for our review were based on the Population, Concept, and Context (PCC) framework as recommended by the Joanna Briggs Institute (JBI) methodology for scoping reviews [Reference Peters, Godfrey, McInerney, Soares, Khalil and Parker27]. Studies were eligible for inclusion if they:

  1. Were conducted at a CTSA-funded mentored research program.

  2. Included evaluation results (qualitative or quantitative).

  3. Included measurable metrics of mentoring inputs, activities, outputs, or outcomes.

  4. Examined faculty or postdoctoral participants (no CRPs, undergraduate students, etc.).

Studies were excluded if they were not directly related to the core concept, were not published in English, or if the full text was unavailable.

Search strategy

A comprehensive literature search was conducted using PubMed to identify relevant studies. The search strategy combined keywords and Medical Subject Headings (MeSH) related to mentoring programs. We used the following search terms: “mentor* AND program AND research AND (KL2 OR TL1 OR T32 OR K12 OR postdoctoral) AND (CTSA OR NCATS) AND (Clinical OR translational).” In addition to this electronic database search, a secondary search of the reference lists of included articles and key journals was conducted to capture additional relevant studies. The literature search was conducted in 2024. No limits were placed on year of publication; the articles included in this review were published between 2009 and 2022.
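For transparency, the Boolean logic of this search can also be expressed programmatically, which may help readers adapt it to other databases or to NCBI’s E-utilities. The short Python sketch below is purely illustrative — the function name and term groupings are ours, not part of the published search protocol — and simply assembles the same query string submitted to PubMed:

```python
# Illustrative reconstruction of the Boolean PubMed query reported above.
# The term groupings and function name are ours, for demonstration only.

def build_query():
    programs = ["KL2", "TL1", "T32", "K12", "postdoctoral"]
    networks = ["CTSA", "NCATS"]
    domains = ["Clinical", "translational"]

    def any_of(terms):
        # OR-group a list of alternative terms
        return "(" + " OR ".join(terms) + ")"

    parts = ["mentor*", "program", "research",
             any_of(programs), any_of(networks), any_of(domains)]
    return " AND ".join(parts)

print(build_query())
```

A string assembled this way can be pasted into the PubMed search box or submitted through an E-utilities client; the wildcard `mentor*` matches mentor, mentors, mentoring, and mentorship.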

Study selection

All search results were imported into Covidence, a web-based collaboration software platform that streamlines the production of systematic and other literature reviews [28]. The review process had three steps. In the first step, three of the authors screened the titles and abstracts of the studies against the predefined eligibility criteria. In the second step, full-text screening was conducted by two separate authors on studies that met the initial inclusion criteria. Any disagreements between reviewers were resolved through discussion or by consulting the entire research team. In the third step, relevant data were extracted from the selected papers by three separate authors. We repeated this process with the additional articles identified from the reference lists of the original set of papers. A diagram of this process is shown in Figure 1.

Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram.

Results

Study types, designs, and methods

Clear commonalities are shared by the 25 studies summarized in Table 1, notably including similarities in study types, designs, and methods. Typically, these studies involved mixed-methods research, often using surveys, interviews, and/or focus groups in a pre-post design.

Table 1. Summary of key study variables

However, there was also considerable variation across studies. For example, while most (60%) of these studies focused on a single CTSA hub, some were far broader in scope, with six studies evaluating mentoring activity in at least 45 hubs each. The number of participants in these studies also varied considerably, ranging from 6 to 1362, with an average of 177 participants per study (SD = 288).

As shown in Table 2, there are clear similarities between the designs and methods of the studies that were evaluated. Most (16, 64%) of these studies utilized mixed methods, with all but one of these studies using survey methods; other evaluation methods used by these studies include interviews (4), focus groups (1), secondary data collection (3), and qualitative coding (2). Almost a quarter (24%) of the studies included in this review used only qualitative methods, including interviews (5), focus groups (2), and secondary data collection (2). Three of the studies evaluated used solely quantitative methods, all of which involved surveys.

Table 2. Study type, design, and methods extracted from the articles evaluated

Roughly half of the studies (48%) had a pre- and post-test design to measure change over the course of the intervention, three of which also used a comparison group to measure outcomes against a meaningfully comparable group of individuals. Five studies (20%) had a cross-sectional design, and the remaining 9 studies (32%) had other study designs.

Types of interventions

Most of the studies (n = 16, 64%) examined a K Scholars program, with four of the 16 studies examining both T trainee and K Scholars programs. Six studies examined other mentoring programs designed for health sciences faculty overall [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Spence, Buddenbaum, Bice, Welch and Carroll17,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Bonilha, Hyer and Krug29], junior faculty [Reference Spence, Buddenbaum, Bice, Welch and Carroll17], or physician-scientists [Reference Stefely, Theisen and Hanewall30]. Regarding the source of the data collected, ten studies (40%) gathered data from both mentors and mentees, and over half (n = 13, 52%) only collected data from mentors or program administrators. Two studies collected data only from mentees [Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Smyth, Coller and Jackson32].

Statistical tests used and interview/survey questions asked

Roughly half of the studies evaluated (n = 13, 52%) utilized inferential statistics, such as t-tests, chi-square tests, logistic regression, or MANOVAs, to inform conclusions or predictions about mentoring activities and impacts. Nine (36%) of the evaluated studies used validated survey measures of mentoring, with survey questions typically derived from the Mentoring Competency Assessment (MCA) [Reference Fleming, House and Hanson33]. Some studies utilized other validated scales of mentoring activity, either alone [Reference Feldman, Huang and Guglielmo18,Reference Feldman, Steinauer and Khalili34] or in combination with the MCA [Reference Trejo, Wingard and Hazen35]. Seven studies (28%) employed both inferential statistics and validated survey measures.
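To make the typical pre-post analytic approach concrete, the sketch below computes a paired-samples t statistic of the kind many of these studies applied to MCA composite scores. All scores shown are hypothetical, for demonstration only; the reviewed studies used standard statistical packages rather than hand computation:

```python
# Paired (pre/post) t statistic of the kind applied to MCA composite
# scores in several of the reviewed studies. All scores are hypothetical.
from math import sqrt

def paired_t(pre, post):
    """Return the paired-samples t statistic for the post - pre differences."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean_d / sqrt(var_d / n)

pre = [4.1, 4.5, 3.8, 4.0, 4.6, 3.9]   # hypothetical pre-training MCA composites
post = [5.0, 5.2, 4.6, 4.9, 5.5, 4.4]  # hypothetical post-training MCA composites
t_stat = paired_t(pre, post)
print(round(t_stat, 2))
```

The resulting statistic would be compared against a t distribution with n − 1 degrees of freedom to obtain a p value.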

Considered as a whole, the open-ended survey questions and focus group or interview protocols used in these studies were diverse. Many of the open-ended survey questions solicited information about the mentoring experience, gains in mentoring knowledge and skills, the quality of training programs, examples of valuable mentoring interactions, and opportunities for developmental or programmatic improvements that could be made by CTSAs. The impact of mentoring on mentees’ research studies and research careers was also the subject of many of these survey questions, as well as of the focus group and interview protocols. The qualitative data collected through these means were typically coded by multiple members of the research team and grouped into thematic categories; only occasionally were these qualitative data grouped into themes without methodical coding carried out by multiple raters.

Inputs measured

Only five studies used inferential statistical tests that included participant characteristics as independent variables [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Pfund, House and Asquith21,Reference Bonilha, Hyer and Krug29,Reference Smyth, Coller and Jackson32,Reference Trejo, Wingard and Hazen35]. All of these studies analyzed differences in outcomes across groups defined by sex or faculty rank. Some studies tested for and found statistically significant differences associated with other participant characteristics, including race, age, tenure track, and years of experience, or between mentors and mentees [Reference Pfund, House and Asquith21,Reference Bonilha, Hyer and Krug29]. The remaining studies evaluated (n = 20, 80%) did not test for statistically significant differences between subgroups of participants, although many tested for significant differences before and after participants received a programmatic intervention, such as mentorship training. The following summary of the results addresses these differences in the context of the overall findings of the studies reviewed.

Data collected in articles reviewed

We identified three types of studies in our review. Several studies focused largely on evaluating training programs across the whole CTSA Consortium [Reference Abedin, Rebello, Richards and Pincus36Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. A few studies focused on evaluating specific mentored research training programs within the CTSA Consortium [Reference Stefely, Theisen and Hanewall30,Reference Smyth, Coller and Jackson32,Reference Huskins, Silet and Weber-Main38,Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. Finally, the largest proportion of studies focused on evaluating mentor training programs [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Bonilha, Hyer and Krug29,Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Feldman, Steinauer and Khalili34,Reference Trejo, Wingard and Hazen35,Reference Martina, Mutrie, Ward and Lewis43Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46].

We have structured the review according to these three types of papers. The findings of each type of paper are reviewed below.

Mentorship programs across the CTSA consortium

To understand the current state of mentoring in the CTSA Consortium, we examined the findings of several manuscripts that evaluated K and T mentoring training programs across the CTSA Consortium [Reference Abedin, Rebello, Richards and Pincus36Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. These works demonstrate that CTSA K and T programs share many common mentoring training requirements, training opportunities, resources, and outcomes [Reference Abedin, Rebello, Richards and Pincus36,Reference Burnham, Schiro and Fleming37,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. However, they also show that there are considerable differences across both K and T programs, notably in the mechanisms for evaluating mentoring performance [Reference Sancheznieto, Sorkness and Attia39Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].

The most common form of institutional support for mentors was mentor training, though there was great variety in its mode of delivery. Mentor training was usually given as a one-time orientation (51%), but informal training (33%), Web-based training (28%), and face-to-face training (23%) were other common formats [Reference Abedin, Rebello, Richards and Pincus36]. Mentor training was typically offered by both KL2 and TL1 programs but was often not mandatory. Thirty-four (64%) KL2 programs reported that they offered mentor training, but only 56% of those programs required mentors to attend training [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. Similarly, 90% of TL1 predoctoral training programs offered mentor training, but this training was mandatory at only 20% of hubs [Reference Sancheznieto, Sorkness and Attia39]. For TL1 postdoctoral training programs, 84% of hubs offered mentor training, but only 30% required this training [Reference Sancheznieto, Sorkness and Attia39]. Most TL1 programs also provided mentee training, with training available for both predoctoral (78%) and postdoctoral (76%) trainees. However, as with mentor training, mentee training was rarely required (22% of predoctoral programs and 32% of postdoctoral programs) [Reference Sancheznieto, Sorkness and Attia39].

KL2 programs often utilized multiple strategies to facilitate KL2 Scholars’ mentoring experience. In addition to mentor training, other strategies included aligning mentor and mentee expectations, mentor evaluations, mentor awards, and subsidizing membership in mentoring academies [Reference Burnham, Schiro and Fleming37,Reference Silet, Asquith and Fleming40]. Fifty-three percent of KL2 programs reported offering incentives, such as consideration in annual evaluation or promotion [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. KL2 programs often drew on existing institutional mentoring policies, training, and resources [Reference Abedin, Rebello, Richards and Pincus36,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].

A majority of KL2 programs had mechanisms to communicate the programmatic expectations for the mentoring relationship to mentors (52%) and scholars (54%), such as contracts, agreements, signed letters, orientation meetings, handbooks, oversight committees, and initial meetings with the program director [Reference Huskins, Silet and Weber-Main38]. However, only 28%–38% of KL2 programs had written mentor-mentee agreements [Reference Silet, Asquith and Fleming40,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41] and 21% of KL2 programs had written policies to manage conflicts [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].

To evaluate mentors, most KL2 programs reported having some kind of formal evaluation, with 79% reporting use of one or more evaluation processes [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41]. Thirty-seven percent of KL2 programs reported conducting formal evaluations of the mentoring relationship, with 11% of programs developing their own survey instrument. Typically, mentees rate their mentors using annual or semi-annual surveys [Reference Silet, Asquith and Fleming40,Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].

While there has been research on the use of specific mentoring activities, such as Individualized Development Plans (IDPs), in the broader mentorship literature [Reference Vanderford, Evans, Weiss, Bira and Beltran-Gastelum47Reference Thompson, Santacroce and Pickler49], there has been little research on their influence on mentee outcomes in the CTSA literature. One recent survey [Reference Sancheznieto, Sorkness and Attia39] of the mentoring activities of TL1 predoctoral and postdoctoral programs examined the strategic use of IDPs, which are currently required for NIH-supported research training programs. It found that IDPs were primarily being used to track trainee progress, set milestones, and provide opportunities for “midcourse” corrections in training. However, as far as we are aware, no studies have examined how IDP adherence affects mentee productivity. The use of IDPs also varied: 7% of TL1 predoctoral programs and 10% of TL1 postdoctoral programs did not utilize IDPs for any evident purpose. Concerningly, some CTSA hubs reported that mentors and program directors contributed to trainees’ IDPs without trainee input [Reference Sancheznieto, Sorkness and Attia39]. Lastly, there were several reported barriers to mentoring, including a lack of knowledge of or appreciation for the importance of mentor training, a lack of resources to provide training, and a lack of accountability for organizing mentor training activities [Reference Abedin, Rebello, Richards and Pincus36,Reference Silet, Asquith and Fleming40]. There was also a noted inconsistency in how mentor training is conducted across institutions [Reference Tillman, Jang, Abedin, Richards, Spaeth-Rublee and Pincus41].

Mentored research programs

To understand how mentoring impacted mentees, we examined the results of manuscripts on mentored research programs. Across all four studies we reviewed, mentees reported that their mentors were critical to their success. Specifically, mentees reported that their mentors helped them generate ideas for research studies, plan and design studies, review drafts of grants and manuscripts, and acted as a resource for advice, encouragement, and feedback [Reference Stefely, Theisen and Hanewall30,Reference Smyth, Coller and Jackson32,Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. Additionally, mentees emphasized the importance of identifying and aligning expectations with their mentors [Reference Huskins, Silet and Weber-Main38].

Several other programmatic outcomes were reported in these articles; however, we chose not to report those findings here because we could not determine whether they were specifically attributable to the mentoring that participants received. Mentored research programs often include a range of didactic elements, including grant and scientific writing, team science, entrepreneurship, leadership, community engagement, and health disparities research [Reference Huskins, Silet and Weber-Main38,Reference Sorkness, Scholl, Fair and Umans50].

Mentor training programs

To examine whether mentor training programs had an impact on mentors and mentees, we examined several studies. Most of the manuscripts evaluating mentor training programs demonstrated the impact of mentor training on one or more levels of evaluation outcomes, including participant experience, learning or skill, behavior, programmatic results, or organizational impact [Reference Kirkpatrick and Kirkpatrick51,Reference Yardley and Dornan52]. Most of these manuscripts measured the impact of both the training experience and learning outcomes [Reference Feldman, Huang and Guglielmo18,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Martina, Mutrie, Ward and Lewis43Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. The types of mentor training programs, inputs, outcomes, and statistical tests used in these papers are shown in Table 3.

Table 3. Type of mentor training, outcomes, inputs, and statistical tests

*MCA = Mentoring Competency Assessment.

Overall, quantitative data analysis provided evidence for the effectiveness of mentor training. Most studies found that mentor training programs increased mentors’ self-reported confidence in their mentoring skills [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Feldman, Huang and Guglielmo18,Reference Weber-Main, Shanedling, Kaizer, Connett, Lamere and El-Fakahany22,Reference Trejo, Wingard and Hazen35,Reference McGee, Blumberg and Ziegler44Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. There is evidence that this increase in confidence was durable [Reference Feldman, Steinauer and Khalili34] and significantly greater than in a control group that did not receive training [Reference Pfund, House and Asquith21]. This finding is consistent with previous research on mentor training programs in medicine [Reference Sheri, Too, Chuah, Toh, Mason and Radha Krishna53]. There is also evidence that mentor training programs have similar benefits for mentees, with mentees feeling more confident in their ability to connect with potential and future mentors, to know what characteristics to look for in current and future mentors, and to manage the work environment [Reference Nearing, Nuechterlein, Tan, Zerzan, Libby and Austin45].

There appear to be several other benefits of mentor training. In one study [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15], as a result of mentor training, mentors reported statistically significant increases in the overall quality of their mentoring, in their perception that they were currently meeting their mentees’ expectations, and in their ability to set clear expectations for mentees. However, these benefits may differ by faculty rank. While assistant and associate professors’ perceived ability to help mentees acquire resources and set clear expectations significantly increased from pre-test to post-test, full professors’ scores on these items did not change significantly [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15]. Mentor training has also been found to significantly increase the percentage of faculty familiar with their departmental mentoring plans, the percentage of faculty (instructors, assistant professors, and associate professors) with a mentor, the percentage of full professors in a mentoring role, the percentage of faculty familiar with their college’s promotion criteria, and the percentage of faculty satisfied with departmental support, and to significantly decrease the number of mentees perceiving mentoring resources to be insufficient [Reference Bonilha, Hyer and Krug29].

There has been little research in the CTSA context on the effects of demographic characteristics on the mentorship process. Evidence from two studies suggests that mentor training may be more beneficial for male mentors than for female mentors. In one study, interaction effects indicated that, as a result of the training they received, male mentors provided more constructive feedback and helped mentees develop strategies to meet goals, whereas no significant increase was found for female mentors [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15]. However, female gender was significantly associated with satisfaction with departmental support among mentees [Reference Bonilha, Hyer and Krug29]. We caution that the findings of these two studies are narrowly focused and so cannot provide meaningful insights into the effects of other demographic characteristics on the mentorship process.

A number of studies also evaluated the impact of mentor training on participant behavior [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15Reference Spence, Buddenbaum, Bice, Welch and Carroll17,Reference Patino, Kubicek, Robles, Kiger and Dzekov19,Reference Pfund, House and Spencer20,Reference Robinson, Schwartz, DiMeglio, Ahluwalia and Gabrilove31,Reference Feldman, Steinauer and Khalili34]. In an experimental study, Pfund [Reference Pfund, House and Spencer20,Reference Pfund, House and Asquith21] compared the impact of mentor training on participants’ experience and learning using the MCA against a control group, but also evaluated participants’ actual mentoring practices throughout the course of a multi-session training. They found that compared to the control group, mentors in the intervention group reported a significantly greater increase in MCA composite scores and also reported more changes in their mentoring practices.

Two studies focused on evaluating the impact of mentor training at organizational or systemic levels. Trejo and colleagues [Reference Trejo, Wingard and Hazen35] found that participants in a faculty mentor training program reported improved overall morale and an increased perception of a supportive organizational environment. Mentor training can also improve rates of faculty satisfaction and retention [Reference Bonilha, Hyer and Krug29].

Discussion

Summary of findings

The results of nationwide surveys show that mentoring practices vary widely across the CTSA Consortium, including in mode of delivery, whether mentor training is mandatory or optional, strategies to facilitate KL2 Scholars’ mentorship experience, mechanisms to communicate programmatic expectations for the mentoring relationship, and methods to evaluate mentors. Studies on mentored research training programs demonstrated that mentors can have a substantive impact on their mentees’ professional work. The findings of mentor training programs in a CTSA context are consistent with the body of literature on mentor training programs in medicine more broadly [Reference Sheri, Too, Chuah, Toh, Mason and Radha Krishna53]. Specifically, mentor training was consistently found to boost mentors’ confidence in several domains of mentoring competence.

While there is an ample body of research on mentorship supported by CTSA institutes, there are also key gaps in the current research. Most notably, many research studies on mentorship use small sample sizes, self-report measures, and correlational designs, which limit their usefulness for evaluating student-faculty mentorship programs [Reference Campbell, Allen and Eby54]. There is a need for more rigorous designs, such as longitudinal and quasi-experimental methods, that enable researchers to gain a better understanding of the dynamics of mentoring relationships over the course of a mentee’s research career. It is reasonable to speculate that this gap exists because cross-sectional evaluations of mentoring practices, programs, and training are much less time- and resource-intensive to conduct than longitudinal studies of mentoring relationships over long periods of time. Outside the CTSA context, a recent review [Reference Gangrade, Samuels and Attar55] found that only 24 of 109 reviewed articles on mentoring in medical and STEM settings used a longitudinal design, suggesting that this kind of design is still rarely used in mentoring research. The implications of this general trend in the scientific literature are discussed further with regard to potential directions for future research.

There are also clear gaps in the literature regarding mentorship teams, processes, and quality improvement initiatives. Few of the papers reviewed here included process measures of mentorship in their evaluations. As Steiner [Reference Steiner56] writes: “Such structural tools will not improve mentorship outcomes unless they are consistently adopted into the day-to-day process of mentorship…since all good mentorships are journeys, training programs should monitor and evaluate mentoring relationships continuously—similarly to the existing strategies for evaluating clinical or classroom teaching. Systematic assessments of mentorship are surprisingly rare.” (p. 3–4). Potentially measurable mentorship processes identified in the STEM literature include career support, psychosocial support, role modeling, and negative mentoring experiences [Reference Dahlberg and Byars-Winston5,Reference Hernandez8]. As far as we are aware, these processes have not yet been investigated in a translational science mentoring context.

There were several limitations of this review. First, it is possible that we could have missed relevant articles in our literature search because of our choice of search terms, which were intentionally designed to be limited and focused. In addition, at the time of this writing, some of the studies (n = 10) included in this review are over 10 years old and may not reflect the current state of mentor activity and training across the CTSA Consortium. Lastly, publication bias likely excluded findings that were not significant. Because of this, the studies reviewed may not give an accurate picture of the impact of mentor training programs [Reference Torgerson57].

Directions for future research

Future research on CTSA mentored research programs should utilize sophisticated statistical methods involving comparison groups, power analysis, quasi-experimental designs, and propensity score matching [Reference Behar-Horenstein, Feng, Prikhidko, Su, Kuang and Fillingim15,Reference Pfund, House and Spencer20,Reference Samuels, Ianni, Eakin, Champagne and Ellingrod58]. The use of such rigorous methods can answer important empirical questions about causality, providing program directors with useful, in-depth knowledge of how their programs work. While there have been a few rigorous evaluations that statistically model the impact of a mentorship program on mentee outcomes in clinical and translational science [Reference Spence, Buddenbaum, Bice, Welch and Carroll17], much more of this research is needed. Rigorous studies of the impact of mentorship in STEM fields could provide a guide for evaluation of mentorship programs in clinical and translational science [Reference Kuchynka, Gates and Rivera59Reference Estrada, Hernandez and Schultz61]. In addition, CTSA institutions ought to use standardized methodologies to evaluate their programs [Reference Comeau, Escoffery, Freedman, Ziegler and Blumberg42]. To facilitate this process, software programs such as Flight Tracker [Reference Helton, Pearson and Hartmann62] can be used to extract data required for standardized evaluations of mentorship programs, including mentee grant success and counts of papers co-authored by mentors and mentees [Reference Steiner56].

Future research should also evaluate the criteria and processes used to match mentees with appropriate mentors and mentorship teams, the alignment of mentor and mentee expectations, and the impact of mentor training on long-term career outcomes [Reference Behar-Horenstein and Prikhidko16,Reference Huskins, Silet and Weber-Main38,Reference Schweitzer, Rainwater, Ton, Giacinto, Sauder and Meyers46]. This includes research on whether mentee demographics (e.g., race, ethnicity, gender, sex, and other commonly used measures of demographic representation) might affect mentoring practices or outcomes. The paucity of research on demographic effects on mentorship in the CTSA context stands in contrast to the broader mentorship literature, for which ample research exists. For example, previous research has found that women perceived mentorship to be more valuable to their career development and reported receiving more psychosocial support [Reference Shen, Tzioumis and Andersen63Reference O’Brien, Biga, Kessler and Allen65]. More research is needed to establish whether similar relationships also exist in a CTSA context.

There is also a need to understand how mentees cultivate relationships with multiple mentors as part of a mentoring team, and how mentors help mentees launch and sustain an independent scientific career [Reference Behar-Horenstein and Prikhidko16]. Mentorship teams are particularly important for K Scholars, and more research is needed to understand how mentorship networks function.

More research is needed on how specific mentorship activities influence mentee outcomes. As noted in the NASEM report [Reference Dahlberg and Byars-Winston5], “there is little consensus on how to determine either the most essential specific forms of mentorship support or the programmatic or institutional structures that could enhance, incentivize, or reward mentorship support.” (p. 138). Prior research in academic medicine suggests that adequate institutional support, allowing mentees to choose their mentors, giving mentors and mentees protected time for mentorship, and using written statements to set boundaries and provide accountability all contribute to successful mentorship [Reference Kashiwagi, Varkey and Cook66]. The CTSA context could provide a good testing ground for groundbreaking research that could inform the broader body of knowledge on mentorship. One possible avenue of research would be to examine the effects of IDP adherence on mentee productivity. While we predict that mentees who adhere to their IDPs will be more productive, to our knowledge no research to support this hypothesis currently exists, even outside the CTSA context.

Finally, future research should utilize conceptual models of mentorship. An early review of evaluations of clinical and translational research mentors [Reference Meagher, Taylor, Probsfield and Fleming12] proposed a mentorship evaluation model that measured outcomes as a function of individual (e.g., demographic factors, education, personality) and environmental (e.g., institutional resources, institutional attitudes, and institutional policies) factors. We build upon this model by adding indicators of mediating processes (Figure 2).

Figure 2. Proposed model for evaluating clinical and translational science awards (CTSA) mentoring.

The simple model shown in Figure 2 is intended to help guide future research on mentorship and mentored research training programs in the CTSA Consortium. Our goal in developing this model was to address a specific gap: the lack of a comprehensive framework explicitly designed to guide the evaluation of mentoring relationships, particularly within the complex environment of CTSA programs. We were informed by established conceptual frameworks depicting the functions of mentorship (e.g., psychosocial and career support) [Reference Kram67] and overarching theories that explain mentoring processes (such as ecological systems theory and social capital theory) [Reference Chandler, Kram and Yip68Reference Aikens, Sadselia, Watkins, Evans, Eby and Dolan70]. Our awareness of these foundational theoretical contributions significantly shaped our understanding of mentoring dynamics.

Moreover, we considered process-oriented models of mentoring, notably the seminal work by Eby et al. [Reference Eby, Allen and Hoffman7]. This model’s emphasis on inputs, processes, and outcomes directly informed our conceptualization of how an effective evaluation framework should function to better understand mentoring relationships. While existing models describe what mentoring is or how it functions, our aim is to provide a framework that translates these understandings into actionable components for rigorous evaluation efforts. We identified a distinct need for a practical model that helps researchers and practitioners systematically assess the effectiveness and impact of mentoring relationships in a structured way, rather than solely describing mentoring processes or functions.

The first column of the model includes several key inputs, including the mentors’ and mentees’ prior experiences, similarities and differences between the mentor and mentee, mentorship training, and mentorship structure. The second column includes mediating indicators, such as the quality of the mentor–mentee relationship and the frequency of interaction between mentors and mentees. The third column of the model includes subjective and objective outputs, including mentee career-related performance, attitudinal outcomes, persistence, and learning outcomes. Lastly, there are several contextual factors representing institutional and systemic influences (such as organizational support and culture, funding availability, and institutional priorities) that we hypothesize influence inputs, mediating indicators, and outputs.

Conclusion

This scoping review shows that the CTSA Consortium is producing a growing body of research on mentorship and mentor training. Surveys conducted across the CTSA Consortium show that there is great diversity in scholars’ and trainees’ mentoring experiences and outcomes. Evaluations of mentored research training programs demonstrate their contributions to scholars’ and trainees’ professional careers, and there is ample evidence demonstrating that mentor training is an effective strategy to build the mentorship skills and competencies needed to cultivate the clinical and translational science workforce. However, there is a clear need for more practice-based investigation, especially to identify potential inputs and processes. The current literature provides a solid foundation on which future research should build.

Author contributions

Phillip Ianni: Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing-original draft, Writing-review & editing; Elias Samuels: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing-original draft, Writing-review & editing; Ellen Champagne: Data curation, Formal analysis, Visualization, Writing-original draft, Writing-review & editing; Eric Nehl: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing; Deborah DiazGranados: Conceptualization, Formal analysis, Writing-original draft, Writing-review & editing.

Funding statement

This research was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Numbers KL2TR002381, K12TR004374, R25TR004776, TL1TR002382, T32TR004371, T32TR004764, UL1TR002378, UM1TR004360, and UM1TR004404.

Competing interests

The authors report no declarations of interest.

References

Burgess, A, van Diggele, C, Mellis, C. Mentorship in the health professions: a review. Clin Teach. 2018;15:197–202. doi: 10.1111/tct.12756.
Underhill, CM. The effectiveness of mentoring programs in corporate settings: a meta-analytical review of the literature. J Vocat Behav. 2006;68:292–307.
Bortnowska, H, Seiler, B. Formal mentoring in nonprofit organizations. Model proposition. Management. 2019;23:188–208.
Ehrich, L, Hansford, B. Mentoring in the public sector. Int J Pract Exp in Pro Educ. 2008;11:1–16.
Dahlberg, ML, Byars-Winston, A, eds. The Science of Effective Mentorship in STEMM. Washington, DC: National Academies Press, 2019.
Pleschova, G, McAlpine, L. Enhancing university teaching and learning through mentoring: a systematic review of the literature. Int J Ment Coach Educ. 2015;4:107–125.
Eby, LT, Allen, TD, Hoffman, BJ, et al. An interdisciplinary meta-analysis of the potential antecedents, correlates, and consequences of protege perceptions of mentoring. Psychol Bull. 2013;139:441–476. doi: 10.1037/a0029279.
Hernandez, PR. Landscape of assessments of mentoring relationship processes in postsecondary STEMM contexts: a synthesis of validity evidence from mentee, mentor, institutional/programmatic perspectives. Paper commissioned by the National Academies of Sciences, Engineering & Medicine Committee on The Science of Effective Mentorship in STEMM, 2018.
Rubio, DM, Schoenbaum, EE, Lee, LS, et al. Defining translational research: implications for training. Acad Med. 2010;85:470–475. doi: 10.1097/ACM.0b013e3181ccd618.
Kraiger, K, Finkelstein, LM, Varghese, LS. Enacting effective mentoring behaviors: development and initial investigation of the cuboid of mentoring. J Bus Psychol. 2019;34:403–424.
Gilliland, CT, White, J, Gee, B, et al. The fundamental characteristics of a translational scientist. ACS Pharmacol Transl Sci. 2019;2:213–216. doi: 10.1021/acsptsci.9b00022.
Meagher, E, Taylor, L, Probsfield, J, Fleming, M. Evaluating research mentors working in the area of clinical translational science: a review of the literature. Clin Transl Sci. 2011;4:353–358. doi: 10.1111/j.1752-8062.2011.00317.x.
National Institutes of Health. PAR-25-195 Limited Competition: Ruth L. Kirschstein National Research Service Award (NRSA) Postdoctoral Research Training Grant for the Clinical and Translational Science Awards (CTSA) Program (T32 Clinical Trial Not Allowed). 2024.
National Institutes of Health. PAR-25-196 Limited Competition: Mentored Research Career Development Program Award in Clinical and Translational Science Awards (CTSA) Program (K12 Clinical Trial Optional). 2024.
Behar-Horenstein, LS, Feng, X, Prikhidko, A, Su, Y, Kuang, H, Fillingim, RB. Assessing mentor academy program effectiveness using mixed methods. Mentor Tutoring. 2019;27:109–125. doi: 10.1080/13611267.2019.1586305.
Behar-Horenstein, LS, Prikhidko, A. Exploring mentoring in the context of team science. Mentor Tutoring. 2017;25:430–454. doi: 10.1080/13611267.2017.1403579.
Spence, JP, Buddenbaum, JL, Bice, PJ, Welch, JL, Carroll, AE. Independent investigator incubator (I(3)): a comprehensive mentorship program to jumpstart productive research careers for junior faculty. BMC Med Educ. 2018;18:186. doi: 10.1186/s12909-018-1290-3.
Feldman, MD, Huang, L, Guglielmo, BJ, et al. Training the next generation of research mentors: the University of California, San Francisco, Clinical & Translational Science Institute mentor development program. Clin Transl Sci. 2009;2:216–221. doi: 10.1111/j.1752-8062.2009.00120.x.
Patino, CM, Kubicek, K, Robles, M, Kiger, H, Dzekov, J. The community mentorship program: providing community-engagement opportunities for early-stage clinical and translational scientists to facilitate research translation. Acad Med. 2017;92:209–213. doi: 10.1097/acm.0000000000001332.
Pfund, C, House, S, Spencer, K, et al. A research mentor training curriculum for clinical and translational researchers. Clin Transl Sci. 2013;6:26–33. doi: 10.1111/cts.12009.
Pfund, C, House, SC, Asquith, P, et al. Training mentors of clinical and translational research scholars: a randomized controlled trial. Acad Med. 2014;89:774–782. doi: 10.1097/ACM.0000000000000218.
Weber-Main, AM, Shanedling, J, Kaizer, AM, Connett, J, Lamere, M, El-Fakahany, EE. A randomized controlled pilot study of the University of Minnesota mentoring excellence training academy: a hybrid learning approach to research mentor training. J Clin Transl Sci. 2019;3:152–164. doi: 10.1017/cts.2019.368.
Solmeyer, A, Constance, N. Unpacking the “black box” of social programs and policies: introduction. Am J Eval. 2015;36:470–474.
Arksey, H, O’Malley, L. Scoping studies: towards a methodological framework. Int J Soc Res Method. 2005;8:19–32.
Levac, D, Colquhoun, H, O’Brien, KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69. doi: 10.1186/1748-5908-5-69.
Tricco, AC, Lillie, E, Zarin, W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–473. doi: 10.7326/M18-0850.
Peters, MDJ, Godfrey, CM, McInerney, P, Soares, CB, Khalil, H, Parker, D. The Joanna Briggs Institute Reviewers Manual. The Joanna Briggs Institute; 2015.
Veritas Health Innovation. Covidence systematic review software. www.covidence.org. Accessed February 26, 2024.
Bonilha, H, Hyer, M, Krug, E, et al. An institution-wide faculty mentoring program at an academic health center with 6-year prospective outcome data. J Clin Transl Sci. 2019;3:308–315. doi: 10.1017/cts.2019.412.
Stefely, JA, Theisen, E, Hanewall, C, et al. A physician-scientist preceptorship in clinical and translational research enhances training and mentorship. BMC Med Educ. 2019;19:89. doi: 10.1186/s12909-019-1523-0.
Robinson, GF, Schwartz, LS, DiMeglio, LA, Ahluwalia, JS, Gabrilove, JL. Understanding career success and its contributing factors for clinical and translational investigators. Acad Med. 2016;91:570–582. doi: 10.1097/acm.0000000000000979.
Smyth, SS, Coller, BS, Jackson, RD, et al. KL2 scholars’ perceptions of factors contributing to sustained translational science career success. J Clin Transl Sci. 2022;6:e34. doi: 10.1017/cts.2021.886.
Fleming, M, House, S, Hanson, VS, et al. The mentoring competency assessment: validation of a new instrument to evaluate skills of research mentors. Acad Med. 2013;88:1002–1008. doi: 10.1097/ACM.0b013e318295e298.
Feldman, MD, Steinauer, JE, Khalili, M, et al. A mentor development program for clinical translational science faculty leads to sustained, improved confidence in mentoring skills. Clin Transl Sci. 2012;5:362–367. doi: 10.1111/j.1752-8062.2012.00419.x.
Trejo, J, Wingard, D, Hazen, V, et al. A system-wide health sciences faculty mentor training program is associated with improved effective mentoring and institutional climate. J Clin Transl Sci. 2022;6:e18. doi: 10.1017/cts.2021.883.
Abedin, Z, Rebello, TJ, Richards, BF, Pincus, HA. Mentor training within academic health centers with clinical and translational science awards. Clin Transl Sci. 2013;6:376–380. doi: 10.1111/cts.12067.
Burnham, EL, Schiro, S, Fleming, M. Mentoring K scholars: strategies to support research mentors. Clin Transl Sci. 2011;4:199–203. doi: 10.1111/j.1752-8062.2011.00286.x.
Huskins, WC, Silet, K, Weber-Main, AM, et al. Identifying and aligning expectations in a mentoring relationship. Clin Transl Sci. 2011;4:439–447. doi: 10.1111/j.1752-8062.2011.00356.x.
Sancheznieto, F, Sorkness, CA, Attia, J, et al. Clinical and translational science award T32/TL1 training programs: program goals and mentorship practices. J Clin Transl Sci. 2022;6:e13. doi: 10.1017/cts.2021.884.
Silet, KA, Asquith, P, Fleming, MF. Survey of mentoring programs for KL2 scholars. Clin Transl Sci. 2010;3:299–304. doi: 10.1111/j.1752-8062.2010.00237.x.
Tillman, RE, Jang, S, Abedin, Z, Richards, BF, Spaeth-Rublee, B, Pincus, HA. Policies, activities, and structures supporting research mentoring: a national survey of academic health centers with clinical and translational science awards. Acad Med. 2013;88:90–96. doi: 10.1097/ACM.0b013e3182772b94.
Comeau, DL, Escoffery, C, Freedman, A, Ziegler, TR, Blumberg, HM. Improving clinical and translational research training: a qualitative evaluation of the Atlanta clinical and translational science institute KL2-mentored research scholars program. J Investig Med. 2017;65:23–31. doi: 10.1136/jim-2016-000143.
Martina, CA, Mutrie, A, Ward, D, Lewis, V. A sustainable course in research mentoring. Clin Transl Sci. 2014;7:413–419. doi: 10.1111/cts.12176.
McGee, RE, Blumberg, HM, Ziegler, TR, et al. Mentor training for junior faculty: a brief evaluation report from the Georgia clinical and translational science alliance. J Investig Med. 2023;71:577–585. doi: 10.1177/10815589231168601.
Nearing, KA, Nuechterlein, BM, Tan, S, Zerzan, JT, Libby, AM, Austin, GL. Training mentor-mentee pairs to build a robust culture for mentorship and a pipeline of clinical and translational researchers: the Colorado mentoring training program. Acad Med. 2020;95:730–736. doi: 10.1097/ACM.0000000000003152.
Schweitzer, JB, Rainwater, JA, Ton, H, Giacinto, RE, Sauder, CAM, Meyers, FJ. Building a comprehensive mentoring academy for schools of health. J Clin Transl Sci. 2019;3:211–217. doi: 10.1017/cts.2019.406.
Vanderford, NL, Evans, TM, Weiss, LT, Bira, L, Beltran-Gastelum, J. Use and effectiveness of the individual development plan among postdoctoral researchers: findings from a cross-sectional study. F1000Res. 2018;7:1132. doi: 10.12688/f1000research.15610.2.
Vanderford, NL, Evans, TM, Weiss, LT, Bira, L, Beltran-Gastelum, J. A cross-sectional study of the use and effectiveness of the individual development plan among doctoral students. F1000Res. 2018;7:722. doi: 10.12688/f1000research.15154.2.
Thompson, HJ, Santacroce, SJ, Pickler, RH, et al. Use of individual development plans for nurse scientist training. Nurs Outlook. 2020;68:284–292. doi: 10.1016/j.outlook.2020.01.001.
Sorkness, CA, Scholl, L, Fair, AM, Umans, JG. KL2 mentored career development programs at clinical and translational science award hubs: practices and outcomes. J Clin Transl Sci. 2020;4:43–52. doi: 10.1017/cts.2019.424.
Kirkpatrick, D, Kirkpatrick, J. Evaluating Training Programs: The Four Levels. 3rd ed. Berrett-Koehler, 2006.
Yardley, S, Dornan, T. Kirkpatrick’s levels and education ‘evidence’. Med Educ. 2012;46:97–106. doi: 10.1111/j.1365-2923.2011.04076.x.
Sheri, K, Too, JYJ, Chuah, SEL, Toh, YP, Mason, S, Radha Krishna, LK. A scoping review of mentor training programs in medicine between 1990 and 2017. Med Educ Online. 2019;24:1555435. doi: 10.1080/10872981.2018.1555435.
Campbell, CD. Best practices for student-faculty mentoring programs. In: Allen, TD, Eby, LT, eds. The Blackwell Handbook of Mentoring. Oxford, UK: Blackwell Publishing, 2007: 325–343.
Gangrade, N, Samuels, C, Attar, H, et al. Mentorship interventions in postgraduate medical and STEM settings: a scoping review. CBE Life Sci Educ. 2024;23:ar33.
Steiner, JF. Promoting mentorship in translational research: should we hope for Athena or train mentor? Acad Med. 2014;89:702–704. doi: 10.1097/ACM.0000000000000205.
Torgerson, C. Publication bias: the Achilles’ heel of systematic reviews? Brit J Educ Stud. 2006;54(1):89–102.
Samuels, E, Ianni, PA, Eakin, B, Champagne, E, Ellingrod, V. A quasiexperimental evaluation of a clinical research training program. Perf Impr Qtr. 2023;36:4–13.
Kuchynka, SL, Gates, AE, Rivera, LM. When and why is faculty mentorship effective for underrepresented students in STEM? A multicampus quasi-experiment. Cult Divers Ethn Min. 2023;31:69–75.
Hernandez, PR, Ferguson, CF, Pedersen, R, Richards-Babb, M, Quedado, K, Shook, NJ. Research apprenticeship training promotes faculty-student psychological similarity and high-quality mentoring: a longitudinal quasi-experiment. Mentor Tutoring. 2023;31:163–183.
Estrada, M, Hernandez, PR, Schultz, PW. A longitudinal study of how quality mentorship and research experience integrate underrepresented minorities into STEM careers. CBE Life Sci Educ. 2018;17:ar9. doi: 10.1187/cbe.17-04-0066.
Helton, R, Pearson, S, Hartmann, K. Flight Tracker: a REDCap tool to streamline career development grant preparation and reporting. Presented at: Association for Clinical and Translational Science; 2024; Las Vegas, Nevada.
Shen, MR, Tzioumis, E, Andersen, E, et al. Impact of mentoring on academic career success for women in medicine: a systematic review. Acad Med. 2022;97:444–458. doi: 10.1097/ACM.0000000000004563.
Farkas, AH, Bonifacino, E, Turner, R, Tilstra, SA, Corbelli, JA. Mentorship of women in academic medicine: a systematic review. J Gen Intern Med. 2019;34:1322–1329. doi: 10.1007/s11606-019-04955-2.
O’Brien, KE, Biga, A, Kessler, SR, Allen, TD. A meta-analytic investigation of gender differences in mentoring. J Manage. 2010;36:537–554.
Kashiwagi, DT, Varkey, P, Cook, DA. Mentoring programs for physicians in academic medicine: a systematic review. Acad Med. 2013;88:1029–1037. doi: 10.1097/ACM.0b013e318294f368.
Kram, KE. Phases of the mentor relationship. Acad Manage J. 1983;26:608–625.
Chandler, DE, Kram, KE, Yip, J. An ecological systems perspective on mentoring at work: a review and future prospects. Acad Manag Ann. 2011;5:519–570.
Hezlett, SA, Gibson, SK. Linking mentoring and social capital: implications for career and organization development. Adv Dev Human Resour. 2007;9:384–411.
Aikens, ML, Sadselia, S, Watkins, K, Evans, M, Eby, LT, Dolan, EL. A social capital perspective on the mentoring of undergraduate life science researchers: an empirical study of undergraduate-postgraduate-faculty triads. CBE Life Sci Educ. 2016;15:ar16. doi: 10.1187/cbe.15-10-0208.
Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA) flow diagram.

Table 1. Summary of key study variables

Table 2. Study type, design, and methods extracted from the articles evaluated

Table 3. Type of mentor training, outcomes, inputs, and statistical tests