
Using the Researcher Investment Tool to inform a clinical and translational research initiative

Published online by Cambridge University Press:  12 September 2025

Brenda M. Joly*
Affiliation:
Public Health Program, Muskie School of Public Service, University of Southern Maine, Portland, ME, USA
Kassandra A. Cousineau
Affiliation:
Department of Pediatrics and Vermont Child Health Improvement Program, The Robert Larner M.D. College of Medicine, University of Vermont, Burlington, VT, USA
Carolyn E. Gray
Affiliation:
Catherine Cutler Institute, Muskie School of Public Service, University of Southern Maine, Portland, ME, USA
Valerie S. Harder
Affiliation:
Department of Pediatrics and Vermont Child Health Improvement Program, The Robert Larner M.D. College of Medicine, University of Vermont, Burlington, VT, USA
Corresponding author: B.M. Joly; Email: brenda.joly@maine.edu

Abstract

Background:

Numerous efforts are focused on building the clinical and translational research (CTR) workforce. Approaches to evaluate CTR initiatives are varied, and efforts often rely on research project-level outcomes. This article applies an evaluation tool to capture individual-level data.

Objective:

The study used the novel Researcher Investment Tool (RIT) to measure researchers' experiences and their perceptions of institutional support, including analyses by researcher characteristics. The study also evaluated the RIT against common measures, including a bibliometric indicator, investigator status, and percent time dedicated to research.

Methods:

The RIT was administered to researchers who received funding or targeted research support from a CTR initiative. Mean scores were assessed by RIT section, domain/sub-domain, and item. Mean scores per section were compared across researcher characteristics using t-tests, and associations between common measures and average domain scores were tested using linear regression.

Results:

Thirty researchers completed all RIT items. RIT domain scores ranged from a high mean of 4.0 for the research skills domain to a low mean of 2.6 for the researcher productivity and community engagement domains. Analysis of commonly used indicators across domains suggests that researchers with a higher bibliometric score had more advanced research skills, service to the profession, research productivity, and research collaboration (p < .05). New investigators had lower perceptions of institutional support (p < .05).

Conclusions:

As an evaluation tool, the RIT captures individual-level data that may help to determine key areas of strength and opportunities for growth of a CTR program.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

The need for a strong and diverse research workforce to address health issues is well documented in the literature [1–9]. The National Institutes of Health (NIH) has a congressionally mandated program known as the Institutional Development Award (IDeA) [10]. The purpose of the program is to build research capacity in states with historically low levels of NIH funding by strengthening the ability of researchers to conduct clinical and translational research (CTR) and successfully compete for CTR funding [10]. The IDeA CTR Network award program currently funds 11 statewide or multi-state networks in the United States to build biomedical research capacity. A key aim of this federal program is to support the career trajectory of new and early-stage clinical and translational researchers.

One recipient of this IDeA funding was the Northern New England Clinical and Translational Research (NNE-CTR) Network. The NNE-CTR was launched in 2017 and received a second cycle of competitive funding with an overarching goal of fostering research to address health challenges in Maine, New Hampshire, and Vermont [11]. To date, the NNE-CTR has funded 46 small research projects. The NNE-CTR has also provided research supports such as mentorship, professional development, research navigation, community engagement and rural health navigation, access to laboratory testing, and more, to help investigators build skills, enhance their confidence, and gain experience with the research process.

Despite the focus on building and evaluating the health research workforce through efforts such as the NNE-CTR, there are limited tools and systematic methods available to assess how a researcher’s experiences change over time and how their career progresses, especially as systems of support are established to promote research. Tracking individual-level changes remains a key objective of initiatives such as the NNE-CTR, yet there is a gap in comprehensive, broadly applicable tools that offer standardized and consistent measurement [12–14]. Despite several theoretical approaches, metrics, and evaluation tools for assessing CTR initiatives [15–30], there are few validated survey instruments that address the full spectrum of a researcher’s experience across multiple domains. Many of the existing efforts focus on traditional research productivity measures related to quantifying publications, tracking bibliometric indicators, and calculating the number, size, and type of research grants [12]. Recently, studies have focused on the widely used H-index [31–33] as well as exploring demographic and employment characteristics (e.g., early-stage researchers, protected research time) [3,34–36]. A review of published measurement tools revealed survey instruments that were often narrowly tailored to specific aspects (e.g., mentorship, impact), as well as approaches that focused on assessing the attributes, activities, and skills of a researcher [12].

In response to these measurement gaps, the Researcher Investment Tool (RIT) was developed to more fully capture broadly defined research domains relevant to CTR, offering a more comprehensive approach. As seen in Figure 1, the tool includes traditional productivity measures and research skills and draws from many existing tools that focus narrowly on one area. In a single instrument, the RIT broadly captures researcher perceptions of institutional support and their experiences related to research service activities, collaboration, mentorship, community engagement, and the overall impact of their research. The RIT spans eight domains and offers a standardized approach that can be used to measure changes over time. As previously reported, initial psychometric testing of the RIT revealed good reliability and validity, suggesting its application is both consistent and accurate [14].

Figure 1. Researcher Investment Tool sections and domains.

The primary objective of this study was to apply the RIT to a sample of NNE-CTR researchers to determine their experiences and perceptions, overall and by researcher characteristics. A secondary objective was to further explore the validity of our tool by testing associations between individual research domain scores and three commonly used indicators [31–39]: (1) the H-Index, a widely used bibliometric [31], (2) NIH investigator status [37,38], and (3) the percent time dedicated to research [36,39]. We anticipated higher RIT domain scores among those with a higher H-Index score, and lower scores among new or early-stage investigators as well as those with 50% time or less designated for research.

Methods

Sample

A total of 48 NNE-CTR Network members were invited to participate. Inclusion criteria were receiving small research project funding, serving as a research clinical scholar, or participating in a “research studio” that offered individualized and intensive research support. This group was selected based on the targeted, ongoing wrap-around research supports they received through the NNE-CTR. The RIT was administered in May 2024 via Qualtrics, an online survey platform. The survey took approximately 15 minutes to complete and included items that operationalized all eight domains using a five-point Likert-type scale. The first seven domains assessed self-reported level of experience (1 = no experience, 5 = extensive experience), and the eighth domain measured self-reported perceptions of institutional support (1 = not at all, 5 = to a great extent). The survey also included a demographic section to capture the characteristics of researchers. Respondents had five weeks to complete the survey, and reminder emails were sent to non-respondents each week.

Researcher Investment Tool (RIT)

The RIT is a standardized and validated approach for capturing individual-level measures across eight research domains related to CTR. The survey tool was uniformly developed based on an extensive review of the literature; it follows a consistent structure and format for application, and it has demonstrated consistency over time. As noted above, psychometric testing revealed strong content validity, acceptable internal consistency for all domains, and strong test–retest reliability for nearly all domains [14]. The RIT has 90 items to assess the experiences and perceptions of researchers (Figure 1). The first section measures researcher experiences based on 79 items across seven domains and four sub-domains. The domains focus on research skills, service to the profession, productivity, collaboration, mentorship, community engagement, and research impact. Participants were asked to rate their level of experience using a five-point Likert-type scale anchored by 1 “No Experience” and 5 “Extensive Experience.” The second section measured researcher perceptions based on 11 items assessing one domain, institutional support. In this second section, participants were asked to respond on a five-point Likert-type scale anchored by 1 “Not at all” and 5 “To a great extent.” Participants’ RIT mean scores for each domain were calculated by adding the Likert scores within a domain and dividing by the number of items in the domain. Participants’ mean scores for each sub-domain and section were calculated the same way.
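To make the scoring rule concrete, the sketch below applies it to hypothetical item responses; the domain names and item counts are placeholders, not the actual 90-item instrument.

```python
# Minimal sketch of RIT-style scoring, assuming hypothetical data: a domain
# mean is the sum of a participant's Likert item scores (1-5) divided by the
# number of items in that domain. Domains and responses below are made up.

def domain_mean(item_scores: list[int]) -> float:
    """Mean Likert score for one domain."""
    return sum(item_scores) / len(item_scores)

participant = {
    "research_skills": [4, 5, 3, 4, 4],        # hypothetical item responses
    "institutional_support": [3, 2, 4, 3, 3],
}

for domain, items in participant.items():
    print(f"{domain}: {domain_mean(items):.1f}")

# Sub-domain and section means follow the same rule over their item sets.
```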

Demographic measures and common indicators

Participants were also asked about demographics regarding their gender (male, female, prefer not to answer), race (White, non-White), ethnicity (Hispanic, non-Hispanic), highest academic degree (MD/DO, all other degrees), and provision of clinical care (yes, no). Additionally, participants were asked about their research careers, including research specialty (categorized as clinical research, other non-clinical research) and NIH investigator status (new investigator, early-stage investigator), with the answer options “yes,” “no,” or “not sure” for both items. New investigator was defined as “someone who has not competed successfully for a substantial research grant from NIH.” Early-stage investigator status was defined as “someone who has completed their terminal research degree or end of post-graduate clinical training within the past 10 years and who has not previously competed successfully as a PD/PI for a substantial NIH independent research award.” Participants were asked, “In an average month, what percent of time do you spend doing research?”, and answer options included “none,” “1%–10%,” “11%–25%,” “26%–50%,” and “greater than 50%.” Finally, we included the H-Index, a widely used bibliometric score that measures the productivity and citation impact of a researcher’s published work [14]. The H-Index is among the most cited bibliometric measures of research output among health services researchers [12]. We identified and recorded the H-Index score for each participant first through Google Scholar, an approach published by others [40]. If there was no profile, we reviewed ResearchGate, then Scopus, to identify individual scores. For our secondary objective, we focused on four common indicators that have been previously studied [31–36]: the H-Index, NIH new investigator status, NIH early-stage investigator status, and percent dedicated time for research.
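For readers unfamiliar with the metric, the H-Index is the largest h such that a researcher has h publications each cited at least h times [32]. The sketch below is a worked illustration using made-up citation counts; it is not how scores were obtained for this study, which relied on Google Scholar, ResearchGate, or Scopus profiles.

```python
# Worked illustration of the H-Index definition [32], with made-up counts:
# the largest h such that h publications each have at least h citations.

def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers with >= 3 citations
```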

Data analysis

Data were cleaned and analyzed using Stata 17. To address objective one, we focused on scores overall and by researcher characteristics. We ran descriptive analyses to calculate the maximum, mean, standard deviation, and range for each section, domain, sub-domain, and item. We tested associations between participant characteristics and mean scores for researcher experience (section one) and researcher perceptions (section two) using independent t-tests. Research specialty was dichotomized into clinical research versus all other specialties. Percent dedicated time for research was also dichotomized into 50% or less versus greater than 50% time dedicated to research. Participants indicating gender as “prefer not to say” (n = 2) and “not sure” for NIH early-stage investigator status (n = 3) were dropped from these analyses. Analyses of race/ethnicity were not possible due to low variability. We compared each total domain score with the maximum potential score to calculate the percent of each domain score achieved.
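The sketch below illustrates one such comparison on a hypothetical data frame with one row per participant; the study’s analyses were run in Stata 17, and all column names and values here are assumptions for demonstration only.

```python
# Illustrative independent t-test comparing section-two (institutional
# support) mean scores by NIH new investigator status. Data are hypothetical.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.DataFrame({
    "new_investigator": ["yes", "no", "yes", "no", "no", "yes"],
    "support_mean":     [2.4, 3.1, 2.6, 3.4, 2.9, 2.2],
})

group_yes = df.loc[df["new_investigator"] == "yes", "support_mean"]
group_no = df.loc[df["new_investigator"] == "no", "support_mean"]
t_stat, p_value = ttest_ind(group_yes, group_no)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```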

To address the second objective, we used simple linear regressions to test associations between mean item scores within each of the eight RIT domains and the common indicators used in prior research: H-Index score (continuous), NIH new investigator status (yes/no), NIH early-stage investigator status (yes/no), and greater than 50% time dedicated to research (yes/no). For all statistical tests, we used a p-value cutoff of 0.05 to indicate statistical significance.
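A comparable sketch of one of these simple linear regressions, again with hypothetical data and illustrative variable names: each RIT domain mean is regressed on a single indicator at a time, here the H-Index.

```python
# Illustrative simple linear regression of one RIT domain mean (research
# skills) on H-Index score. Data are hypothetical; the study used Stata 17.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "h_index":     [2, 13, 7, 25, 40, 5],
    "skills_mean": [3.1, 4.0, 3.6, 4.3, 4.6, 3.4],
})

X = sm.add_constant(df["h_index"])     # intercept plus the single predictor
model = sm.OLS(df["skills_mean"], X).fit()
print(model.params)                    # slope = change per 1-point H-Index
print(model.pvalues)
```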

Results

Of the 48 invited, 30 participants completed the RIT and demographic questions, for a response rate of 63%. All participants answered each item on the RIT. One participant did not respond to three of the four demographic questions related to research focus. All other demographic questions were fully completed by all participants. We obtained H-Index scores for all participants.

Participant characteristics and H-index scores

Table 1 shows that over 75% were female, approximately 40% indicated a clinical research specialty, half had a medical degree (MD or DO), and about half provided clinical care. Over 60% of participants were not NIH new investigators, over 60% reported they were not early-stage investigators, and half had greater than 50% dedicated research time (Table 1). Not shown in Table 1, participants were 90% White and non-Hispanic and had a median H-Index score of 13 (IQR = 7–25), ranging from 0 to 94.

Table 1. Participant characteristics and mean scores on sections 1 and 2 of the Researcher Investment Tool

Notes: NIH = National Institutes of Health; race/ethnicity not reported because 90% of respondents were White and not Hispanic/Latinx; “ref” indicates the reference group.

1 Participants who indicated sex as “Prefer not to say” (n = 1) dropped from analyses.

2 All other research specialties included basic science research (n = 7), health services research (n = 2), community or public health research (n = 5), and other (n = 3).

3 All other academic degrees included PhD/ScD (n = 10), Master’s degree (n = 3), and other (n = 2).

4 Participants who indicated NIH Early-Stage Investigator Status as “Not sure” (n = 3) dropped from analyses.

* p < 0.05.

RIT experience and perception scores and participant characteristics

Overall mean RIT scores for researcher experience (section one) and researcher perceptions of institutional support (section two) did not differ by participant demographics (gender, research specialty, academic degree, or provision of clinical care; Table 1). Similarly, mean experience scores did not differ by NIH investigator status (new or early-stage) or percent dedicated time for research. However, this broad look at overall mean RIT scores showed that NIH new investigators had a 0.65-point lower mean score on the perceptions section (lower perceived institutional support for research) than participants who were not NIH new investigators (p < 0.05; Table 1). The supplemental table includes mean scores on the RIT domains within section one, by participant demographics and other characteristics.

RIT domain scores

Table 2 shows mean RIT domain and sub-domain scores, ranging from a high of 4.0 (sd = 0.7) for the research skills domain to a low of 2.6 for the researcher productivity (sd = 0.7) and community engagement (sd = 1.1) domains. Figure 2 shows the percent of the maximum possible score achieved for each domain (for example, a domain mean of 2.6 on a five-point scale corresponds to 52%). Four domains in the Researcher Experience section (research productivity, service to the profession, research impact, and community engagement) fell below 60%, as did the one domain in the Researcher Perceptions of Institutional Support section.

Figure 2. The percentage of total score out of the maximum score for each Researcher Investment Tool domain.

Table 2. Maximum score, mean, standard deviation, and range for the Researcher Investment Tool

Notes: IRB = Institutional Review Board; IACUC = Institutional Animal Care and Use Committee; NIH = National Institutes of Health; USDA = United States Department of Agriculture; PI = Principal Investigator.

Section I. Researcher experience

Domain 1: research skills

As shown in Table 2, this domain’s mean was notably higher than that of all other domains, yet five of 12 items had mean scores below the domain mean, including applying for research funding, leading a research team, writing an Institutional Review Board (IRB/IACUC) application, and tracking or closing a research study. Most participants reported high levels of skill in collaborating with other researchers and delivering research presentations.

Domain 2: service to the profession

This domain had nearly the lowest mean score. Eight of its 13 research service-related items scored below the domain mean, including serving on an IRB/IACUC review team, serving on an editorial board or as a guest editor for a journal, contributing to scientific guidelines, presenting research internationally, leading a research facility/unit, and participating in an NIH study section.

Domain 3: researcher productivity

This domain had one of the lowest mean scores across its 16 items, which reflect research funding and research dissemination. Items with averages below the domain mean revealed limited experience with NIH funding mechanisms targeting early-career support, including F and K grants, and with funds secured through industry sources.

Domain 4: research collaboration

The mean score for this domain was 3.2, with the team composition sub-domain mean lower and the team collaboration sub-domain mean higher. Items below the domain mean included leading multi-disciplinary or multi-institutional research teams and participating in a research team that includes a patient voice. Overall, four items in this domain received the highest mean scores: multi-disciplinary team participation and authorship, and multi-institutional team participation and authorship.

Domain 5: research mentorship

The domain mean score was 3.1, with the mentee experience sub-domain mean 0.1 points higher and the mentor experience sub-domain mean 0.1 points lower. Six items had mean scores below the domain mean, including introducing mentees to new research opportunities, creating a mentee/mentor plan, developing a research career plan with a mentor, assisting a mentee with their own grant proposals or funded project, and advocating for compensated research time for a mentee. The two items with the highest levels of experience were mentoring students and participating in research projects with mentors.

Domain 6: community engagement

This domain tied for the lowest mean score and had the fewest items. Item-level mean scores ranged from the lowest, for experience communicating research findings back to community partners, to the highest, for experience aligning research interests with community priorities.

Domain 7: research impact

The mean score for this six-item domain was also low. The highest mean level of experience was for participation in research that has positively influenced the work of other researchers. The lowest levels of experience were reported for two items: participation in research that positively influenced health outcomes and participation in research that positively influenced health policy.

Section II. Researcher perceptions

Domain 8: perceptions of institutional support

As shown in Table 2, this domain had a mean score in the middle of the other domains. The items with the most favorable perceived institutional support related to having designated time for research and participants’ clarity about the research expectations of their role. Items that fell below the domain mean included: my organization offers mentorship to support career advancement and research funding, my organization is committed to helping me build research skills, my organization recognizes my research accomplishments, and my organization supports community-engaged research efforts.

Common indicators: H-index, investigator status, dedicated time

Table 3 shows results from regression analyses exploring associations between RIT domain mean scores and the common indicators (objective two): H-Index, investigator status, and dedicated research time. Participant H-Index scores were associated with several researcher experience RIT domains, including research skills, service to the profession, research productivity, and research collaboration (Table 3). For every 10-point increase in H-Index score, mean scores for research skills (p = 0.004), research productivity (p = 0.002), and research collaboration (p = 0.021) increased by 0.2 points, and the mean score for service to the profession increased by 0.3 points (p = 0.001). Table 3 also shows that mean scores for most RIT domains were lower for new or early-stage investigators than for all other participants, but only two comparisons were significant. NIH new investigators scored 0.7 points lower on the researcher perceptions of institutional support domain than all other participants (p = 0.033), and NIH early-stage investigators scored 0.5 points lower on the research skills domain (p = 0.044). Finally, participants with more than 50% time dedicated to research scored 1.1 points lower on the community engagement domain than participants with less dedicated research time (p = 0.005; Table 3).

Table 3. Associations between common indicators and Researcher Investment Tool domain mean scores

Notes: NIH = National Institutes of Health; CI = Confidence Interval; Coeff = Coefficient.

* p < 0.05.

** p < 0.01.

*** p < 0.001.

Discussion

Our use of the RIT provides baseline data across eight domains assessing researchers’ experience and perceptions of institutional support within an IDeA CTR initiative. The results indicate variability in levels of research experience across and within the RIT domains, with research skills rated highest, and moderate levels of perceived institutional support, pointing to both strengths and areas for continued improvement in fostering research careers. The results also support the expectation that researchers with higher bibliometric productivity, as measured by the H-Index, have more experience with skills, service, productivity, and collaboration, while new and early-stage investigators show lower mean scores in most RIT domains. Overall, these results provide valuable insight into the current state of clinical and translational researchers in a historically underfunded region while also contributing to the literature on measuring the effectiveness of programs like CTR networks.

As expected, there was variability in average item and domain scores, indicating participants had different levels of experience and perceptions. That the research skills domain had the highest mean score overall suggests our participants had moderate to high levels of experience with conceptualizing research, collaborating with others, working with data, and disseminating findings. Given our focus on investigators who were seeking research support or funding, we expected this group would have experience with the fundamental competencies needed to carry out a research project. In addition to the research skills domain, these baseline findings suggest moderate levels of experience with research collaboration and research mentorship. Other studies have shown that fostering collaborative opportunities and mentoring programs can influence the productivity of researchers, particularly early-stage investigators [30,41–44]. Similarly, perceptions of institutional support were moderate, reflecting the culture, practices, and resources an organization uses to foster research. In general, participants reported relatively favorable support from leaders who valued research, as well as designated time to conduct research, available tools, and the necessary organizational support to foster research. They also reported having a clear understanding of research expectations, given their role.

The research productivity and community engagement domains had the lowest mean scores. While average item scores revealed participants had moderate levels of experience securing research funding from institutional or non-federal sources, they reported limited experience with NIH funding mechanisms aimed at early-stage investigators, a key outcome tracked by CTR Network awardees. Monitoring progress in this area over time may be needed to better understand how investments and supportive efforts affect a researcher’s productivity, as well as the barriers researchers continue to face. Similarly, although the IDeA CTR Network awards emphasize community engagement, these findings revealed lower levels of aligning research with community interests, involving the community in research, and including community members in the dissemination of findings. The challenges of community-based participatory research are well documented in the literature [45–47]. In our case, several factors may be at play, including the scope of the research, the readiness of researchers to engage the community, the use of available NNE-CTR resources to connect researchers with the community (e.g., community engagement research navigators), and the availability of funds or organizational support to involve the community, all opportunities for future research.

Results from objective two, focusing on the analysis of the H-Index, investigator status, and dedicated research time in relation to RIT domains, were mixed. We hoped to further validate the RIT by exploring the relationships between these constructs based on our theoretical expectations, and we were only able to do so partially. We expected the H-Index would be associated with RIT domains, anticipating that those who published more, and whose work was cited more frequently, would have higher levels of research experience. While we found this was the case, only four of the eight RIT domains were associated with the H-Index. No statistical relationship was found with the domains focused on mentorship, community engagement, research impact, or perceptions of institutional support. More work is needed to understand if, and how, these domains relate to bibliometric indicators. Although most associations between new and early-stage investigator status and RIT domains were not significant, all but one were in the expected negative direction. While we used the NIH definitions to determine self-reported investigator status, future validation studies may benefit from other approaches to categorizing early-career researchers. Given existing efforts to support junior-level clinical and translational researchers, it will be important to continue exploring how careers progress, and the individual-level nature of the RIT may prove useful in measuring these changes longitudinally. The finding that those with more than 50% dedicated research time had the lowest mean score on the community engagement domain suggests that those with more research time are not engaging with the community as much. This was not expected, but it may reflect faculty start-up funding, the relatively new focus on community-engaged research within more traditional medical or lab-based research, or other unmeasured factors. The fact that no other associations with dedicated research time were significant warrants further investigation. This metric may not be as useful, or there may be alternative approaches to capturing dedicated research time. Despite the mixed results for the study’s second objective, these analyses build on our prior psychometric work with the RIT and may add further support for its use in CTR evaluation.

This study includes several important limitations. First, although our baseline measures proved useful in capturing current levels of research experience and perceptions of institutional support, repeated measures will be needed to determine changes in this group over time, given the goal of research independence, a key focus of our evaluation. We plan to administer the RIT again in two years. However, it is difficult to estimate the amount of time needed to see measurable changes using the RIT. Second, our participants were selected because of their small research funding awards and NNE-CTR research support. Since project awards have been in place since 2017, several awardees have had additional time to advance their research careers after project funding ended. Our results provide a snapshot at one point in time, regardless of when project funds were awarded. Capturing baseline data prior to a research project award may have been more informative but might not have provided as much variability in scores. Third, most of our respondents did not identify as new or early-stage investigators. While several may have been new to CTR, our sample is skewed toward those with more research experience, likely affecting the H-Index scores. More work is needed to address the limitations of our sample.

Fourth, the RIT provides a comprehensive measure that is useful to evaluators but challenging given its self-report nature, length, and recurring administration over time. Additional efforts are needed to understand which domains are essential for repeated measures, the appropriate frequency for assessing change, and which domains prove most useful in assessing research career progression. Finally, our efforts to further study the RIT by comparing domain scores with existing and commonly used research measures yielded mixed results, providing validation in limited areas only. There is currently no “gold standard” instrument available for criterion validity testing. We therefore selected indicators we theoretically expected to be associated with our tool, yet the chosen indicators have limitations. For example, the H-Index tends to favor those with more research experience. We also relied on free platforms (e.g., Google Scholar and ResearchGate versus Scopus) to identify H-Index scores, and we categorized dedicated research time based on an arbitrary cut-off (greater than 50%) used in our prior evaluation work.

Conclusions

The RIT is a novel approach to assessing researcher experiences and perceptions of institutional support within an IDeA CTR initiative. As an evaluation tool, the RIT can be used to identify the strengths of a research network, moving beyond traditional research productivity measures, as well as areas where added supports could foster research career progression.

Supplementary material

For supplementary material/s referred to in this article, please visit https://doi.org/10.1017/cts.2025.10125.

Author contributions

Brenda Joly: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Visualization, Writing-original draft, Writing-review & editing; Kassandra Cousineau: Data curation, Formal analysis, Methodology, Validation, Visualization, Writing-original draft, Writing-review & editing; Carolyn Gray: Conceptualization, Data curation, Investigation, Project administration, Resources, Supervision, Validation, Writing-review & editing; Valerie Harder: Data curation, Formal analysis, Methodology, Supervision, Validation, Visualization, Writing-original draft, Writing-review & editing.

Funding statement

The research reported here was supported by grant U54 GM115516 from the NIH for the NNE-CTR Network.

Competing interests

There are no conflicts.

References

1. Blanchard, RD, Kleppel, R, Bianchi, DW. The impact of an institutional grant program on the economic, social, and cultural capital of women researchers. J Womens Health (Larchmt). 2019;28:1698–1704. doi: 10.1089/jwh.2018.7642.
2. Choo, E, Mathis, S, Harrod, T, et al. Contributors to independent research funding success from the perspective of K12 BIRCWH program directors. Am J Med Sci. 2020;360:596–603. doi: 10.1016/j.amjms.2020.09.006.
3. Chou, AF, Hammon, D, Akins, DR. Impact of the Oklahoma IDeA network of biomedical research excellence research support and mentoring program for early-stage faculty. Adv Physiol Educ. 2022;46:443–452. doi: 10.1152/advan.00075.2021.
4. Finney, JW, Amundson, EO, Bi, X, et al. Evaluating the productivity of VA, NIH, and AHRQ health services research career development awardees. Acad Med. 2016;91:563–569. doi: 10.1097/acm.0000000000000982.
5. Mason, JL, Lei, M, Faupel-Badger, JM, et al. Outcome evaluation of the National Cancer Institute career development awards program. J Cancer Educ. 2013;28:9–17. doi: 10.1007/s13187-012-0444-y.
6. Milewicz, DM, Lorenz, RG, Dermody, TS, Brass, LF, the National Association of MD-PhD Programs Executive Committee. Rescuing the physician-scientist workforce: the time for action is now. J Clin Invest. 2015;125:3742–3747. doi: 10.1172/jci84170.
7. National Institutes of Health. Physician-scientist workforce working group report. NIH; 2014. (https://acd.od.nih.gov/documents/reports/PSW_Report_ACD_06042014.pdf) Accessed November 17, 2022.
8. Nikaj, S, Lund, PK. The impact of individual mentored career development (K) awards on the research trajectories of early-career scientists. Acad Med. 2019;94:708–714. doi: 10.1097/acm.0000000000002543.
9. Skinnider, MA, Twa, DDW, Squair, JW, Rosenblum, ND, Lukac, CD, Canadian MD/PhD Program Investigation Group. Predictors of sustained research involvement among MD/PhD programme graduates. Med Educ. 2018;52:536–545. doi: 10.1111/medu.13513.
10. U.S. Department of Health and Human Services. (n.d.). IDeA clinical and translational research programs. National Institute of General Medical Sciences. (https://www.nigms.nih.gov/Research/DRCB/IDeA/Pages/idea-ctrp) Accessed December 3, 2024.
11. National Institute of General Medical Sciences. Institutional development award (IDeA) networks for clinical and translational research (IDeA-CTR). IDeA interactive portfolio dashboard. NIH; 2023. (https://www.nigms.nih.gov/Research/DRCB/Pages/DRCB-IDeA-Interactive-Portfolio-Dashboard.aspx?awardsby=subprogram&subprgm=IDeA%20-%20CTR) Accessed December 3, 2024.
12. Joly, BM, Gray, C, Rand, J, Bizier, K, Pearson, K. Approaches and tools to measure individual-level research experience, activities, and outcomes: a narrative review. J Clin Transl Sci. 2025;9(1):e161. doi: 10.1017/cts.2025.10076.
13. Bilardi, D, Rapa, E, Bernays, S, Lang, T. Measuring research capacity development in healthcare workers: a systematic review. BMJ Open. 2021;11:e046796. doi: 10.1136/bmjopen-2020-046796.
14. Joly, BJ, Gray, C, Cousineau, K, Pearson, K, Harder, VS. The development, validity, and reliability of the researcher investment tool. J Clin Transl Sci. 2025;9(1):e160. doi: 10.1017/cts.2024.673.
15. Center for Leading Innovation and Collaboration. Measures of impact working group: final report. 2020, February 24. (https://clic-ctsa.org/node/6481) Accessed March 23, 2021.
16. Cruz Rivera, S, Kyte, DG, Aiyegbusi, OL, Keeley, TJ, Calvert, MJ. Assessing the impact of healthcare research: a systematic review of methodological frameworks. PLoS Med. 2017;14:e1002370. doi: 10.1371/journal.pmed.1002370.
17. Dembe, AE, Lynch, MS, Gugiu, PC, Jackson, RD. The translational research impact scale: development, construct validity, and reliability testing. Eval Health Prof. 2014;37:50–70. doi: 10.1177/0163278713506112.
18. Patel, PA, Patel, KK, Gopali, R, Reddy, A, Bogorad, D, Bollinger, K. The relative citation ratio: examining a novel measure of research productivity among southern academic ophthalmologists. Semin Ophthalmol. 2022;37:195–202. doi: 10.1080/08820538.2021.1915341.
19. Rollins, L, Llewellyn, N, Ngaiza, M, Nehl, E, Carter, DR, Sands, JM. Using the payback framework to evaluate the outcomes of pilot projects supported by the Georgia clinical and translational science alliance. J Clin Transl Sci. 2021;5:1–9. doi: 10.1017/cts.2020.542.
20. Vanderbilt University. Flight tracker. 2024. (https://edgeforscholars.vumc.org/information/flight-tracker/) Accessed March 5, 2024.
21. Brennan, SE, McKenzie, JE, Turner, T, et al. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers’ capacity to engage with and use research. Health Res Policy Syst. 2017;15:1. doi: 10.1186/s12961-016-0162-8.
22. Canadian Academy of Health Sciences, Panel on Return on Investment in Health Research. Making an impact: a preferred framework and indicators to measure returns on investment in health research. Canadian Academy of Health Sciences; 2009. (https://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf) Accessed February 9, 2024.
23. Tigges, BB, Sood, A, Dominguez, N, Kurka, JM, Myers, OB, Helitzer, D. Measuring organizational mentoring climate: importance and availability scales. J Clin Transl Sci. 2020;5:e53. doi: 10.1017/cts.2020.547.
24. Oetzel, JG, Zhou, C, Duran, B, et al. Establishing the psychometric properties of constructs in a community-based participatory research conceptual model. Am J Health Promot. 2015;29:e188–e202. doi: 10.4278/ajhp.130731-QUAN-398.
25. Trochim, WM, Marcus, SE, Mâsse, LC, Moser, RP, Weld, PC. The evaluation of large research initiatives. Am J Eval. 2008;29:8–28. doi: 10.1177/1098214007309280.
26. Fleming, M, House, S, Hanson, VS, et al. The mentoring competency assessment: validation of a new instrument to evaluate skills of research mentors. Acad Med. 2013;88:1002–1008. doi: 10.1097/ACM.0b013e318295e298.
27. Hyun, SH, Rogers, JG, House, SC, Sorkness, CA, Pfund, C. Revalidation of the mentoring competency assessment to evaluate skills of research mentors: the MCA-21. J Clin Transl Sci. 2022;6:e46. doi: 10.1017/cts.2022.381.
28. Hyun, SH, Rogers, JG, House, SC, Sorkness, CA, Pfund, C. Erratum: re-validation of the mentoring competency assessment to evaluate skills of research mentors: the MCA-21 - corrigendum. J Clin Transl Sci. 2024;8:e188. doi: 10.1017/cts.2024.637.
29. Luke, DA, Sarli, CC, Suiter, AM, et al. The translational science benefits model: a new framework for assessing the health and societal benefits of clinical and translational sciences. Clin Transl Sci. 2018;11:77–84. doi: 10.1111/cts.12495.
30. Hall, KL, Stokols, D, Moser, RP, et al. The collaboration readiness of transdisciplinary research teams and centers: findings from the National Cancer Institute’s TREC year-one evaluation study. Am J Prev Med. 2008;35:S161–S172. doi: 10.1016/j.amepre.2008.03.035.
31. Hirsch, JE, Buela-Casal, G. The meaning of the h-index. Int J Clin Health Psychol. 2014;14:161–164. doi: 10.1016/s1697-2600(14)70050-x.
32. Hirsch, JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–16572. doi: 10.1073/pnas.0507655102.
33. Schreiber, WE, Giustini, DM. Measuring scientific impact with the h-index: a primer for pathologists. Am J Clin Pathol. 2019;151:286–291. doi: 10.1093/ajcp/aqy137.
34. Kalet, A, Lusk, P, Rockfeld, J, et al. The challenges, joys, and career satisfaction of women graduates of the Robert Wood Johnson Clinical Scholars Program 1973–2011. J Gen Intern Med. 2020;35:2258–2265. doi: 10.1007/s11606-020-05715-3.
35. Akabas, M, Brass, L, Tartakovsky, I. National MD-PhD program outcomes study. Association of American Medical Colleges; 2018. (https://www.aamc.org/data-reports/workforce/report/national-md-phd-program-outcomes-study) Accessed April 6, 2023.
36. Lowenstein, SR, Fernandez, G, Crane, LA. Medical school faculty discontent: prevalence and predictors of intent to leave academic careers. BMC Med Educ. 2007;7:37. doi: 10.1186/1472-6920-7-37.
37. Walsh, R, Moore, RF, Doyle, JM. An evaluation of the National Institutes of Health Early Stage Investigator policy: using existing data to evaluate federal policy. Res Eval. 2018;27:380–387. doi: 10.1093/reseval/rvy012.
38. Nikaj, S, Roychowdhury, D, Lund, PK, Matthews, M, Pearson, K. Examining trends in the diversity of the U.S. National Institutes of Health participating and funded workforce. FASEB J. 2018;32:6410–6422. doi: 10.1096/fj.201800639.
39. Dev, AT, Kauf, TL, Zekry, A, et al. Factors influencing the participation of gastroenterologists and hepatologists in clinical research. BMC Health Serv Res. 2008;8:208. doi: 10.1186/1472-6963-8-208.
40. Birks, Y, Fairhurst, C, Bloor, K, Campbell, M, Baird, W, Torgerson, D. Use of the h-index to measure the quality of the output of health services researchers. J Health Serv Res Policy. 2014;19:102–109. doi: 10.1177/1355819613518766.
41. McRae, M, Zimmerman, KM. Identifying components of success within health sciences-focused mentoring programs through a review of the literature. Am J Pharm Educ. 2019;83:6976. doi: 10.5688/ajpe6976.
42. Hawk, LW Jr, Murphy, TF, Hartmann, KE, Burnett, A, Maguin, E. A randomized controlled trial of a team science intervention to enhance collaboration readiness and behavior among early career scholars in the clinical and translational science award network. J Clin Transl Sci. 2024;8:e6. doi: 10.1017/cts.2023.692.
43. Lee, S, Bozeman, B. The impact of research collaboration on scientific productivity. Soc Stud Sci. 2005;35:673–702. doi: 10.1177/0306312705052359.
44. Mazumdar, M, Messinger, S, Finkelstein, DM, et al. Evaluating academic scientists collaborating in team-based research: a proposed framework. Acad Med. 2015;90:1302–1308. doi: 10.1097/acm.0000000000000759.
45. Hallmark, CC, Bohn, K, Hallberg, L, Croisant, SA. Addressing institutional and community barriers to development and implementation of community-engaged research through competency-based academic and community training. Front Public Health. 2023;10:1070475. doi: 10.3389/fpubh.2022.1070475.
46. Martinez, LS, Russell, B, Rubin, CL, Leslie, LK, Brugge, D. Clinical and translational research and community engagement: implications for researcher capacity building. Clin Transl Sci. 2012;5:329–332. doi: 10.1111/j.1752-8062.2012.00433.x.
47. LeClair, A, Lim, JJ, Rubin, C. Lessons learned from developing and sustaining a community-research collaborative through translational research. J Clin Transl Sci. 2018;2:79–85. doi: 10.1017/cts.2018.7.