
Increasing evaluation team capacity through creation of an innovative digital tool

Published online by Cambridge University Press:  26 September 2025

Kayla J. Kuhfeldt*
Affiliation:
Boston University, Clinical & Translational Science Institute, Boston, MA, USA
Kim C. Brimhall
Affiliation:
Boston University, Clinical & Translational Science Institute, Boston, MA, USA; Boston University, School of Public Health, Health Law, Policy & Management, Boston, MA, USA
Corresponding author: K. Kuhfeldt; Email: kaylakuh@bu.edu

Abstract

Evaluation teams have been critical to the success of Clinical and Translational Science Award (CTSA) programs funded by the National Center for Advancing Translational Sciences (NCATS). Given the limited resources often available to evaluation teams and the growing emphasis on impact evaluation and continuous quality improvement (CQI), CTSA programs may need innovative strategies to build capacity for effectively implementing CQI and impact evaluation while still tracking commonly reported metrics. To address this challenge, the Boston University (BU) Clinical and Translational Science Institute (CTSI) partnered with the BU Hariri Institute's Software & Application Innovation Lab (SAIL) to develop a web-based digital tool, known as TrackImpact, that streamlines data collection, saves significant time and resources, and increases evaluation team capacity for other activities. Time- and cost-savings analyses demonstrate how this innovative digital tool increased our evaluation team's capacity.

Information

Type
Special Communication
Creative Commons
CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

Evaluation teams have been critical to the success of Clinical and Translational Science Award (CTSA) programs funded by the National Center for Advancing Translational Sciences (NCATS). They help monitor success and processes, make improvements to programs, and assess the cost and impact CTSA programs have had over time. The 2021 National Evaluators Survey conducted by the CTSA Program Evaluators' Group showed structural differences in the number of dedicated evaluation specialists within a single CTSA program [1]. For example, Hoyo et al. (2024) reported that only 17% of responding CTSA programs had four dedicated evaluation specialists, 24% had three, and approximately 34% had two evaluation team members [1]. Over 60% of CTSA programs reported a net full-time equivalent (FTE) for evaluation specialists between 0.5 and 2.0, suggesting evaluation teams may have limited resources and evaluators likely share responsibilities outside of dedicated evaluation efforts [1]. The Boston University (BU) Clinical and Translational Science Institute (CTSI) is representative of many other CTSA programs with limited FTE evaluators. We currently have one full-time staff evaluator and one part-time faculty evaluator. Depending on the availability of funding, additional part-time graduate-level student employees are recruited when needed. With limited resources, evaluation teams are challenged to maximize their efforts efficiently to ensure accurate and timely evaluation of CTSA programs and their impact. To help address this challenge, the BU CTSI has developed a web-based software tool that can assist evaluation teams with critical data collection, providing valuable savings in cost and time.

The NCATS mission and vision include turning research observations into health solutions through translational science and bringing more treatments to all people more quickly [2]. Recently, there has been a strong emphasis on CTSA evaluation teams conducting continuous quality improvement (CQI) and impact evaluation to ensure CTSA programs advance NCATS' mission and vision [3,4]. Changing NCATS Funding Opportunity Announcement (FOA) requirements have led CTSA evaluation teams to restructure and rethink previous practices in order to conduct new, in-depth impact evaluation and CQI activities within their designated resources and team capacity [5]. This creates a unique challenge for current CTSA evaluation teams, who need to accomplish more (i.e., CQI and in-depth impact evaluation) with the same level of resources dedicated to evaluation efforts (i.e., the same number of evaluation specialists).

Bibliometric data is quantitative information that many CTSA programs use to evaluate the productivity and impact of their program participants. Depending on the organizational structure of the evaluation team, the resources available, and the tools used to monitor and track CTSA participant bibliometric data, variation exists in the types of data points collected, the quality and accuracy of the data, and how long the data can be tracked over time. Although longitudinal bibliometric data is critical for effectively evaluating a CTSA program's productivity and impact, challenges remain with this type of data collection. Common challenges to quality bibliometric data include difficulty in accurately attributing work to a CTSA participant because of common or similar participant names, published name variations, and changes in institutional affiliation when participants move to different institutions over time.

Several software tools have been developed to address these common bibliometric data challenges. However, many still fall short of providing CTSA evaluators with an effective bibliometric data tool. Software tools such as Flight Tracker [6], bibliometrix [7], CiteSpace [8], Dimensions [9], and Scopus [10] strive to track and analyze participant publication and funded grant information to measure participant research productivity and impact. Some of these tools use participant identification methods, such as the Open Researcher and Contributor ID (ORCID), to help accurately attribute publications and funded grants to participants [11]; however, duplicative participant data and misattribution of work continue to exist. Likewise, many of these bibliometric data tools are designed for collection of specific metrics [e.g., Flight Tracker has a strong linkage and tracking system for National Institutes of Health (NIH) funded grants and publications] [6] and may not provide the flexibility and reach needed for tracking a CTSA program's impact across a variety of domains, such as the influence CTSA participants may have on governmental and nongovernmental policy and legislation. Thus, the BU CTSI, in collaboration with the BU Rafik B. Hariri Institute for Computing and Computational Science and Engineering's Software & Application Innovation Lab (SAIL), designed and built a digital tool to track the productivity, influence, and impact of BU CTSI funded scholars, trainees, and pilot awardees. The tool aggregates data from a wide range of sources to assess research productivity, influence, and impact across various public health domains.

Traditional methods for CTSA evaluation of research productivity, influence, and impact

Before describing the digital tool developed for tracking BU CTSI's impact, it is important to understand how this data has historically been collected and evaluated. The BU CTSI evaluation team monitors and evaluates its publication portfolio by tracking both traditional publication metrics and Altmetrics to assess research productivity, influence, and progress toward translation. Historically, the BU CTSI evaluation team collected this data by hand using external and internal university software systems. For example, an evaluator would access each external software system, such as Dimensions [9], SciVal [12], Overton [13], and Web of Science [14], as well as internal university software systems, such as BU Profiles [15], to collect metric data by hand. Examples of metric data collected include BU CTSI-supported publication count, citation count, h-index, news media mentions, policy citations, and total Altmetric attention scores. The evaluator would review all data, remove duplicated information, and organize data by BU CTSI participant and program in an overall aggregated report. Grant funding generated was also tracked and monitored by evaluators to assess the ability of BU CTSI-supported investigators to continue in research and become independent investigators after completing training programs. To collect this data, evaluators accessed an external NIH software system (NIH RePORTER) [16], an internal university software system (BU Profiles) [15], and self-reported progress reports completed by BU CTSI participants using WebCAMP, a software system created by Weill Cornell Medicine's Clinical and Translational Science Center in 2014 [17].

An evaluator collected this data by hand utilizing the participant’s full name and university affiliations to determine the correct association of metrics. All publication and grant metrics were ultimately aggregated by hand and stored using individual Excel spreadsheets organized by BU CTSI program (see Figure 1). Each BU CTSI program maintained Excel spreadsheets that organized program cohorts by individual participants, including the year they first received BU CTSI support. This structure allowed for tracking cohorts and individuals over time to assess their research productivity, influence, and impact. This also allowed for examination of how programmatic changes (i.e., CQI efforts) may have influenced various metric outcomes, such as research productivity and impact.

Figure 1. Boston University Clinical & Translational Science Institute evaluators’ manual bibliometric analysis process. Note. BU = Boston University; BMC = Boston Medical Center; NIH = National Institutes of Health; DOI = digital object identifier.

In addition, each BU CTSI program had its own dedicated dashboard that was manually created and updated by hand using Microsoft PowerPoint. Each program dashboard included data visualizations of aggregated data on the full program's participant demographic breakdown, including each participant's department/school, position title, race, ethnicity, etc. Program dashboards also displayed data on participant cohorts and data visualizations of publication and grant outcomes over time. Based on each BU CTSI program's timeline, a demographic and bibliometric analysis was completed to update the program's dashboard. These dashboards were made available to program leaders and evaluators to promote data transparency and guide programmatic decision-making.

New innovation for CTSA evaluation of research productivity, influence, and impact

To standardize data collection, broaden the range of bibliometric information collected on commonly reported metrics, and increase BU CTSI's evaluation capacity, the BU CTSI and SAIL collaboratively developed a digital tool, known as TrackImpact, for assessing research productivity, influence, and impact. This software innovation is designed to provide a more flexible and efficient data collection process for assessing the contributions of BU CTSI's funded trainees, scholars, and participants. The TrackImpact tool is a web-based application with two parts: (a) a NocoDB deployment that securely stores all collected data using authentication and user access controls; and (b) an automated process for retrieving metric data from external and internal university software systems.

NocoDB is a no-code platform that provides an intuitive spreadsheet interface for creating databases and running various data analyses [18,19]. The platform supports multiple connections with external and internal software systems or websites, which allows TrackImpact to automate the collection and consolidation of chosen data points from various software systems. We have connected multiple external software systems, including Dimensions [9], Overton [13], BU Profiles [15], and NIH RePORTER [16], to our NocoDB platform. We also connect the NocoDB platform to our internal administrative and evaluation software system, WebCAMP [17], which the BU CTSI uses to store application and progress report data for all programs (see Figure 2). All integrations follow Health Insurance Portability and Accountability Act (HIPAA) and BU Data Center Policy regulations.
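To make this integration pattern concrete, the sketch below shows, in Python, how one external source (NIH RePORTER) might be queried and its results written into a NocoDB table over the two REST APIs. This is an illustrative sketch rather than TrackImpact's actual code: the NocoDB base URL, table ID, token, and column names are placeholders, and the exact record-endpoint path can vary across NocoDB versions and deployments.

```python
"""Illustrative sketch: pull one investigator's NIH awards from NIH RePORTER
and store them in a NocoDB table. All NocoDB values below are placeholders."""
import requests

REPORTER_URL = "https://api.reporter.nih.gov/v2/projects/search"
NOCODB_URL = "https://nocodb.example.edu"   # hypothetical deployment URL
NOCODB_TABLE = "tbl_grants"                 # hypothetical table ID
NOCODB_TOKEN = "xxxx"                       # API token with write access

def fetch_nih_awards(pi_name: str) -> list[dict]:
    """Query NIH RePORTER for projects whose PI matches the given name."""
    payload = {"criteria": {"pi_names": [{"any_name": pi_name}]}, "limit": 100}
    resp = requests.post(REPORTER_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])

def store_in_nocodb(records: list[dict]) -> None:
    """Insert grant records into the NocoDB table backing the evaluation data.
    The /api/v2/tables/{id}/records path and payload shape depend on the
    NocoDB version; treat this as a sketch, not a verified endpoint."""
    rows = [
        {"ProjectNumber": r.get("project_num"),   # hypothetical column names
         "AwardAmount": r.get("award_amount"),
         "FiscalYear": r.get("fiscal_year")}
        for r in records
    ]
    resp = requests.post(
        f"{NOCODB_URL}/api/v2/tables/{NOCODB_TABLE}/records",
        headers={"xc-token": NOCODB_TOKEN}, json=rows, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    store_in_nocodb(fetch_nih_awards("Doe, Jane"))
```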

Figure 2. TrackImpact tool internal and external software systems and collected metrics. Note. BU = Boston University; CTSI = Clinical & Translational Science Institute; TSBM = Translational Science Benefits Model; NIH = National Institutes of Health.

Once the NocoDB platform integrates the internal and external evaluation and administrative tools, analyses can run at the push of a button [18]. When an analysis is needed, TrackImpact gathers individual investigator names and their associated metrics, synchronizes the data, and integrates it into the tool for analysis. The integrated tools (e.g., Dimensions [9], BU Profiles [15], NIH RePORTER [16], Overton [13]) link publications and grant information directly to an investigator using unique identifiers to ensure correct, accurate association. Dimensions and Overton each use their own unique identifiers [9,13]; coding these identifiers into the platform, between pulling data from WebCAMP and querying the other tracking tools, is a one-time, up-front cost. BU Profiles and WebCAMP are both internal software systems and use BU email addresses to ensure correct association [15,17]. Over time, these unique identifiers are saved into NocoDB to ensure the correct name and affiliation are associated with our programs, even as investigators publish under changing names or affiliations.
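The identifier-linking idea can be illustrated with a minimal sketch: once a participant's external identifiers (e.g., a Dimensions researcher ID) are stored, records pulled on later syncs are attributed by identifier and de-duplicated by DOI rather than matched on names that may change. This is not TrackImpact's implementation; the class and field names below are hypothetical.

```python
"""Minimal sketch of identifier-based attribution and DOI de-duplication."""
from dataclasses import dataclass, field

@dataclass
class Participant:
    bu_email: str                      # internal key shared by BU Profiles and WebCAMP
    dimensions_id: str | None = None   # stored once, reused for every later sync
    overton_id: str | None = None
    publications: dict[str, dict] = field(default_factory=dict)  # keyed by DOI

    def add_publication(self, record: dict) -> None:
        """Attach a publication record, skipping DOIs already attributed."""
        doi = (record.get("doi") or "").lower()
        if doi:
            self.publications.setdefault(doi, record)

def attribute(records: list[dict], roster: dict[str, Participant]) -> None:
    """Attribute source records to participants by stored identifier, not by name."""
    by_dimensions_id = {p.dimensions_id: p for p in roster.values() if p.dimensions_id}
    for rec in records:
        owner = by_dimensions_id.get(rec.get("researcher_id"))
        if owner is not None:
            owner.add_publication(rec)
```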

Once TrackImpact completes the syncing process, all collected data is presented and can be filtered in many ways. For example, data can be organized and aggregated by BU CTSI participant, participant cohort, program, or various time periods. The tool can also filter by individuals who have utilized multiple programs to show the impact of various pathways through BU CTSI resources. This filtering flexibility allows for different levels of analysis (e.g., individual-level by participant, group-level by participant cohort and program). It also allows for exploring metrics and outcomes over varying time periods and by translational science topic.

All data from TrackImpact can be exported into user-friendly data platforms, such as Excel, to create further data visualizations for presentations or meetings (see Figure 3). Examples of data points collected by the tool include: grant dollar amount; funding organization; publication topic; number of publications; number of citations; Altmetric score; number of news media mentions; h-index; number of collaborators within and outside our institution; policy document citations; and policy mentions. We can also change which metrics we examine with each analysis, including or excluding metrics depending on the report needed.
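As an illustration of the kind of grouping and export described above (not TrackImpact's own code), the following sketch assumes the synced data has been pulled into a pandas DataFrame with one row per publication; the column names and values are hypothetical.

```python
"""Sketch of cohort/program aggregation and Excel export of synced data."""
import pandas as pd

pubs = pd.DataFrame([
    {"participant": "A", "program": "KL2",   "cohort": 2021, "citations": 12, "altmetric": 30},
    {"participant": "B", "program": "KL2",   "cohort": 2022, "citations": 4,  "altmetric": 8},
    {"participant": "A", "program": "Pilot", "cohort": 2023, "citations": 2,  "altmetric": 1},
])

# Aggregate by program and cohort, the same grouping used in program dashboards.
summary = (pubs.groupby(["program", "cohort"])
               .agg(publications=("participant", "size"),
                    total_citations=("citations", "sum"),
                    mean_altmetric=("altmetric", "mean"))
               .reset_index())

# Export for dashboards and presentations (requires an Excel writer such as openpyxl).
summary.to_excel("program_summary.xlsx", index=False)
```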

Figure 3. TrackImpact tool user pathway. Note. SAIL = Software & Application Innovation Lab.

Time savings and cost benefits analysis: traditional methods vs. TrackImpact tool

We examined the time savings and cost benefits of using the TrackImpact tool with seven BU CTSI programs [20]. First, the time and cost of traditional methods for evaluating participant research productivity, influence, and impact were measured to develop a baseline assessment of the resources required for successful implementation. The BU CTSI was founded in 2008, and each of the seven programs has a different start date and number of cohorts (i.e., the number of participants and cohorts influences how much time it takes to manually collect metric data for all participants and then aggregate the data by cohort, if applicable).

To gain a baseline assessment of how much time it takes both a fully trained graduate-level student worker and an FTE evaluator to manually collect all metric data for each participant per program (see Figure 1 for a review of the manual process), we tracked graduate-level student and evaluator hours spent completing the manual data collection process for bibliometric analysis from each external and internal university software system, collecting participant demographic data (personal characteristics such as race and ethnicity are collected only once; educational and work-related characteristics are updated yearly), cleaning the data to remove duplicate information, creating an Excel spreadsheet of the aggregated data by cohort and program, and updating each program dashboard with summary results. We examined how long it takes a trained graduate-level student versus an FTE evaluator to complete the manual data collection and evaluation process (i.e., traditional evaluation methods) as well as to use TrackImpact, assessing cost effectiveness on two levels: (a) traditional evaluation methods versus the TrackImpact tool; and (b) a trained graduate-level student versus a professional FTE evaluator.

The seven BU CTSI programs were examined in 2024: Career Development Award Writing Workshop (CDA; n = 72 participants; n = 15 cohorts), Program for Early Research Career Development (PERC; n = 57 participants; n = 9 cohorts), KL2 Career Development Award [KL2; n = 68 participants (38 scholars and 30 mentors)], TL1 Predoctoral and Postdoctoral Fellowships in Regenerative Medicine [TL1; n = 36 participants (21 trainees and 15 mentors)], Mentor the Mentor Workshop (Mentor the Mentor; n = 54 participants), Mock NIH Study Section (Mock Study; n = 9 participants; n = 2 cohorts), and Pilot Awards (n = 226 participants). The CDA data is collected and updated once per year, whereas all other programs are updated twice per year.

Data was collected by tracking how many minutes it took a trained graduate-level student and a BU CTSI FTE evaluator to complete both the traditional evaluation method and the TrackImpact process, collecting all metric data per program participant and aggregating by program cohort if applicable for one overall yearly update. The number of minutes needed to collect bibliometric data per participant was multiplied by the number of participants in each BU CTSI program and converted into the number of hours needed per program (total minutes divided by 60), and ultimately into the number of weeks per year needed for bibliometric evaluation under each method and by each worker type (total hours per program were summed and then divided by 40 to approximate a 40-hour-per-week FTE; this number was rounded to the nearest whole number to represent the approximate number of weeks needed per year).

To assess cost, the number of hours needed for evaluation was multiplied by the hourly wage of either a graduate-level student worker ($20 per hour) or an evaluator ($42 per hour) to determine the cost of one program update. The number of program updates needed per year varies by program; for programs updated every 6 months (twice per year), the per-update cost was multiplied by two to obtain the total yearly evaluation cost per program.
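The arithmetic described above can be summarized in a short sketch using the per-participant and per-cohort minutes reported in the Table 1 note. The example call uses the CDA program's participant and cohort counts under the traditional method with a graduate-level student worker; the exact Table 1 figures may differ slightly because of rounding and additional aggregation steps.

```python
"""Sketch of the time-and-cost conversion used in the analysis."""

def program_estimate(participants: int, cohorts: int, minutes_per_participant: float,
                     minutes_per_cohort: float, hourly_wage: float,
                     updates_per_year: int) -> dict:
    # Minutes -> hours per update -> yearly hours, cost, and FTE weeks.
    minutes = participants * minutes_per_participant + cohorts * minutes_per_cohort
    hours_per_update = minutes / 60
    yearly_hours = hours_per_update * updates_per_year
    return {
        "hours_per_update": round(hours_per_update, 1),
        "cost_per_update": round(hours_per_update * hourly_wage, 2),
        "yearly_cost": round(yearly_hours * hourly_wage, 2),
        "yearly_weeks_fte": round(yearly_hours / 40),   # 40 h/week FTE, rounded
    }

# Traditional method, graduate student, CDA program (72 participants, 15 cohorts,
# one update per year): roughly 40 hours and about $800 per yearly update.
print(program_estimate(72, 15, 33, 1.33, 20, 1))
```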

Results

Table 1 presents the results of the cost-effectiveness and benefits analysis, organized to compare traditional methods of evaluation versus the TrackImpact tool by BU CTSI program, number of hours and cost per program update, number of program updates per year, and cost per year per program. This information is provided for both a trained graduate-level student worker and an FTE evaluator collecting and assessing the information. The last rows of Table 1 present the total overall results, with bold font representing the cumulative costs and times for yearly bibliometric data collection and evaluation.

Table 1. Cost effectiveness analysis results comparing traditional methods versus the TrackImpact tool

Note. CDA = Career Development Award; PERC = Program for Early Research Career Development; KL2 = KL2 Career Development Award; TL1 = TL1 Predoctoral and Postdoctoral Fellowships in Regenerative Medicine; Mentor the Mentor = Mentor the Mentor Workshop; Mock Study = Mock National Institutes of Health (NIH) Study Section; Traditional methods = Graduate student workers take approximately 33 minutes per participant to collect metric data and 1.33 minutes for aggregating data by cohorts; Evaluators take approximately 25 minutes per participant to collect metric data and 1 minute for aggregating data by cohorts; TrackImpact tool = Graduate student workers take approximately 5 minutes per participant to collect metric data and 1.33 minutes for aggregating data by cohorts; Evaluators take approximately 3 minutes per participant to collect metric data and 1 minute for aggregating data by cohorts. Cost analysis based on hourly wages for a graduate student = $20 per hour versus an evaluator = $42 per hour; Total hours per week are approximated based on FTE at 40 hours per week and rounded to the nearest whole number.

To manually collect bibliometric data for each participant (traditional evaluation methods), it took on average 33 minutes for a trained graduate-level student worker and 25 minutes for a BU CTSI evaluator. This entailed manually searching for each participant in various external and internal university software systems (see Figure 1), reviewing data to remove duplicate information, and aggregating data within participants and by cohorts if applicable. To aggregate bibliometric data by participant cohort, it took on average 1.33 minutes per cohort for a trained graduate-level student worker and 1 minute per cohort for the BU CTSI evaluator. To use the digital TrackImpact tool, it took on average 5 minutes for a trained graduate-level student worker and 3 minutes for a BU CTSI evaluator. This entailed inputting participant information into TrackImpact; the tool then searched all available external and internal university software systems to collect data, reviewed and removed duplicated information, and aggregated bibliometric data by participant and cohort if applicable (see Figure 3). To specify summary reports in the TrackImpact tool (i.e., indicate what information should appear in the summary report by selecting information categories), it took on average 1.33 minutes per cohort for a trained graduate-level student worker and 1 minute per cohort for the BU CTSI evaluator (averages were calculated based on the number of participants and cohorts per program; see Table 1).

Table 1 presents the full summary of results, highlighting the cost and time savings benefits of TrackImpact (graduate-level student worker yearly cost of $1,636.48 and FTE of about 2 weeks per year; BU CTSI evaluator yearly cost of $2,067.10 and FTE of about 1 week per year) relative to traditional evaluation methods (graduate-level student worker yearly cost of $10,708.10 and FTE of about 13 weeks per year; BU CTSI evaluator yearly cost of $17,035.62 and FTE of about 10 weeks per year). It is important to note the one-time cost and time of graduate-level student training and on-boarding are not represented in the data as this can vary by student (based on experience and skill) and institution (based on pay rates and regulations). We based our analysis on a public health graduate-level student focused on data analytics who was on-boarded and trained on all bibliometric tools and analyses by an FTE evaluator and a BU Librarian.

Discussion

TrackImpact offers cost (yearly cost savings of approximately $9,072 when using a graduate-level student worker and $14,969 when using an evaluator) and time (yearly time savings of approximately 11 weeks FTE for a graduate-level student worker and 9 weeks FTE for an evaluator) savings compared to traditional evaluation methods, allowing evaluation teams to allocate the saved time and resources to other CTSA program priorities and institutional projects. The creation of this digital tool provides a centralized platform for tracking commonly reported impact and outcome metrics over time, saving significant time and resources and increasing capacity for other CTSA programs and evaluation priorities.

TrackImpact moves toward an automated process for collecting commonly reported publication and grant metrics, which could give teams a more seamless data collection process. There is an up-front time cost to collect all of the unique identifiers needed to accurately track investigators' names and affiliations over time, but once these identifiers are saved on the platform, this is only a one-time need.

There is also a potential benefit of future time savings when on-boarding new team members, given the shorter learning curve for understanding the tool compared with the traditional method of collecting data by hand. To implement similar tools in other CTSA programs, teams would first need to create a list of the tools to be connected and all investigators' unique identifiers within those tools, and then find an internal or external digital innovation lab or software engineer with the skills to create such a platform. This tool also provides opportunities for cross-CTSA program collaboration through the sharing of ideas and standardized collection methods across CTSA programs to examine the larger network impact.
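For teams planning such an inventory, a minimal sketch of what the source list and identifier mapping might look like is shown below; the sources listed are those named in this article, while the identifiers, keys, and formats are invented placeholders.

```python
"""Hypothetical inventory of connected sources and per-investigator identifiers."""
CONNECTED_SOURCES = ["Dimensions", "Overton", "NIH RePORTER", "BU Profiles", "WebCAMP"]

INVESTIGATOR_IDS = {
    "jdoe@bu.edu": {                            # internal key (institutional email)
        "dimensions_id": "ur.0123456789.12",    # placeholder format
        "overton_id": "overton-0001",           # placeholder
        "orcid": "0000-0000-0000-0000",         # optional, if collected
    },
}
```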

We acknowledge that there are barriers to implementing a digital tool like TrackImpact across the network. Such barriers could include a lack of resources to finance the creation of a new tool, a lack of organizational licenses to access certain bibliometric tools (e.g., Dimensions [9], Overton [13]), or incomplete buy-in from all members to create a centralized system for conducting such analyses. As with other web-based digital tools, there may also be small ongoing costs associated with periodic maintenance to ensure continued data quality and accuracy. We also recognize that there are potentially other barriers not yet foreseen, as we have not fully implemented the tool into practice at this time. However, this article provides a roadmap for CTSA programs wanting to implement similar tools in their settings and moves toward a more automated approach to collecting and analyzing these commonly reported metrics.

Conclusion

Our innovative digital tool enables evaluation teams to increase capacity within their staffing structures by reducing the time and cost evaluators spend measuring and tracking researchers' productivity and influence metrics over time. This approach of using digital tools to increase evaluation team capacity can be replicated in other CTSA program settings through the creation of their own NocoDB platform or by collaborating with internal or external software engineers. We hope to share future findings from the implementation and dissemination phases of TrackImpact.

Acknowledgments

The authors wish to acknowledge Boston University Software & Application Innovation Lab (BU SAIL) team members Collin Bolles and William Tomlinson for their assistance with the design and creation of the TrackImpact tool, along with the creation of figures and review of the manuscript sections focused on the TrackImpact tool.

Author contributions

Kayla J. Kuhfeldt: Conceptualization, Data curation, Formal analysis, Methodology, Resources, Writing-original draft, Writing-review & editing; Kim C. Brimhall: Conceptualization, Data curation, Formal analysis, Methodology, Resources, Writing-original draft, Writing-review & editing.

Funding statement

This project and publication were supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through BU-CTSI Grant Number 1UL1TR001430. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

Competing interests

The author(s) declare none.

References

Hoyo, V, Nehl, E, Dozier, A, et al. A landscape assessment of CTSA evaluators and their work in the CTSA consortium, 2021 survey findings. J Clin Transl Sci. 2024;8:e79. doi: 10.1017/CTS.2024.526.
National Center for Advancing Translational Sciences. NCATS Overview. 2025. (https://ncats.nih.gov/about/ncats-overview) Accessed March 25, 2025.
National Institutes of Health. Expired PAR-21-293: Clinical and Translational Science Award (UM1 Clinical Trial Optional). 2021. (https://grants.nih.gov/grants/guide/pa-files/PAR-21-293.html) Accessed March 25, 2025.
National Institutes of Health. PAR-24-272: Clinical and Translational Science Award (UM1 Clinical Trial Optional). 2024. (https://grants.nih.gov/grants/guide/pa-files/PAR-24-272.html) Accessed March 25, 2025.
Patel, T, Rainwater, J, Trochim, WM, et al. Opportunities for strengthening CTSA evaluation. J Clin Transl Sci. 2019;3:59–64. doi: 10.1017/CTS.2019.387.
Flight Tracker – Edge for Scholars at Vanderbilt. (https://edgeforscholars.vumc.org/additional-resources/flight-tracker/) Accessed June 23, 2025.
bibliometrix – Home. (https://www.bibliometrix.org/home/) Accessed June 23, 2025.
CiteSpace Home. (https://citespace.podia.com/) Accessed June 23, 2025.
Dimensions AI. (https://www.dimensions.ai/) Accessed June 23, 2025.
Welcome to Scopus. (https://www.scopus.com/home.uri) Accessed June 23, 2025.
ORCID. (https://orcid.org/) Accessed June 11, 2025.
SciVal. (https://www.scival.com/home) Accessed June 25, 2025.
Overton. Welcome to Overton. (https://www.overton.io/) Accessed February 6, 2025.
Web of Science. Clarivate. (https://www.webofscience.com/)
Boston University Profiles. (https://profiles.bu.edu/search/) Accessed June 25, 2025.
National Institutes of Health. RePORTER. (https://reporter.nih.gov/) Accessed June 25, 2025.
Weill Cornell Medicine Clinical & Translational Science Center. WebCAMP. (https://ctscweb.weill.cornell.edu/research-resources/clinical-translational-research-informatics/webcamp) Accessed June 25, 2025.
How It Works? A Quick Overview. NocoDB. 2025. (https://nocodb.com/#How-it-works) Accessed March 25, 2025.
Welcome. NocoDB. 2025. (https://docs.nocodb.com/) Accessed March 25, 2025.
Brent, RJ. Cost-Benefit Analysis and Health Care Evaluations. Edward Elgar Publishing; 2004. (https://books.google.com/books/about/Cost_benefit_Analysis_and_Health_Care_Ev.html?id=H2P7cg7UGKAC) Accessed March 26, 2025.