Extrapolation is often required to inform cost-effectiveness (CE) evaluations of immune-checkpoint inhibitors (ICIs) since survival data from pivotal clinical trials are seldom complete. The objectives of this study were to evaluate the accuracy of estimates of long-term overall survival (OS) predicted in French CE assessment reports of ICIs, and to identify models presenting the best fit to the observed long-term survival data.
Methods
A systematic review of French assessment reports of ICIs in the metastatic setting, from inception until May 2020, was performed. A targeted literature review was conducted to collect the associated extended follow-up of the randomized controlled trials (RCTs) used in the CE assessment reports. The difference between projected and observed OS was calculated. A range of standard parametric and spline-based models were applied to the extended follow-up data from the RCTs to determine the best-fitting survival models.
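The headline metric above (relative difference between projected and observed OS) can be sketched as follows; the survival values are hypothetical illustrations, not figures from the study:

```python
def relative_difference(projected_os, observed_os):
    """Relative difference between projected and observed overall survival.

    A negative value means the extrapolation underestimated survival.
    """
    return (projected_os - observed_os) / observed_os

# Hypothetical example: a report projected a mean OS of 14.0 months,
# while extended follow-up observed 16.1 months.
print(round(relative_difference(14.0, 16.1), 3))  # -0.13, i.e. OS underestimated by 13%
```

A mean of such differences across the included reports yields the summary statistic reported in the Results section.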
Results
Of the 121 CE assessment reports published, 11 met the inclusion criteria. OS was underestimated in 73 percent of the CE assessment reports. The mean relative difference between projected and observed OS was −13 percent (median: −15 percent; IQR: −0.4 to 26 percent). The models providing the best fit were those able to reflect nonmonotonic hazards.
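The distinction between monotonic and nonmonotonic hazards can be illustrated with two standard parametric forms: the Weibull hazard is always monotonic, whereas the log-logistic hazard (for shape > 1) rises to a peak and then declines. The parameter values below are arbitrary, chosen only to show the shapes:

```python
def weibull_hazard(t, shape, scale):
    # Weibull hazard: monotonically increasing if shape > 1,
    # decreasing if shape < 1, constant if shape == 1.
    return (shape / scale) * (t / scale) ** (shape - 1)

def loglogistic_hazard(t, shape, scale):
    # Log-logistic hazard: for shape > 1 it rises and then falls
    # (nonmonotonic), a pattern often seen with immune-checkpoint inhibitors.
    x = (t / scale) ** shape
    return (shape / t) * x / (1 + x)

times = [0.5, 1, 2, 4, 8]
# Hazard rises to a peak and then declines (nonmonotonic):
print([round(loglogistic_hazard(t, shape=2.0, scale=2.0), 3) for t in times])
# Hazard increases throughout (monotonic):
print([round(weibull_hazard(t, shape=2.0, scale=2.0), 3) for t in times])
```

A model restricted to monotonic hazards (e.g., Weibull, Gompertz) cannot reproduce this rise-then-fall pattern, which is one reason the flexible models fit the extended follow-up data better.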
Conclusions
Based on the data available at the time of submission, the longer-term survival benefit of ICIs was not fully captured by the extrapolation models used in CE assessments. Standard and flexible parametric models that can capture nonmonotonic hazard functions provided the best fit to the extended follow-up data. However, even these models may have performed poorly if fitted only to the survival data available at the time of submission to the French National Authority for Health.
Economic models play a central role in the decision-making process of the National Institute for Health and Care Excellence (NICE). Inadequate validation methods allow for errors to be included in economic models. These errors may alter the final recommendations and have a significant impact on outcomes for stakeholders.
Objective
To describe the patterns of technical errors found in NICE submissions and to provide an insight into the validation exercises carried out by the companies prior to submission.
Methods
All forty-one single technology appraisals (STAs) completed by NICE in 2017, all of which concerned medicines, were reviewed. The frequency of errors, along with information on their type, magnitude, and impact, was extracted from publicly available NICE documentation, together with details of the model validation methods used.
Results
Two STAs (5 percent) had no reported errors, nineteen (46 percent) had between one and four errors, sixteen (39 percent) had between five and nine errors, and four (10 percent) had more than ten errors. The most common error types were transcription errors (29 percent), logic errors (29 percent), and computational errors (25 percent). All STAs went through at least one type of validation. Moreover, in eight (20 percent) of the STAs assessed, errors were considered notable enough to be reported in the final appraisal document (FAD), yet each of these eight STAs received a positive recommendation.
Conclusions
Technical errors are common in the economic models submitted to NICE. Some errors were considered important enough to be reported in the FAD. Improvements are needed in the model development process to ensure technical errors are kept to a minimum.