In restricted statistical models, the first derivatives of the likelihood displacement are often nonzero, so the commonly adopted formulation for local influence analysis is not appropriate. However, there are two kinds of model restrictions under which the first derivatives of the likelihood displacement remain zero. General formulas for assessing local influence under these restrictions are derived and applied to factor analysis, because the restriction commonly used in factor analysis satisfies the conditions. Various influence schemes are introduced, and a comparison with the influence function approach is discussed. It is also shown that local influence for factor analysis is invariant to the scale of the data and independent of the rotation of the factor loadings.
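For readers new to this framework, the standard local influence quantities (following Cook, 1986; the notation here is generic rather than taken from the paper) are the likelihood displacement and its normal curvature:

\[
\mathrm{LD}(\omega) = 2\{\ell(\hat\theta) - \ell(\hat\theta_\omega)\},
\qquad
C_h = 2\,\bigl| h^\top \Delta^\top \ddot{L}^{-1} \Delta\, h \bigr|,
\]

where \(\hat\theta_\omega\) maximizes the perturbed log-likelihood \(\ell(\theta \mid \omega)\), \(\ddot{L}\) is the Hessian of \(\ell\) at \(\hat\theta\), and \(\Delta_{ij} = \partial^2 \ell(\theta \mid \omega)/\partial\theta_i\,\partial\omega_j\) evaluated at \((\hat\theta, \omega_0)\). The paper's point is that under model restrictions the first derivative of \(\mathrm{LD}\) need not vanish at \(\omega_0\), so this curvature-based recipe requires modification.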
This paper focuses on analyzing data collected in situations where investigators use multiple discrete indicators as surrogates, for example, a set of questionnaires. A very flexible latent class model is used for analysis. We propose a Bayesian framework to perform the joint estimation of the number of latent classes and model parameters. The proposed approach applies the reversible jump Markov chain Monte Carlo to analyze finite mixtures of multivariate multinomial distributions. In the paper, we also develop a procedure for the unique labeling of the classes. We have carried out a detailed sensitivity analysis for various hyperparameter specifications, which leads us to make standard default recommendations for the choice of priors. The usefulness of the proposed method is demonstrated through computer simulations and a study on subtypes of schizophrenia using the Positive and Negative Syndrome Scale (PANSS).
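A minimal sketch of the within-model sampler for a finite mixture of multinomial (latent class) distributions, with the number of classes K held fixed; the paper's RJMCMC additionally proposes trans-dimensional moves that add or delete classes, which are omitted here. All names and hyperparameter values below are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def gibbs_lca(X, K, n_iter=2000, a_pi=1.0, a_theta=1.0):
    """Gibbs sampler for a latent class model with K classes.
    X: (n, J) integer array of item responses coded 0..C-1."""
    n, J = X.shape
    C = int(X.max()) + 1
    z = rng.integers(K, size=n)                         # initial class labels
    for _ in range(n_iter):
        # class weights pi | z (Dirichlet-multinomial conjugacy)
        pi = rng.dirichlet(a_pi + np.bincount(z, minlength=K))
        # item-response probabilities theta | z, X
        theta = np.empty((K, J, C))
        for k in range(K):
            Xk = X[z == k]
            for j in range(J):
                counts = (np.bincount(Xk[:, j], minlength=C)
                          if len(Xk) else np.zeros(C))
                theta[k, j] = rng.dirichlet(a_theta + counts)
        # class labels z | pi, theta, X
        logp = np.tile(np.log(pi), (n, 1))              # (n, K)
        for j in range(J):
            logp += np.log(theta[:, j, X[:, j]]).T
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = (p.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
    return z, pi, theta

Label switching makes raw draws of pi and theta hard to summarize, which is why the paper's unique-labeling procedure matters; a common generic fix is to relabel each draw, for example by ordering classes on a chosen parameter.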
Because causal mechanisms unfold over time, it is important to investigate them over time, taking into account the time-varying features of treatments and mediators. However, identification of the average causal mediation effect in the presence of time-varying treatments and mediators is often complicated by time-varying confounding. This article provides a novel approach to uncovering causal mechanisms for time-varying treatments and mediators in the presence of time-varying confounding. We provide different strategies for identification and sensitivity analysis under homogeneous and heterogeneous effects: homogeneous effects are those in which each individual experiences the same effect, and heterogeneous effects are those that vary across individuals. Most importantly, we provide an alternative definition of average causal mediation effects that evaluates a partial mediation effect: the effect mediated by paths other than those through an intermediate confounding variable. We argue that this alternative definition allows us to assess at least part of the mediated effect and provides a meaningful and unique interpretation. A case study using ECLS-K data to evaluate kindergarten retention policy illustrates the proposed approach.
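For reference, in the single-time-point case the average causal mediation effect is conventionally defined through nested potential outcomes (standard notation, which the paper generalizes to the time-varying setting):

\[
\delta(t) = E\bigl[\,Y\bigl(t, M(1)\bigr) - Y\bigl(t, M(0)\bigr)\,\bigr], \qquad t \in \{0,1\},
\]

where \(Y(t, m)\) is the potential outcome under treatment \(t\) and mediator value \(m\), and \(M(t)\) is the potential mediator under treatment \(t\). The partial mediation effect described above restricts attention to mediator paths that do not pass through the intermediate confounder.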
Influence curves of some parameters under various methods of factor analysis have been given in the literature. These influence curves depend on the influence curves of either the covariance or the correlation matrix used in the analysis. This paper derives the differences between the influence curves based on the covariance matrix and those based on the correlation matrix. Although the influence curves themselves take complex forms, simple formulas for the differences between the two are obtained, under scale-invariant estimation methods, for the unique variance matrix, the factor loadings and some other parameters.
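For context, these results build on the influence curve (influence function) of a statistical functional \(T\) at a distribution \(F\), conventionally defined as

\[
\mathrm{IF}(x; T, F) = \lim_{\varepsilon \to 0}
\frac{T\bigl((1-\varepsilon)F + \varepsilon\,\delta_x\bigr) - T(F)}{\varepsilon},
\]

where \(\delta_x\) denotes a point mass at \(x\); the covariance- and correlation-based influence curves compared in the paper arise from taking \(T\) to be a functional of the covariance matrix or of the correlation matrix, respectively.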
This research concerns a mediation model in which the mediator model is linear and the outcome model is also linear but includes a treatment–mediator interaction term and a residual correlated with the residual of the mediator model. Assuming the treatment is randomly assigned, the parameters of this mediation model are shown to be partially identifiable. Under the normality assumption on the residuals of the mediator and the outcome, explicit full-information maximum likelihood estimates (FIMLE) of the model parameters are introduced, given the correlation between the two residuals. A consistent variance matrix of these estimates is derived. Currently, the coefficients of this mediation model are estimated by the iterative feasible generalized least squares (IFGLS) method, originally developed for seemingly unrelated regressions (SURs). We argue that this mediation model is not a system of SURs: while the IFGLS estimates are consistent, their variance matrix is not. Theoretical comparisons of the FIMLE and IFGLS variance matrices are conducted. Our results are demonstrated by simulation studies and an empirical study. The FIMLE method has been implemented in the freely available R package iMediate.
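To make the setup concrete, the sketch below writes out the two equations and a maximum likelihood fit for one fixed value of the residual correlation rho, which can then be traced over a grid as a sensitivity analysis. This is a minimal illustration using scipy, not the iMediate implementation; all symbols beyond those described in the abstract are assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def negloglik(params, T, M, Y, rho):
    """Joint likelihood of (M, Y) given T for the mediation model
    M = a0 + a*T + e1,  Y = b0 + c*T + b*M + d*T*M + e2,
    with (e1, e2) bivariate normal and corr(e1, e2) = rho fixed.
    The (M, Y) -> (e1, e2) map is unit-triangular, so its Jacobian is 1."""
    a0, a, b0, c, b, d, log_s1, log_s2 = params
    s1, s2 = np.exp(log_s1), np.exp(log_s2)
    e1 = M - (a0 + a * T)
    e2 = Y - (b0 + c * T + b * M + d * T * M)
    cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
    return -multivariate_normal([0.0, 0.0], cov).logpdf(
        np.column_stack([e1, e2])).sum()

def fit(T, M, Y, rho):
    res = minimize(negloglik, np.zeros(8), args=(T, M, Y, rho), method="BFGS")
    return res.x   # (a0, a, b0, c, b, d, log_s1, log_s2)

# sensitivity analysis: re-estimate over a grid of assumed rho values
# for rho in np.linspace(-0.9, 0.9, 7):
#     print(rho, fit(T, M, Y, rho)[:6])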
Many psychological concepts are unobserved and usually represented as latent factors measured through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variation by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, which can arise as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as the identification of modeling components that are sensitive to changes in the DP prior specification.
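As a concrete reference point, a DP prior is often implemented via its truncated stick-breaking construction, sketched below; the truncation level, concentration parameter and base measure are illustrative assumptions, not the paper's specification.

import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, truncation, base_sampler):
    """Draw weights and atoms from a truncated DP(alpha, G0)."""
    v = rng.beta(1.0, alpha, size=truncation)                  # stick fractions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # mixture weights
    w = w / w.sum()                          # renormalize mass lost to truncation
    atoms = base_sampler(truncation)                           # draws from G0
    return w, atoms

# e.g., a DP prior on an individual-specific AR(1) coefficient, G0 = N(0, 0.5^2)
w, atoms = stick_breaking(alpha=1.0, truncation=50,
                          base_sampler=lambda m: rng.normal(0.0, 0.5, m))
# each person's coefficient is then drawn as atoms[rng.choice(50, p=w)]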
Background
In patients with treatment-resistant depression (TRD), the ESCAPE-TRD study showed that esketamine nasal spray was superior to quetiapine extended release.
Aims
To determine the robustness of the ESCAPE-TRD results and confirm the superiority of esketamine nasal spray over quetiapine extended release.
Method
ESCAPE-TRD was a randomised, open-label, rater-blinded, active-controlled phase IIIb trial. Patients had TRD (i.e. non-response to two or more antidepressive treatments within a major depressive episode). Patients were randomised 1:1 to flexibly dosed esketamine nasal spray or quetiapine extended release, while continuing an ongoing selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor. The primary end-point was achieving a Montgomery–Åsberg Depression Rating Scale score of ≤10 at Week 8; the key secondary end-point was remaining relapse-free through Week 32 after achieving remission at Week 8. Sensitivity analyses were performed on these end-points by varying the definition of remission based on timepoint, threshold and scale.
Results
Of 676 patients, 336 were randomised to esketamine nasal spray and 340 to quetiapine extended release. All sensitivity analyses on the primary and key secondary end-points favoured esketamine nasal spray over quetiapine extended release, with relative risks ranging from 1.462 to 1.737 and from 1.417 to 1.838, respectively (all p < 0.05). Treatment with esketamine nasal spray shortened the time to first remission and to confirmed remission (hazard ratios: 1.711 [95% confidence interval 1.402, 2.087], p < 0.001; and 1.658 [1.337, 2.055], p < 0.001, respectively).
Conclusion
Esketamine nasal spray consistently demonstrated significant superiority over quetiapine extended release using all pre-specified definitions for remission and relapse. Sensitivity analyses supported the conclusions of the primary ESCAPE-TRD analysis and demonstrated robustness of the results.
This Element works as a non-technical overview of Agent-Based Modelling (ABM), a methodology that can be applied to economics as well as to fields of the natural and social sciences. It presents the introductory notions and historical background of ABM, together with a general overview of the tools and characteristics of this kind of model, with particular focus on more advanced topics such as validation and sensitivity analysis. Agent-based simulation is an increasingly popular methodology that fits well with the purpose of studying problems of computational complexity in systems populated by heterogeneous interacting agents.
Carefully designing blade geometric parameters is necessary because they determine the aerodynamic performance of a rotor. However, manufacturing inaccuracies cause the blade geometric parameters to deviate randomly from the ideal design. It is therefore essential to quantify the uncertainty and analyse the sensitivity of the compressor performance to blade geometric deviations. This work considers a subsonic compressor rotor stage and examines samples with different geometry features using three-dimensional Reynolds-averaged Navier–Stokes simulations. A method combining a Halton sequence with non-intrusive polynomial chaos is adopted to perform the uncertainty quantification (UQ) analysis. The Sobol' index and the Spearman correlation coefficient are used to analyse, respectively, the sensitivity and the correlation between the compressor performance and the blade geometric deviations. The results show that the fluctuation amplitude of the compressor performance decreases at lower mass flow rates, and that the sensitivity of the compressor performance to the blade geometric parameters varies with the working conditions. The effects of the various blade geometric deviations on the compressor performance are independent and linearly superimposed, and the combined effects of different geometric deviations on the compressor performance are small.
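The two post-processing tools named above can be illustrated with a short sketch: a Monte Carlo pick-freeze estimator of first-order Sobol' indices (shown here in place of the paper's polynomial-chaos surrogate) and the Spearman rank correlation from scipy. The test function and sample sizes are illustrative assumptions.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def first_order_sobol(f, d, n=100_000):
    """Saltelli-style pick-freeze estimator of first-order Sobol' indices
    for a scalar model f evaluated on [0, 1]^d."""
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # freeze all inputs except the i-th
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# stand-in "performance" response of three geometric deviations
f = lambda X: X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]
print(first_order_sobol(f, d=3))      # relative importance of each deviation

# monotone association between one deviation and the response
X = rng.random((1000, 3))
rho, pval = spearmanr(X[:, 0], f(X))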
Open rotors can play a critical role in the transition towards more sustainable aviation by providing a fuel-efficient alternative. This paper considers the sensitivity of an open-rotor engine to variations in three operational parameters during take-off, focusing on both aerodynamics and aeroacoustics. Via a sensitivity analysis, insights into the complex interactions of aerodynamics and aeroacoustics can be gained. Numerical methods have been implemented for both the aerodynamics and the aeroacoustics of the engine: the flowfield has been solved using the unsteady Reynolds-averaged Navier–Stokes equations, and the acoustic footprint of the engine has been quantified through the Ffowcs Williams–Hawkings equation. The analysis concluded that the aerodynamic performance of the open rotor can be decisively impacted by small variations in the operational parameters. Specifically, blade loading increased by 9.8% for a 5% decrease in inlet total temperature, with the uncertainty being amplified through the engine. In comparison, the aeroacoustic footprint of the engine showed more moderate variations, with the overall sound pressure level increasing by up to 2.4 dB for a microphone lying on the engine axis, aft of the inlet. The results signify that the model exhibits considerable sensitivity, which should be systematically examined during the design or optimisation process.
This chapter applies the total error framework presented in Chapter 5 to a case example of preelection polling during the 2016 US presidential election. Here, the focus is on problems with a single poll.
The United States Congress passed the 21st Century Cures Act, mandating the development of Food and Drug Administration guidance on the regulatory use of real-world evidence. The Forum on the Integration of Observational and Randomized Data conducted a meeting with various stakeholder groups to build consensus around best practices for the use of real-world data (RWD) to support regulatory science. Our companion paper describes the context and discussion of the meeting in detail, including a recommendation to use a causal roadmap for study designs using RWD. This article discusses one step of the roadmap: the specification of a sensitivity analysis for testing robustness to violations of causal model assumptions.
Methods:
We present an example of a sensitivity analysis from an RWD study on the effectiveness of nifurtimox in treating Chagas disease, and an overview of various methods, emphasizing practical considerations for their use for regulatory purposes.
Results:
Sensitivity analyses must be accompanied by careful design of the other aspects of the causal roadmap. Their prespecification is crucial to avoid wrong conclusions due to researcher degrees of freedom. Sensitivity analysis methods require auxiliary information to produce meaningful conclusions, and it is important that they have at least two properties: the validity of their conclusions should not rely on unverifiable assumptions, and the auxiliary information required by the method should be learnable from the corpus of current scientific knowledge.
Conclusions:
Prespecified and assumption-lean sensitivity analyses are a crucial tool that can strengthen the validity and trustworthiness of effectiveness conclusions for regulatory science.
The curse of dimensionality confounds the comprehensive evaluation of computational structural mechanics problems. Adequately capturing complex material behavior and interacting physical phenomena in models can lead to long run times and large memory requirements, so substantial computational resources may be needed to analyze a single scenario for a single set of input parameters. These requirements are compounded by the number and range of input parameters, spanning material properties, loading, boundary conditions, and model geometry, that must be evaluated to characterize behavior, identify dominant parameters, perform uncertainty quantification, and optimize performance. To reduce model dimensionality, global sensitivity analysis (GSA) enables the identification of the input parameters that dominate a specific structural performance output. However, many distinct types of GSA methods are available, presenting a challenge when selecting the optimal approach for a specific problem. While substantial documentation in the literature details the methodology and derivation of GSA methods, application-based case studies focus on fields such as finance, chemistry, and environmental science. To inform the selection and implementation of a GSA method for structural mechanics problems by a nonexpert user, this article investigates five of the most widespread GSA methods on commonly used structural mechanics methods and models of varying dimensionality and complexity. It is concluded that all methods can identify the most dominant parameters, although with significantly different computational costs and quantitative capabilities. Method selection therefore depends on the available computational resources, the information required from the GSA, and the available data.
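As one concrete example of a low-cost screening method from this family, the sketch below implements a radial (one-at-a-time) variant of Morris elementary-effects screening; the response function, step size and replication count are illustrative assumptions rather than the article's case studies.

import numpy as np

rng = np.random.default_rng(3)

def morris_radial(f, d, r=50, delta=0.25):
    """Radial elementary-effects screening on [0, 1]^d.
    mu_star ranks importance; sigma flags nonlinearity/interaction."""
    EE = np.empty((r, d))
    for t in range(r):
        x = rng.random(d) * (1.0 - delta)   # keep x + delta inside [0, 1]
        y0 = f(x)
        for i in range(d):
            x_step = x.copy()
            x_step[i] += delta              # perturb one input at a time
            EE[t, i] = (f(x_step) - y0) / delta
    return np.abs(EE).mean(axis=0), EE.std(axis=0)

# stand-in structural response of four inputs (e.g., modulus, load, thickness, span)
f = lambda x: x[0]**2 + 3.0 * x[1] + 0.1 * x[2] * x[3]
mu_star, sigma = morris_radial(f, d=4)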
Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.
Causal inference from observational data is notoriously difficult and relies upon many unverifiable assumptions, including the absence of unmeasured confounding and selection bias. Here, we demonstrate how to apply a range of sensitivity analyses to examine whether a causal interpretation of observational data may be justified. These methods include testing different confounding structures (as the assumed confounding model may be incorrect), exploring potential residual confounding, and assessing the impact of selection bias due to missing data. As a motivating example of how these methods can be applied, we aim to answer the causal question ‘Does religiosity promote cooperative behaviour?’. We use data from the parental generation of a large-scale (n ≈ 14,000) prospective UK birth cohort (the Avon Longitudinal Study of Parents and Children), which has detailed information on religiosity and potential confounding variables; cooperation was measured via self-reported history of blood donation. In this study, there was no association between religious belief or affiliation and blood donation. Religious attendance was positively associated with blood donation, but this could plausibly be explained by unmeasured confounding. In this population, evidence that religiosity causes blood donation is suggestive, but rather weak. These analyses illustrate how sensitivity analyses can aid causal inference from observational research.
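One widely used device for the residual-confounding step is the E-value of VanderWeele and Ding (2017); whether this exact measure was used here is an assumption on our part, but it illustrates the genre. The sketch below computes it for a risk ratio, with the numeric input purely hypothetical.

import math

def e_value(rr):
    """E-value: the minimum strength of association (on the risk-ratio scale)
    an unmeasured confounder would need with both exposure and outcome to
    fully explain away an observed risk ratio."""
    rr = max(rr, 1.0 / rr)               # orient so that rr >= 1
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(1.3))   # hypothetical RR of 1.3 -> E-value of about 1.92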
Survey weighting allows researchers to account for bias in survey samples due to unit nonresponse or convenience sampling, using measured demographic covariates. Unfortunately, in practice it is impossible to know whether the estimated survey weights are sufficient to alleviate concerns about bias due to unobserved confounders or to incorrect functional forms used in weighting. In this paper, we propose two sensitivity analyses for the exclusion of important covariates: (1) a sensitivity analysis for partially observed confounders (i.e., variables measured in the survey sample but not in the target population) and (2) a sensitivity analysis for fully unobserved confounders (i.e., variables measured in neither the survey nor the target population). We provide graphical and numerical summaries of the potential bias that arises from such confounders, and we introduce a benchmarking approach that allows researchers to reason quantitatively about the sensitivity of their results. We demonstrate our proposed sensitivity analyses using state-level polls from the 2020 U.S. Presidential Election.
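A stylized sketch of the benchmarking logic (assumed details only): drop a known covariate from a post-stratification weighting model and use the resulting shift in the estimate as a yardstick for an unobserved confounder "as strong as" that covariate. This illustrates the idea rather than reproducing the authors' estimators.

import numpy as np

rng = np.random.default_rng(4)

def poststrat_weights(groups, pop_share):
    """Post-stratification: weight = population share / sample share per group."""
    sample_share = np.bincount(groups) / len(groups)
    return pop_share[groups] / sample_share[groups]

# simulate a survey where one covariate drives both response and outcome
n = 5_000
g = rng.integers(0, 2, size=n)                     # binary covariate
y = 0.4 + 0.2 * g + rng.normal(0.0, 0.1, n)        # outcome
responded = rng.random(n) < np.where(g == 1, 0.8, 0.3)
g_s, y_s = g[responded], y[responded]

w = poststrat_weights(g_s, np.array([0.5, 0.5]))   # known population margins
unweighted = y_s.mean()
weighted = np.average(y_s, weights=w)
# |weighted - unweighted| benchmarks the bias an omitted confounder
# comparable in strength to g could induce
print(unweighted, weighted, abs(weighted - unweighted))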
The Welfare Quality® (WQ) protocol for on-farm dairy cattle welfare assessment describes 33 measures and a step-wise method for integrating the outcomes into 12 criteria scores, which are grouped into four principle scores and into an overall welfare categorisation with four possible levels. The relative contribution of the various welfare measures to the integrated scores has been contested. Using a European dataset (491 herds), we investigated: i) variation in the sensitivity of integrated outcomes to extremely low and high values of measures, criteria and principles, by replacing each actual value with the minimum and maximum observed and theoretically possible values; and ii) the reasons for this variation in sensitivity. As intended by the WQ consortium, the sensitivity of the integrated scores depends on: i) the observed value of the specific measures/criteria; ii) whether the change was positive or negative; and iii) the relative weight attributed to the measures. Additionally, two unintended factors of considerable influence appear to be side-effects of the complexity of the integration method, namely: i) the number of measures integrated into criteria and principle scores; and ii) the aggregation method of the measures. As a result, resource-based measures related to drinkers (which have been criticised with respect to their validity for assessing the absence of prolonged thirst) have a much larger influence on the integrated scores than health-related measures such as ‘mortality rate’ and ‘lameness score’. Hence, the integration method of the WQ protocol for dairy cattle should be revised to ensure that the relative contribution of the various welfare measures to the integrated scores more accurately reflects their relevance for dairy cattle welfare.
The theoretical background on sensitivity analysis, especially the deterministic approach, is described, along with definitions of the forward sensitivity coefficient, the adjoint sensitivity coefficient and the relative sensitivity coefficient, together with examples of their practical applications. Concepts, strategies and applications of adaptive (targeted) observations are discussed, using adjoint sensitivity analysis, singular vectors, the ensemble transform Kalman filter and conditional nonlinear optimal perturbations. Forecast sensitivity to observations is also discussed as a tool for assessing the impact of observations. In addition, various targeting field programs are introduced.
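For reference, the three coefficients are conventionally defined as follows (standard definitions; the chapter's notation may differ). For model outputs \(y_i\), inputs or initial-state components \(x_j\), and a scalar forecast aspect \(J\):

\[
S_{ij} = \frac{\partial y_i}{\partial x_j} \;\text{(forward)},
\qquad
\lambda_j = \frac{\partial J}{\partial x_{0j}} \;\text{(adjoint, obtained by integrating the adjoint model backward in time)},
\qquad
S^{\mathrm{rel}}_{ij} = \frac{x_j}{y_i}\,\frac{\partial y_i}{\partial x_j}.
\]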
This chapter illustrates how to apply explicit Bayesian analysis to scrutinize qualitative research, pinpoint sources of disagreement on inferences, and facilitate consensus-building discussions among scholars, highlighting examples of intuitive Bayesian reasoning as well as departures from Bayesian principles in published research.
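The identity at the heart of such explicit Bayesian analysis is the posterior-odds form of Bayes' rule for two rival hypotheses \(H_1\) and \(H_2\) given evidence \(E\):

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)}
= \frac{P(H_1)}{P(H_2)} \times \frac{P(E \mid H_1)}{P(E \mid H_2)},
\]

so that disagreements among scholars can be located either in the prior odds or in the assessed likelihood ratio of the evidence under the rival hypotheses.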