The collection of repeated measures is one of the most common data collection formats employed in survey and experimental research in psychology. The behavioral decision theory literature documents the dynamic evolution of preferences that occurs over time and with experience, due to learning, exposure to additional information, fatigue, cognitive storage limitations, and so on. We introduce a Bayesian dynamic linear methodology, employing an empirical Bayes estimation framework, that permits the detection and modeling of such potential changes in the underlying preference utility structure of the respondent. An illustration of stated preference analysis (i.e., conjoint analysis) is given involving students’ preferences for apartments and their underlying attributes and features. We also present the results of several simulations demonstrating the ability of the proposed procedure to recover a variety of different sources of dynamics that may surface with preference elicitation over repeated sequential measurement. Finally, directions for future research are discussed.
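As a concrete reference point for the dynamic linear methodology above, the state-space form it builds on can be sketched as a random-walk evolution of preference weights with Kalman-style updating. The following is a minimal generic sketch in Python (variable names and noise settings are illustrative, not the authors' specification):

```python
import numpy as np

def dlm_update(m, C, x, y, W, v):
    """One Kalman-style update of a dynamic linear model.

    State equation:       beta_t = beta_{t-1} + w_t,   w_t ~ N(0, W)
    Observation equation: y_t    = x_t' beta_t + e_t,  e_t ~ N(0, v)

    m, C : posterior mean and covariance of beta_{t-1}
    x    : attribute vector of the profile rated at occasion t
    y    : observed preference rating
    """
    R = C + W                       # prior covariance after the random-walk step
    f = x @ m                       # one-step-ahead forecast of the rating
    q = x @ R @ x + v               # forecast variance
    A = R @ x / q                   # adaptive gain
    m_new = m + A * (y - f)         # revised preference weights
    C_new = R - np.outer(A, A) * q  # revised uncertainty
    return m_new, C_new

# Illustrative run: 3 attributes, preference weights drifting over 20 ratings
rng = np.random.default_rng(0)
m, C = np.zeros(3), np.eye(3)
W, v = 0.01 * np.eye(3), 0.5
beta = np.array([1.0, -0.5, 0.3])
for t in range(20):
    x = rng.normal(size=3)
    beta = beta + rng.multivariate_normal(np.zeros(3), W)  # true drifting weights
    y = x @ beta + rng.normal(scale=np.sqrt(v))
    m, C = dlm_update(m, C, x, y, W, v)
```

Tracking the filtered weights across occasions is what allows changes in the underlying utility structure to be detected as the repeated measures accumulate.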
Techniques are developed for surrounding each of the points in a multidimensional scaling solution with a region which will contain the population point with some level of confidence. Bayesian credibility regions are also discussed. A general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints. This theorem is applied to a number of models to display asymptotic variance-covariance matrices for coordinate estimates under different rotational constraints. A technique is described for displaying Bayesian conditional credibility regions for any sample size.
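For orientation, the confidence regions above rest on standard large-sample theory for (constrained) maximum likelihood estimates; in generic form (not the paper's specific theorem),

$$\sqrt{n}\,(\hat{\theta}_n - \theta_0) \xrightarrow{d} N\big(0,\, \Sigma(\theta_0)\big),$$

so an approximate $1-\alpha$ region for a configuration point is the ellipsoid $\{\theta : n(\hat{\theta}_n-\theta)^{\top}\hat{\Sigma}^{-1}(\hat{\theta}_n-\theta) \le \chi^2_{p,1-\alpha}\}$. Under identifiability (e.g., rotational) constraints, $\Sigma(\theta_0)$ is not simply the inverse Fisher information but depends on the constraints imposed, which is why different rotational constraints yield different variance-covariance matrices for the coordinate estimates.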
In maximum likelihood estimation, the standard error of the location parameter of the three-parameter logistic model can be large, owing to inaccurate estimation of the lower asymptote. Thissen and Wainer, who demonstrated this effect, suggested that the introduction of a prior distribution for the lower asymptote might alleviate the problem. Here it is demonstrated in some detail that the standard error of the location parameter can be made acceptably small in this way.
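A minimal sketch of the idea, assuming a Beta prior on the lower asymptote (the prior values and data layout below are illustrative, not Thissen and Wainer's actual specification):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def p_3pl(theta, a, b, c):
    """Three-parameter logistic: probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def neg_log_posterior(params, theta, resp, c_prior=(5.0, 17.0)):
    """Negative log-posterior for one item.

    The Beta prior (values illustrative) shrinks the lower asymptote c
    toward a plausible guessing level, which is what keeps the standard
    error of the location parameter b under control.
    """
    a, b, c = params
    p = p_3pl(theta, a, b, c)
    loglik = np.sum(resp * np.log(p) + (1 - resp) * np.log(1.0 - p))
    logprior = beta_dist.logpdf(c, *c_prior)
    return -(loglik + logprior)
```

Minimizing this penalized likelihood (e.g., with scipy.optimize.minimize) instead of the raw likelihood keeps the estimate of c away from degenerate values, which is the mechanism behind the acceptably small standard errors reported above.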
We propose a two-way Bayesian vector spatial procedure incorporating dimension reparameterization with a variable selection option to determine the dimensionality and simultaneously identify the significant covariates that help interpret the derived dimensions in the joint space map. We discuss how we solve identifiability problems in a Bayesian context that are associated with the two-way vector spatial model, and demonstrate through a simulation study how our proposed model outperforms a popular benchmark model. In addition, an empirical application dealing with consumers’ ratings of large sport utility vehicles is presented to illustrate the proposed methodology. We are able to obtain interpretable and managerially insightful results from our proposed model with variable selection in comparison with the benchmark model.
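For reference, the two-way vector model underlying such joint space maps can be written in a generic form (notation illustrative, not necessarily the authors' exact parameterization) as

$$\Delta_{ij} = \sum_{r=1}^{R} x_{ir} a_{jr} + \varepsilon_{ij}, \qquad a_{jr} = \sum_{k=1}^{P} z_{jk}\gamma_{kr} + \delta_{jr},$$

where $\Delta_{ij}$ is consumer $i$'s rating of brand $j$, the $x_{ir}$ are consumer coordinates, and the $a_{jr}$ are brand vector termini in the $R$-dimensional joint space; the second equation reparameterizes the vector elements on covariates $z_{jk}$, so that variable selection amounts to deciding which regression weights $\gamma_{kr}$ are nonzero, which is what makes the derived dimensions interpretable.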
A taxonomy of latent structure assumptions (LSAs) for probability matrix decomposition (PMD) models is proposed which includes the original PMD model (Maris, De Boeck, & Van Mechelen, 1996) as well as a three-way extension of the multiple classification latent class model (Maris, 1999). It is shown that PMD models involving different LSAs are actually restricted latent class models with latent variables that depend on some external variables. For parameter estimation a combined approach is proposed that uses both a mode-finding algorithm (EM) and a sampling-based approach (Gibbs sampling). A simulation study is conducted to investigate the extent to which information criteria, specific model checks, and checks for global goodness of fit may help to specify the basic assumptions of the different PMD models. Finally, an application is described with models involving different latent structure assumptions for data on hostile behavior in frustrating situations.
A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov chain Monte Carlo algorithm that computes our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual-level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data, reanalyzing a well-known scanner panel dataset and revealing new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust across a variety of factor conditions.
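For orientation, the latent-utility representation that such MCMC schemes augment can be sketched as follows (a generative sketch only, not the authors' algorithm; dimensions and values are illustrative):

```python
import numpy as np

def simulate_mnp_choices(X, beta, Sigma, rng):
    """Multinomial probit via latent utilities:
    U_ij = x_ij' beta + e_i,  e_i ~ N(0, Sigma);
    the alternative with the highest latent utility is chosen."""
    n, J, p = X.shape
    errs = rng.multivariate_normal(np.zeros(J), Sigma, size=n)
    U = np.einsum('njp,p->nj', X, beta) + errs
    return U.argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3, 2))   # 200 occasions, 3 alternatives, 2 attributes
beta = np.array([0.8, -0.4])
Sigma = np.eye(3)                  # only identified up to location/scale in practice
choices = simulate_mnp_choices(X, beta, Sigma, rng)
```

In broad terms, parameter expansion addresses the awkward identification of Sigma by introducing a working scale parameter that is later marginalized out, which is what lets a Gibbs-type sampler mix efficiently.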
In this paper, we propose a Bayesian framework for estimating finite mixtures of the LISREL model. The basic idea in our analysis is to augment the observed data of the manifest variables with the latent variables and the allocation variables. The Gibbs sampler is implemented to obtain the Bayesian solution. Other associated statistical inferences, such as direct estimation of the latent variables, goodness-of-fit assessment for a posited model, Bayesian classification, and residual and outlier analyses, are also discussed. The methodology is illustrated with a simulation study and a real example.
This paper provides a statistical framework for estimating higher-order characteristics of the response time distribution, such as the scale (variability) and shape. Consideration of these higher-order characteristics often provides for more rigorous theory development in cognitive and perceptual psychology (e.g., Luce, 1986). The RT distribution for a single participant depends on certain participant characteristics, which in turn can be thought of as arising from a distribution of latent variables. The present work focuses on the three-parameter Weibull distribution, with parameters for shape, scale, and shift (initial value). Bayesian estimation in a hierarchical framework is conceptually straightforward. Parameter estimates, both for participant quantities and population parameters, are obtained through Markov chain Monte Carlo methods. The methods are illustrated with an application to response time data in an absolute identification task. The behavior of the Bayes estimates is compared to that of maximum likelihood (ML) estimates through Monte Carlo simulations. For small sample sizes, there is an occasional tendency for the ML estimates to be unreasonably extreme. In contrast, by borrowing strength across participants, Bayes estimation “shrinks” extreme estimates. The result is that the Bayes estimators are more accurate than the corresponding ML estimators.
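A minimal sketch of the participant-level building block, the three-parameter (shifted) Weibull log-likelihood (parameter names follow the abstract; the hierarchical layer over participants is omitted):

```python
import numpy as np

def shifted_weibull_loglik(rt, shape, scale, shift):
    """Log-likelihood of response times under a three-parameter Weibull:

    f(t) = (shape/scale) * ((t - shift)/scale)**(shape - 1)
           * exp(-((t - shift)/scale)**shape),   for t > shift.
    """
    z = (np.asarray(rt) - shift) / scale
    if np.any(z <= 0):
        return -np.inf   # the shift must lie below the fastest response
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)
```

In the hierarchical version, each participant's (shape, scale, shift) triple is treated as a draw from population-level distributions, and it is this pooling that produces the shrinkage of extreme small-sample estimates described above.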
The specification of the $\boldsymbol{Q}$ matrix in cognitive diagnosis models is important for correct classification of attribute profiles. Researchers have proposed many methods for estimation and validation of data-driven $\boldsymbol{Q}$ matrices. However, inference of the number of attributes in the general restricted latent class model remains an open question. We propose a Bayesian framework for general restricted latent class models and use the spike-and-slab prior to avoid the computational issues caused by the varying dimensions of model parameters associated with the number of attributes, K. We develop an efficient Metropolis-within-Gibbs algorithm to estimate K and the corresponding $\boldsymbol{Q}$ matrix simultaneously. The proposed algorithm uses the stick-breaking construction to mimic an Indian buffet process and employs a novel Metropolis–Hastings transition step to encourage exploration of the sample space associated with different values of K. We evaluate the performance of the proposed method through a simulation study under different model specifications and apply the method to a real data set from a fluid intelligence matrix reasoning test.
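A generic illustration of the spike-and-slab device (not the authors' Metropolis-within-Gibbs algorithm): by keeping a fixed maximum number of candidate attributes and letting the prior switch them on or off, the parameter dimension never changes even as the effective K varies.

```python
import numpy as np

def spike_and_slab_draw(K_max, incl_prob, slab_sd, rng):
    """Generic spike-and-slab prior draw.

    Each of K_max candidate attributes is active with probability
    incl_prob; active coefficients come from the slab N(0, slab_sd^2),
    inactive ones sit at the spike (exactly zero). The fixed K_max
    sidesteps dimension-changing moves when inferring the effective K.
    """
    active = rng.random(K_max) < incl_prob
    coefs = np.where(active, rng.normal(0.0, slab_sd, K_max), 0.0)
    return active, coefs

rng = np.random.default_rng(2)
active, coefs = spike_and_slab_draw(K_max=10, incl_prob=0.3, slab_sd=1.0, rng=rng)
K_effective = int(active.sum())   # number of attributes "on" in this draw
```

In a full sampler, the inclusion indicators would of course be updated from their conditional posteriors rather than drawn from the prior as here.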
The analysis of insurance and annuity products issued on multiple lives requires statistical models that account for lifetime dependence. This paper presents a Dirichlet process mixture-based approach that allows us to model dependent lifetimes within a group, such as married couples, accounting for individual as well as group-specific covariates. The model is analyzed in a fully Bayesian setting and illustrated by jointly modeling the lifetimes of male–female couples in a portfolio of joint and last survivor annuities of a Canadian life insurer. The inferential approach accounts for right censoring and left truncation, which are common features of survival data. The model shows improved in-sample and out-of-sample performance compared to traditional approaches that assume independent lifetimes, and it offers additional insights into the determinants of the dependence between lifetimes and their impact on joint and last survivor annuity prices.
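A minimal sketch of the Dirichlet process machinery involved, via the truncated stick-breaking construction (illustrative only; the paper's full survival model with censoring, truncation, and covariates is much richer):

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking construction of Dirichlet process weights:
    w_k = v_k * prod_{l<k} (1 - v_l),  with v_k ~ Beta(1, alpha)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

rng = np.random.default_rng(3)
weights = stick_breaking(alpha=2.0, n_atoms=25, rng=rng)
atoms = rng.gamma(2.0, 0.5, size=25)          # hypothetical couple-level frailties
couple_cluster = rng.choice(25, p=weights / weights.sum())
```

Couples assigned to the same mixture component share a frailty-like parameter, which is one simple way such a mixture induces dependence between the two lifetimes within a couple.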
Who joins extremist movements? Answering this question is beset by methodological challenges as survey techniques are infeasible and selective samples provide no counterfactual. Recruits can be assigned to contextual units, but this is vulnerable to problems of ecological inference. In this article, we elaborate a technique that combines survey and ecological approaches. The Bayesian hierarchical case–control design that we propose allows us to identify individual-level and contextual factors patterning the incidence of recruitment to extremism, while accounting for spatial autocorrelation, rare events, and contamination. We empirically validate our approach by matching a sample of Islamic State (ISIS) fighters from nine MENA countries with representative population surveys enumerated shortly before recruits joined the movement. High-status individuals in their early twenties with college education were more likely to join ISIS. There is more mixed evidence for relative deprivation. The accompanying extremeR package provides functionality for applied researchers to implement our approach.
Evolutionary studies of Dengue virus (DENV) in endemic regions are necessary because naturally occurring mutations may lead to genotypic variations or shifts in serotypes, which may lead to future outbreaks. Our study examines the evolutionary dynamics of DENV using phylogenetic, molecular clock, skyline plot, network, selection pressure, and entropy analyses based on partial CprM gene sequences. We collected 250 samples, 161 in 2017 and 89 in 2018. Details of the 2017 samples were published in our previous article, and those of 2018 are presented in this study. Further evolutionary analysis was carried out using 800 sequences, comprising the study sequences and global sequences from GenBank: DENV-1 (n = 240), DENV-3 (n = 374), and DENV-4 (n = 186), identified during 1944–2020, 1956–2020, and 1956–2021, respectively. Genotypes V, III, and I were identified as the predominant genotypes of the DENV-1, DENV-3, and DENV-4 serotypes, respectively. The rate of nucleotide substitution was highest in DENV-3 (7.90 × 10⁻⁴ substitutions/site/year), followed by DENV-4 (6.23 × 10⁻⁴) and DENV-1 (5.99 × 10⁻⁴). The Bayesian skyline plots of the Indian strains revealed dissimilar patterns among the population sizes of the three serotypes. Network analyses showed the presence of different clusters within the prevalent genotypes. The data presented in this study will assist in supplementing measures for vaccine development against DENV.
Various CHA scholars have contributed to causal process tracing and helped establish it as a principal alternative to VBA. A recent, Bayesian-informed version is particularly relevant to CHA because it shares the same historiographical sensibility of seeing knowledge evolve through a close dialogue between new findings and existing foreknowledge. It makes causal inferences conditional on a pre-testing articulation of testworthy hypotheses and juxtaposes new test results with older ones at the post-testing stage. Process tracing defines the testworthiness of hypotheses according to the number and diversity of empirical implications a theory has before testing starts. It also defines test strength in terms of the specificity of the hypotheses that are tested and of their uniqueness vis-à-vis the alternative explanations against which they are paired. The chapter introduces a new tool, the theory ledger, to help evaluate test strength and to update confidence in causal inferences.
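The Bayesian logic the chapter draws on can be made explicit with ordinary Bayes updating of confidence in a hypothesis $H$ given new evidence $E$ (a generic statement, not the chapter's notation):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}.$$

A strong test is one for which the likelihood ratio $P(E \mid H)/P(E \mid \lnot H)$ is far from 1: the evidence is expected if the hypothesis is true but unlikely under its rivals, which mirrors the chapter's emphasis on the specificity and uniqueness of the hypotheses being tested.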
Approximate Bayesian analysis is presented as a solution for complex computational models for which no explicit maximum likelihood estimation is possible. The activation-suppression race model (ASR), which does have a likelihood amenable to Markov chain Monte Carlo methods, is used to demonstrate the accuracy with which parameters can be estimated by the approximate Bayesian methods.
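A minimal rejection-ABC loop, with a toy one-parameter simulator standing in for the ASR model (everything below is illustrative):

```python
import numpy as np

def abc_rejection(observed, simulate, prior_draw, n_draws, tol, rng):
    """Rejection ABC: keep parameter draws whose simulated data produce
    a summary statistic within tol of the observed one."""
    obs_stat = observed.mean()               # toy summary statistic
    kept = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        sim = simulate(theta, observed.size, rng)
        if abs(sim.mean() - obs_stat) < tol:
            kept.append(theta)
    return np.array(kept)                    # approximate posterior sample

rng = np.random.default_rng(4)
observed = rng.normal(0.45, 0.1, size=200)   # stand-in "response times" (seconds)
posterior = abc_rejection(
    observed,
    simulate=lambda th, n, r: r.normal(th, 0.1, size=n),
    prior_draw=lambda r: r.uniform(0.2, 0.8),
    n_draws=20_000,
    tol=0.01,
    rng=rng,
)
```

Because the ASR likelihood is also tractable by MCMC, the same data can be fit exactly and the two posteriors compared; that comparison is what establishes the accuracy of the approximate method.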
Ageism has become a social problem in an aged society. This study re-examines an ageism-reduction strategy; the design and plans for this study were pre-registered. Participants were randomly assigned to either an experimental group (in which they read an explanatory text about stereotype embodiment theory and related empirical findings) or a control group (in which they read an irrelevant text). The hypothesis was that negative attitudes toward older adults would be reduced in the experimental group compared with the control group. Bayesian analysis was used for hypothesis testing. The results showed that negative attitudes toward older adults were reduced in the experimental group. These findings contribute to the development of psychological and gerontological interventions aimed at reducing ageism. In addition, continued efforts to reduce questionable research practices and to broaden the use of Bayesian analysis in psychological research are expected.
This paper compares the behavior of individuals playing a classic two-person deterministic prisoner’s dilemma (PD) game with choice data obtained from repeated interdependent security prisoner’s dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one’s counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain.
We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Birnbaum and Quispe-Torreblanca (2018) presented a frequentist analysis of a family of six True and Error (TE) models for the analysis of two choice problems presented twice to each participant. Lee (2018) performed a Bayesian analysis of the same models and found very similar parameter estimates and conclusions for the same data. He also discussed some potential differences between Bayesian and frequentist analyses and interpretations for model comparisons. This paper responds to certain points of possible controversy regarding model-selection methods that attempt to take into account the flexibility or complexity of a model. Reasons to question the use of Bayes factors to decide among models differing in fit and complexity are presented. The partially nested interrelations among the six TE models are represented in a Venn diagram. Another view of model complexity is presented in terms of the possible sets of data that could fit a model, rather than in terms of the possible sets of parameters that do or do not fit a given set of data. It is argued that less complex theories are not necessarily more likely to be true, and that when the space of all possible theories is not well defined, one should be cautious in interpreting calculated posterior probabilities that appear to prove a theory true.
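For reference, the contested quantity is the Bayes factor, which compares two models by their marginal likelihoods:

$$\mathrm{BF}_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)} = \frac{\int p(D \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, d\theta_1}{\int p(D \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, d\theta_2}.$$

Because the marginal likelihood averages fit over the entire prior, a flexible model that fits many possible data sets, most of them poorly, is automatically penalized; it is the appropriateness of exactly this complexity penalty for the partially nested TE models that the paper calls into question.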
Sixty-two ¹⁴C dates are analyzed in combination with a recently established local floating tree-ring sequence for the Early Neolithic site of La Draga (Banyoles, northeast Iberian Peninsula). Archaeological data, radiometric and dendrochronological dates, and sedimentary and micro-stratigraphical information are used to build a Bayesian chronological model using the ChronoModel 2.0 and OxCal 4.4 computer programs and the IntCal20 calibration curve. The dendrochronological sequence is analyzed and partially fixed to the calendrical scale using a wiggle-matching approach. Depositional events and the general stratigraphic sequence are expressed in expanded Harris Matrix diagrams and ordered in a temporal sequence using Allen algebra. Post-depositional processes affecting the stratigraphic sequence are related both to the phreatic water level and to the contemporaneous lakeshore. The most probable chronological model suggests two main Neolithic occupations, which can be divided into no fewer than three different “phases,” including the construction, use, and repair of the foundational wooden platforms, as well as evidence for later constructions after the reorganization of the ground surface using travertine slabs. The chronological model is discussed in light of both the current debate on climatic oscillations during the period 8000–4800 cal BC and the origins of the Early Neolithic in the western Mediterranean region.
Modelling loss reserve data in run-off triangles is challenging because of the complex but unknown dynamics of the claim/loss process. Popular loss reserve models describe the mean process through development year, accident year, and calendar year effects using analysis of variance and covariance (ANCOVA) models. We propose to include in the mean function persistence terms, in the spirit of the conditional autoregressive range model, to capture the persistence of claims across development years. In the ANCOVA model, we adopt linear trends for the accident and calendar year effects and a quadratic trend for the development year effect. We investigate linear and log-transformed mean functions and four distributions with positive support, namely generalised beta type 2, generalised gamma, Weibull, and exponential extension, to enhance model flexibility. The proposed models are implemented using the user-friendly Bayesian package Stan running in the R environment. Results show that the models with a log-transformed mean function and persistence terms provide better model fits. Lastly, the best model is applied to forecast the partial loss reserve and the calendar year reserve for three years.
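As a sketch of the kind of mean function described above (indices and coefficients illustrative, not the authors' exact specification), with accident year $i$, development year $j$, and calendar year $i+j$, a log-transformed mean with a persistence term might take the form

$$\log \mu_{ij} = \beta_0 + \beta_a\, i + \beta_c\,(i+j) + \beta_{d1}\, j + \beta_{d2}\, j^{2} + \phi \log y_{i,j-1},$$

where the linear terms carry the accident and calendar year trends, the quadratic term the development year effect, and $\phi$ the persistence of claims across development years; $\mu_{ij}$ then enters one of the four positive-support distributions as its location or mean parameter.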