An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative interindividual differences and autodependencies are accounted for by assuming within-subject variability with respect to the parameters of the IRT model. In particular, the parameters of the IRT model are governed by an unobserved or “hidden” homogeneous Markov process. The model includes the mixture linear logistic test model (Mislevy & Verhelst, 1990), the mixture Rasch model (Rost, 1990), and the Saltus model (Wilson, 1989) as specific instances. The model is applied to a longitudinal experiment on discontinuity in conservation acquisition (van der Maas, 1993).
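To fix ideas, here is a sketch of the model structure in our own notation, assuming Rasch-type item functions within hidden states (the paper allows more general IRT kernels):

```latex
% Person v with ability \theta_v answers item i at occasion t; the item
% parameters depend on a hidden state C_{vt} that follows a homogeneous
% Markov chain with initial probabilities \pi and transition matrix A:
P(X_{vit}=1 \mid \theta_v,\, C_{vt}=k)
  = \frac{\exp(\theta_v - \beta_{ik})}{1+\exp(\theta_v - \beta_{ik})},
\qquad
P(C_{v1}=k)=\pi_k,
\qquad
P(C_{v,t+1}=l \mid C_{vt}=k)=a_{kl}.
```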
Researchers in the field of network psychometrics often focus on the estimation of Gaussian graphical models (GGMs)—an undirected network model of partial correlations—between observed variables of cross-sectional data or single-subject time-series data. This assumes that all variables are measured without measurement error, which may be implausible. In addition, cross-sectional data cannot distinguish between within-subject and between-subject effects. This paper provides a general framework that extends GGM modeling with latent variables, including relationships over time. These relationships can be estimated from time-series data or panel data featuring at least three waves of measurement. The model takes the form of a graphical vector-autoregression model between latent variables and is termed the ts-lvgvar when estimated from time-series data and the panel-lvgvar when estimated from panel data. These methods have been implemented in the software package psychonetrics, which is exemplified in two empirical examples, one using time-series data and one using panel data, and evaluated in two large-scale simulation studies. The paper concludes with a discussion on ergodicity and generalizability. Although within-subject effects may in principle be separated from between-subject effects, the interpretation of these results rests on the intensity and the time interval of measurement and on the plausibility of the assumption of stationarity.
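As a language-neutral illustration of the data-generating process the ts-lvgvar targets (a VAR(1) among latent variables observed through noisy indicators), here is a minimal Python simulation; psychonetrics itself is an R package, and all dimensions and coefficients below are illustrative:

```python
import numpy as np

# Three latent variables follow a VAR(1); each is measured by three
# noisy indicators with simple-structure loadings.
rng = np.random.default_rng(1)
T, n_lat = 500, 3

B = np.array([[0.4, 0.2, 0.0],                 # temporal (lag-1) effects
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])
Lambda = np.kron(np.eye(n_lat), np.ones((3, 1)))  # factor loadings

eta = np.zeros((T, n_lat))
for t in range(1, T):                          # latent VAR(1)
    eta[t] = B @ eta[t - 1] + rng.normal(scale=0.5, size=n_lat)

y = eta @ Lambda.T + rng.normal(scale=0.3, size=(T, n_lat * 3))

# Naive analysis at the observed level: the precision-based partial
# correlations are attenuated and distorted by measurement error, which
# is what motivates modelling the measurement part explicitly.
K = np.linalg.inv(np.cov(y.T))
pcor = -K / np.sqrt(np.outer(np.diag(K), np.diag(K)))
print(np.round(pcor[:3, :3], 2))
```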
Stochastic actor-oriented models (SAOMs) can be used to analyse dynamic network data, collected by observing a network and a behaviour in a panel design. The parameters of SAOMs are usually estimated by the method of moments (MoM) implemented by a stochastic approximation algorithm, where statistics defining the moment conditions correspond in a natural way to the parameters. Here, we propose to apply the generalized method of moments (GMoM), using more statistics than parameters. We concentrate on statistics depending jointly on the network and the behaviour, because of the importance of their interdependence, and propose to add contemporaneous statistics to the usual cross-lagged statistics. We describe the stochastic algorithm developed to approximate the GMoM solution. A small simulation study supports the greater statistical efficiency of the GMoM estimator compared to the MoM.
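In outline, and in our notation rather than the paper's, the GMoM estimator solves a weighted minimum-distance problem with more statistics than parameters:

```latex
% With parameters \theta \in \mathbb{R}^q and statistics
% s(Y) \in \mathbb{R}^p, p > q, the estimator solves
\hat{\theta}
  = \arg\min_{\theta}
    \bigl(\mathbb{E}_{\theta}[s(Y)] - s(y_{\mathrm{obs}})\bigr)^{\top}
    W\,
    \bigl(\mathbb{E}_{\theta}[s(Y)] - s(y_{\mathrm{obs}})\bigr).
% Efficiency is greatest when W is the inverse covariance matrix of
% s(Y); both the expectation and this covariance must be approximated
% by simulating the SAOM, hence the stochastic algorithm.
```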
Changes in dichotomous data caused by treatments can be analyzed by means of the so-called linear logistic model with relaxed assumptions (LLRA). The LLRA does not require observable criteria representing a single underlying latent trait, but it postulates the generalizability of the treatment effects over criteria and subjects. To test this latter crucial assumption, the mixture LLRA was proposed, which allows directly unobservable types of subjects to have different treatment effects. As the earlier methods for estimating the parameters of the mixture LLRA have specific drawbacks, a further method based on the conditional maximum likelihood principle is presented here. In contrast to the earlier conditional methods, it uses all of the dichotomous change data while having fewer parameters. Further, its goodness-of-fit tests are more sensitive to a misspecified number of change types, even though the treatment-effect estimates are then biased. For the small to moderate sample sizes that typically occur, however, parametric bootstrapping of the distributions of the fit statistics is recommended for performing hypothesis tests. Finally, three applications of the new method to empirical data are described: first, the effect of so-called Trager psychophysical integration; second, the effect of autogenic therapy on patients with psychosomatic symptoms; and, third, the effect of religious education on attitudes towards sects.
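The parametric bootstrap logic is generic. The following toy Python sketch applies it to a Pearson-type fit statistic for dichotomous data under a deliberately simple fitted model (independent criteria), standing in for the mixture LLRA fit statistics; all data and numbers are illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# Toy dichotomous change data: 1 = change on a criterion, 0 = none.
data = rng.binomial(1, 0.35, size=(200, 4))           # 200 subjects, 4 criteria
patterns = np.array(list(product([0, 1], repeat=4)))  # the 16 response patterns

def pearson_x2(x):
    """Pearson X^2 comparing observed pattern counts with those expected
    under the fitted model (here: independent criteria with estimated
    marginals, a toy stand-in for a fitted mixture LLRA)."""
    n, p = len(x), x.mean(axis=0)
    expected = n * np.prod(np.where(patterns == 1, p, 1 - p), axis=1)
    observed = np.array([(x == pat).all(axis=1).sum() for pat in patterns])
    return np.sum((observed - expected) ** 2 / expected)

# Parametric bootstrap: simulate from the fitted model, refit, recompute,
# and use the empirical distribution instead of the asymptotic chi-square.
stat = pearson_x2(data)
boot = [pearson_x2(rng.binomial(1, data.mean(axis=0), size=data.shape))
        for _ in range(1000)]
p_value = np.mean(np.array(boot) >= stat)
print(round(p_value, 3))
```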
This paper specifies the panel data experimental design condition under which ordinary least squares, fixed effects, and random effects estimators yield identical estimates of treatment effects. This condition is relevant to the large body of laboratory experimental research that generates panel data. Although the point estimates and the true standard errors of the estimated average treatment effects are identical across the three estimators, the estimated standard errors differ. A standard F test, together with asymptotic reasoning, guides the choice of which estimated standard errors are appropriate for statistical inference.
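The equivalence can be illustrated concretely. In the balanced case sketched below (a design we choose for illustration; the paper states the general condition), each unit is treated in the same share of periods, so the between-unit variation in treatment vanishes and pooled OLS and the within estimator return the same point estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods, beta = 50, 4, 2.0

# Balanced within-unit assignment: every unit is treated in exactly half
# of its periods, so unit means of the treatment are equal across units
# and the between-unit variation in treatment is zero.
x = np.array([rng.permutation([0.0] * (n_periods // 2) + [1.0] * (n_periods // 2))
              for _ in range(n_units)])
alpha = rng.normal(size=(n_units, 1))            # unobserved unit effects
y = alpha + beta * x + rng.normal(size=x.shape)

# Pooled OLS slope (with intercept).
X = np.column_stack([np.ones(x.size), x.ravel()])
b_ols = np.linalg.lstsq(X, y.ravel(), rcond=None)[0][1]

# Fixed-effects (within) slope: demean within units.
xd = (x - x.mean(axis=1, keepdims=True)).ravel()
yd = (y - y.mean(axis=1, keepdims=True)).ravel()
b_fe = (xd @ yd) / (xd @ xd)

# With no between variation in x, the random-effects (GLS) estimator, a
# weighted combination of within and between estimators, coincides too.
print(round(b_ols, 10), round(b_fe, 10))         # identical point estimates
```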
A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we devise a Markov chain Monte Carlo algorithm that computes our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual-level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply the new procedure to consumer purchase data, reanalyzing a well-known scanner panel dataset and uncovering new substantive insights. In addition, we delineate a number of advantages of the proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from the proposed model are robust across a range of experimental conditions.
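In outline, and in our notation rather than the paper's, the multinomial probit structure and the role of parameter expansion are as follows:

```latex
% Household h chooses alternative j at occasion t when its latent
% utility is maximal:
U_{hjt} = x_{hjt}^{\top}\beta_h + \varepsilon_{hjt},
\qquad
\varepsilon_{ht} \sim N(0, \Sigma),
\qquad
y_{ht} = \arg\max_{j}\, U_{hjt}.
% Identification requires normalizing \Sigma (utilities are defined only
% up to location and scale); parameter expansion reintroduces the
% unidentified scale as a working parameter, which improves the mixing
% of the MCMC sampler.
```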
Spatial econometric models allow for interactions among cross-sectional units through spatial weight matrices. This paper parameterizes each spatial weight matrix in the widely used spatial Durbin model with its own distance decay parameter, rather than one common parameter, using negative exponential and inverse distance matrices. We propose a joint estimation approach for the decay and response parameters and investigate its performance in a Monte Carlo simulation experiment. We also present the results of an empirical application on military expenditures. Indirect effects in particular appear to be sensitive to the choice of parameterization.
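In our notation (the paper's symbols may differ), the specification is:

```latex
% Spatial Durbin model with matrix-specific decay parameters:
y = \rho\, W(\gamma_1)\, y + X\beta + W(\gamma_2)\, X\theta + \varepsilon,
% where the off-diagonal weights follow either a negative exponential,
% w_{ij}(\gamma) = \exp(-\gamma d_{ij}), or an inverse distance,
% w_{ij}(\gamma) = d_{ij}^{-\gamma}, specification (typically
% row-normalized), and \gamma_1, \gamma_2 are estimated jointly with
% \rho, \beta, \theta instead of being fixed at a common value.
```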
Building on recent papers, two distributions for the total claims amount (loss cost) are considered: compound Poisson-gamma and Tweedie. Each is used as the underlying distribution in a Bonus-Malus Scale (BMS) model. The BMS model links the premium of an insurance contract to a function of the insurance experience of the related policy; in other words, the idea is to model the increase and decrease in premiums for insureds who do or do not file claims. We apply our approach to a sample of data from a major insurance company in Canada and analyze data fit and predictability. We show that the studied models are attractive alternatives to consider from a practical point of view, and that predictive ratemaking models can address some important practical considerations.
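In outline, and in our notation, the two loss-cost distributions are:

```latex
% Compound Poisson-gamma: the total claims amount S is a Poisson number
% N of gamma-distributed severities,
S = \sum_{i=1}^{N} Y_i,
\qquad N \sim \mathrm{Poisson}(\lambda),
\qquad Y_i \overset{iid}{\sim} \mathrm{Gamma}(\alpha, \tau).
% The Tweedie model instead treats S as a member of the exponential
% dispersion family with variance function
\mathrm{Var}(S) = \phi\,\mu^{p}, \qquad 1 < p < 2,
% which on that range of p coincides with a compound Poisson-gamma
% distribution, including the point mass at S = 0 for claim-free policies.
```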
Do the rich become more or less supportive of redistribution when exposed to poor people in their local surroundings? Most existing observational studies find that exposure to poor individuals is positively associated with support for redistribution among the well-off, but one prominent field experiment found a negative link. We seek to resolve these divergent findings by employing a design closer to the studies that have found a positive link, but with more causal leverage than these; specifically, a three-wave panel survey linked with fine-grained registry data on local income composition in Denmark. In within-individual models, increased exposure to poor individuals is associated with lower support for redistribution among wealthy individuals. By contrast, between-individual models yield a positive relationship, thus indicating that self-selection based on stable individual characteristics likely explains the predominant finding in previous work.
Chapter 5 shows how the methods introduced in the preceding chapters can be used to gain novel substantive and theoretical insights. We show how RIO can be used to identify multiple storylines implied by a single regression model by examining cases (or sets of cases) that contribute to the regression model in otherwise unseen ways. We illustrate RIO’s substantive benefits through empirical analyses of (1) the effects of regional integration on inequality, (2) the social determinants of health, and (3) the correlates of dog ownership.
The purpose of this paper is to analyse the effects of natural resources on income inequality conditional on economic complexity in 111 developed and developing countries from 1995 to 2016. The system-GMM results show that economic complexity reverses the positive effects of natural resource dependence on income inequality. Furthermore, results are robust to the distinction between dependence on point resources (fossil fuels, ores, and metals), dependence on diffuse resources (agricultural raw material), and resource abundance. Finally, there are significant differences between countries, depending on the level of ethnic fragmentation and democracy.
Do negative economic shocks heighten public opposition to immigration, and through what mechanisms? Extant research suggests that economic circumstances and levels of labour market competition have little bearing on citizens' immigration attitudes. Yet personal economic shocks have the potential to trigger the threatened, anti-immigration responses – possibly through channels other than labour market competition – that prior cross-sectional research has been unable to detect. To examine these propositions, we used a unique panel study which tracked a large, population-based sample of Americans between 2007 and 2020. We found that adverse economic shocks, especially job losses, spurred opposition to unauthorized immigration. However, such effects are not concentrated among those most likely to face labour market competition from unauthorized immigrants. Instead, they are concentrated among white male Americans. This evidence suggests that the respondents' anti-immigration turn does not stem from economic concerns alone. Instead, personal experiences with the economy are refracted through salient socio-political lenses.
This chapter describes some of the issues to be considered when dealing with longitudinal data. Longitudinal data can be defined as data gathered on a set of units over multiple time periods. Longitudinal data can be collected either prospectively or retrospectively, and data can be either qualitative or quantitative. Different ways of deriving repeated observations generate the three main types of longitudinal design: repeated cross-sectional surveys, panel surveys, and retrospective surveys. The world of longitudinal research is thus very heterogeneous. This chapter provides both a summary of advantages and disadvantages of each longitudinal design and some guidelines for authors and researchers.
The purpose of this article is to assess the relationship between trade liberalisation in Tunisia and the employment intensity of sectoral output growth, in order to examine the claim that free trade creates jobs by stimulating growth. Using panel data for 15 Tunisian sectors over the period 1983–2010, we compare estimated sectoral output–employment elasticities before and after the Free Trade Agreement process with the European Union. The results provide evidence that trade liberalisation in Tunisia has increased the employment intensity of exporting manufacturing sectors such as textiles, clothing and leather, and mechanical and electrical industries. However, their ability to generate jobs in response to value-added growth remains weak. Conversely, since the Free Trade Agreement process, the most labour-intensive service sectors, notably tourism and miscellaneous services, have shown a significant decrease in the employment intensity of their output growth. Our findings suggest that the Free Trade Agreement with the European Union has not really fostered the shift of the Tunisian economy towards a more inclusive model, and they support the argument for a reorientation of investment policy in favour of sectors generating more job opportunities.
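For reference, the output–employment elasticity in a typical log-log panel specification (our notation; the paper's exact model may differ) is:

```latex
% The percentage change in sectoral employment E for a one percent
% change in sectoral value added Y, estimated as the slope of a
% log-log panel regression with sector effects:
\ln E_{it} = \alpha_i + \varepsilon \ln Y_{it} + u_{it},
\qquad
\varepsilon = \frac{\partial \ln E}{\partial \ln Y}.
```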
We investigate the impact of five types of subsidies granted under the European Union Common Agricultural Policy on the persistent and transient inefficiency of Polish dairy farms. Our research shows that coupled and environmental subsidies reduce transient technical inefficiency, while the opposite is true for Less Favoured Areas (LFA) and other rural subsidies. Simultaneously, environmental, LFA, and other rural subsidies increase persistent technical inefficiency. These results imply that the impact of each type of subsidy on technical efficiency can be different and that the effect of the particular type of subsidy can vary between transient and persistent technical inefficiency.
The June 2016 Brexit referendum sent international shock waves, possibly causing adjustments in public opinion not only in the UK, but also abroad. We suggest that these adjustments went beyond substantive attitudes on European integration and included procedural preferences towards direct democracy. Drawing on the insight that support for direct democracy can be instrumentally motivated, we argue that the outcome of the Brexit referendum led (politically informed) individuals to update their support for referendums based on their views towards European integration. Using panel data from Germany, we find that those in favour of European integration, especially those with high political involvement, turned more sceptical of the introduction of referendums in the aftermath of the Brexit referendum. Our study contributes to the understanding of preferences for direct democracy and documents a remarkable case of how – seemingly basic – procedural preferences can, in today's internationalized information environment, be shaped by high-profile events abroad.
The violent conclusion of Trump's 2017–21 presidency has produced sobering reassessments of American democracy. Elected officials' actions necessarily implicate public opinion, but to what extent did Trump's presidency and its anti-democratic efforts reflect shifts in public opinion in prior years? Were there attitudinal changes that served as early-warning signs? We answer those questions via a fifteen-wave, population-based panel spanning 2007 to 2020. Specifically, we track attitudes on system legitimacy and election fairness, assessments of Trump and other politicians, and open-ended explanations of vote choice and party perceptions. Across measures, there was little movement in public opinion foreshadowing Trump's norm-upending presidency, though levels of out-party animus were consistently high. Recent shifts in public opinion were thus not a primary engine of the Trump presidency's anti-democratic efforts or their violent culmination. Such stability suggests that understanding the precipitating causes of those efforts requires attention to other actors, including activists and elites.
Chapter 3 introduces our approach to measuring the transparency of deliberations in state legislatures. We discuss our coding strategies and provide descriptive information about our temporal data on the adoption of open deliberation laws and exemptions to those laws. This summary of the data provides important context, including general patterns in the timing and geography of the transparency movement and its recent decline. Importantly, the chapter includes a discussion on enforcement of these laws, demonstrating empirically that they are not written as token gestures toward accountability. They are intended to provide meaningful, powerful mechanisms to keep legislative deliberation public. Finally, we develop event history models of transparency adoption and exemption across the states to better understand the systematic factors associated with the decisions to open or close legislative meetings. These models generalize the historical patterns we uncover in Chapter 2, demonstrating in particular the pivotal role of a powerful press corps in pushing the transparency initiative forward and sustaining it over time.
In recent papers, Bonus-Malus Scales (BMS) estimated from data have been considered as an alternative to longitudinal and hierarchical approaches for modelling the dependence between different contracts for the same insured. Those papers, however, did not explain in detail how to construct and interpret BMS models, and many of the BMS's basic properties were left unexamined. The first objective of this paper is to correct this situation by explaining the logic behind BMS models and by describing those properties. More particularly, we explain how BMS models are linked with simple count regression models that include covariates summarizing past claims experience. This study could help actuaries understand how and why they should use BMS models for experience rating. The second objective of this paper is to create an artificial past claims history for each insured. This is done by combining recent panel data theory with BMS models. We show that this addition significantly improves the predictive capacity of the BMS and provides a temporary solution for insurers who do not have enough historical data. We apply the BMS model to real data from a major Canadian insurance company and analyse the results in depth to identify specific aspects of the BMS model.
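A minimal Python sketch of the scale mechanics described here; the jump size, scale bounds, and relativity function are illustrative choices, not the paper's calibration:

```python
import numpy as np

def bms_level_path(claims, smax=10, jump=2, start=5):
    """Follow a policy through the scale: up `jump` levels per claim,
    down one level after a claim-free year, bounded to {0, ..., smax}."""
    path, level = [], start
    for n in claims:
        path.append(level)
        level = min(smax, level + jump * n) if n > 0 else max(0, level - 1)
    return path

def premium(level, base=500.0, delta=0.12):
    # Relativity exp(delta * level): the premium rises with the level
    # reached through past claims (all numbers are illustrative).
    return base * np.exp(delta * level)

claims_history = [0, 1, 0, 0, 2, 0]          # yearly claim counts
for year, level in enumerate(bms_level_path(claims_history)):
    print(year, level, round(premium(level), 2))
```

The scale level thus acts as a one-number summary of past experience, which is what links BMS models to count regressions with claims-history covariates.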