Consider the class of two parameter marginal logistic (Rasch) models, for a test of m True-False items, where the latent ability is assumed to be bounded. Using results of Karlin and Studden, we show that this class of nonparametric marginal logistic (NML) models is equivalent to the class of marginal logistic models where the latent ability assumes at most (m + 2)/2 values. This equivalence has two implications. First, estimation for the NML model is accomplished by estimating the parameters of a discrete marginal logistic model. Second, consistency for the maximum likelihood estimates of the NML model can be shown (when m is odd) using the results of Kiefer and Wolfowitz. An example is presented which demonstrates the estimation strategy and contrasts the NML model with a normal marginal logistic model.
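To make the estimation strategy concrete, the following is a minimal sketch of fitting a marginal Rasch model with a discrete latent distribution on K = (m + 2)/2 support points by direct maximum likelihood. It is an illustration under simplifying assumptions (toy data, no identification constraints, a generic optimizer), not the paper's procedure.

```python
# Minimal sketch: marginal Rasch model with a discrete latent ability distribution,
# fitted by maximizing the marginal likelihood (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, softmax, logsumexp

def neg_marginal_loglik(params, X, K):
    """X: n x m binary response matrix; K: number of latent support points."""
    m = X.shape[1]
    b = params[:m]                       # item difficulties
    theta = params[m:m + K]              # latent support points
    pi = softmax(params[m + K:])         # latent weights, constrained to sum to 1
    p = expit(theta[:, None] - b[None, :])                      # K x m success probabilities
    log_lik_k = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T     # n x K pattern log-likelihoods
    return -np.sum(logsumexp(log_lik_k + np.log(pi), axis=1))   # negative marginal log-likelihood

# Toy example: m = 5 items, so at most (m + 2) // 2 = 3 support points are needed.
# A location constraint (e.g., fixing one difficulty at 0) would normally be imposed.
rng = np.random.default_rng(0)
X = (rng.random((200, 5)) < 0.6).astype(float)
m, K = X.shape[1], (X.shape[1] + 2) // 2
init = np.concatenate([np.zeros(m), np.linspace(-1.0, 1.0, K), np.zeros(K)])
fit = minimize(neg_marginal_loglik, init, args=(X, K), method="L-BFGS-B")
```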
Diagnostic classification models are confirmatory in the sense that the relationship between the latent attributes and responses to items is specified or parameterized. Such models are readily interpretable, with each component of the model usually having a practical meaning. However, parameterized diagnostic classification models are sometimes too simple to capture all the data patterns, resulting in significant lack of fit. In this paper, we attempt to obtain a compromise between interpretability and goodness of fit by regularizing a latent class model. Our approach starts with minimal assumptions on the data structure, followed by suitable regularization to reduce complexity, so that a readily interpretable, yet flexible, model is obtained. An expectation–maximization-type algorithm is developed for efficient computation. It is shown that the proposed approach enjoys good theoretical properties. Results from simulation studies and a real application are presented.
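As a concrete illustration of the general strategy, the sketch below performs one EM-type pass for a binary latent class model in which the M-step shrinks class-specific item probabilities toward a common value. The shrinkage penalty, function names, and data layout are illustrative assumptions, not the paper's regularizer or algorithm.

```python
# Minimal sketch: one EM pass for a binary latent class model with a simple
# shrinkage penalty in the M-step (illustrative; not the paper's exact penalty).
import numpy as np

def em_step(X, pi, P, lam=0.5):
    """X: n x m binary data; pi: class weights (C,); P: C x m item probabilities."""
    eps = 1e-10
    # E-step: posterior probability of each class for each respondent
    log_lik = X @ np.log(P + eps).T + (1 - X) @ np.log(1 - P + eps).T   # n x C
    log_post = np.log(pi + eps) + log_lik
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    # M-step: unpenalized updates of class weights and item probabilities
    pi_new = post.mean(axis=0)
    P_new = (post.T @ X) / post.sum(axis=0)[:, None]
    # Regularization: shrink class-specific probabilities toward the item mean,
    # pulling classes together and reducing effective model complexity.
    P_bar = P_new.mean(axis=0, keepdims=True)
    P_new = (1 - lam) * P_new + lam * P_bar
    return pi_new, np.clip(P_new, eps, 1 - eps)
```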
This paper traces the course of the consequences of viewing test responses as simply providing dichotomous data concerning ordinal relations. It begins by proposing that the score matrix is best considered to be items-plus-persons by items-plus-persons, with the wrongs recorded as well as the rights. This representation shows how an underlying order is defined; it has been used to provide the basis for a tailored testing procedure and to define a number of measures of test consistency. Test items provide person dominance relations, and the relations provided by one item can stand in one of three relations to those provided by a second item: redundant, contradictory, or unique. Summary statistics concerning the number of relations of each kind are easy to obtain and provide useful information about the test, information which is related to but different from the usual statistics. These concepts can be extended to form the basis of a test theory which is based on ordinal statistics and frequency counts and which invokes the concept of true scores only in a limited sense.
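As an illustration of the three relation types, the sketch below counts, for a pair of dichotomous items, how many person pairs receive redundant, contradictory, or unique dominance relations. The function and its conventions are one illustrative reading of these definitions, not code from the paper.

```python
# Sketch: count redundant, contradictory, and unique person-dominance relations
# provided by two dichotomous items (illustrative reading of the definitions).
import numpy as np

def relation_counts(x_j, x_k):
    """x_j, x_k: binary score vectors for two items over the same persons."""
    x_j, x_k = np.asarray(x_j), np.asarray(x_k)
    # r[a, b] = +1 if person a dominates person b on the item, -1 if dominated, 0 otherwise
    r_j = np.sign(x_j[:, None] - x_j[None, :])
    r_k = np.sign(x_k[:, None] - x_k[None, :])
    both = (r_j != 0) & (r_k != 0)
    redundant = int(np.sum(both & (r_j == r_k)) // 2)        # same direction on both items
    contradictory = int(np.sum(both & (r_j == -r_k)) // 2)   # opposite directions
    unique = int(np.sum((r_j != 0) ^ (r_k != 0)) // 2)       # only one item relates the pair
    return redundant, contradictory, unique

# Example: relation_counts([1, 1, 0, 0], [1, 0, 1, 0]) -> (1, 1, 4)
```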
A closed form estimator of the uniqueness (unique variance) in factor analysis is proposed. It has analytically desirable properties: consistency, asymptotic normality, and scale invariance. The estimation procedure is illustrated by application to two data sets, Emmett's data and Holzinger and Swineford's data. The new estimator is shown to lead to values rather close to the maximum likelihood estimator.
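A minimal sketch of what a closed-form (non-iterative) uniqueness estimator can look like in the one-factor case is given below, using the identity psi_i = sigma_ii - sigma_ij * sigma_ik / sigma_jk for two reference variables j and k. This shows only the basic idea under simplifying assumptions; the proposed estimator, its multi-factor form, and the choice of reference variables are developed in the paper.

```python
# One-factor sketch of a closed-form uniqueness estimator:
# psi_i = s_ii - s_ij * s_ik / s_jk, with reference variables j, k distinct from i.
import numpy as np

def uniqueness_one_factor(S, i, j, k):
    """S: sample covariance matrix; i: target variable; j, k: reference variables."""
    return S[i, i] - S[i, j] * S[i, k] / S[j, k]

# Toy check on data generated from a one-factor model
rng = np.random.default_rng(1)
n = 500
loadings = np.array([0.8, 0.7, 0.6, 0.5])
psi = np.array([0.36, 0.51, 0.64, 0.75])
f = rng.standard_normal(n)
X = f[:, None] * loadings + rng.standard_normal((n, 4)) * np.sqrt(psi)
S = np.cov(X, rowvar=False)
print(uniqueness_one_factor(S, i=0, j=1, k=2))   # should be near 0.36
```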
Previous studies have found some puzzling power anomalies related to testing the indirect effect of a mediator. The power for the indirect effect stagnates and even declines as the size of the indirect effect increases. Furthermore, the power for the indirect effect can be much higher than the power for the total effect in a model where there is no direct effect and therefore the indirect effect is of the same magnitude as the total effect. In the presence of a direct effect, the power for the indirect effect is often much higher than the power for the direct effect even when these two effects are of the same magnitude. In this study, the limiting distributions of the related statistics and their non-centralities are derived. Computer simulations are conducted to demonstrate their validity. These theoretical results are used to explain the observed anomalies.
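A small simulation can reproduce the flavor of this comparison. The sketch below estimates power for the indirect effect via a Sobel-type z test and power for the total effect in a mediation model with no direct effect; the sample size, effect sizes, and test choices are illustrative assumptions, not the study's design.

```python
# Simulation sketch: power for the indirect effect (Sobel z test) versus power
# for the total effect in X -> M -> Y with no direct effect (illustrative setup).
import numpy as np

def ols(X, y):
    """OLS with intercept; returns slope estimates and their standard errors."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - Xd.shape[1])
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    return beta[1:], np.sqrt(np.diag(cov))[1:]

def power_sim(n=100, a=0.3, b=0.3, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits_ind, hits_tot = 0, 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)          # no direct effect of x on y
        (a_hat,), (se_a,) = ols(x[:, None], m)
        (b_hat, _), (se_b, _) = ols(np.column_stack([m, x]), y)
        (c_hat,), (se_c,) = ols(x[:, None], y)      # total effect
        z_ind = a_hat * b_hat / np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
        hits_ind += abs(z_ind) > 1.96               # Sobel test for a*b
        hits_tot += abs(c_hat / se_c) > 1.96        # z test for the total effect
    return hits_ind / reps, hits_tot / reps

print(power_sim())
```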
This commentary concerns the theoretical properties of the estimation procedure in “A General Method of Empirical Q-matrix Validation” by Jimmy de la Torre and Chia-Yi Chiu. It raises the issue of the consistency of the estimator, proposes some modifications to it, and makes some conjectures.
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although this should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
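A quick Monte Carlo check of this point can be run with a single predictor: compare the empirical standard deviation of the sample standardized coefficient with the "textbook" SE obtained by rescaling the unstandardized SE by s_x/s_y. The setup below is an illustrative sketch, not the paper's derivation or simulation design.

```python
# Monte Carlo sketch: empirical SD of a sample standardized regression coefficient
# versus the "textbook" SE that rescales the unstandardized SE by s_x / s_y.
import numpy as np

def simulate(beta=0.5, n=50, reps=5000, seed=0):
    rng = np.random.default_rng(seed)
    std_coefs, textbook_ses = [], []
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = beta * x + rng.standard_normal(n)
        sx, sy = x.std(ddof=1), y.std(ddof=1)
        b = np.cov(x, y)[0, 1] / sx**2                      # unstandardized slope
        resid = y - y.mean() - b * (x - x.mean())
        se_b = np.sqrt(resid @ resid / (n - 2)) / (sx * np.sqrt(n - 1))
        std_coefs.append(b * sx / sy)                       # standardized coefficient
        textbook_ses.append(se_b * sx / sy)                 # textbook SE
    return np.std(std_coefs, ddof=1), np.mean(textbook_ses)

print(simulate(beta=0.5))   # compare the two values when beta != 0
print(simulate(beta=0.0))   # and when the population coefficient is zero
```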
Chang and Stout (1993) presented a derivation of the asymptotic posterior normality of the latent trait given examinee responses under nonrestrictive nonparametric assumptions for dichotomous IRT models. This paper presents an extension of their results to polytomous IRT models in a fairly straightforward manner. In addition, a global information function is defined, and the relationship between the global information function and the currently used information functions is discussed. An information index that combines both the global and local information is proposed for adaptive testing applications.
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized method of moments (GMM) estimation techniques in multilevel modeling, the authors present a series of estimators along a robust-to-efficient continuum. This continuum depends on the assumptions that the analyst makes regarding the extent of the correlated effects. It is shown that the GMM approach provides an overarching framework that encompasses well-known estimators such as fixed and random effects estimators and also provides more options. These GMM estimators can be expressed as instrumental variable (IV) estimators, which enhances their interpretability. Moreover, by exploiting the hierarchical structure of the data, the current technique does not require additional variables, unlike traditional IV methods. Further, statistical tests are developed to compare the different estimators. A simulation study examines the finite sample properties of the estimators and tests and confirms the theoretical ordering of the estimators with respect to their robustness and efficiency. It further shows that not only are regression coefficients biased, but variance components may be severely underestimated in the presence of correlated effects. Empirical standard errors are employed as they are less sensitive to correlated effects when compared to model-based standard errors. An example using student achievement data shows that GMM estimators can be effectively used in a search for the most efficient among unbiased estimators.
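The core problem of correlated effects, and the robust end of the continuum, can be seen in a toy two-level data set: when the group-level effect is correlated with a predictor, a pooled (random-effects-style) estimator is biased while the within-group (fixed effects) estimator is not. The sketch below shows only these two endpoints under assumed data; the paper's GMM estimators and tests are not implemented here.

```python
# Sketch: bias from a group effect that is correlated with the predictor.
# Pooled slope is biased; the within-group (fixed-effects) slope is robust.
import numpy as np

rng = np.random.default_rng(0)
G, n_per, beta = 200, 10, 1.0
u = rng.standard_normal(G)                                        # group effects
x = np.repeat(u, n_per) * 0.7 + rng.standard_normal(G * n_per)    # correlated with u
y = beta * x + np.repeat(u, n_per) + rng.standard_normal(G * n_per)
groups = np.repeat(np.arange(G), n_per)

# Pooled estimator (ignores the correlated group effect; biased here)
b_pooled = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Within (fixed-effects) estimator: demean x and y within groups
gm_x = np.bincount(groups, x) / n_per
gm_y = np.bincount(groups, y) / n_per
xw, yw = x - gm_x[groups], y - gm_y[groups]
b_within = (xw @ yw) / (xw @ xw)

print(b_pooled, b_within)    # b_within should be close to the true value 1.0
```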
The asymptotic posterior normality (APN) of the latent variable vector in an item response theory (IRT) model is a crucial argument in IRT modeling approaches. In case of a single latent trait and under general assumptions, Chang and Stout (Psychometrika, 58(1):37–52, 1993) proved the APN for a broad class of latent trait models for binary items. Under the same setup, they also showed the consistency of the latent trait’s maximum likelihood estimator (MLE). Since then, several modeling approaches have been developed that consider multivariate latent traits and assume their APN, a conjecture which has not been proved so far. We fill this theoretical gap by extending the results of Chang and Stout for multivariate latent traits. Further, we discuss the existence and consistency of MLEs, maximum a-posteriori and expected a-posteriori estimators for the latent traits under the same broad class of latent trait models.
The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output “AND” gate (DINA) model and the Deterministic Input Noisy Output “OR” gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
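For readers unfamiliar with the clustering step that the ACTCD justifies, the sketch below clusters examinees on attribute-wise sum scores computed from an assumed Q-matrix, using k-means. It illustrates the general idea only; the ACTCD's statistics and the specific DCMs covered by its propositions are treated in the article.

```python
# Sketch of the clustering idea behind the ACTCD: attribute-wise sum scores from
# the Q-matrix, then k-means into proficiency classes (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def cluster_examinees(X, Q, n_classes):
    """X: n x m binary responses; Q: m x K binary Q-matrix."""
    W = X @ Q                        # attribute-wise sum scores, n x K
    W = W / Q.sum(axis=0)            # scale by the number of items per attribute
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(W)
    return km.labels_

# Example: 3 attributes -> up to 2**3 = 8 proficiency classes
# labels = cluster_examinees(X, Q, n_classes=8)
```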
The paper derives sufficient conditions for the consistency and asymptotic normality of the least squares estimator of a trilinear decomposition model for multiway data analysis.
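For concreteness, the trilinear model is x_ijk ≈ Σ_r a_ir b_jr c_kr, and its least squares estimator is commonly computed by alternating least squares. The sketch below is a minimal, unoptimized ALS implementation under that model; it is illustrative and makes no claim about the paper's regularity conditions.

```python
# Minimal alternating least squares (ALS) sketch for the trilinear (CP/PARAFAC)
# decomposition x_ijk ~ sum_r a_ir * b_jr * c_kr.
import numpy as np

def khatri_rao(B, C):
    """Column-wise Khatri-Rao product."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def cp_als(X, R, iters=100, seed=0):
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(iters):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```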
Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
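The paper's software is an R package; purely as an illustration, the sketch below fits a simplified version of the two-level idea by normal-theory ML in Python, with a single predictor, a single moderator, and only a random slope (so the error variance depends on x). The parameter names and toy data-generating model are assumptions for the example, not the paper's specification.

```python
# Sketch of a two-level moderated regression fitted by normal-theory ML:
# y_i = a0 + a1*z_i + b1_i*x_i + e_i with b1_i = c0 + c1*z_i + u_i, so the
# mean is the usual MMR mean (with the x*z product) but the variance depends on x.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(par, x, z, y):
    a0, a1, c0, c1 = par[:4]
    s1, se = np.exp(par[4:])          # SD of the random slope and of the residual
    mu = a0 + a1 * z + c0 * x + c1 * x * z
    var = (s1 * x) ** 2 + se ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

# Toy data with a heteroscedastic moderation structure
rng = np.random.default_rng(0)
n = 500
x, z = rng.standard_normal(n), rng.standard_normal(n)
slope = 0.5 + 0.3 * z + 0.4 * rng.standard_normal(n)      # individual slopes
y = 1.0 + 0.2 * z + slope * x + rng.standard_normal(n)
fit = minimize(neg_loglik, np.zeros(6), args=(x, z, y), method="L-BFGS-B")
print(fit.x[:4])          # mean parameters: roughly (1.0, 0.2, 0.5, 0.3)
```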
We consider latent variable models for an infinite sequence (or universe) of manifest (observable) variables that may be discrete, continuous or some combination of these. The main theorem is a general characterization by empirical conditions of when it is possible to construct latent variable models that satisfy unidimensionality, monotonicity, conditional independence, and tail-measurability. Tail-measurability means that the latent variable can be estimated consistently from the sequence of manifest variables even though an arbitrary finite subsequence has been removed. The characterizing, necessary and sufficient, conditions that the manifest variables must satisfy for these models are conditional association and vanishing conditional dependence (as one conditions upon successively more other manifest variables). Our main theorem considerably generalizes and sharpens earlier results of Ellis and van den Wollenberg (1993), Holland and Rosenbaum (1986), and Junker (1993). It is also related to the work of Stout (1990).
The main theorem is preceded by many results for latent variable models in general—not necessarily unidimensional and monotone. They pertain to the uniqueness of latent variables and are connected with the conditional independence theorem of Suppes and Zanotti (1981). We discuss new definitions of the concepts of “true-score” and “subpopulation,” which generalize these notions from the “stochastic subject,” “random sampling,” and “domain sampling” formulations of latent variable models (e.g., Holland, 1990; Lord & Novick, 1968). These definitions do not require the a priori specification of a latent variable model.
Interpretation is ubiquitous in everyday life. We constantly interpret a variety of objects. Interpretation is central to the practice of international law, too. Arguing about international law’s content is the everyday business of international lawyers, and this often includes arguing about the existence and content of norms of customary international law (CIL). Although a number of scholars recognise that CIL can be interpreted, disagreements remain as to the precise methods and extent of CIL interpretation. Such disagreements are born of a common concern to secure competently made, coherent and accurate interpretations of CIL, given the latter’s non-textual nature. This chapter aims to explore in a preliminary manner two related questions regarding CIL interpretation: (1) Is it necessary, or even possible, to strive towards coherence in the interpretation of CIL? (2) Are there any possible indicators of (in-)coherence in that respect? Providing answers to these questions depends on how one understands coherence in the first place, including its relation to legal reasoning. A substantial part of the chapter will therefore deal with that as well.
A core normative assumption of welfare economics is that people ought to maximise utility and, as a corollary of that, they should be consistent in their choices. Behavioural economists have observed that people demonstrate systematic choice inconsistencies, but rather than relaxing the normative assumption of utility maximisation they tend to attribute these behaviours to individual error. I argue in this article that this, in itself, is an error – an ‘error error’. In reality, a planner cannot hope to understand the multifarious desires that drive a person’s choices. Consequently, she is not able to discern which choice in an inconsistent set is erroneous. Moreover, those who are inconsistent may view neither of their choices as erroneous if the context interacts meaningfully with their valuation of outcomes. Others are similarly opposed to planners paternalistically intervening in the market mechanism to correct for behavioural inconsistencies, and advocate that the free market is the best means by which people can settle on mutually agreeable exchanges. However, I maintain that policymakers have a legitimate role in also enhancing people’s agentic capabilities. The most important way in which to achieve this is to invest in aspects of human capital and to create institutions that are broadly considered foundational to a person’s agency. However, there is also a role for so-called boosts to help to correct basic characterisation errors. I further contend that government regulations against self-interested acts of behaviourally informed manipulation by one party over another are legitimate, to protect the manipulated party from undesired inconsistency in their choices.
This article examines the alignment of bilateral investment treaties (BITs) with domestic development policies. The analysis reveals a considerable disparity between Ethiopian BITs and the country's domestic development policies, and underscores the importance of ensuring consistency between the two. The potential options to resolve this disparity can be combined on a case-by-case basis, depending on different challenges, such as bargaining power, political commitment, procedural requirements and resistance from other treaty partners. The changing dynamics of global politics and the growing backlash against BITs have created a conducive environment for such reform.
The chapter addresses the penal regime of international criminal jurisdictions, focusing primarily on the law and practice of the UN ad hoc tribunals and the International Criminal Court (ICC). It sets out the categories of penalties which may be imposed by international criminal courts and tribunals for the core crimes and the offences against the administration of justice. It then outlines the commonly adduced general purposes for punishing perpetrators of international crimes (retribution, deterrence, rehabilitation, etc.) and addresses the extent to which the punishment rationales acknowledged at the national level remain valid within the international penal regime. It analyses the international jurisdictions’ sentencing principles and practice, in particular the need for the individualization of penalties while ensuring consistency in sentencing, and the relative weight accorded to aggravating and mitigating circumstances in determining the appropriate sentence. The chapter also surveys the procedures at sentencing, in particular the option of following a unified or bifurcated process for the determination of guilt or innocence and, if appropriate, the sentence, as well as the arrangements adopted for pardon, early release (commutation) and review of sentences.
In the traditional multidimensional credibility models developed by Jewell (1973), the estimation of the hypothetical mean vector involves complex matrix manipulations, which can be challenging to implement in practice. Additionally, the estimation of hyperparameters becomes even more difficult in high-dimensional risk variable scenarios. To address these issues, this paper proposes a new multidimensional credibility model based on the conditional joint distribution function for predicting future premiums. First, we develop an estimator of the joint distribution function of a vector of claims using linear combinations of indicator functions based on past observations. By minimizing the integral of the expected quadratic distance function between the proposed estimator and the true joint distribution function, we obtain the optimal linear Bayesian estimator of the joint distribution function. Using the plug-in method, we obtain an explicit formula for the multidimensional credibility estimator of the hypothetical mean vector. In contrast to the traditional multidimensional credibility approach, our newly proposed estimator does not involve a matrix as the credibility factor, but rather a scalar. This scalar is composed of both population information and sample information, and it still retains the essential property of increasing with the sample size. Furthermore, the new estimator based on the joint distribution function can be naturally extended and applied to estimate the process covariance matrix and risk premiums under various premium principles. We further illustrate the performance of the new estimator by comparing it with the traditional multidimensional credibility model using bivariate exponential-gamma and multivariate normal distributions. Finally, we present two real examples to demonstrate the findings of our study.
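The scalar-credibility structure referred to here is easiest to see in the classical Bühlmann form, where the predicted premium is a weighted average of the individual and collective means with weight Z = n/(n + k), which increases with the sample size n. The sketch below shows only this classical form as context; it is not the paper's joint-distribution-based estimator.

```python
# Sketch of the classical scalar credibility premium: Z = n / (n + k), where k is
# the ratio of expected process variance to the variance of hypothetical means.
import numpy as np

def credibility_premium(claims, collective_mean, k):
    """claims: past claims of a single risk; returns the credibility-weighted premium."""
    n = len(claims)
    Z = n / (n + k)                               # scalar credibility factor, increasing in n
    return Z * np.mean(claims) + (1 - Z) * collective_mean

print(credibility_premium(np.array([120.0, 90.0, 150.0]), collective_mean=100.0, k=2.0))
```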
Must rational thinkers have consistent sets of beliefs? I shall argue that it can be rational for a thinker to believe a set of propositions known to be inconsistent. If this is right, an important test for a theory of rational belief is that it allows for the right kinds of inconsistency. One problem we face in trying to resolve disagreements about putative rational requirements is that parties to the disagreement might be working with different conceptions of the relevant attitudes. My aim is modest. I hope to show that there is at least one important notion of belief such that a thinker might rationally hold a collection of beliefs (so understood) even when the thinker knows their contents entail a contradiction.