For network meta-analysis (NMA), we usually assume that the treatment arms are independent within each included trial. This assumption is justified for parallel-design trials and leads to a property we call consistency of variances for both multi-arm trials and NMA estimates. However, the assumption is violated for trials with correlated arms, for example, split-body trials. For multi-arm trials with correlated arms, the variance of a contrast is not simply the sum of the arm-based variances but includes a correlation term. This may lead to violations of variance consistency, and the inconsistency of variances may even propagate to the NMA estimates. We explain this using a geometric analogy in which three-arm trials correspond to triangles and four-arm trials to tetrahedra. We also specify what information must be extracted from a multi-arm trial with correlated arms and provide an algorithm for analyzing NMAs that include such trials.
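As a concrete illustration of the correlation term (a minimal sketch with assumed notation, not the paper's code): for a contrast between two arm means with variances v1 and v2 and correlation rho, independent arms give v1 + v2, while correlated arms subtract 2·rho·sqrt(v1·v2).

```python
import numpy as np

def contrast_variance(v1, v2, rho=0.0):
    """Variance of the difference of two arm means with correlation rho.

    rho = 0 recovers the parallel-design case Var(d) = v1 + v2.
    """
    return v1 + v2 - 2.0 * rho * np.sqrt(v1 * v2)

# Independent (parallel-design) arms: v1 + v2, about 0.13 here.
print(contrast_variance(0.04, 0.09))
# Positively correlated arms (e.g. split-body) shrink the contrast variance:
# 0.13 - 2 * 0.5 * 0.06 = 0.07.
print(contrast_variance(0.04, 0.09, rho=0.5))
```

With rho = 0 this reduces to the sum of arm-based variances, which is the case in which variance consistency holds; nonzero rho is what can make the variances of a multi-arm trial mutually inconsistent.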
This chapter explains how to estimate population parameters from data. We introduce random sampling, an approach that yields accurate estimates from limited data. We then define the bias and the standard error, which quantify the average error of an estimator and how much it varies, respectively. In addition, we derive deviation bounds and use them to prove the law of large numbers, which states that averaging many independent samples from a distribution yields an accurate estimate of its mean. An important consequence is that random sampling provides a precise estimate of means and proportions. However, we caution that this is not necessarily the case if the data contain extreme values. Next, we discuss the central limit theorem (CLT), according to which averages of independent quantities tend to be Gaussian. We again provide a cautionary tale, warning that this does not hold in the absence of independence. Then, we explain how to use the CLT to build confidence intervals, which quantify the uncertainty of estimates obtained from finite data. Finally, we introduce the bootstrap, a popular computational technique to estimate standard errors and build confidence intervals.
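As one concrete illustration of the chapter's final topic, the bootstrap standard error of a sample mean can be sketched in a few lines (the data and names here are illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=200)  # skewed sample, true mean = 2

def bootstrap_se(x, estimator=np.mean, n_boot=2000, rng=rng):
    """Resample x with replacement and return the std. dev. of the estimates."""
    n = len(x)
    estimates = [estimator(x[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return np.std(estimates, ddof=1)

se_boot = bootstrap_se(data)
se_formula = data.std(ddof=1) / np.sqrt(len(data))  # CLT-based standard error
print(se_boot, se_formula)  # the two estimates should be close
```

For the mean, the bootstrap and the CLT-based formula agree closely; the appeal of the bootstrap is that the same resampling loop works for estimators with no simple standard-error formula.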
The Hawkes process is a popular candidate for researchers to model phenomena that exhibit a self-exciting nature. The classical Hawkes process assumes the excitation kernel takes an exponential form, thus suggesting that the peak excitation effect of an event is immediate and the excitation effect decays towards 0 exponentially. While the assumption of an exponential kernel makes it convenient for studying the asymptotic properties of the Hawkes process, it can be restrictive and unrealistic for modelling purposes. A variation on the classical Hawkes process is proposed where the exponential assumption on the kernel is replaced by integrability and smoothness type conditions. However, it is substantially more difficult to conduct asymptotic analysis under this setup since the intensity process is non-Markovian when the excitation kernel is non-exponential, rendering techniques for studying the asymptotics of Markov processes inappropriate. By considering the Hawkes process with a general excitation kernel as a stationary Poisson cluster process, the intensity process is shown to be ergodic. Furthermore, a parametric setup is considered, under which, by utilising the recently established ergodic property of the intensity process, consistency of the maximum likelihood estimator is demonstrated.
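The classical exponential-kernel Hawkes process that the paper generalizes can be simulated by Ogata-style thinning, exploiting the fact that the intensity only decays between events. The following is an illustrative sketch with assumed parameter names (mu, alpha, beta), not code from the paper:

```python
import numpy as np

def simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, horizon=100.0, seed=0):
    """Simulate a Hawkes process with kernel phi(t) = alpha*beta*exp(-beta*t).

    Stationarity requires the branching ratio alpha < 1. Uses Ogata's
    thinning: between events the intensity decays, so its current value
    is a valid upper bound for proposing the next candidate point.
    """
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        lam_bar = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)  # candidate from the bounding rate
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * beta * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() * lam_bar <= lam_t:  # accept with prob lam_t / lam_bar
            events.append(t)
    return np.array(events)

ts = simulate_hawkes()
# Theoretical stationary event rate is mu / (1 - alpha) = 2 per unit time.
print(len(ts) / 100.0)
```

For a non-exponential kernel the same thinning idea still applies, but, as the abstract notes, the intensity is then non-Markovian, which is what complicates the asymptotic analysis.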
In this chapter we investigate how sociolinguistic theory overlaps with selected areas of applied linguistics. We revisit the question of how discrimination operates in the language ideology of Standard English and find out how this may entail serious impediments in domains such as education and health advice. We look at how anthropological and ethnographic issues have an impact on cultural misunderstandings, how insights from variation and change can be used to help improve children’s reading and writing skills, and discuss the involvement of sociolinguists in dialect maintenance and revival issues. There are special sections on forensic sociolinguistics and legal aspects of language usage, and we present hands-on cases of real-life issues where sociolinguistics is relevant, particularly the 2013 court case that followed the killing of Trayvon Martin.
This title focuses on the interpretative methodologies and principles employed by international human rights organs in applying and developing human rights norms. It explores the role of various interpreters, including international, regional, and national courts, in shaping the meaning and scope of human rights. It examines the methods of interpretation used by human rights bodies, such as textual, contextual, purposive, and evolutionary approaches, and the challenges in ensuring consistency and coherence across different jurisdictions. It also discusses the purposes of interpretation, including the protection of human rights, the development of international human rights law, and the promotion of judicial dialogue and coherence. By analyzing the interpretative practices of human rights organs, this title aims to provide a deeper understanding of the dynamics of human rights interpretation and the factors influencing the application of human rights norms in diverse legal and cultural contexts.
This part focuses on the interpretative methodologies and principles employed by international human rights organs in applying and developing human rights norms. It explores the role of various interpreters, including international, regional, and national courts, in shaping the meaning and scope of human rights. The sections examine the methods of interpretation used by human rights bodies, such as textual, contextual, purposive, and evolutionary approaches, and the challenges in ensuring consistency and coherence across different jurisdictions. It also discusses the purposes of interpretation, including the protection of human rights, the development of international human rights law, and the promotion of judicial dialogue and coherence. This part delves into the international legal regime governing human rights and freedoms, covering states’ general obligations, the conditions for engaging state responsibility, and the regime for the enjoyment and exercise of rights and freedoms. By analyzing the interpretative practices and legal obligations, this part aims to provide a deeper understanding of the dynamics of human rights interpretation and the factors influencing the application of human rights norms in diverse legal and cultural contexts.
The preface paradox is often taken to show that beliefs can be individually rational but jointly inconsistent. However, this received conflict between rationality and consistency is unfounded. This paper seeks to show that no rational beliefs are actually inconsistent in the preface paradox.
We explore general notions of consistency. These notions are sentences $\mathcal {C}_{\alpha }$ (they depend on numerations $\alpha $ of a certain theory) that generalize the usual features of consistency statements. The following forms of consistency fit the definition of general notions of consistency (${\texttt {Pr}}_{\alpha }$ denotes the provability predicate for the numeration $\alpha $): $\neg {\texttt {Pr}}_{\alpha }(\ulcorner \perp \urcorner )$, $\omega \text {-}{\texttt {Con}}_{\alpha }$ (the formalized $\omega $-consistency), $\neg {\texttt {Pr}}_{\alpha }(\ulcorner {\texttt {Pr}}_{\alpha }(\ulcorner \cdots {\texttt {Pr}}_{\alpha }(\ulcorner \perp \urcorner )\cdots \urcorner )\urcorner )$, and $n\text {-}{\texttt {Con}}_{\alpha }$ (the formalized n-consistency of Kreisel).
We generalize the former notions of consistency while maintaining two important features, to wit: Gödel’s Second Incompleteness Theorem, i.e., $T\nvdash \mathcal {C}_{\xi }$ (with $\xi $ some standard $\Delta _0(T)$-numeration of the axioms of T), and a result by Feferman that guarantees the existence of a numeration $\tau $ such that $T\vdash \mathcal {C}_\tau $.
We encompass slow consistency into our framework. To show how transversal and natural our approach is, we create a notion of provability from a given $\mathcal {C}_{\alpha }$, which we call $\mathcal {P}_{\mathcal {C}_{\alpha }}$, and we present sufficient conditions on $\mathcal {C}_{\alpha }$ for $\mathcal {P}_{\mathcal {C}_{\alpha }}$ to satisfy the standard derivability conditions. Moreover, we develop a notion of interpretability from a given $\mathcal {C}_{\alpha }$, which we call $\rhd _{\mathcal {C}_{\alpha }}$, and we study some of its properties. All these new notions—of provability and interpretability—serve primarily to emphasize the naturalness of our notions, not necessarily to give insights into these topics.
This paper critically assesses Rizzo and Whitman’s theory of inclusive rationality in light of the ongoing cross-disciplinary debate about rationality, welfare analyses and policy evaluation. The paper aims to provide three main contributions to this debate. First, it explicates the relation between the consistency conditions presupposed by standard axiomatic conceptions of rationality and the standards of rationality presupposed by Rizzo and Whitman’s theory of inclusive rationality. Second, it provides a qualified defence of the consistency conditions presupposed by standard axiomatic conceptions of rationality against the main criticisms put forward by Rizzo and Whitman. And third, it identifies and discusses specific strengths and weaknesses of Rizzo and Whitman’s theory of inclusive rationality in the context of welfare analyses and policy evaluation.
We begin with the canonical status of the reals: this extends to uniqueness up to isomorphism as a complete Archimedean ordered field, but not to cardinality aspects. We discuss four ‘elephants in the room’ here (an elephant in the room is something obviously there but which no one wants to mention). The first elephant (from Gödel’s incompleteness theorem and the Continuum Hypothesis, CH): one cannot properly speak of the real line, but must rather specify which real line one chooses to work with. The second is ‘which sets of reals can one use?’ (it depends on what axioms of set theory one assumes – in particular, the role of the Axiom of Choice, AC). The third is that there are sentences that are neither provable nor disprovable, and that no non-trivial axiom system is capable of proving its own consistency. Thus, we do not – cannot – know that mathematics itself is consistent. The fourth elephant is that even to define cardinals, the concept of cardinality needs AC.
Consider the class of two parameter marginal logistic (Rasch) models, for a test of m True-False items, where the latent ability is assumed to be bounded. Using results of Karlin and Studden, we show that this class of nonparametric marginal logistic (NML) models is equivalent to the class of marginal logistic models where the latent ability assumes at most (m + 2)/2 values. This equivalence has two implications. First, estimation for the NML model is accomplished by estimating the parameters of a discrete marginal logistic model. Second, consistency for the maximum likelihood estimates of the NML model can be shown (when m is odd) using the results of Kiefer and Wolfowitz. An example is presented which demonstrates the estimation strategy and contrasts the NML model with a normal marginal logistic model.
Diagnostic classification models are confirmatory in the sense that the relationship between the latent attributes and responses to items is specified or parameterized. Such models are readily interpretable, with each component of the model usually having a practical meaning. However, parameterized diagnostic classification models are sometimes too simple to capture all the data patterns, resulting in significant model lack of fit. In this paper, we attempt to obtain a compromise between interpretability and goodness of fit by regularizing a latent class model. Our approach starts with minimal assumptions on the data structure, followed by suitable regularization to reduce complexity, so that a readily interpretable yet flexible model is obtained. An expectation–maximization-type algorithm is developed for efficient computation. It is shown that the proposed approach enjoys good theoretical properties. Results from simulation studies and a real application are presented.
This paper traces the consequences of viewing test responses as simply providing dichotomous data concerning ordinal relations. It begins by proposing that the score matrix is best considered to be items-plus-persons by items-plus-persons, recording the wrongs as well as the rights. This shows how an underlying order is defined, which was used both to provide the basis for a tailored testing procedure and to define a number of measures of test consistency. Test items provide person dominance relations, and the relations provided by one item can stand in one of three relations to those of a second item: redundant, contradictory, or unique. Summary statistics concerning the number of relations of each kind are easy to obtain and provide useful information about the test, information which is related to but different from the usual statistics. These concepts can be extended to form the basis of a test theory which is based on ordinal statistics and frequency counts and which invokes the concept of true scores only in a limited sense.
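The three-way classification of inter-item relations can be sketched as follows (an illustrative reading of the abstract, not the paper's code): each dichotomous item induces dominance relations between person pairs (a person answering right dominates one answering wrong), and a second item's relation on the same pair is then redundant, contradictory, or unique.

```python
import numpy as np

def compare_items(a, b):
    """Classify person-pair dominance relations from two 0/1 response vectors.

    For each unordered person pair: if both items order the pair, the second
    relation is redundant (same direction) or contradictory (opposite);
    if only one item orders the pair, the relation is unique.
    """
    counts = {"redundant": 0, "contradictory": 0, "unique": 0}
    n = len(a)
    for p in range(n):
        for q in range(p + 1, n):
            ra = np.sign(a[p] - a[q])  # +1 if p dominates q on item a
            rb = np.sign(b[p] - b[q])
            if ra and rb:
                counts["redundant" if ra == rb else "contradictory"] += 1
            elif ra or rb:
                counts["unique"] += 1
    return counts

print(compare_items([1, 1, 0, 0], [1, 0, 1, 0]))
```

Aggregating these counts over all item pairs gives the kind of ordinal summary statistics the paper describes, without invoking any parametric true-score model.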
A closed-form estimator of the uniqueness (unique variance) in factor analysis is proposed. It has analytically desirable properties—consistency, asymptotic normality, and scale invariance. The estimation procedure is illustrated through application to two data sets: Emmett's data and Holzinger and Swineford's data. The new estimator is shown to yield values rather close to the maximum likelihood estimator.
Previous studies have found some puzzling power anomalies related to testing the indirect effect of a mediator. The power for the indirect effect stagnates and even declines as the size of the indirect effect increases. Furthermore, the power for the indirect effect can be much higher than the power for the total effect in a model where there is no direct effect and therefore the indirect effect is of the same magnitude as the total effect. In the presence of a direct effect, the power for the indirect effect is often much higher than the power for the direct effect even when these two effects are of the same magnitude. In this study, the limiting distributions of related statistics and their non-centralities are derived. Computer simulations are conducted to demonstrate their validity. These theoretical results are used to explain the observed anomalies.
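For context (a standard construction in this literature, not necessarily the paper's statistic), the indirect effect a·b is commonly tested with the Sobel z statistic, whose first-order standard error combines the two path estimates:

```python
import numpy as np

def sobel_z(a, se_a, b, se_b):
    """Sobel z statistic for the indirect effect a*b of a mediator.

    Uses the first-order delta-method standard error
    sqrt(b^2 * se_a^2 + a^2 * se_b^2).
    """
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Two paths of equal size with equal standard errors: z is about 2.12.
print(sobel_z(0.3, 0.1, 0.3, 0.1))
```

The non-centrality of this statistic depends on a and b jointly rather than only on the product a·b, which is one route to the kind of power anomalies the paper analyzes.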
This commentary concerns the theoretical properties of the estimation procedure in “A General Method of Empirical Q-matrix Validation” by Jimmy de la Torre and Chia-Yi Chiu. It raises the consistency issue of the estimator, proposes some modifications to it, and also makes some conjectures.
The paper obtains consistent standard errors (SE) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although it should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
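The quantities involved can be sketched numerically (an illustrative simulation with assumed names, not the paper's code): the standardized slope rescales the raw slope by the sample standard deviations, and the "textbook" SE simply rescales the raw-slope SE the same way, treating the standard deviations as fixed:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

b, b0 = np.polyfit(x, y, 1)                        # raw slope and intercept
resid = y - (b * x + b0)
se_b = np.sqrt(resid.var(ddof=2) / (n * x.var()))  # usual OLS slope SE
beta = b * x.std(ddof=1) / y.std(ddof=1)           # standardized coefficient
# "Textbook" SE: rescale se_b, ignoring the sampling error in the SDs.
se_beta_textbook = se_b * x.std(ddof=1) / y.std(ddof=1)
print(beta, se_beta_textbook)
```

In simple regression the standardized slope equals the sample correlation; the paper's point is that the rescaled SE above ignores the randomness of the sample standard deviations and is therefore consistent only when the population coefficient is zero.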
Chang and Stout (1993) presented a derivation of the asymptotic posterior normality of the latent trait given examinee responses under nonrestrictive nonparametric assumptions for dichotomous IRT models. This paper presents an extension of their results to polytomous IRT models in a fairly straightforward manner. In addition, a global information function is defined, and the relationship between the global information function and the currently used information functions is discussed. An information index that combines both the global and local information is proposed for adaptive testing applications.
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized method of moments (GMM) estimation techniques in multilevel modeling, the authors present a series of estimators along a robust-to-efficient continuum. This continuum depends on the assumptions that the analyst makes regarding the extent of the correlated effects. It is shown that the GMM approach provides an overarching framework that encompasses well-known estimators such as fixed and random effects estimators and also provides more options. These GMM estimators can be expressed as instrumental variable (IV) estimators, which enhances their interpretability. Moreover, by exploiting the hierarchical structure of the data, the current technique does not require additional variables, unlike traditional IV methods. Further, statistical tests are developed to compare the different estimators. A simulation study examines the finite sample properties of the estimators and tests and confirms the theoretical order of the estimators with respect to their robustness and efficiency. It further shows that not only are regression coefficients biased, but variance components may be severely underestimated in the presence of correlated effects. Empirical standard errors are employed as they are less sensitive to correlated effects when compared to model-based standard errors. An example using student achievement data shows that GMM estimators can be effectively used in a search for the most efficient among unbiased estimators.
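The contrast between pooled estimation and the fixed-effects end of the continuum can be illustrated with a small simulation (an assumed data-generating process, not the paper's study): when a level-1 covariate is correlated with the group effect, the pooled slope is biased, while the within-group (fixed-effects) estimator remains consistent.

```python
import numpy as np

rng = np.random.default_rng(3)
groups, per = 200, 20
u = rng.normal(size=groups)                            # group effects
g = np.repeat(np.arange(groups), per)
x = u[g] + rng.normal(size=groups * per)               # x correlated with u
y = 1.0 * x + u[g] + rng.normal(size=groups * per)     # true slope = 1

# Pooled OLS ignores the correlated group effect and absorbs it into the slope.
b_pooled = np.polyfit(x, y, 1)[0]

# Fixed-effects (within) estimator: demean x and y within each group,
# which removes the group effect entirely.
xw = x - np.bincount(g, weights=x)[g] / per
yw = y - np.bincount(g, weights=y)[g] / per
b_within = (xw @ yw) / (xw @ xw)
print(b_pooled, b_within)  # pooled is biased (roughly 1.5 here); within near 1
```

Under this design the pooled slope converges to 1 + cov(x, u)/var(x) = 1.5; the GMM framework in the paper interpolates between such robust within-type estimators and more efficient ones that use stronger exogeneity assumptions.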
The asymptotic posterior normality (APN) of the latent variable vector in an item response theory (IRT) model is a crucial argument in IRT modeling approaches. In case of a single latent trait and under general assumptions, Chang and Stout (Psychometrika, 58(1):37–52, 1993) proved the APN for a broad class of latent trait models for binary items. Under the same setup, they also showed the consistency of the latent trait’s maximum likelihood estimator (MLE). Since then, several modeling approaches have been developed that consider multivariate latent traits and assume their APN, a conjecture which has not been proved so far. We fill this theoretical gap by extending the results of Chang and Stout for multivariate latent traits. Further, we discuss the existence and consistency of MLEs, maximum a-posteriori and expected a-posteriori estimators for the latent traits under the same broad class of latent trait models.