The present study aimed to adapt the actively open-minded thinking about evidence (AOT-E) scale to Brazilian Portuguese and assess its psychometric properties and nomological network in a Brazilian sample. It begins by investigating the underlying content structure of the AOT-E in its original form. Results from an exploratory factor analysis (EFA) of secondary data serve as the basis for the proposed adaptation, which is subjected to both EFA and confirmatory factor analysis (CFA). A total of 718 participants from various regions of Brazil completed an online survey that included the AOT-E, along with other instruments that allowed for an assessment of the scale’s nomological network, including measures of science literacy, attitude toward science, conspiracy beliefs, and religiosity. The EFA and CFA of both the original and Brazilian samples suggested a unidimensional solution. Despite the differences found in a multigroup CFA, polychoric correlations provided evidence of expected nomological relationships that replicate international findings. Overall, this study contributes to expanding the availability of adapted and validated research instruments in non-WEIRD samples.
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis.
A new algorithm to obtain the least-squares or MINRES solution in common factor analysis is presented. It is based on the up-and-down Marquardt algorithm developed by the present authors for a general nonlinear least-squares problem. Experiments with some numerical models and some empirical data sets showed that the algorithm worked nicely and that SMC (Squared Multiple Correlation) performed best among four sets of initial values for common variances but that the solution might sometimes be very sensitive to fluctuations in the sample covariance matrix.
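The abstract's up-and-down Marquardt algorithm is not reproduced here, but the least-squares (MINRES) objective it solves is easy to state in code. The sketch below is a minimal illustration only: it minimizes the off-diagonal residuals of R − LLᵀ with a generic `scipy.optimize.least_squares` call, starting from SMC initial communalities as the abstract recommends.

```python
import numpy as np
from scipy.optimize import least_squares

def minres_loadings(R, k):
    """Least-squares (MINRES) common factor loadings for k factors.

    Minimizes the off-diagonal residuals of R - L @ L.T, starting from
    squared multiple correlations (SMC) as initial communalities.
    Illustrative sketch; not the up-and-down Marquardt algorithm.
    """
    p = R.shape[0]
    # SMC of each variable with the remaining ones: 1 - 1/diag(R^-1).
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    # Start values: eigendecomposition of the SMC-reduced correlation matrix.
    Rr = R.copy()
    np.fill_diagonal(Rr, smc)
    vals, vecs = np.linalg.eigh(Rr)
    order = np.argsort(vals)[::-1][:k]
    L0 = vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))
    mask = ~np.eye(p, dtype=bool)  # fit off-diagonal elements only

    def resid(l_flat):
        L = l_flat.reshape(p, k)
        return (R - L @ L.T)[mask]

    return least_squares(resid, L0.ravel()).x.reshape(p, k)
```

For a correlation matrix that exactly satisfies a one-factor model, the recovered loadings match the generating ones up to sign.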
Current practice in structural modeling of observed continuous random variables is limited to representation systems for first and second moments (e.g., means and covariances), and to distribution theory based on multivariate normality. In psychometrics the multinormality assumption is often incorrect, so that statistical tests on parameters, or model goodness of fit, will frequently be incorrect as well. It is shown that higher order product moments yield important structural information when the distribution of variables is arbitrary. Structural representations are developed for generalizations of the Bentler-Weeks, Jöreskog-Keesling-Wiley, and factor analytic models. Some asymptotically distribution-free efficient estimators for such arbitrary structural models are developed. Limited information estimators are obtained as well. The special case of elliptical distributions that allow nonzero but equal kurtoses for variables is discussed in some detail. The argument is made that multivariate normal theory for covariance structure models should be abandoned in favor of elliptical theory, which is only slightly more difficult to apply in practice but specializes to the traditional case when normality holds. Many open research areas are described.
Factor analysis for nonnormally distributed variables is discussed in this paper. The main difference between our approach and more traditional approaches is that not only second-order cross-products (like covariances) are utilized, but also higher-order cross-products. It turns out that under some conditions the parameters (factor loadings) can be uniquely determined. Two estimation procedures are discussed. One method gives Best Generalized Least Squares (BGLS) estimates, but is computationally very heavy, in particular for large data sets. The other method is a least-squares method which is computationally less heavy. In one example the two methods are compared using the bootstrap method. In another example real-life data are analyzed.

In an addendum to his seminal 1969 article Jöreskog stated two sets of conditions for rotational identification of the oblique factor solution under utilization of fixed zero elements in the factor loadings matrix (Jöreskog in Advances in factor analysis and structural equation models, pp. 40–43, 1979). These condition sets, formulated under factor correlation and factor covariance metrics, respectively, were claimed to be equivalent and to lead to global rotational uniqueness of the factor solution. It is shown here that the conditions for the oblique factor correlation structure need to be amended for global rotational uniqueness, and, hence, that the condition sets are not equivalent in terms of unicity of the solution.
The maximum-likelihood estimator dominates the estimation of general structural equation models. Noniterative, equation-by-equation estimators for factor analysis have received some attention, but little has been done on such estimators for latent variable equations. I propose an alternative 2SLS estimator of the parameters in LISREL-type models and contrast it with the existing ones. The new 2SLS estimator allows observed and latent variables to originate from nonnormal distributions, is consistent, has a known asymptotic covariance matrix, and is estimable with standard statistical software. Diagnostics for evaluating instrumental variables are described. An empirical example illustrates the estimator.
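The paper's contribution lies in deriving model-implied instruments for latent variable equations; the underlying two-stage least squares step itself is generic. As an illustrative sketch only (simulated data and variable names are hypothetical, not from the paper), the following shows why instrumenting removes the bias that ordinary regression suffers when a regressor is correlated with the error:

```python
import numpy as np

def tsls(y, X, Z):
    """Generic two-stage least squares: regress y on X using instruments Z.

    Stage 1: fitted values of X from its regression on Z.
    Stage 2: regress y on those fitted values.
    """
    Xh = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.solve(Xh.T @ X, Xh.T @ y)
```

In a just-identified case this reduces to the classical instrumental-variable estimator cov(z, y)/cov(z, x); with an endogenous regressor, OLS is inconsistent while 2SLS recovers the structural coefficient.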
It is proved for the common factor model with r common factors that, under certain conditions which maintain the distinctiveness of each common factor, a given common factor will be determinate if there exists an unlimited number of variables in the model, each having an absolute correlation with the factor greater than some arbitrarily small positive quantity.
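Guttman's determinacy index makes the limiting behavior concrete: for a one-factor model with standardized variables, the correlation between the factor and its best linear estimate is ρ = √(λᵀR⁻¹λ), and ρ → 1 as more qualifying variables are added. A small numerical illustration (not from the paper):

```python
import numpy as np

def one_factor_determinacy(lam):
    """Factor-score determinacy rho = sqrt(lam' R^-1 lam) for a one-factor
    model with standardized variables, so R = lam lam' + diag(1 - lam**2)."""
    R = np.outer(lam, lam)
    np.fill_diagonal(R, 1.0)
    return float(np.sqrt(lam @ np.linalg.solve(R, lam)))
```

Even with modest loadings of 0.4, determinacy climbs steadily as the number of variables grows, consistent with the theorem.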
The component loadings are interpreted by considering their magnitudes, which indicate how strongly each of the original variables relates to the corresponding principal component. The usual ad hoc practice in the interpretation process is to ignore the variables with small absolute loadings or set to zero loadings smaller than some threshold value. This, in fact, makes the component loadings sparse in an artificial and subjective way. We propose a new alternative approach, which produces sparse loadings in an optimal way. The introduced approach is illustrated on two well-known data sets and compared to the existing rotation methods.
Item-level response time (RT) data can be conveniently collected from computer-based test/survey delivery platforms and have been demonstrated to bear a close relation to a miscellany of cognitive processes and test-taking behaviors. Individual differences in general processing speed can be inferred from item-level RT data using factor analysis. Conventional linear normal factor models make strong parametric assumptions, which sacrifices modeling flexibility for interpretability, and thus are not ideal for describing complex associations between observed RT and the latent speed. In this paper, we propose a semiparametric factor model with minimal parametric assumptions. Specifically, we adopt a functional analysis of variance representation for the log conditional densities of the manifest variables, in which the main effect and interaction functions are approximated by cubic splines. Penalized maximum likelihood estimation of the spline coefficients can be performed by an Expectation-Maximization algorithm, and the penalty weight can be empirically determined by cross-validation. In a simulation study, we compare the semiparametric model with incorrectly and correctly specified parametric factor models with regard to the recovery of data generating mechanism. A real data example is also presented to demonstrate the advantages of the proposed method.
Some methods that analyze three-way arrays of data (including INDSCAL and CANDECOMP/PARAFAC) provide solutions that are not subject to arbitrary rotation. This property is studied in this paper by means of the “triple product” [A, B, C] of three matrices. The question is how well the triple product determines the three factors. The answer: up to permutation of columns and multiplication of columns by scalars—under certain conditions. In this paper we greatly expand the conditions under which the result is known to hold. A surprising fact is that the nonrotatability characteristic can hold even when the number of factors extracted is greater than every dimension of the three-way array, namely, the number of subjects, the number of tests, and the number of treatments.
Component loss functions (CLFs) similar to those used in orthogonal rotation are introduced to define criteria for oblique rotation in factor analysis. It is shown how the shape of the CLF affects the performance of the criterion it defines. For example, it is shown that monotone concave CLFs give criteria that are minimized by loadings with perfect simple structure when such loadings exist. Moreover, if the CLFs are strictly concave, minimizing them must produce perfect simple structure whenever it exists. Examples show that methods defined by concave CLFs perform well much more generally. While it appears important to use a concave CLF, the specific CLF used is less important. For example, the very simple linear CLF gives a rotation method that can easily outperform the most popular oblique rotation methods promax and quartimin and is competitive with the more complex simplimax and geomin methods.
Factor analysis and principal component analysis result in computing a new coordinate system, which is usually rotated to obtain a better interpretation of the results. In the present paper, the idea of rotation to simple structure is extended to two dimensions. While the classical definition of simple structure is aimed at rotating (one-dimensional) factors, the extension to a simple structure for two dimensions is based on the rotation of planes. The resulting planes (principal planes) reveal a better view of the data than planes spanned by factors from classical rotation and hence allow a more reliable interpretation. The usefulness of the method as well as the effectiveness of a proposed algorithm are demonstrated by simulation experiments and an example.
Component loss functions (CLFs) are used to generalize the quartimax criterion for orthogonal rotation in factor analysis. These replace the fourth powers of the factor loadings by an arbitrary function of the second powers. Criteria of this form were introduced by a number of authors, primarily Katz and Rohlf (1974) and Rozeboom (1991), but there has been essentially no follow-up to this work. A method so simple, natural, and general deserves to be investigated more completely. A number of theoretical results are derived including the fact that any method using a concave CLF will recover perfect simple structure whenever it exists, and there are methods that will recover Thurstone simple structure whenever it exists. Specific CLFs are identified and it is shown how to compare these using standardized plots. Numerical examples are used to illustrate and compare CLF and other methods. Sorted absolute loading plots are introduced to aid in comparing results and setting parameters for methods that require them.
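In this framework, quartimax itself is the CLF method with h(x) = x² applied to the squared loadings, i.e. it maximizes the sum of fourth powers of the loadings. As a minimal illustration (a generic gradient-projection rotation in the style of Jennrich's algorithm, not code from the paper), one can implement orthogonal quartimax in a few lines:

```python
import numpy as np

def quartimax(A, n_iter=500, tol=1e-8):
    """Orthogonal quartimax rotation of a loading matrix A via gradient
    projection. Quartimax is the CLF method with h(x) = x**2 on squared
    loadings; we minimize f(L) = -sum(L**4)/4 over rotations T."""
    p, k = A.shape
    T = np.eye(k)
    alpha = 1.0
    L = A @ T
    f = -np.sum(L**4) / 4.0
    G = A.T @ (-L**3)                 # gradient with respect to T
    for _ in range(n_iter):
        M = T.T @ G
        S = (M + M.T) / 2.0
        Gp = G - T @ S                # project gradient onto the manifold
        s = np.linalg.norm(Gp)
        if s < tol:
            break
        alpha *= 2.0
        for _ in range(20):           # backtracking line search
            U, _, Vt = np.linalg.svd(T - alpha * Gp)
            Tt = U @ Vt               # retract onto the orthogonal group
            L = A @ Tt
            ft = -np.sum(L**4) / 4.0
            if ft < f - 0.5 * s**2 * alpha:
                break
            alpha /= 2.0
        T, f = Tt, ft
        G = A.T @ (-L**3)
    return A @ T, T
```

Applied to a loading matrix obtained by rotating a perfect-simple-structure matrix away from its simple position, the method recovers the simple structure (up to column permutation and sign).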
Joint correspondence analysis is a technique for constructing reduced-dimensional representations of pairwise relationships among categorical variables. The technique was proposed by Greenacre as an alternative to multiple correspondence analysis. Joint correspondence analysis differs from multiple correspondence analysis in that it focuses solely on between-variable relationships. Greenacre described one alternating least-squares algorithm for conducting joint correspondence analysis. Another alternating least-squares algorithm is described in this article. The algorithm is guaranteed to converge, and does so in fewer iterations than does the algorithm proposed by Greenacre. A modification of the algorithm for handling Heywood cases is described. The algorithm is illustrated on two data sets.
Situations sometimes arise in which variables collected in a study are not jointly observed. This typically occurs because of study design. An example is an equating study where distinct groups of subjects are administered different sections of a test. In the normal maximum likelihood function to estimate the covariance matrix among all variables, elements corresponding to those that are not jointly observed are unidentified. If a factor analysis model holds for the variables, however, then all sections of the matrix can be accurately estimated, using the fact that the covariances are a function of the factor loadings. Standard errors of the estimated covariances can be obtained by the delta method. In addition to estimating the covariance matrix in this design, the method can be applied to other problems such as regression factor analysis. Two examples are presented to illustrate the method.
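The key identity is that under the factor model every covariance, observed or not, equals the corresponding element of LLᵀ + Ψ. A minimal sketch of the fill-in step (illustrative only; the paper's delta-method standard errors are not reproduced, and the fitting of L and Ψ to the identified cells is assumed done):

```python
import numpy as np

def complete_covariance(S_obs, mask, L, psi):
    """Fill cells of a covariance matrix that were never jointly observed.

    S_obs : covariance matrix with NaN in non-jointly-observed cells.
    mask  : boolean matrix, True where the cell was actually observed.
    L, psi: loadings and uniquenesses of a factor model fitted to the
            identified portion of the matrix.
    Observed cells are kept; unobserved cells get the model-implied
    value (L @ L.T + diag(psi))[i, j].
    """
    S_imp = L @ L.T + np.diag(psi)
    return np.where(mask, S_obs, S_imp)
```

For example, in a one-factor model the unobservable covariance between variables i and j is recovered as the product of their loadings.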
This paper considers some mathematical aspects of minimum trace factor analysis (MTFA). The uniqueness of an optimal point of MTFA is proved and necessary and sufficient conditions for a point x to be optimal are established. Finally, some results about the connection between MTFA and classical minimum rank factor analysis are presented.
A general approach to the analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.
Several different types of covariance structures are considered as special cases of the general model. These include models for sets of congeneric tests, models for confirmatory and exploratory factor analysis, models for estimation of variance and covariance components, regression models with measurement errors, path analysis models, simplex and circumplex models. Many of the different types of covariance structures are illustrated by means of real data.
The relationship between the higher-order factor model and the hierarchical factor model is explored formally. We show that the Schmid-Leiman transformation produces constrained hierarchical factor solutions. Using a generalized Schmid-Leiman transformation and its inverse, we show that for any unconstrained hierarchical factor model there is an equivalent higher-order factor model with direct effects (loadings) on the manifest variables from the higher-order factors. Therefore, the class of higher-order factor models (without direct effects of higher-order factors) is nested within the class of unconstrained hierarchical factor models. In light of these formal results, we discuss some implications for testing the higher-order factor model and the issue of the general factor. An interesting aspect concerning the efficient fitting of the higher-order factor model with direct effects is noted.
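The classical Schmid-Leiman transformation referred to here (not the paper's generalized version) is short enough to sketch. Assuming a first-order pattern L₁ (p × k) and second-order loadings L₂ of the first-order factors on a general factor, it residualizes the group factors against the general factor:

```python
import numpy as np

def schmid_leiman(L1, L2):
    """Classical Schmid-Leiman transformation of a higher-order solution.

    L1 : first-order factor pattern, p x k.
    L2 : loadings of the k first-order factors on the higher-order
         factor(s), k x m.
    Returns loadings on [general factor(s) | residualized group factors].
    """
    L2 = np.atleast_2d(L2)
    if L2.shape[0] != L1.shape[1]:
        L2 = L2.T
    general = L1 @ L2                       # manifest loadings on the general factor
    u = np.sqrt(np.clip(1.0 - np.sum(L2**2, axis=1), 0.0, None))
    group = L1 * u                          # group loadings, general factor partialed out
    return np.hstack([general, group])
```

A useful check: the orthogonal Schmid-Leiman solution reproduces exactly the common-variance structure of the oblique first-order solution, L₁ Φ L₁ᵀ, where Φ is the factor correlation matrix implied by L₂.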
If stimulus responses are linearly related to squared distances between stimulus scale values and person scores along a latent continuum, (a) the stimulus × stimulus correlation matrix will display a simplex-like pattern, (b) the signs of first-order partial correlations can be specified in an empirically testable manner, and (c) the variables will have a semicircular, two-factor structure. Along the semicircle, variables will be ordered by their positions on the latent dimension. The above results suggest procedures for examining the appropriateness of the model and procedures for ordering the stimuli. Applications to developmental and attitudinal data are discussed.