The Vale–Maurelli (VM) approach to generating non-normal multivariate data involves the use of Fleishman polynomials applied to an underlying Gaussian random vector. This method has been extensively used in Monte Carlo studies during the last three decades to investigate the finite-sample performance of estimators under non-Gaussian conditions. The validity of conclusions drawn from these studies clearly depends on the range of distributions obtainable with the VM method. We deduce the distribution and the copula for a vector generated by a generalized VM transformation, and show that it is fundamentally linked to the underlying Gaussian distribution and copula. In the process we derive the distribution of the Fleishman polynomial in full generality. While data generated with the VM approach appears to be highly non-normal, its truly multivariate properties are close to the Gaussian case. A Monte Carlo study illustrates that generating data with a different copula than that implied by the VM approach severely weakens the performance of normal-theory based ML estimates.
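A minimal sketch of the idea behind the VM approach, not the authors' own implementation: a correlated Gaussian vector is drawn and each margin is passed through a Fleishman polynomial. The coefficients and the intermediate correlation below are illustrative placeholders (in practice the coefficients are solved from target skewness and kurtosis, and the intermediate correlation is adjusted so the transformed variables attain a target correlation).

```python
import numpy as np

def fleishman(z, a, b, c, d):
    """Apply the Fleishman polynomial Y = a + b*Z + c*Z**2 + d*Z**3."""
    return a + b * z + c * z**2 + d * z**3

rng = np.random.default_rng(0)
rho = 0.5                                  # intermediate (Gaussian) correlation, placeholder
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)

# Illustrative coefficients for a skewed, heavy-tailed marginal; a = -c keeps the mean at zero.
b, c, d = 0.90, 0.20, 0.03
x = fleishman(z, -c, b, c, d)              # margins look non-normal, but the copula
                                           # remains essentially that of the Gaussian vector
```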
Unless data are missing completely at random (MCAR), proper methodology is crucial for the analysis of incomplete data. Consequently, methods for effectively testing the MCAR mechanism become important, and procedures were developed via testing the homogeneity of means and variances–covariances across the observed patterns (e.g., Kim & Bentler in Psychometrika 67:609–624, 2002; Little in J Am Stat Assoc 83:1198–1202, 1988). The current article shows that the population counterparts of the sample means and covariances of a given pattern of the observed data depend on the underlying structure that generates the data, and the normal-distribution-based maximum likelihood estimates for different patterns of the observed sample can converge to the same values even when data are missing at random or missing not at random, although the values may not equal those of the underlying population distribution. The results imply that statistics developed for testing the homogeneity of means and covariances cannot be safely used for testing the MCAR mechanism even when the population distribution is multivariate normal.
The paper clarifies the relationship among several information matrices for the maximum likelihood estimates (MLEs) of item parameters. It shows that the process of calculating the observed information matrix also generates a related matrix that is the middle piece of a sandwich-type covariance matrix. Monte Carlo results indicate that standard errors (SEs) based on the observed information matrix are robust to many, but not all, conditions of model/distribution misspecification. SEs based on the sandwich-type covariance matrix perform most consistently across conditions. Results also suggest that SEs based on other matrices are either inconsistent or not as robust as those based on the sandwich-type covariance matrix or the observed information matrix.
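A hedged sketch of the sandwich construction the abstract refers to, illustrated with a simple Poisson-rate model rather than the paper's item-parameter setting: H is the observed information and B (the "middle piece") is the sum of outer products of per-observation scores, giving the covariance estimate H⁻¹ B H⁻¹.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(3.0, size=500)

lam_hat = y.mean()                 # Poisson MLE of the rate
scores = y / lam_hat - 1.0         # per-observation score d/dlam log f(y_i; lam)
H = np.sum(y) / lam_hat**2         # observed information (scalar in this toy model)
B = np.sum(scores**2)              # outer-product "middle piece"

var_sandwich = B / H**2            # H^{-1} B H^{-1}
se_observed = 1.0 / np.sqrt(H)     # SE from the observed information alone
se_sandwich = np.sqrt(var_sandwich)
```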
Efron's Monte Carlo bootstrap algorithm is shown to cause degeneracies in Pearson's r for sufficiently small samples. Two ways of preventing this problem when programming the bootstrap of r are considered.
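A minimal sketch of the degeneracy and one common guard, not necessarily one of the two remedies the paper considers: in a very small sample, a bootstrap resample can contain a (near-)constant variable, making Pearson's r a 0/0 expression; the loop below simply redraws such resamples.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 1.9, 3.4, 3.8])
n = len(x)

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

boot_r = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample pairs with replacement
    while np.ptp(x[idx]) == 0 or np.ptp(y[idx]) == 0:
        idx = rng.integers(0, n, size=n)      # redraw degenerate resamples (constant x or y)
    boot_r.append(pearson_r(x[idx], y[idx]))
```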
We describe methods for assessing all possible criteria (i.e., dependent variables) and subsets of criteria for regression models with a fixed set of predictors, x (where x is an n×1 vector of independent variables). Our methods build upon the geometry of regression coefficients (hereafter called regression weights) in n-dimensional space. For a full-rank predictor correlation matrix, Rxx, of order n, and for regression models with constant R2 (coefficient of determination), the OLS weight vectors for all possible criteria terminate on the surface of an n-dimensional ellipsoid. The population performance of alternate regression weights—such as equal weights, correlation weights, or rounded weights—can be modeled as a function of the Cartesian coordinates of the ellipsoid. These geometrical notions can be easily extended to assess the sampling performance of alternate regression weights in models with either fixed or random predictors and for models with any value of R2. To illustrate these ideas, we describe algorithms and R (R Development Core Team, 2009) code for: (1) generating points that are uniformly distributed on the surface of an n-dimensional ellipsoid, (2) populating the set of regression (weight) vectors that define an elliptical arc in ℝn, and (3) populating the set of regression vectors that have constant cosine with a target vector in ℝn. Each algorithm is illustrated with real data. The examples demonstrate the usefulness of studying all possible criteria when evaluating alternate regression weights in regression models with a fixed set of predictors.
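The paper supplies R code; the following is only an illustrative Python sketch, under simplifying assumptions, of one way to obtain weight vectors that all yield the same population R² for a fixed predictor correlation matrix, i.e., points on the ellipsoid b′Rxx b = R². Uniform directions on the unit sphere are mapped through the inverse Cholesky factor of Rxx; the result is uniform in the sphere parameterization, which need not coincide with the paper's surface-uniform algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

Rxx = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.4],
                [0.2, 0.4, 1.0]])   # illustrative predictor correlation matrix
R2 = 0.5
L = np.linalg.cholesky(Rxx)         # Rxx = L L'

u = rng.standard_normal((1000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)    # uniform directions on the unit sphere

B = np.sqrt(R2) * np.linalg.solve(L.T, u.T).T    # b = sqrt(R2) * L^{-T} u

# every row satisfies b' Rxx b = R2
assert np.allclose(np.einsum('ij,jk,ik->i', B, Rxx, B), R2)
```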
In the framework of a robustness study on maximum likelihood estimation with LISREL, three types of problems are dealt with: nonconvergence, improper solutions, and the choice of starting values. The purpose of the paper is to illustrate why and to what extent these problems are of importance for users of LISREL. The ways in which these issues may affect the design and conclusions of robustness research are also discussed.
A Monte Carlo study assessed the effect of sampling error and model characteristics on the occurrence of nonconvergent solutions, improper solutions and the distribution of goodness-of-fit indices in maximum likelihood confirmatory factor analysis. Nonconvergent and improper solutions occurred more frequently for smaller sample sizes and for models with fewer indicators of each factor. Effects of practical significance due to sample size, the number of indicators per factor and the number of factors were found for GFI, AGFI, and RMR, whereas no practical effects were found for the probability values associated with the chi-square likelihood ratio test.
An examination is made concerning the utility and design of studies comparing nonmetric scaling algorithms and their initial configurations, as well as the agreement between the results of such studies. Various practical details of nonmetric scaling are also considered.
The paper obtains consistent standard errors (SEs) and biases of order O(1/n) for the sample standardized regression coefficients with both random and given predictors. Analytical results indicate that the formulas for SEs given in popular textbooks are consistent only when the population value of the regression coefficient is zero. The sample standardized regression coefficients are also biased in general, although this should not be a concern in practice when the sample size is not too small. Monte Carlo results imply that, for both standardized and unstandardized sample regression coefficients, SE estimates based on asymptotics tend to under-predict the empirical ones at smaller sample sizes.
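A small Monte Carlo sketch of the comparison the abstract describes, under assumed simple-regression settings (parameter values are illustrative): the empirical standard deviation of the sample standardized coefficient is compared with the "textbook" SE obtained by rescaling the unstandardized SE by s_x/s_y, the formula the paper shows is consistent only when the population coefficient is zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps, beta = 30, 5000, 0.6

std_coefs, textbook_ses = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_normal(n)
    xc, yc = x - x.mean(), y - y.mean()
    sxx = np.sum(xc**2)
    b = np.sum(xc * yc) / sxx                         # OLS slope
    resid = yc - b * xc
    se_b = np.sqrt(resid @ resid / (n - 2) / sxx)     # unstandardized SE
    std_coefs.append(b * x.std(ddof=1) / y.std(ddof=1))       # standardized slope
    textbook_ses.append(se_b * x.std(ddof=1) / y.std(ddof=1)) # "textbook" rescaled SE

print("empirical SD of standardized coefficient:", np.std(std_coefs, ddof=1))
print("mean textbook SE:                        ", np.mean(textbook_ses))
```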
Multidimensional scaling has recently been enhanced so that data defined at only the nominal level of measurement can be analyzed. The efficacy of ALSCAL, an individual differences multidimensional scaling program which can analyze data defined at the nominal, ordinal, interval and ratio levels of measurement, is the subject of this paper. A Monte Carlo study is presented which indicates that (a) if we know the correct level of measurement then ALSCAL can be used to recover the metric information presumed to underlie the data; and that (b) if we do not know the correct level of measurement then ALSCAL can be used to determine the correct level and to recover the underlying metric structure. This study also indicates, however, that with nominal data ALSCAL is quite likely to obtain solutions which are not globally optimal, and that in these cases the recovery of metric structure is quite poor. A second study is presented which isolates the potential cause of these problems and forms the basis for a suggested modification of the ALSCAL algorithm which should reduce the frequency of locally optimal solutions.
Asymptotic distributions of the estimators of communalities are derived for the maximum likelihood method in factor analysis. It is shown that the common practice of equating the asymptotic standard error of the communality estimate to that of the unique variance estimate is correct for the standardized communality but not for the unstandardized communality. In a Monte Carlo simulation, the accuracy of the normal approximation to the distributions of the estimators is assessed when the sample size is 150 or 300.
We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump Lévy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance, and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving Lévy process, including the generalised hyperbolic, normal-gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure Lévy process, and to a Brownian-driven SDE in the case of the Lévy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for Lévy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo, expectation-maximisation, and sequential Monte Carlo.
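A hedged sketch of the NVM mixture structure mentioned above, for the normal-gamma (variance-gamma) case only and not the paper's generalised shot-noise construction: conditional on the gamma subordinator increment W, the process increment is Gaussian with mean μ_W·W and variance σ²·W. Parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def normal_gamma_increment(dt, mu_w, sigma, nu, size):
    """Increments of a variance-gamma process over a time step dt (toy parameterisation)."""
    w = rng.gamma(shape=dt / nu, scale=nu, size=size)   # gamma subordinator increment, E[w] = dt
    z = rng.standard_normal(size)
    return mu_w * w + sigma * np.sqrt(w) * z            # normal variance-mean mixture

increments = normal_gamma_increment(dt=0.01, mu_w=0.1, sigma=0.3, nu=0.2, size=10_000)
path = np.cumsum(increments)                            # a sample-path skeleton on a fixed grid
```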
The miniaturized conical cones used for stereotactic radiosurgery (SRS) make it challenging to measure the dosimetric data needed for commissioning a treatment planning system. This study aims to validate the dosimetric characteristics of the conical cone collimator manufactured by Varian using the Monte Carlo (MC) simulation technique.
Methods & Materials:
Percentage depth dose (PDD), tissue maximum ratio (TMR), lateral dose profile (LDP) and output factor (OF) were measured for cones with diameters of 5 mm, 7·5 mm, 10 mm, 12·5 mm, 15 mm and 17·5 mm using an EDGE detector for a 6 MV flattening filter-free (FFF) beam from a TrueBeam linac. Similarly, MC modelling of the linac for the 6 MV FFF beam and simulation of the conical cones were performed in PRIMO. Subsequently, the measured beam data were validated by comparing them with the results obtained from the MC simulation.
Results:
The measured and MC-simulated PDDs and TMRs showed close agreement, within 3%, except for the 5 mm diameter cone, for which deviations were substantially higher. With the 5 mm cone, the maximum deviations at depths of 10 cm and 20 cm and at the range of the 50% dose were 4·05%, 7·52% and 5·52% for PDD and 4·04%, 7·03% and 5·23% for TMR, respectively. The measured LDPs for all cones showed close agreement with the MC LDPs except in the penumbra region around the 80% and 20% dose levels. Measured and MC full-widths at half maximum of the dose profiles agreed with the nominal cone sizes within ±0·2 mm. Measured and MC OFs showed excellent agreement for cone sizes ≥10 mm; however, the deviation increased consistently as the cone size decreased.
Findings:
An MC model of conical cones for SRS has been presented and validated. Very good agreement was found between the experimentally measured and MC-simulated data. The dosimetry dataset obtained in this study and validated using the MC model may be used to benchmark beam data measured for commissioning of SRS cone planning.
This paper presents a set of theoretical models that links a two-phase sequence of cooperative political integration and conflict to explore the reciprocal relationship between war and state formation. It compares equilibrium rates of state formation and conflict using a Monte Carlo simulation that generates comparative statics by altering the systemic distribution of ideology, population, tax rates, and war costs across polities. This approach supports three core findings. First, war-induced political integration is at least 2.5 times as likely to occur as integration to realize economic gains. Second, we identify mechanisms linking endogenous organizations to the likelihood of conflict in the system. For example, a greater domestic willingness to support public goods production facilitates the creation of buffer states that reduce the likelihood of a unique class of trilateral wars. These results suggest that the development of the modern administrative state has helped to foster peace. Third, we explore how modelling assumptions setting the number of actors in a strategic context can shape conclusions about war and state formation. We find that dyadic modelling restrictions tend to underestimate the likelihood of cooperative political integration and overestimate the likelihood of war relative to a triadic modelling context.
This chapter elaborates on the calibration and validation procedures for the model. First, we describe our calibration strategy in which a customised optimisation algorithm makes use of a multi-objective function, preventing the loss of indicator-specific error information. Second, we externally validate our model by replicating two well-known statistical patterns: (1) the skewed distribution of budgetary changes and (2) the negative relationship between development and corruption. Third, we internally validate the model by showing that public servants who receive more positive spillovers tend to be less efficient. Fourth, we analyse the statistical behaviour of the model through different tests: validity of synthetic counterfactuals, parameter recovery, overfitting, and time equivalence. Finally, we make a brief reference to the literature on estimating SDG networks.
We report a combined experimental and theoretical study of uranyl complexes that form on the interlayer siloxane surfaces of montmorillonite. We also consider the effect of isomorphic substitution on surface complexation since our montmorillonite sample contains charge sites in both the octahedral and tetrahedral sheets. Results are given for the two-layer hydrate with a layer spacing of 14.58 Å. Polarization-dependent X-ray absorption fine structure spectra are nearly invariant with the incident angle, indicating that the uranyl ions are oriented neither perpendicular nor parallel to the basal plane of montmorillonite. The equilibrated geometry from Monte Carlo simulations suggests that uranyl ions form outer-sphere surface complexes with the [O=U=O]2+ axis tilted at an angle of ~45° to the surface normal.
We performed Monte Carlo and molecular dynamics simulations to investigate the interlayer structure of a uranyl-substituted smectite clay. Our clay model is a dioctahedral montmorillonite with negative charge sites in the octahedral sheet only. We simulated a wide range of interlayer water content (0–260 mg H2O/g clay), but we were particularly interested in the two-layer hydrate that has been the focus of recent X-ray absorption experiments. Our simulation results for the two-layer hydrate of uranyl-montmorillonite yield a water content of 160 mg H2O/g clay and a layer spacing of 14.66 Å. Except at extremely low water content, uranyl cations are oriented nearly parallel to the surface normal in an outer-sphere complex. The first coordination shell consists of five water molecules with an average U-O distance of 2.45 Å, in good agreement with experimental data. At low water content, the cations can assume a perpendicular orientation to include surface oxygen atoms in the first coordination shell. Our molecular dynamics results show that complexes translate within the clay pore through a jump diffusion process, and that first-shell water molecules are exchangeable and interchangeable.
This work presents Atomistic Topology Operations in MATLAB (atom), an open source library of modular MATLAB routines which comprise a general and flexible framework for manipulation of atomistic systems. The purpose of the atom library is simply to facilitate common operations performed for construction, manipulation, or structural analysis. Due to the data structure used, atoms and molecules can be operated upon based on different chemical names or attributes, such as atom- or molecule-ID, name, residue name, charge, positions, etc. Furthermore, the Bond Valence Method and a neighbor-distance analysis can be performed to assign many chemical properties of inorganic molecules. Apart from reading and writing common coordinate files (.pdb, .xyz, .gro, .cif) and trajectories (.dcd, .trr, .xtc; binary formats are parsed via third-party packages), the atom library can also be used to generate topology files with bonding and angle information taking the periodic boundary conditions into account, and supports basic Gromacs, NAMD, LAMMPS, and RASPA2 topology file formats. Focusing on clay-mineral systems, the library supports CLAYFF (Cygan, 2004) but can also generate topology files for the INTERFACE forcefield (Heinz, 2005, 2013) for Gromacs and NAMD.
Advanced treatment modalities involve small fields that may be shaped by collimators or circular cones. In these techniques, high-energy photons produce unwanted neutrons, so it is necessary to know the neutron parameters associated with them.
Materials and methods:
Different parts of a Varian linac were simulated with MCNPX, and different neutron parameters were calculated. The results were then compared with photoneutron production in the same nominal fields created by circular cones.
Results:
The maximum neutron fluence for the 1 × 1, 2 × 2 and 3 × 3 cm² field sizes was 165, 40.4 and 19.78 × 10⁶ cm⁻²·Gy⁻¹, respectively. The maximum neutron equivalent doses were 17.1, 4.65 and 2.44 mSv per Gy of photon dose, and the maximum neutron absorbed doses were 903, 253 and 131 µGy per Gy of photon dose, for the 1 × 1, 2 × 2 and 3 × 3 cm² field sizes, respectively.
Conclusion:
Comparing the results with those in the presence of circular cones showed that circular cones reduce photoneutron production for the same nominal field sizes.
In this chapter, we overview recent developments of a simulation framework capable of capturing the highly nonequilibrium physics of the strongly coupled electron and phonon systems in quantum cascade lasers (QCLs). In midinfrared (mid-IR) devices, both the electronic and optical phonon systems are largely semiclassical and described by coupled Boltzmann transport equations, which we solve using an efficient stochastic technique known as ensemble Monte Carlo. The optical phonon system is strongly coupled to acoustic phonons, the dominant carriers of heat, whose dynamics and thermal transport throughout the whole device are described via a global heat-diffusion solver. We discuss the roles of nonequilibrium optical phonons in QCLs at the level of a single stage and the anisotropic thermal transport of acoustic phonons in QCLs, outline the algorithm for multiscale electrothermal simulation, and present data for a mid-IR QCL based on this framework.
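A toy sketch of the ensemble Monte Carlo idea mentioned above, not the chapter's solver: an ensemble of particles undergoes free flight in a constant field, interrupted by scattering events drawn from an exponential distribution with a single, energy-independent rate that randomizes the velocity. A real QCL simulator tracks subbands, phonon emission/absorption and Pauli blocking; every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

q_m = 1.76e11          # charge-to-mass ratio (C/kg), roughly e/m0
E_field = 1.0e5        # electric field (V/m), illustrative
gamma0 = 1.0e13        # total scattering rate (1/s), illustrative
dt = 1.0e-15           # observation time step (s)
n_steps, n_particles = 2000, 5000

v = np.zeros(n_particles)                                # particle velocities
t_scatter = rng.exponential(1.0 / gamma0, n_particles)   # time to next scattering event

for _ in range(n_steps):
    v += q_m * E_field * dt                              # free flight in the field
    t_scatter -= dt
    hit = t_scatter <= 0.0
    v[hit] = rng.standard_normal(hit.sum()) * 1.0e5      # momentum-randomizing scattering
    t_scatter[hit] = rng.exponential(1.0 / gamma0, hit.sum())

print("drift velocity estimate:", v.mean(), "m/s")
```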