To take on arbitrary geometries, shape-changing arrays must introduce gaps between their elements. To enhance performance, this unused area can be filled with metamaterial-inspired switched passive networks on flexible sheets that compensate for the effects of increased spacing. These flexible meta-gaps can easily fold and deploy when the array changes shape. This work investigates the promise of meta-gaps through the measurement of a 5-by-5 λ-spaced array with 40 meta-gap sheets and 960 switches. The optimization and measurement problems associated with such a high-dimensional phased array are discussed. Simulated and in-situ optimization experiments are conducted to examine the differential performance of metaheuristic algorithms and to characterize the underlying optimization problem. Measurement results demonstrate that, in our implementation, meta-gaps increase the average main-beam power within the field of view (FoV) by 0.46 dB, suppress the average side-lobe level within the FoV by 2 dB, and widen the FoV by 23.5° compared with a ground-plane-backed array.
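As a rough illustration of what in-situ optimization over such a switch state space can look like, the sketch below runs a basic simulated annealing loop over a binary switch vector, treating the measured figure of merit as a black box. The switch count, cooling schedule, and the `measure` callback are placeholders rather than details of the implementation described above.

```python
import math
import random

def simulated_annealing_switches(measure, n_switches=960, n_iters=5000,
                                 t0=1.0, alpha=0.999, seed=0):
    """Search a binary switch configuration with Metropolis acceptance.

    `measure(state)` is assumed to return a figure of merit to maximise
    (e.g. measured main-beam power at the current steering angle).
    """
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_switches)]
    best, best_score = state[:], measure(state)
    score, t = best_score, t0
    for _ in range(n_iters):
        k = rng.randrange(n_switches)          # flip one switch at a time
        state[k] ^= 1
        new_score = measure(state)
        if new_score >= score or rng.random() < math.exp((new_score - score) / t):
            score = new_score                  # accept the move
            if score > best_score:
                best, best_score = state[:], score
        else:
            state[k] ^= 1                      # reject: undo the flip
        t *= alpha                             # geometric cooling
    return best, best_score
```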
In many regression applications, users are often faced with difficulties due to nonlinear relationships, heterogeneous subjects, or time series which are best represented by splines. In such applications, two or more regression functions are often necessary to best summarize the underlying structure of the data. Unfortunately, in most cases, it is not known a priori which subset of observations should be approximated with which specific regression function. This paper presents a methodology which simultaneously clusters observations into a preset number of groups and estimates the corresponding regression functions' coefficients, all to optimize a common objective function. We describe the problem and discuss related procedures. A new simulated annealing-based methodology is described as well as program options to accommodate overlapping or nonoverlapping clustering, replications per subject, univariate or multivariate dependent variables, and constraints imposed on cluster membership. Extensive Monte Carlo analyses are reported which investigate the overall performance of the methodology. A consumer psychology application is provided concerning a conjoint analysis investigation of consumer satisfaction determinants. Finally, other applications and extensions of the methodology are discussed.
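A minimal sketch of the core idea, simultaneously searching cluster memberships and per-cluster regression fits under one objective, is given below. It assumes nonoverlapping clusters, a univariate dependent variable, and ordinary least squares within each cluster; the published methodology's further options (overlap, replications, constraints) are not reproduced.

```python
import math
import numpy as np

def clusterwise_regression_sa(X, y, k, n_iters=20000, t0=1.0, alpha=0.9995, seed=0):
    """SA sketch of clusterwise regression: jointly search cluster labels and
    per-cluster OLS fits to minimise the total sum of squared errors.
    X is the design matrix (include an intercept column if desired)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    labels = rng.integers(0, k, size=n)

    def total_sse(lab):
        sse = 0.0
        for c in range(k):
            idx = np.where(lab == c)[0]
            if idx.size == 0:
                continue
            beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            resid = y[idx] - X[idx] @ beta
            sse += float(resid @ resid)
        return sse

    cost, t = total_sse(labels), t0
    best_labels, best_cost = labels.copy(), cost
    for _ in range(n_iters):
        i, new_c = rng.integers(n), rng.integers(k)   # reassign one observation
        if new_c == labels[i]:
            continue
        trial = labels.copy()
        trial[i] = new_c
        new_cost = total_sse(trial)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            labels, cost = trial, new_cost
            if cost < best_cost:
                best_labels, best_cost = labels.copy(), cost
        t *= alpha
    return best_labels, best_cost
```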
In psychological research, one often aims at explaining individual differences in S-R profiles, that is, individual differences in the responses (R) with which people react to specific stimuli (S). To this end, researchers often postulate an underlying sequential process, which boils down to the specification of a set of mediating variables (M) and the processes that link these mediating variables to the stimuli and responses under study. Obviously, a crucial task is to chart how the individual differences in the S-R profiles are caused by individual differences in the S-M link and/or by individual differences in the M-R link. In this paper we propose a new model, called CLASSI, which was explicitly designed for this task. In particular, the key principle of CLASSI consists of reducing the S, M, and R nodes of a sequential process to a few mutually exclusive types and inducing an S-M and an M-R person typology from the data, with the S-M person types being characterized in terms of if S type then M type rules and the M-R person types in terms of if M type then R type rules. As such, the S-M and M-R person types and their associated if–then rules represent the important individual differences in the S-M and M-R links of the sequential process under study. An algorithm to fit the CLASSI model is described and evaluated in a simulation study. An application of CLASSI to data from the behavioral domain of anger and sadness is discussed. Finally, we relate CLASSI to other methods and discuss possible extensions.
Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite well in recovering the underlying truth but frequently end in a local minimum. In this paper we evaluate whether or not this local minimum problem can be mitigated by means of two common strategies for avoiding local minima in combinatorial data analysis: simulated annealing (SA) and use of a multistart procedure. In particular, we propose a generic SA algorithm for hierarchical classes analysis and three different types of random starts. The effectiveness of the SA algorithm and the random starts is evaluated by reanalyzing data sets of previous simulation studies. The reported results support the use of the proposed SA algorithm in combination with a random multistart procedure, regardless of the properties of the data set under study.
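The recommendation to pair simulated annealing with a random multistart can be illustrated generically, as in the sketch below; the loss, neighbourhood move, and random-start generator are placeholders rather than the hierarchical classes loss itself.

```python
import math
import random

def sa_minimise(loss, neighbour, init_state, n_iters=2000, t0=1.0, alpha=0.995, rng=None):
    """Generic SA minimiser: `loss` scores a state, `neighbour` proposes a nearby one."""
    rng = rng or random.Random()
    state, cost, t = init_state, loss(init_state), t0
    best, best_cost = state, cost
    for _ in range(n_iters):
        cand = neighbour(state, rng)
        c = loss(cand)
        if c <= cost or rng.random() < math.exp((cost - c) / t):
            state, cost = cand, c
            if cost < best_cost:
                best, best_cost = state, cost
        t *= alpha
    return best, best_cost

def multistart_sa(loss, neighbour, random_start, n_starts=25, **sa_kwargs):
    """Run SA from several random starts and keep the best solution found."""
    results = []
    for s in range(n_starts):
        rng = random.Random(s)
        results.append(sa_minimise(loss, neighbour, random_start(rng), rng=rng, **sa_kwargs))
    return min(results, key=lambda r: r[1])
```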
In this paper, we propose a cluster-MDS model for two-way one-mode continuous rating dissimilarity data. The model aims at partitioning the objects into classes and simultaneously representing the cluster centers in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of dissimilarities in a maximum likelihood framework. In each iteration, the probability that a dissimilarity belongs to each of the blocks that form a partition of the original dissimilarity matrix, together with the remaining parameters, is estimated in a simulated annealing-based algorithm. A model selection strategy is used to test the number of latent classes and the dimensionality of the problem. Both simulated and classical dissimilarity data are analyzed to illustrate the model.
In this paper, the notion of Markov move from algebraic statistics is used to analyze the weighted kappa indices in rater agreement problems. In particular, the problem of the maximum kappa and its dependence on the choice of the weighting schemes are discussed. The Markov moves are also used in a simulated annealing algorithm to actually find the configuration of maximum agreement.
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of “exemplars” as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed a new simulated annealing heuristic for the p-median problem and completed a thorough investigation of its computational performance. The salient findings from our experiments are that our new method substantially outperforms a previous implementation of simulated annealing and is competitive with the most effective metaheuristics for the p-median problem.
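For concreteness, the p-median objective and a simple annealing scheme over swap moves (exchange one chosen exemplar for a non-chosen object) can be sketched as follows; this is an illustrative baseline, not the heuristic evaluated in the study. `dist` is assumed to be a full symmetric dissimilarity matrix.

```python
import math
import random

def p_median_cost(dist, medians):
    """Sum over objects of the distance to the nearest chosen median (exemplar)."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def p_median_sa(dist, p, n_iters=20000, t0=1.0, alpha=0.9995, seed=0):
    """Minimal SA sketch for the p-median problem using swap moves."""
    rng = random.Random(seed)
    n = len(dist)
    medians = set(rng.sample(range(n), p))
    cost, t = p_median_cost(dist, medians), t0
    best, best_cost = set(medians), cost
    for _ in range(n_iters):
        out = rng.choice(sorted(medians))                            # exemplar to drop
        inn = rng.choice([i for i in range(n) if i not in medians])  # object to add
        trial = (medians - {out}) | {inn}
        c = p_median_cost(dist, trial)
        if c <= cost or rng.random() < math.exp((cost - c) / t):
            medians, cost = trial, c
            if cost < best_cost:
                best, best_cost = set(medians), cost
        t *= alpha
    return best, best_cost
```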
The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal or ordinal attributes. In such instances, the CPP incorporates edge costs obtained from an aggregation of binary equivalence relations among the attributes. We review existing theory and methods for the CPP and propose two versions of a new neighborhood search algorithm for efficient solution. The first version (NS-R) uses a relocation algorithm in the search for improved solutions, whereas the second (NS-TS) uses an embedded tabu search routine. The new algorithms are compared to simulated annealing (SA) and tabu search (TS) algorithms from the CPP literature. Although the heuristics yielded comparable results for some test problems, the neighborhood search algorithms generally yielded the best performances for large and difficult instances of the CPP.
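The relocation idea behind the first variant can be sketched as a greedy pass that moves single vertices between clusters (or into a new singleton) whenever the within-cluster edge cost decreases; the tabu-search embedding of the second variant is omitted, and the code below is only a schematic reading of the move type, not the authors' implementation.

```python
def relocation_search(cost, labels):
    """Greedy relocation pass for clique partitioning: repeatedly move single
    vertices between clusters while the total within-cluster edge cost decreases.
    `cost[i][j]` is the (possibly negative) cost of placing i and j together;
    `labels` holds initial integer cluster labels (all singletons is one neutral start)."""
    n = len(cost)
    labels = list(labels)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Cost that v would contribute to each cluster it could join.
            contrib = {}
            for u in range(n):
                if u != v:
                    contrib[labels[u]] = contrib.get(labels[u], 0.0) + cost[v][u]
            current = contrib.get(labels[v], 0.0)
            # Moving v to a brand-new singleton cluster contributes 0, i.e. delta = -current.
            best_cluster, best_delta = max(labels) + 1, -current
            for c, s in contrib.items():
                if c != labels[v] and s - current < best_delta:
                    best_cluster, best_delta = c, s - current
            if best_delta < -1e-12:
                labels[v] = best_cluster
                improved = True
    return labels
```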
How can groups best coordinate to solve problems? The answer touches on cultural innovation, including the trajectory of science, technology, and art. If everyone acts independently, different people will explore different solutions, but there is no way to leverage good solutions across the community. If everyone acts in concert, early successes can lead the group down dead ends and stifle exploration. The challenge is one of maintaining innovation while also communicating effective solutions once they are found. When solution spaces are smooth – that is, easy – communication is good. But when solution spaces are rugged – that is, hard – the balance should tilt toward exploration. How can we best achieve this? One answer is to place people in social structures that reduce communication but maintain connectivity. But there are other solutions that might work better. Algorithms like simulated annealing are designed to deal with such problems by adjusting collective focus over time, allowing systems to “cool off” slowly as they home in on solutions. Network science allows us to explore the performance of such solutions on smooth and rugged landscapes, and provides numerous avenues for innovation of its own.
Chapter 5 is dedicated to the most important part of predictive modeling for biomarker discovery based on high-dimensional data – multivariate feature selection. When dealing with sparse biomedical data whose dimensionality is much higher than the number of training observations, the crucial issue is to overcome the curse of dimensionality by using methods capable of separating the signal (predictive information) from the overwhelming noise. One way of doing this is to perform many (hundreds or thousands of) parallel feature selection experiments based on different random subsamples of the original training data and then aggregate their results (for example, by analyzing the distribution of variables among the results of those parallel experiments). Two designs of such parallel feature selection experiments are discussed in detail: one based on recursive feature elimination, and the other on implementing the stepwise hybrid selection with T2. The chapter also includes descriptions of three evolutionary feature selection algorithms: simulated annealing, genetic algorithms, and particle swarm optimization.
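A minimal sketch of the parallel-subsampling design follows, using scikit-learn's recursive feature elimination with a logistic-regression base learner as a stand-in selector and simply counting how often each variable survives across runs; the T2-based stepwise hybrid selection is not reproduced, and all parameter values are placeholders.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def selection_frequencies(X, y, n_runs=200, subsample=0.5, n_keep=10, seed=0):
    """Run many parallel feature-selection experiments on random subsamples of the
    training data and aggregate how often each variable is selected.
    Assumes X, y are NumPy arrays and each subsample still contains both classes."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=n_keep)
        rfe.fit(X[idx], y[idx])
        counts += rfe.support_.astype(float)
    return counts / n_runs   # selection frequency of each variable across runs
```

Variables with consistently high selection frequencies across the parallel runs are the natural candidates to carry forward, which is the aggregation step the chapter describes.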
The increase in Electrical and Electronic Equipment (EEE) usage in various sectors has given rise to repair and maintenance units. Disassembly of parts requires proper planning, which is handled by the Disassembly Sequence Planning (DSP) process; because manual disassembly is subject to various time and labor restrictions, such planning is essential. Effective disassembly planning methods can encourage the reuse and recycling sector, reducing the mining of raw materials. An efficient DSP can lower time and cost consumption. To address the challenges in DSP, this research introduces an innovative framework based on Q-Learning (QL) within the domain of Reinforcement Learning (RL). Furthermore, an Enhanced Simulated Annealing (ESA) algorithm is introduced to improve the exploration-exploitation balance in the proposed RL framework. The proposed framework is extensively evaluated against state-of-the-art frameworks and benchmark algorithms using a diverse set of eight products as test cases. The findings reveal that the proposed framework outperforms benchmark algorithms and state-of-the-art frameworks in terms of time consumption, memory consumption, and solution optimality. Specifically, for complex large products, the proposed technique achieves a remarkable minimum reduction of 60% in time consumption and 30% in memory usage compared to other state-of-the-art techniques. Additionally, qualitative analysis demonstrates that the proposed approach generates sequences with high fitness values, indicating more stable and less time-consuming disassemblies. The utilization of this framework allows for the realization of various real-world disassembly applications, thereby making a significant contribution to sustainable practices in EEE industries.
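The general idea of tempering exploration in Q-learning with an annealing-style schedule can be illustrated as below, where a Boltzmann (softmax) action choice cools over episodes. This is a generic sketch under an assumed minimal environment interface (hypothetical `reset`, `actions`, and `step` methods), not the ESA mechanism proposed in the paper.

```python
import math
import random
from collections import defaultdict

def boltzmann_action(q_values, actions, temperature, rng):
    """Softmax (Boltzmann) exploration: high temperature explores, low temperature exploits."""
    t = max(temperature, 1e-6)
    m = max(q_values[a] for a in actions)
    weights = [math.exp((q_values[a] - m) / t) for a in actions]
    r, acc = rng.random() * sum(weights), 0.0
    for a, w in zip(actions, weights):
        acc += w
        if r <= acc:
            return a
    return actions[-1]

def q_learning_annealed(env, episodes=500, alpha=0.1, gamma=0.95,
                        t0=1.0, cooling=0.99, seed=0):
    """Tabular Q-learning with an annealed exploration temperature.
    `env` is assumed to provide reset() -> state, actions(state) -> list of actions,
    and step(state, action) -> (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = defaultdict(float)               # Q[(state, action)]
    temperature = t0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            a = boltzmann_action({x: Q[(s, x)] for x in acts}, acts, temperature, rng)
            s2, r, done = env.step(s, a)
            target = r if done else r + gamma * max(Q[(s2, x)] for x in env.actions(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
        temperature *= cooling           # annealing-style cooling of exploration
    return Q
```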
In this paper, we propose new Metropolis–Hastings and simulated annealing algorithms on a finite state space via modifying the energy landscape. The core idea of landscape modification rests on introducing a parameter c, such that the landscape is modified once the algorithm is above this threshold parameter to encourage exploration, while the original landscape is utilized when the algorithm is below the threshold for exploitation purposes. We illustrate the power and benefits of landscape modification by investigating its effect on the classical Curie–Weiss model with Glauber dynamics and external magnetic field in the subcritical regime. This leads to a landscape-modified mean-field equation, and with appropriate choice of c the free energy landscape can be transformed from a double-well into a single-well landscape, while the location of the global minimum is preserved on the modified landscape. Consequently, running algorithms on the modified landscape can improve the convergence to the ground state in the Curie–Weiss model. In the setting of simulated annealing, we demonstrate that landscape modification can yield improved or even subexponential mean tunnelling time between global minima in the low-temperature regime by appropriate choice of c, and we give a convergence guarantee using an improved logarithmic cooling schedule with reduced critical height. We also discuss connections between landscape modification and other acceleration techniques, such as Catoni’s energy transformation algorithm, preconditioning, importance sampling, and quantum annealing. The technique developed in this paper is not limited to simulated annealing, but is broadly applicable to any difference-based discrete optimization algorithm by a change of landscape.
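A toy version of the idea, running the Metropolis chain on an energy that is left untouched below the threshold c and flattened above it, might look like the following; the logarithmic compression above c is an illustrative choice rather than the transformation used in the paper.

```python
import math
import random

def modified_energy(E, c):
    """Illustrative landscape modification: identity below the threshold c,
    logarithmic compression above it to lower barriers and encourage exploration."""
    return E if E <= c else c + math.log1p(E - c)

def metropolis_modified(energy, neighbour, x0, c, beta=1.0, n_iters=10000, seed=0):
    """Metropolis chain run on the modified landscape instead of the original one."""
    rng = random.Random(seed)
    x, fx = x0, modified_energy(energy(x0), c)
    for _ in range(n_iters):
        y = neighbour(x, rng)
        fy = modified_energy(energy(y), c)
        if fy <= fx or rng.random() < math.exp(-beta * (fy - fx)):
            x, fx = y, fy
    return x
```

Because the transform is increasing and equals the identity below c, it preserves the ordering of low-lying states (and hence the location of the global minimum) while lowering the barriers that sit above the threshold, which is the intuition behind exploration above c and exploitation below it.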
This article examines large-time behaviour of finite-state mean-field interacting particle systems. Our first main result is a sharp estimate (in the exponential scale) of the time required for convergence of the empirical measure process of the N-particle system to its invariant measure; we show that when time is of the order $\exp\{N\Lambda\}$ for a suitable constant $\Lambda > 0$, the process has mixed well and is close to its invariant measure. We then obtain large-N asymptotics of the second-largest eigenvalue of the generator associated with the empirical measure process when it is reversible with respect to its invariant measure. We show that its absolute value scales as $\exp\{-N\Lambda\}$. The main tools used in establishing our results are the large deviation properties of the empirical measure process from its large-N limit. As an application of the study of large-time behaviour, we also show convergence of the empirical measure of the system of particles to a global minimum of a certain ‘entropy’ function when particles are added over time in a controlled fashion. The controlled addition of particles is analogous to the cooling schedule associated with the search for a global minimum of a function using the simulated annealing algorithm.
Aircraft sequencing and scheduling within terminal airspaces has become more complicated due to increased air traffic demand and airspace complexity. A stochastic mixed-integer linear programming model is proposed to handle aircraft sequencing and scheduling problems using the simulated annealing algorithm. The proposed model allows for proper aircraft sequencing considering wind direction uncertainties, which are critical in the decision-making process. The proposed model aims to minimise total aircraft delay for a runway airport serving mixed operations. To test the stochastic model, an appropriate number of scenarios were generated for different air traffic demand rates. The results indicate that the stochastic model reduces the total aircraft delay considerably when compared with the deterministic approach.
Phenological models for predicting grapevine flowering were tested using phenological data of 15 grape varieties collected between 1990 and 2014 in the Vinhos Verdes and Lisbon Portuguese wine regions. Three models were tested: Spring Warming (Growing Degree Days – GDD model), Spring Warming modified using a triangular function – GDD triangular, and the UniFORC model, which considers an exponential response curve to temperature. Model estimation was performed using data on two grape varieties (Loureiro and Fernão Pires) present in both regions. Three dates were tested for the beginning of heat unit accumulation (t0 date): budburst, 1 January and 1 September. The best overall date was budburst. Furthermore, for each model parameter, an intermediate range of values common to the studied regions was estimated and further optimized to obtain one model that could be used for a diverse range of grape varieties in both wine regions. External validation was performed using an independent data set from 13 grape varieties (seven red and six white), different from the two used in the estimation step. The results showed a high coefficient of determination (R²: 0.59–0.89), low Root Mean Square Error (RMSE: 3–7 days) and low Mean Absolute Deviation (MAD: 2–6 days) between predicted and observed values. Overall, the UniFORC model performed slightly better than the two GDD models, presenting a higher R² (0.75) and lower RMSE (4.55 days) and MAD (3.60 days). The developed phenological models presented good accuracy when applied to several varieties in different regions and can be used as a predictor of flowering date in Portugal.
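The Spring Warming (GDD) component can be stated compactly: starting from the chosen t0 date, daily degree-days above a base temperature are accumulated until a forcing threshold F* is reached, which gives the predicted flowering day. The sketch below uses placeholder values for the base temperature and threshold, which in a study like this would be fitted per model, variety, and region.

```python
def gdd_flowering_day(daily_mean_temps, base_temp=10.0, f_star=1000.0):
    """Classical Spring Warming (GDD) model: accumulate max(T - base, 0) from the
    chosen starting date (t0) and predict flowering on the day the running sum
    first reaches the forcing threshold F*.  base_temp and f_star are placeholders."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(t - base_temp, 0.0)
        if total >= f_star:
            return day
    return None   # threshold never reached within the series
```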
Since the introduction of spatial grammars 45 years ago, numerous grammars have been developed in a variety of fields from architecture to engineering design. Their benefits for solution space exploration when computationally implemented and combined with optimization have been demonstrated. However, there has been limited adoption of spatial grammars in engineering applications for various reasons. One main reason is the lack of an automated, generalized link between the designs generated by the spatial grammar and their evaluation through finite-element analysis (FEA). However, the combination of spatial grammars with optimization and simulation has the advantage over continuous structural topology optimization in that explicit constraints, for example, modeling style and fabrication processes, can be included in the spatial grammar. This paper discusses the challenges in providing a generalized approach by demonstrating the implementation of a framework that combines a three-dimensional spatial grammar interpreter with automated FEA and stochastic optimization using simulated annealing (SA). Guidelines are provided for users to design spatial grammars in conjunction with FEA and to integrate the automatic application of boundary conditions. A simulated annealing method for use with spatial grammars is also presented, including a new method to select rules through a neighborhood definition. To demonstrate the benefits of the framework, it is applied to the automated design and optimization of spokes for inline skate wheels. This example highlights the advantage of spatial grammars for modeling style and additive manufacturing (AM) constraints within the generative system, combined with FEA and optimization to carry out topology and shape optimization. The results verify that the framework can generate structurally optimized designs within the style and AM constraints defined in the spatial grammar and produce a set of topologically diverse, yet valid design solutions.
Hub location problems involve locating facilities and designing hub networks to minimize the total cost of transportation between hubs (as a function of distance), facility establishment, and demand management. In this paper, we consider the capacitated cluster hub location problem because of its wide range of applications in real-world cases, especially in transportation and telecommunication networks. In this regard, a mathematical model is presented to address this problem under capacity constraints imposed on hubs and transportation lines. Then, a new hybrid algorithm based on simulated annealing and ant colony optimization is proposed to solve the presented problem. Finally, the computational experiments demonstrate that the proposed heuristic algorithm is both effective and efficient.
Detailed tephrochronologies are built to underpin probabilistic volcanic hazard forecasting, and to understand the dynamics and history of diverse geomorphic, climatic, soil-forming and environmental processes. Complicating factors include highly variable tephra distribution over time; difficulty in correlating tephras from site to site based on physical and chemical properties; and uncertain age determinations. Multiple sites permit construction of more accurate composite tephra records, but correctly merging individual site records by recognizing common events and site-specific gaps is complex. We present an automated procedure for matching tephra sequences between multiple deposition sites using stochastic local optimization techniques. If individual tephra age determinations are not significantly different between sites, they are matched and a more precise age is assigned. Known stratigraphy and mineralogical or geochemical compositions are used to constrain tephra matches. We apply this method to match tephra records from five long sediment cores (≤ 75 cal ka BP) in Auckland, New Zealand. Sediments at these sites preserve basaltic tephras from local eruptions of the Auckland Volcanic Field as well as distal rhyolitic and andesitic tephras from Okataina, Taupo, Egmont, Tongariro, and Tuhua (Mayor Island) volcanic centers. The new correlated record compiled is statistically more likely than previously published arrangements from this area.
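The matching criterion described above, whereby two determinations are merged only if their ages are not significantly different and the merged layer then receives a more precise pooled age, can be illustrated with the standard inverse-variance pooling below. The stratigraphic and compositional constraints, and the stochastic local optimization itself, are not shown.

```python
def compatible(age1, sd1, age2, sd2, z=1.96):
    """Two age determinations can be matched if they are not significantly different
    at the chosen level (here an illustrative two-sigma-style z threshold)."""
    return abs(age1 - age2) <= z * (sd1**2 + sd2**2) ** 0.5

def pooled_age(age1, sd1, age2, sd2):
    """Inverse-variance weighted age (and standard deviation) for two matched layers;
    the pooled estimate is more precise than either determination alone."""
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    return (w1 * age1 + w2 * age2) / (w1 + w2), (w1 + w2) ** -0.5
```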
A method for designing efficient sampling schemes for reconnaissance surveys of contaminated bed sediments in water courses is presented. The method can be used in networks of water courses, for instance to estimate the total volume of bed sediment of a defined quality class. The water courses must be digitised as arcs in a Geographical Information System.
The method comprises six steps: (1) stratifying the water courses; (2) choosing a variogram; (3) calculating the parameters of the variance model; (4) choosing a compositing scheme; (5) choosing the values for the cost-model parameters; and (6) optimising the sampling scheme. The method is demonstrated with a survey of the main water courses in the reclaimed areas of Oostelijk Flevoland and Zuidelijk Flevoland.
Let M be a complete Riemannian manifold, N ∈ ℕ and p ≥ 1. We prove that, for the Lebesgue measure on $M^N$, almost every $x=(x_1,\dots,x_N)\in M^N$ is such that the measure $\mu(x)=\frac{1}{N}\sum_{k=1}^N\delta_{x_k}$ has a unique p-mean $e_p(x)$. As a consequence, if $X=(X_1,\dots,X_N)$ is an $M^N$-valued random variable with absolutely continuous law, then almost surely $\mu(X(\omega))$ has a unique p-mean. In particular, if $(X_n)_{n\ge 1}$ is an independent sample from an absolutely continuous law on M, then the process $e_{p,n}(\omega)=e_p(X_1(\omega),\dots,X_n(\omega))$ is well defined. Assume M is compact and consider a probability measure ν on M. Using partial simulated annealing, we define a continuous semimartingale which converges in probability to the set of minimizers of the integral of the distance to the power p with respect to ν. When this set is a singleton, it converges to the p-mean.
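For reference, the p-mean discussed here is the Fréchet mean of order p, i.e. the minimizer (when unique) of the p-th power of the Riemannian distance averaged against the measure:

```latex
% Fréchet p-mean of a probability measure \mu on M (standard definition)
e_p(\mu) \;=\; \arg\min_{y \in M} \int_M d(y,z)^p \,\mu(\mathrm{d}z),
\qquad\text{so for } \mu(x)=\tfrac{1}{N}\sum_{k=1}^N \delta_{x_k}:\quad
e_p(x) \;=\; \arg\min_{y \in M} \frac{1}{N}\sum_{k=1}^N d(y,x_k)^p .
```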