In the last three chapters we have discussed how to calculate the potential energy, and some of its derivatives, for a single geometry of the atoms in a system. Although the calculation of an energy for one or a small number of configurations may sometimes be necessary, it can give only limited information about a system's properties. To investigate the latter more thoroughly it is necessary to identify interesting or important regions on the system's potential energy surface and develop ways in which they can be explored. Methods to do this will be investigated in this chapter.
Exploring potential energy surfaces
The function that represents a system's potential energy surface is a multidimensional function of the positions of all the system's atoms. It is this surface that determines, in large part, the behaviour and the properties of the system. A little reflection shows that the number of configurations or geometries available to a system with more than a few atoms is enormous. A simple example should make this clear. Take a diatomic molecule or, more generally, any system comprising two atoms in vacuum. The geometry of such a molecule is completely determined by specifying the distance between the two atoms and so the potential energy surface is a function of only one geometrical variable. It is easy to search the entire potential energy surface for this system. Start with a small interatomic distance, calculate the energy, increase the distance by a certain amount and then repeat the procedure. In this way we can obtain a picture similar to those in Figures 5.1, 5.2 and 5.5.
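The one-dimensional scan just described is straightforward to program. The sketch below is a minimal illustration, assuming a Morse potential as a stand-in for the true two-atom energy function; the parameter values, loosely resembling H2, are purely illustrative:

```python
import numpy as np

def morse_energy(r, d_e=4.75, a=1.94, r_e=0.74):
    """Morse potential (eV) as a stand-in for the two-atom energy
    function; d_e, a and r_e loosely resemble H2 (illustrative only)."""
    return d_e * (1.0 - np.exp(-a * (r - r_e)))**2 - d_e

# Start at a small separation and increase it stepwise, computing
# the energy at each point -- a full scan of the 1-D surface.
distances = np.arange(0.4, 5.0, 0.05)
energies = morse_energy(distances)

r_min = distances[np.argmin(energies)]
print(f"Approximate equilibrium distance: {r_min:.2f} Angstrom")
```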
In the last chapter we discussed quantum chemical methods for calculating the potential energy of a system, whereas in this chapter we present an alternative class of approaches, those that use empirical energy functions. To start, though, a point of notation will be clarified. Several different terms are employed to denote empirical energy functions in the literature and, no doubt inadvertently, in this book. Common terms, which all refer to the same thing, include empirical energy function, potential energy function, empirical potential and force field. The use of empirical potentials to study molecular conformations is often termed molecular mechanics (MM).
Some of the earliest empirical potentials were derived by vibrational spectroscopists interested in interpreting their spectra (this was, in fact, the origin of the term ‘force field’), but the type of empirical potential that is described here was developed at the end of the 1960s and the beginning of the 1970s. Two prominent proponents of this approach were S. Lifson and N. Allinger. These types of force field are usually designed for studying conformations of molecules close to their equilibrium positions and so would be inappropriate for studying processes, such as chemical reactions, in which this is not the case.
Typical empirical energy functions
This section presents the general form of the empirical energy functions that are used in molecular simulations. A diversity exists, because the form of an empirical potential function is, to some extent, arbitrary, but most functions have two categories of terms that deal with the bonding and the non-bonding interactions between atoms, respectively. These will be discussed separately.
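A representative functional form, with harmonic bond and angle terms, a cosine dihedral series, and Lennard-Jones plus Coulomb non-bonding terms, is shown below. Individual force fields differ in their details, so this is an illustration rather than a universal definition:

$$
V = \sum_{\text{bonds}} k_b (b - b_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right] + \sum_{i<j} \left\{ 4\epsilon_{ij} \left[ \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{12} - \left( \frac{\sigma_{ij}}{r_{ij}} \right)^{6} \right] + \frac{q_i q_j}{4\pi\epsilon_0 r_{ij}} \right\}
$$

The first three sums are the bonding terms, and the final sum, over non-bonded atom pairs, comprises the van der Waals and electrostatic interactions.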
The algorithms and capabilities of the pDynamo library discussed in this book represent only a portion of those that are available. The choice of which to include has been highly subjective and, due to space restrictions and the perseverance of the author (!), relatively small. The topics that I most regret omitting or skimping upon include density functional theory, the calculation of surfaces and volumes, continuum methods for including solvation effects, non-equilibrium methods, especially those for the determination of free energies, and enhanced sampling methods, such as transition path sampling, for the investigation of chemical reactions and other rare events. In any case, readers are encouraged to investigate these and alternative methods themselves, for many of the techniques presented in the book are the subject of active research and are undergoing continual improvement.
pDynamo itself and the example programs described in the text are available on the World Wide Web. At the time of publication, the appropriate addresses were www.ibs.fr and www.pdynamo.org. The websites give full details about how to download, install and use the library and the types of machines upon which it has been tested.
For convenience, we include here tables of the methods and attributes of the System class and of the other classes and functions that were encountered in the book along with the section in which they were first described or in which significant new capabilities were introduced. These are in Tables A1.1, A1.2 and A1.3, respectively. The System class is, in many ways, the central class of the library although it has several other important features which could not be treated in the text.
The Calphad technique has reached maturity. It started from a vision of combining data from thermodynamics, phase diagrams, and atomistic properties such as magnetism into a unified and consistent model. It is now a powerful method in a wide field of applications where modeled Gibbs energies and derivatives thereof are used to calculate properties and simulate transformations of real multicomponent materials. Chemical potentials and the thermodynamic factor (second derivatives of the Gibbs energy) are used in diffusion simulations. The driving forces of the phases are used to simulate the evolution of microstructures on the basis of the Landau theory. In solidification simulations the fractions of solid phases and the segregation of components, as well as energies of metastable states, which are experimentally observed by carrying out rapid solidification, are used. Whenever the thermodynamic description of a system is required, the Calphad technique can be applied.
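For a binary substitutional solution, for example, the thermodynamic factor that enters the diffusion simulations mentioned above takes the standard form (assuming the usual chemical-potential expression $\mu_B = {}^{\circ}G_B + RT \ln(\gamma_B x_B)$):

$$ \Phi = \frac{x_B}{RT} \frac{\partial \mu_B}{\partial x_B} = 1 + \frac{\partial \ln \gamma_B}{\partial \ln x_B} , $$

so that a Calphad description of the Gibbs energy delivers $\Phi$ directly by differentiation.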
The successful use of Calphad in these applications relies on the development of multicomponent databases, which describe many different kinds of thermodynamic functions in a consistent way, all checked to be consistent with experimental data. The construction of these databases is still a very demanding task, requiring expertise and experience. There are many subjective factors involved in the decisions to be made when judging and selecting which among redundant experimental data are the most trustworthy. Even more subjective is the assessment of phases of which little or nothing is known, except perhaps in a narrow composition and temperature range.
In this chapter, two of the most commonly used types of software for optimization, BINGSS and PARROT, are described.
Common features
Handling bad starting coefficients
The definition of the “error” ($v_i$ in Eq. (2.52)) is based on the “calculated value” ($F_i(C_j, x_{ki})$), which is often defined by an equilibrium calculation with two or more phases. The initial set of adjustable coefficients may result in improper Gibbs-energy functions, with which this equilibrium cannot be calculated. As an example, Fig. 7.1 shows such a situation for a two-phase equilibrium, liquid + bcc. There $G^{\mathrm{bcc}}$ is larger than $G^{\mathrm{liquid}}$ at all compositions, so the construction of a common tangent is impossible and no equilibrium can be calculated numerically.
The experimental information is either “at temperature $T_1$ there is a two-phase equilibrium, liquid + bcc, for which the composition of the bcc phase was measured as $x^{\mathrm{bcc}} = x'$” or “in a single-phase bcc sample of composition $x'$, on heating, the first liquid appears at temperature $T_1$” (see Fig. 4.4).
In the least-squares calculation for Eq. (2.52) no “calculated value” ($F_i(C_j, x_{ki})$) can be provided as long as the starting values for the Gibbs-energy descriptions of the liquid and bcc phases behave as in Fig. 7.1. Finding better starting values by trial and error is difficult and time-consuming. It is therefore desirable to have a method that avoids this problem and can start even from a very poor initial set of adjustable coefficients.
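To make the failure mode concrete, the following sketch (an illustration only, not the algorithm used by BINGSS or PARROT; the model curves, grid and tolerance are arbitrary choices) scans two sampled Gibbs-energy curves for a common tangent. With a starting set that behaves as in Fig. 7.1, the scan finds none, so an equilibrium-based “calculated value” is simply unavailable:

```python
import numpy as np

def find_common_tangent(x, g_alpha, g_beta, tol=0.05):
    """Grid scan for a common tangent between two Gibbs-energy curves
    sampled on the composition grid x; returns (x1, x2) or None."""
    d_alpha = np.gradient(g_alpha, x)
    d_beta = np.gradient(g_beta, x)
    for i, x1 in enumerate(x):
        for j, x2 in enumerate(x):
            if x2 <= x1 + 0.05:
                continue
            chord = (g_beta[j] - g_alpha[i]) / (x2 - x1)
            # A common tangent needs equal slopes on both curves,
            # each matching the chord between the two tangent points.
            if abs(d_alpha[i] - d_beta[j]) < tol and abs(chord - d_alpha[i]) < tol:
                return x1, x2
    return None

x = np.linspace(0.01, 0.99, 100)
g_liquid = 8.0 * x * (1.0 - x) - 2.0  # model curves (kJ/mol); bcc lies
g_bcc = 8.0 * x * (1.0 - x) + 1.0     # above the liquid at all compositions

print(find_common_tangent(x, g_liquid, g_bcc))  # None: no equilibrium computable
```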
As we know from the applications of thermodynamics, free energy is much more important than energy, since it determines phase equilibria, such as melting and boiling points and the pressure of saturated vapors, and chemical equilibria such as solubilities, binding or dissociation constants and conformational changes. Unfortunately, it is generally much more difficult to derive free energy differences from simulations than it is to derive energy differences. The reason for this is that free energy incorporates an entropic term –TS; entropy is given by an integral over phase space, while energy is an ensemble average. Only when the system is well localized in space (as a vibrating solid or a macromolecule with a well-defined average structure) is it possible to approximate the multidimensional integral for a direct determination of entropy. This case will be considered in Section 7.2.
Free energies of substates can be evaluated directly from completely equilibrated trajectories or ensembles that contain all accessible regions of configurational space. In practice it is hard to generate such complete ensembles when there are many low-lying states separated by barriers, but the ideal distribution may be approached by the replica exchange method (see Section 6.6). Once the configurational space has been subdivided into substates or conformations (possibly based on a cluster analysis of structures), the free energy of each substate is determined by the number of configurations observed in each substate.
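In formula form: if $n_i$ of the sampled configurations fall in substate $i$, the free energy difference between substates follows from their relative populations,

$$ A_i - A_j = -k_{\mathrm{B}} T \ln \frac{n_i}{n_j} . $$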
We now move to considering the dynamics of a system of nuclei and electrons. Of course, both electrons and nuclei are subject to the laws of quantum mechanics, but since nuclei are 2000 to 200,000 times heavier than electrons, we expect that classical mechanics will be a much better approximation for the motion of nuclei than for the motion of electrons. This means that we expect a level of approximation to be valid, where some of the degrees of freedom (d.o.f.) of a system behave essentially classically and others behave essentially quantum-mechanically. The system then is of a mixed quantum/classical nature.
Most often the quantum subsystem consists of a system of electrons in the dynamical field of classical nuclei, but the quantum subsystem may also be a selection of generalized nuclear coordinates (e.g., corresponding to high-frequency vibrations) while the other generalized coordinates are assumed to behave classically, or it may describe the motion of a proton in a classical environment.
So, in this chapter we consider the dynamics of a quantum system in a non-stationary potential. In Section 5.2 we consider the time-dependent potential as externally given, without taking notice of the fact that the sources of the time-dependent potential are moving nuclei, which are quantum particles themselves, feeling the interaction with the quantum d.o.f. Thus we consider the time evolution of the quantum system, which now involves mixing-in of excited states, but we completely ignore the back reaction of the quantum system onto the d.o.f. that cause the time-dependent potential, i.e., the moving nuclei.
Computer simulations of real systems require a model of that reality. A model consists of both a representation of the system and a set of rules that describe the behavior of the system. For dynamical descriptions one needs in addition a specification of the initial state of the system, and if the response to external influences is required, a specification of the external influences.
Both the model and the method of solution depend on the purpose of the simulation, and both should be accurate and efficient; the model should be chosen accordingly. For example, an accurate quantum-mechanical description of the behavior of a many-particle system is not efficient for studying the flow of air around a moving wing; on the other hand, the Navier–Stokes equations – efficient for fluid motion – cannot give an accurate description of the chemical reactions in an internal combustion engine. Accurate means that the simulation will reliably (within a required accuracy) predict the real behavior of the real system, and efficient means “feasible with the available technical means.” This combination of requirements rules out a number of questions; whether a question is answerable by simulation depends on:
the state of theoretical development (models and methods of solution);
the computational capabilities;
the possibilities to implement the methods of solution in algorithms;
the possibilities to validate the model.
Validation means the assessment of the accuracy of the model (compared to physical reality) by critical experimental tests.
In the previous chapters it has been shown how to obtain the best possible agreement between thermodynamic models and experimental data using adjustable model parameters for binary and ternary systems. Although each such assessment can be important in itself, the main purpose of these assessments is to provide the building blocks of multicomponent thermodynamic databases. This objective must be considered when performing an assessment because it imposes some restrictions on the assessment of the individual system and on the possibilities of adjusting data and models to new experimental data. Such problems will be discussed in this chapter, together with the general concepts concerning thermodynamic databases.
Experience has shown that thermodynamic databases based on a limited number of ternary assessments, all centered around a “base” element like Fe or Al, can give reliable extrapolations to multicomponent alloys based on that element. This means that the database can be used to calculate the amounts of phases, their compositions, and transformation temperatures, and that the calculated values have an accuracy close to that of an experimental measurement. Such databases are a very valuable tool for planning new experimental work in alloy development, since detailed experimental investigations of multicomponent systems are very expensive to perform. It is important that the databases are based on ternary assessments, not just binaries, because the mutual solubilities in binary phases must be described; otherwise the extrapolations are not reliable.
For numerical use, the parametric functions described in chapter 5 must be assessed using experimental data. To extract the maximum amount of information, all types of measurements that are quantitatively related to any thermodynamic function of state must be considered. From this dataset, quantitative numerical values for the adjustable parameters of the Gibbs energy functions are obtained using the methodology described in chapter 6.
In order to evaluate the reliability and accuracy of the experimental data, it is of great help to know about the various experimental techniques used. Therefore, the main experimental methods in thermodynamic and phase-diagram investigations are described here, although not as deeply as in textbooks on experimental techniques, for example Kubaschewski et al. (1993).
Here the main emphasis is on how to use various types of data for the optimization and how to connect typical as well as more-exotic measured values with the thermodynamic functions of state.
Since experiments are expensive and time-consuming, all data available in the literature should be sought and their validity checked before one's own experiments are planned. An optimization using only literature data may be a good start: it gives an overview and may reveal where knowledge is poor and which further experiments are best suited to fill these gaps. Careful planning of one's own experiments, taking this overview into account, can keep the effort involved to a minimum and result in a very significant improvement of the optimization.
In chapter 5, various models were described in order to understand how they can be fitted to the experimental features that were described in chapter 4. In the present chapter, we start from experimental evidence and search for the model best able to describe it. Therefore many topics of chapter 5 will be revisited here.
An assessor working on a system will experience almost all the steps described and discussed here. Since a system is often reassessed many times, by the same researcher or by another, it is very important to keep records of the decisions made in order to make it easy to restart the work, for example when new experimental evidence requires a new optimization. The process of assessing a system is made easier if an assessment logbook is kept. An important function of this logbook is that one should record all mistakes and failures so that one does not repeat them later. In the final paper only the successful modeling will be reported, and there is no information about the difficulties encountered in obtaining it.
The assessment methodology described here includes a critical assessment of the available literature in the way in which it is normally done, for example, in the Journal of Phase Equilibria and Diffusion. By combining this with thermodynamic models, an analytical description is created; the determination of adjustable model parameters is often done using the least-squares method to obtain a description that best represents the complete set of available consistent experimental data.
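Schematically, the least-squares step chooses the adjustable coefficients $C_j$ so as to minimize the weighted sum of squared errors (the precise definition is that of Eq. (2.52); the weights $w_i$ and the symbol $L_i$ for the measured values are notation introduced here for illustration):

$$ \sum_i w_i \, v_i^2 = \sum_i w_i \left[ F_i(C_j, x_{ki}) - L_i \right]^2 \longrightarrow \min . $$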
In this chapter we shall set out to average a system of particles over space and obtain equations for the variables averaged over space. We consider a Hamiltonian system (although we shall allow for the presence of an external force, such as a gravitational force, that has its source outside the system), and – for simplicity – consider a single-component fluid with isotropic behavior. The latter condition is not essential, but allows us to simplify notations by saving on extra indexes and higher-order tensors that would cause unnecessary distraction from the main topic. The restriction to a single component is for simplicity also, and we shall later look at multicomponent systems.
By averaging over space we expect to arrive at the equations of fluid dynamics. These equations describe the motion of fluid elements and are based on the conservation of mass, momentum and energy. They do not describe any atomic details and assume that the fluid is in local equilibrium, so that an equation of state can be applied to relate local thermodynamic quantities such as density, pressure and temperature. This presupposes that such thermodynamic quantities can be locally defined to begin with.
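For orientation, the mass and momentum conservation laws take the familiar local forms (the energy equation is analogous; the notation here is standard rather than specific to this text):

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0 , \qquad \rho \frac{D\mathbf{v}}{Dt} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{g} , $$

where $D/Dt$ denotes the material derivative, $\boldsymbol{\tau}$ the viscous stress tensor and $\rho\mathbf{g}$ an external body force.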
For systems that are locally homogeneous and have only very small gradients of thermodynamic parameters, averaging can be done over very large numbers of particles. In the limit of averaging over an infinite number of particles, thermodynamic quantities can be meaningfully defined and we expect the macroscopic equations to become exact.