The term “mesoscopic” is used for any method that treats nanoscale system details (say, 10 to 1000 nm) but averages over atomic details. Systems treated by mesoscopic methods are typically mixtures (e.g., of polymers or colloidal particles) that show self-organization on the nanometer scale. Mesoscopic behavior related to composition and interaction between constituents comes on top of dynamic behavior described by the macroscopic equations of fluid dynamics; it is on a level between atoms and continuum fluids. In mesoscopic dynamics the inherent noise is not negligible, as it is in macroscopic fluid dynamics.
Mesoscopic simulations can be realized both with particles and with continuum equations solved on a grid. In the latter case the continuum variables are densities of the species occurring in the system. Particle simulations with “superatoms” using Langevin or Brownian dynamics, as treated in Chapter 8, are already mesoscopic in nature but will not be considered in this chapter. Also the use of particles to describe continuum equations, as in dissipative particle dynamics described in Chapter 11, can be categorized as mesoscopic, but will not be treated in this chapter. Here we consider the continuum equations for multicomponent mesoscopic systems in the linear response approximation. The latter means that fluxes are assumed to be linearly related to their driving forces. This, in fact, is equivalent to Brownian dynamics, in which accelerations are averaged out and average velocities are proportional to average, i.e., thermodynamic, forces.
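The linear-response picture can be made concrete with a minimal position-Langevin (Brownian dynamics) integrator, in which the average velocity is the thermodynamic force divided by the friction, plus thermal noise. This is an illustrative sketch, not code from this chapter; the function names and parameter values are my own.

```python
import numpy as np

def brownian_step(x, force, gamma, kT, dt, rng):
    """One overdamped (Brownian) step: drift = force/gamma * dt, plus
    Gaussian noise with variance 2*kT*dt/gamma (fluctuation-dissipation)."""
    drift = force(x) / gamma * dt
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(x.shape)
    return x + drift + noise

# Illustration: under a constant force F the mean displacement grows as F*t/gamma.
rng = np.random.default_rng(0)
gamma, kT, dt, F = 1.0, 1.0, 1e-3, 2.0
x = np.zeros(10000)                    # an ensemble of independent particles
for _ in range(1000):                  # total time t = 1.0
    x = brownian_step(x, lambda x: F, gamma, kT, dt, rng)
mean_disp = x.mean()                   # close to F*t/gamma = 2.0
```

Note that accelerations never appear: the velocity responds instantaneously (on average) to the force, which is exactly the linear-response assumption made in the text.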
The idea of this book came from Professor Petzow, during the PML Betriebsausflug in September 1991. In a very informal way Professor Petzow invited S. G. F. (who was ready to return to Brazil after two years working with H. L. Lukas) to help H. L. L. to collect all his ideas about, and experiences with, thermodynamic optimization and put them into a book. Work on optimizations has been going on at Stuttgart for a long time, and valuable experience has been accumulated. Dr Lukas' feeling for optimizations is very well defined and one can talk about a “Lukas school for optimizations.”
Later the project was enriched by the cooperation with Professor Sundman, at that time Dr. Sundman, who brought his own extensive experience in computational thermodynamics as well as the Stockholm group's approach to the theme, with all the formalisms so well developed by Professor Mats Hillert.
The three authors were very motivated by the idea, since the lack of such a book had always made it difficult to introduce students and researchers to this field. The knowledge necessary in order to obtain a better thermodynamic description of a system is very broad, requiring a judgment of the experimental data provided by the literature and also a wise selection of the model best able to describe the experimental evidence. This judgment is difficult, but the better “educated” the assessor, the greater his ability to judge well.
A short overview of the rules of thermodynamics shall be given here, with special emphasis on their consequences for computer calculations. This part will not replace a textbook on thermodynamics, but shall help the reader to remember its rules and maybe present them in a more practically useful way, which facilitates the understanding of thermodynamic calculations.
Thermodynamics deals with energy and the transformation of energy in various ways. In thermodynamics all rules are deduced from three principal laws, two of which are based on axioms expressing everyday experiences. Even though these laws are very simple, they have many important consequences.
Thermodynamics can strictly be applied only to systems that are at equilibrium, i.e. in a state that does not change with time. As noted in the introduction, the thermodynamic models can be extrapolated outside the equilibrium state and provide essential information on thermodynamic quantities for simulations of time-dependent transformations of systems.
The equation of state
The concept of thermodynamic state must be introduced before beginning with the principal laws. This can be done by invoking the principle of the “equation of state.” This is connected with the introduction of temperature as a measurable quantity. If pressure–volume work is the only work considered, then one can state that in a homogeneous unary system the state is defined by two variables.
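The simplest concrete illustration of this (not part of the text above, but standard) is the ideal gas: once two state variables of a homogeneous unary system are fixed, say temperature and molar volume, the equation of state determines the third.

```python
R = 8.314462618  # molar gas constant, J/(mol K)

def pressure(T, Vm):
    """Ideal-gas equation of state p = R*T/Vm: fixing the two state
    variables (T, Vm) of a homogeneous unary system fixes p as well."""
    return R * T / Vm

p = pressure(298.15, 0.0248)  # in Pa; roughly ambient conditions
```

Any other pair of variables (e.g., p and T) would serve equally well as the independent pair; the equation of state then yields the remaining quantity.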
In this chapter we consider how continuum dynamics, described by continuum equations that are themselves generalizations of systems of particles, can be described by particles again. The particle description in this case is not meant to be more precise than the continuum description and to represent the system in more detail, but is meant to provide an easier and more physically appealing way to solve the continuum equations. There is the additional advantage that multicomponent systems can be modeled, and by varying the relative repulsion between different kinds of particles, phenomena like mixing and spinodal decomposition can be simulated as well. The particles represent lumps of fluid, rather than specified clusters of real molecules, and their size depends primarily on the detail of the boundary conditions in the fluid dynamics problem at hand. The size may vary from superatomic or nanometer size, e.g., for colloidal systems, to macroscopic size. Since usually many (millions of) particles are needed to fill the required volume with sufficient detail, it is necessary for efficiency reasons that the interactions are described in a simple way and act over short distances only, to keep the number of interactions low. Yet the interactions should be sufficiently versatile to allow independent parametrization of the main properties of the fluid, such as density, compressibility and viscosity.
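A common choice for such a simple, short-ranged interaction is the soft conservative force of the Groot–Warren form of dissipative particle dynamics, which is linear in the distance and vanishes at a cutoff. The sketch below is illustrative only; the repulsion parameter value is a hypothetical placeholder, and varying it between particle species is what allows mixing or demixing to be modeled.

```python
import numpy as np

def soft_repulsion(r_vec, a_ij, r_c=1.0):
    """Soft, short-ranged conservative force of Groot-Warren form:
    F = a_ij * (1 - r/r_c) * r_hat for r < r_c, zero beyond the cutoff.
    a_ij sets the repulsion between species i and j."""
    r = np.linalg.norm(r_vec)
    if r >= r_c or r == 0.0:
        return np.zeros_like(r_vec)
    return a_ij * (1.0 - r / r_c) * (r_vec / r)

# At half the cutoff distance the force has half its maximum magnitude:
f = soft_repulsion(np.array([0.5, 0.0, 0.0]), a_ij=25.0)
```

Because the force is finite even at zero separation, large time steps are possible, which is one reason such soft potentials are efficient for filling large volumes with many particles.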
Equilibrium statistical mechanics was developed shortly after the introduction of thermodynamic entropy by Clausius, with Boltzmann and Gibbs as the main innovators near the end of the nineteenth century. The concepts of atoms and molecules already existed but there was no notion of quantum theory. The link to thermodynamics was properly made, including the interpretation of entropy in terms of probability distributions over ensembles of particle configurations, but the quantitative counting of the number of possibilities required an unknown elementary volume in phase space that could only later be identified with Planck's constant h. The indistinguishability of particles of the same kind, which had to be introduced in order to avoid the Gibbs' paradox, got a firm logical basis only after the invention of quantum theory. The observed distribution of black-body radiation could not be explained by statistical mechanics of the time; discrepancies of this kind have been catalysts for the development of quantum mechanics in the beginning of the twentieth century. Finally, only after the completion of basic quantum mechanics around 1930 could quantum statistical mechanics – in principle – make the proper link between microscopic properties at the atomic level and macroscopic thermodynamics. The classical statistical mechanics of Gibbs is an approximation to quantum statistics.
In this review we shall reverse history and start with quantum statistics, proceeding to classical statistical mechanics as an approximation to quantum statistics. This will enable us to see the limitations of classical computational approaches and develop appropriate quantum corrections where necessary.
This book is not a textbook on thermodynamics or statistical mechanics. The reason to incorporate these topics nevertheless is to establish a common frame of reference for the readers of this book, including a common nomenclature and notation. For details, commentaries, proofs and discussions, the reader is referred to any of the numerous textbooks on these topics.
Thermodynamics describes the macroscopic behavior of systems in equilibrium, in terms of macroscopic measurable quantities that do not refer at all to atomic details. Statistical mechanics links the thermodynamic quantities to appropriate averages over atomic details, thus establishing the ultimate coarse-graining approach. Both theories have something to say about non-equilibrium systems as well. The logical exposition of the link between atomic and macroscopic behavior would be in the sequence:
(i) describe atomic behavior on a quantum-mechanical basis;
(ii) simplify to classical behavior where possible;
(iii) apply statistical mechanics to average over details;
(iv) for systems in equilibrium: derive thermodynamic quantities and phase behavior; for non-equilibrium systems: derive macroscopic rate processes and transport properties.
The historical development has followed a quite different sequence. Equilibrium thermodynamics was developed around the middle of the nineteenth century, with the definition of entropy as a state function by Clausius forming the crucial step to completion of the theory. No detailed knowledge of atomic interactions existed at the time and hence no connection between atomic interactions and macroscopic behavior (the realm of statistical mechanics) could be made.
The systems described here are real assessments, most of which have been published and the reference is given; but the descriptions here include some of the mistakes made when solving the problems leading to the publication. Such things are never included in the final publication. Discussing such problems does not mean that the assessment technique described here is bad or wrong, only that learning from mistakes is the only way to become a successful assessor, in the same way as many mistakes are inevitably made before one can learn how to be a good experimentalist.
A complete assessment of the Cu–Mg system
The Cu–Mg system published by Coughanowr et al. (1991), shown in Fig. 9.2 later, is very simple but offers some interesting examples of modeling. Assessments with two different software packages will also be discussed.
Physical and experimental criteria for solution model selection
There are five phases in the system: the liquid phase, the Cu phase with the fcc lattice and some solubility of Mg, the Mg phase with the hcp lattice and hardly any solubility of Cu, and two intermetallic phases:
CuMg2, a stoichiometric phase, and
Cu2Mg, with some range of homogeneity, having the cubic Laves-phase structure, C15 in the Strukturbericht notation.
The Laves phase Cu2Mg
The range of homogeneity of the Laves phase is very well determined experimentally; it deviates on both sides from the ideal composition of 66.7% Cu.
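A phase with such a homogeneity range is typically described in the compound-energy (two-sublattice) formalism, e.g. as (Cu,Mg)2(Cu,Mg)1, where anti-site atoms on either sublattice carry the deviation from the ideal composition. The sketch below only illustrates the shape of such a Gibbs-energy expression per mole of formula units; the end-member parameters passed in are placeholders, not the assessed values of the published Cu–Mg description.

```python
import math

R = 8.314462618  # J/(mol K)

def g_two_sublattice(y1_Mg, y2_Mg, T, G_end):
    """Compound-energy Gibbs energy for a (Cu,Mg)2(Cu,Mg)1 phase.
    y1_Mg, y2_Mg: Mg site fractions on the first (2 sites) and second
    (1 site) sublattice; G_end: end-member energies keyed by the pair of
    occupants, e.g. ('Cu','Mg') for Cu2Mg.  Excess terms are omitted."""
    y1_Cu, y2_Cu = 1.0 - y1_Mg, 1.0 - y2_Mg
    g_ref = (y1_Cu * y2_Cu * G_end[('Cu', 'Cu')]
             + y1_Cu * y2_Mg * G_end[('Cu', 'Mg')]
             + y1_Mg * y2_Cu * G_end[('Mg', 'Cu')]
             + y1_Mg * y2_Mg * G_end[('Mg', 'Mg')])
    def xlnx(y):
        return y * math.log(y) if y > 0.0 else 0.0
    # Ideal mixing entropy, weighted by the site multiplicities 2 and 1.
    g_id = R * T * (2.0 * (xlnx(y1_Cu) + xlnx(y1_Mg))
                    + (xlnx(y2_Cu) + xlnx(y2_Mg)))
    return g_ref + g_id

zeros = {k: 0.0 for k in [('Cu', 'Cu'), ('Cu', 'Mg'), ('Mg', 'Cu'), ('Mg', 'Mg')]}
g_ideal = g_two_sublattice(0.0, 1.0, 1000.0, zeros)  # stoichiometric Cu2Mg
g_mixed = g_two_sublattice(0.1, 0.9, 1000.0, zeros)  # with some anti-site disorder
```

With all end-member energies set to zero, the stoichiometric composition has zero mixing energy, while any site disorder lowers the Gibbs energy through the ideal entropy terms; the real assessed parameters then control how far the homogeneity range extends on each side.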
As has become clear in the previous chapter, electrons (almost) always behave as quantum particles; classical approximations are (almost) never valid. In general one is interested in the time-dependent behavior of systems containing electrons, which is the subject of the following chapters.
The time-dependent behavior of systems of particles spreads over very large time ranges: while optical transitions take place below the femtosecond range, macroscopic dynamics concerns macroscopic times as well. The light electrons move considerably faster than the heavier nuclei, and collective motions over many nuclei are slower still. For many aspects of long-time behavior the motion of electrons can be treated in an environment considered stationary. The electrons are usually in bound states, determined by the positions of the charged nuclei in space, which provide an external field for the electrons. If the external field is stationary, the electron wave functions are stationary oscillating functions. The approximation in which the motion of the particles (i.e., nuclei) that generate the external field is neglected is called the Born–Oppenheimer approximation. Even if the external field is not stationary (to be treated in Chapter 5), the non-stationary solutions for the electronic motion are often expressed in terms of the pre-computed stationary solutions of the Schrödinger equation. This chapter concerns the computation of such stationary solutions.
Thus, in this chapter, the Schrödinger equation reduces to a time-independent problem with a stationary (i.e., still time-dependent, but periodic) solution. Almost all of chemistry is covered by this approximation.
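As a generic numerical illustration of such a stationary problem (not an example from this chapter), the time-independent Schrödinger equation for a particle in a one-dimensional box can be discretized by finite differences, turning the differential equation into a matrix eigenvalue problem; the grid size below is an arbitrary choice.

```python
import numpy as np

# Finite-difference sketch of the time-independent Schrodinger equation
# for a particle in a 1-D box (hbar = m = 1, box length L = 1):
#   -(1/2) psi'' = E psi,   psi(0) = psi(L) = 0.
n, L = 200, 1.0
dx = L / (n + 1)
# -(1/2) * second-difference stencil gives a tridiagonal Hamiltonian.
H = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))
E = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
# The exact levels are E_k = (k*pi)**2 / 2; the lowest numerical
# eigenvalues approach them as the grid is refined.
```

The same strategy (discretize, then diagonalize) underlies many of the basis-set methods for stationary electronic states, with the finite-difference grid replaced by a more physically motivated basis.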
This book was conceived as a result of many years research with students and postdocs in molecular simulation, and shaped over several courses on the subject given at the University of Groningen, the Eidgenössische Technische Hochschule (ETH) in Zürich, the University of Cambridge, UK, the University of Rome (La Sapienza), and the University of North Carolina at Chapel Hill, NC, USA. The leading theme has been the truly interdisciplinary character of molecular simulation: its gamut of methods and models encompasses the sciences ranging from advanced theoretical physics to very applied (bio)technology, and it attracts chemists and biologists with limited mathematical training as well as physicists, computer scientists and mathematicians. There is a clear hierarchy in models used for simulations, ranging from detailed (relativistic) quantum dynamics of particles, via a cascade of approximations, to the macroscopic behavior of complex systems. As the human brain cannot hold all the specialisms involved, many practical simulators specialize in their niche of interest, adopt – often unquestioned – the methods that are commonplace in their niche, read the literature selectively, and too often turn a blind eye to the limitations of their approaches.
This book tries to connect the various disciplines and expand the horizon for each field of application. The basic approach is a physical one, and an attempt is made to rationalize each necessary approximation in the light of the underlying physics.
There are many cases of interest where the relevant question we wish to answer by simulation is “what is the response of the (complex) system to an external disturbance?” Such responses can be related to experimental results and thus be used not only to predict material properties, but also to validate the simulation model. Responses can either be static, after a prolonged constant external disturbance that drives the system into a non-equilibrium steady state, or dynamic, as a reaction to a time-dependent external disturbance. Examples of the former are transport properties such as the heat flow resulting from an imposed constant temperature gradient, or the stress (momentum flow) resulting from an imposed velocity gradient. Examples of the latter are the optical response to a specific sequence of laser pulses, or the time-dependent induced polarization or absorption following the application of a time-dependent external electric field.
In general, responses can be expected to relate in a non-linear fashion to the applied disturbance. For example, the dielectric response (i.e., the polarization) of a dipolar fluid to an external electric field will level off at high field strengths when the dipoles tend to orient fully in the electric field. The optical response to two laser pulses, 100 fs apart, will not equal the sum of the responses to each of the pulses separately. In such cases there will not be much choice other than mimicking the external disturbance in the simulated system and “observing” the response.
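The saturation of the dielectric response mentioned above is captured, for independent dipoles, by the textbook Langevin function L(x) = coth(x) − 1/x with x = μE/(kBT): linear in the field at small x, leveling off at full orientation. This is the standard non-interacting-dipole model, offered here only as an illustration of a response that is linear for weak disturbances and nonlinear for strong ones.

```python
import math

def langevin(x):
    """Mean dipole orientation <cos theta> of independent dipoles in a
    field: L(x) = coth(x) - 1/x, with x = mu*E/(kB*T)."""
    if abs(x) < 1e-6:       # series limit avoids 0/0: L(x) ~ x/3
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

weak = langevin(0.1)    # linear regime, close to x/3
strong = langevin(50.0) # near saturation, close to 1
```

Only in the weak-field regime does the linear-response machinery of this chapter apply; outside it, one has little choice but to impose the disturbance explicitly and observe the response, as stated in the text.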
Often the interest in the behavior of large molecular systems concerns global behavior on longer time scales rather than the short-time details of local dynamics. Unfortunately, the interesting time scales and system sizes are often (far) beyond what is attainable by detailed molecular dynamics simulations. In particular, macromolecular structural relaxation (crystallization from the melt, conformational changes, polyelectrolyte condensation, protein folding, microphase separation) easily extends into the seconds range and longer. It would be desirable to simplify dynamical simulations in such a way that the “interesting” behavior is well reproduced, and in a much more efficient manner, even if this comes at the expense of “uninteresting” details. Thus we would like to reduce the number of degrees of freedom that are explicitly treated in the dynamics, but in such a way that the accuracy of global and long-time behavior is retained as much as possible.
All approaches of this type fall under the heading of coarse graining, although this term is often used in a more specific sense for models that average over local details. The relevant degrees of freedom may then either be the Cartesian coordinates of special particles that represent a spatial average (the superatom approach, treated in Section 8.4), or they may be densities on a grid, defined with a certain spatial resolution. The latter type of coarse graining is treated in Chapter 9 and leads to mesoscopic continuum dynamics, treated in Chapter 10.