It is usually more difficult to simulate continuous systems than discrete ones, especially when the properties under study are governed by nonlinear equations. Such systems can be so complex that the length scale at the atomic level is as important as the length scale at the macroscopic level. The basic idea in dealing with complicated systems is akin to divide and conquer: divide the system according to the length scales involved, and then solve the problem with an appropriate method at each length scale. A specific length scale is usually associated with an energy scale, such as the average temperature of the system or the average pair interaction between particles. Divide-and-conquer schemes are quite powerful in a wide range of applications; however, each method has its advantages and disadvantages, depending on the particular system.
Hydrodynamic equations
In this chapter, we will discuss several methods used in simulating continuous systems. First we will discuss a mature method, the finite element method, which establishes the idea of partitioning the system according to its physical conditions. Then we will discuss another method, the particle-in-cell method, which adopts a mean-field concept in dealing with a large system involving a vast number of atoms, for example, 10²³ of them. This method has been very successful in simulations of plasma, galactic, hydrodynamic, and magnetohydrodynamic systems.
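To give a feel for the mean-field idea behind the particle-in-cell method, the sketch below shows the two grid operations at its core, charge deposition and field gathering, for a one-dimensional electrostatic system with linear (cloud-in-cell) weighting. This is a minimal illustration under assumed conditions, not the chapter's implementation: the grid size, spacing, periodic boundaries, and all class and method names are invented for the example.

```java
// Minimal 1D particle-in-cell sketch (illustrative, not the chapter's code).
// Particles live on [0, NG*DX) with periodic boundaries; linear weighting
// shares each particle between its two nearest grid points.
public class PicSketch {
    static final int NG = 64;        // number of grid points (assumed)
    static final double DX = 1.0;    // grid spacing (assumed)

    // Deposit the charge q of each particle onto the density grid.
    static double[] deposit(double[] x, double q) {
        double[] rho = new double[NG];
        for (double xi : x) {
            int j = (int) Math.floor(xi / DX);  // nearest grid point below xi
            double w = xi / DX - j;             // fractional distance to it
            rho[j % NG] += q * (1.0 - w) / DX;
            rho[(j + 1) % NG] += q * w / DX;
        }
        return rho;
    }

    // Gather the grid field back to a particle position, same weighting.
    static double gather(double[] e, double xi) {
        int j = (int) Math.floor(xi / DX);
        double w = xi / DX - j;
        return (1.0 - w) * e[j % NG] + w * e[(j + 1) % NG];
    }
}
```

A full code would feed the deposited density into a field solver and use the gathered field to advance the particles. The essential point is that each simulation particle interacts with a grid of mean fields rather than with every other particle, which is how a modest number of simulation particles can represent a system of 10²³ atoms.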
The beauty of Nature is in its detail. If we are to understand different layers of scientific phenomena, tedious computations are inevitable. In the last half-century, computational approaches to many problems in science and engineering have clearly evolved into a new branch of science: computational science. With the increasing power of modern computers and the availability of new numerical techniques, scientists in different disciplines have started to unravel the mysteries of the so-called grand challenges, scientific problems that will remain significant for years to come and may require teraflop computing power. These problems include, but are not limited to, global environmental modeling, virus vaccine design, and the simulation of new electronic materials.
Computational physics, in my view, is the foundation of computational science. It deals with basic computational problems in physics, which are closely related to the equations and computational problems in other scientific and engineering fields. For example, numerical schemes for Newton's equation can be implemented in the study of the dynamics of large molecules in chemistry and biology; algorithms for solving the Schrödinger equation are necessary in the study of electronic structures in materials science; the techniques used to solve the diffusion equation can be applied to air pollution control problems; and numerical simulations of hydrodynamic equations are needed in weather prediction and oceanic dynamics.
Important as computational physics is, it has not yet become a standard course in the curricula of many universities.
From the discussions of function optimization in Chapters 3, 5, and 10, we should by now have realized that finding the global minimum or maximum of a multivariable function is in general a formidable task, even though a search for an extremum of the same function is achievable under certain circumstances. This is the driving force behind the never-ending quest for newer and better schemes, in the hope of finding a method that will ultimately lead to the shortest path for a system to reach its overall optimal configuration.
The genetic algorithm is one of the schemes that has emerged from these vast efforts. The method mimics the evolutionary process in biology, with inheritance and mutation from the parents built into the new generation as the key elements. Fitness is used as the test for maintaining a particular genetic makeup of a chromosome. The scheme was pioneered by Holland (1975) and enhanced and publicized by Goldberg (1989). Since then it has been applied to many problems involving different types of optimization processes (Bäck, Fogel, and Michalewicz, 2003). Because of its strength and its potential applications to many optimization problems, we introduce the scheme and highlight some of its basic elements with a concrete example in this chapter. Several variations of the genetic algorithm have emerged in the last decade under the collective name of evolutionary algorithms, and the scope of their applications has expanded into multi-objective optimization (Deb, 2001; Coello Coello, van Veldhuizen, and Lamont, 2002).
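To make inheritance, mutation, and fitness concrete before turning to the chapter's example, here is a minimal sketch of one generation of a binary-coded genetic algorithm. The population size, chromosome length, mutation rate, tournament selection rule, and toy fitness function (counting 1-bits) are all assumptions chosen for illustration, not the scheme as developed in this chapter.

```java
import java.util.Random;

// Minimal binary-coded genetic algorithm: one generation of selection,
// single-point crossover (inheritance), and bit-flip mutation.
public class GaSketch {
    static final int POP = 20, LEN = 16;   // assumed population/chromosome sizes
    static final double PMUT = 0.01;       // assumed mutation rate
    static final Random rng = new Random();

    // Toy fitness: the number of 1-bits in the chromosome.
    static int fitness(boolean[] c) {
        int f = 0;
        for (boolean b : c) if (b) f++;
        return f;
    }

    // Tournament selection: the fitter of two random chromosomes survives.
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[rng.nextInt(POP)], b = pop[rng.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Single-point crossover plus mutation produces one child.
    static boolean[] breed(boolean[] p1, boolean[] p2) {
        boolean[] child = new boolean[LEN];
        int cut = rng.nextInt(LEN);        // crossover point
        for (int i = 0; i < LEN; i++) {
            child[i] = (i < cut) ? p1[i] : p2[i];              // inheritance
            if (rng.nextDouble() < PMUT) child[i] = !child[i]; // mutation
        }
        return child;
    }

    // Build the next generation from fit parents of the current one.
    static boolean[][] nextGeneration(boolean[][] pop) {
        boolean[][] next = new boolean[POP][];
        for (int k = 0; k < POP; k++)
            next[k] = breed(select(pop), select(pop));
        return next;
    }
}
```

Repeated application of nextGeneration drives the population toward chromosomes of higher fitness: fit parents are selected more often, while crossover and mutation keep supplying new genetic makeups to test.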
Most problems in physics and engineering appear in the form of differential equations. For example, the motion of a classical particle is described by Newton's equation, a second-order ordinary differential equation involving a second-order derivative in time, and the motion of a quantum particle is described by the Schrödinger equation, a partial differential equation involving a first-order partial derivative in time and second-order partial derivatives in the coordinates. The dynamics and statics of bulk materials such as fluids and solids are all described by differential equations.
In this chapter, we introduce some basic numerical methods for solving ordinary differential equations. We will discuss the corresponding schemes for partial differential equations in Chapter 7 and some more advanced techniques for the many-particle Newton equation and the many-body Schrödinger equation in Chapters 8 and 10. Hydrodynamics and magnetohydrodynamics are treated in Chapter 9.
In general, we can classify ordinary differential equations into three major categories:
(1) initial-value problems, which involve time-dependent equations with given initial conditions;
(2) boundary-value problems, which involve differential equations with specified boundary conditions;
(3) eigenvalue problems, which involve solutions for selected parameters (eigenvalues) in the equations.
In reality, a problem may involve more than just one of the categories listed above. A common situation is that we have to separate several variables by introducing multipliers so that the initial-value problem is isolated from the boundary-value or eigenvalue problem.
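As a concrete illustration of the first category, the sketch below integrates Newton's equation for a unit-mass harmonic oscillator, x″ = −x, rewritten as the first-order system x′ = v, v′ = −x, using the simplest stepping rule, the Euler method. The oscillator, step size, and initial conditions are assumptions chosen for illustration.

```java
// Minimal sketch of an initial-value problem: the Euler method applied to
// Newton's equation for a unit-mass harmonic oscillator, x'' = -x,
// rewritten as the first-order system x' = v, v' = -x.
public class EulerSketch {
    public static void main(String[] args) {
        double x = 1.0, v = 0.0;   // assumed initial conditions
        double dt = 0.01;          // assumed time step
        for (int step = 0; step <= 1000; step++) {
            if (step % 100 == 0)
                System.out.printf("t=%5.2f  x=%8.5f  v=%8.5f%n", step * dt, x, v);
            double a = -x;         // acceleration from Newton's equation
            x += dt * v;           // advance position by one step
            v += dt * a;           // advance velocity by one step
        }
    }
}
```

Each step advances the solution forward in time from the given initial conditions; the more accurate schemes introduced in this chapter refine this same basic idea.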
Since the publication of the first edition of this book, I have received numerous comments and suggestions from readers all over the world, a far wider range than anticipated. This is a firm testament to what I claimed in the preface to the first edition: that computational physics is truly the foundation of computational science.
The Internet, which connects all the computerized parts of the world, has made it possible to communicate with students who are striving to learn modern science in distant places that I had never even heard of. The main motivation for a second edition of the book is to provide a new generation of science and engineering students with an up-to-date presentation of the subject.
In the last decade, we have witnessed steady progress in computational studies of scientific problems. Many complex issues are now analyzed and solved on computers. New paradigms of global-scale computing have emerged, such as the Grid and web computing. Computers are faster and come with more functions and capacity. There has never been a better time to study computational physics.
For this new edition, I have thoroughly revised each chapter of the book, incorporating many suggestions made by readers of the first edition. More examples are given, along with more sample programs and figures, to make the material easier to follow.
Computing has become a necessary means of scientific study. Even in ancient times, the quantification of gained knowledge played an essential role in the further development of mankind. In this chapter, we will discuss the role of computation in advancing scientific knowledge and outline the current status of computational science. We will only provide a quick tour of the subject here. A more detailed discussion on the development of computational science and computers can be found in Moreau (1984) and Nash (1990). Progress in parallel computing and global computing is elucidated in Koniges (2000), Foster and Kesselman (2003), and Abbas (2004).
Computation and science
Modern societies are not the only ones to rely on computation. Ancient societies also had to deal with quantifying their knowledge and events. It is interesting to see how the ancient societies developed their knowledge of numbers and calculations with different means and tools. There is evidence that carved bones and marked rocks were among the early tools used for recording numbers and values and for performing simple estimates more than 20,000 years ago.
The most commonly used number system today is the decimal system, which was in existence in India at least 1500 years ago. It has a radix (base) of 10. A number is represented by a string of figures, with each of the ten available figures (0–9) occupying a different decimal level; for example, 409 stands for 4 × 10² + 0 × 10¹ + 9 × 10⁰.
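As a small aside, here is a minimal sketch, not from the text, of how positional notation works: each figure is weighted by a power of the radix, so the value of a digit string can be accumulated one level at a time. The class and method names are invented for the example, and a radix-2 string is included only to show that the same rule covers any base.

```java
// Minimal sketch of positional notation: the value of a digit string in a
// given radix, e.g. "409" in radix 10 is 4*10^2 + 0*10^1 + 9*10^0 = 409.
public class RadixSketch {
    static long value(String figures, int radix) {
        long v = 0;
        for (char c : figures.toCharArray())
            v = v * radix + Character.digit(c, radix); // shift up one level, add figure
        return v;
    }

    public static void main(String[] args) {
        System.out.println(value("409", 10));  // prints 409
        System.out.println(value("1011", 2));  // prints 11
    }
}
```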
Although Java first gained fame through applets in Web browsers and then became a popular tool for creating large enterprise services, the developers of Java originally intended it for embedded applications in consumer devices such as TV remote controls and personal digital assistants (see Chapter 1). The term “embedded” generally refers to encapsulating a processor in a device, along with programs stored in non-volatile memory, to provide services and features specific to that device. Microcontrollers, for example, are the most common type of embedded processor.
By embedded Java we mean a device that contains either a conventional processor running a JVM or a special type of processor that directly executes Java bytecodes or assists a conventional processor in executing them. The motivations for device designers to embed Java depend on the particular device but, in general, Java provides flexibility, interactivity, networking, portability, and fast software development for embedded projects.
Today several types of commercial devices come with Java built into them. As mentioned in Chapter 1, over 600 million JavaCards have been sold around the world as of mid-2004, and several hundred cell phone models include Java.
Embedded applications typically must deal with very limited resources. A full-blown J2SE application on a desktop with a Swing graphical interface might require several megabytes of RAM.