By
Michael Wiescher, Department of Physics, University of Notre Dame, Notre Dame, IN 46556, USA
Edited by
Jorge G. Hirsch, Center of Research and Advanced Studies, National Polytechnic Institute, Mexico City, and Danny Page, Universidad Nacional Autónoma de México
This paper presents a discussion of the characteristic observables of stellar explosions and compares observed signatures, such as the light curve and the abundance distribution, with the respective values predicted by nucleosynthesis model calculations. Both the predicted energy generation and the abundance distribution in the ejecta depend critically on precise knowledge of the reaction rates and decay processes involved in the nucleosynthesis reaction sequences. The important reactions and their influence on the production of the observed abundances will be discussed. The nucleosynthesis scenarios presented here are all based on explosive events at high temperature and density. Many of the nuclear reactions involve unstable isotopes and are not yet well understood. To reduce the experimental uncertainties, several radioactive beam experiments will be discussed which will help achieve a better understanding of the associated nucleosynthesis.
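For context, the rate sensitivity invoked above follows from the standard expression for the thermonuclear reaction rate per particle pair (a textbook result, quoted here for orientation rather than taken from the paper):

\[
\langle \sigma v \rangle = \left( \frac{8}{\pi \mu} \right)^{1/2} (kT)^{-3/2} \int_0^\infty \sigma(E)\, E\, e^{-E/kT}\, dE ,
\]

where \( \mu \) is the reduced mass of the interacting nuclei and \( \sigma(E) \) the energy-dependent cross section. Because the integrand is exponentially peaked, modest uncertainties in \( \sigma(E) \), particularly for unstable isotopes, translate into large uncertainties in the predicted energy generation and ejecta abundances.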
Introduction
Historically, the field of nuclear astrophysics has been concerned with the interpretation of the observed elemental and isotopic abundance distribution (Anders & Grevesse 1989) and with the formulation and description of the originating nucleosynthesis processes (Burbidge et al. 1957; Wagoner 1973; Fowler 1984). Each of these nucleosynthesis processes can be characterized by a specific signature in luminosity and/or in the resulting abundance distribution.
The Mexican School on Nuclear Astrophysics was held in the Hotel Castillo Santa Cecilia, Guanajuato, México, from August 13 to August 20, 1997. The goal of the school was to gather together researchers and graduate students working on related problems in astrophysics: to present areas of current research, to discuss some important open problems, and to establish and strengthen links between researchers. The school consisted of eight courses, and the material presented in them forms the basis of this book.
Non–stop interaction between the participants, through both formal and informal discussions, gave the school a relaxed and productive atmosphere. It provided the opportunity for researchers from a wide range of backgrounds to share their interests in and different perspectives of the latest developments in astrophysics.
The productivity of the meeting reflected the strong interest of the Mexican and Latin American scientific communities in the subjects covered. Indeed, a second school is planned for 1999.
Very sadly, Professor David Schramm died not long after the conference, in December 1997. His lectures at the School were fascinating. He will be sorely missed by us and by the rest of the astrophysics community.
By
Luis F. Rodriguez, Instituto de Astronomía, UNAM, Apdo. Postal 70-264, México, DF, 04510, MEXICO
A brief review of key concepts in multifrequency observational astronomy is presented. The basic physical scales in astronomy as well as the concept of stellar evolution are also introduced. As examples of the application of multifrequency astronomy, recent results related to the observational search for black holes in binary systems in our Galaxy and in the centers of other galaxies are described. Finally, the recently discovered microquasars are discussed. These are galactic sources that mimic on a smaller scale the remarkable relativistic phenomena observed in distant quasars.
Introduction
There have been many outstanding observational and theoretical discoveries in astronomy during the twentieth century. However, this closing century will most probably be remembered not for these achievements, but for being the time when astronomers started observing the Cosmos with a variety of techniques, and in particular when we started to use all the “windows” of the electromagnetic spectrum.
During this century we started to investigate the Universe systematically using:
The whole electromagnetic spectrum. At the beginning of the century, practically all the data came from visible photons (that is, those detected by the human eye) only.
Cosmic rays. These charged particles hit the Earth's atmosphere and can be detected through the air showers they produce. The origin of the most energetic cosmic rays (10¹⁹ eV or more) remains a mystery.
By
Petr Vogel, Department of Physics, California Institute of Technology, Pasadena, CA 91125, USA
In these four lectures I will present a brief and rather elementary description of the physics of massive neutrinos as it emerges from studies involving nuclear physics, particle physics, astrophysics and cosmology. The lectures are meant for physicists who are not experts in this field, which I believe covers most of the participants in this School, and many potential readers elsewhere. I hope that such readers will find here enough information to understand and appreciate the connection between the hunt for neutrino mass and mixing described here and their own field of expertise.
Throughout I will use original references sparingly. Instead I refer to several monographs written and published during the last decade [Boehm & Vogel (1992), Kayser, Gibrat-Debu & Perrier (1989), Winter (1991), Mohapatra & Pal (1991), Kim & Pevsner (1993), Klapdor-Kleingrothaus & Staudt (1995)] where an interested reader can find references to the original papers. When appropriate I will also refer to review papers on various aspects of the neutrino mass or related topics. For the experimental data, including the list of the most recent original experimental papers, the best source is the Review of Particle Physics, periodically updated, with the latest printed version in PDG (1996). This very useful publication is updated even between printed editions on the World Wide Web at http://pdg.lbl.gov/.
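As a reference point for what "mass and mixing" means observationally, the two-flavor vacuum oscillation probability (a standard textbook formula, given here for orientation rather than taken from these lectures) is

\[
P(\nu_\alpha \to \nu_\beta) = \sin^2 2\theta \, \sin^2\!\left( \frac{\Delta m^2 L}{4E} \right)
\simeq \sin^2 2\theta \, \sin^2\!\left( 1.27\, \frac{\Delta m^2[\mathrm{eV}^2]\, L[\mathrm{km}]}{E[\mathrm{GeV}]} \right),
\]

where \( \theta \) is the mixing angle, \( \Delta m^2 \) the squared-mass splitting, \( L \) the baseline and \( E \) the neutrino energy. An observable effect requires both a nonzero mass splitting and nonzero mixing, which is why the two are hunted together.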
Chapter 1 set the stage for the rest of the book: it reviewed Newton's equations and the basic concepts of Newton's formulation of mechanics. The discussion in that chapter was applied mostly to dynamical systems whose arena of motion is Euclidean three-dimensional space, in which it is natural to use Cartesian coordinates. However, we referred on occasion to other situations, such as one-dimensional systems in which a particle is not free to move in Euclidean 3-space but only in a restricted region of it. Such a system is said to be constrained: its arena of motion, or, as we shall define below, its configuration manifold, turns out in general to be neither Euclidean nor three dimensional (nor 3N-dimensional, if there are N particles involved). In such cases the equations of motion must include information about the forces that give rise to the constraints.
In this chapter we show how the equations of motion can be rewritten in the appropriate configuration manifold in such a way that the constraints are taken into account from the outset. The result is the Lagrangian formulation of dynamics (the equations of motion are then called Lagrange's equations). We should emphasize that the physical content of Lagrange's equations is the same as that of Newton's. But in addition to being logically more appealing, Lagrange's formulation has several important advantages.
Perhaps the first evident advantage is that the Lagrangian formulation is easier to apply to dynamical systems other than the simplest.
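For reference, the standard form of Lagrange's equations in generalized coordinates \( q_\alpha \) on the configuration manifold (summarized here; the chapter develops them in detail) is

\[
L = T - V, \qquad \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_\alpha} - \frac{\partial L}{\partial q_\alpha} = 0 ,
\]

and because the \( q_\alpha \) parametrize only the motions allowed by the constraints, the constraint forces never appear explicitly in the equations.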
By
Mike Guidry, Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996-1200, USA, and Theoretical and Computational Physics Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6373, USA
The mechanism for a core-collapse or type II supernova is a fundamental unresolved problem in astrophysics. Although there is general agreement on the outlines of the mechanism, a detailed model that includes microphysics self-consistently and leads to robust explosions having the observational characteristics of type II supernovae does not exist. Within the past five years supernova modeling has moved from earlier one-dimensional hydrodynamical simulations with approximate microphysics to multi-dimensional hydrodynamics on the one hand, and to much more detailed microphysics on the other. These simulations suggest that large-scale and rapid convective effects are common in the core during the first hundreds of milliseconds after core collapse, and may play a role in the mechanism. However, the most recent simulations indicate that the proper treatment of neutrinos is probably even more important than convective effects in producing successful explosions. In this series of lectures I will give a general overview of the core-collapse problem, and will discuss the role of convection and neutrino transport in its resolution.
Introduction
A type II supernova is one of the most spectacular events in nature, and is a likely source of the heavy elements that are produced in the rapid neutron capture or r-process. Considerable progress has been made over the past two decades in understanding the mechanisms responsible for such events.
The motion of rigid bodies, also called rotational dynamics, is one of the oldest branches of classical mechanics. Interest in this field has grown recently, motivated largely by problems of stability and control of rigid body motions, for example in robotics (for manufacturing in particular) and in satellite physics. Our discussion of rigid-body dynamics will first be through Euler's equations of motion and then through the Lagrangian and Hamiltonian formalisms. The configuration manifold of rotational dynamics has properties that are different from those of the manifolds we have so far been discussing, so the analysis presents special problems, in particular in the Lagrangian and Hamiltonian formalisms.
INTRODUCTION
RIGIDITY AND KINEMATICS
Discussions of rigid bodies often rely on intuitive notions of rigidity. We want to define rigidity carefully, to show how the definition leads to the intuitive concept, and then to draw further inferences from it.
DEFINITION
A rigid body is an extended collection of point particles constrained so that the distance between any two of them remains constant.
To see how this leads to the intuitive idea of rigidity, consider any three points A, B, and C in the body. The definition implies that the lengths of the three lines connecting them remain constant, and then Euclidean geometry implies that so do the angles: triangle ABC moves rigidly. Since this is true for every set of three points, it holds also by triangulation for sets of more than three, so the entire set of points moves rigidly. In other words, fix the lengths and the angles will take care of themselves.
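The step from fixed lengths to fixed angles is just the law of cosines: for triangle ABC with sides \( a = BC \), \( b = CA \), \( c = AB \),

\[
\cos A = \frac{b^2 + c^2 - a^2}{2bc},
\]

so if the three side lengths are constant, the angle at each vertex is constant as well.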
Image compression is required for preview functionality in large image databases (e.g. the Hubble Space Telescope archive); for interactive sky atlases linking image and catalog information (e.g. Aladin, Strasbourg Observatory); and for image data transmission, where a global view is communicated to the user first, followed by more detail if desired.
Subject to an appropriate noise model, much of what is discussed in this chapter relates to the faithful reproducibility of faint and sharp features in images from any field (astronomical, medical, etc.).
Textual compression (e.g. Lempel-Ziv, available in the Unix compress command) differs from image compression. In astronomy, the following methods and implementations have wide currency:
hcompress (White, Postman and Lattanzi, 1992). This method is most similar in spirit to the approach described in this chapter, and some comparisons are shown below. hcompress uses a Haar wavelet transform approach (a minimal sketch of the Haar idea follows this list), whereas we argue below for a non-wavelet (multiscale) approach.
FITSPRESS (Press, 1992; Press et al., 1992) is based on the non-isotropic Daubechies wavelet transform, and truncation of wavelet coefficients. The approach described below uses an isotropic multiresolution transform.
COMPFITS (Véran and Wright, 1994) relies on an image decomposition by bit-plane. Low-order, i.e. noisy, bit-planes may then be suppressed. Any effective lossless compression method can be used on the high-order bit-planes.
JPEG (Hung, 1993), although found to provide photometric and astrometric results of high quality (Dubaj, 1994), is not currently well-adapted for astronomical input images.
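To make the wavelet end of this comparison concrete, here is a minimal sketch of Haar-based compression in the spirit of hcompress. This is an illustration under simplifying assumptions (one transform level, a grayscale image with even dimensions, and simple hard thresholding of detail coefficients), not the actual hcompress algorithm, which quantizes and entropy-codes the coefficients:

```python
# Minimal sketch of Haar-wavelet image compression (illustration only).
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: returns the four subbands."""
    a = img.astype(float)
    # Transform rows: average and difference of adjacent pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform columns of each result.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # smooth approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Invert one level of the 2D Haar transform."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def compress(img, threshold):
    """Zero small detail coefficients; mostly-zero arrays encode cheaply."""
    ll, lh, hl, hh = haar2d(img)
    bands = [np.where(np.abs(b) < threshold, 0.0, b) for b in (lh, hl, hh)]
    return ihaar2d(ll, *bands)
```

Setting small detail coefficients to zero is what makes the coefficient array cheap to encode losslessly. The approach argued for below replaces this Haar transform with an isotropic multiscale transform better matched to roughly round astronomical sources.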
Astronomical images typically contain a large set of point-like sources (the stars), some quasi point-like objects (faint galaxies, double stars) and some complex and diffuse structures (galaxies, nebulae, planetary nebulae, clusters, etc.). These objects are often hierarchically organized: a star in a small nebula, itself embedded in a galaxy arm, itself included in a galaxy, and so on. We define a vision model as the sequence of operations required for automated image analysis. Taking into account the scientific purposes, the characteristics of the objects and the existence of hierarchical structures, astronomical images need specific vision models.
For robotic and industrial images, the objects to be detected and analyzed are solid bodies, seen by their surfaces. As a consequence, the classical vision model for these images is based on the detection of surface edges. We first applied this concept to astronomical imagery (Bijaoui et al., 1978), choosing the Laplacian of the intensity to define the edge lines. The results are independent of large-scale spatial variations, such as those due to the sky background superimposed on the object images. The main disadvantage of the resulting model lies in the difficulty of obtaining a correct object classification: astronomical sources cannot be accurately recognized from their edges.
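A minimal sketch of such a Laplacian edge model is given below. It is illustrative only: the Gaussian smoothing scale and the zero-crossing test are assumptions of this sketch, not the procedure of Bijaoui et al.:

```python
# Sketch of Laplacian edge detection on an astronomical image.
# The Laplacian suppresses slowly varying backgrounds (the sky) because
# the second derivative of a near-constant or linear gradient is ~ zero;
# edges show up as zero-crossings of the filtered image.
import numpy as np
from scipy import ndimage

def laplacian_edges(img, sigma=2.0):
    """Return a boolean mask of zero-crossings of the smoothed Laplacian."""
    # Smooth first so noise does not dominate the second derivative.
    lap = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    # Mark a pixel as an edge if the Laplacian changes sign at it.
    sign = lap > 0
    edges = np.zeros_like(sign)
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return edges
```

As noted above, the filter output is insensitive to the slowly varying sky background, but the resulting edge maps say little about what kind of source produced them, which is the classification difficulty just described.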
We encounter this vision problem of diffuse structures not only in astronomy but also in many other fields, such as remote sensing, hydrodynamic flows, or biological studies. Specific vision models have been implemented for these kinds of images.