In nuclear medicine scans a very small amount, typically nanogrammes, of radioactive material called a radiotracer is injected intravenously into the patient. The agent then accumulates in specific organs in the body. How much, how rapidly and where this uptake occurs are factors which can determine whether tissue is healthy or diseased, and can indicate the presence of, for example, tumours. There are three different modalities under the general umbrella of nuclear medicine. The most basic, planar scintigraphy, images the distribution of radioactive material in a single two-dimensional image, analogous to a planar X-ray scan. These types of scan are mostly used for whole-body screening for tumours, particularly bone and metastatic tumours. The most common radiotracers are chemical complexes of technetium (99mTc), an element which emits mono-energetic γ-rays at 140 keV. Various chemical complexes of 99mTc have been designed in order to target different organs in the body. The second type of scan, single photon emission computed tomography (SPECT), produces a series of contiguous two-dimensional images of the distribution of the radiotracer using the same agents as planar scintigraphy. There is, therefore, a direct analogy between planar X-ray/CT and planar scintigraphy/SPECT. A SPECT scan is most commonly used for myocardial perfusion, the so-called ‘nuclear cardiac stress test’. The final method is positron emission tomography (PET). This involves injection of a different type of radiotracer, one which emits positrons (positively charged electrons).
X-ray planar radiography is one of the mainstays of a radiology department, providing a first ‘screening’ for both acute injuries and suspected chronic diseases. Planar radiography is widely used to assess the degree of bone fracture in an acute injury, the presence of masses in lung cancer/emphysema and other airway pathologies, the presence of kidney stones, and diseases of the gastrointestinal (GI) tract. Depending upon the results of an X-ray scan, the patient may be referred for a full three-dimensional X-ray computed tomography (CT) scan for more detailed diagnosis.
The basis of both planar radiography and CT is the differential absorption of X-rays by various tissues. For example, bone and small calcifications absorb X-rays much more effectively than soft tissue. X-rays generated from a source are directed towards the patient, as shown in Figure 2.1(a). X-rays which pass through the patient are detected using a solid-state flat panel detector which is placed just below the patient. The detected X-ray energy is first converted into light, then into a voltage, and finally is digitized. The digital image represents a two-dimensional projection of the tissues lying between the X-ray source and the detector. In addition to being absorbed, X-rays can also be scattered as they pass through the body, and this gives rise to a background signal which reduces the image contrast. Therefore, an ‘anti-scatter grid’, shown in Figure 2.1(b), is used to ensure that only X-rays that pass directly through the body from source to detector are recorded.
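To make the projection idea concrete, here is a minimal sketch of the Beer–Lambert attenuation law along a single source-to-detector ray; the attenuation coefficients and tissue thicknesses are illustrative placeholders, not clinical values.

```python
import numpy as np

# Minimal sketch of X-ray transmission via the Beer-Lambert law:
# I = I0 * exp(-sum(mu_i * dx_i)) along each ray. The attenuation
# coefficients below are illustrative placeholders, not clinical values.

mu = {"soft_tissue": 0.02, "bone": 0.06}  # assumed coefficients, per mm

def transmitted_intensity(i0, path):
    """path: list of (tissue, thickness_mm) segments along one ray."""
    total_attenuation = sum(mu[tissue] * thickness for tissue, thickness in path)
    return i0 * np.exp(-total_attenuation)

# A ray crossing 30 mm of soft tissue, 10 mm of bone, then 30 mm of soft tissue
ray = [("soft_tissue", 30), ("bone", 10), ("soft_tissue", 30)]
print(transmitted_intensity(1.0, ray))  # fraction of incident intensity detected
```

Because bone is assigned a larger attenuation coefficient, rays crossing bone arrive at the detector noticeably weaker than those crossing soft tissue alone, which is exactly the differential absorption that produces image contrast.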
A clinician making a diagnosis based on medical images looks for a number of different types of indication. These could be changes in shape, for example enlargement or shrinkage of a particular structure, changes in image intensity within that structure compared to normal tissue and/or the appearance of features such as lesions which are normally not seen. A full diagnosis may be based upon information from several different imaging modalities, which can be correlative or additive in terms of their information content.
Every year there are significant engineering advances which lead to improvements in the instrumentation of each of the medical imaging modalities covered in this book. One must be able to assess in a quantitative manner the improvements that such designs provide. These quantitative measures should also be directly related to the parameters which are important to a clinician for diagnosis. The three most important of these criteria are the spatial resolution, the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). For example, Figure 1.1(a) shows a magnetic resonance image with two very small white-matter lesions indicated by the arrows. The spatial resolution in this image is high enough to detect and resolve the two lesions. If the spatial resolution were four times worse, as shown in Figure 1.1(b), then only the larger of the two lesions would be visible. If the image SNR were four times lower, as illustrated in Figure 1.1(c), then only the brighter of the two lesions would be, barely, visible.
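As an illustration of how these criteria can be quantified, the following sketch estimates SNR and CNR from regions of interest (ROIs) of an image array, using the common conventions SNR = mean(signal)/std(noise) and CNR = |mean(A) − mean(B)|/std(noise); the synthetic image and ROI positions are assumptions made for the example.

```python
import numpy as np

# Hedged sketch: estimating SNR and CNR from regions of interest of a
# 2D image array. Definitions follow the common conventions
#   SNR = mean(signal ROI) / std(background ROI)
#   CNR = |mean(ROI A) - mean(ROI B)| / std(background ROI)

def snr(image, signal_roi, background_roi):
    return image[signal_roi].mean() / image[background_roi].std()

def cnr(image, roi_a, roi_b, background_roi):
    return abs(image[roi_a].mean() - image[roi_b].mean()) / image[background_roi].std()

# Usage with a synthetic image: a small bright lesion on a noisy background.
rng = np.random.default_rng(0)
image = rng.normal(100, 5, size=(64, 64))   # background: mean 100, noise sigma 5
image[30:34, 30:34] += 20                   # hypothetical lesion, +20 intensity

lesion = (slice(30, 34), slice(30, 34))
tissue = (slice(0, 10), slice(0, 10))
print(snr(image, lesion, tissue))           # roughly (100 + 20) / 5
print(cnr(image, lesion, tissue, tissue))   # roughly 20 / 5
```

Halving the lesion intensity or doubling the noise standard deviation in this toy image halves the CNR, mirroring the degradation illustrated in Figure 1.1(c).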
Tools developed by statistical physicists are of increasing importance in the analysis of complex biological systems. Physics in Molecular Biology, first published in 2005, discusses how physics can be used in modeling life. It begins by summarizing important biological concepts, emphasizing how they differ from the systems normally studied in physics. A variety of topics, ranging from the properties of single molecules to the dynamics of macro-evolution, are studied in terms of simple mathematical models. The main focus of the book is on genes and proteins and how they build systems that compute and respond. The discussion develops from simple to complex systems, and from small-scale to large-scale phenomena. This book will inspire advanced undergraduates and graduate students in physics to approach biological subjects from a physicist's point of view. It is self-contained, requiring no background knowledge of biology, and only familiarity with basic concepts from physics, such as forces, energy, and entropy.
Molecular and Cellular Biophysics provides advanced undergraduate and graduate students with a foundation in the basic concepts of biophysics. Students who have taken physical chemistry and calculus courses will find this book an accessible and valuable aid in learning how these concepts can be used in biological research. The text provides a rigorous treatment of the fundamental theories in biophysics and illustrates their application with examples. Conformational transitions of proteins are studied first using thermodynamics, and subsequently with kinetics. Allosteric theory is developed as the synthesis of conformational transitions and association reactions. Basic ideas of thermodynamics and kinetics are applied to topics such as protein folding, enzyme catalysis and ion channel permeation. These concepts are then used as the building blocks in a treatment of membrane excitability. Through these examples, students will gain an understanding of the general importance and broad applicability of biophysical principles to biological problems.
During development cells and tissues undergo changes in pattern and form that employ a wider range of physical mechanisms than at any other time in an organism's life. This book shows how physics can be used to analyze these biological phenomena. Written to be accessible to both biologists and physicists, major stages and components of the biological development process are introduced and then analyzed from the viewpoint of physics. The presentation of physical models requires no mathematics beyond basic calculus. Physical concepts introduced include diffusion, viscosity and elasticity, adhesion, dynamical systems, electrical potential, percolation, fractals, reaction-diffusion systems, and cellular automata. With full-color figures throughout, this comprehensive textbook teaches biophysics by application to developmental biology and is suitable for graduate and upper-undergraduate courses in physics and biology.
In this appendix we provide basic concepts aimed at introducing the formalism of networks. We first introduce graphs and work through some simple examples, and then discuss the topological properties of networks. Other general presentations of network structure can be found in review articles and books.
A network (or graph, in more mathematical language) is defined as a set of N vertices (or nodes) connected by links (or edges). Links, and consequently the whole graph, can be either directed (oriented), if a direction is specified as in Fig. A.1a, or undirected (not oriented), if no direction is specified as in Fig. A.1b. More precisely, undirected links are really bidirectional ones, since they can be traversed in both directions. For this reason an undirected graph can always be thought of as a directed one in which each undirected link is replaced by two directed links pointing in opposite directions (see Fig. A.1c). A link in a directed network is said to be reciprocated if another link between the same pair of vertices, but with the opposite direction, also exists. Therefore, an undirected network can be regarded as a special case of a directed network in which all links are reciprocated. The links of a network may also carry a number, referred to as the weight of the edge, representing the strength of the corresponding interaction. In such a case one speaks of a weighted network. In the present appendix we do not consider weighted networks explicitly.
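The directed/undirected distinction and the notion of reciprocated links can be captured in a few lines of code. The sketch below is illustrative rather than taken from the source; it stores each undirected link as two reciprocated directed links, mirroring Fig. A.1c.

```python
from collections import defaultdict

# Minimal sketch of a graph as adjacency sets. Node labels are hypothetical.
# An undirected link (u, v) is stored as the two directed links u->v and v->u.

class Graph:
    def __init__(self, directed=True):
        self.directed = directed
        self.adj = defaultdict(set)

    def add_link(self, u, v):
        self.adj[u].add(v)
        if not self.directed:        # undirected = two reciprocated directed links
            self.adj[v].add(u)

    def is_reciprocated(self, u, v):
        return v in self.adj[u] and u in self.adj[v]

g = Graph(directed=True)
g.add_link("A", "B")
g.add_link("B", "A")                 # reciprocates A -> B
print(g.is_reciprocated("A", "B"))   # True
```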
Protein interactions can be identified by a multitude of experimental methods. In fact, the IntAct database of molecular interactions currently lists about 170 different experimental methods and variations thereof that can be used to detect and characterize protein–protein interactions (the main classes are listed in Table 4.1). While we present the commonly used methods in this chapter, we will focus on the few technologies which are used in high-throughput studies and have thus generated the vast majority of interaction data available today: the yeast two-hybrid (Y2H) assay and protein complex purification and identification by mass spectrometry (MS) (Table 4.2). These two methods represent two fundamentally different sources of interaction data, and thus it is important to understand how they work and what strengths and weaknesses each of them has. This is especially important for theoretical analyses, which often draw conclusions from datasets that may not be adequate for certain studies. For example, membrane proteins are underrepresented in both yeast two-hybrid and complex purification studies.
Complex versus binary interactions
It is important to note that most methods detect either direct binary interactions or indirect associations, without revealing which proteins are in direct physical contact. The yeast two-hybrid system usually detects direct binary interactions, while complex purification detects the components of complexes (Fig. 4.1). Complex data are often interpreted as if the co-purifying proteins interact in a particular manner, consistent with either a spoke or a matrix model.
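The difference between the two expansions is easy to state algorithmically: the spoke model links the bait to each co-purified prey, while the matrix model links every pair of complex members. A hedged sketch, with hypothetical protein identifiers:

```python
from itertools import combinations

# Illustrative sketch (not from the source) of the two standard ways of
# expanding a purified complex into binary interactions.

def spoke_model(bait, preys):
    """Connect the bait to every co-purified prey only."""
    return {frozenset((bait, p)) for p in preys}

def matrix_model(members):
    """Connect every pair of co-purified proteins."""
    return {frozenset(pair) for pair in combinations(members, 2)}

complex_members = ["Bait", "P1", "P2", "P3"]
print(len(spoke_model("Bait", ["P1", "P2", "P3"])))  # 3 inferred interactions
print(len(matrix_model(complex_members)))            # 6 inferred interactions
```

For a complex of n proteins the spoke model infers n − 1 interactions and the matrix model n(n − 1)/2, which is why the choice of model strongly affects downstream network statistics.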
Biologists now have access to a virtually complete map of all the genes in the human genome, and in the genomes of many other species. They are aggressively assembling a similarly detailed knowledge of the proteome, the full collection of proteins encoded by those genes, and the transcriptome, the diverse set of mRNA molecules that serve as templates for protein manufacture. We increasingly know the “parts list” of molecular biology. Yet we still lack a deep understanding of how all these parts work together to support the complex and coherent activity of the living cell; how cells and organisms manage the concurrent tasks of production and reproduction, signalling and regulation, in fluctuating and often hostile environments.
Building a more holistic understanding of cell biology is the aim of the new discipline of systems biology, which views the living cell as a network of interacting processes and gives concrete form to the vision of François Jacob, one of the pioneers in the study of genetic regulatory mechanisms, who spoke in the 1960s of the “logic of life.” Put simply, systems biologists regard the cell as a vastly complex biological “circuit board,” which orchestrates diverse components and modules to achieve robust, reliable and predictable operation. Systems biology suggests that the mechanisms of cell biology can be related to the information sciences, to ideas about information flow and processing in decentralized networks.
This view, of course, has long been implicit in the study of cell signalling and other key pathways of molecular biochemistry.
This section provides an overview of some of the statistical tools and concepts which are useful for data analysis and the study of complex networks. Our emphasis will be on the practical application of probability theory rather than its mathematical foundations, which is why we confine ourselves to self-consistent definitions of the basic ingredients of applied statistics rather than their derivation from first principles. For those who desire a more rigorous and more detailed treatment of the material, celebrated introductions to probability theory are available that discuss the contents of this chapter in much greater depth.
Events and probabilities
Tossing a coin – with an outcome of ‘heads’ or ‘tails’ – is one of the simplest examples of a probabilistic event. More complicated examples could be to obtain ‘five’ and ‘two’ when throwing a pair of dice, the ball landing on a red number in a game of roulette, or the spreading of an infection from an infected individual to a healthy one. In all these cases the set of all possible outcomes of an experiment is the sample space. An event can be defined as any member (or subset) of the sample space.
Technical part. In set theory we can write this very simply: Ω is the sample space, and any set A ⊂ Ω is an event in the following sense. […]
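As a concrete illustration of these definitions (an illustration of ours, not taken from the source), the following sketch enumerates the sample space Ω for the pair-of-dice example above and computes the probability of the event “one die shows five and the other two” under equally likely outcomes.

```python
from fractions import Fraction
from itertools import product

# Sample space Omega for throwing a pair of dice: all ordered outcomes.
omega = set(product(range(1, 7), repeat=2))          # 36 outcomes

# Event A (a subset of Omega): one die shows five and the other shows two.
event = {(5, 2), (2, 5)}

# With equally likely outcomes, P(A) = |A| / |Omega|.
prob = Fraction(len(event), len(omega))
print(prob)  # 1/18
```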
Complex systems that describe a wide range of interactions in nature and society can be represented as networks. In general terms, such networks are made of nodes, which represent the objects in a system, and connections that link the nodes, which represent interactions between the objects. In mathematical terms, a network is a graph which comprises vertices and edges (undirected links) or arcs (directed links). Examples of complex networks include the World Wide Web, social networks of acquaintances between individuals, food webs, metabolic networks, transcriptional networks, signaling networks, neuronal networks and several others. Although the study of networks in the form of graphs is one of the fundamental areas of discrete mathematics, much of our understanding of the underlying organizational principles of complex real-world networks has come to light only recently. While traditionally most complex networks have been modeled as random graphs, it is becoming increasingly clear that the topology and evolution of real networks are not random but are governed by robust design principles.
A number of biological systems, ranging from physical interactions between biomolecules to neuronal connections, can be represented as networks. Perhaps the classic example of a biological network is the network of metabolic pathways, which is a representation of all the enzymatic interconversions between small molecules in a cell. In such a network, nodes represent small molecules, which are either substrates or products of enzymatic reactions, and directed edges represent the enzymatic reactions that convert a substrate into a product.
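As an illustration, such a network can be stored as a directed adjacency structure whose edges are annotated with the catalysing enzyme; the three-reaction glycolysis fragment below is a standard textbook example, but the encoding itself is our own sketch.

```python
# Toy sketch of a metabolic network fragment as a directed, enzyme-labelled
# graph. The three glycolysis reactions are standard; the encoding is ours.

# Each directed edge (substrate -> product) carries its enzyme as a label.
reactions = [
    ("glucose", "glucose-6-phosphate", "hexokinase"),
    ("glucose-6-phosphate", "fructose-6-phosphate", "phosphoglucose isomerase"),
    ("fructose-6-phosphate", "fructose-1,6-bisphosphate", "phosphofructokinase"),
]

adjacency = {}
for substrate, product_, enzyme in reactions:
    adjacency.setdefault(substrate, []).append((product_, enzyme))

for substrate, targets in adjacency.items():
    for product_, enzyme in targets:
        print(f"{substrate} -> {product_}  [{enzyme}]")
```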
Complexity in biological systems is the rule rather than the exception. Life seems to depend on structures that not only perform a wide variety of functions, but are adaptable and robust at the same time. For a long time the only scientific approach available to study these complex biological systems has been a purely descriptive one. In the second half of the twentieth century molecular biology emerged, along with the development of a variety of experimental methods that allowed an ever-deeper exploration of the constituent parts of a cell, as well as the ways in which these parts assemble. Our chromosomes were shown to be made of tightly packed DNA double helixes that store our genetic code. RNA polymerase, the protein complex responsible for transcription of the genetic code to messenger RNA, has been identified, along with many major constituents of the fascinating machinery between genetic code and cellular phenotype. As mRNA molecules leave the nucleus they are met by ribosomes, protein complexes that read the genetic code using groups of three mRNA letters to identify the corresponding amino-acid sequence, and thus translate the genetic code into proteins. Proteins are responsible for the majority of biological functions driving a living cell: they orchestrate metabolic reactions, form structural elements like the cytoskeleton, keep track of the extra- and intracellular environment and transmit the signals that constantly reshape gene transcription so that the cell can express precisely the proteins it needs.
Specific sensory and signaling systems allow living cells to gather and transmit information from the environment. All perceived signals are used to adjust cellular metabolism, growth and development to environmental conditions. At the same time the cell is able to sense the intracellular milieu, e.g. energy and nutrient availability, redox state and so on, and accordingly adapts its physiological state. The importance of such changes in cellular processes is underlined by the presence of multiple regulatory systems (see Table 3.1), the most important of which controls the rate of transcription of a gene. The extremely different cell types in higher eukaryotes are a consequence of differences in expression patterns, as well as of cellular proliferation and differentiation, which are controlled by complex regulatory circuits that give rise to space- and time-dependent transcriptional patterns. Thus, understanding the dynamic link between genotype and phenotype remains a central issue in biology.
Signals sensed by the cell are translated into changes in the rate of transcription of well-defined groups of genes through the activation of specific proteins (transcription factors, TFs). TFs have high affinity for specific short sequences located upstream of genes and regulate transcription either positively or negatively. The binding of a TF to its target on a gene's promoter controls when expression occurs, at what level, under what conditions, and in which cells or tissues [662]. Interactions with other proteins, chromatin remodeling and modification complexes, and the general transcription machinery affect the DNA-binding characteristics of a TF, thereby influencing the rate of transcription.
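One common way to formalize positive and negative regulation, offered here as a hedged sketch rather than anything stated in the source, is to model the transcription rate as a Hill function of active TF concentration; beta, K and n below are illustrative parameters (maximal rate, binding threshold, cooperativity).

```python
import numpy as np

# Assumed model, for illustration only: Hill-function transcription rates
# for a gene controlled by an activating or a repressing TF.
#   activator: rate = beta * tf^n / (K^n + tf^n)
#   repressor: rate = beta * K^n / (K^n + tf^n)

def activator_rate(tf, beta=1.0, K=0.5, n=2):
    return beta * tf**n / (K**n + tf**n)

def repressor_rate(tf, beta=1.0, K=0.5, n=2):
    return beta * K**n / (K**n + tf**n)

tf_levels = np.linspace(0, 2, 5)          # illustrative TF concentrations
print(activator_rate(tf_levels))          # rises toward beta as TF accumulates
print(repressor_rate(tf_levels))          # falls toward zero as TF accumulates
```

In this picture, positive regulation corresponds to the rising activator curve and negative regulation to the falling repressor curve, with the threshold K setting the TF concentration at which the response is half-maximal.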