In this book, we have aimed to explain the principles of computational neuroscience by showing how the underlying mechanisms are modelled and by presenting critical accounts of examples of their use. In some chapters, we have placed the modelling work described in its historical context where we felt this would be interesting and useful. We now make some brief comments about where the field of computational neuroscience came from and where it might be going.
Candidate models for how neurons or networks operate must be validated against experimental data. For this, it is necessary to have a good model for the measurement itself. For example, to compare model predictions from cortical networks with electrical signals recorded by electrodes placed on the cortical surface or the head scalp, the so-called volume conductor theory is required to make a proper quantitative link between the network activity and the measured signals. Here we describe the physics and modelling of electric, magnetic and other measurements of brain activity. The physical principles behind electric and magnetic stimulation of brain tissue are the same as those governing electric and magnetic measurements, and are also outlined.
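As a small illustration of the kind of link volume conductor theory provides, the sketch below evaluates the standard point-source approximation for the extracellular potential in an infinite homogeneous ohmic medium, φ = I / (4πσr). The parameter values and function name are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Point-source approximation from volume conductor theory: the extracellular
# potential generated by a transmembrane point current I (nA) in an infinite
# homogeneous ohmic medium with conductivity sigma (S/m), at distance r (um).
# All numerical values here are illustrative.

def point_source_potential(I_nA, r_um, sigma=0.3):
    """Extracellular potential (mV) at distance r_um from a point current source."""
    I = I_nA * 1e-9            # nA -> A
    r = r_um * 1e-6            # um -> m
    phi = I / (4.0 * np.pi * sigma * r)   # volts
    return phi * 1e3           # V -> mV

# Potential produced by a 1 nA source at a few distances
for r in (10.0, 50.0, 100.0):
    print(f"r = {r:5.1f} um  ->  phi = {point_source_potential(1.0, r):.4f} mV")

# A multi-compartment neuron model contributes a sum of such terms, one per
# compartment's transmembrane current; line-source formulas refine this for
# cylindrical segments.
```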
This book is about how to construct and use computational models of specific parts of the nervous system, such as a neuron, a part of a neuron or a network of neurons, as well as their measurable signals. It is designed to be read by people from a wide range of backgrounds in the neurobiological, physical and computational sciences. The word ‘model’ can mean different things in different disciplines, and even researchers in the same field may disagree on the nuances of its meaning. For example, to biologists, this term can mean ‘animal model’. In particle physics, the ‘standard model’ is a step towards a complete theory of fundamental particles and interactions. We therefore attempt to clarify what we mean by modelling and computational models in the context of neuroscience. We discuss what might be called the philosophy of modelling: general issues in computational modelling that recur throughout the book.
Plasticity in the nervous system describes its ability to adapt to change, whether in response to new information, fluctuations in the internal environment or external injury. In each case, computational models at different levels of detail are required. Given that memory traces are stored in modifiable synapses, modelling the storage and retrieval of information requires models of the modifiable synapse and of a network of neurons. We discuss the processing ability of the network as a whole, given a particular mechanism for synaptic modification that is modelled in less detail. Neurons also exhibit homeostatic plasticity, the ability to maintain their firing activity in the face of a fluctuating environment. This can involve modulation of intrinsic membrane currents, as well as synaptic plasticity. Homeostatic plasticity must work in concert with the synaptic plasticity underlying learning and memory so that neural networks can retain and recall stored information whilst remaining responsive to new information.
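To make the idea of a synaptic modification rule "modelled in less detail" concrete, the sketch below applies a rate-based Hebbian rule with Oja's stabilising term, which bounds the weight vector and so plays a role loosely analogous to a homeostatic constraint. Oja's rule is used here only as a stand-in example; it is not claimed to be the rule discussed in the chapter, and all parameter values are illustrative.

```python
import numpy as np

# Rate-based Hebbian learning with Oja's stabilising term:
#   dw = eta * y * (x - y * w)
# Plain Hebbian growth (eta * y * x) is unbounded; the -y^2 * w term keeps
# the weight norm near 1. Names and values are illustrative assumptions.

rng = np.random.default_rng(0)
n_inputs = 10
w = rng.normal(0.0, 0.1, n_inputs)     # synaptic weights
eta = 0.01                             # learning rate

for step in range(5000):
    x = rng.normal(0.0, 1.0, n_inputs) # presynaptic firing rates (centred)
    y = w @ x                          # postsynaptic rate (linear unit)
    w += eta * y * (x - y * w)         # Hebbian term plus Oja's decay

print("final weight norm:", np.linalg.norm(w))   # settles close to 1
```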
In this chapter, a range of models with fewer details than those in previous chapters is considered. These simplified neuron models are particularly useful for incorporating into networks, as they are computationally more efficient and can sometimes be analysed mathematically. Reduced compartmental models can be derived from large compartmental models by lumping together compartments. Additionally, the number of gating variables can be reduced whilst retaining much of the dynamical flavour of a model. These approaches make it easier to analyse the function of the model using the mathematics of dynamical systems. In the yet simpler integrate-and-fire model, first introduced in an earlier chapter and elaborated on in this chapter, there are no gating variables, with action potentials being produced when the membrane potential crosses a threshold. At the simplest end of the spectrum, rate-based models communicate via firing rates rather than via individual spikes.
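As a concrete illustration of the integrate-and-fire idea, the following minimal sketch simulates a leaky integrate-and-fire neuron driven by a constant current, using forward Euler integration. The parameter values are illustrative choices, not those of the chapter.

```python
import numpy as np

# Leaky integrate-and-fire neuron: tau_m * dV/dt = -(V - E_L) + R_m * I_e,
# with a spike emitted and V reset whenever V crosses the threshold.
# All parameter values are illustrative.

tau_m   = 10.0    # membrane time constant (ms)
R_m     = 10.0    # membrane resistance (MOhm)
E_L     = -70.0   # resting potential (mV)
V_th    = -54.0   # spike threshold (mV)
V_reset = -80.0   # reset potential (mV)
I_e     = 1.8     # injected current (nA)

dt = 0.1                        # time step (ms)
t  = np.arange(0.0, 200.0, dt)
V  = np.full_like(t, E_L)
spike_times = []

for i in range(1, len(t)):
    dV = (-(V[i-1] - E_L) + R_m * I_e) / tau_m
    V[i] = V[i-1] + dt * dV
    if V[i] >= V_th:            # threshold crossing: record spike, reset
        spike_times.append(t[i])
        V[i] = V_reset

print(f"{len(spike_times)} spikes in 200 ms "
      f"(~{1000 * len(spike_times) / 200:.0f} Hz)")
```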
The membrane potential varies widely across the spatial extent of a neuron. The membrane may have spatially distinct distributions of ion channels, and synaptic inputs arrive at different dendritic locations and propagate to the cell body. The membrane potential also varies along axons as the action potential propagates. We therefore need neuron models that include spatial, as well as temporal, dimensions. The most common approach is compartmental modelling, in which the spatial extent of a neuron is approximated by a series of small compartments, each assumed to be isopotential. In limited cases of simple neuron geometry, analytical solutions for the membrane potential at any point along a neuron can be obtained using cable theory. We describe both modelling approaches here. Two case studies demonstrate the power of compartmental modelling: (1) action potential propagation along axons; and (2) synaptic signal integration in pyramidal cell dendrites.
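To give a flavour of the analytical cable-theory results, the sketch below evaluates the standard steady-state attenuation along a semi-infinite passive cable, V(x) = V(0) exp(−x/λ) with length constant λ = √((d/4)(R_m/R_a)). The membrane and axial parameters are illustrative, not taken from the text.

```python
import numpy as np

# Steady-state attenuation along a semi-infinite passive cable (cable theory):
#   V(x) = V(0) * exp(-x / lambda),  lambda = sqrt((d/4) * (R_m / R_a)).
# Parameter values below are illustrative.

R_m = 10000.0   # specific membrane resistance (ohm cm^2)
R_a = 100.0     # axial resistivity (ohm cm)
d   = 2e-4      # dendrite diameter: 2 um expressed in cm

lam_cm = np.sqrt((d / 4.0) * (R_m / R_a))   # length constant (cm)
lam_um = lam_cm * 1e4
print(f"length constant: {lam_um:.0f} um")

# Attenuation of a steady voltage at increasing distances from the injection site
for x_um in (50.0, 100.0, 500.0):
    atten = np.exp(-x_um / lam_um)
    print(f"x = {x_um:5.0f} um  ->  V(x)/V(0) = {atten:.3f}")

# A compartmental model replaces this analytical solution with many short
# isopotential compartments coupled by axial resistances, which also handles
# branched geometries and active channels.
```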
This chapter presents the first quantitative model of active membrane properties, the Hodgkin–Huxley model. This was used to calculate the form of action potentials in the squid giant axon. Our step-by-step account of the construction of the model shows how Hodgkin and Huxley used the voltage clamp method to produce the experimental data required to construct mathematical descriptions of how the sodium, potassium and leak currents depend on the membrane potential. Simulations of the model produce action potentials similar to experimentally recorded ones and account for the threshold and refractory effects observed experimentally. Whilst subsequent experiments have uncovered limitations in the Hodgkin–Huxley model descriptions of the currents carried by different ions, the Hodgkin–Huxley formalism is a useful and popular technique for modelling channel types.
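The following sketch integrates the Hodgkin–Huxley equations with forward Euler, using the widely used modern parameter convention (resting potential near −65 mV). Treat it as an illustrative implementation rather than a transcription of the chapter's model; the injected current and simulation settings are assumptions.

```python
import numpy as np

# Hodgkin-Huxley model of the squid giant axon, standard modern convention.
# Sodium, potassium and leak currents with m, h, n gating variables;
# integrated with forward Euler. Values are illustrative.

C_m  = 1.0                              # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials (mV)

# Voltage-dependent rate coefficients (1/ms), V in mV
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

dt = 0.01                               # time step (ms)
t  = np.arange(0.0, 50.0, dt)
V  = -65.0
# start gating variables at their steady-state values at rest
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

V_trace = np.empty_like(t)
for i, ti in enumerate(t):
    I_e = 10.0 if 5.0 <= ti < 45.0 else 0.0     # injected current (uA/cm^2)
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4 * (V - E_K)
    I_L  = g_L  * (V - E_L)
    V += dt * (I_e - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    V_trace[i] = V

print(f"peak membrane potential: {V_trace.max():.1f} mV")   # spikes reach ~+40 mV
```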
There are many types of active ion channel beyond the squid giant axon sodium and potassium voltage-gated ion channels studied in the previous chapter, including channels gated by ligands such as calcium. This chapter presents methods for modelling the kinetics of any voltage-gated or ligand-gated ion channel. The formulation used by Hodgkin and Huxley of independent gating particles can be extended to describe many types of ion channel. This formulation is the foundation for thermodynamic models, which provide functional forms for the rate coefficients derived from basic physical principles. To improve on the fits to data offered by models with independent gating particles, the more flexible Markov models are introduced. When and how to interpret kinetic schemes probabilistically to model the stochastic behaviour of single ion channels is considered. Experimental techniques for characterising channels are outlined, and an overview of the biophysics of ion channels relevant to modelling them is given.
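As a small worked example of a kinetic scheme, the sketch below integrates a two-state channel, C ⇌ O, with voltage-dependent transition rates of a thermodynamic (exponential) form. The rate functions and parameters are illustrative and do not describe any particular channel.

```python
import numpy as np

# Two-state kinetic scheme C <-> O with voltage-dependent rates of a simple
# thermodynamic (exponential) form. The ODE for the open fraction,
#   dp/dt = alpha(V) * (1 - p) - beta(V) * p,
# is integrated with forward Euler. Rates and values are illustrative.

def alpha(V):                 # opening rate (1/ms)
    return 0.5 * np.exp(V / 25.0)

def beta(V):                  # closing rate (1/ms)
    return 0.5 * np.exp(-V / 25.0)

dt = 0.01
t  = np.arange(0.0, 20.0, dt)
p_open = np.zeros_like(t)     # fraction of channels in the open state

V = -20.0                     # clamped membrane potential (mV)
for i in range(1, len(t)):
    p = p_open[i-1]
    p_open[i] = p + dt * (alpha(V) * (1.0 - p) - beta(V) * p)

p_inf = alpha(V) / (alpha(V) + beta(V))   # steady-state open probability
print(f"simulated p_open at 20 ms: {p_open[-1]:.3f}  (analytic: {p_inf:.3f})")

# Interpreting the same scheme probabilistically (e.g. simulating individual
# channels with a Gillespie-style algorithm) reproduces the stochastic
# open/closed dwell times of single channels discussed in the chapter.
```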
Using a modern matrix-based approach, this rigorous second course in linear algebra helps upper-level undergraduates in mathematics, data science, and the physical sciences transition from basic theory to advanced topics and applications. Its clarity of exposition together with many illustrations, 900+ exercises, and 350 conceptual and numerical examples aid the student's understanding. Concise chapters promote a focused progression through essential ideas. Topics are derived and discussed in detail, including the singular value decomposition, Jordan canonical form, spectral theorem, QR factorization, normal matrices, Hermitian matrices, and positive definite matrices. Each chapter ends with a bullet list summarizing important concepts. New to this edition are chapters on matrix norms and positive matrices, many new sections on topics including interpolation and LU factorization, 300+ more problems, many new examples, and color-enhanced figures. Prerequisites include a first course in linear algebra and a basic calculus sequence. Instructor's resources are available.
Both a serious academic text and an intriguing story, this seventh edition reflects a significant update in research, theory, and applications in all areas. It presents a comprehensive view of the historical development of learning theories from behaviorist through to cognitive models. The chapters also cover memory, motivation, social learning, machine learning, and artificial intelligence. The author's highly entertaining style clarifies concepts, emphasizes practical applications, and presents a thought-provoking, narrator-based commentary. The stage is given to Mrs Gribbin and her swashbuckling cat, who both lighten things up and supply much-needed detail. These two help to explore the importance of technology for simulating human cognitive processes and engage with current models of memory. They investigate developments in, and applications of, brain-based research and plunge into models in motivation theory, to name but a few of the adventures they embark upon in this textbook.
Richly illustrated in full colour and packed with examples from every major continent and wetland type, this third edition has been completely rewritten to provide undergraduates with a thoroughly accessible introduction to the basic principles. It divides the world’s wetlands into six principal types and presents six major causal environmental factors, arranged by importance and illustrated with clear examples, making it easy for instructors to plan tailored lectures and field trips and avoid overwhelming students with unnecessary detail. It retains its rigour for more advanced students, with sections on research methods and experiments, and over a thousand classic and contemporary references. Each chapter ends with questions that review the content covered and encourage further investigation. With expanded sections on topical issues such as sea level rise, eutrophication, facilitation and the latest approaches to restoration and conservation, the new edition of this prize-winning textbook is a vital resource for wetland ecology courses.
While the previous chapter covered probability on events, in this chapter we will switch to talking about random variables and their corresponding distributions. We will cover the most common discrete distributions, define the notion of a joint distribution, and finish with some practical examples of how to reason about the probability that one device will fail before another.
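As a concrete instance of the "which device fails first" reasoning, the sketch below assumes the two lifetimes are independent Geometric random variables (number of days until failure) and checks the resulting closed form against a Monte Carlo estimate. The choice of distribution and the parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Suppose device X fails on each day with probability p and device Y with
# probability q, independently: X ~ Geom(p), Y ~ Geom(q) on {1, 2, ...}.
# Summing P(X = k) * P(Y > k) over k gives the closed form
#   P(X < Y) = p * (1 - q) / (p + q - p*q).
# A Monte Carlo estimate is used as a sanity check. Values are illustrative.

p, q = 0.02, 0.01            # daily failure probabilities of devices X and Y
exact = p * (1 - q) / (p + q - p * q)

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.geometric(p, n)      # day on which device X fails
Y = rng.geometric(q, n)      # day on which device Y fails
estimate = np.mean(X < Y)

print(f"P(X < Y): exact = {exact:.4f}, Monte Carlo = {estimate:.4f}")
```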
Chapter 5 illustrates how demand responds to changes in the forces on which it depends. Using the theory of consumer behavior, we start out by explaining the concepts of price, income, and cross-price elasticity of demand, as well as how to derive these elasticities. We will also cover empirical evidence on the actual value of the price and income elasticities of demand for the live performing arts in several countries over several different time periods.
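To illustrate the elasticity concept numerically, the sketch below computes the point own-price elasticity of demand, E = (dQ/dP)(P/Q), along a hypothetical linear demand curve for performing-arts tickets. The demand curve and prices are invented for illustration and are not empirical estimates from the chapter.

```python
# Point own-price elasticity of demand, E = (dQ/dP) * (P / Q), evaluated on a
# hypothetical linear demand curve Q = a - b*P. All numbers are illustrative.

a, b = 1000.0, 10.0          # hypothetical demand curve: Q = 1000 - 10*P

def quantity(P):
    return a - b * P

def price_elasticity(P):
    Q = quantity(P)
    dQ_dP = -b               # slope of the linear demand curve
    return dQ_dP * P / Q

for P in (20.0, 60.0, 90.0):
    E = price_elasticity(P)
    label = "elastic" if abs(E) > 1 else "inelastic"
    print(f"P = {P:4.0f}: Q = {quantity(P):5.0f}, E = {E:+.2f} ({label})")
```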
This chapter provides insights into how the arts labor market works. We seek to understand what motivates artists to pursue their chosen professions and discuss whether the concept of the “starving artist” is valid. Using an artist survey, we explore artists’ stated opinions to describe who can be classified as a professional artist. We shed light on the labor market of artists by investigating the role of unions, the “superstar” phenomenon, and the decision problem of an artist using the human capital model. Finally, we discuss gender representation in the labor market as well as the gig labor market.
The general setting in statistics is that we observe some data and then try to infer some property of the underlying distribution behind this data. The underlying distribution is unknown and is represented by a random variable (r.v.). This chapter will briefly introduce the general concept of estimators, focusing on estimators for the mean and variance.
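As a small worked example of the estimators mentioned, the sketch below computes the sample mean and the unbiased sample variance (with the n − 1 divisor) from a sample drawn from a distribution with known parameters, so the estimates can be compared with the truth. The choice of distribution and sample size is an illustrative assumption.

```python
import numpy as np

# Sample mean and unbiased sample variance as estimators of the true mean and
# variance, checked on data drawn from a known distribution. Values are
# illustrative.

rng = np.random.default_rng(42)
true_mean, true_var = 5.0, 4.0
data = rng.normal(true_mean, np.sqrt(true_var), size=1000)   # observed sample

n = len(data)
mean_hat = np.sum(data) / n                           # sample mean
var_hat  = np.sum((data - mean_hat) ** 2) / (n - 1)   # unbiased sample variance

print(f"estimated mean: {mean_hat:.3f}  (true {true_mean})")
print(f"estimated variance: {var_hat:.3f}  (true {true_var})")
```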