Further exciting developments in rate equations are possible. Some of these more advanced uses and techniques are touched on in this chapter, singling out laser devices which will find applications in communications. The statistical information that can be provided by rate equations has been one theme in this book and is developed further to demonstrate how the output of a single mode injection laser changes from a chaotic distribution to a Poisson distribution as the drive current into the laser is increased.
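As a reminder of the two distributions referred to (standard forms quoted here for orientation, not taken from the text), for a mean photon number n̄ the chaotic (thermal) and Poisson distributions of photon number n are

\[
P_{\text{chaotic}}(n) = \frac{\bar{n}^{\,n}}{(1+\bar{n})^{\,n+1}}, \qquad
P_{\text{Poisson}}(n) = \frac{\bar{n}^{\,n}\, e^{-\bar{n}}}{n!} ,
\]

the former having much larger fluctuations (variance n̄ + n̄²) than the latter (variance n̄).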
The injection laser normally has several modes. It is useful then to show how rate equations can handle such multimode problems. In particular this section emphasises the importance of spontaneous emission in determining mode amplitudes.
Rate equations, as interpreted here, have been concerned with rates of change of energy, momentum, quanta, charge and so on. These equations have all removed any information about the phase of the quantum or electromagnetic waves. In phenomena where phase is important, more detailed discussions using full quantum or electromagnetic theories are usually required. To demonstrate the importance of phase, and also to demonstrate how the rate equation approach can sometimes be modified to include phase, the ‘mode locked’ laser is discussed briefly. This topic follows on naturally from the multimode rate equations because in a mode locked laser there are many optical modes at equally spaced frequency intervals, with their phases locked together so that all the modes add in phase at one instant. The resultant output from such mode locked lasers can be a train of exceptionally short pulses, down to subpicosecond durations with nanosecond repetition rates.
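As a hedged sketch of why phase locking gives short pulses (a standard result, with N modes of equal amplitude E₀ and frequency spacing Δω assumed purely for illustration):

\[
E(t) = E_0 \sum_{n=0}^{N-1} e^{\,i(\omega_0 + n\Delta\omega)t}
\;\;\Rightarrow\;\;
|E(t)|^2 = E_0^2\,\frac{\sin^2(N\Delta\omega\, t/2)}{\sin^2(\Delta\omega\, t/2)} ,
\]

a train of pulses repeating every 2π/Δω, each of width of order 2π/(NΔω), so the more modes that are locked the shorter the pulses.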
The motion of charge carriers within a semiconductor is governed by a number of useful concepts which can be understood in straightforward ways from rates of change of particles in space, time, momentum and energy. In chapter 1 it was seen that the conservation of particles leads to the continuity equations (1.2.3) and (1.2.5) relating the rate of accumulation of charge density ∂ρ/∂t and the spatial rate of dispersal, div J, of current density J.
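For reference, and assuming equations (1.2.3) and (1.2.5) take their usual form, particle conservation can be written

\[
\frac{\partial \rho}{\partial t} + \operatorname{div}\,\mathbf{J} = 0 ,
\]

so that any local accumulation of charge must be supplied by a net inflow of current.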
This section shows how rates of change of momentum and energy determine the velocity v = J/ρ of charge carriers as the electric field changes within a semiconductor. In chapter 4 a more detailed approach to carrier transport is discussed using the Boltzmann collision equation, which brings in diffusion and also develops a model relevant to the Gunn effect.
The later sections of this present chapter continue with elementary transport discussing the rates at which semiconductors relax back to equilibrium. Chapter 3 outlines how such rates place limits on the engineering of devices for very high speed switching.
Rates of change of momentum: mobility
Quantum theory assures one that electrons behave as waves and that electron waves can travel freely through a perfectly periodic array of atoms such as is formed by a perfect crystal. The analogy is often made between electron quantum waves in a crystal and electromagnetic waves travelling through a periodic structure of inductors and capacitors. In such a filter only certain frequencies are permitted to propagate. In the crystal there is equally a limited range of frequencies for the quantum waves, and this means a limited range of energies for the mobile electrons (Fig. 2.1).
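Anticipating the result this section works towards, a hedged sketch of the usual momentum balance may help (effective mass m*, carrier charge q, field E and a phenomenological momentum relaxation time τ are assumed here for illustration rather than quoted from the text):

\[
m^*\frac{dv}{dt} = qE - \frac{m^* v}{\tau}
\;\;\Rightarrow\;\;
v_{\text{drift}} = \frac{q\tau}{m^*}\,E = \mu E , \qquad \mu = \frac{q\tau}{m^*} ,
\]

the steady state of the rate of change of momentum giving the familiar definition of mobility.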
The motion of charge carriers within a semiconductor (or a gas) can be found by a more formal approach through the Boltzmann collision equation, which provides an elegant method by considering the rates of change of particles within ‘phase’ space – a concept to be introduced shortly. To keep the discussion clear, a one-dimensional classical approach will be considered with charge carriers having an effective mass, m*, assumed to be independent of energy or direction (not strictly valid in a semiconductor but still a most useful simplification). Extensions to three dimensions and the required corrections for quantum theory can be dealt with in later reading.
The Boltzmann collision equation for the flow of many particles is a statistical equation expressing the conservation of particles in a six-dimensional space referred to as phase space. The six dimensions consist of the three spatial dimensions of position x combined with three additional dimensions of momentum p. Momentum is considered here as an independent variable with the same independence as position. It is the dynamical equations which link these six independent variables together. On first acquaintance, this concept of momentum and position being independent variables appears absurd because it is easy to confuse the dynamical link (given through an equation such as m dx/dt = p) with a functional interdependence of p and x.
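For orientation, the one-dimensional form usually quoted for the Boltzmann collision equation (a standard form, with f(x, p, t) the occupancy of phase space and F the applied force; symbols assumed here) is

\[
\frac{\partial f}{\partial t} + \frac{p}{m^*}\,\frac{\partial f}{\partial x} + F\,\frac{\partial f}{\partial p}
= \left(\frac{\partial f}{\partial t}\right)_{\text{collisions}} ,
\]

each term being a rate of change of the particle count in an element of the six-dimensional (here two-dimensional) phase space.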
W. R. Hamilton in 1834 introduced the idea that all dynamical motion could be described in terms of a single function H(p, x, t), the Hamiltonian, linking the momentum p and position x in time. Coordinates of position and momentum can be defined and are treated as of equal independence. […]
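The dynamical link mentioned above follows from Hamilton's equations in their standard form (quoted here as a reminder, not from the text):

\[
\frac{dx}{dt} = \frac{\partial H}{\partial p}, \qquad
\frac{dp}{dt} = -\frac{\partial H}{\partial x} ;
\]

for a free particle with H = p²/2m* the first of these is just m* dx/dt = p.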
The work here concentrates on semiconductor devices, leaving gas and atomic lasers for further reading. This chapter considers tutorial models for the diode injection laser, the semiconductor light emitting diode (LED) and photodiode (PD), all of which can be understood from the same forms of rate equations, though with different constraints and interpretation. It will be helpful to have models for these three devices before starting on their rate equations.
The light emitting diode
The LED is a p-n junction driven by a current I into forward bias. Electrons and holes recombine close to the junction between the p- and n-materials and give out radiation with a frequency determined by the photon energy, which in turn is determined by the material's impurities or band gap. Some fraction of the forward current I is then turned into useful light L (power) formed from photons with a mean energy hf_m. The quantum efficiency is η = eL/(hf_m I). The more efficient LEDs have a well-defined recombination region (volume Φ). For example, in GaAs LEDs (Fig. 6.1), the region Φ can be defined through the use of ‘heterojunctions’, where a p-type GaAs layer is sandwiched between n- and p-type Ga1−xAlxAs materials, which have a wider energy gap between conduction and valence band than GaAs. On forward bias, holes are driven from the p-Ga1−xAlxAs material into the GaAs, but the potential difference that arises between the valence band of p-type GaAs and n-Ga1−xAlxAs prevents the holes diffusing into the n-Ga1−xAlxAs material.
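As a hedged numerical illustration with assumed figures (not taken from the text): a GaAs LED emitting photons of mean energy hf_m ≈ 1.4 eV and delivering L = 1 mW of light for a drive current I = 10 mA would have

\[
\eta = \frac{eL}{hf_m I} = \frac{10^{-3}\ \text{W}}{1.4\ \text{V} \times 10^{-2}\ \text{A}} \approx 0.07 ,
\]

i.e. roughly 7 per cent of the injected electrons yield useful photons.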
In the hope of avoiding analysis for its own sake, some of the ideas of the preceding chapter are applied to the dynamics of semiconductor switching devices which are important in computer and communication systems. Most texts on semiconductor devices concentrate on impurities, Fermi levels, diffusion equations and equilibrium starting conditions. In the approach here, the chief concerns are the rates at which a device can transport charge. So RC time constants, the dielectric relaxation rate, transit times and rates of recombination are the quantities that appear in approximate dynamic analyses of the selected switching devices. There is an inevitable tendency to digress from one or two themes of rate equations into standard semiconductor physics, and the forbearance of the reader is requested when the digressions into physics are too lengthy, inadequate or both.
Digital communications and computations rely on transmitting information by electromagnetic pulses (electrical, microwave or optical) which are either ‘on’ or ‘off’. In section 1.6 it was seen that such binary signals helped to maximise the probable information of a single symbol. Moreover, encoding signals into pulses leads to more accurate detection, regeneration and transmission of data through a variety of techniques such as error correcting codes, which can combat interference or noise in a transmission path. Faster switching rates lead to shorter pulses and so to higher rates of data and information processing.
An ideal switch between a load and a source would transfer power instantaneously to the load, but we remind the reader here that real switches and loads inevitably have lead and contact resistances, together with stray capacitances, which limit the rate of transfer of charge to the load.
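As a hedged sketch of that limit (a total series resistance R and a load plus stray capacitance C are assumed purely for illustration), the voltage across the load capacitance after a step of amplitude V follows

\[
v(t) = V\left(1 - e^{-t/RC}\right) ,
\]

so the charge cannot be transferred appreciably faster than a few RC time constants, however ideal the switch itself.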
Collecting fresh fruits becomes ever harder as the tree of knowledge grows higher and wider. However, there are certain branches that provide surer footholds to the new growths, and teachers must search these out. The rates of change of charge, energy, momentum, photon numbers, electron densities and so on, along with their detailed balances as particles and systems interact, provide fundamental footholds on sturdy branches in physics and chemistry. This book contains a collection of such topics applied to semiconductors and optoelectronics in the belief that such analyses provide valuable tutorial routes to understanding past, present and future devices. By concentrating on rates of change one focuses attention on these devices' dynamic behaviour which is vital to the ever faster flow of digits and information. The rates of the statistical interactions between electrons and photons determine distributions of energy amongst the particles as well as determining distributions in time, so controlling the ways in which devices work.
The first chapter is meant to be a fun chapter outlining some of the breadth and ideas of rate equation approaches. It is even hoped that some of these initial ideas may be picked up by sixth form teachers. Rates of reactions are mentioned in school chemistry but the implications are much broader.
Most electronic degree courses consider electron waves, holes and electrons, along with devices such as p-n junctions, FETs and bipolar transistors. Chapters 2 and 3 are adjuncts to this work. By considering rates of change of charge and emphasising transit times and recombination rates, the dynamics of these devices can be highlighted.
An outstandingly innovative scientist, Rudi Kompfner, wrote that when his intuition was unengaged or disengaged then his creative faculties were paralysed. Although Kompfner was writing about quantum theory, his remarks apply to most aspects of science. How can one create and innovate when no understanding is present? The idea behind this book is that a useful contribution to understanding in science and engineering can be found by determining the rate at which an interactive process occurs and concentrating on the dominant features which limit the interaction rates.
Such thinking is not limited to science; it can have universal application. For example, before lending money to a client, a building society will ask how much that client is earning from any employers, and so obtain an estimate of the maximum rate at which the client can reasonably pay off the mortgage that will be advanced to buy a house. The rate of income being paid to the client determines to a first order the rate at which money can be spent! The maximum amount of traffic that can use a road may be limited not by the size of the road, but by the rate at which traffic can escape or enter from congested roundabouts that serve the road. In building electronic circuits to switch at high speeds, one may find that the speed is limited by the rate at which components can transfer charge into a capacitive load. It may alternatively be limited by the rate at which information can be transmitted from neighbouring devices, which have to be a certain distance away in order to accommodate enough devices to drive and be driven by any one single device.
Feedback occurs where some part of the output of a system also appears back at its input and so modifies the input signal. It occurs in every system. If the feedback is undesired, then the input and output circuits of a system must be well separated and screened. Remember that if the circuit is meant to handle signal frequencies of a few megahertz, which are in the broadcast band, these will be readily radiated from the output circuit wiring and could easily be picked up by the input wiring. Also, if the input and output stages share a common power supply, then care must be taken to ‘decouple’ this as far as signal frequencies are concerned. Otherwise feedback could occur.
This chapter describes intentional feedback which, when it is properly applied, can improve almost every performance feature of an amplifier. It can widen its frequency response, reduce the effects of component ageing, microphony and hum pickup, and stabilise the overall gain closely to some figure required by a designer. These are a few of its benefits, which are explained more fully in the following sections. The subject is important because virtually every good quality or precision amplifier made today is likely to use feedback.
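The gain-stabilising property can be seen from the standard feedback relation (quoted here as a reminder, with forward gain A and feedback fraction β; the notation is assumed, not necessarily that used later in the chapter):

\[
A_f = \frac{A}{1 + A\beta} \approx \frac{1}{\beta} \quad \text{for } A\beta \gg 1 ,
\]

so the closed-loop gain depends chiefly on the feedback network rather than on the amplifier's own, less predictable, gain A.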
There is one troublesome effect of feedback on amplifier performance; it can create a tendency towards instability if the system is poorly designed. However, feedback which is specifically designed to make oscillators or switches is useful in its own right and it is described in chapter 6.
Although a semiconductor amplifier using a field-effect phenomenon was postulated by Shockley in 1952, it was not successfully made until 1963. A bipolar transistor was devised and made by Brattain and Bardeen in 1948: it has developed from almost individually made devices which were sealed in glass envelopes like little valves to the mass produced, robust, cheap devices that we know today. Many of the present integrated circuits, described in chapter 4, use bipolar transistors as their active elements whether they be switches or amplifiers. Some integrated circuit designs using the field-effect transistor are also available but their higher cost must be offset by definite requirements for low noise or very high input resistance. A more detailed comparison of bipolar with field effect transistors is made in §3.16.
The bipolar transistor is used in the power amplifiers of our domestic sound equipment, in the largest computers, and in the most complex integrated circuits. This chapter describes first the principle of operation of the bipolar transistor and its typical characteristics. Then §§3.6 onwards describe its use in simple amplifier circuits and discuss problems such as its biasing, stability of operating point, likely gain and frequency response. Lastly some more advanced circuits are considered and a numerical example is worked through.
Principle of operation
Consider the n–p–n sandwich of semiconductor shown in fig. 3.1 (a). This contains two back-to-back p–n junctions; see §§2.1 to 2.4 if you are not familiar with p- and n-type materials, junctions, leakage currents, etc.