State-space models (SSMs) are a mathematical abstraction of many real-life dynamic systems. They have proven useful in a wide variety of fields, including robot tracking, speech processing, control systems, stock prediction, and bio-informatics; in essence, anywhere there is a dynamic system [70–75]. These models are not only of great practical relevance, but also a good illustration of the power of factor graphs and the SPA. The central idea behind an SSM is that the system at any given time can be described by a state, belonging to a state space, which may be either discrete or continuous. The state changes dynamically over time according to a known statistical rule, but we cannot observe it directly; the state is said to be hidden. Instead we observe another quantity (the observation), which has a known statistical relationship with the state. Once we have collected a sequence of observations, our goal is to infer the corresponding sequence of states.
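To make this concrete, the following minimal Python sketch (our own illustration, not taken from the text; all parameter values are assumed) simulates a scalar linear-Gaussian SSM, producing a hidden state sequence together with the noisy observations from which the states would later be inferred.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative scalar linear-Gaussian state-space model:
    #   state:       x[k] = a * x[k-1] + process noise
    #   observation: y[k] = c * x[k]   + measurement noise
    a, c = 0.9, 1.0      # state-transition and observation gains (assumed)
    q, r = 0.1, 0.5      # process- and measurement-noise standard deviations

    K = 100              # number of time steps
    x = np.zeros(K)      # hidden states (not available to the estimator)
    y = np.zeros(K)      # observations (the only data we get to see)
    y[0] = c * x[0] + r * rng.standard_normal()
    for k in range(1, K):
        x[k] = a * x[k - 1] + q * rng.standard_normal()
        y[k] = c * x[k] + r * rng.standard_normal()
    # Inference (e.g. Kalman filtering or smoothing) recovers x from y.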
This chapter is organized as follows.
In Section 6.2 we will describe the basic concepts of SSMs, create an appropriate factor graph, and show how the sum–product and max–sum algorithms can be executed on this factor graph. Then we will consider three cases of SSMs in detail.
In Section 6.3, we will cover models with discrete state spaces, known as hidden Markov models (HMMs), where we reformulate the well-known forward–backward and Viterbi algorithms using factor graphs.
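As a foretaste of Section 6.3 (a sketch of our own, with invented HMM parameters), the forward recursion at the heart of the forward–backward algorithm can be written in a few lines of Python:

    import numpy as np

    # Toy HMM with two hidden states and three observation symbols.
    A = np.array([[0.7, 0.3],        # state-transition probabilities
                  [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1],   # per-state observation probabilities
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.5, 0.5])        # initial state distribution
    obs = [0, 2, 1]                  # an observed symbol sequence

    # Forward recursion: alpha[k, s] = p(y[0..k], x[k] = s).
    alpha = np.zeros((len(obs), 2))
    alpha[0] = pi * B[:, obs[0]]
    for k in range(1, len(obs)):
        alpha[k] = (alpha[k - 1] @ A) * B[:, obs[k]]
    print(alpha[-1].sum())           # likelihood of the full observation sequence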
In the previous two chapters we have introduced estimation theory and factor graphs. Although these two topics may seem disparate, they are closely linked. In this chapter we will use factor graphs to solve estimation problems and, more generally, inference problems. In the context of statistical inference, factor graphs are important for two reasons. First, they allow us to reformulate several important inference algorithms in a very elegant way, with an all-encompassing, well-defined notation and terminology. As we will see in later chapters, well-known algorithms such as the forward–backward algorithm, the Viterbi algorithm, the Kalman filter, and the particle filter can all be cast in the factor-graph framework in a very natural way. Secondly, deriving new, optimal (or near-optimal) inference algorithms is fairly straightforward in the factor-graph framework. Applying the SPA on a factor graph relies solely on local computations in basic building blocks. Once we understand the basic building blocks, we need to remember only one rule: the sum–product rule. In this chapter, we will go into considerable detail on how to perform inference using factor graphs. Certain aspects of this chapter were inspired by [62].
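To illustrate the sum–product rule on the smallest possible example (a sketch of our own, not an excerpt from the text), the following Python snippet computes the marginal of one variable on a two-variable chain using purely local message passing; all factor values are invented for illustration.

    import numpy as np

    # Toy chain with two binary variables: p(x1, x2) ∝ f1(x1) f12(x1, x2) f2(x2).
    f1 = np.array([0.6, 0.4])        # local factor on x1 (illustrative values)
    f2 = np.array([0.3, 0.7])        # local factor on x2
    f12 = np.array([[0.9, 0.1],      # pairwise factor f12(x1, x2)
                    [0.2, 0.8]])

    # Sum-product rule: the message from f12 towards x2 multiplies the
    # incoming message by the factor and sums out x1 -- a local computation.
    msg_x1_to_f12 = f1
    msg_f12_to_x2 = msg_x1_to_f12 @ f12

    # The marginal of x2 is the product of all messages arriving at x2.
    p_x2 = msg_f12_to_x2 * f2
    p_x2 /= p_x2.sum()               # normalize to obtain p(x2)
    print(p_x2)

However large the graph, the SPA never performs anything other than these two local operations: multiplying incoming messages by a factor, and summing out the remaining variables.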
This chapter is organized as follows.
We start by explaining the various problems of statistical inference in Section 5.2, and then provide a general factor-graph-based framework for solving these problems.
We will deal with how messages should be represented (a topic we glossed over in Chapter 4) in Section 5.3. The representation has many important implications, as will become apparent throughout this book.
Factor graphs are a way to represent graphically the factorization of a function. The sum–product algorithm computes marginals of that function by passing messages on its factor graph. The term and concept factor graph were originally introduced by Brendan Frey in the late 1990s as a way to capture structure in statistical inference problems. Factor graphs form an attractive alternative to Bayesian belief networks and Markov random fields, which have been around for many years. At the same time, factor graphs are strongly linked with coding theory, as a way to represent error-correcting codes graphically. They generalize concepts such as Tanner graphs and trellises, which are the usual way to depict codes. The whole idea of seeing a code as a graph can be traced back to 1963, when Robert Gallager described low-density parity-check (LDPC) codes in his visionary PhD thesis at MIT. Although LDPC codes remained largely forgotten until fairly recently, the idea of representing codes on graphs was not, and led to the introduction of the trellis concept some ten years later, as well as Tanner graphs in 1981.
To get an idea of how factor graphs came about, let us take a look at the following timeline. It represents a selection of key contributions in the field.
As depicted in Fig. 11.1, in single-user, single-antenna transmission, both the receiver and the transmitter are equipped with a single antenna. There are no other transmitters. This is the most conventional and well-understood way of communicating. Many receivers for such a set-up have been designed during the past few decades. These receivers usually consist of a number of stages. The first stage is a conversion from the continuous-time received waveform to a suitable observation (to allow digital signal processing), followed by equalization (to counteract inter-symbol interference), demapping (where decisions with respect to the coded bits are taken), and finally decoding (where we attempt to recover the original information sequence). This is a one-shot approach, whereby no information flows back from the decoder to the demapper or to the equalizer. Here the terms decoder, demapper, and equalizer pertain to the more conventional receiver tasks, not to nodes in any factor graph. In a conventional mind-set it is hard to come up with a non-ad-hoc way of exploiting information from the decoder during the equalization process. In the factor-graph framework, the flow of information between the various blocks appears naturally and explicitly. These two approaches to receiver design are depicted in Fig. 11.2.
In this chapter we will see how to convert the received waveform into a suitable observation y. This conversion is exactly the same as in conventional receivers.
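The difference between the two designs can be sketched structurally as follows (a hypothetical skeleton with stand-in function names, not an implementation from this book); the stubs simply pass their input through, so that what the sketch shows is the flow of information, not the signal processing itself.

    # Hypothetical stubs standing in for the actual receiver stages.
    def equalize(y, prior):  return y           # would counteract inter-symbol interference
    def demap(z, prior):     return z           # would produce soft estimates of coded bits
    def decode(llrs):        return llrs, llrs  # would return (extrinsic info, decisions)

    def one_shot_receiver(y):
        # Conventional chain: information flows strictly forward.
        z = equalize(y, prior=None)
        llrs = demap(z, prior=None)
        _, decisions = decode(llrs)
        return decisions

    def iterative_receiver(y, iterations=5):
        # Factor-graph view: extrinsic information from the decoder
        # flows back to the demapper and equalizer on every pass.
        extrinsic = None
        for _ in range(iterations):
            z = equalize(y, prior=extrinsic)
            llrs = demap(z, prior=extrinsic)
            extrinsic, decisions = decode(llrs)
        return decisions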
In 1993 the ITU recommended that spectrum prices should follow a set of principles [1]:
All spectrum users should pay a charge.
Non-discrimination – the spectrum charge should be calculated fairly, i.e. if two users are using the same amount of spectrum in the same way, both should pay the same charge.
The spectrum charge should be proportionate to the amount of bandwidth used.
The charges should reflect the spectrum's value to society, i.e. if need be, frequencies used for public services should be subject to lower charges.
The cost of spectrum regulation should not be borne by the state.
Spectrum users should be consulted about intended adjustments in spectrum charges.
The pricing structure should be clear, transparent and comprehensive, without unnecessarily lengthening the licensing process.
The pricing structure should reflect the scarcity of available spectrum and the level of demand for spectrum in different frequency bands.
The spectrum charge should be calculated so as to recover the costs of spectrum regulation. Spectrum pricing should not seek to maximise revenue for the government.
The ability to levy spectrum charges should be anchored in law.
As discussed in the previous chapter, the contemporary approach to the setting of incentive-based spectrum prices places a greater emphasis on economic factors. While some of the principles above remain relevant to the setting of spectrum prices, the 1993 ITU recommendations contain contradictions. For example, it is generally not possible to set spectrum prices that reflect scarcity while at the same time recovering only the administrative costs of regulating spectrum.
Much of the discussion in previous chapters has revolved around problems of spectrum management likely to be encountered in developed countries. It is thus pertinent to ask how the situation differs for developing countries.
If anything, their dependence on spectrum-using technologies is even greater. Lacking fixed networks to deliver communications services such as voice telephony and broadcasting, they are heavily reliant on spectrum for commercial and non-commercial services. This is illustrated by recent ITU data on the penetration (per 100 inhabitants) of fixed and mobile lines in the least developed countries: mobile lines were roughly equal in number to fixed lines in 2001 (see Figure 16.1), but by 2004 they outnumbered fixed lines by three to one. Over the 2000–4 period the number of television receivers, mostly relying on terrestrial distribution, also increased by 50%. These data emphasise the importance of getting spectrum policy right.
Consequences for spectrum management
What special aspects of spectrum management are important in developing countries? It is helpful first to identify the relevant factors that differentiate the two environments.
Developing countries are characterised by a lower per capita income, which reduces consumption of all items including spectrum-using services.
Conversely, a lack of alternative platforms places a high priority on the development of wireless systems; there is also growing evidence that mobile communications can improve business efficiency, widen markets and promote income growth in developing countries.
At the same time, much spectrum in developing countries is not yet assigned, or assigned wastefully to government departments, especially defence forces.
In earlier chapters we have stated that there is a need for, and a benefit associated with, regulating radio spectrum use. In practice the costs of regulation are typically recovered through licence fees paid by radio spectrum users, and hence there is a price associated with the use of licensed radio spectrum. For example, in the USA the FCC applies two types of fees: application fees, which cover the cost of processing licence applications, and regulatory fees, which cover the administrative cost of managing the use of spectrum. Application fees may also serve to discourage the filing of frivolous applications. If set too high, however, fees can result in under-utilisation of the spectrum, while if set too low, hoarding and congestion may arise.
The simple recovery of administrative costs via licence fees, while practised by almost every spectrum management agency around the world, fails to make use of one of the most powerful incentive mechanisms available to encourage more efficient use of radio spectrum. By varying licence fees in a suitable way, a spectrum manager can improve the economic and technical efficiency of spectrum management. The setting of incentive-based prices is especially attractive in circumstances where spectrum has been assigned and/or allocated via administrative means rather than auctions. Incentive-based pricing works well in the absence of secondary trading but, as we show in this chapter, it can also work alongside spectrum trading.
Licence fees are a potent means of achieving greater efficiency of spectrum use by licensees holding non-auctioned spectrum.