To study observations, we return yet again to the definition of the cognitive radio laid out in Chapter 1 and note once more that ‘A cognitive radio is a device which has four broad inputs, namely, an understanding of the environment in which it operates, an understanding of the communication requirements of the user(s), an understanding of the regulatory policies which apply to it and an understanding of its own capabilities.’ Getting these four inputs is what we mean by the phrase ‘observing the outside world’.
We can further detail some of the observations that are needed if we go through the various action categories outlined in the last chapter. To take action from a frequency perspective the cognitive radio must observe which signals are currently being transmitted, which channels are free, the bandwidth of those channels and perhaps whether the available channels are likely to be short-lived or more durable. To take action from a spatial perspective, the cognitive radio needs to make observations about the spatial distribution of systems that must be avoided, or the spatial distribution of interferers and of the target radios. The cognitive radio needs to be able to monitor its own power output and the power output of other systems. To take action to make a signal more robust or to maximise the throughput of the transmitted signal, the cognitive radio needs to make observations about the signal-to-noise ratio (SNR) at the target receivers, about the bit error rates and about the propagation conditions experienced by the transmitted signal (e.g. delay spread, Doppler spread).
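Several of these frequency-domain observations reduce, in their simplest form, to deciding whether a channel contains a signal or only noise. A minimal sketch of an energy detector, one common approach; the signal model and threshold below are illustrative assumptions, not values from the text:

```python
import math
import random

def energy_detect(samples, threshold):
    """Declare a channel occupied if the average sample energy
    exceeds a threshold calibrated against the noise floor."""
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

# Illustrative test signals: unit-variance Gaussian noise, with and
# without a sinusoidal 'transmission' added on top.
random.seed(1)
idle = [random.gauss(0.0, 1.0) for _ in range(4096)]
busy = [random.gauss(0.0, 1.0) + math.sin(0.2 * n) for n in range(4096)]

# Noise power is ~1.0 here, so a threshold of 1.25 separates the two cases.
```

In practice the threshold would be set from the measured noise floor and a target false-alarm probability, and more capable detectors exploit known signal features to sense at far lower levels than simple energy detection allows.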
To discuss regulation and standardisation in the context of cognitive radio is a challenge. Currently there are almost no regulations or standards in place for cognitive radio, as cognitive radios are still very much a thing of the future. Hence this chapter is more about classifying the general types of regulations that may be needed and the standards that are emerging than discussing what is already in place. In reality there is a wealth of regulatory issues that relate directly, indirectly or just ‘kind of relate’ to cognitive radio. Chapter 1 explored the role of cognitive radios in delivering new ways of managing the spectrum and looked at applications in the military, public safety and commercial domains. The new spectrum management regimes and the various potential applications may each give rise to the need for new regulations, some of which are specifically related to cognitive radios and some of which are related to creating the kind of environment in which cognitive radio applications can thrive. The purpose of this chapter, therefore, is to give a broad sense of what those issues might be, as well as to describe the current status of the standardisation efforts.
Regulatory issues and new spectrum management regimes
Much of the discussion about ‘regulations for cognitive radio’ is about ‘regulations for new spectrum management regimes in which cognitive radios can operate’.
The first chapter of this book focused on the application areas that will drive cognitive radio technology. This chapter acts as a bridge to the remainder of the book. It seeks to provide the reader with a broad sense of all that is involved in cognitive radio technology. In order to do this we go to the heart of the cognitive radio but not at first using technology as an example. Instead we step back and take a look at how decisions are made in a more abstract manner before returning to the radio world. The final part of the chapter provides a roadmap for the rest of the book.
Setting the scene for understanding cognitive radio
The first question to think about is: how do we make decisions? How do we reason and come to conclusions? We begin this discussion by looking at a simple example.
The lone radio
Scenario 1: I am about to go out and must decide whether I should take an umbrella with me or not. The umbrella is heavy and cumbersome and, while I don't want to get wet, I don't want to take the umbrella with me if it is not necessary.
In this example two actions are possible, namely take umbrella or don't take umbrella. I need to determine how likely it is to rain in order to decide whether to take the umbrella or not.
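This decision can be framed as choosing the action with the lower expected cost. A small sketch of that framing; the cost values are illustrative choices of my own, since the scenario gives none:

```python
def best_action(p_rain, cost_carry=1.0, cost_wet=5.0):
    """Pick the action with the lower expected cost.

    cost_carry: the nuisance of hauling the umbrella (paid whether or
                not it rains). Illustrative value.
    cost_wet:   the cost of getting soaked (paid only if it rains and
                the umbrella was left at home). Illustrative value.
    """
    expected_take = cost_carry          # carrying cost is paid regardless
    expected_leave = p_rain * cost_wet  # wet cost is paid only if it rains
    return "take umbrella" if expected_take < expected_leave else "leave umbrella"
```

With these numbers the break-even rain probability is cost_carry / cost_wet = 0.2: below that, leaving the umbrella at home is the better gamble.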
We now reach the ‘decide’ part of the ‘observe, decide and act’ cycle. In very simple terms the decision-making process is about selecting the actions the cognitive radio should take. Using the vocabulary introduced in Chapter 2, it is about choosing which ‘knobs’ to change and choosing what the new settings of those ‘knobs’ should be. Decision-making goes very much to the heart of a cognitive radio.
The decision-making process: part 1
In Table 3.2 a variety of cognitive radio applications and the main high-level actions associated with them were presented. On examining the table we noted that many of the actions, whether commercial, public safety or military based, centre on two activities:
The cognitive radio shapes its transmission profile and configures any other relevant radio parameters to make best use of the resources it has been given or identified for itself, while at the same time not impinging on the resources of others.
If and when those resources change, it reshapes its transmission profile and reconfigures any other relevant operating parameters, and in doing so it redirects resources around the network.
A re-examination of Table 3.2 will confirm that these actions are standard throughout a whole variety of applications. It therefore comes as no surprise that two kinds of decisions regularly need to be made, mapping to these two activities: decisions about how resources are distributed and decisions about exactly how those resources are used.
During the production phase of this book, the FCC released two reports of relevance. At that stage it was too late to include details of the reports in the main body of the text. This short appendix addresses the issues briefly.
On 15 October 2008 the FCC released their report (FCC/OET 08-TR-1005) on the Evaluation of the Performance of Prototype TV-Band White Space Devices Phase II. The opening paragraph of the report summarises what the report shows:
The Federal Communications Commission's Laboratory Division has completed a second phase of its measurement studies of the spectrum sensing and transmitting capabilities of prototype TV white space devices. These devices have been developed to demonstrate capabilities that might be used in unlicensed low power radio transmitting devices that would operate on frequencies in the broadcast television bands that are unused in each local area. At this juncture, we believe that the burden of ‘proof of concept’ has been met. We are satisfied that spectrum sensing in combination with geo-location and database access techniques can be used to authorize equipment today under appropriate technical standards and that issues regarding future development and approval of any additional devices, including devices relying on sensing alone, can be addressed.
The report goes on to state that
All of the devices were able to reliably detect the presence of a clean DTV signal on a single channel at low levels in the range of −116 dBm to −126 dBm; the detection ability of each device varied little relative to the channel on which the clean signal was applied.
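To put those sensing levels in perspective, a level in dBm converts to absolute power as P(W) = 10^((dBm − 30)/10). A quick sketch of the conversion:

```python
def dbm_to_watts(dbm):
    """Convert a power level in dBm to watts: 0 dBm = 1 mW."""
    return 10 ** ((dbm - 30) / 10)

# The sensing levels quoted in the report are femtowatt-scale powers:
# -116 dBm is roughly 2.5e-15 W, and -126 dBm roughly 2.5e-16 W.
```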
In Chapter 1 the working definition for cognitive radio used throughout this book was presented. That definition ended with the statement ‘A cognitive radio is made from software and hardware components that can facilitate the wide variety of different configurations it needs to communicate.’ In this chapter we look at the hardware involved. There is no one right way to build a cognitive radio so the chapter merely aims to give a sense of what kind of hardware can be used and some of the related performance issues.
A complete cognitive radio system
In a cognitive radio receiver, the antenna captures the incoming signal. The signal is fed to the RF circuitry and is filtered and amplified and possibly downconverted to a lower frequency. The signal is converted to digital format and further manipulation occurs in the digital domain. On the transmit side the opposite occurs. The signal is prepared and processed and at some stage is converted from digital to analogue format for transmission, upconverted to the correct frequencies and launched on to the airwaves via the antenna.
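The downconversion step in that chain can be sketched in the digital domain: multiply the sampled passband signal by a complex exponential at the carrier frequency, then low-pass filter away the double-frequency mixing product. A toy version, where the sample rate, tone frequency and the crude moving-average filter are all illustrative choices:

```python
import math

def downconvert(samples, f_carrier, f_sample):
    """Mix a real passband signal down to complex baseband."""
    return [s * complex(math.cos(2 * math.pi * f_carrier * n / f_sample),
                        -math.sin(2 * math.pi * f_carrier * n / f_sample))
            for n, s in enumerate(samples)]

def moving_average(x, taps=8):
    """Crude low-pass filter: suppresses the 2*f_carrier mixing product."""
    return [sum(x[max(0, n - taps + 1):n + 1]) / min(taps, n + 1)
            for n in range(len(x))]

# A pure tone at the carrier should come out as a DC baseband value of
# magnitude 0.5 (half the tone's amplitude) once the filter settles.
fs, fc = 1000.0, 125.0
tone = [math.cos(2 * math.pi * fc * n / fs) for n in range(256)]
baseband = moving_average(downconvert(tone, fc, fs))
```

A real receiver would of course use a proper filter design and handle carrier and sampling offsets; the point here is only the structure mix-then-filter.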
Throughout this book we have been using the terms ‘cognitive radio’ and ‘cognitive node’ interchangeably. The reason for this is that a cognitive radio will almost always function as a node in a network. It is therefore useful to think of the complete cognitive radio system in terms of a communication stack.
Having covered the fundamentals of meshes, we now arrive at the point where we may begin to consider the big and often-asked questions about mesh, four of which we consider together via our list of hypotheses. As a reminder, these are that
meshes self-generate capacity,
meshes improve spectral efficiency,
directional antennas help a mesh, and
meshes improve the overall utilisation of spectrum.
We will examine them formally, via analysis of existing peer-reviewed publications, followed by some more recent analysis and insight of our own [1, 2]. A key problem in assessing the published literature is that different assumptions are made in different papers; a direct comparison is thus at risk of being inconsistent. We spend some time at the outset to ensure we avoid this issue.
We will bear in mind that we are predominantly interested in our six application examples of Chapter 2. This will set helpful bounds to our scope for testing the hypotheses.
When we look at Hypothesis 1, which is concerned with capacity, we form our initial viewpoint via a simple thought experiment, which looks at how we expect the capacity of a mesh might behave versus demand, relative to the known case of cellular. This is followed by a summary of four important peer-reviewed research papers in the field which concern system capacity. We contend that the important conclusions presented in these papers were never intended to be used by readers as evidence that a real-world mesh can self-generate capacity.
The aim of using a mobility model is to reflect as accurately as practicable the real conditions themselves. One way to do this is to use motion traces, which are logs of real-life node movements over a representative period of time. There are not many such logs available for use even with established cellular schemes, and none are known to this author which cover mesh environments. The focus then must move to synthetic models. Such a model will deal with a number of nodes and may include parameters such as speed and direction of movement, the ability to pause at some locations and a bound to the model area. The models available are mostly fairly simple to implement, since they are intended for use in simulators where a tractable run time is expected. It is probably the case that present models err on the side of simplicity at the expense of realism. On the other hand, moving too close to the actual environment requires a very specific model – which may then not be adequately representative of all environments. The choice of model is thus a subject which needs to be understood, in order to interpret specific protocol and other simulation results for wider contexts.
Camp et al. [1] review 12 different mobility models which have been applied to mesh simulations at various points in the published literature. Their work is an often quoted indication that the choice of model alone can strongly affect the results when testing the exact same routing protocol. For the purposes of this book three models are noted as being appropriate.
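One of the widely used synthetic models in this literature is the random waypoint model: each node picks a uniformly random destination, moves toward it at a uniformly random speed, pauses, and repeats. A minimal sketch; the area size, speed range and pause behaviour below are illustrative assumptions:

```python
import math
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 10.0),
                    pause_max=5, seed=0):
    """Generate one node's (x, y) trace, one position per time step,
    under the random waypoint mobility model."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    trace = []
    while len(trace) < steps:
        # Pick a destination and a speed, then walk there step by step.
        dx, dy = rng.uniform(0, area[0]) - x, rng.uniform(0, area[1]) - y
        v = rng.uniform(*speed)
        n_moves = max(1, int(math.hypot(dx, dy) / v))
        for i in range(1, n_moves + 1):
            if len(trace) >= steps:
                break
            trace.append((x + dx * i / n_moves, y + dy * i / n_moves))
        x, y = x + dx, y + dy
        # Pause at the waypoint for a random number of steps.
        for _ in range(rng.randint(0, pause_max)):
            if len(trace) >= steps:
                break
            trace.append((x, y))
    return trace
```

The well-known caveats apply: average speed decays over long runs unless a minimum speed is enforced, and node density drifts toward the centre of the area, which is precisely the sort of model-dependent artefact Camp et al. warn about.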
Building upon the fast-growing technological advance of video compression in the 1980s, along with the availability of affordable fast computing processors and digital memories in the early 1990s, the evolution in use of digital multimedia broadcasting proceeded rapidly (see Table 6.1). The arrival of digital broadcasting was significant; what was happening was not just a simple move from an analog system to a digital system. Rather, digital broadcasting permits a level of quality and flexibility unattainable with analog broadcasting and provides a wide range of convenient services, thanks to its high picture and sound quality, interactivity, and storage capability. European broadcasters initiated the first attempt to implement a complete direct-to-home satellite digital television program delivery infrastructure having a capacity in excess of 100 channels from a single satellite. This was the digital video broadcasting (DVB) project in 1993, and the main standardization work for satellite (DVB-S) and cable (DVB-C) delivery systems was completed in 1994 [1] [2]. The fixed terrestrial version (DVB-T) was soon added to the DVB family to offer one-to-many broadband wireless data broadcasting based on roof-top antennas and the use of IP packets.
All these DVB sub-standards basically differ only in their specifications for the physical representation, modulation, transmission, and reception of the signal. Digital video broadcasting is, however, much more than a simple replacement for existing analog television transmission. More specifically, DVB provides superior picture quality with the opportunity to view pictures in standard format or wide screen (16:9) format, along with mono, stereo, or surround sound.
The rapid growth of wireless broadband networking infrastructures, such as 3G and 3.5G, WLAN and WLAN-mesh, and WiMAX, makes multimedia (audio and video) information and entertainment (“infotainment”) available in our lives anytime, anywhere, on any device. However, wireless multimedia delivery faces several challenges, such as a high error rate, bandwidth variation and limitation, battery power limitation, and so on. Take, for example, the voice over IP (VoIP) and video streaming applications, which are quite mature in wireline infrastructure. At the same time, wireless broadband based on WLAN and WiMAX is also becoming widespread. While these wireless networks were not designed with real-time multimedia communication services in mind, their widespread availability and low cost make them an inviting solution for adding mobility to these communication services. The major issue is how to achieve a wireless broadband system which can deliver real-time interactive multimedia smoothly and still satisfy the QoS metrics typically used to define the quality of a VoIP or video conferencing session, e.g., the one-way delay, jitter, packet loss rate, and throughput (see Section 7.2).
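Of those metrics, jitter is the least obvious to compute. RTP (RFC 3550) defines interarrival jitter as a running estimate smoothed with a gain of 1/16, driven by D, the change in one-way transit time between consecutive packets. A small sketch of one estimator step; the sample transit times are made up for illustration:

```python
def update_jitter(jitter, transit_prev, transit_now):
    """One step of the RFC 3550 interarrival-jitter estimator:
    J := J + (|D| - J) / 16, where D is the change in transit time
    between consecutive packets."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

# Illustrative one-way transit times in milliseconds for six packets.
transits = [50.0, 52.0, 49.0, 60.0, 50.0, 51.0]
jitter = 0.0
for prev, cur in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, cur)
```

The 1/16 gain makes the estimate a noise-tolerant moving average: a single late packet (the 60 ms outlier above) nudges the jitter estimate up rather than dominating it.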
Advances in media coding over wireless networks are governed by two dominant rules [1]. One is the well-known Moore's law, which states that computing power doubles every 18 months. Moore's law certainly applies to media codec evolution, and there have been huge advances in technology in the ten years since the adoption of MPEG-2. The second governing principle is the huge bandwidth gap (one or two orders of magnitude) between wireless and wired networks. This bandwidth gap demands that coding technologies must achieve efficient compact representation of media data over wireless networks.
Owing to the proliferation of digitized media applications, such as e-Book, streaming videos, web images, shared music, etc., there is a growing need to protect the intellectual property rights of digital media and prevent illegal copying and falsification. This explains the strong demand for digital rights management (DRM), which is an access control technology that protects and enforces the rights associated with the use of digital content, such as multimedia data. The most important functions of DRM are to prevent unauthorized access and the creation of unauthorized copies of digital content, and moreover to provide a mechanism by which copies can be detected and traced (content tracking). Digital rights management is the most critical component of the intellectual property management and protection (IPMP) protocol widely promoted in the MPEG standards. Under the IPMP's scope, intellectual property (IP) is anything whose use owes the inventors or the owners some form of compensation. This could be a particular media application or it could be the technology used by the media. The management of IP involves storage and serving, appropriate authorization of use, and correct billing and tracking. The protection of IP prevents unauthorized use or misuse of the IP and can make legitimate use easy. According to [1], an effective DRM system should have the following four requirements.
(1) The DRM system must package the content to be protected in a secure manner.
(2) The DRM system must obtain the access conditions (license) specified by the owner of the protected content.
(3) The DRM system must determine whether the access conditions have been fulfilled
With the great advances in digital data compression (coding) technologies and the rapid growth in the use of IP-based Internet, along with the quick deployment of last-mile wireline and wireless broadband access, networked multimedia applications have created a tremendous impact on computing and network infrastructures. The four most critical and indispensable components involved in a multimedia networking system are: (1) data compression (source encoding) of multimedia data sources, e.g., speech, audio, image, and video; (2) quality of service (QoS) streaming architecture design issues for multimedia delivery over best-effort IP networks; (3) effective dissemination of multimedia over heterogeneous IP wireless broadband networks, where the QoS is further degraded owing to the dynamic changes in end-to-end available bandwidth caused by wireless fading or shadowing and link adaptation; (4) effective digital rights management and adaptation schemes, which are needed to ensure proper intellectual property management and protection of networked multimedia content.
This book has been written to provide an in-depth understanding of these four major considerations and their critical roles in multimedia networking. More specifically, it is the first book to provide a complete system design perspective based on existing international standards and state-of-the-art networking and infrastructure technologies, from theoretical analyses to practical design considerations. The book also provides readers with learning experiences in multimedia networking by offering many development-software samples for multimedia data capturing, compression, and streaming for PC devices, as well as GUI designs for multimedia applications. The coverage of the material in this book makes it appropriate as a textbook for a one-semester or two-quarter graduate course.
With the fast advances in computing and compression technologies, high-bandwidth storage devices, and high-speed networks, it is now feasible to provide real-time multimedia services over the Internet. This is evident from the popular use of the three most commonly used streaming media systems, i.e., Microsoft's Windows Media [1], RealNetwork's RealPlayer [2], and Apple's QuickTime [3]. The real-time transport of live or stored audio and video is the predominant part of real-time multimedia. In the download mode of multimedia transport over the Internet, a user downloads the entire audio/video file and then plays back the media file. However, full file transfer in the download mode usually suffers long and perhaps unacceptable transfer times. In contrast, in the streaming mode the audio and video content need not be downloaded in full but is played out while parts of the content are still being received and decoded. Multimedia streaming is an important component of many Internet applications such as distance learning, digital libraries, video conferencing, home shopping, and video-on-demand. The best-effort nature of the current Internet poses many challenges to the design of streaming video systems. Owing to its real-time nature, audio and video streaming typically has bandwidth, delay, and loss requirements. However, the current best-effort Internet does not offer any quality of service (QoS) guarantees to streaming media.
The design of some early streaming media programs, such as VivoActive 1.0 [4], was based on the use of the H.263 video and G.723 audio protocols, with HTTP-based web servers, to deliver encoded media content.