In the early 1990s, the need for a ‘third-generation’ cellular standard was recognised by many agencies worldwide. The European Union had funded a series of research programmes since the late 1980s, such as RACE [1], aimed at putting in place the enabling technology for 3G, and similar work was underway in Japan, the USA and other countries. The runaway success of GSM, however, had fortuitously put ETSI in the driving seat, as most countries wanted a GSM-compatible evolution for 3G. Recognising this need, and to reduce the risk of fragmented cellular standards that characterised the world before GSM, ETSI proposed that a partnership programme should be established between the leading national bodies to develop the new standard. The result was the formation of 3GPP (the Third Generation Partnership Project) between national standards bodies representing China, Europe, Japan, Korea and the USA. It was agreed that ETSI would continue to provide the infrastructure and technical support for 3GPP, ensuring that the detailed technical knowledge of GSM, resident in the staff of the ETSI secretariat, was not lost.
Although not explicitly written down in the form of detailed 3G requirements at the time, the consensus amongst the operator, vendor and administration delegates that drove standards evolution might best be summarised as:
Provide better support for the expected demand for rich multimedia services,
Provide lower-cost voice services (through higher voice capacity),
Reuse GSM infrastructure wherever possible to facilitate smooth evolution.
This was clearly a very sensible set of ambitions, but limited experience of multimedia in the fixed network, and of the specific needs of packet-based services in particular, meant that some aspects of the resulting UMTS standard were not ideal.
As discussed earlier, one method for increasing capacity in cellular networks is to make the cell sizes smaller, allowing more subscribers to use the available radio spectrum without interfering with each other. A similar approach is used in 802.11 (Wi-Fi) – either to provide public ‘hot-spot’ broadband connections or to link to broadband connections in the home via a Wi-Fi router. The range of such systems is limited to a few tens of metres by restricting the power output of the Wi-Fi transmitter. The goal of a wireless mesh is to extend the 802.11 coverage outdoors over a wider area (typically tens of square kilometres) not simply by increasing the power, but by creating contiguous coverage with dozens of access points (APs) or nodes, separated by distances of 100–150 metres. For such a solution to be economically viable, the access points themselves need to be relatively cheap to manufacture and install, and the back-haul costs must be tightly managed. To address this latter requirement, only a small percentage of APs (typically 10–20%) have dedicated back-haul to the Internet; the other APs pass their traffic through neighbouring APs until an AP with back-haul is reached. At the time of writing, the IEEE 802.11s standard for mesh networking is still being drafted, and various proprietary flavours of mesh networks exist. The following discussion outlines the principal characteristics and properties of most commercially available mesh networks, which will be embodied in 802.11s when it is finalised.
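To put these figures in context, the short sketch below estimates the total AP count and the back-haul requirement for a mesh deployment. It is a minimal illustration only: the square-grid layout, the 125 m spacing and the 15% back-haul ratio are assumptions drawn from the ranges quoted above, not values from 802.11s or any product.

```python
# Illustrative back-of-envelope sizing for an outdoor 802.11 mesh, using
# the AP spacing and back-haul ratio quoted in the text.  The square-grid
# layout is a simplifying assumption for illustration only.

def mesh_ap_count(area_km2: float, ap_spacing_m: float = 125.0,
                  backhaul_fraction: float = 0.15) -> tuple[int, int]:
    """Estimate total APs and wired (back-haul) APs for a square-grid mesh."""
    area_m2 = area_km2 * 1e6
    aps = round(area_m2 / ap_spacing_m ** 2)   # one AP per spacing x spacing cell
    wired = round(aps * backhaul_fraction)     # only a small share has dedicated back-haul
    return aps, wired

if __name__ == "__main__":
    total, wired = mesh_ap_count(area_km2=20.0)
    print(f"{total} APs in total, of which ~{wired} need dedicated back-haul")
```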
The life-cycle of any wireless telecommunications network broadly follows the process illustrated in Figure 10.1. As discussed in Chapter 3, the initial planning makes technology choices to meet the overall business goals. This leads to the design phase, where detailed studies are made of system capacity and coverage to ensure that the network performance criteria are likely to be met. Once the design is complete, the network infrastructure is ordered, installed and commissioned – depending on the scale of the network, this phase can take many months or even years. Once the network has been built and commissioned, subscribers are given access to the network, which is then said to be operational. The performance of the network is subsequently routinely monitored to ensure that any equipment failures, software problems or other issues are quickly identified. Any problems that do occur are fixed by operational support engineers or automatically dealt with by equipment redundancy and fault correction procedures. Finally, the performance of the stable network is examined, and its configuration is fine-tuned to maximise capacity or optimise the quality of service delivered to the subscribers. If necessary, the cycle is repeated, as new network expansion is planned and the network grows to cope with growth in the subscriber base.
In practice, the phases of the network life-cycle are rarely as distinct as this, and there is much overlap and iteration around the cycle.
At the beginning of 2003, results from the first commercial deployments of UMTS were coming in, and the shortcomings of Release 99, both its spectral inefficiency and its long ‘call’ set-up times, were becoming apparent. Coincident with this, Wi-Fi networks were becoming omnipresent in businesses and cities, and broadband was achieving significant penetration in homes through much of the developed world. The user expectation was shifting from ‘dial-up’ latencies of tens of seconds to delays of less than 200 ms! Together, these events made it clear that there was a need for change if operators using 3GPP-based networks were to remain competitive. The requirements for a ‘long-term evolution’ of UMTS can thus be traced to this time, when a study commenced that eventually led to the publication of the document Evolution of 3GPP System [1]. The results from this study made it clear that, in future systems, support of all services should be via a single ‘all IP network’ (AIPN), a fact already recognised in the 3GPP study ‘IP Multimedia Services’, with an initial functionality release in Release 5 (June 2002). What was different, however, was that the study also identified that both the core network and access systems needed to be updated or replaced in order to provide a better user experience. Even at this initial stage, control and user plane latencies were itemised as major issues, with latencies of less than 100 ms targeted alongside peak user rates of 100 Mbits/s.
The development of a framework to assess system capacity for any air interface typically falls into two parts: first, estimation of the S/N or C/I necessary to deliver the required bit error rate (BER) for the system over a point-to-point link; and second, an understanding of the limiting system conditions under which the most challenging S/N or C/I will be encountered. Once these two conditions are understood, calculations of maximum ranges and capacities for the system can be made.
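As a concrete, if simplified, illustration of how such calculations proceed, the sketch below converts an assumed S/N requirement and noise floor into a maximum range using a log-distance path-loss model. Both the model and every numeric value are illustrative assumptions, not figures from the text.

```python
# A minimal sketch of the second step described above: turning an assumed
# S/N requirement into a maximum range via a simple link budget.  The
# log-distance path-loss model and all numeric values are illustrative
# assumptions, not figures taken from the text.

def max_range_m(tx_power_dbm: float, required_snr_db: float,
                noise_floor_dbm: float, pl_at_1m_db: float = 40.0,
                path_loss_exponent: float = 3.5) -> float:
    """Maximum distance at which the required S/N is still met."""
    sensitivity_dbm = noise_floor_dbm + required_snr_db   # minimum usable receive level
    max_path_loss_db = tx_power_dbm - sensitivity_dbm     # the link budget
    # Log-distance model: PL(d) = PL(1 m) + 10 * n * log10(d), solved for d
    return 10.0 ** ((max_path_loss_db - pl_at_1m_db) / (10.0 * path_loss_exponent))

if __name__ == "__main__":
    d = max_range_m(tx_power_dbm=43.0, required_snr_db=9.0, noise_floor_dbm=-114.0)
    print(f"Maximum range under these assumptions: ~{d:.0f} m")
```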
C/I assessment for GSM
The base modulation scheme employed by GSM is Gaussian minimum shift keying (GMSK). This is a form of binary frequency shift keying in which the input bit stream used to drive the frequency modulator is filtered by a network with a Gaussian impulse response. This filter removes the high frequencies that would otherwise be present because of the ‘fast’ edges in the modulating bit stream. If BT, the product of the filter's −3 dB bandwidth and the modulating bit period, is chosen to be around 0.3, most of the energy of the 270 kbits/s GSM bit stream can be accommodated in a 200 kHz channel with low interference from the adjacent channels and negligible interference from those beyond.
Figure 5.1 illustrates the effect of the Gaussian filtering on the original rectangular pulse train and goes on to show how inter-symbol interference arises as multi-path introduces delays longer than that of the direct ‘ray’.
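Although the text describes this filtering with reference to Figure 5.1, its effect is easy to reproduce numerically. The short sketch below passes a rectangular NRZ pulse train through a Gaussian filter with BT = 0.3; the sample rate, bit pattern and filter span are illustrative choices rather than values taken from the GSM specification.

```python
# A numerical illustration of the Gaussian pulse shaping described above:
# a rectangular NRZ bit stream filtered by a Gaussian filter with BT = 0.3.
# Sample rate, bit pattern and filter span are illustrative choices only.
import numpy as np

BT = 0.3                  # bandwidth-bit-period product quoted in the text
SAMPLES_PER_BIT = 16

def gaussian_filter_taps(bt: float, spb: int, span_bits: int = 4) -> np.ndarray:
    """Impulse response of the Gaussian filter, normalised to unit area."""
    t = np.arange(-span_bits * spb, span_bits * spb + 1) / spb   # time in bit periods
    h = np.sqrt(2 * np.pi / np.log(2)) * bt * np.exp(
        -2 * (np.pi * bt * t) ** 2 / np.log(2))
    return h / h.sum()

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
nrz = np.repeat(2 * bits - 1, SAMPLES_PER_BIT)       # rectangular +/-1 pulse train
shaped = np.convolve(nrz, gaussian_filter_taps(BT, SAMPLES_PER_BIT), mode="same")
print(shaped[:8])          # smoothed edges compared with the rectangular input
```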
Telecommunications networks have always fascinated me. My interest was sparked when, as an engineering student in the late sixties, I was told that the telephone network was the biggest machine on earth yet it was constructed from a few basic building blocks replicated many, many times over. It seemed to me that a machine with those characteristics was both already a remarkable engineering feat and a perfect platform for the rapid development of more sophisticated services. So I decided upon a career in telecommunications engineering.
I soon discovered that there was nothing basic about either the building blocks or the architecture of those networks. They were already highly sophisticated pieces of engineering at every layer and in every enabling technology. That sophistication was to lead to the continuing development of telecoms networks at a far greater pace than any of us working in the field three or four decades ago could possibly have imagined.
From voice to data; analogue to digital; terrestrial to satellite; tethered to untethered, the progress has been remarkable. Yet undoubtedly the most remarkable development of all has been in wireless networks. Nearly half of the world's population take it for granted that the purpose of telecoms networks is to connect people, not places. An increasing proportion of them use those connections for exchanging text and images as readily as voice. The transformational effect on national economies, education, health and many other factors that bear upon the quality of life is apparent.
This chapter aims to provide an overview of the role of the core network and transmission in wireless solutions. Insight is given into the factors that have influenced network evolution from early cellular architectures, such as GSM Release 98, through to systems currently being standardised for the future, exemplified by Release 8. The chapter will conclude with a worked example illustrating the dimensioning of IP multimedia subsystem (IMS) transmission for a system supporting multiple applications.
It is useful to establish a common terminology before discussing networks in more detail. In the early 1990s, ETSI proposed the convention shown in Figure 9.1 [1] to distinguish between two distinct types of circuit service that a network might provide, namely bearer services and end-to-end applications, which it called teleservices. In the case of bearer services, a wireless network is ‘providing the capability to transmit signals between two access points’. Support of teleservices, however, requires the provision of ‘the complete capability, including terminal equipment functions, for communication between users according to protocols established by agreement between network operators’. Defining teleservices in this way meant that the details of the complete set of services, applications and supplementary services they provide were standardised. As a consequence, substantial effort is often required to introduce new services or simply to modify existing ones (customisation), which makes it more difficult for operators to differentiate their services.
The discussion in Chapter 1 indicated that dividing the planned coverage area into a number of radio cells results in a more spectrally efficient solution with smaller and lighter end-user devices. In such a network, as the individual moves further away from the cell to which he or she is currently connected, the signal strength at the mobile eventually falls to a level where correct operation cannot be guaranteed and the call may ‘drop’. However, because the cellular system is designed to ensure good coverage over the planned region, there will be one or more other cells at this location that can be received at adequate signal strength, provided some mechanism is found to ‘hand over’ the call to one of these cells. Most of the complexity in practical cellular systems arises from the need to achieve this handover in a way that is as imperceptible to the user as possible.
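A deliberately simplified sketch of such a handover decision is given below: the call is moved to a neighbouring cell only when that cell exceeds the serving cell by a hysteresis margin, which prevents rapid ‘ping-pong’ between cells. The margin value and the form of the measurement report are illustrative assumptions rather than details from any particular standard.

```python
# A deliberately simplified handover trigger of the kind described above.
# The 3 dB hysteresis margin and the dictionary-based measurement report
# are illustrative assumptions, not values from any standard.

HYSTERESIS_DB = 3.0   # assumed margin; real networks use operator-tuned values

def choose_serving_cell(serving: str, measurements: dict[str, float]) -> str:
    """Return the cell that should serve the call after this measurement report."""
    best = max(measurements, key=measurements.get)
    if best != serving and measurements[best] > measurements[serving] + HYSTERESIS_DB:
        return best          # hand over to the stronger neighbour
    return serving           # otherwise stay on the current cell

if __name__ == "__main__":
    report = {"cell_A": -95.0, "cell_B": -90.0, "cell_C": -101.0}   # levels in dBm
    print(choose_serving_cell("cell_A", report))    # hands over to cell_B
```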
This chapter aims to establish a common understanding of the way most cellular networks operate, using the ubiquitous GSM system as a baseline, and highlight the key differences that can be expected in networks providing fixed or ‘nomadic’ wireless access. It will also explore the factors that significantly contribute to cellular network operating expense and thus determine activities that impact the operators' profit and loss account. Finally, the profit and loss account will be used as an agenda to identify wireless network technologies that are likely to change in the future.
At the time of writing, and to an extent never seen before, there is an expectation that almost any information or service that is available through communication systems in the office or home will be available wherever the user happens to be. This is placing incredible demands on wireless communications and has been the driver for the genesis and deployment of three generations of cellular systems in the space of 20 years. In parallel with this revolution in access technology has come the recognition that any information, whether for communication, entertainment or, indeed, for other purposes as yet unenvisaged, can be stored and transported in a universal digital format. The former technology-driven distinctions of analogue storage and transport for high bandwidth signals, such as video, and digital storage for other content are no more. These changes, together with an increasing international consensus on a ‘light-touch’ regime for regulation to stimulate competition, have enabled the first generation of quad-play multi-national companies to become established. Such companies seek to spread a strong base of content and services across what would formerly have been known as broadcast (cable, satellite, terrestrial), fixed telephony, mobile and broadband access channels. However, the ability of such companies to deliver applications and services that operate reliably and consistently, regardless of user location, is ultimately predicated on their ability to design solutions that deliver an appropriate and guaranteed quality of service (QoS) over what will certainly be a finite, and potentially narrow, access data pipe.
The human species is unique amongst all life forms in developing a sophisticated and rich means of communication: speech. While communication may have had its origins in the need for individuals to work co-operatively to survive, it is now deeply embedded in the human psyche and is motivated as much by social as by business needs. Historically, this need was met simply as individuals with similar interests and values chose to form small settlements or villages, and all communication was face to face. It was not until the introduction of the telephone in the late nineteenth century that social and business networks could be sustained even when the individuals concerned did not live in the vicinity. Although the coverage and level of automation of the fixed telephony network improved dramatically over the next 100 years, the next major step, communication on the move, was only possible with the introduction of wireless networks.
The term ‘wireless network’ is very broad and, at various points in history, could have included everything from Marconi's first transatlantic communication in 1901, through the first truly mobile (tactical) networks in the form of the Motorola walkie-talkie used during the Second World War, to the wide-area private mobile networks used by the emergency services and large companies since the late 1940s. However, ‘wireless networks’ didn't really enter the public consciousness until the commercial deployment of cellular mobile radio in the 1980s.
In Chapters 1 and 2, the drivers for the development of the cellular system architecture were discussed and an overview of the key network elements and principles of operation for a GSM cellular solution was provided. The remainder of the book will address the activities necessary to design and deploy profitable wireless networks. Figure 3.1 summarises where these key processes are to be found by chapter.
In this chapter, the principles and processes that are used to plan wireless access networks will be developed. The major focus will be on cellular networks, as these usually represent the most complex planning cases, but an overview of the corresponding processes for 802.11 is also provided. Circuit voice networks will be examined initially; the treatment will then be extended to understand the additional considerations that come into play as first circuit-data and subsequently packet-data-based applications are introduced.
With the planning sequence understood, the way in which information from such processes can be used to explore the potential profitability of networks well in advance of deployment will be addressed. Choices regarding which applications are to be supported in the network and the quality of service offered will be shown to have a major impact on the profitability of network projects.
Circuit voice networks
In most forms of retailing, the introduction of new products follows the ‘S curve’ sequence first recognised by Rogers [1].
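The ‘S curve’ is commonly represented with a logistic function, as in the brief sketch below; this generic form and its parameters are illustrative and are not Rogers' own formulation.

```python
# A generic logistic 'S curve' of cumulative adoption.  The midpoint and
# steepness values are arbitrary illustrative choices.
import math

def adoption_fraction(t_years: float, midpoint: float = 5.0, steepness: float = 1.0) -> float:
    """Cumulative fraction of the addressable market that has adopted by time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t_years - midpoint)))

for year in range(0, 11, 2):
    print(year, f"{adoption_fraction(year):.2f}")   # slow start, rapid growth, saturation
```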
In Chapter 3, the generic principles, processes and deployment configurations applicable to cellular network planning were developed. In this chapter and the following four, a detailed design process will be developed to address, step by step, the practical deployment process for four different wireless networks.
This chapter describes the steps in the planning process that are essentially common, regardless of the specific air interface under consideration. The high-level planning of Chapter 3 will have estimated the total number of cell sites and the maximum cell size, and made decisions on the applications to be deployed. The detailed plan will define the actual locations of cell sites, antenna types, mast heights, etc., using topographical data for the specific regions. The plan will also provide guaranteed levels of coverage, capacity and availability for the applications to be supported.
Changes over the last 25 years in the relative implementation cost of particular technologies, and in their impact on battery life, have given rise to three distinct RAN standards:
TDMA (as employed in GSM, GPRS, EDGE),
CDMA (as employed in UMTS releases 99, 4, 5, 6, 7 and CDMA 2000),
OFDMA (as employed in 802.11, 802.16e (WiMAX) and as planned for 3G LTE).
These technologies are expected to dominate wireless deployments over the next 20 years and it is a comprehensive understanding of major factors, such as coverage, capacity and latency, that will enable system designers to exploit their potential fully.
The effects of roundoff noise in control and signal processing systems, and in numerical computation, have been described in detail in the previous chapters.
There is, however, another kind of quantization that takes place in these systems which has not yet been discussed: coefficient quantization.
The coefficients of an equation being implemented by computer must be represented according to a given numerical scale. The representation is of course done with a finite number of bits. The same would be true for the coefficients of a digital filter or for the gains of a control system.
If a coefficient can be perfectly represented with the allowed number of bits, there is no error in the system implementation. If the coefficient requires more bits than the allowed word length provides, it must be rounded to the nearest number on the allowed number scale. Rounding the coefficient changes the implementation and causes an error in the computed result. This error is distinct from, and independent of, the quantization noise introduced by roundoff in computation. Its effect is bias-like, rather than having the PQN nature of the roundoff noise studied previously.
If a numerical equation is being implemented or simulated by computer, quantization of the coefficients causes the implementation of a slightly different equation.
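As a small numerical illustration of this point, the sketch below rounds a set of example filter coefficients to an assumed 8-bit fractional word length; the coefficient values and the word length are arbitrary illustrative choices, but the fixed, bias-like nature of the resulting error is apparent.

```python
# Rounding filter coefficients to a finite word length implements a
# slightly different filter.  The example FIR coefficients and the 8-bit
# fractional format are arbitrary illustrative choices.
import numpy as np

def quantize(coeffs: np.ndarray, frac_bits: int) -> np.ndarray:
    """Round each coefficient to the nearest multiple of 2**-frac_bits."""
    step = 2.0 ** -frac_bits
    return np.round(coeffs / step) * step

b = np.array([0.2929, 0.5858, 0.2929])       # example FIR coefficients
bq = quantize(b, frac_bits=8)

print("ideal    :", b)
print("quantized:", bq)
print("error    :", bq - b)                  # a fixed, bias-like error, not random noise
```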
The purpose of this chapter is to provide an introduction to the basics of statistical analysis, to discuss the ideas of probability density function (PDF), characteristic function (CF), and moments. Our goal is to show how the characteristic function can be used to obtain the PDF and moments of functions of statistically related variables. This subject is useful for the study of quantization noise.
PROBABILITY DENSITY FUNCTION
Figure 3.1(a) shows an ensemble of random time functions, sampled at time instant t = t1 as indicated by the vertical dashed line. Each of the samples is quantized in amplitude. A “histogram” is shown in Fig. 3.1(b): a “bar graph” indicating the relative frequency of the samples falling within each quantum box. Each bar can be constructed to have an area equal to the probability of the signal falling within the corresponding quantum box at time t = t1, so the areas must sum to 1. If the ensemble has an arbitrarily large number of member functions, this probability is equal to the number of “hits” in the given quantum box divided by the total number of samples. If the quantum box size is made smaller and smaller, in the limit the histogram becomes f_x(x), the probability density function (PDF) of x, sketched in Fig. 3.1(c).
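The construction can be mimicked numerically, as in the brief sketch below, which sorts samples from an assumed Gaussian source into quantum boxes and scales the bars so that the total area is one; the source distribution and box width are illustrative choices only.

```python
# A numerical analogue of the histogram-to-PDF construction described
# above.  The Gaussian source and the box width are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)   # stand-in ensemble at t = t1

box_width = 0.1
edges = np.arange(-5.0, 5.0 + box_width, box_width)
counts, _ = np.histogram(samples, bins=edges)
pdf_estimate = counts / (samples.size * box_width)        # areas of all bars sum to 1

center_bin = np.argmin(np.abs((edges[:-1] + edges[1:]) / 2))   # bin whose centre is nearest 0
print("estimated f_x(0) ~", pdf_estimate[center_bin])
print("true N(0,1) value:", 1 / np.sqrt(2 * np.pi))
```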