Wireless networking is quickly becoming a de facto standard in the enterprise, streamlining business processes to deliver increased productivity, reduced costs and increased profitability. Security has remained one of the largest issues as companies struggle with how to ensure that data is protected during transmission and that the network itself is secure. Wi-Fi Protected Access (WPA) offered an interim security solution, but was not without constraints that resulted in increased security risks. The new WPA2 (802.11i) standard eliminates these vulnerabilities and offers truly robust security for wireless networks. As a global leader in wireless networking, Motorola, through the acquisition of the former Symbol Technologies, not only offers this next generation of wireless security, but also builds on the new standard with value-added features that further improve performance and the mobility experience for all users.
Overview
Corporations are increasingly being asked to allow wireless network access to increase business productivity, and corporate security officers must provide assurance that corporate data is protected, security risks are mitigated and regulatory compliance is achieved. This chapter will discuss:
The risks of wireless insecurity;
The progression of security standards and capabilities pertaining to Wi-Fi security;
How the 802.11i standard provides robust security for demanding wireless environments;
How Motorola incorporates 802.11i in its wireless switching products in a way that optimizes scalability, performance and investment protection.
Risks of Wireless Insecurity
The advent of wireless computing and the massive processing power available within portable devices give organizations an unprecedented ability to deliver flexible, on-demand computing services that enable business initiatives.
This chapter describes which services are commonly deployed in Municipal Wireless networks and for which types of customers, some of the networking considerations that each service may drive, and some high-level architectural diagrams. As you will see, one of the key issues is where and how much network control should be implemented. One of the fundamental decisions a network operator has to make for an IP network is whether to centralize or distribute the network control. In this context, network control means the control of the data flow associated with each user. There are advantages and disadvantages to both approaches. Note that this chapter does not intend to make any recommendation on this design issue. The high-level diagrams shown throughout the chapter convey the design concepts that network operators will encounter as they build their network infrastructures.
Introduction
Municipal Wireless networks are a hot new topic that is changing the face of telecom today. With the ability to offer broadband speeds over the airwaves, governments and service providers alike have looked to this network approach as a way to enhance their services to the community. Over 300 governments have created Municipal Wireless networks, ranging in size up to 2 square miles. Many more governments are planning deployments, with the world's largest cities planning networks of over 100 square miles.
The drivers for the creation of these networks are varied.
The proliferation of wireless multi-hop communication infrastructures in office or residential environments depends on their ability to support a variety of emerging applications requiring real-time video transmission between stations located across the network. We propose an integrated cross-layer optimization algorithm aimed at maximizing the decoded video quality of delay-constrained streaming in a multi-hop wireless mesh network that supports quality-of-service (QoS). The key principle of our algorithm lies in the synergistic optimization of different control parameters at each node of the multi-hop network, across the protocol layers - application, network, medium access control (MAC) and physical (PHY) layers, as well as end-to-end, across the various nodes. To drive this optimization, we assume an overlay network infrastructure, which is able to convey information on the conditions of each link. Various scenarios that perform the integrated optimization using different levels (“horizons”) of information about the network status are examined. The differences between several optimization scenarios in terms of decoded video quality and required streaming complexity are quantified. Our results demonstrate the merits and the need for cross-layer optimization in order to provide an efficient solution for real-time video transmission using existing protocols and infrastructures. In addition, they provide important insights for future protocol and system design targeted at enhanced video streaming support across wireless mesh networks.
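To make the idea of per-hop, cross-layer parameter selection concrete, the following minimal Python sketch searches over hypothetical PHY rates and MAC retry limits on a two-hop path, keeping only configurations that respect an end-to-end delay budget and scoring the rest with a crude quality proxy. The numbers, the link model and the quality proxy are illustrative assumptions, not the optimization algorithm proposed in the chapter.

```python
from itertools import product

# Toy per-hop link model (illustrative numbers, not from the chapter):
# higher PHY rates shorten transmission time but raise the loss probability.
PHY_RATES = {6: 0.02, 24: 0.10, 54: 0.30}   # Mbit/s -> per-attempt loss prob.
RETRY_LIMITS = [1, 2, 4]                     # maximum transmission attempts per hop
PACKET_BITS = 100_000                        # e.g. one video frame
DEADLINE_MS = 40.0                           # end-to-end delay budget
NUM_HOPS = 2

def hop_metrics(rate_mbps, attempts):
    """Worst-case delay (ms) and delivery probability for one hop."""
    tx_ms = PACKET_BITS / (rate_mbps * 1000)          # ms per attempt
    loss = PHY_RATES[rate_mbps]
    delivered = 1 - loss ** attempts                  # success within the attempt budget
    return tx_ms * attempts, delivered

def best_configuration():
    """Exhaustively search per-hop (rate, attempts) pairs, maximizing a crude
    quality proxy (delivery probability x bottleneck PHY rate) subject to the
    end-to-end delay deadline."""
    best, best_score = None, -1.0
    per_hop_choices = list(product(PHY_RATES, RETRY_LIMITS))
    for config in product(per_hop_choices, repeat=NUM_HOPS):
        delay = sum(hop_metrics(r, n)[0] for r, n in config)
        if delay > DEADLINE_MS:
            continue                                   # violates the delay constraint
        delivery = 1.0
        for r, n in config:
            delivery *= hop_metrics(r, n)[1]
        score = delivery * min(r for r, _ in config)   # bottleneck rate as quality stand-in
        if score > best_score:
            best, best_score = config, score
    return best, best_score

if __name__ == "__main__":
    print(best_configuration())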
Introduction
Wireless mesh networks are built based on a mixture of fixed and mobile nodes interconnected via wireless links to form a multi-hop ad-hoc network.
Wireless mesh networking is rapidly gaining in popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This growing popularity rests on two basic facts: ease of deployment and an increase in network capacity, expressed in bandwidth per unit of coverage area.
So what is a mesh network? Simply put, it is a set of fully interconnected network nodes that support traffic flows between any two nodes over one or more paths or routes. Adding wireless to the above brings the additional ability to maintain connectivity while the network nodes are in motion. The Internet itself can be viewed as the largest scale mesh network formed by hundreds of thousands of nodes connected by fiber or other means, including, in some cases, wireless links.
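As a small illustration of this "more than one path between any two nodes" property, the sketch below runs a breadth-first route search over a hypothetical five-node mesh and shows that a destination stays reachable when a link fails. The topology and the link-failure handling are invented for illustration only.

```python
from collections import deque

# Hypothetical 5-node mesh: every node has more than one neighbour,
# so traffic between any pair can survive a single link failure.
MESH = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"B", "E"},
    "E": {"C", "D"},
}

def find_route(graph, src, dst, broken=frozenset()):
    """Breadth-first search for a shortest route, skipping failed links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph[node]:
            if nxt in seen or frozenset((node, nxt)) in broken:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(find_route(MESH, "A", "E"))                            # ['A', 'C', 'E']
print(find_route(MESH, "A", "E", broken={frozenset("CE")}))  # reroutes: ['A', 'B', 'D', 'E']
```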
In this chapter we will look more closely into wireless mesh networks.
History
Mesh networking goes back a long time; in fact, tactical military networks have relied on store-and-forward nodes with multiple interconnections since the early days of electronic communications. The advent of packet switching allowed the forwarding function of these networks to be buried in the lower layers of communication systems, which opened up many new possibilities for improving their capacity and redundancy. Attracted by the inherent survivability of mesh networks, the US defense research agency DARPA has funded a number of projects aimed at creating a variety of high-speed mesh networking technologies that support troop deployment on the battlefield, as well as low-speed, highly survivable sensor networks.
“Epilogue” – it sounds like the story is ending. But obviously the Wi-Fi story is continuing strong, evidenced by the contents of this book.
So let us consider this not as an “epilogue”, but as just a brief pause to catch our breath. This book has covered so many of the topics that we know are important today. But based on our past experience, who really knows what future applications will be dreamed up? Who really knows which new technologies will prove to be important in the future evolution of Wi-Fi? It is very humbling to recall that back in the early and mid-1990s, when the IEEE 802.11 standards were originally being developed, the primary application on the minds of the key participants was not networking in the home, or wireless Internet access, or public hotspots, or voice over IP, or multimedia services, or city-wide wireless – but things like wireless bar code scanning and retail store inventory management. These “vertical” applications for Wi-Fi technology continue to be important today, but oh how far we have travelled.
So only an actual seer could predict the real future of Wi-Fi over the next 10 years. But one thing is clear: Wi-Fi will continue to play a role in our lives. Everything in technology has a finite lifespan – hardware products have a lifespan, software products have a lifespan – but the lifespan of a successful protocol, implemented in millions of devices worldwide, can be very, very long.
Let us now return to our original problem from Chapter 2: receiver design. Our ultimate goal is to recover (in an optimal manner) the transmitted information bits b from the received waveform r(t). In this chapter, we will formulate an inference problem that will enable us to achieve this task. In the first stage the received waveform is converted into a suitable observation y. We then create a factor graph of the distribution p(B, Y = y|M). This factor graph will contain three important nodes, expressing the relationship between information bits and coded bits (the decoding node), between coded bits and coded symbols (the demapping node), and between coded symbols and the observation (the equalization node). Correspondingly, the inference problem breaks down into three sub-problems: decoding, demapping, and equalization.
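Written out, the factorization behind these three nodes looks roughly as follows, where we write b for the information bits, c for the coded bits and x for the coded symbols; the notation here is ours and is only meant to make the structure visible, not the exact form derived in the chapter:

```latex
p(\mathbf{b}, \mathbf{y} \mid \mathcal{M})
  = \sum_{\mathbf{c}} \sum_{\mathbf{x}}
    \underbrace{p(\mathbf{y} \mid \mathbf{x})}_{\text{equalization node}}\,
    \underbrace{p(\mathbf{x} \mid \mathbf{c})}_{\text{demapping node}}\,
    \underbrace{p(\mathbf{c} \mid \mathbf{b})}_{\text{decoding node}}\,
    p(\mathbf{b})
```

with a (typically uniform) prior p(b); the three factors correspond exactly to the three nodes named above, and inference over each factor gives the three sub-problems of decoding, demapping, and equalization.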
Decoding will be covered in Chapter 8, where we will describe some state-of-the-art iterative decoding schemes, including turbo codes, RA codes, and LDPC codes. The demapping problem will be considered in Chapter 9 for two common modulation techniques: bit-interleaved coded modulation and trellis-coded modulation. Equalization depends strongly on the specific digital communication scheme. In Chapter 10, we will derive several general-purpose equalization strategies. In the three subsequent chapters, we will then show how these general-purpose strategies can be applied to the digital communication schemes from Chapter 2. In Chapter 11 the focus is on single-user, single-antenna communication.
Claude E. Shannon was one of the great minds of the twentieth century. In the 1940s, he almost single-handedly created the field of information theory and gave the world a new way to look at information and communication. The channel-coding theorem, where he proved the existence of good error-correcting codes to transmit information at any rate below capacity with an arbitrarily small probability of error, was one of his fundamental contributions. Unfortunately, Shannon never described how to construct these codes. Ever since his 1948 landmark paper “A mathematical theory of communication” [1], the channel-coding theorem has tantalized researchers worldwide in their quest for the ultimate error-correcting code. After more than forty years, state-of-the-art error-correcting codes were still disappointingly far away from Shannon's theoretical capacity bound. No drastic improvement seemed to be forthcoming, and researchers were considering a more practical capacity benchmark, the cut-off rate [2], which could be achieved by practical codes.
In 1993, two hitherto little-known French researchers from the ENST in Bretagne, Claude Berrou and Alain Glavieux, claimed to have discovered a new type of code, which operated very close to the Shannon capacity with reasonable decoding complexity. The decoding process consisted of two decoders passing information back and forth, giving rise to the name “turbo code.” They first presented their results at the IEEE International Conference on Communications in Geneva, Switzerland [3]. Quite understandably, they were met with a certain amount of skepticism by the traditional coding community. Only when their findings were reproduced by other labs did the turbo idea really take off.
Before we can even consider designing iterative receivers, we have a lot of ground to cover. The ultimate goal of the receiver is to recover optimally the sequence of information bits that was sent by the transmitter. Since these bits were random in the first place, and they are affected by random noise at the receiver, we need to understand and quantify this randomness, and incorporate it into the design of our receiver. We need to specify what it means to recover optimally the information bits from the received signal. Optimal in what sense? To answer this question, we call on estimation theory, which will allow us to formulate a suitable optimization problem, the solution of which will give our desired optimal data recovery. Unfortunately, as is common with most optimization problems of any practical relevance, finding a closed-form solution is intractable. There exists a set of tools that can solve these problems approximately by means of sampling. They go under the name of Monte Carlo (MC) techniques and can help in describing difficult distributions, and in obtaining their characteristics, by means of a list of samples. These MC methods will turn out to be very useful in the factor-graph framework.
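As a minimal numerical sketch of this "list of samples" idea, the Python code below represents an invented, unnormalized one-dimensional density by weighted particles drawn from a Gaussian proposal (importance sampling), and reads off a couple of its characteristics from the weighted list. The target density and the proposal are assumptions chosen for illustration, not distributions that arise in receiver design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: a density known only up to a constant,
# p(x) proportional to exp(-x^4 / 4).
def target_unnormalized(x):
    return np.exp(-x**4 / 4.0)

def proposal_pdf(x):
    # Standard Gaussian proposal q(x) = N(0, 1).
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

N = 100_000
particles = rng.standard_normal(N)                       # samples from q
weights = target_unnormalized(particles) / proposal_pdf(particles)
weights /= weights.sum()                                 # self-normalized weights

# The weighted particle list now "describes" p: any characteristic of p,
# e.g. its mean or second moment, is approximated by a weighted average.
posterior_mean = np.sum(weights * particles)
second_moment = np.sum(weights * particles**2)
print(posterior_mean, second_moment)
```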
This chapter is organized as follows.
We will start with the basics of Bayesian estimation theory in Section 3.2, covering some important estimators both for discrete and for continuous parameters.
In Section 3.3, we will provide a brief introduction to MC techniques, including particle representations of distributions, importance sampling, and Markov-chain Monte Carlo (MCMC) methods.
As with any good story, it is best to start at the very beginning. Digital communication deals with the transmission of binary information (obtained as the output of a source encoder) from a transmitter to a receiver. The transmitter converts the binary information to an analog waveform and sends this waveform over a physical medium, such as a wire or open space, which we will call the channel. As we shall see, the channel modifies the waveform in several ways. At the receiver, this modified waveform is further corrupted by thermal noise. Not only does the receiver have to recover the original binary information, but it must also deal with channel effects, thermal noise, and synchronization issues. All in all, the receiver has the bad end of the deal in digital communications. For this reason, this book deals mainly with receiver design, and only to a very small extent with the transmitter.
In this chapter we will describe several digital transmission schemes, detailing how binary information is converted into a waveform at the transmitter side, and how a corrupted version of this waveform arrives at the receiver side. Right now, our focus is not on how the corresponding receivers should be designed. Since there is a myriad of digital transmission schemes, we are obliged to limit ourselves to some of the most important ones.
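As a toy end-to-end example in the spirit of this chapter, the Python sketch below maps a handful of bits to antipodal (BPSK-style) symbols, corrupts them with additive white Gaussian noise, and lets a naive threshold receiver guess the bits back. The mapping, noise level and threshold rule are illustrative assumptions rather than one of the schemes treated here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy link: the bit values, mapping and noise level are
# illustrative only.
bits = rng.integers(0, 2, size=10)        # binary information
symbols = 1 - 2 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1
noise_std = 0.5                           # channel adds white Gaussian noise
received = symbols + noise_std * rng.standard_normal(symbols.shape)

# A naive receiver: threshold the noisy observations to recover the bits.
decisions = (received < 0).astype(int)
print(bits)
print(np.round(received, 2))
print(decisions)
```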
In early 2002, I was absent-mindedly surfing the Internet, vaguely looking for a tutorial on turbo codes. My PhD advisor at Ghent University, Marc Moeneclaey, thought it wise for his new students to become familiar with these powerful error-correcting codes. Although I finally settled on W. E. Ryan's “A turbo code tutorial,” my search led me (serendipitously?) to the PhD thesis of Niclas Wiberg. This thesis shows how to describe codes by means of a (factor) graph, and how to decode them by passing messages on this graph. Although interesting, the idea seemed a bit far-fetched, and I didn't fully appreciate or understand its significance. Nevertheless, Wiberg's thesis stayed in the back of my mind (or at least, I'd like to think so now).
During 2002 and 2003, I worked mainly on synchronization and estimation algorithms for turbo and LDPC codes. A colleague of mine, Justin Dauwels, who was at that time a PhD student of Andy Loeliger at the ETH in Zürich, had developed a remarkable synchronization algorithm for LDPC codes, based on Wiberg's factor graphs. Justin was interested in comparing his synchronization algorithm with ours, and became a visiting researcher in our lab for the first two months of 2004. I fondly remember many hours spent in the department lunchroom, with Justin (painstakingly) explaining the intricacies of factor graphs to me. His discussions motivated me to re-write my source code for decoding turbo and LDPC codes using the factor-graph framework.
Error-correcting codes are a way to protect a binary information sequence against adverse channel effects by adding a certain amount of redundancy. This is known as encoding. The receiver can then try to recover the original binary information sequence, using a decoder. The field of coding theory deals with developing and analyzing codes and decoding algorithms. Although coding theory is fairly abstract and generally involves a great deal of math, our knowledge of factor graphs will allow us to derive decoding algorithms without delving too deep. As we will see, using factor graphs, decoding becomes a fairly straightforward matter. In contrast to conventional decoding algorithms, our notation will be the same for all types of codes, which makes it easier to understand and interpret the algorithms.
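A rate-1/3 repetition code is about the simplest possible illustration of "adding redundancy and decoding": the sketch below encodes a few bits, flips some coded bits on a binary symmetric channel, and recovers the information by majority vote. It is far weaker than the codes discussed in this chapter and is offered only to fix the encode/decode vocabulary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy redundancy: a rate-1/3 repetition code.
def encode(bits, reps=3):
    return np.repeat(bits, reps)

def decode(received, reps=3):
    # Majority vote over each group of repeated bits.
    groups = received.reshape(-1, reps)
    return (groups.sum(axis=1) > reps // 2).astype(int)

info = rng.integers(0, 2, size=8)
codeword = encode(info)
# Binary symmetric channel: flip each coded bit with probability 0.1.
flips = rng.random(codeword.shape) < 0.1
received = codeword ^ flips
print(info)
print(decode(received))
```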
In this chapter, we will deal with four types of error-correcting codes: repeat–accumulate (RA) codes, low-density parity-check (LDPC) codes, convolutional codes, and turbo codes. Repeat–accumulate codes were introduced in 1998 as a type of toy code. Later they turned out to have a great deal of practical importance [81]. We then move on to the LDPC codes, which were invented by Gallager in 1963 [50], and reintroduced in the early 1990s by MacKay [82]. Both types of codes can easily be cast into factor graphs; these factor graphs turn out to have cycles, leading to iterative decoding algorithms. Convolutional codes, on the other hand, are based on state-space models and thus lead to cycle-free factor graphs [83].
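To give a flavour of iterative decoding on a sparse parity-check code, the sketch below applies a classical bit-flipping procedure to a small hand-made parity-check matrix. This is not the factor-graph message-passing decoder derived in this chapter; the matrix and the single-error scenario are illustrative assumptions.

```python
import numpy as np

# A small, hand-made sparse parity-check matrix H (6 coded bits, 3 checks).
# Real LDPC matrices are far larger; this is only an illustration.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

def bit_flip_decode(word, H, max_iters=10):
    """Classical bit-flipping decoding: repeatedly flip the bit involved
    in the largest number of unsatisfied parity checks."""
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                       # all checks satisfied
        unsatisfied = syndrome @ H            # per-bit count of failing checks
        word[np.argmax(unsatisfied)] ^= 1
    return word

codeword = np.zeros(6, dtype=int)             # the all-zero word is always valid
received = codeword.copy()
received[2] ^= 1                              # one channel-induced bit error
print(bit_flip_decode(received, H))           # recovers the all-zero codeword
```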