A coherent image-formation system is degraded if the coherence of the received waveform is imperfect. This reduction in coherence is due to an anomalous phase angle in the received signal, which is referred to as phase error. Phase errors can be in either the time domain or the frequency domain, and they may be described by either a deterministic model or a random model. Random phase errors in the time domain arise because the phase varies randomly with time. Random phase errors in the frequency domain arise because the phase of the Fourier transform varies randomly with frequency. We will consider both unknown deterministic phase errors and random phase errors in both the time domain and the frequency domain.
Random phase errors in the time domain appear as complex time-varying exponentials multiplying the received complex baseband signal and are called phase noise. Phase noise may set a limit on the maximum waveform duration that can be processed coherently. Random phase errors in the frequency domain appear as complex exponentials multiplying the Fourier transform of the received complex baseband signal and are called phase distortion. Phase distortion may set a limit on the maximum bandwidth that a single signal can occupy. We shall primarily study phase noise in this chapter. Some of the lessons learned from studying phase noise can be used to understand phase distortion.
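The effect of phase noise on coherent processing can be made concrete with a small numerical sketch (an illustrative numpy example, not drawn from the text; the tone, the Gaussian phase-error model, and the function name are assumptions). A clean complex baseband tone is multiplied by a random phase-error exponential, and the surviving coherent integration gain is measured:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
s = np.exp(2j * np.pi * 0.1 * np.arange(n))   # clean complex baseband tone

def coherent_gain(phase_std):
    """Coherently integrate the tone after multiplying by a random
    phase-noise exponential; 1.0 means perfect coherence."""
    phase = rng.normal(0.0, phase_std, n)      # random phase error (radians)
    noisy = s * np.exp(1j * phase)             # phase noise multiplies the signal
    # Matched (coherent) integration against the clean reference:
    return abs(np.vdot(s, noisy)) / n

print(coherent_gain(0.0))   # 1.0: no phase error, full coherent gain
print(coherent_gain(1.0))   # noticeably below 1: coherence is lost
```

Larger phase variance shrinks the coherent sum, which is one way to see why phase noise bounds the waveform duration that can be processed coherently.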
Shivendra S. Panwar, Polytechnic University, New York,Shiwen Mao, Polytechnic University, New York,Jeong-dong Ryoo, Electronics and Telecommunications Research Unit, South Korea,Yihan Li, Polytechnic University, New York
Some types of large data sets have a hierarchical substructure that leads to new kinds of surveillance algorithms for tasks such as data combination and tracking. The term data combination typically refers to a task in which several estimates based on partial data are combined. Perhaps several snapshots of the same scene or object are available, and multiple measurements or estimates are to be combined or averaged in some way. The topics of this chapter refer to partially processed data, and the methods may be used subsequent to correlation or tomographic processing.
Various sets of data may be combined either before detection or after detection. The combination of closely associated data prior to detection is called integration, and can be either coherent integration or noncoherent integration. The combination of data from multiple sensors, usually in the form of parameters that have been estimated from the data, is called data fusion. This term usually conveys an emphasis that there is a diversity of types of data. Sometimes, only a tentative detection is made before tracking, while a hard detection is deferred until after tracking. In some applications, this is called “track before detect.” In this chapter, we shall study the interplay between the functions of detection, data fusion, and tracking, which leads us to think of sensor processing on a much longer time scale than in previous chapters.
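The distinction between coherent and noncoherent integration can be illustrated with a small simulation (a hypothetical sketch; the signal model and numbers are assumptions, not taken from the text). A weak, identical complex echo repeats over many pulses in noise; coherent integration averages the complex samples, while noncoherent integration averages only their magnitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pulses, echo_amp = 100, 0.5
echo = echo_amp * np.exp(1j * 0.3)    # identical complex echo in every pulse
noise = (rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses)) / np.sqrt(2)
pulses = echo + noise                 # unit-variance complex noise per pulse

# Coherent integration: average the complex samples (phases must align),
# so the noise averages down while the echo does not.
coherent = abs(pulses.mean())

# Noncoherent integration: average the magnitudes; phase information is
# discarded and the noise floor biases the result upward.
noncoherent = abs(pulses).mean()

print(coherent, noncoherent)
```

The coherent average lands near the true echo amplitude of 0.5, while the noncoherent average sits well above it because the noise magnitude never averages away.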
A conventional radar consists of a transmitter that illuminates a region of interest, a receiver that collects the signal reflected by objects in that region, and a processor that extracts information of interest from the received signal. A radar processor consists of a preprocessor, a detection and estimation function, and a postprocessor. In the preprocessor, the signal is extracted from the noise, and the entire signal reflected from the same resolution cell is integrated into a single statistic. An imaging radar uses the output of the preprocessor to form an image of the observed scene for display. A detection radar makes further inferences about the objects in the scene. The detection and estimation function is where individual target elements are recognized, and parameters associated with these target elements are estimated. The postprocessor refines postdetection data by establishing track histories on detected targets.
This chapter is concerned with the preprocessor, which is an essentially linear stage of processing at the front end of the processing chain. The radar preprocessor usually consists of the computation of a sample cross-ambiguity function in some form. Sometimes the computation is in such a highly approximated form that it will not be thought of as the computation of a cross-ambiguity function. The output of the preprocessor can be described in a very compact way, provided that several easily satisfied approximations hold. The output is the two-dimensional convolution of the reflectivity density of the radar scene and the ambiguity function of the transmitted waveform.
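As a hypothetical illustration of the preprocessor's central computation, the sketch below evaluates a discretized sample cross-ambiguity function directly from its definition and locates its peak for a delayed, Doppler-shifted chirp echo. The chirp parameters, grids, and function name are assumptions made for the example:

```python
import numpy as np

def cross_ambiguity(r, s, delays, dopplers, fs):
    """chi[i, k] = sum_t r[t] * conj(s[t - delays[i]]) * exp(-j*2*pi*dopplers[k]*t/fs),
    with delays in samples and dopplers in Hz; a direct, unoptimized evaluation."""
    t = np.arange(len(r)) / fs
    chi = np.empty((len(delays), len(dopplers)), dtype=complex)
    for i, d in enumerate(delays):
        sd = np.roll(s, d)                       # circularly delayed reference
        for k, nu in enumerate(dopplers):
            chi[i, k] = np.sum(r * np.conj(sd) * np.exp(-2j * np.pi * nu * t))
    return chi

fs, n = 1000.0, 512
t = np.arange(n) / fs
s = np.exp(1j * np.pi * (fs / 4) / (n / fs) * t**2)   # linear-FM (chirp) pulse
true_delay, true_doppler = 20, 50.0                   # 20 samples, 50 Hz
r = np.roll(s, true_delay) * np.exp(2j * np.pi * true_doppler * t)

delays = np.arange(0, 40)
dopplers = np.arange(0.0, 100.0, 10.0)
chi = np.abs(cross_ambiguity(r, s, delays, dopplers, fs))
i, k = np.unravel_index(chi.argmax(), chi.shape)
print(delays[i], dopplers[k])   # peak lands at the true (20, 50.0)
```

Practical systems replace the double loop with FFT-based matched filtering per Doppler bin, but the surface computed is the same.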
The dream behind the Web is of a common information space in which we communicate by sharing information.
Tim Berners-Lee
Objectives
The HyperText Transfer Protocol and the Apache web server.
The Common Gateway Interface.
The Dynamic Host Configuration Protocol.
The Network Time Protocol.
The Network Address Translator and the Port Address Translator.
An introduction to socket programming.
The HyperText Transfer Protocol
The HyperText Transfer Protocol and the Web
In the early days of the Internet, email, FTP, and remote login were the most popular applications. The first World Wide Web (WWW) browser was written by Tim Berners-Lee in 1990. Since then, WWW has become the second “Killer App” after email. Its popularity resulted in the exponential growth of the Internet.
In WWW, information is typically provided as HyperText Markup Language (HTML) files (called web pages). WWW resources are specified by Uniform Resource Locators (URLs), each consisting of a protocol name (e.g., http, rtp, rtsp), a “://”, a server domain name or server IP address, and a path to a resource (an HTML file or a CGI script (see Section 8.2.2)). The HyperText Transfer Protocol (HTTP) is an application layer protocol for distributing information in the WWW. In common with many other Internet applications, HTTP is based on the client–server architecture. An HTTP server, or web server, uses the well-known port number 80, while an HTTP client is also called a web browser.
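A hand-written HTTP exchange can be sketched with Python's standard library (an illustrative example, not from the text): a tiny local server stands in for a web server, which in practice listens on the well-known port 80, and the client sends a GET request line and headers over a raw TCP socket.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local stand-in for a web server (a real server uses port 80;
# an ephemeral port lets the example run without privileges).
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

# The client side: an HTTP GET request written by hand over a TCP socket.
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(b"GET /index.html HTTP/1.1\r\n"
              b"Host: 127.0.0.1\r\n"
              b"Connection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
server.server_close()

status_line = reply.split(b"\r\n", 1)[0]
print(status_line.decode())   # status line reporting 200 OK
```

The reply begins with a status line, then headers, a blank line, and the HTML body, which is exactly the structure a browser parses.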
The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.
J. H. Saltzer, D. P. Reed and D. D. Clark
Objectives
Study sock as a traffic generator, in terms of its features and command line options.
Study the User Datagram Protocol.
IP fragmentation.
MTU and path MTU discovery.
UDP applications, using the Trivial File Transfer Protocol as an example.
Compare UDP with TCP, using TFTP and the File Transfer Protocol.
The User Datagram Protocol
Since the Internet protocol suite is often referred to as TCP/IP, UDP, it may seem, suffers from being considered the “less important” transport protocol. This perception is changing rapidly as realtime services, such as Voice over IP (VoIP), which use UDP, become an important part of the Internet landscape. This emerging UDP application will be further explored in Chapter 7.
UDP provides a means of multiplexing and demultiplexing for user processes, using UDP port numbers. It extends the host-to-host delivery service of IP to the application-to-application level. There is no other transport control mechanism provided by UDP, except a checksum that protects the UDP header (see Fig. 0.14), the UDP data, and several IP header fields.
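The multiplexing role of UDP port numbers can be seen in a minimal Python sketch (illustrative; the message and loopback addresses are assumptions): each socket is identified by its bound port, and recvfrom reports the sender's address and port.

```python
import socket

# Two UDP endpoints on the loopback interface; each bound port number is
# what UDP uses to multiplex and demultiplex between processes.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))                  # kernel picks a free port
recv_sock.settimeout(5.0)
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello, UDP", ("127.0.0.1", port))

# One datagram in, one datagram out: no connection, no retransmission.
data, (addr, sender_port) = recv_sock.recvfrom(2048)
print(data, addr, sender_port)

recv_sock.close()
send_sock.close()
```

Note that nothing beyond the checksum protects this exchange: had the datagram been lost, recvfrom would simply time out, which is why applications needing reliability either use TCP or build their own recovery on top of UDP.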
We are now in a transition phase, just a few years shy of when IP will be the universal platform for multimedia services.
H. Schulzrinne
Objectives
Multicast addressing.
Multicast group management.
Multicast routing: configuring a multicast router.
Realtime video streaming using the Java Media Framework.
Protocols supporting realtime streaming: RTP/RTCP and RTSP.
Analyzing captured RTP/RTCP packets using Ethereal.
IP multicast
IP provides three types of services: unicast, multicast, and broadcast. Unicast is a point-to-point type of service with one sender and one receiver. Multicast is a one-to-many or many-to-many type of service, which delivers packets to multiple receivers. Consider a multicast group consisting of a number of participants: any packet sent to the group will be received by all of the participants. In broadcast, IP datagrams are sent to a broadcast IP address and are received by all of the hosts.
Figure 7.1 illustrates the differences between multicast and unicast. As shown in Fig. 7.1(a), if a node A wants to send a packet to nodes B, C, and D using unicast service, it sends three copies of the same packet, each with a different destination IP address. Then, each copy of the packet will follow a possibly different path from the other copies. To provide a teleconferencing-type service for a group of N nodes, there need to be N(N - 1)/2 point-to-point paths to provide a full connection.
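The scaling argument can be stated in a few lines of Python (a trivial illustrative sketch): with unicast, the sender transmits one copy per receiver, and a fully connected N-node conference needs N(N − 1)/2 point-to-point paths, whereas a single multicast packet reaches the whole group.

```python
def unicast_copies(n_receivers):
    """Without multicast, the sender transmits one copy per receiver."""
    return n_receivers

def full_mesh_paths(n_nodes):
    """Point-to-point paths for a fully connected n-node conference: n(n-1)/2."""
    return n_nodes * (n_nodes - 1) // 2

print(unicast_copies(3))     # node A reaching B, C, and D: 3 copies
print(full_mesh_paths(4))    # full mesh over 4 nodes: 6 paths
print(full_mesh_paths(100))  # quadratic growth: 4950 paths for 100 nodes
```

The quadratic growth of the full mesh is what makes multicast attractive for conferencing-style services.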
Metcalfe's Law: “The value of a network grows as the square of the number of its users.”
Robert Metcalfe
Objectives
Network interfaces and interface configuration.
Network load and statistics.
The Address Resolution Protocol and its operations.
ICMP messages and Ping.
Concept of subnetting.
Duplicate IP addresses and incorrect subnet masks.
Local area networks
Generally there are two types of networks: point-to-point networks and broadcast networks. A point-to-point network consists of two end hosts connected by a link, whereas in a broadcast network a number of stations share a common transmission medium. Usually, a point-to-point network is used for long-distance connections, e.g., dialup connections and SONET/SDH links. Local area networks (LANs) are almost all broadcast networks, e.g., Ethernet or wireless LANs.
Point-to-Point networks
The Point-to-Point Protocol (PPP) is a data link protocol for point-to-point links. The main purpose of PPP is encapsulation and transmission of IP datagrams, or other network layer protocol data, over a serial link. Currently, most dial-up Internet access services are provided using PPP.
PPP consists of two types of protocols. The Link Control Protocol (LCP) of PPP is responsible for establishing, configuring, and negotiating the data link connection, while for each network layer protocol supported by PPP there is a Network Control Protocol (NCP). For example, the IP Control Protocol (IPCP) is used for transmitting IP datagrams over a PPP link. Once the link is successfully established, the network layer data, i.e., IP datagrams, are encapsulated in PPP frames and transmitted over the serial link.
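As a simplified illustration of how PPP frames data on an asynchronous serial line, the sketch below implements the RFC 1662 byte-stuffing: frames are delimited by the flag byte 0x7E, and any flag or escape byte inside the payload is replaced by the escape byte 0x7D followed by the original byte XORed with 0x20. Real PPP frames also carry address, control, protocol, and FCS fields, which are omitted here.

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_stuff(payload: bytes) -> bytes:
    """Delimit a payload with flag bytes, escaping any embedded
    flag/escape byte as (0x7D, byte XOR 0x20), per RFC 1662."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def ppp_unstuff(frame: bytes) -> bytes:
    """Reverse the stuffing: drop the delimiting flags and undo escapes."""
    out = bytearray()
    esc = False
    for b in frame[1:-1]:
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x45, 0x7E, 0x00, 0x7D])      # payload containing reserved bytes
frame = ppp_stuff(data)
assert ppp_unstuff(frame) == data            # round trip recovers the payload
print(frame.hex())
```

The escaping guarantees that the flag byte appears on the wire only as a frame delimiter, so the receiver can always find frame boundaries.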
The Fourier transform of a two-dimensional function — or of an n-dimensional function — can be defined by analogy with the Fourier transform of a one-dimensional function. A multidimensional Fourier transform is a mathematical concept. Because many engineering applications of the two-dimensional Fourier transform deal with two-dimensional images, it is common practice, and ours, to refer to the variables of a two-dimensional function as “spatial coordinates” and to the variables of its Fourier transform as “spatial frequencies.”
The study of the two-dimensional Fourier transform closely follows the study of the one-dimensional Fourier transform. As the study develops, however, the two-dimensional Fourier transform displays a richness beyond that of the one-dimensional Fourier transform.
The two-dimensional Fourier transform
A function, s(x, y), possibly complex, of two variables x and y is called a two-dimensional signal or a two-dimensional function (or, more correctly, a function of a two-dimensional variable). A common example is an image, such as a photographic image, wherein the variables x and y are the coordinates of the image, and s(x, y) is the amplitude. In a photographic image, the amplitude is a nonnegative real number. In other examples, the function s(x, y) may also take on negative or even complex values. Figure 3.1 shows a graphical representation of a two-dimensional complex signal in terms of the real and imaginary parts. Figure 3.2 shows the magnitude of the function depicted in two different ways: one as a three-dimensional graph, and one as a plan view.
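The two-dimensional transform is easy to experiment with numerically; the sketch below (an illustrative numpy example, with an assumed rectangle image) takes the 2-D DFT of a small image and checks the familiar property, carried over from one dimension, that the zero-spatial-frequency coefficient equals the sum of the samples.

```python
import numpy as np

# A small two-dimensional "image": a centered rectangle of ones.
s = np.zeros((64, 64))
s[28:36, 24:40] = 1.0

S = np.fft.fft2(s)          # two-dimensional DFT of the image
S = np.fft.fftshift(S)      # move zero spatial frequency to the center

# The zero-frequency (DC) coefficient equals the sum of the image samples.
print(S[32, 32].real, s.sum())
```

The magnitude of S is a two-dimensional sinc-like pattern, the transform of a rectangle, and plotting it either as a surface or as a plan view mirrors the two display styles described above.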
As processing technology continues its rapid growth, it occasionally causes us to take a new point of view toward many long-established technological disciplines. It is now possible to record precisely such signals as ultrasonic or X-ray signals or electromagnetic signals in the radio and radar bands and, using advanced digital or optical processors, to process these records to extract information deeply buried within the signal. Such processing requires the development of algorithms of great precision and sophistication. Until recently such algorithms were often incompatible with most processing technologies, and so there was no real impetus to develop a general, unified theory of these algorithms. Consequently, it was scarcely noticed that a general theory might be developed, although some special problems were well studied. Now the time is ripe for a general theory of these algorithms. These are called algorithms for remote image formation, or algorithms for remote surveillance. This topic of image formation is a branch of the broad field of informatics.
Systems for remote image formation and surveillance have developed independently in diverse fields over the years. They are often very much driven by the kind of hardware that is used to sense the raw data or to do the processing.
Our immediate environment is a magnificent tapestry of information-bearing signals of many kinds: some are man-made signals and some are not, reaching us from many directions. Some signals, such as optical signals and acoustic signals, are immediately compatible with our senses. Other signals, as in the radio and radar bands, or as in the infrared, ultraviolet, and X-ray bands, are not directly compatible with human senses. To perceive one of these signals, we require a special apparatus to convert it into observable form.
A great variety of man-made sensors now exist that collect signals and process those signals to form some kind of image, normally a visual image, of an object or a scene of objects. We refer to these as sensors for remote surveillance. There are many kinds of sensors collected under this heading, differing in the size of the observed scene, as from microscopes to radio telescopes; in complexity, as from the simple lens to synthetic-aperture radars; and in the current state of development, as from photography to holography and tomography. Each of these devices collects raw sensor data and processes that data into imagery that is useful to a user. This processing might be done by a digital computer, an optical computer, or an analog computer. The development and description of the processing algorithms will often require a sophisticated mathematical formulation.
Objectives
SNMP and MIBs, using NET-SNMP as an example, and using NET-SNMP utilities to query MIB objects.
Encryption, confidentiality, and authentication, including DES, RSA, MD5 and DSS.
Application layer security, using SSH and Kerberos as examples.
Transport layer security, including SSL and the secure Apache server.
Network layer security, IPsec and Virtual Private Networks.
Firewalls and IPTABLES.
Accounting, auditing, and intrusion detection.
Network management
The Simple Network Management Protocol
In addition to configuring network devices when they are initially deployed, network management requires performing many tasks to run the network efficiently and reliably. A network administrator may need to collect statistics from a device to see if it is working properly, or monitor the network traffic load on the routers to see if the load is appropriately distributed. When there is a network failure, the administrator may need to go through the information collected from the nearby devices to identify the cause. The Simple Network Management Protocol (SNMP) is an application layer protocol for exchanging management information between network devices. It is the de facto network management standard in the Internet.
Figure 9.1 illustrates a typical SNMP management scenario, consisting of an SNMP manager and multiple managed devices. A managed device, e.g., a host computer or a router, maintains a number of Management Information Bases (MIBs), which record local management-related information.
Image formation is the task of constructing an image of a scene when given a set of noisy data that is dependent on that scene. Possibly some prior information about the scene is also given. Image formation also includes the task of refining a prior image when given additional fragmentary or degraded information about that image. Then the task may be called image restoration.
In the most fundamental problem of image restoration, one is given an image of a two-dimensional scene, but the detail of the image, in some way, is limited. For example, the image of the scene may be blurred or poorly resolved in various directions. Sophisticated signal-processing techniques, called deconvolution or deblurring, can enhance such an image. When the blurring function is not known but must be inferred from the image itself, these techniques are called blind deconvolution or blind deblurring. Problems of deconvolution are well known to be prone to computational instability, and great care is needed in the implementation of deconvolution algorithms.
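A standard way to regularize deconvolution against this instability is Wiener-style inverse filtering; the sketch below (illustrative, with an assumed signal, kernel, and regularization constant, and reduced to one dimension for brevity) deblurs a sparse scene blurred by a known Gaussian kernel.

```python
import numpy as np

rng = np.random.default_rng(2)

# A sharp 1-D "scene" blurred by a known Gaussian kernel, plus a little noise.
x = np.zeros(128)
x[40], x[70] = 1.0, 0.5
h = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
h /= h.sum()
y = np.convolve(x, h, mode="same") + 0.001 * rng.normal(size=128)

# Wiener-style deconvolution in the frequency domain: a regularized inverse
# filter.  The eps term keeps the division stable where |H| is tiny, which
# is exactly where naive inverse filtering blows up.
H = np.fft.fft(np.fft.ifftshift(np.pad(h, (56, 55))))   # kernel centered, length 128
eps = 1e-3
X_hat = np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)
x_hat = np.real(np.fft.ifft(X_hat))

print(int(np.argmax(x_hat)))   # strongest recovered peak, near the true spike at 40
```

Setting eps to zero turns this into the naive inverse filter, and the reconstruction is quickly dominated by amplified noise, which is the computational instability noted above.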
Another task of image construction is estimating an image from partial knowledge of some of the properties of the image. An important instance of this task is estimating an image from the magnitude of its two-dimensional Fourier transform.