This chapter establishes the foundation for network machine learning. We begin with network fundamentals: adjacency matrices, edge directionality, node loops, and edge weights. We then explore node-specific properties such as degree and path length, followed by network-wide metrics including density, clustering coefficients, and average path lengths. The chapter progresses to advanced matrix representations, notably degree matrices and various Laplacian forms, which are crucial for spectral analysis methods. We examine subnetworks and connected components, tools for focusing on relevant network structures. The latter half of the chapter delves into preprocessing techniques. We cover node pruning methods to manage outliers and low-degree nodes. Edge regularization techniques, including thresholding and sparsification, address issues in weighted and dense networks. Finally, we explore edge-weight rescaling methods such as z-score standardization and ranking-based approaches. Throughout, we emphasize practical applications, illustrating concepts with examples and code snippets. These preprocessing steps are vital for addressing noise, sparsity, and computational challenges in network data. By mastering these concepts and techniques, readers will be well-equipped to prepare network data for sophisticated machine learning tasks, setting the stage for the advanced methods presented in subsequent chapters.
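The matrix representations summarized above (adjacency matrix, degree matrix, Laplacians, density) can be sketched in a few lines of numpy. This is an illustrative toy network of our own, not an example from the chapter:

```python
import numpy as np

# Undirected, loopless toy network on 4 nodes: adjacency matrix A,
# where A[i, j] = 1 when nodes i and j share an edge.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

# Node degrees are the row sums of A; the degree matrix D is diagonal.
d = A.sum(axis=1)
D = np.diag(d)

# Combinatorial Laplacian and normalized Laplacian I - D^{-1/2} A D^{-1/2},
# the forms used in spectral analysis methods.
L = D - A
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

# Network density: the fraction of possible edges actually present.
n = A.shape[0]
density = A.sum() / (n * (n - 1))
```

Note that each row of `L` sums to zero by construction, and the smallest eigenvalue of `L_norm` is 0 for any connected network.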
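The edge-regularization and rescaling steps mentioned above can likewise be sketched. The particular choices here (a median threshold, z-scoring only the surviving weights) are our own illustrative assumptions, not the chapter's prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weighted network: a symmetric weight matrix with
# heavy-tailed noisy entries and a zero diagonal.
n = 20
W = rng.exponential(scale=1.0, size=(n, n))
W = np.triu(W, k=1)
W = W + W.T

# Edge thresholding: keep only edges whose weight exceeds the median
# positive weight, zeroing the rest -- one simple sparsification scheme.
tau = np.median(W[W > 0])
W_thresh = np.where(W > tau, W, 0.0)

# z-score rescaling of the surviving weights: subtract the mean and
# divide by the standard deviation, computed over nonzero entries only.
mask = W_thresh > 0
mu, sigma = W_thresh[mask].mean(), W_thresh[mask].std()
W_z = np.zeros_like(W_thresh)
W_z[mask] = (W_thresh[mask] - mu) / sigma
```

After rescaling, the nonzero weights have mean 0 and standard deviation 1, and the matrix remains symmetric because the threshold and the z-score are applied elementwise.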
A graph $G=(V,E)$ is called an expander if every vertex subset $U$ of size up to $|V|/2$ has an external neighborhood whose size is comparable to $|U|$. Expanders have been a subject of intensive research for more than three decades and have become one of the central notions of modern graph theory. We first discuss the above definition of an expander and its alternatives. Then we present examples of families of expanding graphs and state basic properties of expanders. Next, we introduce a way to argue that a given graph contains a large expanding subgraph. Finally, we study properties of expanding graphs, such as the existence of small separators, of cycles (including cycle lengths), and the embedding of large minors.
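The definition above can be made concrete by brute force on small graphs: compute, over all nonempty subsets $U$ with $|U| \le |V|/2$, the worst ratio of external-neighborhood size to $|U|$. This exhaustive sketch (exponential in $|V|$, so only for tiny examples; the function name is ours) follows the definition directly:

```python
from itertools import combinations

import numpy as np


def vertex_expansion(A):
    """Brute-force vertex expansion of a small graph with adjacency
    matrix A: the minimum over nonempty subsets U, |U| <= |V|/2, of
    |N(U) \\ U| / |U|, where N(U) is the set of neighbors of U."""
    n = A.shape[0]
    best = np.inf
    for k in range(1, n // 2 + 1):
        for subset in combinations(range(n), k):
            U = set(subset)
            # External neighborhood: neighbors of U that lie outside U.
            ext = {v for u in U for v in np.flatnonzero(A[u]) if v not in U}
            best = min(best, len(ext) / len(U))
    return best


# Complete graph K4: every 2-subset sees both remaining vertices,
# so the expansion is 1.0.
K4 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)

# Path on 4 vertices: the subset {0, 1} has external neighborhood {2},
# so the expansion drops to 0.5.
P4 = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])
```

Complete graphs are the extreme case of expansion; sparse graphs such as paths expand poorly, which is exactly what makes bounded-degree expander families interesting.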