In this chapter, we address so-called “error generic” data poisoning (DP) attacks on classifiers. Unlike backdoor attacks, DP attacks aim to degrade overall classification accuracy. (Previous chapters were concerned with “error specific” DP attacks involving specific backdoor patterns and source and target classes for classification applications.) To effectively mislead classifier training using relatively few poisoned samples, an attacker introduces “feature collision” into the training set by, for example, flipping the class labels of clean samples. Another possibility is to poison with synthetic data not typical of any class. The information extracted from clean and poisoned samples labeled with the same class (as well as from clean samples that originate from the same class as the mislabeled poisoned samples) is largely inconsistent, which prevents the learning of an accurate class decision boundary. We develop a framework based on the Bayesian information criterion (BIC) for both detection and cleansing of such data poisoning. This method is compared with existing DP defenses on both image and document classification domains.
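The label-flipping mechanism described above can be illustrated on toy data. The sketch below is not the chapter's experimental setup: the two-blob generator, the 1-nearest-neighbour rule, and the 40% flip rate are illustrative assumptions chosen to make the feature-collision effect visible.

```python
import math
import random

random.seed(0)

def make_blobs(n_per_class=100):
    """Hypothetical 2-D data: two well-separated Gaussian blobs."""
    pts = []
    for label, cx in [(0, -2.0), (1, 2.0)]:
        pts += [((random.gauss(cx, 0.5), random.gauss(0.0, 0.5)), label)
                for _ in range(n_per_class)]
    return pts

def nn_accuracy(train, test):
    """Accuracy of a 1-nearest-neighbour classifier fit on `train`."""
    correct = 0
    for point, label in test:
        _, pred = min(train, key=lambda s: math.dist(s[0], point))
        correct += (pred == label)
    return correct / len(test)

train, test = make_blobs(), make_blobs()
clean_acc = nn_accuracy(train, test)

# Error-generic poisoning via label flipping: 40% of training labels are
# inverted, so clean and mislabeled samples assigned to the same class
# "collide" in feature space and the learned decision rule degrades.
flip_rate = 0.4
poisoned = [(p, 1 - y) if random.random() < flip_rate else (p, y)
            for p, y in train]
poisoned_acc = nn_accuracy(poisoned, test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

Note that only the labels are perturbed, not the features, yet overall test accuracy drops sharply; this is the "error generic" degradation the chapter's BIC-based defense is designed to detect and cleanse.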
This chapter introduces the interdisciplinary field of relationship science. It describes the human drive for belonging, including the biological underpinning of sociality and the harmful consequences of social isolation and social exclusion. It also defines romantic relationships and the characteristics that differentiate romantic relationships from other close relationships (high interdependence, high intimate knowledge, high commitment). In addition to emphasizing the core commonalities across romantic relationships, this chapter explores the many ways in which romantic relationships are diverse (e.g., structure, exclusivity, composition, duration, motives). Finally, this chapter highlights the critical importance of close relationships for individual health and well-being, as well as for society more broadly.
This chapter examines how early relationships become established relationships. It reviews varying relationship trajectories (e.g., ascent, peak, and descent) and then describes the three key components of the relationship that develop over time: love, intimacy, and commitment. First, the chapter defines and differentiates the various forms of love (e.g., passionate love, companionate love, compassionate love) and reviews how love develops and changes over time. Second, this chapter explores how interpersonal intimacy develops through repeated instances of self-disclosure and perceived partner responsiveness and how developing intimate relationships change the self. Third, this chapter reviews how people make and communicate their commitment decisions, as well as how social network members shape commitment. Finally, it provides an overview of common major transitions (cohabitation, marriage, parenthood) and some key challenges therein.
This chapter describes how relationship scientists conduct research to answer questions about relationships. It explains all aspects of the research process, including how hypotheses are derived from theory, which study designs (e.g., experiments, cross-sectional studies, experience sampling) best suit specific research questions, how scientists can manipulate variables or measure variables with self-report, implicit, observational, or physiological measures, and what scientists consider when recruiting a sample to participate in their studies. This chapter also discusses how researchers approach questions about boundary conditions (when general trends do not apply) and mechanisms (the processes underlying their findings) and describes best practices for conducting ethical and reproducible research. Finally, this chapter includes a guide for how to read and evaluate empirical research articles.
In this chapter, we introduce the design of statistical anomaly detectors. We discuss the types of data encountered in practice: continuous, discrete categorical, and discrete ordinal features. We then discuss how to model such data, in particular to form a null model for statistical anomaly detection, with emphasis on mixture densities. The EM algorithm is developed for estimating the parameters of a mixture density; K-means is obtained as a specialization of EM for Gaussian mixtures. The Bayesian information criterion (BIC), widely used for estimating the number of components in a mixture density, is then discussed and developed. We also discuss parsimonious mixtures, which economize on the number of model parameters in a mixture density (by sharing parameters across components). These models allow BIC to obtain accurate model order estimates even when the feature dimensionality is huge and the number of data samples is small (a case where BIC applied to traditional mixtures grossly underestimates the model order). Key performance measures are discussed, including the true positive rate, the false positive rate, and the receiver operating characteristic (ROC) curve with its associated area under the curve (ROC AUC). The density models are used in the attack detection defenses of Chapters 4 and 13; the detection performance measures are used throughout the book.
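A minimal sketch of the EM-plus-BIC pipeline described above, for a one-dimensional two-component Gaussian mixture. The data generator, restart count, variance floor, and parameter count (3k − 1 for a 1-D mixture) are illustrative assumptions, not the book's implementation.

```python
import math
import random

random.seed(1)

# Hypothetical 1-D data drawn from a two-component Gaussian mixture.
data = ([random.gauss(-3.0, 1.0) for _ in range(150)] +
        [random.gauss(3.0, 1.0) for _ in range(150)])

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_gmm(data, k, iters=60):
    """EM for a 1-D Gaussian mixture: (weights, means, variances, log-likelihood)."""
    mus = random.sample(data, k)          # initialize means at random data points
    variances = [1.0] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w * normal_pdf(x, m, v)
                 for w, m, v in zip(weights, mus, variances)]
            s = max(sum(p), 1e-300)       # guard against underflow
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(sum(r[j] * (x - mus[j]) ** 2
                                   for r, x in zip(resp, data)) / nj, 1e-6)
    ll = sum(math.log(sum(w * normal_pdf(x, m, v)
                          for w, m, v in zip(weights, mus, variances)))
             for x in data)
    return weights, mus, variances, ll

def fit_best(data, k, restarts=3):
    """Keep the best of several random restarts (EM only finds local optima)."""
    return max((em_gmm(data, k) for _ in range(restarts)), key=lambda fit: fit[3])

def bic(ll, k, n):
    """BIC = p*ln(n) - 2*ll, with p = 3k - 1 free parameters in 1-D."""
    return (3 * k - 1) * math.log(n) - 2.0 * ll

fits = {k: fit_best(data, k) for k in (1, 2, 3)}
scores = {k: bic(fit[3], k, len(data)) for k, fit in fits.items()}
best_k = min(scores, key=scores.get)
print("BIC scores:", {k: round(s, 1) for k, s in scores.items()})
print("selected model order:", best_k)
```

The ln(n) penalty term is what lets BIC reject the over-parameterized k = 3 fit despite its (slightly) higher likelihood; the parsimonious mixtures discussed above attack the same problem from the other side, by shrinking the parameter count p itself.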
This chapter looks at the mechanics of Agreement, and the role that Agreement plays in Case-marking and A-Movement. It starts (Module 3.1) by characterising Agreement, Case-marking and A-Movement as involving a probe-goal relation, and outlines how agreement and case features are valued in the course of a derivation. Module 3.2 goes on to look at how agreement works in expletive it clauses, and contrasts this with multiple agreement in expletive there clauses; it also examines conditions on the use of expletives. Module 3.3 then turns to explore the potential role of abstract agreement in Passive, Raising, Exceptional Case-marking, and Control infinitives. Next, Module 3.4 investigates non-standard structures involving agreement across a finite clause boundary (e.g. He seems is very active), and Copy Raising structures (e.g. He looks like he’s winning). The chapter concludes with a Summary (Module 3.5), Bibliography (Module 3.6), and Workbook (Module 3.7), with some Workbook exercise examples designed for self-study, and others for assignments/seminar discussion.
In this chapter we consider attacks that do not alter the machine learning model but instead “fool” the classifier (and any supplementary defenses, including human monitoring) into making erroneous decisions. These are known as test-time evasion attacks (TTEs). Beyond representing a security threat, TTEs reveal the non-robustness of existing deep learning systems: one can alter the class decision made by a DNN with small changes to the input, changes that would not alter the (robust) decision-making of a human being performing, for example, visual pattern recognition. TTEs are thus a foil to claims that deep learning currently achieves truly robust pattern recognition, let alone that it is close to achieving true artificial intelligence, and a spur to the machine learning community to devise more robust pattern recognition systems. We survey various TTE attacks, including FGSM, JSMA, and CW. We then survey several types of defenses, including anomaly detection as well as robust classifier training strategies. Experiments are included for anomaly detection defenses based on classical statistical anomaly detection, as well as on a class-conditional generative adversarial network, which effectively learns to discriminate “normal” from adversarial samples without any supervision (no supervising attack examples).
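The FGSM attack mentioned above is easiest to see on a model whose input gradient is available in closed form. The linear logistic classifier below, with made-up weights and input, is a hypothetical stand-in for a DNN; FGSM itself generalizes directly, with the gradient obtained by backpropagation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical linear logistic classifier standing in for a DNN:
# predict class 1 iff sigmoid(w.x + b) > 0.5.
w = [2.0, -1.5, 0.5]
b = 0.1

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method.

    For cross-entropy loss on a logistic model, the gradient of the loss
    w.r.t. the input is (p - y) * w. The attack perturbs each coordinate
    by eps in the sign of that gradient, which maximizes the loss
    increase per unit of L-infinity perturbation."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [0.4, 0.2, -0.3]    # clean input with true label y = 1
y = 1
x_adv = fgsm(x, y, eps=0.5)
print("clean score:      ", round(predict(x), 3))      # > 0.5: classified correctly
print("adversarial score:", round(predict(x_adv), 3))  # < 0.5: classifier fooled
```

Each coordinate of the adversarial input differs from the clean one by at most eps, which is the sense in which the perturbation is "small" yet flips the class decision; defenses of the anomaly detection type try to flag such inputs rather than harden the classifier.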
This chapter examines the syntax of the subperiphery (between the verb phrase and the periphery). It begins (Module 6.1) by arguing that subjects are housed in a separate SUBJP/subject projection which is positioned above the TP/tense projection housing finite auxiliaries. Module 6.2 goes on to argue that subperipheral adverbials are not adjuncts (as in earlier work), but rather specifiers of dedicated functional heads (e.g. probably is the specifier of an epistemic modal head). Module 6.3 then looks at word order variation in the position of adverbs with respect to subjects and auxiliaries, noting that this can arise when subjects/auxiliaries move around adverbs. Module 6.4 subsequently argues that subperipheral prepositional phrases and floating quantifiers are likewise housed in functional projections of their own, leading to the broader conclusion that all peripheral and subperipheral constituents are housed in dedicated functional projections. The chapter concludes with a Summary (Module 6.5), Bibliography (Module 6.6), and Workbook (Module 6.7), with some Workbook exercise examples designed for self-study, and others for assignments/seminar discussion.