Edited by Filipe Calvão, Graduate Institute of International and Development Studies, Geneva; Matthieu Bolay, University of Applied Sciences and Arts Western Switzerland; Elizabeth Ferry, Brandeis University, Massachusetts
In exploring the politics of corporate versus small-scale mining of rubies and the ongoing struggles over a potentially enormous rare earth element (REE) deposit, this chapter hinges upon a critical analysis of transparency, opacity, and the politics of sovereignty in a country that is increasingly framed as a synecdoche for climate change in this century.
Recent decades have seen the growth of two emergent forms in the international aid industry: (1) transparency and accountability initiatives (TAIs) that endeavor to bring aid organizations in line with standard expectations around their operations; and (2) modest, small-scale do-it-yourself (DIY) aid projects that emerge from and depend on trusting relationships between benefactors and beneficiaries. This chapter considers the ambiguous coexistence of these forms, drawing from ethnographic research with a small-scale healthcare project in Madagascar to illustrate how DIY aid can be effective (for better or worse) despite operating outside the purview of TAIs.
The distribution of the raised variants of the Canadian English diphthongs is standardly analyzed as opaque allophony, with derivationally ordered processes of diphthong raising and of /t/-flapping. This short report provides an alternative positional contrast analysis in which the preflap raised diphthongs are licensed by a language-specific constraint. The basic distributional facts are captured with a weighted constraint grammar that lacks the intermediate level of representation of the standard analysis. The paper also provides a proposal for how the constraints are learned and shows how correct weights can be found with a simple, widely used learning algorithm.
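As a rough illustration of what such an account involves, the sketch below implements a toy weighted-constraint (Harmonic Grammar) evaluation together with a perceptron-style error-driven learner, one simple and widely used weight-learning algorithm. The constraint names, candidates and violation profiles are hypothetical stand-ins for exposition and are not taken from the report itself.

```python
# Minimal weighted-constraint (Harmonic Grammar) sketch with a
# perceptron-style error-driven learner. Constraints, candidates and
# violation profiles are hypothetical, not the report's actual analysis.

CONSTRAINTS = ["*RAISED", "RAISED_BEFORE_FLAPPED_T"]  # hypothetical names

def harmony(violations, weights):
    """Harmony of a candidate: the negated weighted sum of violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def winner(candidates, weights):
    """Return the (name, violations) pair with the highest harmony."""
    return max(candidates, key=lambda c: harmony(c[1], weights))

def perceptron_update(weights, observed, predicted, rate=0.1):
    """On an error, promote constraints violated by the learner's wrong
    winner and demote those violated by the observed form (weights kept
    non-negative)."""
    return [max(0.0, w + rate * (p - o))
            for w, o, p in zip(weights, observed, predicted)]

# Toy tableau for 'writer': raised diphthong before the flap from /t/.
candidates = [
    ("raised [ɹʌjɾɚ]",   [1, 0]),  # violates the general ban on raised [ʌj]
    ("unraised [ɹajɾɚ]", [0, 1]),  # violates the preflap licensing constraint
]
observed = candidates[0]           # the attested Canadian English form
weights = [2.0, 0.5]               # start with a weighting that gets it wrong

for _ in range(50):
    predicted = winner(candidates, weights)
    if predicted[0] != observed[0]:
        weights = perceptron_update(weights, observed[1], predicted[1])

print(winner(candidates, weights)[0])
for name, w in zip(CONSTRAINTS, weights):
    print(f"{name}: {w:.2f}")
```

After a handful of error-driven updates the licensing constraint outweighs the general markedness constraint, so the raised preflap candidate wins without any intermediate derivational level.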
DiCanio et al. (2020, this volume) argue that San Martín Itunyoso Triqui (IT) has a morphophonological exchange (also called ‘polarity’), where morphemes are realized by switching feature values: e.g. the bare root [anĩɦ] ‘get dirty’ is realized as [anĩ:] in the first person singular (1s), while the root [ani:] ‘stop’ is realized as [amĩɦ] in the 1s. In this Reply, I seek to clarify how the descriptive use of ‘exchange’ relates to and differs from its meaning in phonological theories. I also show that the issue of whether exchanges exist is highly theory-dependent. For SPE, Lexical Phonology and Morphology, and single-level parallelist OT with opacity mechanisms, the IT forms do not provide evidence for exchange mechanisms. In contrast, a version of OT that lacks opacity mechanisms probably cannot generate the IT forms without an exchange mechanism. Issues facing the analyst, such as how to prove that exchanges exist, and which apparent exchanges one should expect to observe, are also discussed.
Frank Jackson’s Knowledge Argument claims that Mary—a neuroscientist who knows all the physical facts about color perception but has never seen color—learns something new when she sees red, posing a challenge to physicalism. While physicalists deny that Mary acquires knowledge of new facts, they must still explain her apparent epistemic progress. I argue that the intuition that Mary gains new knowledge upon seeing red stems from the alleged opacity of propositional attitude ascriptions—the same phenomenon underlying Frege puzzles.
This article examines a case of phonological opacity in Uyghur resulting from an interaction between backness harmony and a vowel reduction process that converts harmonic vowels into transparent vowels. A large-scale corpus study shows that although opaque harmony with the underlying form of a reduced vowel is the dominant pattern, cases of surface-apparent harmony also occur. The rate of surface-apparent harmony varies across roots and is correlated with a number of factors, including root frequency. These data pose problems for standard accounts of opacity, which do not predict such variation. I propose an analysis where variation emerges from conflict between a paradigm uniformity constraint mandating that the harmonising behaviour of a root remains consistent, and surface phonotactic constraints. This is implemented in a parallel model by scaling constraint violations according to certainty in a root’s harmonic class. This aligns with past work suggesting some opacity is driven by paradigm uniformity.
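For readers unfamiliar with violation scaling, the following toy sketch shows one way the idea can be rendered in a MaxEnt-style parallel grammar: the paradigm-uniformity violation incurred by a surface-apparent candidate is multiplied by a certainty term for the root's harmonic class, so that less certain (e.g. less frequent) roots show more surface-apparent harmony. The constraints, weights and numbers are invented for illustration and are not the article's actual grammar or data.

```python
# Toy MaxEnt-style sketch of scaling constraint violations by certainty
# in a root's harmonic class. Constraints, weights and numbers are
# invented for illustration only.
import math

def maxent_probs(candidates, weights):
    """P(candidate) is proportional to exp(harmony), where harmony is the
    negated weighted sum of (possibly scaled) violations."""
    scores = [math.exp(-sum(w * v for w, v in zip(weights, viols)))
              for _, viols in candidates]
    total = sum(scores)
    return {name: s / total for (name, _), s in zip(candidates, scores)}

# Constraint order: [ParadigmUniformity(harmonic class), *Disharmony(surface)]
weights = [3.0, 2.0]

def suffix_choice(certainty):
    """Suffix outcomes for a root whose underlying back vowel surfaces as a
    reduced, transparent vowel. The uniformity violation of the
    surface-apparent candidate is scaled by certainty in the root's class."""
    candidates = [
        ("back suffix (opaque harmony)",    [0.0,             1.0]),
        ("front suffix (surface-apparent)", [1.0 * certainty, 0.0]),
    ]
    return maxent_probs(candidates, weights)

# High-certainty (e.g. frequent) root vs. low-certainty (rare) root:
print(suffix_choice(certainty=1.0))  # opaque harmony dominates
print(suffix_choice(certainty=0.2))  # more surface-apparent harmony
```

With full certainty the opaque pattern wins about three times out of four under these invented weights, while at low certainty the surface phonotactic takes over, mirroring the kind of lexically conditioned variation described above.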
The coda draws out the implications of modernist physiognomy for our contemporary moment. As we move from nineteenth-century physiognomy to modernist physiognomy, we encounter more minimalist descriptions of faces – facial sketches, outlines. We encounter faces reduced to a minimalist form. This form is taken up by contemporary facial recognition technologies. Across the scholarly literature on facial recognition technologies, there is a growing awareness of bias: technology is biased because training sets are biased. As Cathy O’Neil writes, “data embeds the dark past.” At the conclusion of this book, the coda frames its contribution to the call issued by Shoshana Zuboff in Surveillance Capitalism: “If the digital future is to be our home, then it is we who must make it so.” The book has aimed to give historicized substance to a fragment of this past: algorithmic data embeds the long history of the face, including elements of modernist physiognomy.
As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
This chapter focuses on the Black body in the narrative genre of passing literature, which combines issues of embodiment with those of visuality. It begins by arguing that, whereas recent literary culture habituates us to immediacy, access, and confession, the passing plot operates on different terms. At a moment when many artists and critics are arguing for the importance of opacity to relational frameworks, the passing plot comes into focus as a special testing ground for viewing racialized embodiment and ethical sociality in fresh ways. The chapter goes on to argue that just as the passing plot proves a rich container for considering the ethics of relation, dramatic literature offers a particularly productive platform for considering passing literature today. My case study for these claims is Branden Jacobs-Jenkins’s play An Octoroon (2014). A metatheatrical riff on a prominent nineteenth-century melodrama called The Octoroon (1859), the play avoids conveying some intimate truth about racial embodiment – the secret ostensibly kept by the passing figure – in order to offer new opportunities for Jacobs-Jenkins’s audience to become aware of their embodied participation in acts of racialization.
Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to what extent the two kinds of opacity can be mitigated by explainability methods.
There is a broad consensus that human supervision holds the key to sound automated decision-making: if a decision-making policy uses the predictive outputs of a statistical algorithm, but those outputs form only part of a decision that is made ultimately by a human actor, use of those outputs will not (per se) fall foul of the requirements for due process in public and private decision-making. Thus, the focus in academic and judicial spheres has been on making sure that humans are equipped and willing to wield this ultimate decision-making power. Yet, proprietary software obscures the reasons for any given prediction; this is true both for machine learning and deterministic algorithms. And without these reasons, the decision-maker cannot accord appropriate weight to that prediction in their reasoning process. Thus, a policy of using opaque statistical software to make decisions about how to treat others is unjustified, however involved humans are along the way.
This chapter closes Part 1 by analysing how the opacity surrounding the use of AI and ADM tools by financial corporations is enabled, and even encouraged, by the law. As other chapters in the book demonstrate, such opacity brings about significant risks to fundamental rights, consumer rights, and the rule of law. Analysing examples from jurisdictions including the US, UK, EU, and Australia, Bednarz and Przhedetsky unpack how financial entities often rely on rules and market practices protecting corporate secrecy, such as complex credit scoring systems, proprietary rights to AI models and data, and the carve-out of ‘non-personal’ information from data and privacy protection laws. The authors then focus on the rules incentivising the use of AI and ADM tools by financial entities, showing how they provide a shield behind which corporations can hide their consumer scoring and rating practices. The authors also explore potential regulatory solutions that could break the opacity and ensure transparency, introducing direct accountability and scrutiny of ADM and AI tools, and reducing the control of financial corporations over people’s data.
This essay identifies two approaches to theorizing the relationship between financialization and contemporary art. The first departs from an analysis of how market logics in non-financial spheres are being transformed to facilitate financial circulation; the other considers valuation practices in financial markets (and those related to derivative instruments in particular) from a socio-cultural perspective. According to the first approach, the contemporary art market is in theory a hostile environment for financialization, although new practices are emerging that are increasing its integration with the financial sphere. The second approach identifies socio-cultural similarities between the logics by which value is extracted, amplified, and distributed through derivative instruments and contemporary art. The two approaches present a discrepancy: on the one hand, contemporary art functions as an impediment to outright financialization because of market opacity; on the other, contemporary art represents a socio-cultural analog to derivative instruments. The essay concludes by setting out the terms for a more holistic understanding of contemporary art's relationship to financialization, which would enable an integration of its economic and socio-cultural dimensions.
This chapter discusses the reflexive relationship between qualitative researchers and the process of selecting, forming, processing and interpreting data in algorithmic qualitative research. Drawing on Heidegger’s ideas, it argues that such research is necessarily synthetic – even creative – in that these activities inflect, and are in turn inflected by, the data itself. Thus, methodological transparency is key to understanding how different types of meanings become infused in the process of algorithmic qualitative research. While algorithmic research practices provide multiple opportunities for creating transparent meaning, researchers are urged to consider how such practices can also introduce and reinforce human and algorithmic bias in the form of unacknowledged introduction of perspectives into the data. The chapter demonstrates this reflexive dance of meaning and bias using an illustrative case of topic modelling. It closes by offering some recommendations for engaging actively with the domain, considering a multi-disciplinary approach, and adopting complementary methods that could potentially help researchers in fostering transparency and meaning.
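As a concrete, if deliberately schematic, illustration of where such analytic choices enter a topic-modelling workflow, the sketch below runs scikit-learn's LDA implementation on a handful of made-up documents. The number of topics, the stop-word list and the vectorisation settings are all researcher decisions of the kind the chapter discusses; none of this reproduces the chapter's actual case or corpus.

```python
# Illustrative topic-modelling sketch (not the chapter's actual study):
# choices such as the number of topics, the stop-word list and minimum
# document frequency all inject perspective into the "found" topics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "transparency and accountability in algorithmic decision making",
    "opacity of machine learning models in public administration",
    "qualitative interviews on trust in automated systems",
    "researchers interpret interview transcripts about automation",
]

# Each of these parameters is an analytic choice, not a neutral default.
vectorizer = CountVectorizer(stop_words="english", min_df=1)
doc_term = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```

Changing the stop-word list or the number of components reshuffles what the model presents as coherent themes, which is precisely the kind of unacknowledged introduction of perspective the chapter asks researchers to make transparent.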
Kant’s moral philosophy both enjoins the acquisition of self-knowledge as a duty, and precludes certain forms of its acquisition via what has become known as the Opacity Thesis. This article looks at several recent attempts to solve this difficulty and argues that they are inadequate. I argue instead that the Opacity Thesis rules out only the knowledge that one has acted from genuine moral principles, but does not apply in cases of moral failure. The duty of moral self-knowledge applies therefore only to one’s awareness of one’s status as a moral being and to the knowledge of one’s moral failings, both in particular actions and one’s overall character failings, one’s vices. This kind of knowledge is morally salutary as an aid to discovering one’s individual moral weakness as well as the subjective ends for which one acts, and in this way for taking up the morally required end of treating human beings as human beings. In this way, moral self-knowledge can be understood as a necessary element of moral improvement, and I conclude by suggesting several ways to understand it thereby as genuinely primary among the duties to oneself.
It is natural to think that social groups are concrete material particulars, but this view faces an important objection. Suppose the chess club and nature club have the same members. Intuitively, these are different clubs even though they have a common material basis. Some philosophers take these intuitions to show that the materialist view must be abandoned. I propose an alternative explanation. Social groups are concrete material particulars, but there is a psychological explanation of nonidentity intuitions. Social groups appear coincident but nonidentical because they are perceived to be governed by conflicting social norms.
In this article, the three co-authors collaboratively address practices of queering in relation to the Parisian choreographer of color Nyota Inyoka (1896–1971), whose biography and identity remain mysterious even after extensive research. Writing from three different research perspectives and relating to three different aspects of her life and work, the co-authors analyze Nyota Inyoka and practices of Queering the Archive, her staging of Shiva as a performance of (culturally) “queer possibility,” and the act of remembering Nyota Inyoka in a contemporary context in terms of queering ethnicity and “cultural belonging.” Juxtaposing and interweaving notions and practices of queering and créolité/creolizing over the course of the article, the co-authors attempt to respect Nyota Inyoka's “right to opacity” (Glissant [1996] 2020, 45) and remember her on her own terms.
Chapter 4 conducts a qualitative assessment of the substantive rules underlying Eurozone economic governance, with a view to testing their actual contribution to trust-generation and uncertainty-reduction. Focused on the Eurozone’s fiscal policy rules, the Chapter shows that the Eurozone’s common fiscal discipline suffers from serious qualitative flaws, pertaining to the rules’ complexity, opacity and internal inconsistency, and to the unconstrained discretion that the Commission enjoys as their main enforcer. It argues that the system’s reliance on policy rules has become excessive and counterproductive, as it now works against the objectives of certainty, stability and equality that it was supposed to achieve, instills distrust and facilitates arbitrariness. Hence, the Chapter highlights the pressing need for an overhaul of the existing policy rules and a deeper institutional reflection about the legitimacy of the rules-based approach to economic and fiscal governance.
The primary interest of sandhi in Romance is as a morphological phenomenon. Adaptation of word forms to a variety of sandhi contexts gives rise to allomorphy (paradigmatic variation). Such adaptation reflects natural phonological processes which tend to reduce the markedness of sequences of phonological elements. We illustrate, from Catalan and French, strategies to avoid hiatus, and, from Catalan and Occitan, strategies to simplify consonant clusters. Romance also attests subphonemic alternations in sandhi environments, and we draw attention to cases such as intersonorant lenition of initial voiced stops in much of south-western Romance. A striking feature of Romance sandhi alternations is the readiness with which they may become morphologized or lexicalized. This outcome may arise from subsequent sound changes that make the original motivated alternation opaque, or from levelling of allomorphic alternation that makes the distribution of allomorphs opaque. We review an example of such a change in progress: the aspiration/loss of coda /s/ in Andalusian Spanish. Occasionally, a morphologized/lexicalized alternation may be (partly) remotivated, as is famously the case with rafforzamento fonosintattico ‘phonosyntactic strengthening’ in standard Italian. However, the phenomena of elision and liaison in modern French exemplify morphophonemic arbitrariness with very extensive incidence.
This chapter covers umlaut and ablaut as morphological (rather than phonological) processes, affix order and bracketing paradoxes, subcategorization and stratum ordering, and a critique of Optimality Theory with respect to its ability to account for major phonological patterns in English, as described in rule terms in the preceding chapters. These include stress, vowel shift, and laxing. Special attention is given to opacity. Opacity presents the same problem to Optimality Theory as it does to pre-Generative structuralist phonology, due to its output orientation. Velar Softening is opaque in medicate (underapplication) and in criticize (overapplication). Various patches proposed to deal with this issue have involved the reintroduction of the intermediate derivational stages that Optimality Theory was designed to eliminate. These patches do not allow for Duke of York derivations such as that which appears in English in the derivation of pressure. The device of stratal Optimality Theory, combining level ordering and constraints differently ranked on different strata, can account for some Duke of York derivations, but at the expense of making some postlexical processes lexical.
One characteristic interpretive technique in the discourse of customary international law is the identification of such norms as 'possibly emerging' or possibly in existence. Thus it is frequently asserted that a putative norm 'may' have or 'probably has' customary status. This hypothetical mode of analysis can give rise to the speculative construction of international obligations driven more by preference than by evidence. This speculative rhetorical technique is examined by reference to the account of temporal dimensions of the emergence of customary international law provided in the Chagos Archipelago Advisory Opinion of 2019. Here the International Court of Justice endeavoured to pin down the time of origin and path of evolution of a customary norm requiring territorial integrity in the context of decolonisation as self-determination. This chapter engages with this ubiquitous characteristic of the interpretation of customary international law and argues that the accompanying opacity in relation to international legal norms – norms that are held to generate obligations – is to be deplored.