One might presume that the job of an expert in ethics is to mark the line between what is right and what is wrong. And – one might add – the way the ethicist is supposed to achieve this goal is by drawing up a list of general principles of behaviour. In the celebrated essay ‘Virtue and Reason’ (1979), McDowell's central concern is precisely to delineate an alternative to this approach to the study of ethics. As we shall see in ‘Virtue and Uncodifiable Reasons’, McDowell claims that the task of ethics cannot be approached by identifying a set of rules of conduct to govern our actions. Any such endeavour is, he argues, condemned to failure, for our moral outlook is simply not susceptible to codification in a set of rules.
At first, the impossibility of encoding our moral outlook into a finite number of principles might even strike us as a plausible thesis. After all – it could be argued – anyone who has ever attempted the feat has certainly failed. However, upon reflection, we can detect a certain tension between such uncodifiability and our ordinary intuitions about rationality. Indeed, to accept an action as rational is to regard it as appropriate to an acknowledged goal, be it aiming at the truth or, in the case of morality, aiming at the good. For this reason, rationality requires consistency – acting rationally implies acting consistently towards a given end. But – we normally assume – one's consistent actions must be explicable in terms of their being guided by some general rule – how could one go on doing the same thing if not by following something like a rule, a universal principle? Therefore, since moral action is a form of rational action, we intuitively conclude that it too should be explicable in rule-following terms. Yet, McDowell insists, this conclusion is too quick. Indeed – as we shall see in ‘A Wittgensteinian Route to Aristotle?’ – he argues that it is Wittgenstein's great merit to have compellingly shown us that a rule-following conception of rationality is nothing but a deep-rooted prejudice.
Introduction to the Black-Box Problem in Artificial Intelligence
The word ‘hallucinate’ was chosen as the Cambridge Dictionary's Word of the Year for 2023, reflecting this word's recent extension as a label for AI models producing false data. This is not only telling of AI's increasing importance in our lives: it also says a lot about how often its behaviour is unexpected. This unexpected behaviour is related to the fact that, nowadays, many AI models are designed in such a way that humans cannot know what algorithm is being implemented in the machine. This is often called Black-Box AI (BBAI hereafter). While the function telling the machine what to do may still be traceable in the initial training stages, the purpose of machine learning is for the machine itself to adapt and change its parameters so as to perform its task more efficiently – while the programmers become ‘largely ignorant of quite what is going on inside’ (1950, 458), to put it in Turing's own words. Due to the interconnectedness of the values of the potentially billions of parameters making up the functions that describe how such machines work, modifying a single value in a complex black-box function – say, when it ‘learns’ how to improve outputs during training – can affect the values of millions of others. This means that after a few iterations of the initial training model, the actual function being implemented in the machine – and any variant thereof – is essentially unknowable to humans.
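To make this opacity concrete, here is a minimal sketch (our own illustration, not part of the chapter, assuming only the NumPy library): even a toy network trained on four data points ends up with weights that are numerically exact yet read as nothing like a human-written rule.

```python
# A minimal sketch of why learned parameters are opaque: a tiny two-layer
# network trained on XOR. Even at this scale, the final weights do not read
# as an interpretable "algorithm"; in models with billions of parameters,
# every weight's role depends on all the others.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised parameters: their final values emerge from training,
# not from any human-written rule.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # Each update to W2 changes the error surface seen by W1 and vice versa:
    # the parameters are interdependent, not independently interpretable.
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]
print(np.round(W1, 2))   # numerically exact, yet opaque as an "algorithm"
```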
This is different from, say, a very large decision tree, whose size makes it unsurveyable at a single glance, but where each step is easily inspected. It is also different from cases where the technology is black-boxed in practice, but not in principle, such as proprietary BBAI, where the corporations that own the machines hide how they work in order to prevent others from stealing their technology, even though they themselves might know what is going on inside.
Importantly, such BBAI models are not outliers: most deep-learning models rely on BBAI. This means that, in many cases, we end up relying on machines that are implementing an algorithm that is completely unknown to us – in some cases, for high-stakes decisions. Given the large-scale deployment of machines relying on BBAI in many critical domains, our lack of epistemic access to BBAI has many real-life consequences, both practical and ethical.
This Americky is heaven's own spot, ma’am, and there's no denyin’ it.
—Augustin Daly, A Flash of Lightning
In the New World […] ‘no slum was as fearful as the Irish slum.’ Of all the immigrant nationalities in Boston, the Irish fared the least well, beginning at a lower rung and rising more slowly on the economic and social ladder than any other group.
—Doris Kearns Goodwin, The Fitzgeralds and the Kennedys: An American Saga (1987)
After the great influx of Irish immigrants […] the Scotch-Irish insisted upon differentiating between the descendants of earlier immigrants from Ireland and more recent arrivals. Thus, as a portion of the Irish diaspora became known as ‘the Irish’, a racial (but not ethnic) line invented in Ireland was recreated as an ethnic (but not racial) line in America.
—Noel Ignatiev, How the Irish Became White (1995)
My twin brother and I were born in Springfield, Illinois, the “Land of Lincoln,” and celebrated our ninth birthdays three months before John F. Kennedy's election as president in November 1960. But even though we watched the evening news attentively on a fuzzy black-and-white television with our parents—a “mixed marriage” of a Protestant father and Catholic mother—neither of us quite understood why the election seemed to matter so much. A few weeks later, on Thanksgiving morning, our view got clearer. The day began with helping our mother and grandmother, a second-generation Irish American, prepare for the traditional feast. Our jobs included tearing a mound of stiff, day-old bread into stuffing for the turkey, polishing silverware and carrying plates and other necessities to the table. This year was different, though, because not long after we started our chores mother and grandmother, cooking at the stove, began laughing and crying softly at the same time. I remember asking, “Mom, what's wrong? Did something bad happen?” With a smile she reassured us, “No, boys. These are tears of joy, because grandmother never thought she would live to see an Irish Catholic president in the White House. Now she will.”
Is it possible for a machine to think? At first glance, this question may appear to be about the possible future capacities of artificial intelligence (AI) systems. Indeed, the recent public discussion of ChatGPT, other AI systems and the future of this technology has, to a great extent, been concerned with that question. However, there is also a philosophically more profound question about what we mean when we say that something is thinking. Answering this philosophical question is also crucial if we wish to build machines or computers with thinking capacities. We must first analyse the conditions under which we would say something – or someone – is thinking. Only after that can we ask what psychological or artificial processes are needed for this to be the case.
Philosophers who discussed this issue during the early days of computers include Alan Turing, who famously argued for the possibility of thinking machines, and Ludwig Wittgenstein, who drew the opposite conclusion. Despite their differences, the two seem to agree on how thinking should be attributed. This chapter returns to what they wrote about thinking and why they disagreed. We draw some lessons from Wittgenstein and Turing on how we should think about thinking, and we apply their ideas to contemporary discussions and engineering in robotics and AI. Finally, we outline a view on thinking machines that bridges Turing's optimism and Wittgenstein's scepticism about whether a machine could be said to think.
Our primary objective is not to settle a conceptual dispute between Wittgenstein and Turing. Our main goal lies in the opposite direction: to contribute to the discussion on the philosophy of robotics and AI. We approach the problem of thinking machines by identifying what Wittgenstein and Turing agree on and then apply what we have learned to the current debate on AI and robotics. In particular, we consider the idea of modelling robots according to an enactivist conception of thinking (see, e.g. Varela et al. 1991; Noë 2004; Rohde 2010; Stewart et al. 2010; Hutto and Myin 2012; Gallagher 2017; Egbert and Barandiaran 2022; Lassiter 2022). According to enactivism, cognition does not primarily consist of the internalist processing of representations.
To say that concepts need to be understood through reference to their use in action is something of a truism. Such reference can help clarify and untie knots that may have emerged when concepts have been used without their everyday anchors. This is one view of what philosophy entails – famously described by some commentators on Wittgenstein as a kind of therapy (Fischer, 2018). Often this therapy is for philosophers themselves, whose mode of enquiry can lead them away from everyday life. The view also presupposes that these concepts are stable, already settled somewhere in everyday use. Philosophers need only revisit those contexts. Other disciplines have a different interest in how the meaning of concepts is found in use. For one thing, these contexts – and hence the usages to be found in them – may be unfamiliar, the practices of another culture, say. It may be that contexts are changing, and as they do so, altering the meanings of terms used in those contexts. For these disciplines, these concerns are essentially empirical; anthropologists, certain sociologists (others too, no doubt), seek to define context and then get to meaning. Whether there are conceptual muddles that might require therapy is a second-order concern, perhaps not even one at all.
Take the concept of the ‘user’ in human–computer interaction (HCI). This is surely to be understood with regard to its use, in relation to the contexts in which persons and computers do things together. It is these doings that make ‘a user’ a real phenomenon, a concept with pragmatic consequences, and it is these consequences that get expressed in the grammar of use for the term, ‘user’. In this grammar, the ‘user’ partly expresses something about machines and partly the humans that use them, or perhaps more accurately, expresses something about the interaction the two engage in (Agre, 2008). The grammar doesn't say what that interaction might be, but somehow what is implied is a complex phenomenon, tacitly encompassing both flesh and silicon, expressing agencies of moral intention and those of machinic calculation. The way I am using the term grammar echoes Wittgenstein's thinking about the same term in his Philosophical Investigations (1953).
Is AI art really art? This question has been the subject of much public discussion and is one that philosophical aesthetics should be well-placed to address. Unfortunately, there is no clear consensus within the discipline on how to tackle key definitional questions such as this. In the case of AI, we can add to this the unique challenge of works not made by humans. In this chapter, I argue for the utility of a Wittgensteinian approach to the question of whether AI art is art. This approach typically repudiates the need to provide necessary and sufficient conditions. Using Gaut's cluster account, I show that AI art can indeed count as art. I also demonstrate that the cluster account of art is particularly useful for thinking about art made by AI.
The Cluster Account of Art
The perceived failures of contemporary definitions of art (particularly a failure to garner any broad consensus amongst philosophers) led Berys Gaut to take a Wittgensteinian approach to art. Gaut was not the first to consider Wittgenstein's work in relation to the definition of art. His theory revisits the work of philosophers in the 1950s who applied Wittgenstein's family resemblance approach to the question ‘what is art?’, arguing for an anti-definitionalist approach (see Weitz 1956; Ziff 1953; Kennick 1958). These philosophers argue for two key points: first, that art cannot be defined (in terms of individually necessary and jointly sufficient conditions), and second, that art is a concept best characterized in terms of family resemblance (Gaut 2000). Instead of resemblance-to-paradigm as the model for the concept of art, however, Gaut turns to a ‘cluster account’ construal of family resemblance (Gaut 2000, 26). The cluster version of family resemblance that Gaut adopts comes from Wittgenstein's discussion of proper names (Wittgenstein 2009: PI §79) and was further developed by Searle (1958). As Gaut writes,
A cluster account is true of a concept just in case there are properties whose instantiation by an object counts as a matter of conceptual necessity toward the object's falling under the concept […] There are several such properties (criteria) for a concept.
A key type of reasoning in everyday life and science is reasoning by analogy. Roughly speaking, such reasoning involves the transposition of solutions that work well in one domain to another, on the basis of pre-existing analogous properties between the two domains. If we are to automate scientific reasoning with artificial intelligence (AI), then we need adequate models of analogical reasoning that clearly specify the conditions under which good analogical inferences can be made and bad ones avoided. Two general approaches to such modelling exist: universal and local. In this chapter, we assess the merits and demerits of both approaches. We concede that there are substantial obstacles standing in the way of the universal model view, but argue that these may be mitigated to some extent by supplementing existing models with additional criteria. One such criterion is defended, particularly against a challenge due to Wittgenstein. We argue that this challenge can be met and thus that there is hope for a one-size-fits-all model in the study of analogical reasoning.
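As a rough illustration of what such modelling involves (a toy sketch of our own, in the spirit of symbolic structure-mapping approaches rather than any particular system discussed in this chapter; the solar-system/atom analogy and all names below are merely illustrative), two domains can be aligned by their shared relational structure and the leftover source relations then transposed as candidate inferences:

```python
# Toy structure-mapping sketch: align entities across two relationally
# described domains, then "transpose" unmatched source relations to the
# target as candidate analogical inferences.
from itertools import permutations

# Relational descriptions of two domains (the classic solar-system/atom
# analogy, used purely as an example).
source = {("attracts", "sun", "planet"),
          ("revolves_around", "planet", "sun"),
          ("more_massive", "sun", "planet")}
target = {("attracts", "nucleus", "electron"),
          ("revolves_around", "electron", "nucleus")}

def entities(facts):
    return sorted({arg for (_, *args) in facts for arg in args})

def best_mapping(src, tgt):
    """Try every one-to-one entity mapping; keep the one that makes the
    most target relations line up with source relations."""
    src_es, tgt_es = entities(src), entities(tgt)
    best, best_score = {}, -1
    for perm in permutations(src_es, len(tgt_es)):
        m = dict(zip(tgt_es, perm))
        score = sum((r, m[a], m[b]) in src for (r, a, b) in tgt)
        if score > best_score:
            best, best_score = m, score
    return best

m = best_mapping(source, target)
print(m)  # {'electron': 'planet', 'nucleus': 'sun'}

# Candidate inference: transpose unmatched source relations to the target.
inv = {v: k for k, v in m.items()}
for (r, a, b) in source:
    if a in inv and b in inv and (r, inv[a], inv[b]) not in target:
        print("conjecture:", (r, inv[a], inv[b])) # more_massive(nucleus, electron)
```

The exhaustive search over mappings is what a universal model would need to constrain: which shared properties are relevant, and when a transposed conjecture counts as a good inference, are exactly the questions at issue in what follows.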
The structure of the chapter is as follows. The section titled ‘Philosophical Models of Analogical Reasoning’ provides an overview of the main philosophical models of analogical reasoning, identifying some of their strengths and weaknesses. The next section, titled ‘AI Models of Analogical Reasoning’, briefly looks at one model of analogical reasoning that originates in the symbolic AI tradition, and offers some very general remarks about the prospects of modelling analogical reasoning with neural AI. Following that, the section titled ‘Norton's Material Challenge’ sets out the key issue of concern for this chapter, namely whether a universal model of analogical reasoning can be constructed. In the section titled ‘Relevant Conceptual Uniformity’, we consider one promising route towards a universal model via the supplementary criterion that the concepts involved are relevantly uniform. The subsequent section, titled ‘A Wittgensteinian Spanner in the Works?’, presents a challenge to this route that can be found in Wittgenstein's family resemblance metaphor, whose ultimate target is the rejection of concept uniformity. An attempt is made to meet this challenge by arguing that some concepts in natural science are uniform, or at least more uniform than others, but also that scientific inquiry strives towards, and manages to increase, uniformity.
Sonia Sanchez is known for her contributions as a poet, activist, dramatist, educationist, and champion of African American culture. She is regarded worldwide as “a living legend,” a revered female writer of the Black community (Wood 2010, xi). Like Langston Hughes, Sterling Brown, Jean Toomer and Margaret Walker, with whom she is compared, Sanchez has opened a “space in American letters where the racial self may be heard, affirmed, and strengthened” (Andrews, Foster, and Harris 1997, 643). Her poetics and politics are inseparable. As she recalled in 2000: “The cultural thing, I think, was the existence of us as black folk in a place that did not speak well of us, a country that not only had enslaved us but afterward had ignored us—had segregated us and conspired to keep us from learning even the simplest things” (Sanchez and Kelly 2017, 1034). Sanchez, alongside her fellow revolutionary Black Arts poets, frequently saw her activism questioned: “[People asked,] Why do you agitate? You have brains, talent, education. You can find a nice comfortable niche and forget about others” (quoted in Randall 1970, 9). But to find a nice comfortable niche was to negate or compromise her blackness and to forget about the systematic discrimination against African Americans, her “brothers” and “sisters,” who were deprived of a comfortable niche.
Sanchez's lifelong sociopolitical activism has earned diverse praise. Nicole Moore (2010, 2) argues that Sanchez “infuses her writing with the type of historical and cultural significance and power that makes each word sharp as a razor blade and as hard as any Tupac [Shakur] lyric.” Essence magazine has called her poetry “a must for all readers,” while the writer, poet and civil rights activist Maya Angelou has called Sanchez “a lion in literature's forest” (Leopold 2013, para. 4). Reflecting on herself, Sanchez says, “[a]s a poet, I know that I have sharp words” (quoted in Ballin 2015, 3). The self-appraisal is not unlike Oodgeroo's view of her poetic purpose: “I’d rather hit them with my words than pick up a gun and shoot them” (quoted in Fox 2011, 62). In his introduction to Sanchez's Home Coming, Don L. Lee (1969, 7–8) argues that “Sonia wants us to/live & to/live is not synonymous with to/exist.”
This chapter describes three cases involving legal claims by extraditees that their surrender would have serious consequences for their physical and mental health. In the case of Julian Assange, extensive delays have contributed to his deteriorating mental health and a high risk of suicide if he is surrendered to the US to face numerous charges of espionage. The case of Dorin Savu introduced evidence that surrender would create potential exposure to torture in the requesting state, while the extradition proceedings considered the significance of his status and rights as a refugee in Canada. The case of Elias Perez raised arguments against extradition that focused on his potential exposure to violations of the Convention against Torture (CAT) if he was surrendered to face trial in Mexico, and related arguments involving the rule of noninquiry. The individual rights arguments in all of these cases were largely rejected because the courts considering the requests were willing to support the overarching authority of the requesting nation to prosecute each suspect for various crimes. The chapter will describe key factual and legal details raised in each case and demonstrate how they reveal the need for reform to the process of extradition, given the limited recognition of the obligation to protect the rights of extraditees, as well as their physical and mental welfare.
Julian Assange, Mental Health and Delay
Australian citizen Julian Assange's attempts to avoid extradition from the UK, first to Sweden to face questioning over alleged sexual offences and then to the US on serious espionage charges, spanned a total of 13 years. This included seven years of self-imposed exile in the Ecuadorian Embassy in London to avoid surrender to Sweden, largely due to fears he would be unlawfully sent on to the US despite prohibitions against surrender to third-party states in extradition treaties. Assange has repeatedly claimed the legal processes for extradition would lead to his unlawful transfer to the US, resulting in a possible maximum term of 175 years in prison (Australian Associated Press 2021). His charges are linked to the activities of WikiLeaks, a website launched in Sweden by Assange in 2006 as a forum for holding governments throughout the world accountable through the ‘principled leaking’ of confidential documents (Karhula 2011, 1).
How does one summarize a play? A play is an experience, which is made as thick as possible by the theater artists who gather to put it in focus. Anything else is just synopsis, a reduction of a story to its self-evident facts, but no one would pay attention to the art if it consisted only of that. Yet summation is a fair term for what we do in watching a play, bringing its characters and incidents to a totality, even though we know that only in the full-scale experience do we get taken in by the fiction and the effects.
Long Day's Journey unfolds over the course of a single day in August 1912. The setting is described as “James Tyrone's summer house,” and so at once we know of a man, a father, and several pages of scenic and character description will give us an unusually detailed sense of how we should see this place and these characters in relation to him.
(Act 1) It is around 8:30 a.m. The day begins in good humor and affection, with James (65) appreciating the home he shares with his adored wife, Mary (54), and their two adult sons, Jamie (33) and Edmund (23), who are also in good humor. A few discordant notes are heard. Mary seems troubled by the sense that she is being watched over, and Edmund shows signs of ill health. The tension is broken by a funny anecdote Edmund tells of an Irish tenant farmer on a piece of property owned by the family, but even that bit of humor brings out underlying tensions, like the sons’ cynicism about the father's status as landlord and the father's bitterness about the waywardness of his sons, who both depend on him at the moment.
Jamie has taken up his father's profession as an actor, but he enjoys nothing—not wealth, not art, not pride—of his position, and antagonism has grown between father and son as a result. The younger son has led a vagabond life according to some romantic and decadent fantasies. At a low point, he attempted suicide, but lately he has been working for the local newspaper and publishing some poetry, giving his family hope that he has turned a corner in his life.