I was brought up to think myself Irish, without question or qualification; but the new nationalism prefers to describe me and the like of me as Anglo-Irish.
—Stephen Gwynn
There must be some way out of here,
said the joker to the fool.
—Bob Dylan
Our world does not need tepid souls. It needs burning hearts,
men who know the proper place of moderation.
—Albert Camus
The point of confronting the past is to deal with its effects in the present. That it is possible to confront the past presupposes that we are not determined by it but have some leeway to distinguish ourselves in our present from the past's multifaceted shaping power.
The key ethical claim of this book is that while we are inescapably shaped by our history, we are not imprisoned by it unless we choose to be. In Nietzsche's words, we can brace ourselves against it: we can push back. That ethical claim was treated in a general philosophical fashion in Chapter 1 and applied to specific historical details in Chapters 3 and 4. Chapter 5 addressed the theme of developing an ethics of political memory. This concluding chapter discusses the acceptance of nationalist, unionist and other identities, recognition of those identities and construction of a new political community open to updating and reinterpretation, not dissolution, of those identities.
Nationalist leaders occasionally show awareness that such reinterpretation is required. But their comments to that effect are couched in vague, abstract or poetic language, reflecting a fear of enraging the ultra-nationalist, SF-tending enragés with even mild criticism of the 1916 leaders and other dead republicans. This is a failure of leadership and a refusal of Nietzsche's challenge. Specific items in the past, above all in the events of the 1912–23 period, must be reinterpreted with an eye to present needs.
Nietzsche claimed that history must be used for the purposes of life: in Ireland today, north and south, that purpose is political reconciliation, enough acceptance and mutual recognition between nationalists, unionists and others for a live-and-let-live political community to be built. That cannot be done by uncritically following an unqualified version of the political philosophy of the 1912 Ulster Solemn League and Covenant or the 1916 Proclamation. The 1998 Agreement's political philosophy clashes with both, and the issue is: which shall we follow?
The philosophy of AI encompasses epistemological, psychological, ontological, technical and ethical issues. Even though these matters have different natures and theoretical implications, they are closely related to the fundamental problem in the philosophy of AI – whether machines can think.
The question ‘Can machines think?’ has received two plausible answers captured in a now-standard distinction in the field, namely, the weak and strong conceptions of AI. In this chapter, the former represents the view of AI as a valuable tool that simulates but does not display mentality (see Searle 1980, 417). Strong AI represents the conception of AI as being itself a mind rather than merely a set of simulating devices. In strong AI, as the programs are themselves minds and display cognitive states, their workings directly explain the functioning of the human mind (see Searle 1980, 417). In this regard, the two conceptions of AI relate to different uses of psychological language. In weak AI, psychological language is applied to machines not literally but figuratively – it is as if machines learn, think or perceive, but they really don’t. In contrast, the strong view of AI is related to literal or ‘primary’ uses of psychological language – machines do (or in principle could) think, learn and perceive in the way humans do. Most answers to the question ‘Can machines think?’ fall on one side or the other of this dichotomy and are based on different theories with specific ontological commitments, for instance, dualism, functionalism, biological naturalism, identity theory and the computational theory of the mind. Consider the following quote.
To the extent that rational thought corresponds to the rules of logic, a machine can be built that carries out rational thought. […] computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors.
(Pinker 1997, 68, 78)
According to this perspective, machines can display actual mental powers given that the nature of mentality can be instantiated in artificial devices. Strong AI might emerge from different philosophical sources.
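Pinker's gloss can be made concrete with a deliberately crude sketch. The following toy code is my illustration only, not anything proposed in the chapter; the class and its methods are hypothetical stand-ins. It treats beliefs and desires as inscriptions in memory and ‘thinking’ as computation over those inscriptions:

```python
# Toy computational 'agent' illustrating Pinker's gloss (hypothetical example):
# beliefs are inscriptions in memory, desires are goal inscriptions,
# perceptions are inscriptions triggered by sensors, thinking is computation.

class ToyAgent:
    def __init__(self) -> None:
        self.beliefs: set[str] = set()   # inscriptions in memory
        self.desires: set[str] = set()   # goal inscriptions

    def perceive(self, fact: str) -> None:
        # A 'perception' inscribes a new belief, as if triggered by a sensor.
        self.beliefs.add(fact)

    def think(self) -> set[str]:
        # 'Thinking' as computation: which goals are not yet believed achieved?
        return self.desires - self.beliefs

agent = ToyAgent()
agent.desires.add("door open")
agent.perceive("door closed")
unsatisfied = agent.think()   # goals not matched by any belief
```

The sketch shows only the shape of the computational picture; whether such symbol manipulation amounts to genuine mentality is precisely what the weak/strong AI dispute contests.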
I know nothing of the application of freedom, as I know nothing of the application of tyranny.
—Ernie O’Malley
The struggle of the Volunteers was a struggle with the Irish people more than a struggle with the invader and indeed the real uphill fight which Sinn Féin has had is with the Irish people.
—Art O’Connor, Director of Agriculture in the first Dáil
The hon. member must remember that in the South they boasted of a Catholic nation.
They still boast of Southern Ireland being a Catholic state.
All I boast of is that we are a Protestant parliament and a Protestant state.
—Sir James Craig, Prime Minister of Northern Ireland
Introduction
In 1912, Ireland was a united country, part of the UK, governed directly from Westminster. By 1922, it was divided into Northern Ireland with Home Rule status in the UK and the Irish Free State with dominion status in the British Empire (or Commonwealth). Almost none of those whose political leadership had led to that outcome had desired it in 1912.
Nationalists had wanted independence, sovereignty or at least self-governance, and most of them got that. Their desire that Ireland not be partitioned was frustrated. Unionists had wanted no devolution from Westminster, and that desire was not fulfilled; they had wanted not to be ruled by nationalists, and those in the north-east achieved that.
Hardly any nationalist or unionist leaders had given thought to what it would mean to govern. Northern unionists began to envisage the prospect of self-governance for their part of Ireland only from 1914; but they thought only of defending the perimeter, not of what it would mean to govern with a large recalcitrant nationalist minority. The failure to plan for governing a minority opposed to Home Rule was more striking on the nationalist side, since nationalists had worked for self-government for decades. They had constantly complained of British government neglect of Ireland, so more could have been expected of them in the 1890–1920 period as regards how they would govern better. They were so focussed on the issue of who should govern that they neglected the issue of how to govern, taking it as axiomatic that a native government would deliver better governance (Hoppen 2016, 11–62).
Ludwig Wittgenstein (1889–1951) is widely regarded as one of the most significant philosophers of the twentieth century. His influence has been deep and wide-ranging, extending well beyond philosophy. Alongside a lasting impact in areas where he wrote extensively (the philosophies of language, logic, mathematics, mind and psychology, as well as metaphysics, and the theory of knowledge), Wittgenstein's philosophy has continued to influence areas beyond his focus (e.g. aesthetics, ethics and jurisprudence). Many of the themes of his work bear – either directly, or indirectly – on issues that arise in (both scientific and commercial) attempts to produce artificial intelligence (AI).
The link between Wittgenstein and AI should perhaps not be a surprising one. During his life, Wittgenstein interacted with Alan Turing (1912–1954), considered by many to be the founder of AI as an area of inquiry: Turing attended Wittgenstein's 1939 lectures in Cambridge on the foundations of mathematics (Wittgenstein 1976: LFM; see also Copeland 2012, 32–34; and see Floyd 2019 for an account of what their mutual intellectual influence may have been). And years earlier, in the Blue Book of 1933, Wittgenstein had even discussed the central animating question of AI, ‘Is it possible for a machine to think?’ (Wittgenstein 1958: BB, 47).
There is therefore ample reason to suspect both that historical investigations of Wittgenstein's work and the context in which it was situated can reveal the intellectual landscape at the dawn of AI, and that philosophical engagement with his work might shed light on important issues in the theory and application of AI. The present collection touches on the former, historical variety of inquiry (see especially Proudfoot's contribution), but it focuses primarily on the latter, more philosophical project.
Why Now?
This collection is not the first work to explore the interaction between Wittgenstein and AI. However, the last concentrated look at the topic was published over a quarter of a century ago: Shanker's (1998) single-authored monograph, Wittgenstein's Remarks on the Foundations of AI. Since then, there have been significant advances in AI – most notably, the advent of big data and the use of deep neural networks for machine learning (ML), which underpin recent generative AI systems (such as ChatGPT, Dall-E and Midjourney) – as well as changes in Wittgenstein scholarship. So the time is ripe for bringing the two together afresh.
The essence of the economic problem of Northern Ireland is that it is an economy with a rapidly growing labor force tied to a slow growing national economy […]. Equally worrying is the fact that recovery in the national economy since 1982 has largely excluded Northern Ireland.
—Northern Ireland Economic Research Centre, qtd. in Frank Gaffikin and Mike Morrissey, Northern Ireland: The Thatcher Years (1990)
Prior to the 2007–09 recession, the 1981–82 recession was the worst economic downturn in the United States since the Great Depression.
—Tim Sablik, “Recession of 1981–82,” Federal Reserve History (2013)
The story of American investment in Ireland that Charles Haughey related to the Economic Club of New York in May 1982 must have amazed many listeners, because, as Tim Sablik of the Federal Reserve Bank of Richmond characterizes it, the implementation of tight monetary policy to contain soaring inflation between July 1981 and November 1982 ignited the “worst economic downturn” since the Great Depression. Here, “downturn” denotes the “largest cumulative business cycle decline of employment and output” in America's post-World War II period (Goodfriend and King, 1). When Paul Volcker was named Chairman of the Federal Reserve on August 6, 1979, inflation had already risen to over 13% and the unemployment rate stood at 7.5% as manufacturing, residential construction and automobile sales languished. In the latter two sectors, unemployment reached levels of 22% and 24%, respectively, and mortgage rates climbed to 18.63% in October 1981; by 1989, they were still over 10%. Eventually, however, Volcker and the “Reagan recovery” brought inflation under control. During the president's two terms, the Standard and Poor's 500 Index more than doubled; new jobs were created, and mortgage rates came down (though, speaking from personal experience, a 30-year fixed mortgage of nearly 12% in 1985 was hardly a panacea for first-time homebuyers). As economists Marvin Goodfriend and Robert King put it, Volcker's eventual victory over inflation made the “inflation peak” of early 1980 “stand out dramatically in the U.S. experience” (1).
In this chapter, I would like to explore a connection between Wittgenstein's Lecture on Ethics (Wittgenstein 1965: LE) and the ethics of Artificial Intelligence (AI ethics). Part of the project in AI Ethics is pursued by attempting to provide a formal account of a group of interrelated concepts determining what von Wright (1951), in a seminal article on the subject, called modes of obligation. Amongst such concepts, dubbed deontic by von Wright, are: the obligatory (what ought to be done); the permissible (what is allowed to be done); and the forbidden (what must not be done).
Since von Wright's paper, it has become standard to treat deontic concepts as forming a distinctive class of modalities in addition to other perhaps more familiar such classes. An obvious example comprises alethic (or sometimes metaphysical) modalities, represented in language by what are known as the modes of truth: what cannot but be the case; what can be the case; and what cannot be the case.
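For orientation (my gloss, not the chapter's own notation), in standard deontic logic von Wright's three modes of obligation are interdefinable, in exact parallel with the alethic modes of truth:

```latex
% Deontic modes: O = obligatory, P = permissible, F = forbidden
O\varphi \;\leftrightarrow\; \neg P\neg\varphi
\qquad
F\varphi \;\leftrightarrow\; O\neg\varphi \;\leftrightarrow\; \neg P\varphi

% Alethic analogue: \Box = necessity, \Diamond = possibility
\Box\varphi \;\leftrightarrow\; \neg\Diamond\neg\varphi
\qquad
\neg\Diamond\varphi \;\leftrightarrow\; \Box\neg\varphi
```

The forbidden thus mirrors what cannot be the case, the permissible what can be the case, and the obligatory what cannot but be the case.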
Prima facie, the classes of deontic and alethic modalities respectively single out two ways in which we could use language: normatively, to express obligations (and cognate notions); and descriptively, to express necessities (ditto). The Lecture on Ethics is the only place in which Wittgenstein elaborated at any length on the relationship between normative and descriptive uses of language. In it, as is well known, Wittgenstein joins a tradition stretching back at least to Hume in maintaining that no normative claim could ever logically follow from a descriptive claim; an ‘is’ can never entail an ‘ought’. What is perhaps less well known is that the thesis defended by Wittgenstein in the Lecture on Ethics might have an important role to play in debates around the existential risk posed by the rapid development of AI.
This chapter thus has a twofold goal. In the first section I would like to show how, once the thesis that an ‘is’ never entails an ‘ought’ is accepted, some arguments purporting to show that AI poses an existential threat lose much of their force. In the second section, I would like to propose my own reading of Wittgenstein's arguments in favour of the no-‘ought’-from-an-‘is’ thesis, put forward in the Lecture on Ethics.
This chapter investigates the role of apologies issued by AIs – AIpologies – for AI regulation. It is therefore only appropriate to start the discussion with an apology: even though the chapter will try to bring some of Wittgenstein's ideas to the discussion, I am not a Wittgenstein scholar, and I apologize in advance that my discussion may overlook substantial parts of the contemporary Wittgenstein debate, or may even give a very idiosyncratic interpretation of his writing.
There has been substantial Wittgenstein reception in legal theory, centring on his conception of rules, rule-following and rule-interpretation. Some legal theorists, such as Patterson, found Wittgenstein's writing of great relevance in answering the question ‘What does it mean to say that a proposition of law is true?’ (Patterson 1999, 3). Others, like Scott Hershovitz (2002) or Brian Bix (2005), argued that legal rules and legal rule-following are too different from the examples discussed by Wittgenstein for the latter to be of much interest to legal theory. As Hershovitz put it, ‘nothing much can be learned about legal rules or legal interpretation by attending to Wittgenstein's remarks, because they were aimed at wholly different phenomena’ (2002, 619).
This debate in legal theory probes, and challenges, our very understanding of what law is and of how we can make sense of legal rules (or fail to do so). The aim of this chapter is more limited. It takes law and a functioning legal system as a given, and asks instead whether we can learn something from Wittgenstein that can help us better interpret one type of legal rule: rules that govern how we interact with increasingly autonomous machines.
If one were to adopt Tushnet's (1983) interpretation of Wittgenstein and deny with him the very possibility of rational adjudication between different interpretations of a law, this would seem a futile endeavour, almost a self-contradiction. I will not address this issue for most of this chapter. Only in the final section will I briefly indicate how using James Tully's Wittgenstein interpretation could lead to a very different analysis of one of the examples on which this chapter focuses.
The land of Ireland is a sword land; let all men be challenged to show that there is any inheritance to the Island of Destiny except of conquest by dint of battle.
—Cogadh Gaedhel re Gallaibh
I am glad that the North has ‘begun’. I am glad that the Orangemen have armed, for it is a goodly thing to see arms in Irish hands.
—Patrick Pearse
All moral problems vanished in the fire of patriotism and death and destruction.
—Seán O Faoláin, recalling his time in the IRA from 1918 to 1924
Violence can destroy power; it is utterly incapable of creating it.
—Hannah Arendt
Introduction
This chapter and the following one engage in ethical analysis of certain aspects of the 1912–23 period: the violent conflicts in this chapter, policy and political issues in the next. In neither do I attempt a full historical outline or explanation of the conflicts or the political elements; I aim instead to provide an analysis of their ethically significant aspects.
In Chapter 1, I referred to Nietzsche's distinction between monumentalist (heroic), antiquarian (academic) and critical types of history. In Chapter 2, history as an academic discipline with self-conscious distancing from ethical and political causes was central. Ethical analysis of the past brings us into the zones of monumentalist and critical types of history.
At the outset, it is important to distinguish between the two main concepts in ethics: the Right and the Good (Rawls 1999, 21). The Right has to do with morality narrowly understood, concerned with the moral law or the rightness and wrongness of actions. It typically proposes relatively rigid universal norms or rules governing it, differentiating between obligatory, prohibited and permitted acts.
Commonly, this is what people assume ethics is about. Unsurprisingly, historians think that such a focus has little applicability in understanding past events. The one historical area in which universalisable moral rules for action are relevant is that of war, genocide and violent civil conflict. Forming militias (such as the Ulster Volunteers and the Irish Volunteers in 1913), importing arms and launching uprisings are open to such ethical analysis. But character, values or policies cannot be illuminatingly evaluated in a rule-governed ethic.
The problem I discuss in this chapter, with reference to Ludwig Wittgenstein's later philosophy of logic, its development and AI, concerns the issue of how to simplify complex information without falsification. Two relevant modes of simplification are abstraction and idealization. When we abstract, we leave out some features of the actual cases. When we idealize, we make things neater, for example, more uniform or exact than they are. As I will explain, the problem of how to simplify without falsifying quickly brings us to the notion of relevance. What can be abstracted or idealized away without falsification is that which is not relevant (essential, important or significant), and in general simplification without falsification requires that whatever is relevant (essential, important or significant) is taken into account. However, the notion of relevance in turn assumes or involves the perception of things being significant; it presupposes that the acting or thinking agent or entity has goals, purposes or interests. Thus, to get an AI system to simplify without falsifying, and to make it able to handle complex information in an intelligent way in this sense, seems to require that its behaviour is informed by goals, purposes or interests.
The assumption I am making about intelligent behaviour is worth making explicit: In what follows, I assume that simplification without falsification is an essential aspect of intelligent behaviour, even though it does not exhaust it. This is what enables an intelligent agent or entity to pick out what is relevant from a wealth of information and to update its perception of what is relevant when needed. Accordingly, I assume that to create an AI system whose behaviour could be described as intelligent (whatever intelligence in general is or means) requires that the system is capable of simplification without falsification. This seems important also for AI systems that are specialized rather than generally intelligent and might be used, for example, for diagnosing illnesses, examining images to find a certain kind of object, and so on for a great number of possible tasks. Before an AI system can be relied on in performing such tasks without a human being checking whether it might have ignored something relevant, we must be confident that it can simplify without falsifying.
Opening: Crises of Values and Governance in Artificial Intelligence
The twenty-first century has witnessed the rise of ‘big data’, and the widespread deployment of artificial intelligence (AI). At the turn of the century, according to Wooldridge, ‘AI was a rather niche area with a somewhat questionable academic reputation’ (2020: 5). But in 2014, Google bought a ‘tiny AI company [DeepMind] for a huge sum […] Artificial intelligence was suddenly big news – and big business’ (2020: 167). What happened?
Contemporary AI uses large data sets – which may or may not be representative of the contexts in which the trained AI will be deployed (see below) – to train models using machine learning (ML) techniques, and specifically, ‘deep learning’ (LeCun et al. 2015). The information processing that results is often opaque: precisely what mapping from inputs to outputs the model supports is unclear, even to the engineers who develop the systems. The AI systems that result are often ‘narrow’, dedicated to solving a relatively small set of problems. As a result, some might not be inclined to call this true AI – much of it certainly doesn't meet the requirements of Searle's (1980) ‘strong AI’, actually possessing the various mental states involved in sapience. But it is also insufficiently general to even simulate much of the variety of human intelligence, and so it fails the requirements for his ‘weak’ AI too. Nevertheless, for better or worse, the term ‘AI’ is used in connection with these information processing systems.
Recently, we have seen the emergence of what is known as ‘generative’ AI – AI that produces outputs of the same kind as its training data. For example, whereas image classifiers take visual images as inputs and produce labels (i.e. words) as outputs (see e.g. Krizhevsky et al. 2012), certain generative systems (such as generative adversarial networks (GANs) and text-to-image models) are able to generate (often novel) images as outputs. Similarly, large language models (LLMs) are trained on vast corpora of textual data, and are able to generate grammatical, and even meaningful, text (e.g. semantically appropriate answers to questions).
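The classifier/generative contrast above can be sketched in deliberately toy code (my illustration only; the functions and the hand-written rules are hypothetical stand-ins, not real trained models): a classifier maps data of one kind to labels of another, while a generative model maps inputs to outputs of the same kind as its training data.

```python
# Toy sketch of the input/output asymmetry between a classifier and a
# generative system. Purely illustrative: real systems learn these mappings
# from data rather than using hand-written rules.

def classify(image_pixels: list[float]) -> str:
    """A classifier: data of one kind in (an 'image'), a label (word) out."""
    # Hypothetical stand-in for a trained model: bright images -> 'day'.
    brightness = sum(image_pixels) / len(image_pixels)
    return "day" if brightness > 0.5 else "night"

def generate(prompt: str) -> str:
    """A generative model: outputs of the same kind as its inputs/training
    data -- here text in, text out, a crude stand-in for an LLM."""
    # Hypothetical stand-in behaviour:
    return "Reply: " + prompt
```

The point of the sketch is only the shape of the mappings; everything of interest in contemporary systems lies in how those functions are learned rather than written.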