At the beginning of 2023 the Bulletin of the Atomic Scientists set its doomsday clock at 90 seconds to midnight, the closest it has ever been. The war in Ukraine has led to fearful discussions of the possibility of nuclear war. Seeing himself as a great leader with a historical mission, Vladimir Putin announced plans in March 2023 to station Russian tactical nuclear warheads in Belarus; a month later the US reportedly installed radiation sensors to detect Russia’s potential use of nuclear weapons on Ukrainian battlefields.Footnote 1 In October 2023 Russia claimed to have tested a new, nuclear-powered missile.Footnote 2 China’s evolving nuclear doctrine, moving away from minimum deterrence and coupled with high tensions over Taiwan, has prompted a refashioning of US nuclear strategy to focus on the growing size and diversity of China’s nuclear forces.Footnote 3 At a cost estimated to run around $1.5 trillion, the United States is planning to modernize every part of its nuclear arsenal in the coming decade.Footnote 4 It is therefore not surprising that anxiety is coursing through broad segments of the American public. Released in the summer of 2023, the blockbuster movie Oppenheimer drew one of the largest audiences in years. Reflecting and amplifying public concern, in the spring of 2024 the New York Times started an ambitious series of articles dealing with the threat of nuclear war.Footnote 5 Questions about the effects that nuclear deterrence had on American national security and democratic institutions during the Cold War have not slowed the move to enhance the readiness to fight a nuclear war.Footnote 6 Nor did a lengthy report on Proud Prophet, an ultra-secret 1983 war game in which the Pentagon’s top brass participated and which illustrated the inescapable pressure of military escalation.Footnote 7 Although on nuclear issues nobody knows what the future holds, the risk-uncertainty conundrum is in clear sight.
After the US had dropped the first two nuclear bombs to destroy Hiroshima and Nagasaki in 1945, weapon modernization in the 1950s created an arsenal that included hydrogen bombs with yields measured in megatons, a thousand times larger than the destructive power of the atomic weapons used at the end of World War II. The American arsenal of nuclear weapons grew from about 7,000 in 1958 to about 20,000 in 1960, with a total yield exceeding 20,000 megatons – three times larger than what American nuclear war games in the 1950s had imagined possible. In 1955 the Soviet Union had about 200 nuclear weapons; the stockpile grew about five-fold by 1960. The estimate of worldwide casualties in a nuclear war was in the hundreds of millions. By the mid-1950s, French and British experts expected that ten to fifteen thermonuclear weapons would make normal life in their countries completely impossible. Although Secretary of Defense Robert McNamara estimated in 1964 that it would take only about 400 nuclear weapons to kill about 25 percent of the Soviet population and destroy more than two-thirds of Soviet industrial capacity, in the era of “overkill” the number of US warheads grew further, to 31,000 in 1967. In the 1980s the Soviet and the US militaries commanded about 70,000 nuclear warheads. After the end of the Cold War the number of warheads declined sharply. By 2020 nine nuclear weapon states controlled about 13,400 nuclear weapons, with the US and Russia accounting for about 12,000, or 90 percent, of the total.Footnote 8
For eighty years nuclear weapons have not been used in any international conflict – a fact that would have astonished experts in the late 1940s. Then the expectation was that human civilization might not survive much longer. What does “much longer” mean in different time scales? In geological time, eighty years is a blink. Over tens of millions of years it would indeed be very surprising if there were no nuclear war. But humans live, experience, and calculate in human time. And on that scale the fears of the late 1940s have not materialized. Nate Silver assumes, not unreasonably, that the Bayesian probability of large-scale nuclear war may be around 0.35 percent per year. That sounds small. But if such a war killed 1 billion people, that would mean, for the next 200 years, about 5 million extra deaths per year and the expectation of at least one nuclear war. Admittedly, the context of such calculations is changing. Historical memories of Hiroshima and Cuba are fading and with them the nuclear taboo.Footnote 9 This will shift incalculable odds in a situation marked by “grave uncertainty.”Footnote 10 Furthermore, as this chapter shows, there has been a frighteningly large number of close calls all over the world. In the war of aggression that Russia is waging in Ukraine, for now Russia and the US are protected by their nuclear shields, argues Ulrich Kühn, a German arms control scholar. “And the Ukrainians are fucked.”Footnote 11
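The back-of-the-envelope reasoning attributed to Silver can be made explicit. The sketch below takes the chapter’s stated inputs at face value (a 0.35 percent annual probability and 1 billion deaths; Silver’s own inputs and rounding may differ, which could account for the “about 5 million” figure in the text):

```python
# Back-of-the-envelope expected-value arithmetic for nuclear-war risk.
# Inputs are taken from the text's summary of Nate Silver's assumption;
# they are illustrative, not necessarily Silver's exact published numbers.
p_war_per_year = 0.0035          # 0.35 percent annual probability
deaths_if_war = 1_000_000_000    # 1 billion deaths if a war occurs
horizon_years = 200

# Expected "extra" deaths per year hidden in a small annual probability.
expected_deaths_per_year = p_war_per_year * deaths_if_war

# Expected number of wars over the horizon, and the chance of at least
# one such war, treating years as independent trials.
expected_wars = p_war_per_year * horizon_years
p_at_least_one = 1 - (1 - p_war_per_year) ** horizon_years

print(f"{expected_deaths_per_year:,.0f} expected deaths per year")
print(f"{expected_wars:.2f} expected wars over {horizon_years} years")
print(f"{p_at_least_one:.0%} chance of at least one war")
```

On these exact inputs the expectation works out to roughly 3.5 million deaths per year and about a 50 percent chance of at least one war in 200 years. The qualitative point stands either way: a tiny annual probability compounds into a very large expected toll on human time scales.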
This is an opportune time to recall the stand-off between the United States and the Soviet Union more than sixty years ago. One of the main pieces of advice that the political leaders of that time left for those who came after them was simple: stay away from a major crisis which might spiral out of control. Without the direct experience of the threat of a global catastrophe, that earlier advice has largely been forgotten. The October 1962 crisis, which left the world on the precipice of global nuclear war, has been studied more intensely than any other. Since it ended peacefully, the story was originally told as a splendid victory of cool heads and shrewd maneuvers as the American President and his advisors prevailed in a potentially deadly contest. In the words of former Secretary of State Dean Rusk, the US came eyeball-to-eyeball with the Soviet Union, and the Soviet Union blinked. This chapter will show how untrue this claim is. James Blight and Janet Lang do not mince words. They call it bullshit.Footnote 12
Despite the revolutionary impact quantum mechanics and the deployment of nuclear weapons have had on world politics, students of world politics have ignored post-Newtonianism and what it can tell us about the risk-uncertainty conundrum. Wedded instead to Newtonian humanism and the concept of risk, they are blocking out theories and models that highlight the importance of uncertainty.Footnote 13 This chapter starts with conventional small world, risk-inflected analyses of the October crisis. It ends with a discussion of theoretical approaches informed by a post-Newtonian worldview that explicitly incorporates large world uncertainty. The manipulation of risk and uncertainty by decision makers who are more or less rational and are experiencing more or less fear offers a first cut of the crisis (section 1). A second cut enriches the individual-level analysis by attending to organizational malfunctioning as a potential cause of inadvertent nuclear war. In this analysis political agency is not restricted to elite decision makers in the White House and the Kremlin. Instead, it is widely dispersed across many layers of the American and Soviet militaries. Base commanders and submarine captains, for example, were crucial in stopping potentially catastrophic escalations brought about by organizational or technological malfunction (section 2). Although this second cut accords uncertainty a greater role, it remains securely anchored in a humanist Newtonian worldview. The analyst imposes rationality and irrationality on a reality that is “out there,” separate from the observer. Such dualities operate like two hooks, Bruno Latour once joked in a different context, for suspending a hammock that invites us to snooze.Footnote 14 The point is illustrated vividly by a 2014 symposium on nuclear politics which brought together political scientists and historians.
The symposium exhibited plenty of disagreements on methodological issues, but nobody questioned that analysts were looking at the world from a distance, with a God’s eye perspective. The symposium’s editor, Scott Sagan, refers briefly to “very innovative” work on nuclear issues.Footnote 15 But neither he nor any of the other participants showed any interest in moving beyond the familiar small world to engage with work in science, technology, and society (STS) that offers a third cut (section 3). Exemplifying large world thinking, it does away with dualities such as rational and irrational, politics and technology, risk and uncertainty. It integrates human agency, organizational functioning and malfunctioning, and politics across all levels. And embedding the observer fully in a world that does not exist “out there,” it acknowledges the importance of the risk-uncertainty conundrum. Language matters greatly in the politics of the crisis, its meaning for different actors, and its effect on shaping the complementarity of risk and uncertainty (section 4).Footnote 16 Furthermore, the analysis of nuclear politics has shaped profoundly a widely accepted rational model of war (section 5). And the conclusion illustrates that many close calls have made Machiavelli’s fortuna a major force in the evolution of a crazy nuclear politics and the persistence of the risk-uncertainty conundrum (section 6).
1. October Crisis.1: Brinkmanship, Emotions, and Fear in a Small World
Against the backdrop of a continuing Berlin crisis and the 1961 US-sponsored, failed Bay of Pigs invasion, President Kennedy’s October 22, 1962 televised address to the American people initiated the public phase of a crisis that had started with the clandestine installation of Soviet medium- and intermediate-range nuclear missiles in Cuba. The President insisted that the Soviet Union had lied about its policy for many months, and that the US would not tolerate the deployment. A “quarantine” was the first step, to be followed, if unsuccessful, by military means including an invasion of Cuba to remove the missiles. A Soviet attack on any target in the United States or Latin America, Kennedy announced, would be met with a full retaliatory response on the homeland of the Soviet Union. In the following five days the crisis grew as Soviet ships approached the quarantine line and then changed course, as construction of the missile bases in Cuba was rushed toward completion, as an American reconnaissance plane was downed, and as more than 100,000 American combat troops were assembled in south Florida for a possible invasion. On the morning of October 28 the crisis ended unexpectedly when Chairman Khrushchev announced that he accepted Kennedy’s offer: the missiles would be withdrawn as long as the US pledged not to invade Cuba and, privately, committed to dismantling its missiles in Turkey in the near future.
For two generations scholars have thought, with Herman Kahn, about the unthinkable, drawing on their implicitly held worldviews. Graham Allison’s foundational study relied on three models – rational actor (model 1), organizational process (model 2), and governmental politics (model 3) – to highlight different aspects of the Cuban missile crisis that escaped the analytical power of any one of them.Footnote 17 All of these models, however, operate firmly in a Newtonian and humanist small world filled with discrete objects and persons, displayed on a stage that exists external to both actors and observers. In the rational actor model, ultimate decisions rest with President John F. Kennedy of the United States and the First Secretary of the Communist Party of the Soviet Union, Nikita Khrushchev. The degree of their rationality and the accuracy of the information on which they made their choices have been subject to intense study. In the organizational process and governmental politics models, agency is dispersed and human control over events is not assured. The rational actor and governmental politics models place human actors and their relations at the center; the organizational process model emphasizes organizational routines and relations; the governmental politics model highlights bureaucratic politics.
Conventional theorizing about nuclear strategy has adopted and refined the rational actor model. John Mearsheimer and Sebastian Rosato, for example, adopt a liberal standard of democratic deliberation to make their case for the rational actor model. For them the meetings of President Kennedy’s advisors were “models of free-wheeling discussions.” They view the President as an active participant in the deliberations who eventually decides “to break the deadlock and make the ultimate decision,” closing the deal with Khrushchev.Footnote 18 Their discussion makes the American management of the crisis look like an exemplar of rationality in action.Footnote 19
It is not clear, however, how to square that interpretation with the fact that Kennedy compartmentalized decision making by not only participating in open discussion but also operating behind the scenes without the knowledge of his advisors; that ten days of around-the-clock deliberations never yielded an agreement between hawks and doves on which the President could have acted; and that Mearsheimer and Rosato cite a raft of scholarship pointing to many aspects of irrationality without engaging or rebutting any of it.Footnote 20 Furthermore, the American 17:1 superiority in nuclear weapons and the greater vulnerability of Soviet forces should have deterred Khrushchev from placing Soviet missiles on Cuban soil in the first place. And it should have made Kennedy take larger risks than he did to remove the missiles with a pre-emptive strike. And if the stress was as great as was reported by several participants, then war might have happened because irrational actors, incapacitated by stress, poor judgments, and fear, should have lashed out with a first strike. That did not happen. Both rational and irrational actor models were proved wrong. The October crisis ended peacefully.Footnote 21 In the experience of the most active participants, at its peak the gap between probability and possibility evaporated, creating what Presidential Advisor Richard Neustadt inaccurately called a “new dimension of risk.”Footnote 22 The intrusion of profound uncertainty into calculable risk pushed Kennedy and Khrushchev to the same conclusion. Rather than fight each other they chose to fight a new, common enemy, the dread of inadvertent nuclear war.Footnote 23
Concerned with the possibility of surprise attack, starting in the late 1950s future Nobel Prize winning economist Thomas Schelling had made signal contributions to public policy by adapting macroeconomics and game theory to nuclear strategy. Stability was an overriding concern.Footnote 24 But so was purposeful human action. Schelling placed both at the center of his theorizing. In 1962 his theory convinced him that Russia would suffer a momentous defeat in Cuba.Footnote 25 As the crisis broke, Schelling felt exuberance over Khrushchev’s coming humiliation. He simply could not understand the tremendous anxiety gripping the President and his inner circle.Footnote 26 Schelling’s optimism was based on his belief that nuclear risk could be manipulated following the rational logic of deterrence theory. Since the costs of war are far greater than any possible gain, for Schelling nuclear war was volitional and therefore avoidable.
His formulation of nuclear strategy as “the threat that leaves something to chance” is based on an understanding of the concept of “chance” that deploys some and excludes other potentialities inherent in uncertainty.Footnote 27 Schelling’s Strategy of Conflict refers to “risk” 102 times; to “chance,” which for him is mostly coterminous with “risk,” 75 times; to “uncertainty” 10 times; and to “luck” only once.Footnote 28 Schelling was a brilliant, nuanced thinker who incorporated the role of uncertainty into coercive nuclear bargaining and war.Footnote 29 Perhaps inadvertently, he equivocated about the meaning of the concept of “chance.” And perhaps unwittingly, many of Schelling’s readers mistook the ambiguous meaning of “chance” to mean the manipulation of “risk” rather than of “uncertainty.”Footnote 30 Schelling himself acknowledges the existence of uncertainty that “must come from somewhere outside of the threatener’s control. Whether we call it ‘chance,’ accident … or just processes that we do not entirely understand, it is an ingredient in the situation that neither we nor the party we threaten can entirely control.”Footnote 31 Even though his work has been misinterpreted by many as being focused only on risk, Schelling was fully cognizant of the importance of uncertainty.
Schelling’s theory transforms nuclear weapons into means of control. He does not permit the theoretical possibility of their accidental use to impose limits on a risk-based deterrence model. Instead, he deploys the possibility of inadvertent nuclear war neither party wants as a coercive instrument of brinkmanship used to prevail in a conflict.Footnote 32 In a briefing paper Schelling wrote for President Kennedy in the summer of 1961, after Khrushchev had issued an ultimatum on Berlin, his advice was “for a ‘selective and threatening use’ of nuclear weapons to manipulate,” in the words of Benjamin Wilson, “in the most vivid way imaginable, the risk of a total cataclysm, and to use that risk to coerce the Soviets into backing down.”Footnote 33 Schelling’s advice implied a nuclear demonstration explosion as one possible means for defending Berlin and defeating the Soviet Union. In establishing an actor’s full control over “the ultimate weapon” and showing the limits of that control in inadvertent war, the controlled and the uncontrollable fuse into one and squeeze out of deterrence theory all other unacknowledged contingencies and unfathomable unknowns.
In Schelling’s theory probabilistic and possibilistic thinking are entangled and co-evolving. His work was not only derived from macroeconomics and game theory operating in the small world of risk. It was also an “imaginative conceptual exercise” dealing with the problem of uncertainty.Footnote 34 A generation of scholars who have developed the bargaining model of war may have overlooked this important aspect of Schelling’s theory.Footnote 35 Occupied by the problem of credibility, Schelling argued that to be credible required ceding control. Uncertainty is created or enhanced by decision makers who voluntarily tie their own hands. Acknowledging an inescapable tension in his thought, Schelling writes, “in the final analysis we are dealing with imagination as much as logic … poets may do better than logicians at this game … Logic helps … but usually not until imagination has selected some clue to work on.”Footnote 36
Largely oblivious to differences in context, to the variability of the processes that unfold as decision makers climb the ladder of nuclear escalation, and to the importance of language, Schelling holds that parties moving closer to the brink of mutual annihilation are always compelled to become increasingly careful. In his theory accidents do not cause nuclear war; decisions do.Footnote 37 Schelling’s psychology is based on economics and supposedly universally valid cost-benefit calculations. He argues that we do not “fear” the onrush of traffic passing us by as we stand on the curb. For we know better than to step into the flow of traffic. For him this analogy vindicates the rational actor model. Nassim Taleb begs to differ. For him many models of rationality can blindfold us at street crossings.Footnote 38 Stepping off the curb and walking straight and at a steady pace into a sea of bicycles moving like a school of fish is rational in Ho Chi Minh City. In Boston it would be irrational and land you in the hospital. This is not to deny that Schelling made insightful comments about the psychology of nuclear crisis politics. For example, he mentions the process of mutually escalating anxiety sapping an adversary’s confidence in the ability to keep the crisis from exploding into war, and the transformation of adversary into collaborator as the disappearance of non-catastrophic policy options makes the general environment rather than the adversary the subject of fear.Footnote 39 Furthermore, “the stress of responsibility,” Schelling told the participants at a conference convened after the crisis had passed, “may simply make a very great deal of difference.”Footnote 40 Indeed.
But his unshakeable commitment to an economic psychology makes Schelling uninterested in the existence of “nervousness.” He was an exemplar of Amadae’s “prisoner of reason.”Footnote 41 The very term “crisis,” with its connotation of shortening decision times and growing anxieties and confusions, had no special meaning for Schelling; he would have had no interest in chains of contingencies and the appearance of interlinked polycrises. The concept of crisis was merely a linguistic convention to which he agreed with some reluctance.Footnote 42
For Schelling decisions are made in the domain of rationality not contingency. And that domain subordinates, for the most part, uncertainty to risk. Film director Stanley Kubrick visited Schelling shortly after the Cuban missile crisis. He wanted to make a movie about the start of a nuclear war that experts on nuclear strategy and arms control would find plausible. Schelling told Kubrick that there was no plausible path toward nuclear war he could think of. Eliding the difference between risk and uncertainty, which he understood so well, he relied on the concept of chance, bridging small and large world thinking. This misled many of his followers to think that deterrence was only about the manipulation of risk in a small world. In contrast, Kubrick’s artistic vision and post-Newtonian sensibilities inhabit also the large world of uncertainty. Convinced that he had entered a scary, real-life version of the theater of the absurd, Kubrick created in Dr. Strangelove a dark satire drenched in the bizarre assumptions of members of the “nuclear priesthood” who were thinking about the unthinkable. For Schelling the movie was an absurdist comedy. No one, he thought, could conceivably imagine it as a realistic tragedy.Footnote 43
Reid Pauly and Rose McDermott, among others, insist that the study of humans in nuclear crisis politics must incorporate chance and choice and make room for human emotions.Footnote 44 In the early stages of the missile crisis, for example, the two leaders reacted to each other’s actions with raw emotion. President Kennedy agreed with the assessment of McNamara and other advisors that the missiles did not change the military balance of power between the two superpowers. However, acutely aware of the far-reaching domestic political implications of the Soviet move, Kennedy reacted to the news of the deployment of Soviet missiles in Cuba with rage, not cool rationality. “He can’t do that to me” was his initial response; subsequently he called Khrushchev a “fucking liar” and “immoral gangster.”Footnote 45 For his part, when learning that the US navy was preparing a blockade as its first countermeasure after the missiles were discovered, a sleep-deprived Khrushchev, in front of Romanian visitors, “flew into a rage, yelling, cursing and issuing an avalanche of contradictory orders.”Footnote 46 Stripping emotions out of nuclear crisis politics is like painting an exploding volcano without red lava.Footnote 47
For Pauly and McDermott, chance, understood as an uncertain event or cause that cannot be controlled, and choice, understood as the exercise of control operating in the domain of risk, are overlapping parts of the risk-uncertainty conundrum that offer different sorts of leverage in bargaining situations.Footnote 48 But in their analysis they distinguish only among different kinds of risk – autonomous, intrinsic, and extrinsic – and fail to acknowledge the existence of uncertainty. For them “the causal component in chance derives from the probabilistic nature of the universe.”Footnote 49 Although they acknowledge that risk and uncertainty are interlaced, in the end, in their Newtonian worldview the universe is probabilistic and, in principle, all-knowable.Footnote 50
This is in sharp contrast to post-Newtonian uncertainty operating in large worlds, exemplified in the October crisis by the fear of contingencies that could result in nuclear war.Footnote 51 Although he was thinking in an entirely different register from Heisenberg’s, the philosopher Kierkegaard hits a similar note, one that fits squarely into a humanist post-Newtonian worldview.Footnote 52 For Kierkegaard life is inevitably lived forward, and it is always understood backward. But for James Blight there exists a vast difference between “a rational reconstruction, derived by looking backward at a selected, distorted, artificially coherent set of mental snapshots of the past, on the one hand, and, on the other, the uncertain phenomenology of living the event forward without the slightest idea of how it will turn out.”Footnote 53 In the disorienting, modernizing Europe of the first half of the nineteenth century, Kierkegaard experienced existential unease. For him present and future are incommensurable. Anxiety points human beings to the future where they will become who they are, but in a profoundly indeterminate way.Footnote 54
Blight’s analysis of emotions leans heavily on Kierkegaard for his central insight: the human dread of the unknowable and the importance of uncertainty in nuclear crisis politics.Footnote 55 Blight argues that “the manipulation of risk, of the probability of nuclear war, would in the darkest hours of the missile crisis be transformed into the tremendous fear of the possibility of nuclear war.”Footnote 56 In October 1962 life in the White House and the Kremlin was lived forward. Participants were ignorant and fearful of the outcome of the crisis. This is the starting point for Blight’s psychological analysis. He attempts to recapture “from somewhere” as closely as possible the lived experience of the participants in the crisis. The conventional alternative which he rejects is to enter “from nowhere,” taking a Newtonian, God’s eye perspective as a dispassionate observer looking at the world from a distance. To look at past events from the standpoint of the present overlooks the importance of explanations focusing on human intentions.Footnote 57 Living what will become history has very little to do with what has become history. Psychologists and historians of the October 1962 crisis, Blight writes, have acted not as caretakers of living history but as undertakers of the “refrigerated, intellectualized corpse that this event has become.”Footnote 58
What to Blight is a “commonsensical” approach is open to criticism from conventional psychological approaches. That criticism is reminiscent of economists who substitute statistical post-dictions for the predictions their methods often fail to provide. Blight’s experiential-phenomenological “forward-looking” psychology is less replicable than empirical-conventional “backward-looking” psychology.Footnote 59 Often it is simply disregarded.Footnote 60 In different contexts both approaches may have merit.Footnote 61 Because they appeal to different universes of facts found in different contexts, shaped by different processes, and deploying language differently, they may be complementary. In ascertaining facts, intentional explanations of actions must come first; deductive explanations typical of the physical sciences, imitated by conventional psychological approaches, come second.Footnote 62 To help in the first step, French psychologists have developed new elicitation protocols to minimize the problem of inaccurate self-reporting and false memory.Footnote 63 As an alternative to direct survey responses, elicitation interviews probe multiple layers of memory and consciousness. Retroactive awareness can activate passive memory and the constant, involuntary memorization of lived experience that escapes our notice. Accessing that awareness requires bypassing explanations and abstractions. Elicitation guides the subject to increasingly detailed descriptions of how past choice processes unfolded. This method offers a remedy for our normal blindness to lived experience and avoids the trap of providing post-hoc reasons for past choices. Decisions can thus be realistically conceptualized as inherently indeterminate without denying that, located in a different perspective and context, they can also acquire determinative shape.
Because this approach had not yet been developed, Blight could not rely on elicitation protocols. To transport himself and the reader back in time, he relies on secret tape recordings of the committee advising President Kennedy in October 1962, written records of other meetings, archival documents as well as interviews, and a large secondary literature.Footnote 64 His disagreement with mainstream psychology is due to his reliance on a non-technical, humanist phenomenology reminiscent of Dilthey, who criticized empirical-rationalist psychological approaches.Footnote 65 For Dilthey the humanist, life in its totality exceeds the reach of objective scientific analysis. Humans do not think, let alone live, by theoretical reason alone. With some justification Dilthey has been called the Newton of the humanities, a giant who developed the interpretive approach to social life.Footnote 66 In a vision reminiscent of quantum mechanics, Dilthey embraces life and human experience in all of its richness as part of the ongoing reconfiguration of a world that is always becoming and always filled with unexplored potentialities. Life encompasses both basic experience (Erfahrung) implicating perception and intellect and lived experience (Erlebnis) involving emotion and volition.Footnote 67 Life is a material biological process that includes the totality of all subjective processes.Footnote 68 In short, for Dilthey life-created history occurred in the large world, a spontaneous and self-generating creativity that was not subject “to the mechanical laws of nature” operating in the small one.Footnote 69
Dilthey thus offers a foundation for Blight’s empirical analysis of “McNamara’s fear” of being responsible for a global catastrophe.Footnote 70 Circumspect in his analysis, Blight concludes from his different sources that McNamara, Kennedy, and other advisors experienced being sucked, against their will, into an escalating conflict they might not be able to stop.Footnote 71 Begun for narrow military reasons, a conventional war in Cuba might inadvertently escalate to a global nuclear war nobody wanted. McNamara’s fear was that “he and his colleagues will be unable to discover an acceptable, peaceful solution to the predicament” – the risk-uncertainty conundrum generating unwanted nuclear war.Footnote 72 His phrase “damned dangerous” is “as close as one can come in a group discussion of this sort to naming the calamity they are all trying desperately to avoid: a major nuclear war.”Footnote 73 Without renouncing his commitment to data and efficiency, reflecting on his role in both the Kennedy and Johnson administrations toward the end of his career, McNamara stressed empathizing with one’s enemies by sharing in their worldview. He believed that President Kennedy and his team had succeeded in doing this in Cuba, while conceding that President Johnson and his team had failed to do so in Vietnam. At the end of his life, McNamara saw no contradiction between a hard head and a soft heart.Footnote 74
McNamara attended a conference in Havana in 1992, on the thirtieth anniversary of the crisis, when his fear had receded and perhaps been forgotten. But it was still there. He was so stunned by what he learned at the conference that he had to grab the table to steady himself. Instead of the 10,000 Soviet troops that he and the American military had believed in 1962 were stationed in Cuba, there were 40,000.Footnote 75 All of the Pentagon’s war plans had been based on a vast underestimation of Soviet conventional military strength in Cuba. McNamara also learned that American intelligence had missed the fact that the Soviet Union had already deployed tactical nuclear weapons in Cuba. For several days, at the height of the crisis, these weapons were controlled not by the political and military leadership in Moscow but by the local Soviet commander in Cuba. An American invasion force most likely would have been met by devastating nuclear strikes from short-range LUNA missiles. The depth of “misinformation, miscalculation, and misjudgment” stunned McNamara.Footnote 76 He told the conference participants that the crisis was far graver than even he had feared. Had a US invasion of Cuba been carried out, “there was,” in words expressing McNamara’s penchant for numerical precision where none exists, “a 99 percent probability that nuclear war would have been initiated.”Footnote 77
In his forthright assessment in 1992 McNamara no longer had access to the fear that had gripped him in 1962. Why? The rapid and totally unexpected end of the crisis only magnified the confusion and the surprising twists and turns the crisis had taken in October 1962. Humans resist the notion that dumb luck and happenstance can be pivotal in their lives, and that it was fear of the unknown that decided how things turned out. In their reconstruction of the past humans are strongly motivated to repress their confusions, terror, and inability to predict and control events. We all prefer "fearless victories" while forgetting the "terror of the unknown." Having had the experience, in looking back we often miss its meaning.Footnote 78 We "explain away" elements such as "McNamara's fear" as simple anomalies that no longer fit into conventional causal stories.Footnote 79 Yet even decades after the crisis had passed, McNamara's behavior retained unmistakable traces of the depth of his emotional experience. Queried in a TV interview about the October crisis, he had tears in his eyes when he spoke.Footnote 80 Over time, McNamara's fears had changed to tears, perhaps tokens of a suppressed memory of an existential fear.Footnote 81 Confronted on another occasion by an interlocutor who insisted that the October crisis was nothing but a "game of chicken" in which Khrushchev had "chickened out," "McNamara stared at his interlocutor, eyes wide, unspeaking for a long moment until he exploded: 'It was not a game and there were no chickens.' That's not the way it was."Footnote 82 Games are maps that distort the territory.Footnote 83 They wash away the emotions with which humans meet unknowable unknowns. And they conceal the urge to repress the memory of those moments. "Whatever insights may be derived from game analogies," Blight concludes, "are gained only by annihilating the psychological life of the crisis."Footnote 84
2. October Crisis.2: Organizational Malfunctioning
Organizational and bureaucratic politics models of the Cuban missile crisis reach further than the cognitive and emotional states of central decision makers. They express both Newtonian-humanist and post-Newtonian worldviews and describe a politics that goes well beyond the White House and the Kremlin. Presidential action is deeply embedded in relations that often cut against the rational actor model informing deterrence theory.
Scott Sagan’s careful archival research, for example, has helped shake the conviction that human rationality kept the Cuban missile crisis from spinning out of control.Footnote 85 Sagan applies two theories of nuclear policy making – high reliability theory and normal accident theory. High reliability theory expresses confidence in human rationality and the government’s ability to exercise full control over a country’s nuclear arsenal. Normal accident theory postulates that accidents in tightly coupled, complex systems are inevitable.Footnote 86 Sagan finds strong evidence supporting both theories but settles on normal accident theory for one reason: the active opposition of organizations to learning from their own mistakes.Footnote 87
Sagan’s research uncovered an unnerving number of serious lapses in security protocols and near-misses that could have started an accidental nuclear attack. He makes a compelling argument that the focus on autonomous individuals who are in full control of a state’s nuclear forces is little more than a soothing chimera. The relationship between human operators and tightly coupled, highly interactive, complex technical systems should lead us to expect that “the unexpected will occur, that unimaginable interactions will develop, that accidents will happen.”Footnote 88 Hawks and doves alike argued that nuclear weapons had an overwhelmingly stabilizing effect, inhibiting rash action by either the Soviet Union or the United States. Believing ardently in the stability-enhancing and war-avoiding logic of nuclear deterrence, both shared a basic optimism: deliberate nuclear war was extremely unlikely. Sagan shows that they overlooked the specter of accidental war caused by organizational malfunctioning.Footnote 89
His central message is contained in the chilling episode with which he opens his book.Footnote 90 On October 25, 1962, an air force sentry patrolling the perimeter of a base near Duluth, Minnesota saw someone climbing the fence of the base where nuclear-armed bombers and interceptor aircraft were parked on runways, ready for immediate take-off. He shot at the intruder and sounded an alarm, and on airfields throughout the region guards rushed out to prevent Soviet agents from sabotaging US nuclear forces. At a base in Wisconsin the wrong alarm bell rang, signaling that nuclear war had started. Pilots ran to their aircraft, started the engines, and headed down the runway. An officer drove onto the runway to stop the planes from taking off. The suspected Soviet saboteur turned out to have been a bear – not a Russian bear but an American bear. Two other near-accidents occurred in the next three days. On October 26–27, an American U-2 high-altitude plane was flying on a mission to collect radioactive samples from prior Soviet nuclear tests. At a relatively low altitude, the American plane strayed off course over Siberia. Soviet fighter jets were about to intercept the spy plane but held their fire at the last moment as the U-2 corrected its course. And on October 28, at the height of the crisis, radar operators in Moorestown, New Jersey informed the national command post that a missile had been launched from Cuba and was about to detonate in the vicinity of Tampa, Florida.Footnote 91 Later it was discovered that a software test tape, simulating a missile launch from Cuba, had been inserted into an operator’s radar screen at the very moment that a satellite appeared over the horizon, causing operator confusion. Although radar operators were supposed to get advance information about satellites passing overhead, the key facility supplying that information had been taken off that mission to help provide advance warning of a looming Soviet missile test launch.
Sagan’s list of near-misses is long.Footnote 92 Safety rule disputes, bombers straying off safe routes while on mission during the crisis, interceptor aircraft launched with nuclear-armed air-to-air missiles under their wings, erroneous emergency missile alert operations, undetected problems about the risks posed by false warnings, serious security lapses at various bases and airfields, and inconsistencies in the decentralized management of nuclear alert activities in Europe – all lead Sagan to conclude that “President Kennedy may well have been prudent. He did not, however, have unchallenged final control over U.S. nuclear weapons.”Footnote 93 In those October days of 1962, World War III was an accident waiting to happen.
Going beyond the October crisis, the still incomplete historical record of near-misses is chilling.Footnote 94 For example, soon after the January 1961 inauguration of President John F. Kennedy a routine B-52 bomber flight with two nuclear bombs on board went into an uncontrolled spin and broke up over Goldsboro, North Carolina. Both bombs were released by a control device in the cockpit. One of the two bombs began the detonation process. Three safety mechanisms failed. Two hundred sixty times as powerful as those the US air force had released over Hiroshima and Nagasaki, the bomb was prevented from detonating only by the fourth arming mechanism, a low-voltage switch – partial confirmation, perhaps, of high reliability theory.Footnote 95 But perhaps not. The same switch failed more than twenty times in the following twenty years.Footnote 96 Similarly, in January 1968 a B-52 bomber on an airborne alert mission was flying above Thule, Greenland at 33,000 feet. Soon after the co-pilot turned the cabin heater dial to its maximum setting to alleviate the arctic cold, a fire broke out in the plane’s lower cabin. Seven minutes later the aircraft lost all electric power. The pilot ordered immediate evacuation and six members of the crew successfully ejected. A seventh member was killed. The pilotless aircraft passed over the Thule airbase and, with its four thermonuclear bombs, crashed into the ice 7 miles away at a speed in excess of 500 miles per hour. Immediately, the 225,000 pounds of jet fuel exploded. Although the conventional high explosives in all four bombs went off, the nuclear components of the bombs did not. Furthermore, for almost twenty years the launch code for the land-based Minuteman missile strike force was an identical, preset code – 00000000. This contravened a special 1962 Presidential order to create a more secure system.Footnote 97 These episodes, and others like them, confirm central claims of normal accident theory.Footnote 98
Near-misses that might have caused the outbreak of a nuclear war also happened on the Soviet side.Footnote 99 One close call occurred in the morning hours of September 26, 1983. The Soviet early warning system detected an incoming missile attack from the United States. According to computer readouts several US missiles appeared to have been launched. The protocol for the eventuality of an unprovoked attack was simple: the Soviet Union would retaliate with a nuclear attack of its own. Stanislav Petrov was the officer on duty that morning at the secret Serpukhov-15 command center outside Moscow. His assignment was straightforward: to report any apparent enemy missile attack on the Soviet Union. The computer system told him that the reliability level of the alerts that flashed across his screen was at its highest as the first, second, third, fourth, and finally fifth launch was reported. He was a specialist on the computer programs that the Soviet military used and had encountered this particular glitch before. What made him suspicious was how strong and clear the alerts were. All of them cleared all but one of twenty-nine security levels. Petrov’s suspicion was raised further by the fact that a support team of satellite radar operators told him that they had no data verifying a missile launch. It should be noted that Petrov was the only officer on his team who had received a civilian education and thus was less prone than his colleagues to obey orders under all circumstances. Dismissing the computer warning as a false alarm, Petrov decided to do nothing.
As Petrov recalled, it was a gut decision, at best a “50–50” guess, based on his distrust of the reliability of the Soviet Union’s early warning system, the report of the radar team, and the relatively small number of missiles that had purportedly been launched.Footnote 100 Petrov’s neural channels conveying gut reactions, neuroscience tells us, were biochemically “naked,” permitting body structures and nervous systems to talk directly to each other.Footnote 101 As matters turned out, the Soviet computer system had mistaken the sun’s reflection off clouds for missiles.Footnote 102
By failing to report the apparent launch up the chain of command as instructed, Petrov committed a serious dereliction of duty. His decision shone a light on the problems of the Soviet Union’s early warning system, which embarrassed his superiors. He received an official reprimand – not for his action but for a mistake in the logbook. The real reason, however, may have been that he had not told his superiors, thus robbing them of the opportunity to take credit for preventing a disastrous response. Relentlessly interrogated, he was denied promotions, reassigned, and took early retirement.Footnote 103 His action remained largely unknown. He was never rewarded for his decision by the Soviet Union or Russia. After the collapse of the Soviet Union he was honored at the United Nations and received the Dresden Peace Prize.Footnote 104 Gut instinct, this time that of an American Lieutenant General, Leonard Perroots, also played an important role in helping defuse a potentially dangerous escalation in the same year, during the 1983 NATO Able Archer exercises.Footnote 105 Other episodes occurred after the collapse of the Soviet Union. For example, on January 25, 1995 Russian early warning radars suggested an incoming American first nuclear strike. President Yeltsin was alerted and given the nuclear briefcase (cheget) with instructions about launching a nuclear strike at the US. Yeltsin either did not have enough time to open the cheget or decided against a counterstrike that would have been prompted by the malfunctioning of a supposedly fool-proof system.Footnote 106
In several episodes timely human action prevented accidental war.Footnote 107 For example, at the height of the Cuban crisis, on October 27th, a naval officer named Vasili Arkhipov was serving on a Soviet nuclear submarine near Cuba. After hunting it for several days, US naval forces began dropping practice depth charges, and eventually the submarine had to surface for lack of battery power and oxygen. US navy planes dropped flares, covered the sub with their searchlights bow to stern, and fired tracer bullets. The roar of the cannonade drowned out voice communications on the bridge of the surfaced submarine. One of the commanders thought the ship was under attack. He ordered an emergency dive and the readying of the sub’s nuclear torpedo for use. Luckily, Arkhipov stayed on the bridge a little longer and was able to decode an apologetic signal that the commander of one of the American ships sent after the captain had left the bridge. The nuclear torpedo was not fired.Footnote 108
3. October Crisis.3: Post-Newtonianism and Nuclear Politics
Continuing discussions about the importance of organizational malfunctioning prompted Scott Sagan to convene in 2014 an excellent symposium on the state of nuclear security studies. It offers us an opportunity to probe implicitly held worldviews about nuclear crisis politics and war.Footnote 109 In his introductory essay, Sagan focuses on two “renaissances” in nuclear security studies, one among political scientists, the other among historians.Footnote 110 A new generation of political scientists has probed old and new topics with statistical methods applied to new quantitative datasets. Historians have mined new archives, sometimes in new, collaborative ways. Since these two renaissances occurred side-by-side and in relative isolation from each other, Sagan calls for more collaboration across disciplinary divides between history and political science and between qualitative and quantitative approaches. Despite the evident diversity of their contributions and their spirited disagreements on issues of method and approach, most contributors echo Sagan’s call.
Sagan’s hope was not a pious one. Arguably the most important book written on the Cuban missile crisis is the second edition of Essence of Decision, co-authored by Graham Allison, the original author and a political scientist, and Philip Zelikow, a historian.Footnote 111 The two disagree strongly in their philosophy of science and worldviews.Footnote 112 Allison has a Newtonian understanding of the scientific enterprise and subscribes to the conventional understanding of explanation: objective facts in specific situations are covered by general laws. Zelikow is a humanist who does not believe that this scientific approach carries over to the domain of history. History, not replicable experiments with objective phenomena, provides the empirical material for human choices that aggregate to government action. Because of their differences, the book is an outstanding example of humanist Newtonianism. United in a common, if incoherent, worldview, Allison and Zelikow were able to collaborate productively, providing the reader with a book that integrates much of the voluminous archival evidence that has become available since the publication of the book’s first edition in 1971.
Humanist Newtonianism also characterizes the 2014 symposium, notwithstanding strong methodological disagreements.Footnote 113 The quantitative and experimental methods that the three political scientists – Fuhrmann, Kroenig, and Sechser – embrace are viewed with great skepticism by the historian Gavin. But none questions that humans – individual decision makers or groups of individuals – are the unit of analysis. And all share in a Newtonian worldview. The world is real, understood not as infinite post-Newtonian potentialities but as a Newtonian object, existing out there, knowable and governed by discoverable laws. Perceiving the “real world” correctly is hard. Hence misperception of what is real is a source of human error and confusion. The real world can be observed at a distance by those who study it. It is, furthermore, composed of entities with attributes, like the possession of nuclear weapons, that act like a force.Footnote 114 These entities are conceptualized in terms of “dependent” and “independent” variables.Footnote 115 And despite its many complications, this Newtonian world is closed and determinate, unlike the complexity of the post-Newtonian quantum world, which is open and indeterminate. “Pre-dictions” in this time-reversible, Newtonian world are really “post-dictions.”Footnote 116 In contrast to determinist laws in the natural world, the social world is governed by probabilistic laws, with specified confidence intervals and robustness checks. These probabilities seek to establish patterns and regularities in a world that investigators can access unproblematically.
This contrasts with quantum probabilities, which measure the confidence we have in an inherently uncertain world.Footnote 117 And in line with the mechanical, Newtonian worldview of the late nineteenth century, the step from correlation to causation requires the identification of mechanisms.Footnote 118 It is noteworthy that in this symposium the spirited exchange between historian and political scientists focuses solely on methodological issues. It leaves unaddressed how we know what we know and what basic units make up the world. Epistemological and ontological questions are neglected. And why not? Historians and political scientists who are debating nuclear issues in this symposium inhabit the same humanist Newtonian world.
There is a throw-away sentence in Sagan’s introductory essay to the symposium that neither he nor any of the participants follows up on. Sagan mentions that approaches in science, technology, and society (STS) studies have produced “very innovative work on missile accuracy, the Indian nuclear power and weapons program, the fire and blast effects of nuclear weapons use, and the links between nuclear technology and development in Africa.”Footnote 119 “Innovative” refers here to a post-Newtonian worldview that, unlike Newtonian humanism, acknowledges large world uncertainty.
Post-Newtonianism is deeply relational. Nuclear assemblages of technical objects and human actors are linked in complex processes that often misfire, as normal accident theory holds. Human choice at those moments is consequential. The historical record confirms that entanglements among widely dispersed agents and technical objects undercut the optimistic view that nuclear deterrence is a guarantor of a “long nuclear peace.”Footnote 120 Post-Newtonian humanism does not deny that deterrence has helped stabilize great power relations and steered major states away from war, including in 1962 and 1983. But human control and risk management have been shot through with unforeseeable events that on many occasions threatened to plunge rational decision makers and tightly coupled, complex systems into inadvertent war. The familiar story about the origins of the system of nuclear deterrence charts a natural and unstoppable march of weapons technologies driven by competition between states operating in an anarchic international system. This anthropomorphizes states as unitary, rational actors, as in Allison’s Model I. But states are not unitary actors. Disaggregating them yields organizations and bureaucracies, Allison’s Models II and III. The black box of technological innovation is an interlacing of processes operating at different scales across different sites. Policies are outcomes of these processes.Footnote 121 The American and Soviet missile guidance systems, for example, were not determined by technology and top-down politics. They were instead products of entangled processes that resulted in conflicts and collaborations among scientists and engineers, weapons laboratories and corporations, military and political leaders.Footnote 122
Looking closely undermines the notion that an all-encompassing structure of technology and politics produces outcomes. It is not Big Determinism that shapes the world but entangling networks, variegated social practices, and human choices. The human element is more important in such accounts than the Newtonian one. Weapon modernization is not a natural outcome of technological change in a world of objects. Technological innovation results from decades of work by large numbers of people tied together in different networks. Adopting inertial navigation systems, for example, required undermining established scientific knowledge and eliminating a technological rival and its supporters, who were interested in civil and military air navigation and favored reliability, producibility, and economy over extreme accuracy. The creation of missile guidance technology depended on a self-fulfilling prophecy: predicted changes in accuracy would become real only if established organizations were assured of secure, long-term funding. Absent that prophecy, alternative technological paths might well have been taken.Footnote 123 Interested in counterforce strategy for its own organizational and strategic reasons, the air force was receptive to the prophecy; the navy was not. The persuasiveness of a make-believe world rooted in human imagination mattered greatly.Footnote 124 Inventions in weapons technology were not made because they were needed but because individual imagination and political and technological entanglements made them possible. The search for their application came subsequently.Footnote 125 In short, technology and politics are so deeply entangled that they cannot be separated out in simple cause and effect sequences. They provide, to invoke an ugly phrase, a “co-constitutive context” that is heterogeneous and varies greatly across different sites.
The further one disaggregates them, the harder it gets to distinguish one from the other, as processes turn out to be highly variable.
India’s nuclear program offers a second example illustrating the same point. This story can be told in different ways: as the creation of an object, the bomb, produced by a developing country responding to international and domestic pressures; as a story of heroic individuals, scientists and engineers, doing amazing work; and as a story of entangling relations between scientific research centers and Indian political leaders.Footnote 126 The evolution of modern science in India, Robert Anderson argues, put the atomic nucleus at the center of the Indian nation. There was no blueprint the first two generations of scientists could follow. They had to act as brokers between their support of modern science and a diffuse skepticism rooted in ancient Indian beliefs. The 1990s provided two examples: the “scientific temper” movement around the BJP and the “India Shining” celebration of a possible synthesis between science and spirituality in the “Hindu consciousness movement.” Throughout the twentieth century, but especially after India won its independence from Britain in 1947, scientists were seizing a new kind of power while developing persuasive visions for different political constituencies. India’s nuclear scientists succeeded in transcending political polarities and creating an entangled environment for their work “in which their realization that ‘you are either this, or you are that’ was too simplistic and ‘you are both this and that’ was sometimes confusing.”Footnote 127
For more than half a century US nuclear war planning used nuclear blast effects as the sole criterion for assessing the damage caused by nuclear weapons. The damage done by mass fires was neglected because it was not thought to be clearly predictable.Footnote 128 However, mass fire effects extend two to five times further than blast effects, with similarly horrible consequences. Why did this systematic failure in damage assessment persist for so long? The conventional wisdom holds that it is due to nature, the inherent unpredictability of mass fire effects. Lynn Eden argues instead that it is due to different processes and entanglements. Knowledge-laden routines respond to everyday social life, specifically expert understandings and predictions acquired at great cost and over long periods of time. Routines offer solutions to problems that an organization has decided to solve. Blast effects could be measured accurately; fire effects could not. Blast effects thus were a problem; mass fires were not. The tendency to treat as problems only those things that can be measured accurately, while neglecting those that cannot, holds in the study of international relations and the social sciences more generally. It helps explain why small world thinking all too often eclipses large world thinking.
Individual and organizational knowledge yield representations of the world that actors take to be reliable information. Such representations can be explicit or tacit. Like language, they are not shaped directly by the natural world. Rather, they are socially constructed. This does not mean that we must reject the existence of a reality outside of the social world. But knowledge of that reality is always molded by its broader historical, social, and discursive context.Footnote 129 Lynn Eden writes that “the choice made to solve problems of blast damage prediction but not fire damage prediction seemed to be based not on prior choices but grounded in nature itself. This became, in turn, a self-fulfilling prophecy.”Footnote 130 Innovation did occur in the 1980s, constrained by past lessons and existing routines. Fire damage prediction models became more formal and acquired greater predictive accuracy – without, however, influencing nuclear targeting. Organization-made disasters, such as the Titanic, Mad Cow disease, and the collapse of the World Trade Center, are neither fully determined nor fully contingent.Footnote 131 Self-reinforcing processes and bounded change co-occur, thus avoiding overly deterministic or overly contingent accounts.Footnote 132 Such outcomes are explicable as the result of relations that left the American military largely blind to the prospect of incinerating the world in a possible nuclear war.
“Nuclearity” is therefore not an essential property of things; it is a property that is distributed among things. Because uranium was not born nuclear, it took a great deal of work to make African uranium mines nuclear. For almost half a century, in this complex set of entanglements African uranium miners were kept invisible and at risk.Footnote 133 The mining of uranium in different parts of Africa and different workplace and public health regulations in the United States, Europe, and Africa illustrate this point. Geological nuclearity can be established through Geiger counters; but that does not translate into a medical nuclearity that workers in Africa could convert into political or economic claims. Radiation is a physical phenomenon that exists independently of how it is detected or politicized. Nuclearity thus emerges from highly variable political, cultural configurations of technical and scientific objects.
Such insights and arguments receive no commentary from Sagan or any of the other symposium participants. The reason for this collective silence arguably lies in the fact that, by their very nature and in contrast to international relations and other social sciences, STS studies are more fully conversant with post-Newtonianism as one of the main intellectual developments in various fields of science and technology during the twentieth century. Hence, scholars of STS are not as strongly attached to the late nineteenth-century Newtonian worldview that defines many strands of political science and some other social sciences at the outset of the twenty-first century. Developing elements of post-Newtonianism and grafting them onto STS studies is innovative. The collective silence these insights encounter illustrates how much could potentially be learned about the nuclear world if scholars of international relations broadened their accustomed worldview.
International relations scholar and political theorist Jairus Grove’s interpretation of the October crisis expresses an even deeper kind of post-Newtonianism. His relationalism extends all the way down. For investigators following Max Weber, humans are a natural unit. For others they are a mere fiction. Antonio Damasio, for example, argues that neural networks constitute the “I” and make emotions an integral part of the machinery of reason with which we seek to chart a course of action into an uncertain future.Footnote 134 Stressing the entanglement of different elements within the individual undercuts Max Weber’s methodological and substantive individualism.Footnote 135 Relations do not stop at the level of the human individual. Going well beyond the methodological individualism characteristic of Newtonian humanism, Grove writes: “We are not constituted by relations, we are relations.”Footnote 136
This view holds for objects just as much as it does for individuals. Far from breaking with traditional practices of war, automated lethal warfare is the result of the scientific control of killing humans on the battlefield. Weapons are not static material objects that simply emanate from purposive human intentions. Nuclear weapons, for example, must rely on elaborate and problematic theories of deterrence to control and harness their effects.Footnote 137 Biological weapons are lethal in ways that are often unknown or unknowable and simply do not fit the logic of state warfare. Objects become weapons through relational changes. Weaponizing objects has long social tails. “The improvised explosive device” that was so important in the Afghan and Iraqi wars is a “wholly non-standardized design cobbled together from a medley of repurposed objects.”Footnote 138 Weapons do not passively channel human volition; they generate practices as they shape desires for violence. As such they offer an entry point “into a veritable thicket of actors, relations, events, knowledge and incipient transformations” that reveals “the profound entanglement of modern scientific and technological practice with our contemporary ways of war.”Footnote 139
The scale at which this relational view applies depends on the cut the investigator takes into the world. Following Grove, we could focus on particular parts of the brain, matching the medical metaphors that many political scientists extol as a highly desirable but often unobtainable way of establishing causation in the social sciences.Footnote 140 Whatever its advantage for establishing causation, choosing a neural cut, as Damasio does, raises thorny issues about brain, mind, and consciousness that may have to rely on notions of causation more complex than Hume’s efficient cause. All scientific inquiry leaves the investigator’s cut into the world to her question and her interest. The cut is not preordained by the world she seeks to explore and navigate, whether conceived as Weber’s holistic individuals or Damasio’s decomposable neural networks. The fact that we think of nuclear decision making and crisis politics as the result of human action depends on how we think and feel about the “I.” Grove’s deeply relational sensibility adds a new element to the renaissance that Sagan speaks of.Footnote 141
In his analysis of the October 1962 crisis Grove shifts attention away from the President as the most important decision maker and highlights instead entangling processes. His approach focuses on how people and objects relate across geographical and temporal scales and heterogeneous assemblages that are not caught by conventional models.Footnote 142 Yet he also leaves room for human agency exercising control over the deployment of nuclear weapons that is neither fully autonomous nor fully submerged in relational processes. Humans matter like everything else. But they are not exceptional in their mattering. Human beliefs and desires do not in and of themselves translate automatically into efficacy, mastery, and control. The decision to launch or not to launch nuclear weapons is a result of the relations among thousands of people involved in intelligence gathering and interpretation, security briefings and consultations, weapons maintenance and manning, and a huge assemblage of material objects that make up the US nuclear striking force on land, on sea, and in the air. There is no unequivocal starting point for any launch decision. There is only a tangle of relations and apparatuses. Presidential power at the very top as an embodiment of sovereignty is part of a much larger nuclear assemblage. Grove’s ecological approach to the October crisis does not replace sovereignty with assemblage but conceptualizes sovereignty in terms of assemblages. He thus offers an account of the fundamental entanglement of things and humans in the nuclear assemblage of a sovereign state. As section 1 above has shown, a focus on human rationality and choice avoids thinking about the relations and assemblages of objects that are necessary for the full articulation of human interests and desires.Footnote 143 Humans are not the sole determinant of thinking and acting.
They are part of national assemblages of humans and objects that are brought together in preparation for initiating unimaginable, organized violence.Footnote 144 Because assemblages do not follow the simple logics of sovereign states, deterrence does not rule out nuclear war. In Grove’s unsparing words “the geopolitical project of planet Earth is a violent pursuit of a form of life at the cost of others – full stop. … [T]he violence of geopolitics is an ecological principle of world making that renders some forms of life principle [sic] and other forms of life useful or inconsequential … Geopolitics, enacted through global war, is itself a form of life.”Footnote 145
For Grove investigation is a form of story-telling about entanglements of widely distributed formations of actors and objects. Those formations are not well captured by the conceptual language of dependent and independent variables, which he regards as arbitrary shortcuts into reality, inadequate for capturing the effects of entanglement. They do not have any inherent meaning; rather, the attribution of causal relations is an effect of the investigation. The observation of action is part of the relations that make the world, just like the action itself. Observation does not stand at a distance, apart from the world. For our answers to questions depend on the scale at which questions are posed and answers sought.Footnote 146 Grove’s entire line of argument is informed by a post-Newtonian worldview and large world thinking.
In his account of the Cuban missile crisis, preparation for nuclear war is an ongoing set of relational processes without any specific ethics.Footnote 147 The assemblage of nuclear war fighting matters and so does the office and person of the President. In nuclear politics assemblages of objects and networks of organizations and people are the sites for unfolding entanglements among different actors. The reduction of complex processes to unitary actors is not necessarily wrong. Even when they are misleading or false, such simplifications can yield useful insights while concealing complexity and causality operating in other sites and at other scales. The President’s unquestioned, ultimate authority appears in full sight only when methodological individualism primes an observer to look for an already constituted single decider, as shown in sections 1 and 2 above. However, as a sovereign decision maker the President is not self-constituted. To be relevant, effective, and equipped with social resonance and political legitimacy, his actions depend on relations with other individuals and objects. “The issue was not whether Kennedy and Khrushchev wanted to control events,” writes journalist Michael Dobbs; “it was whether they could.”Footnote 148 Unlike several of his advisors, Kennedy believed that he was operating in the large world of uncertainty. This belief made him cautious. When he could, the President played for time, as he tried to keep open rather than close down the list of agonizing choices he faced.Footnote 149 Because of his limited knowledge and control, the President called fewer plays than most people think. In a large world of uncertainty, Grove writes, Kennedy operated “more like a mascot than a quarterback.”Footnote 150 His playbook favored an incremental, iterative, and experimental approach. He searched for useful rather than optimal solutions to the grave problems he faced. 
Janice Stein calls this a pragmatic strategy of “learning by doing.”Footnote 151
4. Jargon Anesthetizing Awful Possibilities: Language and the October Crisis
Post-Newtonianism shows that facts on the ground are rarely straightforward. For example, the bewildering concatenation of events experienced by the makers and the victims of the events in France in 1789 subsequently became an object with its own name, “the French revolution.” The accumulation of human experiences of those chaotic months was “shaped by millions of printed words into a ‘concept’ on the printed page, and, in due course, into a model. Why ‘it’ broke out, what ‘it’ aimed for, why ‘it’ succeeded or failed, became subjects for endless polemics … but of its ‘it-ness,’ as it were, no one ever after had much doubt.”Footnote 152 The reverse is true for the Cuban missile crisis. Until the mid-1980s, James Blight and Janet Lang argue, the Cuban missile crisis was thought of as “simple, short, clean, salutary.” On closer study they found it instead to be “complex, immense, horribly violent, ongoing.”Footnote 153 Compression and simplification are how Newtonianism deals with complexity. Lack of attention to language is one illustration. To give but two examples: In the development of weapons of war, justification in terms of “modernization” is not a neutral description but an exercise of discursive power.Footnote 154 A second example is the devices exploded during India’s 1974 and 1998 nuclear tests. They were not military weapons. Called bombs by everybody, these devices became bombs.Footnote 155 In small worlds, language “represents,” passively mirroring the real world. In large worlds, language “re-presents,” actively shaping the real world.
As was true in financial affairs discussed in Chapter 4, the choice of language in which to tell a story is politically consequential. The Cuba crisis was not primarily about signals sent or received, as in rational deterrence models. Debates about language were fierce. Take, for instance, the ten letters Kennedy exchanged with Khrushchev between October 22 and October 27. The participants knew what has escaped many deterrence theorists: An exchange of texts can create weak bonds of social trust through a process of positive mutual identification that can help alleviate or resolve a crisis.Footnote 156 Beyond the letters, many aspects of the missile crisis involved genuine creativity in communication and decision making.Footnote 157 In fact, language cut to the core of the crisis. American warnings issued to the Soviet Union against the deployment of offensive missiles were made for domestic political reasons. According to the President’s speechwriter and advisor Ted Sorensen, if Kennedy had known about the deployment of forty offensive missiles, he might well have put the red line for the US at 100 missiles.Footnote 158 Neither number changed the strategic balance of forces one iota. The declaration of an interest that Kennedy apparently did not believe in created the interest deemed so important that it was worth risking nuclear war.Footnote 159
The language in which political actors tell their stories about the world can be profoundly revealing. For example, the crisis of October 1962 which put the world closest to the outbreak of a global nuclear war went by different names.Footnote 160 Americans call it the “Cuban missile crisis,” Russians the “Caribbean crisis,” and Cubans the “October crisis.” The “crisis” terminology not only represents facts on the ground. It also creates entirely different worlds of understanding. Geographically and temporally confined, Americans and Russians refer to thirteen days in October 1962 when a conflict over the Soviet decision to station nuclear-tipped missiles in Cuba threatened to spiral out of control. Referring to a thin slice of history, cut surgically from its broader historical context, that crisis was resolved peacefully, to the great satisfaction of most Americans, Russians, and many others. When it comes to the events of October 1962, Americans and Russians search the past for applicable lessons for the future.Footnote 161
Not so the Cubans. This was illustrated dramatically in a series of conferences convened in the late 1980s in which former officials and scholars tried to understand what had happened.Footnote 162 For the Cuban participants the “October crisis” starts in the nineteenth century; it was ratcheted up by the 1959 Cuban revolution and the 1962 October crisis; and it has continued ever since. The “crisis” is merely a small instance of the United States seeking regime change in Havana. What participants in these conferences learned is that for Cubans the October crisis must be measured not in days but decades; that it was not resolved peacefully but led to thousands of casualties once the more or less covert actions of the US and Cuban governments, both before and after October 1962, are included; and that the events of October 1962 cannot be understood unless they are put in the context of the implacable hostility between the two countries after the success of the 1959 revolution. The empirical record of October 1962 is clear – the “crisis” does not exist as an uncontestable object of study. In contrast to Americans and Russians, Cubans insist on traveling “forward into the past,” to understand better which lessons can and which cannot be learned from the narrow focus on crisis management.Footnote 163
Going beyond the “crisis” terminology, October 1962 also illustrates the importance of language in the discursive construction of the national interest. That concept shapes how policy makers understand the goals they wish to pursue with their foreign policy. And it serves as a rhetorical device generating political legitimacy and support for state action. Policy makers interpret the situation they face through a language they share among themselves and with the publics they seek to influence. In short, the national interest norm is produced through language.Footnote 164 It was not the Soviet missiles as physical objects that created the nuclear crisis in October 1962. It was the meaning attached to them. The Soviet missiles “had to be made to mean something before it was possible to know what to do about them.”Footnote 165 The conventional US representation of the crisis tells the story of October 1962 as one of sudden, secret Soviet aggression that upset the status quo.Footnote 166 Soviet officials lied about what they were planning to do numerous times in various fora. They invaded America’s sphere of influence. And they threatened the American heartland with the installation of offensive, nuclear-tipped missiles in Cuba. Soviet conduct thus posed a serious test for American credibility and resolve. But the crisis “does not simply reflect ‘the facts’.”Footnote 167 Rendered in different language, other stories, told from the vantage points of Moscow and Havana, also fit “the facts” on the ground.
One is a defensive story. Moscow moved simply to deter the US from waging war against an ally of the Soviet Union.Footnote 168 Defeating the Cuban revolution had become the declared policy of an encompassing bipartisan coalition in American politics. The early months of the Kennedy administration witnessed a crescendo of hostile actions, ending in the failed invasion at the Bay of Pigs in April 1961. Putting Soviet missiles on Cuban territory was a deterrent, warning the US not to escalate from clandestine actions to full-scale invasion. It was the US, not the Soviet Union, that was the aggressor and threatened to bring about war in the Caribbean. The US had neither right nor reason to seek the removal of the Soviet missiles.Footnote 169
A second story focuses on the strategic imbalance in US and Soviet nuclear forces.Footnote 170 Kennedy had won the Presidential election of 1960 charging that the Republican administration had been asleep at the wheel and failed to counter the growing “missile gap” between the Soviet Union and the United States. Once in power, the new Democratic administration realized that rather than being behind, the US had an enormous 17:1 superiority in nuclear weapons. At the time, defense intellectuals estimated that with a ratio of 20:1 it would be feasible for the US to initiate a first strike with the hope of winning a nuclear war with the Soviet Union. All American decision makers agreed in October 1962 that the addition of forty Soviet nuclear warheads stationed in Cuba had no effect on American superiority. But it might have helped recalibrate the psychological imbalance at least a little and thus enhance the security of both countries.Footnote 171
Going beyond October 1962, language and discourse matter greatly for how defense experts discuss nuclear war. Their jargon conceals awful possibilities. Together with the general public they are, in the words of philosopher Günther Anders, “lazy people of the apocalypse.”Footnote 172 Even though, as historians Matthew Grant and Benjamin Ziemann write, “‘the bomb’ itself became the central metaphor” of the Cold War, defense experts were unable to visualize what they were actually producing and failed to exercise their imagination and sense of morality.Footnote 173 The secrecy, complexity, and special competence required for their jobs render defense experts unequipped to shed light on the concrete manifestations of a possible nuclear war. As agents of the system of deterrence their role is to convince different audiences of the adequacy, stability, robustness, and effectiveness of nuclear deterrence. They believe in perfect control of a passive technology, as anthropologist Hugh Gusterson found out in his interviews at the Lawrence Livermore National Laboratory.Footnote 174 Deeply knowledgeable about the disaster of nuclear war and the possibility of civilizational collapse, most defense experts are not charged with informing the public. They, after all, hold the firm belief that nuclear war will never happen.Footnote 175
Defense experts rely on what Carol Cohn calls a “technostrategic” way of talking.Footnote 176 Their language captures the rationale of deterrence, nuclear arms races and rivalries, and attempts to stop the unthinkable from happening. This language masks the consequences should the unthinkable happen. It does not “represent” the reality of nuclear war. In fact, it conceals it. The potential destruction of nuclear war is so grotesquely high as to defy all forms of rational thought or speech. The professional discourse is absurdly abstract and antiseptic, with its “first” and “second” strikes, “counterforce” and “countercity” targeting, “surgical” strikes, and “limited” or “total” nuclear war. In the past, such distancing occurred in the violence-enabling dehumanization that preceded the mass murder of Jews and others in the Holocaust.Footnote 177 Similarly, the reliance on distancing through technostrategic language conceals the horrors of a possible nuclear holocaust in the future. “Clean” bombs, for example, are nuclear devices that derive their yield largely from fusion rather than fission and therefore release more of their energy as blast rather than radiation. Because they are “clean” their enormous destructive power is freed from the baggage of emotional fallout. “Collateral damage” is a linguistic way of concealing “mass murder.” Part of the US strategic force between 1986 and 2005, the MX missile carried 250–400 times the destructive potential of the bomb dropped over Hiroshima. President Reagan referred to it as the “Peacekeeper.” Defense analysts preferred to call it a “damage limitation weapon” instead.
The chasm between “technostrategic” talk and the reality it conceals is wide and deep. Language “represents” bombs as giant killing machines and “re-presents” them antiseptically and abstractly to conceal the terrifying violence they can spread. Furthermore, Cohn refers to the pervasiveness of sexualized and patriarchal language that covers the full spectrum from male domination to boyish mischief. The urge to domesticate and tame these wild weapons also infuses the language of defense analysts. Missiles lined up in their silos on a Trident submarine are “the Christmas tree farm.” Weapons are not “bombs” but “reentry vehicles.” They are not “dropped” but “delivered by a bus.” And weapon systems can “marry up” or “couple.” In this language insentient weapons are humanized while sentient human beings are ignored. “Fratricide” describes a situation in which one warhead kills another warhead. “Vulnerability” and “survivability” refer to weapon systems, not to the humans left nameless by technostrategic talk. Religious language is also part of the stock in trade of this specialized language. The first atomic bomb test in July 1945 was called “Trinity,” invoking the male forces of creation. And members of the group of defense intellectuals refer to their community as “the nuclear priesthood.” Acronyms such as ABMs and MIRVs can be fun. They neuter the destructive force of nuclear weapons. Developing language competency, Cohn reports, makes the speaker feel like she is in control of the uncontrollable and removes her from the reality of nuclear war. Learning this language shifts the content of what can be talked about “monumentally,” as well as “the perspective from which you speak.”Footnote 178 The antiseptic, cool language of the user of nuclear weapons differs radically from the emotionally charged language of those who survived the nuclear attacks on Hiroshima and Nagasaki.
Technostrategic language affords distance by abstraction, and a sense of control that extends the self rather than threatens it. And expert language makes it impossible for certain questions to be asked and certain values to be articulated. It is a language tailored not to the reality of nuclear incineration and destruction on an unimaginable scale but to a world of abstractions. Its validity is measured only by its internal logic. Language produces a world of experts and a perception of full control. Correspondence with the large world of possibilities is not at issue. Language is a tool for thinking about the unthinkable, not for describing relations on the ground. Thus, technostrategic language not only represents but also creates reality.
Language also changes the context of politics, and not only during the October 1962 crisis. These days, American administrations are struggling with how to name the rising tensions between China and the United States. Washington seeks to avoid “Cold War” terminology as outmoded. But increasing strategic clashes, technological rivalries, and military maneuvers define a competitive-cooperative relationship that defies easy classification. Choosing the wrong label might create a self-fulfilling prophecy, driving the relationship toward new lows. It remains to be seen how today’s “competitive coexistence” differs from the “peaceful coexistence” of yesteryear. “In the Biden White House,” concludes journalist David Sanger, “there is no area where words are measured more carefully than in talking about relations with Beijing.”Footnote 179 The “Bomb as God” metaphor puts the nuclear bomb beyond human control and transforms it into a metaphysical ultimate that will be with and in us, and possibly destroy us.Footnote 180 Robert Oppenheimer invoked this image when on July 16, 1945, after the first nuclear test at the Trinity site near Alamogordo, New Mexico, he paraphrased the Bhagavad Gita: “Now, I am become Death, the destroyer of worlds.” A couple of decades into the nuclear era, October 1962 was a very close call with global destruction. Whether it is measured in human or geological time, this existential threat is not disappearing.
5. The Bargaining Model of War
For more than half a century, Thomas Schelling’s seminal work of the 1950s and 1960s has had a pervasive influence on the theory and practice of nuclear deterrence as well as the study of war. Schelling’s creative conceptual moves reduced uncontrollable large world uncertainty to manageable small world risk. For Schelling accidental factors feed seamlessly into controlled, competitive risk taking. In his theory accidents are drawn from a known probability distribution. As parties draw closer to the brink in a crisis, they become increasingly careful. Accidents are uninteresting appendages of rational decisions. And decisions are constrained by the powerful logic that deterrence theory articulates: nuclear war is unwinnable and nonsensical. Schelling’s approach is normally interpreted as squeezing out of deterrence theory unfathomable unknowns, contingencies, and indeterminacies. Establishing the power of control over “the ultimate weapon” reinforces the claim that rational deterrence theory, operating in small worlds, explains large world uncertainties. Schelling’s analysis thus fuses probabilistic and possibilistic thinking. In this retelling of Schelling’s theory the “threat that leaves something to chance” is not given its proper due. It is regarded instead as an implicit form of randomization, akin to a teenager in a 1950s game of chicken deliberately and visibly losing control over the steering of his own car and leaving it to his enemy to make the adjustment that prevents a deadly crash. With the elaboration of the concept of an organizational Doomsday Machine scholarship on nuclear deterrence has taken this approach to its extreme, both logical and illogical.Footnote 181 Such a machine is illogical because it kills unnecessarily, without purpose, and at a terrifying scale in the event of a successful surprise attack – satisfying nothing but the instinct for revenge. And it is logical because that instinct is what makes deterrence work in the first place.
At the individual level, cognitive biases are overridden by what evolutionary psychologist Rose McDermott sees as evolved behavioral predispositions that enhance the stability of nuclear deterrence.Footnote 182
Informed by Schelling’s work, students of security studies have developed and tested extensively what has come to be known as the bargaining model of war. In the story it tells, Clausewitz’s fog of war is not permitted to obstruct the cloudless sky of human rationality. It offers a risk-based view of war that disregards uncertainty.Footnote 183 This is made possible by the bargaining model’s first core assumption, which it shares with the rational expectations theory of economics: the parties to a conflict subscribe to the same understanding of how the world works.Footnote 184 This is vital for the model to work. Yet it is often wildly implausible to believe that parties locked in bitter conflict share the same understanding of the world. Often fed by different worldviews, imagination and potentialities of how the world might work tend to escape the attention of the bargaining model. Irreducible and consequential deviations from the expectations created by risk-based models stop the putative convergence of views around a single model. Based on the implausible assumption of convergence, rationalist models follow an implausible chain of reasoning about actors with different preferences. If these actors decide to fight, each side will pay a cost. These costs open up a range of bargained solutions that both sides should prefer to war. Since war is always inefficient after its outbreak, for the bargaining model the puzzle of war is why the two parties fail to settle before war breaks out. The answer to the puzzle lies in the existence of imperfections in information and the incentive to misrepresent on the one hand, and the inability to credibly commit to an agreement that prevents war on the other.
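The logic of costs opening a range of bargained solutions can be made concrete with a minimal numerical sketch. The function name and the numbers are illustrative assumptions, not drawn from the literature discussed here:

```python
# Minimal sketch of the rationalist bargaining range (illustrative only).
# Two states dispute a prize normalized to 1. State A prevails in war with
# probability p; fighting costs the two sides c_a and c_b respectively.

def bargaining_range(p, c_a, c_b):
    """Interval of peaceful splits x (A's share) both sides prefer to war:
    A accepts any x >= p - c_a; B accepts any x <= p + c_b."""
    low = max(0.0, p - c_a)
    high = min(1.0, p + c_b)
    return (low, high)

# With p = 0.6 and costs of 0.1 each, every split between roughly 0.5 and
# 0.7 leaves both sides better off than fighting -- hence the model's
# puzzle of why war occurs at all.
print(bargaining_range(0.6, 0.1, 0.1))
```

On the model’s own terms, war can break out despite this non-empty range only because of private information, incentives to misrepresent, or commitment problems; the sketch shows why the range exists, not why bargaining fails.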
The model introduces a second core assumption: updating of information will select out inferior models of the world. The idea is that the occurrence of war in world politics is a game with many trials. The world is an urn from which history pulls red and white balls. Over time, with many trials, the results fit a normal, bell-shaped statistical distribution. This assumption is unwarranted. Wars are rare events with large effects, best captured statistically by scale-invariant power laws that do not look anything like a bell-shaped curve. Akin to theoretical physics, power laws model systems, not humans. With their long and fat tails, power laws capture the discontinuous jumps of rare events in history missed by the smooth, normal distribution of Gaussian statistics with their short and thin tails. Power laws count the frequencies and severities of events, not their presence or absence. Rare and severe events are, statistically speaking, not outliers but normal. One of the best studies of war contradicts the conventional wisdom of the decline of war in recent decades. Relying on power-law statistics, Bear Braumoeller concludes that “the potential of war to escalate remains nothing short of horrifying.”Footnote 185 Cirillo and Taleb confirm this assessment with their statistical analysis of war casualties during the last 2,000 years. They conclude that “all the statistical pictures obtained are at variance with the prevailing claims about ‘long peace,’ namely that violence has been declining over time … one may perhaps produce a convincing theory about better, more peaceful days ahead, but this cannot be stated on the basis of statistical analysis.”Footnote 186 Power laws have also been found in the frequency of severe terrorist attacks, inter- and intrastate conflicts, and estimates of war casualties when data are missing.Footnote 187
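The contrast between thin Gaussian tails and fat power-law tails can be illustrated with a small sketch. The distribution parameters are arbitrary assumptions chosen for illustration, not estimates from the casualty data cited above:

```python
import math

def pareto_tail(x, alpha=1.5, x_min=1.0):
    """Survival function P(X > x) of a Pareto (power-law) distribution."""
    return (x_min / x) ** alpha if x > x_min else 1.0

def normal_tail(x, mu=3.0, sigma=1.0):
    """Survival function P(X > x) of a Gaussian of comparable scale."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

# Far out in the tail the Gaussian probability collapses toward zero,
# while the power law still assigns non-negligible probability to
# extreme events -- the statistical signature of rare, severe wars.
for x in (5.0, 10.0, 100.0):
    print(x, pareto_tail(x), normal_tail(x))
```

Under the Gaussian worldview an event a hundred units out is effectively impossible; under this power law it retains a probability of one in a thousand. That is the sense in which rare and severe events are statistically “normal” rather than outliers.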
One of the original proponents of the bargaining model of war, James Fearon, conflates risk and uncertainty when he writes that “given identical information, truly rational agents should reason to the same conclusion about the probability of one uncertain outcome or another.”Footnote 188 This conflation of the concepts of risk and uncertainty has become deeply engrained in many theoretical extensions and empirical applications of the bargaining model. Important analyses of nuclear deterrence, terrorist violence, nuclear brinkmanship, ceasefires in civil conflicts, and power shifts either reduce uncertainty to risk or treat the terms as synonyms.Footnote 189 This is odd in light of the model’s focus on bargaining, which is conducted by specific actors with specific experiences. Hunches and intuitions are hard to measure and cannot, by definition, be systematized into a single model; nevertheless, they can play important roles in shaping bargaining outcomes.
World politics is not a game with many trials. And it is not an urn from which history pulls red or white balls. Players are colorblind. And there is no way of updating expectations based on the number of balls left in the urn. Misperceptions, the fog of war, and a host of other factors prevent the emergence of converging expectations based on updated information generating shared models of how the world works. There is only one play, a lot of bluffing, and different interpretations. Crises, understood as polycrises, generate chains of contingencies and uncertainty rather than risks that are known or knowable. In short, on issues of war and peace world politics simply does not offer, as the bargaining model assumes, a sufficiently large number of trials to select out inferior models. Even if all actors shared the same model of the world, which they do not, these models would fail. By making strong but implausible assumptions, the bargaining model of war focuses on the calculable directionality of decisions in small risky worlds. Unlike Schelling, it slights creative imagination and improvised coping, which generate entirely different decision logics. Carl von Clausewitz, author of one of the classic studies of war and fully attuned to the importance of chancy uncertainty in what he called “the fog of war,” disagreed with the bargaining model (which he could not have known about): he wrote that “no other human activity is so continuously or universally bound up with chance.”Footnote 190
The bargaining model holds that different conclusions about future outcomes are possible because of differences in information. It fails to take into account how differences in worldviews create enormous problems associated with Bayesian updating, discussed in Chapter 1. The probability of victory in any conflict and the cost of fighting are assumed to be calculable and subject to known or knowable probabilities by all parties to the conflict. However, disagreements are unavoidable when actors put the same information to work in different worldviews. As is true elsewhere, in world politics rationality takes the form of many situationally specific kinds of reasonableness. And standards of reasonableness differ in worldviews marked by different cosmologies, different historical memories, different conspiracy theories, different emotions, and different moral prescriptions. For example, the 1970s détente period rested on a bedrock of illusions that US and Soviet decision makers shared about each other. “The superpowers,” writes Eric Grynaviski, “were simply wrong; they did not understand each other as well as they thought.”Footnote 191 Misunderstanding in this instance secured cooperation that accurate information would have stymied. Filtered through different worldviews, shared information can be constructive. However, it is not the information but the worldview that drives actors toward peace or war. Worldviews that make room for both small world risk and large world uncertainty can capture more of the fabric of world politics and crises than bargaining models that exclude those elements and put too much stock in rationality and control.Footnote 192
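The problem with updating under divergent worldviews can be sketched in Bayesian terms. The probabilities below are hypothetical and purely illustrative: two observers share the same prior and see the same evidence, yet because their models of how the evidence is generated differ, updating drives their conclusions apart rather than together.

```python
# Sketch: identical evidence, divergent posteriors under different worldviews.

def posterior(prior, p_data_if_hostile, p_data_if_benign):
    """Posterior probability that the adversary is hostile, by Bayes' rule,
    after observing one piece of evidence."""
    num = prior * p_data_if_hostile
    return num / (num + (1.0 - prior) * p_data_if_benign)

prior = 0.5  # both observers start undecided
# Worldview A: a missile deployment is mostly an aggressive signal.
hawk = posterior(prior, p_data_if_hostile=0.8, p_data_if_benign=0.2)
# Worldview B: a missile deployment is mostly defensive deterrence.
dove = posterior(prior, p_data_if_hostile=0.3, p_data_if_benign=0.7)
print(hawk, dove)  # same prior, same evidence, opposite conclusions
```

Because it is the likelihoods, not the data, that differ, no amount of shared information forces convergence; that is the step the “identical information” assumption quietly skips.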
The problem with the bargaining model lies in the realm of theory rather than in its empirical application to different questions of security. Hedley Bull noticed long ago that the central ideas in Thomas Schelling’s work were not derived only from formal game theory operating in the world of risk; they also represented “an imaginative conceptual exercise” dealing with the problem of uncertainty.Footnote 193 In contrast to Schelling himself, scholars applying the bargaining model of war have overlooked the centrality of imagination.Footnote 194 Bypassing the technical virtuosity of formal models of war, Jonathan Mercer similarly stresses the importance of creativity. In neglecting it, political scientists risk “turning sophisticated political actors into lab rats … They have done so because predicting creativity is difficult and perhaps impossible – if one can predict creativity it cannot be very creative.”Footnote 195 Imagination and creativity highlight the central importance of different worldviews, models of how the world works, and uncertainty in world politics.
Informed by what the history of the October crisis tells us about how decision makers operate under conditions of uncertainty, Janice Stein offers an instructive analysis of the early stages of the Ukraine war.Footnote 196 When a norm-breaking Russia invaded Ukraine in February 2022 President Putin coupled the invasion with vague threats of nuclear war. President Biden and his administration faced a situation of radical uncertainty. How should they respond in a situation where Ukraine was not a NATO ally but a country of interest and concern for the US and its NATO allies? Uncertainty was enhanced by the fact that Putin did not know what to do once the war failed to end in the quick victory he had envisaged. In a lengthy war what would constitute an “existential threat” to the Russian state – the red line that the US should not get close to?Footnote 197 Most likely Biden did not know his own preferences either, after the deterrence strategy he had adopted in the run-up to the war had failed. Contrary to established, risk-based deterrence theory, actor preferences did not shape strategy. Instead, strategy instructed preferences – a reminder of the old joke: “How can I know what I think till I hear what I say?”
Stein centers her analysis of the war on uncertainty about the escalatory dynamics rather than on cost-benefit calculations.Footnote 198 For the bargaining model, the main sources of war are information asymmetries about the preferences that adversaries hold and the price they are willing to pay to get what they want. Wars come to an end when the opposing parties reveal private information. And the information that matters concerns the preferences and capabilities of others. But in Stein’s story of the Ukraine war what really mattered for Biden was to discover his own preferences.Footnote 199 This insight agrees with other psychological studies. Often preferences are not “revealed” but “constructed” in a process of decision making that is highly sensitive to its context. Until they have made their choice, decision makers often do not know what “type” they are.Footnote 200 Defying linguistic “representation” they invite linguistic “re-presentation.”
Poised in uncertain terrain and with the nuclear brink shrouded by the fog of war, both leaders learned about their preferences from the choices they made. The American response in particular illustrates a learning-by-doing approach to policy.Footnote 201 From the outset the President was strongly opposed to committing American ground troops, since doing so might set off an escalatory dynamic and perhaps World War III. In addition, in a prominent newspaper article that Putin’s advisors could not miss, he imposed boundary conditions for American policy. These conditions included commitments that NATO would not seek war with Russia; would not send troops into the conflict as long as NATO countries were not attacked; would not seek regime change in Moscow; and would not encourage or enable Ukraine to strike beyond its borders. They also included a public declaration that the use of Russian nuclear weapons would be completely unacceptable to the US, implying that it would push the US into a massive conventional war with Russia.
Biden’s policies reduced the uncertainty created by Russia’s invasion and Putin’s nuclear threats. Within those markers the Biden administration adhered to a pragmatic policy of learning by doing. It tried to fence in uncertainty with policies that assisted Ukraine and responded to Russia’s aggression and manipulation of uncertainty. America’s provision of increasingly sophisticated weapons, for example, was experimental, incremental, and inductive.Footnote 202 The US waited and watched for Russian reactions as it gradually changed the technological sophistication and range of the weapons it shipped to Ukraine. The approach seemed to control the risk of an escalating conflict with Russia and to supply Ukraine adequately with most of the weapons it was asking for, until Congress stopped the flow of funds at the end of 2023. After the resumption of weapons shipments in May 2024, the Biden administration followed the British and French policy of permitting US-provided weapons to be used against targets on Russian territory from which Ukraine was being attacked with impunity. There is no guarantee that a policy that worked in the past will work in the future.Footnote 203 Contexts change. Political relations and processes are not stationary. The past is never a sure guide to the future. Guarding against false certainty from adversaries and overconfidence in themselves, policymakers who rely on good decision-making processes and search pragmatically for practical knowledge do not guarantee success. Without these virtues, though, failure is more likely.Footnote 204
6. The Fragility of a Crazy Politics
Reduced to its simplest the October crisis illustrated the difference between two types of people – rationalist deterrence believers such as General Taylor who count warheads and think first strike, and uncertainty believers such as President Kennedy who worry over dropping one bomb. Following a common quip, the crisis gave the President three options to remove the missiles from Cuba: “bomb ’em out” (invasion), “squeeze ’em out” (blockade), and “buy ’em out” (trading the missiles in Turkey). But the crisis did not unfold in a metaphorically speaking small, Newtonian world. It took almost thirty years of intense study to get a full sense of how out of control were events in October 1962.Footnote 205 The results confirm the lessons from those harrowing days. Expressing the sentiments of many of the key decision makers meeting twenty-five years later, Ted Sorensen insisted on “the importance of avoiding any such crisis in the first place.”Footnote 206 Over sixty years later that lesson appears to have been largely forgotten. A new generation of scholars is analyzing the dynamics of abstract deterrence and spiral models unencumbered by the emotional experience of confronting the unthinkable. And a new generation of leaders has renounced stabilizing treaties, is modernizing nuclear arsenals, and is threatening the use of nuclear weapons on the battlefield. Having avoided World War III once, we are no longer, it seems, burdened by McNamara’s fears and tears, and willing to give that war another chance.
Amnesia, however, cannot eradicate the legacy of nuclear testing. The first test explosion, codenamed Trinity and conducted in New Mexico on July 16, 1945, saw the mushroom cloud rise to a height of around 7 miles, much higher than had been calculated. Radioactive contamination was dangerously high close to and downwind from the testing site. As many as half a million people were living within 150 miles of the test site, some as close as 12 miles. Within ten days of the detonation radioactive fallout had reached forty-six states, Mexico, and Canada. Subsequently another ninety-three aboveground tests were conducted in Nevada. Did America nuke itself? And did it, like all of the other nuclear states, wage undeclared, limited nuclear war on the communities near the test sites?Footnote 207 These questions seem absurd. But the US moved its largest tests to the Marshall Islands; Britain did not conduct nuclear tests in the Midlands; France did not test upwind from Paris. And the Soviet Union did not test between Leningrad and Moscow. The political invisibility and marginality of the communities affected, especially those located in the Global South, offer strong evidence that obvious overlooked answers to absurd-sounding questions contain a large measure of truth.Footnote 208 In any case, the clean-up of the original test site is now scheduled to last at least until 2043. Nuclear waste has been left behind on, around, or under a second American test site on the Marshall Islands, France’s test site in the Algerian desert, and on many test sites scattered around Russia. Between 1945 and 1998 more than 2,000 nuclear explosions worldwide permanently altered the earth’s atmosphere. In those years “a nuclear weapon was detonated, on average, every 9.6 days,” writes historian Robert Jacobs. And at the height of testing, in 1962, the year of the missile crisis, there were 177 nuclear tests, one every other day.Footnote 209 As the discussion of global warming in Chapter 6 shows, nuclear tests have connected human “power” in world history to planetary physical “force” in earth history, increasing future uncertainties for both.Footnote 210
The history since 1945 records many nuclear near-accidents. Numbers are incomplete and impossible to interpret. Only the US and the UK have reported figures, which include all sorts of minor and major malfunctioning. Schlosser’s inventory refers only to documented numbers running perhaps as high as one thousand.Footnote 211 Presumably this is overcounting minor security lapses and undercounting the real number of brushes with disaster, as seven other and less forthcoming nuclear states have experienced their own problems. The author of an influential 1990 report of close calls and near-misses, Sidney Drell, identified thirty-two nuclear weapons accidents and is quoted as believing that multiple others have remained secret.Footnote 212 The insistence that nuclear deterrence is fail-safe and the evidence of repeated failure, to date without catastrophic consequences, suggest that craziness is part of nuclear politics. It has some salutary effects. When operators of nuclear forces were warned that hostile missiles had been launched, without fail they concluded that the system had malfunctioned. They simply did not believe that nuclear war could ever happen. By behaving the way they did in such crisis moments operators were reducing the chances that it would. But craziness is unnerving. The construction of a Soviet Doomsday Machine during the Cold War and the mental instability that President Trump may have suffered after he lost the election of 2020 are two documented instances that unfolded, in secret and in public, lodged in the large world of uncertainty.
After the transformational experience of those awful days in October 1962, American decision makers believed that the Soviet Union and all other nuclear powers had come to accept the stabilizing logic of transparent nuclear deterrence. They overlooked the fact that US policy itself was destabilizing. Having agreed to numerical limits on its nuclear arsenal, in the 1970s and 1980s US policymakers were determined to gain qualitative nuclear superiority over the Soviet Union. To this end in the early 1980s the Reagan administration greatly increased the already vast US defense budget. And the President’s bellicose language reinforced the impression of implacable US hostility toward the Soviet Union.Footnote 213 Reagan was not one to mince his words, either as candidate on the campaign trail or as President in major foreign policy speeches. Terms like “evil empire” permeated his speeches and reflected his life-long anti-communist, Christian, and conservative credo. He was truly stunned when he learned, toward the end of his first term, that his incendiary language had had a deep effect on the leadership of the Soviet Union. Later, the President simply could not believe what many intelligence sources were reporting at the time and what has since been confirmed by additional archival material – that Soviet hostility and propaganda were based on genuine fear of an unprovoked first strike by the US.
In 1983 a NATO exercise named Able Archer heightened tensions with the Soviet Union. In the context of President Reagan’s “evil empire” speech in March 1983 and the downing of a Korean Air Lines Boeing 747 by a Soviet interceptor aircraft in September 1983, later that fall the Soviet Union put its nuclear forces on alert, a state of readiness reportedly higher than during any other NATO exercise.Footnote 214 Yuri Andropov, the Soviet leader at the time, worried that nuclear war would result not necessarily from a calculated pre-emptive strike but from miscalculation and inadvertence.Footnote 215 In a situation of high diplomatic tension, Able Archer contained multiple non-routine elements – radio silences, the loading of warheads, reports of “nuclear strikes” on open sources, and a count-down to “general alert” – that were similar to actual preparations for nuclear war. The fall of 1983 was perhaps the most dangerous moment since October 1962.
In 1985 the Soviet Union put a new secret defensive system into use without informing the US. After achieving nuclear parity in the 1970s, in the tense early 1980s Soviet leaders were genuinely concerned about the possibility of the Reagan administration preparing for a pre-emptive nuclear strike. Against all dictates of conventional deterrence theory and unbeknownst to the outside world, including the US leadership, the Soviet Union adopted a “Doomsday Machine” guaranteeing retaliation – launching all Soviet missiles – even in the eventuality that the entire Soviet leadership died in a US attack or was unable to make a decision when and how to retaliate.Footnote 216 The Soviet Union created a command system that under some conditions bypassed political leaders. The decision to retaliate was semi-automatic and the burden of making the decision was shifted to a few officers in a concrete bunker buried so deep underground that it would survive a nuclear surprise attack.
The program was called Perimeter. Three conditions had to be met: the Perimeter system had to be activated by the Kremlin leadership, indicating advance permission for the system to fire; contact with the Kremlin’s political and military leadership had been lost; and nuclear detonations were recorded by a nationwide system of sensors. If all three conditions were met, then the officers were expected to activate the system by launching Perimeter command rockets, which would fly for about 30 minutes over Soviet territory and order operational Soviet missiles to attack the US. In a fully automated and regimented system a handful of officers were trained only to check whether the three conditions had been met prior to Perimeter’s final activation. The last human protection against a final spasm of destruction was paper-thin. Had American decision makers, bent on a first strike, known about the system, it might have reinforced the deterrent power of the Soviet Union. The illogic of the plan, with reality imitating Kubrick’s art in Dr. Strangelove, was that they did not know. The Soviet Union did everything to conceal the system’s existence. To avoid detection by satellites, Perimeter command missiles were disguised to look like ordinary missiles, and in exercises the firing of other missiles obeying the Perimeter signal was delayed. The system strengthened deterrence by removing from the Soviet leadership the burden of launch-on-warning decisions. But it weakened deterrence by excluding all but a handful of middle-level officers from the ultimate decision. And those officers were trained to be cogs in an almost automatic dead-hand Doomsday Machine.Footnote 217
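The three-condition logic described above can be written down in a few lines. This is a deliberately stylized sketch of the decision rule as this account presents it, not a rendering of the actual system, whose engineering remains secret; the function name and its parameters are hypothetical labels for the three conditions.

```python
def perimeter_may_fire(kremlin_preauthorized: bool,
                       leadership_contact_lost: bool,
                       detonations_detected: bool) -> bool:
    """All three conditions described in the text must hold; the duty
    officers were trained to verify them, not to exercise judgment."""
    return kremlin_preauthorized and leadership_contact_lost and detonations_detected

# Peacetime, or sensor readings without prior Kremlin activation: no launch.
assert not perimeter_may_fire(False, True, True)
assert not perimeter_may_fire(True, False, True)

# Activated system, leadership silent, detonations registered:
# command rockets fly and order the remaining missiles to launch.
assert perimeter_may_fire(True, True, True)
```

The sketch makes the paper-thin character of the last human protection visible: there is no branch in which the officers weigh context, question the sensors, or consult anyone. That simplicity is what made the dead hand credible, and what made it so unnerving.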
Like machines, leaders can also be crazy. In 1971 Yehezkel Dror developed a brief scenario of how the United States might transform into a crazy state.Footnote 218 While that scenario differed from what transpired in 2020, it would be crazy to rule out the possibility of a crazy politics. That President Trump was psychologically in a fragile state after he had lost the 2020 Presidential election was no secret. What was not known was the extent of his impairment. Susan Glasser and Peter Baker detail the progressive deterioration of relations between President Trump and General Mark Milley, Trump’s last Chairman of the Joint Chiefs of Staff. President Trump and his closest political advisors were eager to take full control of the military and to create a 1933 Reichstag moment as a pretext for stopping a peaceful transfer of power after losing the 2020 election.Footnote 219 When in the fall of 2021 two journalists published Peril, public media were filled with astonished commentary.Footnote 220 Mark Milley had assured his Chinese counterpart twice that the US would not launch a nuclear surprise attack, once on October 30, 2020 and a second time on January 8, 2021, two days after the storming of the Capitol.
By January 8, 2021, Milley had become convinced that the President had suffered a serious mental decline. Believing that China might lash out at a US ruled by an unpredictable President, Milley conveyed his impression to Speaker Nancy Pelosi. He also summoned senior officers to review the procedures for launching a nuclear strike. Only the President could give the order, he told his officers. But he, Milley, also had to be involved. “Looking each in the eye, Milley asked the officers to affirm that they had understood … what he considered to be an ‘oath’.”Footnote 221 The general said,
there’s a process here, there’s a procedure. No matter what you’re told, you do the procedure. You do the process. And I’m part of that procedure. You’ve got to make sure that the right people are on the net … The strict procedures are explicitly designed to avoid inadvertent mistakes or accident or nefarious, unintentional, illegal, immoral, unethical launching of the world’s most dangerous weapons.Footnote 222
In doing so Milley was taking measures resembling those of James Schlesinger, Secretary of Defense at the stressful time when President Nixon faced impeachment.Footnote 223
The immediate trigger of Milley’s second contact with his Chinese counterpart was a conversation with Speaker Pelosi on January 8, 2021. The Speaker wanted to know what checks were in place to stop an unstable President from ordering a nuclear strike. Pelosi and Milley agreed that the President was “crazy” and Milley assured Pelosi that there were “a lot of checks in the system.”Footnote 224 Milley was not in the chain of command and, legally speaking, had no way of stopping the President from acting. “But Milley could try to persuade others not to comply. He does have informal influence as the military’s senior officer, so he could try to persuade others not to launch.”Footnote 225 In the words of the late General Colin Powell: “I was the chairman of the Joint Chiefs of Staff and I can tell you for sure that if something like this ever happened and someone suddenly said, ‘We want to use nuclear weapons,’ they would never get near it.”Footnote 226 When it became public, this revelation was shocking for several reasons. Milley, the top military leader of the US, contacted his Chinese counterpart without informing the President. Although he was not in the chain of command Milley considered it his duty to second-guess the President’s authority to use military force. Milley and Speaker Pelosi judged the President to be “crazy” without professional medical authority supporting their assessment. And Milley insisted that there were numerous checks in the system when in fact no formal checks exist. Because it totally undercuts the logic of the rational actor model of strategic deterrence, invoking “fortuitous disobedience” in moments of psychological distress is unnerving.Footnote 227 There is, writes Fred Kaplan, “no safety switch in place, no circuit breaker that someone could throw, if the human turned out to be crazy.”Footnote 228
Some experts and many opponents of Donald Trump expressed the worry even before he was elected in 2016 that he would have the sole power to order a nuclear strike. In 2017 the President traded insults with North Korea’s leader Kim Jong-un and threatened him with “fire and fury like the world has never seen” – leaving it to the world to decipher this as a threat, a bluff, or his customary blustering self-aggrandizement. Trump also discussed a pre-emptive nuclear strike with his aides. He thought he could blame some other country for the act. His Chief of Staff, John Kelly, a retired general, eventually succeeded in talking him out of it. The control of nuclear escalation requires “consistency, focus, trustworthiness and coherence … traits that seem to be absent in Trump’s make-up.”Footnote 229 The possibility of nuclear war was still very much on Donald Trump’s mind well after he had lost the 2020 election. Under oath in a seven-hour court deposition in New York in April 2023, and never shy about extolling his contribution to the world’s well-being, the former and future President ruminated that “I think you would have nuclear holocaust if I didn’t deal with North Korea.”Footnote 230
Since humans are not rational, especially when operating in high-stress situations, Trump’s return to the White House is a source of dread for more than a few scholars of nuclear war. Preliminary results of nuclear crises simulated in a laboratory setting show that people who never expected to order the launch of nuclear weapons end up doing so in experiments that they report afterwards to have been grippingly realistic.Footnote 231 Even under the best of circumstances and in the darkest of moments, such as October 1962, reliance on a single decision maker makes the system of nuclear deterrence extremely fragile and highly unpredictable. The creation of potentially world-destroying weapons by very smart humans does not mean that humans will be smart enough to manage such weapons after they have been created.Footnote 232 Because it covers about one human life span, the nuclear era, now counting eighty years, may seem long, longer than the collective memory span of much of the world. In fact, it is a very short time when measured in human time and a mere tick in geological time.
The fragile nuclear peace is threatened by the craziness of machines and men. Yet perhaps it is the interaction of machines and men that has to date helped save humankind from a nuclear Armageddon. Distributed governance of men and machines with its human and technical protocols and procedures can stabilize the fragilities of machines and men left to their own devices. Technical redundancy and institutional obesity may offer a protective buffer of sorts against unchecked impulses. Facing this historical record of the nuclear era, humans are unreasoned optimists or pessimists. Causing and preventing nuclear escalation and accidents, machines are “indifferentists.” Confounding optimists, pessimists, and indifferentists, the interaction of machines and men may have stopped the unthinkable from happening in the past. But there is no reason why the unthinkable may not happen in the future. The scariest part of the Cuban missile crisis is that nobody was crazy. At the height of the crisis not fully understanding the abyss they confronted, leaders improvised and muddled themselves back from the brink.Footnote 233 Crazy politics can happen within earshot of complementary risk and uncertainty.
Relying on the post-Newtonian concept of potentiality that is not easily aligned with their Newtonian humanism, Vipin Narang and Scott Sagan write that “the past is a poor template for the future, and the future nuclear world has the potential to be distinctly different – and more fragile – from anything we have previously confronted.”Footnote 234 Looking back, Dean Acheson, a former Secretary of State and a realist, believed that the Kennedy administration succeeded in October 1962 because of “dumb luck.” Contradicting his firm rationalist beliefs, Thomas Schelling appears to have shared that assessment, at least occasionally.Footnote 235 Based on new evidence from Soviet archives, Radchenko and Zubok conclude that October 1962 sends a chilling reminder about the risks of brinkmanship and the importance of pure chance.Footnote 236 “Luck,” they write, “played a large role.”Footnote 237 And Benoît Pelopidas writes about the “unbearable lightness of luck” in October 1962, an assessment shared by high-ranking officials at the time, like US Secretary of Defense Robert McNamara and the KGB head of Cuban Affairs. Since then, leading scholars have concurred with this assessment.Footnote 238 With the large world deeply entangled with the small, everywhere and always uncertainty and risk are tightly tethered together in the risk-uncertainty conundrum. Writing about the role of fortuna and uncertainty in politics, Machiavelli, smiling knowingly, might well mutter “I told you so.”