This thesis presents my contributions to various aspects of the theory of universally Baire sets. One of these aspects is the smallest inner model containing all reals, all of whose sets of reals are universally Baire (viz., $L(\mathbb {R})$), and its relation to its inner model $\mathsf {HOD}$. We verify here that $\mathsf {HOD}^{L(\mathbb {R})}$ enjoys a form of local definability inside $L(\mathbb {R})$, further justifying its characterization as a “core model” in $L(\mathbb {R})$. We then study a “bottom-up” construction of more complicated universally Baire sets (more generally, determined sets). This construction allows us to give an “L-like” description of the minimum model of $\mathsf {AD}_{\mathbb {R}} + \mathsf {Cof}(\Theta ) = \Theta $. A consequence of this description is that this minimum model is contained in the Chang-plus model. Our construction, together with Woodin’s work on the Chang-plus model, shows that the existence of a proper class of Woodin cardinals that are limits of Woodin cardinals implies the existence of a hod mouse with a measurable limit of Woodin cardinals whose strategy is universally Baire.
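For context, the central notion admits the following standard tree formulation (due to Feng, Magidor, and Woodin; sketched here for orientation rather than in the thesis’s own notation): a set $A \subseteq \mathbb {R}$ is universally Baire if for every poset $\mathbb {P}$ there are trees $T$ and $S$ with $A = p[T]$ and $\mathbb {R} \setminus A = p[S]$ such that
\[ p[T] = \mathbb {R} \setminus p[S] \]
continues to hold in every generic extension by $\mathbb {P}$.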
Another aspect of the theory of universally Baire sets is the generic absoluteness and maximality associated with them. We include some results concerning generic $\Sigma _1^{H(\omega _2)}$-absoluteness with universally Baire sets as predicates or parameters, as well as generic $\Pi _2^{H(\omega _2)}$-maximality with universally Baire sets as predicates. In the second case, we are led to consider the general question of when a model of an infinitary propositional formula can be added by a stationary-set-preserving poset. We characterize when this happens in terms of a game which is a variant of the Model Existence Game. We then give a sufficient condition for this in terms of generic embeddings.
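Schematically, the absoluteness statements in question have the following shape (a sketch only; the admissible forcing classes and the exact predicates vary from result to result, and $A^G$ denotes the canonical reinterpretation of the universally Baire set $A$ in the extension):
\[ \big (H(\omega _2)^V; \in , A\big ) \prec _{\Sigma _1} \big (H(\omega _2)^{V[G]}; \in , A^G\big ). \]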
This chapter explains why cognition (Erkenntnis) is its own kind of cognitive good, apart from questions of justification. I argue against reducing the work of thought experiments to their epistemological results, such as their potential to provide prima facie justification. As an apparatus for cognition, a thought experiment enacts the three core elements of Ørsted’s Kantian account: (1) it is a tool for variation; (2) it proceeds from concepts; and (3) its goal is the genuine activation or reactivation of mental processes. Cognition has two components: givenness and thought. I will show in this chapter how givenness and thought are both achieved through thought experiments.
Within the determinacy setting, ${\mathscr {P}({\omega _1})}$ is regular (in the sense of cofinality) with respect to many known cardinalities, and thus there is substantial evidence to support the conjecture that ${\mathscr {P}({\omega _1})}$ has globally regular cardinality. However, essentially nothing is known about the regularity of ${\mathscr {P}(\omega _2)}$: it is not known whether ${\mathscr {P}(\omega _2)}$ is even $2$-regular under any determinacy assumptions. The article will provide the following evidence that ${\mathscr {P}(\omega _2)}$ may be ${\omega _1}$-regular: Assume $\mathsf {AD}^+$. If $\langle A_\alpha : \alpha < {\omega _1} \rangle $ is such that ${\mathscr {P}(\omega _2)} = \bigcup _{\alpha < {\omega _1}} A_\alpha $, then there is an $\alpha < {\omega _1}$ so that $\neg (|A_\alpha | \leq |[\omega _2]^{<\omega _2}|)$.
We prove that every $\Sigma ^0_2$ Gale-Stewart game can be won via a winning strategy $\tau $ which is $\Delta _1$-definable over $L_{\delta }$, the $\delta $th stage of Gödel’s constructible universe, where $\delta = \delta _{\sigma ^1_1}$, strengthening a theorem of Solovay from the 1970s. Moreover, the bound is sharp in the sense that there is a $\Sigma ^0_2$ game with no strategy $\tau $ which is witnessed to be winning by an element of $L_{\delta }$.
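To fix notation (standard definitions, not specific to this paper): in the Gale-Stewart game with payoff set $A \subseteq \omega ^\omega $, players I and II alternately choose natural numbers, with I playing the even-indexed moves, and I wins iff the resulting sequence lies in $A$. Here the payoff is $\Sigma ^0_2$, that is, $F_\sigma $:
\[ A = \bigcup _{n \in \omega } F_n, \qquad \text {each } F_n \subseteq \omega ^\omega \text { closed.} \]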
Carnap’s (Categoricity) Problem concerns the relationship between (rules of) inference and model-theoretic values. In particular, it asks whether proof-theoretic constraints are ‘strong enough’ to uniquely determine intended semantic values. Carnap [20] demonstrated that, already in the classical bivalent setting, this is not the case for the majority of the usual logical constants. To remedy this underdetermination of ‘semantics by syntax’, a variety of solution strategies has been explored in the literature. This article is a philosophical-logical survey of these attempts, comparing them with respect to scope, motivation, and success. Besides the mathematical interest held by Carnap’s Problem, the underdetermination it uncovers has significant consequences for a variety of philosophical projects and positions, warranting a systematic study of attempts at resolving it.
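A classic illustration of the phenomenon (sketched here in modern terms): the trivial valuation $v_\top $ with $v_\top (\varphi ) = \top $ for every formula $\varphi $ respects every classically valid single-conclusion rule, since no rule application can lead from premises true under $v_\top $ to a conclusion false under $v_\top $; yet
\[ v_\top (p) = v_\top (\neg p) = \top \]
violates the intended truth table for negation, so the classical rules alone do not pin down the intended semantics.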
There is a millennia-old tradition of practical reason in the law. For the last two centuries, various determinist imaginaries have chipped away at that tradition, one of the newest being strict textualism. This chapter contrasts the interpretive methods that Cicero put forward in his early work De Inventione, dating to the early first century BCE, with those presented in Reading Law, a greatly influential 2012 book coauthored by Justice Antonin Scalia. The chapter contends that Reading Law offers a method for interpreting, or construing, legal texts that is replete with the hallmarks of practical reason, yet the rhetoric with which Reading Law characterizes its method is thoroughly deterministic. This rhetoric, the chapter argues, encourages judges to hide their reasoning behind the application of simplistic (and often incorrect) “rules” for textual interpretation. The chapter illustrates the contrast between the two approaches by discussing a Texas Court of Appeals opinion, which exhibits Ciceronian practical reason, and the Texas Supreme Court’s opinion in the same case, which exhibits Scalian determinism.
This paper explores the role of the cost channel in a behavioral New Keynesian model where households and firms have different degrees of cognitive discounting. Our findings are summarized as follows. First, we demonstrate how the degree of cognitive discounting significantly affects the determinacy condition in the presence of the cost channel. Second, a high degree of cognitive discounting attenuates the response of inflation to a monetary tightening shock, and the cost channel amplifies this effect. Third, the degree of cognitive discounting significantly impacts the effect of the cost channel on the design of optimal monetary policy.
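A minimal sketch of the kind of system involved (the notation and the cost-channel parametrization below are illustrative assumptions in the spirit of Gabaix-style cognitive discounting combined with a Ravenna-Walsh cost channel, not the paper’s exact model): with households discounting expectations by $\bar m$, firms by $\bar m^f$, and the nominal rate entering marginal cost with weight $\psi \geq 0$,
\[ x_t = \bar m \, \mathbb {E}_t x_{t+1} - \sigma (i_t - \mathbb {E}_t \pi _{t+1}), \qquad \pi _t = \beta \bar m^f \, \mathbb {E}_t \pi _{t+1} + \kappa (x_t + \psi i_t), \]
closed with a policy rule such as $i_t = \phi _\pi \pi _t$; the first result then concerns how the determinacy region in $(\phi _\pi , \psi , \bar m, \bar m^f)$ shifts with the degrees of cognitive discounting.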
Taking advantage of what has been learned about “Japanese collectivism,” this chapter theoretically examines the cultural stereotype, a simplified and distorted image of a culture. The cultural stereotype tends to create the following four basic illusions: uniformity (“The Japanese are all collectivists”), polarity (“The Japanese are collectivists, whereas the Americans are individualists”), determinacy (“Japanese culture causes Japanese collectivism”), and permanency (“Japanese collectivism is immutable”). Contrary to the illusions of uniformity and polarity, actual data typically show large individual differences within each group, and the distributions of individual differences typically have a large overlap between groups. Contrary to the illusion of determinacy, human behavior tends to be affected more strongly by situation than by culture. Contrary to the illusion of permanency, culture, as well as human mind and behavior, tends to change as a result of intellectual activity and situational change.
This chapter introduces the main concepts and the problems to be investigated by the book. In particular, the chapter defines the Largest Suslin Axiom (LSA) and the minimal model of LSA. The chapter summarizes the main theorems to be proved in the book: that HOD of the minimal model of LSA satisfies the Generalized Continuum Hypothesis, that the Mouse Set Conjecture holds in the minimal model of LSA, the consistency of LSA relative to large cardinals, and the consistency of LSA relative to strong forcing axioms such as PFA.
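For reference, the axiom in question can be stated succinctly (in its standard formulation):
\[ \mathsf {LSA}: \quad \mathsf {AD}^+ + \text {“there is a largest Suslin cardinal.”} \]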
Developing the theory up to the current state of the art, this book studies the minimal model of the Largest Suslin Axiom (LSA), which is one of the most important determinacy axioms and features prominently in Hugh Woodin's foundational framework known as Ultimate L. The authors establish the consistency of LSA relative to large cardinals and develop methods for building models of LSA from other foundational frameworks such as forcing axioms. The book significantly advances the Core Model Induction method, which is the most successful method for building canonical inner models from various hypotheses. Also featured is a proof of the Mouse Set Conjecture in the minimal model of LSA. It will be indispensable for graduate students as well as researchers in mathematics and philosophy of mathematics who are interested in set theory and, in particular, in descriptive inner model theory.
We provide new evidence about US monetary policy using a model that: (i) estimates time-varying monetary policy weights without relying on stylized theoretical assumptions; (ii) allows for endogenous breakdowns in the relationship between interest rates, inflation, and output; and (iii) generates a unique measure of monetary policy activism that accounts for economic instability. The joint incorporation of endogenous time-varying uncertainty about the monetary policy parameters and the stability of the relationship between interest rates, inflation, and output materially reduces the probability of determinate monetary policy. The average probability of determinacy over the post-1982 period through 1997 is below 60% (hence well below seminal estimates of determinacy probabilities, which are close to unity). Post-1990, the average probability of determinacy is 75%, falling to approximately 60% when we allow for typical levels of trend inflation.
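Schematically, the object being estimated is a time-varying policy rule of the form (the notation here is illustrative, not the paper’s):
\[ i_t = c_t + \phi _{\pi ,t} \, \pi _t + \phi _{x,t} \, x_t + \varepsilon _t, \]
with determinacy at each date assessed from the drawn coefficients; in the textbook case this is roughly the Taylor principle $\phi _{\pi ,t} > 1$, with the threshold shifting once trend inflation is allowed for.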
Saul Kripke famously raised two sorts of problems for responses to the meaning skeptic that appealed to how we were disposed to use our words in the past. The first related to the fact that our “dispositions extend to only finitely many cases,” while the second related to the fact that most of us have “dispositions to make mistakes.” The second of these problems has produced an enormous, and still growing, literature on the purported “normativity” of meaning, but the first has received (at least comparatively) little attention. It will be argued here, however, that (1) the fact that we can be disposed to make mistakes doesn’t present a serious problem for many disposition-based responses to the skeptic, and (2) considerations of the “finiteness” of our dispositions point, on their own, to an important way that the relation between meaning and use must be understood as “normative.”
Work by Chomsky et al. (2019) and Epstein et al. (2018) develops a third-factor principle of computational efficiency called “Determinacy”, which rules out “ambiguous” syntactic rule-applications by requiring one-to-one correspondences between the input or output of a rule and a single term in the domain of that rule. This article first adopts the concept of “Input Determinacy” articulated by Goto and Ishii (2019, 2020), who apply Determinacy specifically to the input of operations like Merge, and then proposes to extend Determinacy to the labeling procedure developed by Chomsky (2013, 2015). In particular, Input Determinacy can explain restrictions on labeling in contexts where multiple potential labels are available (labeling ambiguity), and it can also provide an explanation for Chomsky's (2013, 2015) proposal that syntactic movement of an item (“Internal Merge”) renders that item invisible to the labeling procedure.
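A schematic illustration of labeling ambiguity (a standard example in the labeling literature, sketched here): when two phrases are merged, minimal search finds two candidate heads,
\[ \alpha = \{XP, YP\} \quad \Rightarrow \quad \text {label}(\alpha ) \text { is ambiguous between } X \text { and } Y, \]
whereas in $\{H, XP\}$ the lone head $H$ labels unambiguously. On the proposal discussed here, Internal Merge of, say, $XP$ renders it invisible to minimal search, so $\alpha $ can be labeled by $Y$.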
The question with which this chapter grapples is the following: What kind of a concept is coherence, and what is its content? The chapter begins with a general introduction on concepts. Three different concept types are identified: criterial concepts, natural-kind concepts, and interpretative concepts. As coherence is clearly not a natural-kind concept, the chapter analyses coherence as a potential candidate concept of the criterial kind. It identifies three elements often associated with, and deemed necessary for, the existence of coherence in a legal setting, namely: consistency, correctness, and comprehensiveness. Incidentally, these are also key concerns regarding the existing ISDS regime as expressed by state delegations and scholars. The chapter ultimately concludes that none of the three elements is necessary for coherence to exist in non-ideal practical situations. Based on this examination, the chapter then shifts perspectives and characterises coherence as a concept of the interpretative kind. In so doing, the chapter makes a preliminary case for the existence of a dual, substantive and methodological, dimension of the interpretative concept of coherence.
Assume $\mathsf {ZF} + \mathsf {AD}$ and that all sets of reals are Suslin. Let $\Gamma $ be a pointclass which is closed under $\wedge $, $\vee $, $\forall ^{\mathbb {R}}$, and continuous substitution, and which has the scale property. Let $\kappa = \delta (\Gamma )$ be the supremum of the lengths of the prewellorderings of $\mathbb {R}$ which belong to $\Delta = \Gamma \cap \check \Gamma $. Let $\mathsf {club}$ denote the collection of club subsets of $\kappa $. Then the countable length everywhere club uniformization holds for $\kappa $: for every relation $R \subseteq {}^{<{\omega _1}}\kappa \times \mathsf {club}$ with the property that for all $\ell \in {}^{<{\omega _1}}\kappa $ and clubs $C \subseteq D \subseteq \kappa $, $R(\ell ,D)$ implies $R(\ell ,C)$, there is a uniformization function $\Lambda : \mathrm {dom}(R) \rightarrow \mathsf {club}$ with the property that for all $\ell \in \mathrm {dom}(R)$, $R(\ell ,\Lambda (\ell ))$. In particular, under these assumptions, for all $n \in \omega $, $\boldsymbol {\delta }^1_{2n + 1}$ satisfies the countable length everywhere club uniformization.
Schmidt’s game and other similar intersection games have played an important role in recent years in applications to number theory, dynamics, and Diophantine approximation theory. These games are real games, that is, games in which the players make moves from a complete separable metric space. The determinacy of these games trivially follows from the axiom of determinacy for real games, $\mathsf {AD}_{\mathbb R}$, which is a much stronger axiom than that asserting all integer games are determined, $\mathsf {AD}$. One of our main results is a general theorem which under the hypothesis $\mathsf {AD}$ implies the determinacy of intersection games which have a property allowing strategies to be simplified. In particular, we show that Schmidt’s $(\alpha ,\beta ,\rho )$ game on $\mathbb R$ is determined from $\mathsf {AD}$ alone, but on $\mathbb R^n$ for $n \geq 3$ we show that $\mathsf {AD}$ does not imply the determinacy of this game. We then give an application of simple strategies and prove that the winning player in Schmidt’s $(\alpha , \beta , \rho )$ game on $\mathbb {R}$ has a winning positional strategy, without appealing to the axiom of choice. We also prove several other results specifically related to the determinacy of Schmidt’s game. These results highlight the obstacles in obtaining the determinacy of Schmidt’s game from $\mathsf {AD}$.
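To recall the setup (a sketch of the standard rules; conventions differ slightly across sources): fix $\alpha , \beta \in (0,1)$ and a target set $S$ in a complete metric space. The players alternately choose nested closed balls
\[ B_1 \supseteq A_1 \supseteq B_2 \supseteq A_2 \supseteq \cdots , \qquad \operatorname {rad}(A_n) = \alpha \operatorname {rad}(B_n), \quad \operatorname {rad}(B_{n+1}) = \beta \operatorname {rad}(A_n), \]
and the player choosing the balls $A_n$ wins iff the unique point of $\bigcap _n B_n$ lies in $S$; in the $(\alpha , \beta , \rho )$ variant the radius of the opening ball is fixed to $\rho $.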
We consider a real-valued function f defined on the set of infinite branches X of a countably branching pruned tree T. The function f is said to be a limsup function if there is a function $u \colon T \to \mathbb {R}$ such that $f(x) = \limsup _{t \to \infty } u(x_{0},\dots ,x_{t})$ for each $x \in X$. We study a game characterization of limsup functions, as well as a novel game characterization of functions of Baire class 1.
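A simple instance, for concreteness (our example): take $T = 2^{<\omega }$, so that $X = 2^\omega $, and let $u(x_0, \dots , x_t) = x_t$ (with, say, $u(\emptyset ) = 0$). Then
\[ f(x) = \limsup _{t \to \infty } x_t = \begin {cases} 1 & \text {if } x_t = 1 \text { for infinitely many } t, \\ 0 & \text {otherwise,} \end {cases} \]
so the indicator of the tail event “infinitely many 1s” is a limsup function.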
Chapter 8 concludes the book by reviewing its arguments and conclusions, identifying weaknesses in the methods used, and proposing ways in which these might be addressed and the research developed further, for example by involving participants in other countries. Importantly, the chapter notes that uncertainty and contestation remain marginal problems in the jus ad bellum, as in most areas of international law. In most cases, the requirements of international law governing resort to military force are clear and uncontested. There are many reasons to believe that the contemporary jus ad bellum has contributed to global peace and stability. This book’s examination of legal and factual uncertainty and extra-legal intuitions seeks to support and assist lawyers and states in their mission to uphold and apply this crucial area of international law, not to encourage lawyers or states to undermine or abandon it.
We show that, assuming the Axiom of Determinacy, every non-selfdual Wadge class can be constructed by starting with those of level $\omega _1$ (that is, the ones that are closed under Borel preimages) and iteratively applying the operations of expansion and separated differences. The proof is essentially due to Louveau, and it yields at the same time a new proof of a theorem of Van Wesep (namely, that every non-selfdual Wadge class can be expressed as the result of a Hausdorff operation applied to the open sets). The exposition is self-contained, except for facts from classical descriptive set theory.
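To recall the basic notions (standard definitions): for $A, B \subseteq \omega ^\omega $,
\[ A \leq _W B \iff A = f^{-1}[B] \text { for some continuous } f \colon \omega ^\omega \to \omega ^\omega , \]
a Wadge class is any class of the form $\{A : A \leq _W B\}$, and such a class is non-selfdual when it is not closed under complements.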