Given a Fell bundle $\mathcal {B}=\{B_t\}_{t\in G}$ over a locally compact group G and a closed subgroup $H\subset G,$ we construct quotients $C^{*}_{H\uparrow \mathcal {B}}(\mathcal {B})$ and $C^{*}_{H\uparrow G}(\mathcal {B})$ of the full cross-sectional C*-algebra $C^{*}(\mathcal {B})$ analogous to Exel–Ng’s reduced algebras $C^{*}_{\mathrm {r}}(\mathcal {B})\equiv C^{*}_{\{e\}\uparrow \mathcal {B}}(\mathcal {B})$ and $C^{*}_R(\mathcal {B})\equiv C^{*}_{\{e\}\uparrow G}(\mathcal {B}).$ An absorption principle, similar to Fell’s, is used to give conditions on $\mathcal {B}$ and H (e.g., G discrete and $\mathcal {B}$ saturated, or H normal) ensuring $C^{*}_{H\uparrow \mathcal {B}}(\mathcal {B})=C^{*}_{H\uparrow G}(\mathcal {B}).$ The tools developed here enable us to show that if the normalizer of H is open in G and $\mathcal {B}_H:=\{B_t\}_{t\in H}$ is the reduction of $\mathcal {B}$ to $H,$ then $C^{*}(\mathcal {B}_H)=C^{*}_{\mathrm {r}}(\mathcal {B}_H)$ if and only if $C^{*}_{H\uparrow \mathcal {B}}(\mathcal {B})=C^{*}_{\mathrm {r}}(\mathcal {B}),$ the last identification being implied by $C^{*}(\mathcal {B})=C^{*}_{\mathrm {r}}(\mathcal {B}).$ We also prove that if G is inner amenable and $C^{*}_{\mathrm {r}}(\mathcal {B})\otimes _{\max } C^{*}_{\mathrm {r}}(G)=C^{*}_{\mathrm {r}}(\mathcal {B})\otimes C^{*}_{\mathrm {r}}(G),$ then $C^{*}(\mathcal {B})=C^{*}_{\mathrm {r}}(\mathcal {B}).$
A more expressive axiomatic theory of syntax is presented. It is shown that this theory generalises the theory of Chapter 5 and allows the derivation of many natural properties of syntax.
Most intelligence models include an array of processes and mechanisms that enable experts to generalize their knowledge during career transitions, producing the flexibility in cognitive structures that allows individuals to overcome the limits of applying expert knowledge and processes across domains and functional areas. These processes have been described variously as insightful thinking, induction, eduction, elaborating and mapping, novelty and metaphorical capacity, inductive inference, divergent production abilities, analogy, flexibility of use, and closure. They are discussed in light of retrospective interviews with twenty-four elite performers in three domains (business, sports, and music) who successfully and repeatedly transitioned to higher positions within their field.
In ‘Aristotle on the Stages of Cognitive Development’, Thomas Kjeller Johansen examines Aristotle’s contributions to our thinking about concepts from a different perspective, namely in connection to Aristotle’s psychology. He revisits Aristotle’s account of how we acquire universal concepts mainly on the basis of Metaphysics A.1, Posterior Analytics 1.31 and 2.19, and the De Anima. The chapter begins by articulating the following puzzle. On the one hand, Aristotle points out (An. Post. 1.31, 2.19) that we perceive the universal in the particular. On the other, he suggests (Metaph. A.1) that it is only when we have craft and science that we grasp the universal, while perception, memory, and experience all are concerned with the particular. Building on the widespread view that, according to Aristotle, the universal grasped in craft and science is the universal cause, Johansen argues that we should understand perception, memory, and experience teleologically, as stages in the ordering of perceptual information that allows this causal concept to emerge.
This chapter begins by differentiating qualitative and quantitative research. While some have argued that these approaches are incommensurable paradigms, this chapter argues that they are commensurable but suited to answering different research questions. It introduces a typology of research questions, with six types of question – three qualitative (describing phenomena, theoretical framing, and generating explanations) and three quantitative (measuring phenomena, testing theory, and exploring explanations). The chapter ends by reviewing heuristics to help researchers generate novel and productive research questions.
Equality and equal treatment constitute the principal purpose of WTO law. However, that purpose is pursued under varying conditions, which make it difficult to regularly attain the consistency and coherence that an egalitarian and obligatory conception of the law assumes. Consequently, this chapter demonstrates how WTO law is focused secondarily on fairness and corrective justice, and how this focus begets a subordinate emphasis in law on rights, which in turn gives rise to a contractual structure that is retrospectively oriented and reasoned inductively. It also demonstrates how various features of WTO law, such as the non-violation cause of action and implementation, are consistent with such a rights-based ethos.
Theoreticians who defend a form of realism regarding natural kinds minimally entertain the belief that the world features divisions into kinds and that the natural-kind concept is a useful tool for the philosophy of science. The objective of this paper is to challenge these assumptions. First, we challenge realism regarding natural kinds by showing that the main arguments for their existence, which rely on the epistemic success of natural kinds, are unsatisfactory. Second, we show that, whether they exist or not, natural kinds are expendable when it comes to describing and analyzing scientific explanations accurately.
Much scholarship on customary international law has examined the merits of induction, deduction, and assertion as approaches to custom identification. Save for where international tribunals identify custom by assertion, writers have viewed custom identification that does not rely on evidence of State practice and opinio juris as an example of deductive reasoning. However, writers have stated that, at best, deduction is reasoning from the general to the particular. This article draws on legal philosophy to define the contours of deductive reasoning and argues that pure deduction, namely deduction not combined with other forms of reasoning, is an unsound approach to custom identification. This argument is tested by reference to cases of custom identification by the International Court of Justice, categorised according to three types of deduction: normative, functional, and analogical. This article also explores the authority and utility of custom identification by pure deduction and its impact on content determination.
Most social policies cannot be defended without making inductive inferences. For example, consider certain arguments for racial profiling and affirmative action, respectively. They begin with statistics about crime or socioeconomic indicators. Next, there is an inductive step in which the statistic is projected from the past to the future. Finally, there is a normative step in which a policy is proposed as a response in the service of some goal—for example, to reduce crime or to correct socioeconomic imbalances. In comparison to the normative step, the inductive step of a policy defense may seem trivial. We argue that this is not so. Satisfying the demands of the inductive step is difficult, and doing so has important but underappreciated implications for the normative step. In this paper, we provide an account of induction in social contexts and explore its implications for policy. Our account helps to explain which normative principles we ought to accept, and as a result it can explain why it is acceptable to make inferences involving race in some contexts (e.g., in defense of affirmative action) but not in others (e.g., in defense of racial profiling).
An influential strand in philosophy of science claims that scientific paradigms can be understood as relativized a priori frameworks. Here, Kant’s constitutive a priori principles are no longer held to establish conditions of possibility for knowledge which are unchanging and universally true, but are restricted only to a given scientific domain. Yet it is unclear how exactly a relativized a priori can be construed as both stable and dynamical, establishing foundations for current scientific claims while simultaneously making intelligible the transition to a subsequent framework. In this article, I show that important resources for this problem have been overlooked in Kant’s theory of reflective judgement in the third Critique. I argue that Kant accorded the task of formulating new scientific laws to reflective judgement, which is charged with forming new ‘universals’ that guide the experience of nature. I show that this is the very task attributed to the relativized a priori: the constitution of a given conceptual framework, not of the conditions for object-reference as such. I conclude that Kant’s considered conception of science encompasses the operations of both reflective and determining judgement. Relativizations of the a priori should follow Kant’s lead.
This chapter details the practical, theoretical, and philosophical aspects of experimental science. It discusses how one chooses a project, performs experiments, interprets the resulting data, makes inferences, and develops and tests theories. It then asks the question, "are our theories accurate representations of the natural world, that is, do they reflect reality?" Surprisingly, this is not an easy question to answer. Scientists assume so, but are they warranted in this assumption? Realists say "yes," but anti-realists argue that reality is simply a mental representation of the world as we perceive it, that is, metaphysical in nature. Regardless of one's sense of reality, the fact remains that science has been and continues to be of tremendous practical value. It would have to be a miracle if our knowledge and manipulation of nature were not real. Even if they are, how do we know they are true in an absolute sense, not just relative to our own experience? This is a thorny philosophical question, the answer to which depends on the context in which it is asked. The take-home message for the practicing scientist is "never assume your results are true."
Paranoia is common in clinical and nonclinical populations, consistent with continuum models of psychosis. A number of experimental studies have attempted to induce, manipulate or measure paranoid thinking in both clinical and nonclinical populations, which is important for understanding causal mechanisms and advancing psychological interventions. Our aim was to conduct a systematic review and meta-analysis of experimental studies (non-sleep, non-drug paradigms) on psychometrically assessed paranoia in clinical and nonclinical populations. The review was conducted following PRISMA guidelines. Six databases (PsycINFO, PubMed, EMBASE, Web of Science, Medline and AMED) were searched for peer-reviewed experimental studies using within- and between-subjects designs to investigate paranoia in clinical and nonclinical populations. Effect sizes for each study were calculated using Hedges' g and were integrated using a random-effects meta-analysis model. Thirty studies were included in the review (total n = 3898), which used 13 experimental paradigms to induce paranoia; 10 studies set out explicitly to induce paranoia, and 20 studies induced a range of other states. Effect sizes for individual studies ranged from 0.03 to 1.55. The meta-analysis found a significant summary effect of 0.51 [95% confidence interval 0.37–0.66, p < 0.001], indicating a medium effect of experimental paradigms on paranoia. Paranoia can be induced and investigated using a wide range of experimental paradigms, which can inform decision-making about which paradigms to use in future studies, and is consistent with cognitive, continuum and evolutionary models of paranoia.
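For readers unfamiliar with the effect-size machinery mentioned above, the following is a minimal Python sketch of Hedges' g and a DerSimonian–Laird random-effects summary. It assumes two-group summary statistics per study and uses the standard large-sample approximation for the variance of g; it is illustrative only, not the review's analysis code.

import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approx. variance of g
    return g, v

def dersimonian_laird(gs, vs):
    # Fixed-effect weights and pooled estimate.
    w = [1 / v for v in vs]
    g_fe = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    # Heterogeneity statistic Q and between-study variance tau^2.
    q = sum(wi * (gi - g_fe)**2 for wi, gi in zip(w, gs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)
    # Random-effects weights, summary effect, and 95% confidence interval.
    w_re = [1 / (v + tau2) for v in vs]
    g_re = sum(wi * gi for wi, gi in zip(w_re, gs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return g_re, (g_re - 1.96 * se, g_re + 1.96 * se)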
The digitalization of business organizations and of society in general has opened up the possibility of researching behaviours using large volumes of digital traces and electronic texts that capture behaviours and attitudes in a broad range of natural settings. How is the availability of such data changing the nature of qualitative, specifically interpretive, research, and are computational approaches becoming the essence of such research? This chapter briefly examines this issue by considering the potential impacts of digital data on key themes associated with research: induction, deduction and meaning. It highlights some of the ‘nascent myths’ associated with the digitalization of qualitative research. The chapter concludes that, while the changes in the nature of data present exciting opportunities for qualitative, interpretive researchers to engage with computational approaches in the form of mixed-methods studies, computational approaches are unlikely to become the sine qua non of qualitative information systems research in the foreseeable future.
Inductive reasoning involves generalizing from samples of evidence to novel cases. Previous work in this field has focused on how sample contents guide the inductive process. This chapter reviews a more recent and complementary line of research that emphasizes the role of the sampling process in induction. In line with a Bayesian model of induction, beliefs about how a sample was generated are shown to have a profound effect on the inferences that people draw. This is first illustrated in research on beliefs about sampling intentions: was the sample generated to illustrate a concept or was it generated randomly? A related body of work examines the effects of sampling frames: beliefs about selection mechanisms that cause some instances to appear in a sample and others to be excluded. The chapter describes key empirical findings from these research programs and highlights emerging issues such as the effect of timing of information about sample generation (i.e., whether it comes before or after the observed sample) and individual differences in inductive reasoning. The concluding section examines how this work can be extended to more complex reasoning problems where observed data are subject to selection biases.
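As a concrete illustration of how sampling assumptions reshape inference, here is a toy Bayesian sketch in Python in the spirit of the models discussed. The number-concept hypotheses and observations are invented for illustration: under "strong" sampling the examples are assumed to be drawn from the concept itself, so smaller hypotheses gain sharply with more data (the size principle); under "weak" sampling the examples are generated independently of the concept, so all consistent hypotheses remain equally plausible.

# Two hypothetical number concepts and three observed positive examples.
hypotheses = {
    "multiples of 10": set(range(10, 101, 10)),
    "even numbers":    set(range(2, 101, 2)),
}
data = [10, 30, 60]

def posterior(strong_sampling):
    post = {}
    for h, extension in hypotheses.items():
        if not all(x in extension for x in data):
            post[h] = 0.0   # hypothesis inconsistent with the data
            continue
        # Strong sampling: each example has likelihood 1/|h|.
        # Weak sampling: constant likelihood for any consistent hypothesis.
        post[h] = (1 / len(extension)) ** len(data) if strong_sampling else 1.0
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

print(posterior(strong_sampling=True))   # strongly favors "multiples of 10"
print(posterior(strong_sampling=False))  # indifferent between consistent hypotheses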
Chapter 12 criticizes Karl Popper's and Imre Lakatos's views on theory appraisal, which have been particularly influential among writers on economic methodology, although their influence has waned. Popperian critics of economics are right to claim that economists seldom practice the falsificationism that many preach, but, in contrast to authors such as Mark Blaug, I argue that the problem is with the preaching, not with the practice: falsificationism is not a feasible methodology. Although Lakatos provides more resources with which to defend economics than does Popper, his views are also inadequate, and for a similar reason. Both Popper and Lakatos deny that there is ever reason to believe that scientific statements are close to the truth or likely to be true, and neither provides a viable construal of tendencies. In denying that such reasons to accept generalizations have a role in either engineering or theoretical science, Popper and Lakatos are implicitly calling for a radical and destructive transformation of human practices.
Chapter 10 considers what conditions must be met if one is to have good reason to accept tendency claims or inexact laws, and it presents an interpretation of J. S. Mill's views on confirmation, which still appear to dominate methodological practice in economics. It begins in §10.1 by discussing well-known Bayesian, hypothetico-deductive, and likelihood approaches to confirmation, before focusing on an indirect inductive method, which Mill calls "the method a priori" or "the deductive method." §10.2 lays out the broad outlines of Mill's deductive method. §10.3 expands upon Mill's method a posteriori – his direct inductive method – to address the question of how economists can know whether their fundamental generalizations express inexact laws or tendencies. §10.4 examines in detail what Mill has to say about his deductive method, while §10.5 lays out the implicit algorithm that Mill offers for testing the implications of theoretical hypotheses in an inexact and separate science.
Answering a question by Chatterji–Druţu–Haglund, we prove that, for every locally compact group $G$, there exists a critical constant $p_G \in [0,\infty ]$ such that $G$ admits a continuous affine isometric action on an $L_p$ space ($0< p<\infty$) with unbounded orbits if and only if $p \geq p_G$. A similar result holds for the existence of proper continuous affine isometric actions on $L_p$ spaces. Using a representation of cohomology by harmonic cocycles, we also show that such unbounded orbits cannot occur when the linear part comes from a measure-preserving action, or more generally a state-preserving action on a von Neumann algebra and $p>2$. We also prove the stability of this critical constant $p_G$ under $L_p$ measure equivalence, answering a question of Fisher.
Before venturing into the study of choreographies, we introduce the formalism of inference systems. Inference systems are widely used in the fields of formal logic and programming languages, and they were later applied to the theory of choreographies as well.
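To fix the notation informally: an inference system is a set of axioms and rules, each with premises written above a line and a conclusion below it. The display below is a minimal sketch of that format using a judgement of our own choosing (evenness of a natural number), not an example taken from the text:

% Illustrative only: an axiom (Zero) and a rule (Step) defining the
% judgement even(n) in standard premises-over-conclusion notation.
\[
  \frac{}{\mathsf{even}(0)}\;(\textsc{Zero})
  \qquad
  \frac{\mathsf{even}(n)}{\mathsf{even}(n+2)}\;(\textsc{Step})
\]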
Inductive reasoning involves using existing knowledge to make predictions about novel cases. This chapter reviews and evaluates computational models of this fundamental aspect of cognition, with a focus on work involving property induction. The review includes early induction models such as the similarity-coverage model and the feature-based induction model, as well as detailed coverage of more recent Bayesian and connectionist approaches. Each model is examined against benchmark empirical phenomena, and model limitations are identified. The chapter highlights the major advances that have been made in our understanding of the mechanisms that drive induction, as well as identifying challenges for future modeling. These include accounting for individual and developmental differences and applying induction models to explain other forms of reasoning.
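As a pointer to what such models compute, here is a deliberately simplified Python sketch in the spirit of the similarity-coverage model: argument strength blends the conclusion category's maximum similarity to the premise categories with how well the premises "cover" the relevant superordinate category. The similarity values, categories, and weighting parameter are invented for illustration, and the model is simplified relative to the published version.

# Illustrative symmetric similarities among mammal categories (made up).
sim = {
    ("horse", "cow"): 0.8, ("horse", "mouse"): 0.3, ("cow", "mouse"): 0.3,
    ("horse", "horse"): 1.0, ("cow", "cow"): 1.0, ("mouse", "mouse"): 1.0,
}

def s(a, b):
    return sim.get((a, b), sim.get((b, a), 0.0))

def strength(premises, conclusion, superordinate, alpha=0.5):
    # Maximum similarity of the conclusion to any premise category.
    max_sim = max(s(p, conclusion) for p in premises)
    # Coverage: average best-match similarity of the premises to each
    # member of the superordinate category.
    coverage = sum(max(s(p, m) for p in premises)
                   for m in superordinate) / len(superordinate)
    return alpha * max_sim + (1 - alpha) * coverage

mammals = ["horse", "cow", "mouse"]
print(strength(["horse"], "cow", mammals))           # 0.75
print(strength(["horse", "mouse"], "cow", mammals))  # ~0.87

The second call illustrates the diversity effect: more varied premises cover the superordinate category better and therefore yield a stronger argument.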
The basics of magnets, magnetism, induced magnetism, magnetic fields, and electromagnets are introduced. The force between a magnet and an electromagnet is the basis for speakers, devices that turn electrical signals into sounds, as well as other electromechanical devices. The process works in reverse, in that a sound incident on a speaker can produce an electrical signal. The latter principle is used for some microphones. In general, a time-dependent magnetic field, for example, due to a moving magnet, will tend to induce electrical currents that oppose the change. This is known as Faraday’s law of induction. Several of the principles of magnetism are used together to create an electric guitar pickup. A scheme to use a pair of pickups to cancel out environmental signals, known as a humbucker, is shown. Electrical transformers work based on a time-changing magnetic field from one electromagnet experienced by another, and they are useful for generating electrical signals that better match the destination for the signals—typically an amplifier.
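For reference, the two standard relations underlying the induction and transformer discussion can be written compactly (standard physics, not quoted from the chapter): Faraday's law for a coil of $N$ turns threaded by magnetic flux $\Phi$, and the ideal-transformer voltage ratio, which follows from the primary and secondary windings (with $N_p$ and $N_s$ turns) sharing the same changing flux.

\[
  \mathcal{E} = -N\,\frac{d\Phi}{dt},
  \qquad
  \frac{V_s}{V_p} = \frac{N_s}{N_p}.
\]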