In this paper, we consider the question of smoothness of slowly varying functions satisfying the modern definition that has, over the last two decades, gained prevalence in applications concerning function spaces and interpolation. We show that every slowly varying function of this type is equivalent to a slowly varying function with continuous classical derivatives of all orders.
We show that there is a set $S \subseteq {\mathbb N}$ with lower density arbitrarily close to $1$ such that, for each sufficiently large real number $\alpha $, the inequality $|m\alpha -n| \geq 1$ holds for every pair $(m,n) \in S^2$. On the other hand, if $S \subseteq {\mathbb N}$ has density $1$, then, for each irrational $\alpha>0$ and any positive $\varepsilon $, there exist $m,n \in S$ for which $|m\alpha -n|<\varepsilon $.
Let $X$, $Y$ be nonsingular real algebraic sets. A map $\varphi \colon X \to Y$ is said to be $k$-regulous, where $k$ is a nonnegative integer, if it is of class $\mathcal {C}^k$ and the restriction of $\varphi$ to some Zariski open dense subset of $X$ is a regular map. Assuming that $Y$ is uniformly rational, and $k \geq 1$, we prove that a $\mathcal {C}^{\infty }$ map $f \colon X \to Y$ can be approximated by $k$-regulous maps in the $\mathcal {C}^k$ topology if and only if $f$ is homotopic to a $k$-regulous map. The class of uniformly rational real algebraic varieties includes spheres, Grassmannians and rational nonsingular surfaces, and is stable under blowing up nonsingular centers. Furthermore, taking $Y=\mathbb {S}^p$ (the unit $p$-dimensional sphere), we obtain several new results on approximation of $\mathcal {C}^{\infty }$ maps from $X$ into $\mathbb {S}^p$ by $k$-regulous maps in the $\mathcal {C}^k$ topology, for $k \geq 0$.
The brain must make inferences about, and decisions concerning, a highly complex and unpredictable world, based on sparse evidence. An “ideal” normative approach to such challenges is often modeled in terms of Bayesian probabilistic inference. But for real-world problems of perception, motor control, categorization, language understanding, or commonsense reasoning, exact probabilistic calculations are computationally intractable. Instead, we suggest that the brain solves these hard probability problems approximately, by considering one, or a few, samples from the relevant distributions. Here we provide a gentle introduction to the various sampling algorithms that have been considered as the approximation used by the brain. We broadly summarize these algorithms according to their level of knowledge and their assumptions regarding the target distribution, noting their strengths and weaknesses, their previous applications to behavioural phenomena, and their psychological plausibility.
People must often make inferences about, and decisions concerning, a highly complex and unpredictable world, on the basis of sparse evidence. An “ideal” normative approach to such challenges is often modeled in terms of Bayesian probabilistic inference. But for real-world problems of perception, motor control, categorization, language comprehension, or common-sense reasoning, exact probabilistic calculations are computationally intractable. Instead, we suggest that the brain solves these hard probability problems approximately, by considering one, or a few, samples from the relevant distributions. By virtue of being an approximation, the sampling approach inevitably leads to systematic biases. Thus, if we assume that the brain carries over the same sampling approach to easy probability problems, where the “ideal” solution can readily be calculated, then a brain designed for probabilistic inference should be expected to display characteristic errors. We argue that many of the “heuristics and biases” found in human judgment and decision-making research can be reinterpreted as side effects of the sampling approach to probabilistic reasoning.
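The core idea of the sampling approach, judging a probability from only a few samples, can be illustrated with a minimal sketch. This is our own hypothetical setup, not the authors' model; `sample_estimate`, the sample count `k = 3`, and the probability `0.7` are illustrative choices:

```python
import random

def sample_estimate(p_success, k, rng):
    """Estimate a probability from k Bernoulli draws (illustrative only)."""
    return sum(rng.random() < p_success for _ in range(k)) / k

rng = random.Random(0)
true_p = 0.7
few = [sample_estimate(true_p, 3, rng) for _ in range(10000)]
# With k = 3, each individual judgment can only be 0, 1/3, 2/3, or 1,
# so single judgments are coarse and variable, even though their
# average over many judgments is close to the true value 0.7.
```

Applying a nonlinear decision rule (e.g., "act if the probability exceeds 0.5") to such coarse estimates produces systematic, repeatable errors, which is the sense in which the sampling approximation can masquerade as a "bias."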
Chapter 9 develops the traditional conception of economics as an inexact science that deductively investigates the implications of assumptions that are true as statements of tendencies but only approximately true as generalizations concerning behavior. I consider interpretations of the problematic notion of inexactness as probabilistic, as approximate, as qualified with ceteris paribus clauses, and as stating tendencies, that is, the contributions that causes make to outcomes and hence how things would be in the absence of interferences. The chapter argues for an account that combines a view of inexactness in terms of tendencies with an account in terms of implicit qualification, and it explains how statements of tendencies can be true.
This Element offers an opinionated and selective introduction to philosophical issues concerning idealizations in physics, including the concept of and reasons for introducing idealization, abstraction, and approximation, possible taxonomy and justification, and application to issues of mathematical Platonism, scientific realism, and scientific understanding.
In this chapter, we give a comprehensive overview of the large variety of approximation results for neural networks. Approximation rates for classical function spaces as well as the benefits of deep neural networks over shallow ones for specifically structured function classes are discussed. While the main body of existing results is for general feedforward architectures, we also review approximation results for convolutional, residual and recurrent neural networks.
In this chapter, we define infinite orbits and show how to approximate such orbits. We prove that for every function that has no fixed point there is an approximate infinite orbit with unbounded variation, and we use this result to show that a certain class of quitting games admits undiscounted $\varepsilon$-equilibria.
We study the detection and the reconstruction of a large very dense subgraph in a social graph with $n$ nodes and $m$ edges given as a stream of edges, when the graph follows a power-law degree distribution, in the regime where $m = O(n \log n)$. A subgraph $S$ is very dense if it has $\Omega(|S|^2)$ edges. We uniformly sample the edges with a Reservoir of size $k = O(\sqrt{n} \log n)$. Our detection algorithm checks whether the Reservoir has a giant component. We show that if the graph contains a very dense subgraph of size $\Omega(\sqrt{n})$, then the detection algorithm is almost surely correct. On the other hand, a random graph that follows a power-law degree distribution almost surely has no large very dense subgraph, and the detection algorithm is again almost surely correct. We define a new model of random graphs which follow a power-law degree distribution and have large very dense subgraphs. We then show that on this class of random graphs we can reconstruct a good approximation of the very dense subgraph with high probability. We generalize these results to dynamic graphs defined by sliding windows in a stream of edges.
Chapter 3 focuses on the uses of general extenders that are addressee-oriented and express an interpersonal function in interaction. The underlying concept is described as intersubjectivity, which is tied to an awareness of the addressee’s needs. Participants in an interaction are taken to be cooperative fellow speakers, adhering to Grice's Quality and Quantity maxims. The use of adjunctive forms to indicate common ground can also create a sense of solidarity, indicating similarity, and hence also signaling positive politeness. In other situations, speakers can use disjunctive forms to signal negative politeness, that is, a concern with potentially imposing on the addressee. When general extenders are used as part of these politeness strategies, they are often described as hedges, used to indicate possible inaccuracy or imposition and a desire to avoid such things, resulting in an association with approximation.
In the problems we have considered so far, we have either ignored the actual material consumption/production (sequential environments) or assumed that materials are consumed/produced at fixed proportions (network environments). There are problems, however, where the proportions in which materials are consumed can vary provided that some specifications are satisfied. This problem, which is termed multiperiod blending or simply blending, is fundamentally different from the ones discussed thus far because it leads to nonlinear models. There are two types of blending problems: (1) different streams/inputs are blended before they are processed/converted (process blending); and (2) streams/inputs are blended to produce final products (product blending). In Section 11.1, we introduce some preliminary concepts and a formal problem statement for product blending. In Section 11.2, we present two alternative formulations for product blending, and in Section 11.3, we present two approximate linear reformulations. We close, in Section 11.4, with a discussion of models for process blending. We focus on the equations necessary to account for the key new features of blending problems: (1) the selection of input materials and their blending in variable proportions, and (2) the requirement to satisfy given property specifications.
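The key property balance in product blending can be sketched in a few lines. This is a minimal illustration under the common simplification that properties mix linearly with the amounts blended; the function names and the stream data are hypothetical, not from the chapter:

```python
def blend(amounts, stream_props):
    """Property values of a blend, assuming each property mixes linearly
    in proportion to the amounts used (a common simplification)."""
    total = sum(amounts)
    return {p: sum(a * props[p] for a, props in zip(amounts, stream_props)) / total
            for p in stream_props[0]}

def meets_specs(blended, specs):
    """Check every blended property against its (min, max) specification."""
    return all(lo <= blended[p] <= hi for p, (lo, hi) in specs.items())

# Two input streams with a sulfur content of 1.0 and 2.0, blended 60/40:
product = blend([60.0, 40.0], [{"sulfur": 1.0}, {"sulfur": 2.0}])
ok = meets_specs(product, {"sulfur": (0.0, 1.5)})
```

When the amounts are decision variables and intermediate stream properties are themselves unknown, the products of amounts and properties in this balance become bilinear terms, which is precisely why blending leads to nonlinear models.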
We investigate convergence in the cone of completely monotone functions. Particular attention is paid to the approximation of and by exponentials and stretched exponentials. The need for such an analysis is a consequence of the fact that although stretched exponentials can be approximated by sums of exponentials, exponentials cannot in general be approximated by sums of stretched exponentials.
Given a stationary and isotropic Poisson hyperplane process and a convex body $K$ in ${\mathbb R}^d$, we consider the random polytope defined by the intersection of all closed half-spaces containing $K$ that are bounded by hyperplanes of the process not intersecting $K$. We investigate how well the expected mean width of this random polytope approximates the mean width of $K$ if the intensity of the hyperplane process tends to infinity.
Chapter 5 compares the phraseology of usage to exposure. It shows that more than half of the patterns extracted from a student’s usage corpus also occur in her exposure corpus. At the same time, the figure drops significantly when these patterns are compared to a different student’s exposure corpus, supporting the assumption of representativeness. The chapter then proceeds to compare usage patterns to exposure qualitatively, focusing on the processes of variation and change. It finds support for the process of approximation, through which a more or less fixed pattern loosens and becomes variable along the semantic or grammatical axis, presumably due to frequency effects and the properties of human memory. The chapter also proposes a reverse process, fixing, through which the pattern extends and develops verbatim associations through repeated usage. Both processes are suggested to occur within meaning-shift units and thus to be characteristic of co-selection.
Chapter 7 summarizes the findings and offers a bigger picture with regard to (1) the idiom principle in L2 acquisition and use, (2) the model of a unit of meaning and (3) the processes behind the phraseological tendency of language. It argues that the idiom principle is available to L2 users to a larger degree than is often thought. It then proposes an ‘atomic’ model of a unit of meaning, shows how the processes of fixing and approximation fit into the larger processes of delexicalization and meaning-shift, further develops the idea of a continuum of delexicalization suggested in Chapter 2 as well as explains the connection between these ideas and the concepts of relexicalization and re-metaphorization. The chapter ends with a discussion of limitations and promising directions of future research.
Chapter 3 offers an interdisciplinary overview of research on multi-word units in second language (L2) processing and use. It is motivated by a long-standing puzzle in the field of second language acquisition suggesting that while the idiom principle is the main mechanism of language production in native speakers, non-native speakers cannot benefit from it to a similar degree. The chapter shows that researchers are not unanimous in assessing the degree to which learners operate on the idiom principle, and it raises a few problems which might be obscuring such operation in commonly used research designs, such as a focus on specific multi-word units rather than co-selection as such, the questionable representativeness of reference corpora, and the inability of statistical measures to capture abstracted associations. The chapter concludes by offering an alternative interpretation based on the concept of approximation adopted from studies of English as a lingua franca.
We are interested in the Korteweg–de Vries (KdV), Burgers and Whitham limits for a spatially periodic Boussinesq model with non-small contrast. We prove estimates of the relations between the KdV, Burgers and Whitham approximations and the true solutions of the original system that guarantee these amplitude equations make correct predictions about the dynamics of the spatially periodic Boussinesq model over their natural timescales. The proof is based on Bloch wave analysis and energy estimates and is the first justification result of the KdV, Burgers and Whitham approximations for a dispersive partial differential equation posed in a spatially periodic medium of non-small contrast.
Let $P$ be the transition matrix of a positive recurrent Markov chain on the integers with invariant probability vector $\pi^{T}$, and let ${}^{(n)}\tilde{P}$ be a stochastic matrix, formed by arbitrarily augmenting the entries of the $(n+1) \times (n+1)$ northwest-corner truncation of $P$, with invariant probability vector ${}^{(n)}\pi^{T}$. We derive computable $V$-norm bounds on the error between $\pi^{T}$ and ${}^{(n)}\pi^{T}$ via the perturbation method, from three different aspects: the Poisson equation, the residual matrix, and the norm ergodicity coefficient. We prove these bounds to be effective by showing that they converge to $0$ as $n$ tends to $\infty$ under suitable conditions, and we illustrate our results through several examples. Comparing our error bounds with those of Tweedie (1998), we find that ours are more widely applicable and more accurate. Moreover, we also consider possible extensions of our results to continuous-time Markov chains.
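The truncation-and-augmentation procedure can be made concrete on a simple example. The sketch below uses a birth-death chain on the nonnegative integers (step up with probability $p < 1/2$, down with probability $1-p$, reflecting at $0$), whose true invariant vector is geometric, and augments the truncated rows in the last column; both the chain and the augmentation scheme are our illustrative choices, not the paper's:

```python
def truncated_invariant(p, n, iters=5000):
    """Invariant vector of the (n+1) x (n+1) northwest-corner truncation of a
    birth-death chain, with lost probability mass returned to the last state."""
    q = 1.0 - p
    m = n + 1
    P = [[0.0] * m for _ in range(m)]
    P[0][0], P[0][1] = q, p          # reflecting boundary at 0
    for i in range(1, m):
        P[i][i - 1] = q              # step down
        if i < n:
            P[i][i + 1] = p          # step up (cut off in the last row)
    for row in P:                    # augmentation: last-column fill-up
        row[n] += 1.0 - sum(row)
    pi = [1.0 / m] * m               # power iteration for the invariant vector
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
    return pi
```

Comparing the output against the true invariant vector $\pi_i = (1-r) r^i$ with $r = p/(1-p)$ shows the truncation error shrinking as $n$ grows, in line with the convergence behaviour the bounds above quantify.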
We address the construction and approximation for feed-forward neural networks (FNNs) with zonal functions on the unit sphere. The filtered de la Vallée-Poussin operator and the spherical quadrature formula are used to construct the spherical FNNs. In particular, the upper and lower bounds of approximation errors by the FNNs are estimated, where the best polynomial approximation of a spherical function is used as a measure of approximation error.