In the modern world, the importance of information can hardly be overestimated. Information also plays a prominent role in scientific computations. The branch of computational complexity which deals with problems for which information is partial, noisy and priced is called information-based complexity.
In a number of information-based complexity books, the emphasis was on partial and exact information. In the present book, the emphasis is on noisy information. We consider deterministic and random noise. The analysis of noisy information leads to a variety of interesting new algorithms and complexity results.
The book presents a theory of the computational complexity of continuous problems with noisy information. A number of applications are also given. It is based on the results of many researchers in this area (including those of the author) as well as new results not published elsewhere.
This work would not have been completed if I had not received support from many people. My special thanks go to H. Woźniakowski who encouraged me to write such a book and was always ready to offer his help. I appreciate the considerable help of J.F. Traub. I would also like to thank M. Kon, A. Werschulz, E. Novak, K. Ritter and other colleagues for their valuable comments on various portions of the manuscript.
I wish to express my thanks to the Institute of Applied Mathematics and Mechanics at the University of Warsaw, where the book was almost entirely written.
In Chapters 2 to 5, we fixed the set of problem elements and were interested in finding information and an algorithm which minimize the error or cost of approximation. Depending on the deterministic or stochastic assumptions on the problem elements and information noise, we studied four different settings: the worst, average, worst-average, and average-worst case settings.
In this chapter, we study the asymptotic setting, in which a problem element f is fixed and we wish to analyze the asymptotic behavior of algorithms. The aim is to construct a sequence of information and algorithms such that the error of successive approximations vanishes as fast as possible as the number of observations increases to infinity.
The asymptotic setting is often studied in computational practice. We mention only the Romberg algorithm for computing integrals, and finite element methods (FEM) for solving partial differential equations with the meshsize tending to zero. When dealing with these and other numerical algorithms, we are interested in how fast they converge to the solution.
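To make the asymptotic viewpoint concrete, the Romberg algorithm mentioned above can be sketched in a few lines. This is an illustrative sketch only: the integrand, interval, and number of extrapolation levels below are arbitrary choices, not taken from the text.

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg integration: Richardson extrapolation of the trapezoidal rule.

    Each row halves the mesh; each column cancels one more power of h^2
    in the error, so the approximations converge rapidly as levels grow.
    """
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # Refine the trapezoidal sum by adding the new midpoints only.
        total = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * total
        # Richardson extrapolation along the row.
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

approx = romberg(math.sin, 0.0, math.pi)  # exact value of the integral is 2
```

With only five levels (16 subintervals at the finest mesh), the extrapolated value already agrees with the exact integral to many digits, which is exactly the kind of convergence speed the asymptotic setting studies.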
One might hope that it will be possible to construct a sequence φn(yn) of approximations such that for the element f the error ∥S(f) − φn(yn)∥ vanishes much faster than the error over the whole set of problem elements (or, equivalently, faster than the corresponding radius of information). It turns out, however, that in many cases any attempts to construct such algorithms would fail. We show this by establishing relations between the asymptotic and other settings.
There is a rich literature on the design of formal languages for music representation on computers. Over the last thirty years, several generations of software technology have been applied to this problem, including structured software engineering, artificial intelligence, and object-oriented (O-O) software technology. This article introduces the basic notions of O-O software technology, and investigates how these might be useful for music representation. In particular, the author's Smalltalk music object kernel (Smoke) music representation language is described and examples given that illustrate the most important of Smoke's features.
It is rare to see music and technology being used in combination in therapy and special education. This article is an account of work in a special school as part of a festival of popular music. The style of the music was dance/rave. This was made accessible using a specialised range of MIDI devices to enable students with physical and learning disabilities to participate. There are many benefits to be derived from studying popular music. In special education this can help with physical coordination and social skills. Most important, young people with special needs are given access to youth cultures from which, traditionally, they have tended to be excluded.
It is common to oppose formalist and referentialist approaches to music. However, in Francis Dhomont's work Points de fuite, these approaches appear complementary when we consider the relationship between sounds and sources. Adopting the analytical approach of the American theorist Leonard B. Meyer, we show how the syntactic flow of Points de fuite generates formal implications through the impact of tension and relaxation archetypes. The piece explores metaphors based upon recurrent anecdotal events – the recorded signifiers of the source. These extra-musical elements define the work's structure to such an extent that they eliminate the traditional gap between formalism and referentialism in music.
This article approaches the definition of the important term 'acousmatic' by reference to its origins in the sound studios of the French National Radio. The links from France to Québec are outlined and the Québecois acousmatic school, largely based in Montreal, is introduced. Aspects of a typical piece are discussed, and the author is able to answer the title question positively.
Within the context of discussing contemporary music, the European tendency to overvalue abstraction is questioned. The use of environmental sounds in electroacoustic music is highlighted as an example of the questionable value of abstraction. Attention is then focused on a recent Truax composition, Powers of Two (1995), as a work of electroacoustic music theatre. The historical musical and poetic references, as well as the sound sources adopted for the work, are discussed and placed within the human framework of relationship embodied in the piece. A concluding section summarises the work as an attempt to create a contemporary myth from historical sources, and as a dramatic expression employing electroacoustic forces.
Since the mid-1980s commercial digital samplers have become widespread. The idea of musical instruments which have no sounds of their own is, however, much older, not just in the form of analogue samplers like the Mellotron, but in ancient myths and legends from China and elsewhere. This history of both digital and analogue samplers relates the latter to the early musique concrète of Pierre Schaeffer and others, and also describes a variety of one-off systems devised by composers and performers.
This paper describes TAO, a system for sound synthesis by physical modelling based on a new technique called cellular sound synthesis (CSS). The system provides a general mechanism for constructing an infinite variety of virtual instruments, and does so by providing a virtual acoustic material, elastic in nature, whose physical characteristics can be fine-tuned to produce different timbres. A wide variety of sounds such as plucked, hit, bowed and scraped sounds can be produced, all having natural physical and spatial qualities. Some of the musical and philosophical issues considered to be most important during the design and development of the system are touched upon, and the main features of the system are explained with reference to practical examples. Advantages and disadvantages of the synthesis technique and the prototype system are discussed, together with suggestions for future improvements.
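The idea of an elastic virtual material built from coupled cells can be illustrated, in a much simpler form than TAO's CSS technique, by a chain of cells obeying the one-dimensional wave equation. The sketch below is not TAO's algorithm; the cell count, coupling constant, and pluck shape are arbitrary illustrative choices.

```python
# A "plucked string" as a chain of coupled elastic cells, updated with
# an explicit finite-difference scheme for the 1-D wave equation.
N = 64            # number of cells along the string
c2 = 0.25         # coupling constant (c*dt/dx)^2; must be <= 1 for stability
prev = [0.0] * N
curr = [0.0] * N

# Pluck: triangular initial displacement, released from rest.
for i in range(N):
    curr[i] = prev[i] = min(i, N - 1 - i) / (N / 2)

samples = []
for _ in range(500):
    nxt = [0.0] * N
    for i in range(1, N - 1):  # the two ends stay clamped at zero
        nxt[i] = (2 * curr[i] - prev[i]
                  + c2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    prev, curr = curr, nxt
    samples.append(curr[N // 2])  # "listen" at the string's midpoint
```

Changing the coupling constant or the initial displacement alters the resulting waveform, which is the same lever CSS pulls when it "fine-tunes the physical characteristics" of its material to obtain different timbres.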
'Organised sound' - the term coined by Edgard Varèse for a new definition of musical constructivism - denotes for our increasingly technologically dominated culture an urge towards the recognition of the human impulse behind the 'system'. Such is the diversity of activity in today's computer music that we need to maintain a balance between technological advances and musically creative and scholarly endeavour, at all levels of an essentially educative process. The model of 'life-long learning' makes a special kind of sense when we can explore our musical creativity in partnership with the computer, a machine now capable of sophisticated response from a humanly embedded intelligence.
We describe new applications of the theory of automata to natural language processing: the representation of very large-scale dictionaries and the indexation of natural language texts. They are based on new algorithms that we introduce and describe in detail. In particular, we give pseudocodes for the determinisation of string-to-string transducers, the deterministic union of p-subsequential string-to-string transducers, and indexation by automata. We report on several experiments illustrating the applications.
This paper addresses the problem of the distribution of words and phrases in text, a problem of great general interest and of importance for many practical applications. The existing models for word distribution present observed sequences of words in text documents as the outcome of stochastic processes; the corresponding distributions of the numbers of word occurrences in the documents are modelled as mixtures of Poisson distributions whose parameter values are fitted to the data. We pursue a linguistically motivated approach to statistical language modelling and use observable text characteristics as model parameters. Multi-word technical terms, intrinsically content entities, are chosen for experimentation. Their occurrence and occurrence dynamics are investigated using a 100-million-word data collection consisting of about 13,000 technical documents of various kinds. The derivation of models describing word distribution in text is based on a linguistic interpretation of the process of text formation, with the probabilities of word occurrence being functions of observable and linguistically meaningful text characteristics. The adequacy of the proposed models for the description of actually observed distributions of words and phrases in text is confirmed experimentally. The paper has two focuses: one is the modelling of the distributions of content words and phrases among different documents; the other is word occurrence dynamics within documents and the estimation of the corresponding probabilities. Accordingly, among the application areas for the new modelling paradigm are information retrieval and speech recognition.
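The Poisson-mixture baseline that the paper contrasts its approach with can be written down directly. The sketch below shows a two-component mixture; the weights and rates are hypothetical values chosen for illustration (a term that is rare in most documents but frequent in the few documents "about" it), not parameters from the paper.

```python
import math

def poisson_pmf(k, lam):
    """P(k occurrences) under a single Poisson with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_pmf(k, weights, lambdas):
    """P(k occurrences) under a weighted mixture of Poisson components."""
    return sum(w * poisson_pmf(k, lam) for w, lam in zip(weights, lambdas))

# Hypothetical term: absent or rare in 90% of documents (rate 0.1),
# frequent in the 10% of documents that are about its topic (rate 4.0).
weights, lambdas = [0.9, 0.1], [0.1, 4.0]
p_zero = mixture_pmf(0, weights, lambdas)  # most documents never mention it
```

In document retrieval such "2-Poisson" mixtures are the classical way to separate topical from incidental occurrences; the paper's contribution is to replace fitted rates like these with observable, linguistically meaningful text characteristics.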
We discuss the random generation of strings using the grammatical formalism AGFL. This formalism consists of context-free grammars extended with a parameter mechanism, where the parameters range over a finite domain. Our approach consists in static analysis of the combinations of parameter values with which derivations can be constructed. After this analysis, generation of sentences can be performed without backtracking.
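The underlying mechanism - random expansion of nonterminals in a context-free grammar - can be sketched without AGFL's parameter mechanism or the static analysis step. The toy grammar below is invented for illustration; in plain CFG generation every alternative is always viable, which is precisely why no backtracking is needed here, whereas AGFL's finite-domain parameters make the static consistency analysis necessary.

```python
import random

# Toy context-free grammar: each nonterminal maps to a list of
# alternatives; an alternative is a sequence of symbols, where any
# symbol not in the grammar is treated as a terminal.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["sees"], ["hears"]],
}

def generate(symbol, grammar, rng):
    """Expand a symbol by choosing production alternatives at random."""
    if symbol not in grammar:  # terminal symbol
        return [symbol]
    alternative = rng.choice(grammar[symbol])
    out = []
    for sym in alternative:
        out.extend(generate(sym, grammar, rng))
    return out

sentence = " ".join(generate("S", GRAMMAR, random.Random(0)))
```

Every run derives a sentence of the form "the {cat|dog} {sees|hears} the {cat|dog}". With parameters added, some alternative choices would lead to dead ends, and the paper's static analysis is what rules those out before generation starts.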