Mirror-Rite is a work for ‘meta-trumpet’, computer and live electronics. The composition is neither notated nor stored, but forms itself as a complex of rule-based structures, transformations and processes around the improvisation of a live performer, its source of both energy and material. This article discusses the ways in which these algorithms deal with interactivity and projection in time, and how the handling of these aspects might permit the assembly of algorithmic elements into a complex dynamic whole, sensitive to (and existing solely within) present circumstances, but demonstrating unity between different performances.
The full paper explores the possibility of using Subsequential Transducers (SSTs), a finite state model, in limited domain translation tasks, for both text and speech input. A distinctive advantage of SSTs is that they can be efficiently learned from sets of input-output examples by means of OSTIA, the Onward Subsequential Transducer Inference Algorithm (Oncina et al. 1993). In this work, a technique is proposed to improve the performance of OSTIA by reducing the asynchrony between input and output sentences; the use of error-correcting parsing to increase the robustness of the models is explored; and an integrated architecture for speech-input translation by means of SSTs is described.
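To make the finite state model concrete, the following is a minimal sketch of a subsequential transducer applied to translation: deterministic transitions that emit output strings, plus a final output per state. The class, the toy Spanish-to-English fragment, and all names are illustrative assumptions, not the paper's OSTIA-trained models.

```python
# Minimal sketch of a subsequential transducer (SST): deterministic
# transitions that emit output strings, plus a final output per state.
# Illustrative only; this is not OSTIA and not the paper's system.

class SubsequentialTransducer:
    def __init__(self, start, transitions, final_outputs):
        # transitions: {(state, input_symbol): (next_state, output_string)}
        # final_outputs: {state: output string appended on acceptance}
        self.start = start
        self.transitions = transitions
        self.final_outputs = final_outputs

    def translate(self, symbols):
        state, output = self.start, []
        for sym in symbols:
            if (state, sym) not in self.transitions:
                return None  # input not in the transducer's domain
            state, emitted = self.transitions[(state, sym)]
            output.append(emitted)
        if state not in self.final_outputs:
            return None
        output.append(self.final_outputs[state])
        return " ".join(w for w in output if w)

# Toy Spanish-to-English fragment (hypothetical example data).
sst = SubsequentialTransducer(
    start=0,
    transitions={(0, "una"): (1, "a"), (1, "mesa"): (2, "table")},
    final_outputs={2: ""},
)
print(sst.translate(["una", "mesa"]))  # -> "a table"
```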
There are two distinct types of creativity: the flash out of the blue (inspiration? genius?), and the process of incremental revisions (hard work). Not only are we years away from modelling the former, we do not even begin to understand it. The latter is algorithmic in nature and has been modelled in many systems, both musical and non-musical. Algorithmic composition is as old as music composition. It is often considered a cheat, a way out when the composer needs material and/or inspiration. It can also be thought of as a compositional tool that simply makes the composer’s work go faster. This article makes a case for algorithmic composition as such a tool. The ‘hard work’ type of creativity often involves trying many different combinations and choosing one over the others. It seems natural to express this iterative task as a computer algorithm. The implementation issues can be reduced to two components: how to understand one’s own creative process well enough to reproduce it as an algorithm, and how to program a computer to differentiate between ‘good’ and ‘bad’ music. The philosophical issues reduce to a single question: who or what is responsible for the music produced?
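As an illustration of that iterative task (and not of any particular composer's method), the sketch below repeatedly mutates a phrase and keeps a variant only when a critic function prefers it. The pitch set, the mutation, and the critic are all invented for the example; as the article notes, writing a critic that genuinely tells ‘good’ from ‘bad’ music is the real difficulty.

```python
import random

# Generate-and-test sketch of the 'hard work' loop: propose variants,
# keep the one a (placeholder) critic prefers.

PITCHES = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, MIDI numbers

def mutate(phrase):
    """Return a copy of the phrase with one note changed."""
    variant = list(phrase)
    variant[random.randrange(len(variant))] = random.choice(PITCHES)
    return variant

def critic(phrase):
    """Placeholder preference: penalise large leaps between notes."""
    return -sum(abs(a - b) for a, b in zip(phrase, phrase[1:]))

phrase = [random.choice(PITCHES) for _ in range(8)]
for _ in range(200):                      # incremental revision loop
    candidate = mutate(phrase)
    if critic(candidate) >= critic(phrase):
        phrase = candidate
print(phrase)
```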
In spite of the wide availability of more powerful (context free, mildly context sensitive, and even Turing-equivalent) formalisms, the bulk of the applied work on language and sublanguage modeling, especially for the purposes of recognition and topic search, is still performed by various finite state methods. In fact, the use of such methods in research labs as well as in applied work has actually increased over the past five years. To bring together those developing and applying extended finite state methods to text analysis, speech/OCR language modeling, and related CL and NLP tasks with those in AI and CS interested in analyzing and possibly extending the domain of finite state algorithms, a workshop was held in August 1996 in Budapest as part of the European Conference on Artificial Intelligence (ECAI'96).
In language processing, finite state models are not a lesser evil that brings simplicity and efficiency at the cost of accuracy. On the contrary, they provide a very natural framework for describing complex linguistic phenomena. We present here one aspect of parsing with finite state transducers and show that this technique can be applied to complex linguistic situations.
Finite automata and various extensions of them, such as transducers, are used in areas as diverse as compilers, spelling checking, natural language grammar checking, communication protocol design, digital circuit simulation, digital flight control, speech recognition and synthesis, genetic sequencing, and Java program verification. Unfortunately, as the number of applications has grown, so has the variety of implementations and implementation techniques. Typically, programmers will be confused enough to resort to their textbooks for the most elementary algorithms. Recently, advances have been made in taxonomizing algorithms for constructing and minimizing automata and in evaluating various implementation strategies (Watson 1995). Armed with this, a number of general-purpose toolkits have been developed at universities and companies. One of these, FIRE Lite, was developed at the Eindhoven University of Technology, while its commercial successor, FIRE Engine II, has been developed at Ribbit Software Systems Inc. Both of these toolkits provide implementations of all of the known algorithms for constructing automata from regular expressions, and all of the known algorithms for minimizing deterministic finite automata. While the two toolkits have a great deal in common, we will concentrate on the structure and use of the noncommercial FIRE Lite. The prototype version of FIRE Lite was designed with compilers in mind. More recently, computational linguists and communications protocol designers have become interested in using the toolkit. This has led to the development of a much more general interface to FIRE Lite, including support for both Mealy and Moore regular transducers. While such a toolkit may appear extremely complex, there are only a few choices to be made. We also consider a ‘recipe’ for making good use of the toolkits. Lastly, we consider the future of FIRE Lite. While FIRE Engine II has obvious commercial value, we are committed to maintaining a version which is freely available for academic use.
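To give a flavour of the algorithm families such toolkits package, here is a small, generic partition-refinement minimisation of a deterministic finite automaton. This is textbook code, not the FIRE Lite or FIRE Engine II interface, and the toy automaton is invented for the example.

```python
# DFA minimisation by partition refinement (Moore's algorithm), sketched
# generically; not the FIRE Lite API.

def minimise(states, alphabet, delta, accepting):
    """delta: {(state, symbol): state}, assumed total.
    Returns the partition into classes of indistinguishable states."""
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Split a block if two of its states reach different blocks
            # on some symbol.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition)
                         if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# Toy DFA over {a, b}: states 0 and 1 turn out equivalent, as do 2 and 3.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 1, (1, 'b'): 3,
         (2, 'a'): 1, (2, 'b'): 2, (3, 'a'): 1, (3, 'b'): 3}
print(minimise({0, 1, 2, 3}, ['a', 'b'], delta, {2, 3}))
```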
There are currently two philosophies for building grammars and parsers: hand-crafted, wide-coverage grammars; and statistically induced grammars and parsers. Aside from the methodological differences in grammar construction, the linguistic knowledge that is overt in the rules of hand-crafted grammars is hidden in the statistics derived by probabilistic methods, which means that generalizations are also hidden and the full training process must be repeated for each domain. Although hand-crafted wide-coverage grammars are portable, they can be made more efficient when applied to limited domains, if it is recognized that language in limited domains is usually well constrained and that certain linguistic constructions are more frequent than others. We view a domain-independent grammar as a repository of portable grammatical structures whose combinations are to be specialized for a given domain. We use Explanation-Based Learning (EBL) to identify the relevant subset of a hand-crafted general-purpose grammar (XTAG) needed to parse in a given domain (ATIS). We exploit the key properties of Lexicalized Tree-Adjoining Grammars to view parsing in a limited domain as finite state transduction from strings to their dependency structures.
Many of the processing steps in natural language engineering can be performed using finite state transducers. An optimal way to create such transducers is to compile them from regular expressions. This paper is an introduction to the regular expression calculus, extended with certain operators that have proved very useful in natural language applications ranging from tokenization to light parsing. The examples in the paper illustrate in concrete detail some of these applications.
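As a very small illustration of regular expressions used for tokenization, one of the applications mentioned above, the sketch below uses Python's standard re module rather than the extended calculus the paper describes; the token classes are chosen only for the example.

```python
import re

# Tokenisation with an ordinary regular expression (not the extended
# regular expression calculus discussed in the paper).

TOKEN = re.compile(r"""
      \d+(\.\d+)?          # numbers, with an optional decimal part
    | \w+(-\w+)*           # words, allowing internal hyphens
    | [.,;:!?]             # sentence punctuation as separate tokens
""", re.VERBOSE)

text = "Finite-state methods handle 95 percent of such tasks, cheaply."
print([m.group(0) for m in TOKEN.finditer(text)])
# ['Finite-state', 'methods', 'handle', '95', 'percent', 'of', 'such',
#  'tasks', ',', 'cheaply', '.']
```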
Pietro Grossi was the first pioneer of computer music in Italy. During his activities from 1961 to the 1980s he devoted much research to algorithmic composition. Grossi’s first experiments in this field are discussed, dealing particularly with three important phases of his work: initiating algorithmic composition at the S 2F M studio in Florence, writing digital programs at the CNUCE Institute in Pisa, and realising a musical algorithm by translating the curve designed by the mathematician Peano into a sonic form.
A course in Design Structures is discussed. In this course students learn a multi-faceted approach to exploring mathematical and iterative ideas using sonification and visualisation techniques for their own compositional explorations. These ideas are discussed in relation to philosophy, history, technology, scientific paradigms, and cultural context. Some resulting student work is demonstrated.
A Lexical Transducer (LT), as defined by Karttunen, Kaplan and Zaenen (1992), is a specialized finite state transducer (FST) that relates citation forms of words and their morphological categories to inflected surface forms. Using LTs is advantageous because the same structure and algorithms can be used for morphological analysis (stemming) and generation. Morphological processing (analysis and generation) is computationally faster, and the data for the process can be compacted more tightly than with other methods. The standard way to construct an LT consists of three steps: (1) constructing a simple finite state source lexicon LA which defines all valid canonical citation forms of the language; (2) describing morphological alternations by means of two-level rules, compiling the rules to FSTs, and intersecting them to form a single rule transducer RT; and (3) composing LA and RT.
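The three-step construction can be caricatured as follows. The lexicon entries, the single string rewrite standing in for a compiled two-level rule, and the dictionary-based ‘composition’ are all simplifications invented for this sketch; real systems compile the rules to transducers and compose automata, not Python functions.

```python
# Toy illustration of lexicon + rule composition; all data hypothetical.

LEXICON = {
    "fly+Noun+Pl": "fly^s",     # canonical analysis -> intermediate form
    "fly+Noun+Sg": "fly",
}

def rule_transducer(intermediate):
    """Stand-in for RT: y -> ie before the plural boundary '^' and s."""
    return intermediate.replace("y^s", "ies").replace("^", "")

# 'Composition' of lexicon and rule: map each analysis to a surface form.
SURFACE = {analysis: rule_transducer(form) for analysis, form in LEXICON.items()}

print(SURFACE["fly+Noun+Pl"])   # -> flies  (generation)
print(SURFACE["fly+Noun+Sg"])   # -> fly

# Analysis (stemming) inverts the same mapping.
ANALYSES = {}
for analysis, surface in SURFACE.items():
    ANALYSES.setdefault(surface, []).append(analysis)
print(ANALYSES["flies"])        # -> ['fly+Noun+Pl']
```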
Finite state cascades represent an attractive architecture for parsing unrestricted text. Deterministic parsers specified by finite state cascades are fast and reliable. They can be extended at modest cost to construct parse trees with finite feature structures. Finally, such deterministic parsers do not necessarily involve trading off accuracy against speed — they may in fact be more accurate than exhaustive search stochastic context free parsers.
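A toy two-level cascade over part-of-speech tags, sketched with ordinary regular expressions, may help fix the idea; it is a drastic simplification of the architecture, not the parser described above, and the tag patterns are invented for the example.

```python
import re

# Sketch of a two-level finite state cascade over POS tags:
# level 1 groups noun and verb chunks, level 2 groups a clause.

tags = "DT JJ NN VBD DT NN"

# Level 1: mark noun groups [NG ...] and verb groups [VG ...].
level1 = re.sub(r"(DT )?(JJ )*NNS?", lambda m: "[NG " + m.group(0) + "]", tags)
level1 = re.sub(r"VB[DZ]?", lambda m: "[VG " + m.group(0) + "]", level1)

# Level 2: a clause is a noun group, a verb group, and a noun group.
level2 = re.sub(r"\[NG [^\]]*\] ?\[VG [^\]]*\] ?\[NG [^\]]*\]",
                lambda m: "[CL " + m.group(0) + "]", level1)

print(level1)   # [NG DT JJ NN] [VG VBD] [NG DT NN]
print(level2)   # [CL [NG DT JJ NN] [VG VBD] [NG DT NN]]
```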
A source of potential systematic errors in information retrieval is identified and discussed. These errors occur when base form reduction is applied with a (necessarily) finite dictionary. Formal methods for avoiding this error source are presented, along with some practical complexities encountered in their implementation.
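One way such an error can arise is sketched below, under the assumption that a form missing from the finite dictionary is simply left unreduced while a related in-dictionary form is reduced, so the two no longer match at retrieval time. The dictionary and words are invented, and the paper's formal methods are not reproduced here.

```python
# Hypothetical illustration of the error source with a finite dictionary.

DICTIONARY = {"indexes": "index", "indices": "index"}  # necessarily finite

def reduce_form(word, dictionary):
    """Reduce to base form if known; otherwise leave the word as-is."""
    return dictionary.get(word.lower(), word.lower())

query_term = reduce_form("Indexing", DICTIONARY)   # not in dictionary
doc_term = reduce_form("indexes", DICTIONARY)      # reduces to 'index'
print(query_term, doc_term, query_term == doc_term)  # indexing index False
```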
This paper deals with the role of the composer in algorithmic music. This role departs from traditional models because of the way computers are used in the compositional process, particularly when signal processing techniques are being integrated with sophisticated formal models to generate musical compositions. We shall examine several types of ‘musical formalism’ in order to bring out the active role of the composer in the compositional process.
When, some months before publication of the first issue of Organised Sound, Iannis Xenakis sent the editors a previously unpublished set of lecture notes for possible inclusion in the journal, it seemed natural to locate the resulting article within the present issue, whose major theme is ‘algorithmic composition’ (AC). Whilst Xenakis himself does not use this term, it is evident that he has had a major influence on those who do, as evidenced by his citations in most of the other articles in this issue. Not only has Xenakis been an original thinker in this domain, his article as presented here also illuminates many of his compositional practices, indicated as the working-out of probabilistic formulae. The article ranges widely, with a section on the UPIC system, and we believe it also captures much of the empathetic, intensely enquiring nature of a mind deeply versed in both scientific and artistic modes of humanist thought.