The development of artificial intelligence and machine learning is leading to a revolution in the way we think about economic decisions. The Economics of Language explores how generative AI and large language models (LLMs) can transform our understanding of economic behaviour. It introduces the LENS framework (Linguistic content triggers Emotions and suggests Norms, which shape Strategy choice) and presents empirical evidence that LLMs can predict human behaviour in economic games more accurately than traditional outcome-based models. Drawing on years of research, it develops the theory step by step, combining accessible examples with formal modelling. Offering a roadmap for future research at the intersection of economics, psychology, and AI, this book equips readers with tools to quantify the role of language in decision-making and redefines how we think about utility, rationality, and human choice.
In recent years, speech recognition devices have become central to our everyday lives. Systems such as Siri, Alexa, speech-to-text, and automated telephone services are built by people applying expertise in sound structure and natural language processing to generate computer programmes that can recognise and understand speech. This exciting advancement has led to rapid growth in speech technology courses being added to linguistics programmes; however, there has so far been a lack of material serving the needs of students with limited or no background in computer science or mathematics. This textbook addresses that need by providing an accessible introduction to the fundamentals of computer speech synthesis and automatic speech recognition technology, covering both neural and non-neural approaches. It explains the basic concepts in non-technical language, providing step-by-step explanations of each formula, practical activities, and ready-made code for students to use, which is also available on an accompanying website.
The Deeds of the Abbots of St Albans records the history of one of the most important abbeys in England, closely linked to the royal family and home to a school of distinguished chroniclers, including Matthew Paris and Thomas Walsingham. It offers many insights into the life of the monastery, its buildings and its role as a maker of books, and covers the period from the Conquest to the mid-fifteenth century.
Deep learning is becoming increasingly important in a technology-dominated world. However, building computational models that accurately represent linguistic structures is complex, as it requires in-depth knowledge of neural networks and an understanding of advanced mathematical concepts such as calculus and statistics. This book makes these complexities accessible to those from a humanities and social sciences background by providing a clear introduction to deep learning for natural language processing. It covers both theoretical and practical aspects, and assumes minimal knowledge of machine learning, explaining the theory behind natural language in an easy-to-read way. It includes pseudo code for the simpler algorithms discussed, and actual Python code for the more complicated architectures, using modern deep learning libraries such as PyTorch and Hugging Face. Providing the necessary theoretical foundation and practical tools, this book will enable readers to immediately begin building real-world, practical natural language processing systems.
Distributional semantics develops theories and methods to represent the meaning of natural language expressions, with vectors encoding their statistical distribution in linguistic contexts. It is at once a theoretical model to express meaning, a practical methodology to construct semantic representations, a computational framework for acquiring meaning from language data, and a cognitive hypothesis about the role of language usage in shaping meaning. This book aims to build a common understanding of the theoretical and methodological foundations of distributional semantics. Beginning with its historical origins, the text exemplifies how the distributional approach is implemented in distributional semantic models. The main types of computational models, including modern deep learning ones, are described and evaluated, demonstrating how various types of semantic issues are addressed by those models. Open problems and challenges are also analyzed. Students and researchers in natural language processing, artificial intelligence, and cognitive science will appreciate this book.
Digital health translation is an important application of machine translation and multilingual technologies, and there is a growing need for accessibility in digital health translation design for disadvantaged communities. This book addresses that need by highlighting state-of-the-art research on the design and evaluation of assistive translation tools, along with systems to facilitate cross-cultural and cross-lingual communication in health and medical settings. Using case studies as examples, the principles of designing assistive health communication tools are illustrated. These are (1) detectability of errors to boost user confidence by health professionals; (2) customizability for health and medical domains; (3) inclusivity of translation modalities to serve people with disabilities; and (4) equality of accessibility standards for localised multilingual websites of health content. This book will appeal to readers from natural language processing, computer science, linguistics, translation studies, public health, media, and communication studies. This title is available as open access on Cambridge Core.
Space and time representation in language is important in linguistics and cognitive science research, as well as in artificial intelligence applications such as conversational robots and navigation systems. This book is the first to show linguists and computer scientists how to do model-theoretic semantics for temporal or spatial information in natural language, based on annotation structures. The book covers the entire cycle of developing a specification for annotation and implementing the model over the appropriate corpus for linguistic annotation. Its representation language is a type-theoretic, first-order logic in shallow semantics. Each interpretation model is delimited by a set of definitions of logical predicates used in semantic representations (e.g., past) or measuring expressions (e.g., counts or k). The counting function is then defined as a set and its cardinality, involving a universal quantification in a model. This definition then delineates a set of admissible models for interpretation.
This innovative study compares nineteenth-century Arabic translations of the Bible to determine how it emerged as a foundational text of Arab modernity. Bible translation gained global traction through the work of Anglophone Christian missionaries, who attempted to synchronise translated Bibles in world languages by laying down strict guidelines and supervising the processes of translation and dissemination. By engaging with the intellectual beginnings of two local translators, Butrus al-Bustani (1819-1883) and Ahmad Faris al-Shidyaq (1804-1887), as well as their subsequent contributions to Arabic language and literature, this book questions to what extent they complied with the missionaries' strategy in practice. Based on documents from the archives of Bible societies that tell the story of two key nahda versions of the text, we come to understand how colonial pressure was secondary to the process of incorporating the Bible into the nahda project of rethinking Arabic.
A single, consistent and accessible narrative of the Grail story, constructed from the principal motifs and narrative strands of all the original Grail romances.
This ground-breaking volume on early modern inter-Asian translation examines how translation from plain Chinese was situated at the nexus between, on the one hand, the traditional standard of biliteracy characteristic of literary practices in the Sinographic sphere, and on the other, practices of translational multilingualism (competence in multiple spoken languages to produce a fully localized target text). Translations from plain Chinese are shown to carve out new ecologies of translations that not only enrich our understanding of early modern translation practices across the Sinographic sphere, but also demonstrate that the transregional uses of a non-alphabetic graphic technology call for different models of translation theory.
On social media, new forms of communication arise rapidly, many of which are intense, dispersed, and create new communities at a global scale. Such communities can act as distinct information bubbles with their own perspective on the world, and it is difficult for people to find and monitor all these perspectives and relate the different claims made. Within this digital jungle of perspectives on truth, it is difficult to make informed decisions on important things like vaccinations, democracy, and climate change. Understanding and modeling this phenomenon in its full complexity requires an interdisciplinary approach, utilizing the ample data provided by digital communication to offer new insights and opportunities. This interdisciplinary book gives a comprehensive view on social media communication, the different forms it takes, the impact and the technology used to mine it, and defines the roadmap to a more transparent Web.
Event structures are central in linguistics and artificial intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.
Sentence comprehension - the way we process and understand spoken and written language - is a central and important area of research within psycholinguistics. This book explores the contribution of computational linguistics to the field, showing how computational models of sentence processing can help scientists in their investigation of human cognitive processes. It presents the leading computational model of retrieval processes in sentence processing, the Lewis and Vasishth cue-based retrieval model, and develops a principled methodology for parameter estimation and model comparison/evaluation using benchmark data, to enable researchers to test their own models of retrieval against the present model. It also provides readers with an overview of the last 20 years of research on the topic of retrieval processes in sentence comprehension, along with source code that allows researchers to extend the model and carry out new research. Comprehensive in its scope, this book is essential reading for researchers in cognitive science.
Language resources and computational models are becoming increasingly important for the study of language variation. A main challenge of this interdisciplinary field is that linguistics researchers may not be familiar with these helpful computational tools, while many NLP researchers are unfamiliar with language variation phenomena. This essential reference introduces researchers to the necessary computational models for processing similar languages, varieties, and dialects. In this book, leading experts tackle the inherent challenges of the field by balancing a thorough discussion of the theoretical background with a meaningful overview of state-of-the-art language technology. The book can be used in a graduate course, or as a supplementary text for courses on language variation, dialectology, and sociolinguistics, or on computational linguistics and NLP. Part 1 covers the linguistic fundamentals of the field such as the question of status and language variation. Part 2 discusses data collection and pre-processing methods. Finally, Part 3 presents NLP applications such as speech processing, machine translation, and language-specific issues in Arabic and Chinese.
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for the upper undergraduate and graduate student.
Sentiment analysis is the computational study of people's opinions, sentiments, emotions, moods, and attitudes. This fascinating problem offers numerous research challenges, but promises insight useful to anyone interested in opinion analysis and social media analysis. This comprehensive introduction to the topic takes a natural-language-processing point of view to help readers understand the underlying structure of the problem and the language constructs commonly used to express opinions, sentiments, and emotions. The book covers core areas of sentiment analysis and also includes related topics such as debate analysis, intention mining, and fake-opinion detection. It will be a valuable resource for researchers and practitioners in natural language processing, computer science, management sciences, and the social sciences. In addition to traditional computational methods, this second edition includes recent deep learning methods to analyze and summarize sentiments and opinions, and also new material on emotion and mood analysis techniques, emotion-enhanced dialogues, and multimodal emotion analysis.
Deep learning is revolutionizing how machine translation systems are built today. This book introduces the challenge of machine translation and evaluation, including historical, linguistic, and applied context, then develops the core deep learning methods used for natural language applications. Code examples in Python give readers a hands-on blueprint for understanding and implementing their own machine translation systems. The book also provides extensive coverage of machine learning tricks, issues involved in handling various forms of data, model enhancements, and current challenges and methods for analysis and visualization. Summaries of the current research in the field make this a state-of-the-art textbook for undergraduate and graduate classes, as well as an essential reference for researchers and developers interested in other applications of neural methods in the broader field of human language processing.