Embedding technology plays a pivotal role in deep learning, particularly in industries such as recommendation, advertising, and search. It is considered a fundamental operation for transforming sparse vectors into dense representations that can be further processed by neural networks. Beyond its basic role, embedding technology has evolved significantly in both academia and industry, with applications ranging from sequence processing to multifeature heterogeneous data. This chapter discusses the basics of embedding, its evolution from Word2Vec to graph embeddings and multifeature fusion, and its applications in recommender systems, with an emphasis on online deployment and inference.
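The transformation described above, from sparse feature IDs to dense vectors, reduces at inference time to a simple table lookup. Below is a minimal sketch; the table sizes and the randomly initialized vectors are illustrative stand-ins for embeddings that would be learned by models such as Word2Vec or an end-to-end recommender.

```python
import random

# Toy embedding table: each of 5 sparse feature IDs maps to a dense
# 4-dimensional vector. In practice these vectors are learned; here they
# are randomly initialized purely for illustration.
random.seed(0)
vocab_size, dim = 5, 4
embedding_table = [[random.uniform(-0.1, 0.1) for _ in range(dim)]
                   for _ in range(vocab_size)]

def embed(sparse_ids):
    """Turn a list of sparse feature IDs into dense vectors by table lookup."""
    return [embedding_table[i] for i in sparse_ids]

dense = embed([0, 3])
print(len(dense), len(dense[0]))  # two 4-dimensional dense vectors
```

At serving time this lookup is the hot path, which is why the chapter's emphasis on online deployment and inference matters: the table is typically sharded and cached rather than recomputed.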
Recommender systems have evolved significantly in response to growing demands, progressing from early methods like Collaborative Filtering (CF) and Logistic Regression (LR) to more advanced models such as Factorization Machines (FM) and Gradient Boosting Decision Trees (GBDT). Since 2015, deep learning has become the dominant approach, leading to the development of hybrid and multimodel frameworks. Despite the rise of deep learning models, traditional recommendation methods still hold valuable advantages due to their interpretability, efficiency, and ease of deployment. Furthermore, these foundational models, such as CF, LR, and FM, form the basis for many deep learning approaches. This chapter explores the evolution of traditional recommendation models, detailing their principles, strengths, and influence on modern deep learning architectures, offering readers a comprehensive understanding of this foundational knowledge.
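Of the models mentioned, FM lends itself to a compact sketch: it scores a sample with a bias, a linear term, and pairwise feature interactions factorized through latent vectors. The parameter values below are made up for illustration; a real FM learns them from data.

```python
# Hypothetical toy sizes and parameters; real FMs learn these from data.
n, k = 4, 3          # n features, k latent factors
w0 = 0.1
w = [0.2, -0.1, 0.0, 0.3]
v = [[0.1 * (i + 1) * (f + 1) for f in range(k)] for i in range(n)]

def fm_predict(x):
    """Factorization Machine score: bias + linear + pairwise interactions.

    Uses the O(n*k) identity:
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i (v_if x_i)^2 ]
    """
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    pairwise = 0.0
    for f in range(k):
        s = sum(v[i][f] * x[i] for i in range(n))
        s_sq = sum((v[i][f] * x[i]) ** 2 for i in range(n))
        pairwise += 0.5 * (s * s - s_sq)
    return linear + pairwise

print(fm_predict([1.0, 0.0, 1.0, 0.0]))
```

The factorized interaction is what lets FM generalize to feature pairs never observed together, and it is precisely this structure that many deep models later absorb as an embedding-and-interaction layer.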
Whereas the growing body of research into algorithmic memory technologies and the platformisation of memory takes a media-centric approach, this article engages with the question of how users experience and make sense of such omnipresent technologies. By means of a questionnaire and follow-up qualitative interviews with young adults (born between 1997 and 2005) and a Grounded Theory approach, we empirically examine an object of study that has been mainly explored theoretically. Our study found four major experiences associated with algorithmic memory technologies: intrusive, dissonant, nostalgic, and practical. Connected to these experiences, we found four sets of practices and strategies of use: avoidance and non-use; curating and training; reminiscing; and cognitive offloading and managing identity through memory. Our results show that our participants’ use and awareness of algorithmic memory technologies are diverse and, at times, contradictory, and shape their attitudes towards their memories, whether they are mediated or not. Hence, our study offers nuances and new perspectives to extant research into algorithmic memory technologies, which often assumes particular users and uses.
Building an effective recommender system requires more than just a strong model; it involves addressing a range of complex technical issues that contribute to the overall performance. This chapter explores recommender systems from seven distinct angles, covering feature selection, retrieval layer strategies, real-time performance optimization, scenario-based objective selection, model structure improvements based on user intent, the cold start problem, and the “exploration vs. exploitation” challenge. By understanding these critical aspects, machine learning engineers can develop robust recommender systems with comprehensive capabilities.
Recommender systems have become deeply integrated into daily life, shaping decisions in online shopping, news consumption, learning, and entertainment. These systems offer personalized suggestions, enhancing user experiences in various scenarios. Behind this, machine learning engineers drive the constant evolution of recommendation technology. Described as the “growth engine” of the internet, recommender systems play a critical role in the digital ecosystem. This chapter explores the role of these systems, why they are essential, and how they are architected from a technical perspective.
While previous chapters discussed deep learning recommender systems from a theoretical and algorithmic perspective, this chapter shifts focus to the engineering platform that supports their implementation. Recommender systems are divided into two key components: data and model. The data aspect involves the engineering of the data pipeline, while the model aspect is split between offline training and online serving. This chapter is structured into three parts: (1) the data pipeline framework and big data platform technologies; (2) popular platforms for offline training of recommendation models like Spark MLlib, TensorFlow, and PyTorch; and (3) online deployment and serving of deep learning recommendation models. Additionally, the chapter covers the trade-offs between engineering execution and theoretical considerations, offering insights into how algorithm engineers can balance these aspects in practice.
Advertising click-through rate (CTR) prediction is a fundamental task in recommender systems, aimed at estimating the likelihood of users interacting with advertisements based on their historical behavior. This prediction process has evolved through two main stages: from traditional shallow interaction models to more advanced deep learning approaches. Shallow models typically operate at the level of individual features, failing to fully leverage the rich, multilevel information available across different feature sets, which leads to less accurate predictions. In contrast, deep learning models exhibit superior feature representation and learning capabilities, enabling a more realistic simulation of user interactions and improving the accuracy of CTR prediction. This paper provides a comprehensive overview of CTR prediction algorithms in the context of recommender systems. The algorithms are categorized into two groups: shallow interactive models and deep learning-based prediction models, including deep neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks. Additionally, this paper discusses the advantages and disadvantages of the aforementioned algorithms, as well as the benchmark datasets and model evaluation methods used for CTR prediction. Finally, it identifies potential future research directions in this rapidly advancing field.
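As a point of reference for the shallow family, a logistic-regression CTR predictor can be sketched in a few lines: it maps a weighted sum of features through a sigmoid to a click probability. The synthetic impressions and the plain SGD loop below are illustrative only.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_ctr(weights, bias, features):
    """Predicted click probability for one impression (shallow LR model)."""
    return sigmoid(bias + sum(w * x for w, x in zip(weights, features)))

# Toy training loop: one SGD step per synthetic impression.
# Label 1 = clicked, 0 = not clicked; features are made up.
samples = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, clicked in samples:
        p = predict_ctr(w, b, x)
        err = p - clicked  # gradient of log loss w.r.t. the logit
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

print(predict_ctr(w, b, [1.0, 0.0]))
```

Because the model scores each feature independently, any interaction between features must be hand-engineered as a crossed feature, which is exactly the limitation the deep models surveyed here address.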
This paper introduces a distributed online learning coverage control algorithm based on sparse Gaussian process regression to address the problem of multi-robot area coverage and source localization in unknown environments. Given the limitations of traditional Gaussian process regression on large datasets, this study employs multiple robots to explore the task area, gather environmental information, and approximate the model's posterior distribution using variational free energy methods, which then serves as the input to the centroid Voronoi tessellation algorithm. Additionally, to account for localization errors and the impact of obstacles, buffer factors and a centroid Voronoi tessellation algorithm with separating hyperplanes are introduced for dynamic robot task-area planning, ultimately achieving autonomous online decision-making and optimal coverage. Simulation results demonstrate that the proposed algorithm ensures the safety of multi-robot formations, exhibits higher iteration speed, and improves source localization accuracy, highlighting the effectiveness of the model enhancements.
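Setting aside the GP estimation, the centroid Voronoi tessellation step at the core of such coverage control can be sketched as a Lloyd iteration over a discretized area. The toy density below stands in for the sparse-GP estimate of the environment, with an assumed source near (0.8, 0.8); obstacles, buffer factors, and separating hyperplanes are omitted.

```python
# Minimal sketch of the centroid-Voronoi coverage step (Lloyd iteration)
# on a discretized unit square. The sensory density phi would, in the
# paper's setting, come from the sparse-GP estimate; here it is a fixed
# toy function peaking near an assumed source at (0.8, 0.8).
def phi(x, y):
    return 1.0 / (0.01 + (x - 0.8) ** 2 + (y - 0.8) ** 2)

def lloyd_step(robots, grid_n=40):
    """Assign each grid cell to its nearest robot (its Voronoi region),
    then move each robot to the density-weighted centroid of its region."""
    acc = [[0.0, 0.0, 0.0] for _ in robots]  # sum_x, sum_y, mass per robot
    for i in range(grid_n):
        for j in range(grid_n):
            x, y = (i + 0.5) / grid_n, (j + 0.5) / grid_n
            k = min(range(len(robots)),
                    key=lambda r: (robots[r][0] - x) ** 2 + (robots[r][1] - y) ** 2)
            m = phi(x, y)
            acc[k][0] += m * x
            acc[k][1] += m * y
            acc[k][2] += m
    return [(sx / m, sy / m) if m > 0 else r
            for (sx, sy, m), r in zip(acc, robots)]

robots = [(0.2, 0.2), (0.5, 0.5), (0.3, 0.7)]
for _ in range(20):
    robots = lloyd_step(robots)
print(robots)  # robots concentrate toward the high-density source region
```

Each iteration is a gradient-descent-like step on the standard coverage cost, so robots drift toward regions the density model marks as important, which is how coverage and source localization are coupled.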
DLV2 is an AI tool for knowledge representation and reasoning that supports answer set programming (ASP) – a logic-based declarative formalism, successfully used in both academic and industrial applications. Given a logic program modeling a computational problem, an execution of DLV2 produces the so-called answer sets that correspond one-to-one to the solutions to the problem at hand. The computational process of DLV2 relies on the typical ground & solve approach, where the grounding step transforms the input program into a new, equivalent ground program, and the subsequent solving step applies propositional algorithms to search for the answer sets. Recently, emerging applications in contexts such as stream reasoning and event processing created a demand for multi-shot reasoning: here, the system is expected to be reactive while repeatedly executed over rapidly changing data. In this work, we present a new incremental reasoner obtained from the evolution of DLV2 toward iterated reasoning. Rather than restarting the computation from scratch, the system remains alive across repeated shots, and it incrementally handles the internal grounding process. At each shot, the system reuses previous computations for building and maintaining a large, more general ground program, from which a smaller yet equivalent portion is determined and used for computing answer sets. Notably, the incremental process is performed in a completely transparent fashion for the user. We describe the system, its usage, its applicability, and its performance in some practically relevant domains.
Ideological and relational polarization are two increasingly salient political divisions in Western societies. We integrate the study of these phenomena by describing society as a multilevel network of social ties between people and attitudinal ties between people and political topics. We then define and propose a set of metrics to measure ‘network polarization’: the extent to which a community is ideologically and socially divided. Using longitudinal network modelling, we examine whether observed levels of network polarization can be explained by three processes: social selection, social influence, and latent-cause reinforcement. Applied to new longitudinal friendship and political attitude network data from two Swiss university cohorts, our metrics show mild polarization. The models explain this outcome and suggest that friendships and political attitudes are reciprocally formed and sustained. We find robust evidence for friend selection based on attitude similarity and weaker evidence for social influence. The results further point to latent-cause reinforcement processes: (dis)similar attitudes are more likely to be formed or maintained between individuals whose attitudes are already (dis)similar on a range of political issues. Applied across different cultural and political contexts, our approach may help to understand the degree and mechanisms of divisions in society.
AI's next big challenge is to master the cognitive abilities needed by intelligent agents that perform actions. Such agents may be physical devices such as robots, or they may act in simulated or virtual environments through graphic animation or electronic web transactions. This book is about integrating and automating these essential cognitive abilities: planning what actions to undertake and under what conditions, acting (choosing what steps to execute, deciding how and when to execute them, monitoring their execution, and reacting to events), and learning about ways to act and plan. This comprehensive, coherent synthesis covers a range of state-of-the-art approaches and models – deterministic, probabilistic (including MDP and reinforcement learning), hierarchical, nondeterministic, temporal, spatial, and LLMs – and applications in robotics. The insights it provides into important techniques and research challenges will make it invaluable to researchers and practitioners in AI, robotics, cognitive science, and autonomous and interactive systems.
How areas of land are allocated for different uses, such as forests, urban areas, and agriculture, has a large effect on the terrestrial carbon balance and, therefore, climate change. Based on available historical data on land-use changes and a simulation of the associated carbon emissions and removals, a surrogate model can be learned that makes it possible to evaluate the different options available to decision-makers efficiently. An evolutionary search process can then be used to discover effective land-use policies for specific locations. Such a system was built on the Project Resilience platform and evaluated with the Land-Use Harmonization dataset LUH2 and the bookkeeping model BLUE. It generates Pareto fronts that trade off carbon impact and amount of land-use change customized to different locations, thus providing a proof-of-concept tool that is potentially useful for land-use planning.
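The Pareto-front filtering such a system performs can be sketched as a nondominated sort over candidate policies. The two-objective scores below (carbon impact and amount of land-use change, both to be minimized) are made-up values for illustration; in the actual system they would come from the learned surrogate model.

```python
# Hypothetical candidate policies, each scored on two objectives to minimize:
# (carbon impact, amount of land-use change). Values are made up for illustration.
candidates = [(3.0, 1.0), (2.0, 2.0), (2.5, 1.5), (1.0, 4.0), (3.5, 0.5), (2.0, 3.0)]

def pareto_front(points):
    """Keep the nondominated points: a point survives if no other point is
    at least as good in both objectives (and different from it)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

print(pareto_front(candidates))
```

Presenting the whole front, rather than a single "best" policy, is what lets decision-makers pick their own trade-off between carbon impact and disruption to existing land use.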
The autonomous safe flight of fixed-wing unmanned aerial vehicles (UAVs) in complex low-altitude environments presents significant challenges and holds practical application value. This paper proposes a motion planning method for agile fixed-wing UAVs to address safety issues in navigating narrow corridors within such environments. In the path planning phase, we introduce the Improved Batch Informed Trees (IBIT*) to enhance both the solving speed and quality of BIT*. The IBIT* incorporates strategies such as using Rapidly Exploring Random Tree (RRT)-Connect for initial pathfinding, informed sparse sampling, and re-selecting parent nodes. During the trajectory planning phase, we first decouple the roll angle of the UAV from its three-dimensional position based on the agility of fixed-wing UAVs; subsequently, we address constraints related to smoothness and mission time by leveraging the characteristics of the Minimum Control Effort; finally, we design a differentiable penalty function to satisfy the dynamic performance constraints of the UAV. The effectiveness and superiority of the proposed motion planning method are demonstrated through numerical simulations and physical flight experiments.
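As a small illustration of the last step, a differentiable penalty for an inequality constraint can be built from a cubic hinge. This is a common generic construction, not necessarily the exact form used in the paper: it vanishes where the constraint is satisfied and grows smoothly where it is violated, so it can be folded into a gradient-based trajectory optimizer.

```python
def smooth_penalty(g):
    """Differentiable penalty for a constraint g(x) <= 0.

    Zero when the constraint holds; cubic growth when violated. The cubic
    hinge keeps the first and second derivatives continuous at g = 0,
    which is what gradient-based trajectory optimizers need. (A generic
    choice for illustration; the paper's exact penalty may differ.)
    """
    return max(0.0, g) ** 3

# Example: penalizing a speed limit v <= v_max with g = v - v_max.
v, v_max = 12.0, 10.0
print(smooth_penalty(v - v_max))  # positive: the limit is violated by 2
```

Dynamic performance constraints (e.g. bounds on speed, roll rate, or load factor) can each contribute one such term, weighted and summed into the trajectory cost.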
This article explores how online language learners encounter foreign language speaking anxiety (FLSA), what mitigating strategies they apply to manage synchronous online tutorials, and what their asynchronous speaking practices are. In a large-scale mixed methods study, we gathered survey data from 307 language learners at a UK online and distance learning university and conducted in-depth group interviews with 10 students focusing on their FLSA experience and perceptions regarding synchronous and asynchronous speaking activities. The results reveal that the triggers of FLSA and the mitigating strategies learners apply partly overlap with those in the face-to-face context but are partly specific to the online environment (e.g. breakout rooms, vicarious learning). The use of technology can be anxiety-inducing (e.g. cameras) as well as supportive (e.g. online translation tools and dictionaries). Novel findings of the study are that avoidance strategies are more nuanced in this context, ranging from complete avoidance of tutorials to full engagement via the chat, and that the use of breakout rooms magnifies learners’ emotions and is one of the main triggers of FLSA. This might be helpful for practitioners – also beyond language courses – in scaffolding and optimising their small group activities online.