This study examines the role of gakushū manga, or educational Japanese comics, in shaping collective memory narratives of World War II. It explores whether these works diverge from or perpetuate Japan-centric interpretations of the conflict by analysing thematic trends, representational strategies, and selective memory frameworks. The findings reveal a dominant emphasis on Japanese victimhood, conveyed mainly through graphic depictions of civilian suffering, while representations of foreign victims, such as Chinese and Korean civilians, remain abstract or marginalised. The responsibility of those in positions of leadership is selectively portrayed, often exonerating figures such as Emperor Hirohito, while the actions of militaristic leaders are contextualised within broader systemic ideologies.
These manga replicate postwar narratives by foregrounding societal complicity, deliberate omission, and the relegation of the ‘Other’ to the periphery, in line with broader patterns of media-driven nationalism. They offer nuanced critiques of Japan’s wartime conduct but simultaneously maintain a selective focus that minimises Japan’s responsibility as an aggressor. This research underscores the need for a balanced collective memory to foster reconciliation and a more inclusive understanding of wartime legacies in East Asia.
The lower limb exoskeleton is a typical wearable robot designed to assist human motion. However, its stability and performance are often compromised by unknown model parameters and inadequate control strategies. It is therefore crucial to explore parametric identification of the exoskeleton and the design of corresponding control strategies for human-exoskeleton cooperative motion. First, an exoskeleton platform is developed for experimental validation. A two-degree-of-freedom (2-DOF) exoskeleton model is then constructed using the Lagrange method, and the neighborhood field optimization (NFO) technique is applied to identify the unknown model parameters. The excitation trajectories for the exoskeleton are also designed with the NFO method, incorporating several motion constraints to enhance the accuracy of model identification. An admittance controller is implemented to enable active control of the exoskeleton, allowing it to follow human intention and thereby improving the wearability and comfort of the device. Finally, simulation and experimental results are compared and validated on the platform, demonstrating that the NFO method achieves superior identification accuracy compared with particle swarm optimization (PSO) and the genetic algorithm (GA).
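As a rough illustration of the admittance-control idea described above (a generic single-joint sketch in Python, not the paper’s implementation; the virtual mass, damping, and function names are assumed values), the measured human interaction torque is mapped to a compliant reference trajectory through virtual mass-damper dynamics:

# Minimal single-joint admittance controller sketch (illustrative only).
# Virtual dynamics: M_d * qdd_ref + B_d * qd_ref = tau_human, so the
# reference motion "gives way" in proportion to the wearer's effort.
M_d = 0.5    # virtual inertia (kg*m^2), assumed value
B_d = 2.0    # virtual damping (N*m*s/rad), assumed value
dt = 0.001   # control period (s)

q_ref, qd_ref = 0.0, 0.0

def admittance_step(tau_human):
    """One control step: interaction torque in, joint reference out."""
    global q_ref, qd_ref
    qdd_ref = (tau_human - B_d * qd_ref) / M_d  # solve virtual dynamics
    qd_ref += qdd_ref * dt                      # integrate acceleration
    q_ref += qd_ref * dt                        # integrate velocity
    return q_ref  # tracked by an inner position/torque loop

An inner loop would then servo the joint to q_ref, so the exoskeleton yields to, rather than resists, the wearer’s intended motion.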
What defines a correct program? What education makes a good programmer? The answers to these questions depend on whether programs are seen as mathematical entities, engineered socio-technical systems or media for assisting human thought. Programmers have developed a wide range of concepts and methodologies to construct programs of increasing complexity. This book shows how those concepts and methodologies emerged and developed from the 1940s to the present. It follows several strands in the history of programming and interprets key historical moments as interactions between five different cultures of programming. Rooted in disciplines such as mathematics, electrical engineering, business management or psychology, the different cultures of programming have exchanged ideas and given rise to novel programming concepts and methodologies. They have also clashed about the nature of programming; those clashes remain at the core of many questions about programming today. This title is also available as Open Access on Cambridge Core.
Motivated by the astonishing capabilities of large language models (LLMs) in text generation, reasoning, and the simulation of complex human behaviors, this paper proposes a novel multi-component LLM-based framework, LLM4ACOE, that fully automates the collaborative ontology engineering (COE) process using role-playing simulation of LLM agents and retrieval-augmented generation (RAG) technology. The proposed solution enhances the LLM-powered role-playing simulation with RAG ‘feeding’ the LLM with three types of external knowledge. Within a component-based framework, this knowledge corresponds to the knowledge required by each COE role (agent): (a) domain-specific, data-centric documents; (b) OWL documentation; and (c) ReAct guidelines. These components are evaluated in combination to investigate their impact on the quality of the generated ontologies. The aim of this work is twofold: (a) to assess the capacity of LLM-based agents to generate ontologies acceptable to human experts through agentic collaborative ontology engineering (ACOE) role-playing simulation, at specific levels of acceptance (accuracy, validity, and expressiveness), without human intervention; and (b) to investigate whether, and to what extent, the selected RAG components affect the quality of the generated ontologies. The evaluation of this novel approach is performed using ChatGPT-o in the domain of search and rescue (SAR) missions. To assess the generated ontologies, quantitative and qualitative measures are employed, focusing on coverage, expressiveness, structure, and human involvement.
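A highly simplified sketch of how such a role-playing loop with per-role retrieval might look (all names, prompts, example documents, and the keyword retriever below are illustrative assumptions, not the paper’s actual framework or API):

# Sketch of agentic COE with per-role RAG (illustrative assumptions only).
domain_docs = ["SAR victim triage procedures ...", "UAV search patterns ..."]
owl_docs = ["owl:Class declares a class ...", "rdfs:subClassOf relates classes ..."]
react_guidelines = ["Follow a Thought -> Action -> Observation loop ..."]

def retrieve(store, query, k=2):
    # Naive keyword overlap, standing in for a real vector-store retriever.
    return sorted(store, key=lambda d: -sum(w in d for w in query.split()))[:k]

def call_llm(prompt):
    raise NotImplementedError("plug in an LLM client here")

ROLES = [
    ("domain expert", domain_docs),             # (a) domain-specific documents
    ("ontology engineer", owl_docs),            # (b) OWL documentation
    ("project coordinator", react_guidelines),  # (c) ReAct guidelines
]

def coe_round(task, draft_ontology):
    """One collaborative round: each role revises the draft in turn."""
    for role, store in ROLES:
        context = "\n".join(retrieve(store, task))
        prompt = (f"You are the {role}.\nContext:\n{context}\n"
                  f"Current ontology (OWL):\n{draft_ontology}\nTask: {task}")
        draft_ontology = call_llm(prompt)  # role proposes a revised draft
    return draft_ontology

Each role sees only context retrieved from its own knowledge source, mirroring the per-role RAG components (a)-(c) above.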
To address the poor path quality and blind random search of the traditional Q-RRT* algorithm, this paper proposes an improved APF-QRRT* algorithm. The improved algorithm first uses Q-RRT* to obtain a set of discrete critical path points connecting the start and end points, and then exploits the local optimization capability of the artificial potential field (APF) to fine-tune the path, improving its smoothness and safety. The traditional Q-RRT* algorithm itself is also improved: a bidirectional search strategy with greedy node expansion enables the fast alternating expansion of two random trees, where the nearest node of each tree is used as the reference for that tree’s expansion during iterative path-node generation. Experimental results show that, in a complex environment, the improved APF-QRRT* algorithm reduces path planning time by 20.3%, path length by 1.8%, the number of path nodes by 33.3%, and the number of sampling points by 23.6% compared with the standard APF-QRRT* algorithm. A system test platform is also constructed and used to carry out multi-AGV (automated guided vehicle) path planning experiments in real environments; the results show that the proposed hybrid path planning algorithm performs well in practice.
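As a generic illustration of the APF refinement stage (a sketch with assumed gain values and point obstacles, not the paper’s exact formulation), each interior waypoint from the sampling-based planner can be nudged down a combined potential: an attractive term toward the midpoint of its neighbours keeps the path short and smooth, and a standard repulsive term pushes it away from nearby obstacles:

import numpy as np

def apf_refine(waypoints, obstacles, step=0.05, iters=50,
               k_att=1.0, k_rep=0.5, rho0=0.8):
    """Nudge interior waypoints down the potential gradient (illustrative)."""
    pts = np.array(waypoints, dtype=float)
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            # Attraction toward the neighbours' midpoint smooths the path.
            f = k_att * ((pts[i - 1] + pts[i + 1]) / 2 - pts[i])
            for obs in obstacles:
                d = np.linalg.norm(pts[i] - obs)
                if 1e-9 < d < rho0:
                    # Repulsion within the influence radius rho0.
                    f += k_rep * (1/d - 1/rho0) / d**2 * (pts[i] - obs) / d
            pts[i] += step * f
    return pts

In the hybrid scheme described above, a few such iterations smooth the discrete Q-RRT* waypoints while increasing their clearance from obstacles.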
This chapter explores collection analysis as a tool for proving some simple invariants about Horn clause logic programs. This static analysis of Horn clauses employs higher-order quantification and linear logic. This chapter introduces different types of collection approximations, such as multiset, set, and list approximations. The chapter also briefly mentions the automation of this analysis. Bibliographic notes provide pointers to relevant research in program analysis.
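As a schematic example of these approximations (not drawn from the chapter itself), a list can be abstracted by the multiset of its elements, forgetting order, or by the set of its elements, forgetting multiplicity as well:

\[
[a, b, a] \;\longmapsto\; \{\!\!\{\, a, a, b \,\}\!\!\} \;\;\text{(multiset)},
\qquad
[a, b, a] \;\longmapsto\; \{\, a, b \,\} \;\;\text{(set)}.
\]

An invariant established for the coarser approximation (for example, that a predicate preserves the multiset of items it manipulates) then holds of the original program.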
This chapter introduces linear logic, highlighting its unique approach to resource management. It presents sequent calculus proof systems for linear logic. The chapter discusses the polarity of logical connectives in linear logic and the concept of multi-zone sequents. It provides an informal semantics of resource consumption to illustrate the meaning of linear logic connectives. The chapter also touches upon the implementation of proof search in linear logic, mentioning techniques like lazy splitting of multisets. Bibliographic notes guide the reader to key literature on linear logic and its proof theory.
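A standard informal illustration of this resource reading (a common textbook example, not necessarily the chapter’s own) uses a vending machine: the additive conjunction & offers a choice, while the multiplicative conjunction ⊗ demands both resources:

\[
\mathit{dollar} \multimap (\mathit{coffee} \mathbin{\&} \mathit{tea})
\]

says one dollar buys a choice of coffee or tea, whereas

\[
(\mathit{dollar} \otimes \mathit{dollar}) \multimap (\mathit{coffee} \otimes \mathit{tea})
\]

says two dollars buy both; in each case the dollars are consumed by the linear implication ⊸.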
This chapter introduces the sequent calculus as a formal system for constructing proofs and as a basis for automated proof search. It contrasts the book’s approach with Gentzen’s original formulation. The chapter details the different types of inference rules within the sequent calculus, including structural rules (weakening, contraction), identity rules (initial, cut), and introduction rules for logical connectives. It distinguishes between additive and multiplicative rules. Finally, it defines sequent calculus proofs as trees of inference rules and briefly touches upon the properties of cut elimination. Bibliographic notes point to relevant literature.
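The additive/multiplicative distinction mentioned here can be seen in two right-introduction rules for conjunction (written in standard sequent notation; rule names vary by presentation): the additive rule shares the context \(\Gamma\) between premises, while the multiplicative rule splits it:

\[
\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B}\;\text{(additive)}
\qquad
\frac{\Gamma_1 \vdash A \qquad \Gamma_2 \vdash B}{\Gamma_1, \Gamma_2 \vdash A \wedge B}\;\text{(multiplicative)}
\]

In the presence of weakening and contraction the two formulations are interderivable; without them, as in linear logic, they define different connectives.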
Although deep reinforcement learning (DRL) techniques have been extensively studied in the field of robotic manipulators, there is limited research on directly mapping the output of policy functions to the joint space of manipulators. This paper proposes a DRL-based motion planning scheme for redundant manipulators to avoid obstacles, considering the actual shapes of obstacles in the environment. The scheme not only accomplishes the path planning task for the end-effector but also enables autonomous obstacle avoidance while producing the joint trajectories of the manipulator. First, a reinforcement learning framework based on the joint space is proposed. This framework uses the joint accelerations of the manipulator to compute the Cartesian coordinates of the end-effector through forward kinematics, thereby performing end-to-end path planning for the end-effector. Second, the distances between all links of the manipulator and irregular obstacles are calculated in real time using the Gilbert–Johnson–Keerthi (GJK) distance algorithm, and a reward function incorporating joint acceleration is constructed from these distances to realize the obstacle avoidance task of the redundant manipulator. Finally, simulations and physical experiments on a 7-degree-of-freedom manipulator demonstrate that the proposed scheme generates efficient, collision-free trajectories in environments with irregular obstacles.
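A hedged sketch of how such a reward might combine the ingredients the abstract names (goal tracking, link-obstacle clearance from GJK, and a joint-acceleration smoothness penalty); the weights, threshold, and function names below are illustrative assumptions, not the paper’s actual design:

import numpy as np

def reward(ee_pos, goal, min_obstacle_dist, qdd,
           w_goal=1.0, w_obs=0.5, w_acc=0.01, d_safe=0.10):
    """Illustrative reward: reach the goal, keep clear of obstacles,
    and keep joint accelerations small for smooth motion."""
    r = -w_goal * np.linalg.norm(ee_pos - goal)    # end-effector tracking
    if min_obstacle_dist < d_safe:                 # GJK-computed clearance
        r -= w_obs * (d_safe - min_obstacle_dist)  # penalise proximity
    r -= w_acc * np.sum(np.square(qdd))            # smoothness penalty
    return r

Here min_obstacle_dist stands for the smallest link-obstacle distance returned by the GJK algorithm at the current state.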
This chapter demonstrates how logic programming, particularly using linear logic, can be used to encode and reason about simple security protocols. It discusses specifying communication processes and protocols, including communication on a public network, static key distribution, and dynamic symbol creation. The chapter explores how to represent encrypted data as an abstract data type and model protocols as theories in linear logic. It also covers techniques for abstracting the internal states of agents and representing agents as nested implications. Bibliographic notes cite relevant work on formal methods for security protocol analysis.
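A schematic clause in the spirit of this encoding (illustrative, not quoted from the chapter) represents a pending network message as a linear atom net(·) and an agent step as a linear implication that consumes one message and produces another, here an agent that forwards whatever it receives encrypted under a key k:

\[
\forall m.\; \big(\mathit{net}(m) \multimap \mathit{net}(\mathit{enc}(k, m))\big)
\]

Because the hypothesis \(\mathit{net}(m)\) is linear, the original message is consumed by the step, naturally modeling a message being removed from the network when read.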
This chapter introduces the idea that computation can be viewed and reasoned about through the lens of proof theory. It highlights the historical context of defining computation, noting the equivalence of formal systems such as the lambda-calculus and Turing machines. The chapter discusses the benefits of using logic to specify computations, emphasizing that logics have universally accepted descriptions, which ensures a precise meaning for logic programs. It also outlines the book’s structure, dividing it into two parts: the first covers the proof-theoretic foundations of logic programming languages, and the second explores their applications. The chapter concludes with bibliographic notes.
This chapter presents sequent calculus proof systems for classical and intuitionistic logics, which are variations of Gentzen’s LK and LJ proof systems. It highlights the differences in their inference rules, particularly regarding the right-hand side of sequents. The chapter discusses the cut-elimination theorem for these logics and its significance. It also explores the increase in proof size that can result from eliminating cuts. Furthermore, the chapter considers the choices involved in proof search within these systems, distinguishing between don’t-know and don’t-care nondeterminism. Bibliographic notes direct the reader to relevant historical and contemporary works.
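The difference in the right-hand side of sequents mentioned here is visible in the right-introduction rule for implication: LK permits multiple formulas on the right, while LJ allows at most one (written in standard notation):

\[
\text{LK:}\;\; \frac{\Gamma, A \vdash B, \Delta}{\Gamma \vdash A \supset B, \Delta}
\qquad\qquad
\text{LJ:}\;\; \frac{\Gamma, A \vdash B}{\Gamma \vdash A \supset B}
\]

Requiring the extra context \(\Delta\) to be empty is what turns classical provability into intuitionistic provability in this setting.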
This chapter delves into the formal proof theory of focused proofs for linear logic. It defines paths in linear logic formulas, using them to describe right-introduction and left-introduction phases. The chapter proves the admissibility of the non-atomic initial rule and demonstrates the elimination of cut rules in the focused system for linear logic. Finally, it establishes the completeness of the focused proof system with respect to the unfocused proof system for linear logic. Readers primarily interested in the applications of linear logic programming can skip this chapter.