In recent years, the embodiment of AI in the form of robots has brought forth new challenges in privacy and transparency. A cognitive robot must integrate multiple tasks in its performance, which requires collecting large amounts of data and using a variety of AI techniques. As robotics technology continues to advance, socially assistive humanoid robots will play an increasingly central role in interacting with humans. However, can we trust robots in social contexts? How can we design embedded AI robots so that they are more transparent and trustworthy? And what technical, legal, and ethical frameworks might we adopt to build a trust relationship with robots? This chapter discusses these fundamental questions concerning privacy and transparency in human–robot interaction. It then proposes possible ethical and regulatory responses to address them.
With technological advancements occurring at a rapid pace in computing, robotics, and artificial intelligence, major changes have taken place in the robotics industry. These changes have led to what some have termed the “robotics revolution,” which has had a major impact on social organizations, the economy, and, as discussed in this chapter, the human rights of industry and service workers. The emergence of AI-enabled robotics has begun to change the world in major ways, challenging the law both within nation states and internationally. In that context, intelligent service and industrial robots have broad applications across the large and small industries that use robots as a source of labor. For example, intelligent service robots are used in healthcare and medicine, transportation, and the care of the elderly and children, while police and security services use robots for crowd control and surveillance. However, while developments in robotics have provided numerous benefits to society, they have also raised issues that challenge social, moral, and professional norms. As a result, the ever-increasing growth of robotic technology across industries is challenging current legal schemes in fundamental ways, one of which concerns human rights law. For example, the use of industrial and service robots can lead to employment insecurity, threats to the health and safety of workers, and privacy concerns. Further, the use of robots in industry and in the delivery of services can be inconsistent with other human rights, such as the right to health and safety, the right to equality of opportunity, the right to employment and fair working conditions, the right to life, the right to association, and the prohibition against discrimination, all of which are supported in international and regional human rights documents.
Artificial intelligence (AI) is presented as a portal to more liberative realities, but its broad implications for society, and for certain groups in particular, require more critical examination. This chapter takes a specifically Black theological perspective to consider the scepticism within Black communities around narrow applications of AI as well as more speculative ideas about these technologies, for example general AI. Black theology’s perpetual push towards Black liberation, combined with womanism’s invitation to participate in processes that reconstitute Black quality of life, has perfectly situated Black theological thought for discourse around artificial intelligence. Moreover, there are four particular categories where Black theologians and religious scholars have already broken ground and might be helpful to religious discourse concerning Blackness and AI. Those areas are white supremacy, surveillance and policing, consciousness, and God. This chapter engages several scholars and perspectives within the field of Black theology and points to potential avenues for future theological concern and exploration.
The use of robots and artificial intelligence is expanding and changing every day. These developments, especially in areas such as engineering, industry, education, and health, have begun to influence the legal world and have become the grounds for many important discussions on the future of law and technology. One of these debates, and the subject of this chapter, is the question of whether robot judges can take part in a trial. Although this problem was previously described as a “distant dream,” there are important examples showing that it is on its way to becoming a reality today. Considering developments in AI-enabled and humanoid robots, the following question is posed: “Can robot judges replace human judges?” As a current example, in China’s “Internet Courts,” the robot judge appears as a humanoid rendered as a 3D image of a woman, modeled on human judges. For this reason, it is important to consider the positive and negative consequences of robot judges in the legal world, a development likely to become widespread in the future, so that the law does not fall behind technological developments.
Chapter 7 highlights key concepts in Decentralized Finance (DeFi) and compares it to traditional finance. It discusses major DeFi applications such as decentralized exchanges, lending/borrowing platforms, derivatives, prediction markets, and stablecoins. DeFi offers advantages, including open access, transparency, programmability, and composability. It enables peer-to-peer financial transactions without intermediaries, unlocking financial inclusion, efficiency gains, and innovation. However, risks such as smart contract vulnerabilities, price volatility, regulatory uncertainty, and lack of accountability persist. As DeFi matures, enhanced governance, security audits, regulation, and insurance will be vital to address these challenges. DeFi is poised to reshape finance if balanced with prudence. Important metrics to track growth include total value locked, trading volumes, active users, and loans outstanding. Research tools such as Dune Analytics, DeFi Llama, and DeFi Pulse provide data-driven insights. Overall, DeFi represents a profoundly transformative blockchain application, but responsible evolution is key. The chapter compares DeFi to traditional finance and analyzes major applications, benefits, risks, and metrics in this emerging field.
We prove that any increasing sequence of real numbers with average gap $1$ and Poisson pair correlations has some gap that is at least $3/2+10^{-9}$. This improves upon a result of Aistleitner, Blomer, and Radziwiłł.
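For context, the following display gives the standard formulation of the pair-correlation condition referred to above (a sketch of the usual definition, not a statement drawn from the paper itself): an increasing sequence $(x_n)$ of real numbers with average gap $1$ is said to have Poisson pair correlations if, for every $s>0$,
\[
\lim_{N\to\infty}\frac{1}{N}\,\#\bigl\{\,1\le i\ne j\le N : |x_i-x_j|\le s\,\bigr\} \;=\; 2s.
\]
Under this definition, the result stated above asserts the existence of some index $n$ with $x_{n+1}-x_n\ge 3/2+10^{-9}$.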
Chapter 1 provides an overview of the concepts and definitions inherent to Web3. It presents a deep exploration into the phenomenon of "Convergence of Convergence," a term coined to denote the convergence of various dimensions within Web3, such as technology, data, user interactions, business models, identity, and organizational structures. The chapter also offers a comparative study of Web3 from different perspectives – tracing its evolution in the Internet era, analyzing its implications for user experience, evaluating its regulatory aspects, and understanding its scalability. Each of these aspects is explored in a detailed, standalone section, allowing readers to comprehend the multifaceted nature of Web3. The overarching aim of this chapter is to foster a comprehensive understanding of Web3, delineating its significance as a major shift in the Internet paradigm and its potential for creating more decentralized, user-empowered digital ecosystems.
Chapter 12 is the conclusion. It presents a discussion of how the components of performance evaluation for learning algorithms discussed throughout the book unify into an overall framework for in-laboratory evaluation. This is followed by a discussion of how to move from a laboratory setting to a deployment setting based on the material covered in the last part of the book. We then discuss the potential social consequences of machine learning technology deployment together with their causes, and advocate for the consideration of these consequences as part of the evaluation framework. We follow this discussion with a few concluding remarks.
Property-based testing (PBT) is a technique for validating code against an executable specification by automatically generating test data. We present a proof-theoretical reconstruction of this style of testing for relational specifications and employ the Foundational Proof Certificate framework to describe test generators. We do this by encoding certain kinds of “proof outlines” as proof certificates that can describe various common generation strategies in the PBT literature, ranging from random to exhaustive, including their combination. We also address the shrinking of counterexamples as a first step toward their explanation. Once generation is accomplished, the testing phase is a standard logic programming search. After illustrating our techniques on simple, first-order (algebraic) data structures, we lift them to data structures containing bindings by using the $\lambda$-tree syntax approach to encode bindings. The $\lambda$Prolog programming language can perform both the generation and the checking of tests using this approach to syntax. We then further extend PBT to specifications in a fragment of linear logic.
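As a generic illustration of the workflow the abstract refers to (random generation of test data against an executable specification, followed by shrinking of counterexamples), here is a minimal sketch in Python using the hypothesis library. This is only an illustrative analogue, not the relational, proof-certificate-based framework developed in the paper, and the property and names below are invented for the example.

# Minimal PBT sketch using the Python "hypothesis" library (illustrative only;
# the paper itself works with relational specifications and lambda-Prolog).
from hypothesis import given, strategies as st

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_reverse_of_append(xs, ys):
    # Deliberately wrong executable specification: the correct law is
    # reverse(xs + ys) == reverse(ys) + reverse(xs).
    assert list(reversed(xs + ys)) == list(reversed(xs)) + list(reversed(ys))

if __name__ == "__main__":
    # hypothesis generates random inputs, finds a failing case, and shrinks it
    # to a small counterexample such as xs=[0], ys=[1].
    test_reverse_of_append()

An exhaustive strategy, also covered by the certificate framework described in the abstract, would instead enumerate all inputs up to a size bound rather than sample them at random.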
Large-scale atmospheric circulation patterns, so-called weather regimes, modulate the occurrence of extreme events such as heatwaves or extreme precipitation. In their role as mediators between long-range teleconnections and local impacts, weather regimes have demonstrated potential in improving long-term climate projections as well as sub-seasonal to seasonal forecasts. However, existing methods for identifying weather regimes are not specifically designed to capture the relevant physical processes responsible for variations in the impact variable in question. This paper introduces a novel probabilistic machine learning method, RMM-VAE, for identifying weather regimes targeted to a local-scale impact variable. Based on a variational autoencoder architecture, the method combines non-linear dimensionality reduction with a prediction task and probabilistic clustering in one coherent architecture. The new method is applied to identify circulation patterns over the Mediterranean region targeted to precipitation over Morocco and compared to three existing approaches: two established linear methods and another machine-learning approach. The RMM-VAE method identifies regimes that are more predictive of the target variable compared to the two linear methods, both in terms of terciles and extremes in precipitation, while also improving the reconstruction of the input space. Further, the regimes identified by the RMM-VAE method are also more robust and persistent compared to the alternative machine learning method. The results demonstrate the potential benefit of the new method for use in various climate applications such as sub-seasonal forecasting, and illustrate the trade-offs involved in targeted clustering.
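To make the architectural idea concrete, the sketch below shows in PyTorch how a variational autoencoder can be combined with a prediction head so that the latent representation is “targeted” to an impact variable. This is an illustrative assumption of such a design, not the authors’ RMM-VAE: the probabilistic clustering component (e.g. a mixture prior over the latent space) is omitted, and all names and dimensions are invented for the example.

# Illustrative sketch only: a VAE whose latent space is additionally trained to
# predict a target variable y (e.g. local precipitation). Not the authors' RMM-VAE.
import torch
import torch.nn as nn

class TargetedVAE(nn.Module):
    def __init__(self, x_dim, z_dim=8, y_dim=1, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        # Prediction head: encourages the latent space to be informative about
        # the impact variable, i.e. "targeted" dimensionality reduction.
        self.pred = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, y_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), self.pred(z), mu, logvar

def loss_fn(x, y, x_hat, y_hat, mu, logvar, beta=1.0, gamma=1.0):
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()                         # reconstruction
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()  # KL to N(0, I)
    pred = ((y - y_hat) ** 2).mean()                                     # target prediction
    return recon + beta * kl + gamma * pred

# Toy usage: x is a flattened circulation field, y a local impact variable.
model = TargetedVAE(x_dim=100)
x, y = torch.randn(32, 100), torch.randn(32, 1)
x_hat, y_hat, mu, logvar = model(x)
loss_fn(x, y, x_hat, y_hat, mu, logvar).backward()

In a clustering-oriented variant such as the one the abstract describes, the standard Gaussian prior would be replaced by a mixture model over the latent space so that regime assignments emerge as posterior cluster probabilities.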