This chapter introduces the reader to the main ideas that have enabled various advancements in Artificial Intelligence (AI) and Machine Learning (ML). Using various examples, and taking the reader on a journey through history, it showcases how the main ideas developed by the pioneers of AI and ML are being used in our modern era to make the world a better place. It shows that our lives are surrounded by algorithms that work based on a few main ideas. It also discusses recent advancements in Generative AI, including the main ideas that led to the creation of Large Language Models (LLMs) such as ChatGPT. The chapter also discusses various societal considerations in AI and ML and ends with technological advancements that could further improve our ability to use these main ideas.
This chapter serves as the book’s culminating exploration, synthesizing its core arguments and offering a critical evaluation of the Gulf states’ transformative responses to the global imperative of decarbonization. Through an assessment of historical trends, economic projections, and potential shifts in geopolitical power dynamics, the chapter constructs a comprehensive outlook for the Gulf region within a rapidly evolving global energy landscape. Notably, the chapter focuses on the critical 10–20-year window during which the Gulf states must strategically navigate the complexities, capitalize on the opportunities, and effectively address the multifaceted challenges posed by decarbonization. Importantly, this chapter offers penetrating insights into the potential challenges awaiting the Gulf states. By posing the essential questions that policymakers must confront, it provides a conceptual roadmap for developing proactive strategies designed to address these challenges head-on. This focus on foresight and strategic management is fundamental to the chapter’s significance.
The cognitive approach sees behaviour as resulting from the operation of internal mental processes. Our visual systems did not evolve to present us with a true description of the world; rather, they evolved to give us a useful description of the world that supports our actions upon it. We can see this in perceptual constancies, in which a changing world is stabilized by the actions of our visual system, sometimes resulting in visual illusions. Although reasoning problems such as the under-age drinking version of the Wason selection task are often thought of as problems of pure logic (like the abstract Wason task), they are perhaps better thought of as problems of duties and obligations, playing a role in detecting free-riders to better enable cooperation. Statistical misconceptions such as the gambler’s fallacy and the hot hand fallacy may arise from our sensitivity to the patchiness of the world that we inhabit.
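To make the gambler’s fallacy concrete, here is a minimal simulation (not from the chapter; the coin, streak length, and flip count are illustrative) showing that after a run of five heads, a fair coin still lands heads about half the time:

```python
import random

random.seed(0)
FLIPS, STREAK = 1_000_000, 5

heads_after_streak = total_after_streak = 0
run = 0  # length of the current run of heads
for _ in range(FLIPS):
    heads = random.random() < 0.5  # fair coin
    if run >= STREAK:              # we just saw 5 heads in a row
        total_after_streak += 1
        heads_after_streak += heads
    run = run + 1 if heads else 0

print(f"P(heads | {STREAK} heads in a row) ≈ "
      f"{heads_after_streak / total_after_streak:.3f}")  # ≈ 0.500
```

The streak carries no predictive information, which is exactly what the fallacy denies.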
The last decade has seen an exponential increase in the development and adoption of language technologies, from personal assistants such as Siri and Alexa, through automatic translation, to chatbots like ChatGPT. Yet questions remain about what we stand to lose or gain when we rely on them in our everyday lives. As a non-native English speaker living in an English-speaking country, Vered Shwartz has experienced both amusing and frustrating moments using language technologies: from relying on inaccurate automatic translation, to failing to activate personal assistants with her foreign accent. English is the world's go-to language for communication, and mastering it beyond the point of literal translation requires acquiring not only vocabulary and grammar rules, but also figurative language, cultural references, and nonverbal communication. Will language technologies aid us in the quest to master foreign languages and better understand one another, or will they make language learning obsolete?
The implementation of artificial intelligence (AI) tools in clinical settings underscores the critical need for an AI-competent healthcare workforce that can interpret AI output and identify its limitations. Without comprehensive training, there is a risk of misapplication, mistrust, and underutilization. Workforce skill-development events such as workshops and hackathons can increase AI competence and foster the interdisciplinary collaboration needed to promote optimal patient care.
Methods:
The University of Florida hosted the AI for Clinical Care (AICC) workshop in April 2024 to address the need for AI-competent healthcare professionals. The hybrid workshop featured beginner and advanced tracks with interactive sessions, hands-on skill development, and networking opportunities led by experts. An anonymous, voluntary post-workshop survey asked participants to score their knowledge and skills before and after the AICC workshop. A second, follow-up survey was administered approximately nine months later.
Results:
Ninety participants attended the AICC workshop; forty-one completed the post-workshop survey, and six completed the follow-up survey. Paired t-tests on the post-workshop survey revealed statistically significant (P < .001) increases in self-reported knowledge across all six beginner-track learning objectives and significant (P < .05) increases across all five advanced-track objectives (a minimal sketch of this paired pre/post analysis follows the abstract). Feedback indicated that participants appreciated the interactive format, although communication and networking needed improvement.
Conclusion:
The AICC workshop successfully advanced AI literacy among biomedical professionals and promoted collaborative peer networks. Continued efforts are recommended to enhance participant engagement and ensure equitable access to AI education in clinical settings.
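As referenced in the Results above, here is a minimal sketch of the kind of paired pre/post analysis the abstract describes. The scores below are invented for illustration; the actual AICC survey responses are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post self-rated scores (1-5 Likert) for one learning
# objective, one row per respondent.
pre  = np.array([2, 3, 2, 1, 3, 2, 4, 2, 3, 2])
post = np.array([4, 4, 3, 3, 5, 4, 4, 3, 4, 4])

t, p = stats.ttest_rel(post, pre)  # paired t-test on before/after scores
print(f"t = {t:.2f}, p = {p:.4f}")
```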
After acquiring sufficient vocabulary in a foreign language, learners start understanding parts of conversations in that language. Speaking, in contrast, is a harder task. Forming grammatical sentences requires choosing the right tenses and following syntax rules. Every beginner speaker of English as a foreign language (EFL) makes grammar errors – and the type of grammar errors can reveal hints about their native language. For instance, Russian speakers tend to omit the determiner “the” because Russian has no articles. One linguistic phenomenon that is actually easier in English than in many other languages is grammatical gender. English doesn’t assign gender to inanimate nouns such as “table” or “cup.” A few years ago, the differences in grammatical gender between languages helped reveal societal gender bias in automatic translation: translation systems that were shown gender-neutral statements in Turkish about doctors and nurses assumed that the doctor was male while the nurse was female.
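One way to reproduce the kind of probe described above is to translate gender-neutral Turkish sentences and inspect the pronouns an off-the-shelf model chooses. This is a sketch of the probing method, not the original study; it assumes the public Helsinki-NLP/opus-mt-tr-en checkpoint from Hugging Face, and any Turkish-to-English system could be swapped in.

```python
from transformers import pipeline

# Turkish "o" is a gender-neutral third-person pronoun, so the English
# pronoun is entirely the model's choice.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

for sentence in ["O bir doktor.", "O bir hemşire."]:  # doctor / nurse
    result = translator(sentence)[0]["translation_text"]
    print(f"{sentence!r} -> {result!r}")
# Systems have historically rendered these as "He is a doctor." /
# "She is a nurse.", exposing the bias described above.
```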
At what time does the afternoon start: 1 p.m. or 3 p.m.? Language understanding requires the ability to correctly match statements to their real-world meaning. This mapping process is a function of the context, which includes factors such as location and time as well as the speaker’s and listeners’ backgrounds. For example, an utterance like “It is hot today” would mean different things were it expressed in Death Valley versus Alaska. Based on their backgrounds and experiences, people interpret time expressions, color descriptions, geographic expressions, qualities, relative expressions, and more in different ways. This ability to map language to real-world meaning is also required of the language technology tools we use. For example, translating a recipe that contains instructions to “preheat the oven to 180 degrees” requires a translation system to infer the implicit scale (e.g., Celsius versus Fahrenheit) from the source language and the user’s location. To date, no automatic translation system can do this, and there is little “grounding” in any widely used language technology tool.
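A toy sketch of the missing “grounding” step for the oven example might look like the following. All names and the locale table are hypothetical; no deployed translation system works this way.

```python
# Hypothetical locales whose recipes conventionally use Celsius.
CELSIUS_LOCALES = {"fr", "de", "he", "tr"}

def ground_oven_temp(value: float, source_lang: str, user_locale: str) -> str:
    """Resolve an unlabelled oven temperature to the reader's scale."""
    source_is_celsius = source_lang in CELSIUS_LOCALES
    if source_is_celsius and user_locale == "en-US":
        return f"{value * 9 / 5 + 32:.0f}°F"   # convert C -> F for US readers
    return f"{value:.0f}°C" if source_is_celsius else f"{value:.0f}°F"

print(ground_oven_temp(180, "fr", "en-US"))  # 356°F
```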
Non-compositional phrases such as “by and large” are phrases whose meaning cannot be derived simply from the meanings of the words that make them up. In particular, figurative expressions – such as idioms, similes, and metaphors – are ubiquitous in English. Among other reasons, figurative expressions are acquired late in the language-learning journey because they often encode the cultural conventions and social norms of the people speaking the language. Figurative expressions are especially prevalent in creative writing, acting as the spice that adds flavor to the text. Artificial intelligence (AI) writing assistants such as ChatGPT are now capable of editing raw drafts into well-written pieces, to the advantage of native and non-native speakers alike. These AI tools, which have gained their writing skills from exposure to vast amounts of online text, are extremely adept at generating text similar to what they have been exposed to. Unfortunately, they have demonstrated shortcomings in creative writing that requires deviating from the norm.
Language learning is often regarded as beneficial for developing a higher level of empathy and cultural appreciation. When we connect with people from a different linguistic background than ours, we can catch a glimpse of the rich cultural and linguistic mosaic that makes up our world – and incorporate these insights into our perspective on humanity. We also recognize certain compromises that EFL speakers face when they make English their dominant day-to-day means of communication. One is the loss of proficiency in their native language, which can include forgetting words and code-switching to English; another is a change in identity, as we adapt our sense of self to each language we speak. Examining these crises of language and identity can help us map out a future for how we want to communicate – and for how language learning and language technologies can help us realize our vision.
Euphemisms, a type of idiom especially prevalent in American English, are vague or indirect expressions that substitute for harsh, embarrassing, or unpleasant terms. They are widely used to navigate sensitive topics like death and sex. “Passing away,” for example, has long been an accepted term to describe the act of dying. Once a euphemism has been in use long enough to become lexicalized, it is often replaced with a new one, a phenomenon known as “the euphemism treadmill.” Correctly interpreting and using euphemisms can be difficult for EFL learners – and can lead to misuse, since these expressions often rely on cultural knowledge. That is unfortunate, given that euphemisms carry sensitive meanings. Artificial intelligence (AI) writing assistants can now go beyond grammar correction to suggest edits for more inclusive language, such as replacing “whitelist” with “allow-list” and “landlord” with “property owner.” Such suggestions can help inform EFL speakers and users from diverse cultures – who carry different cultural baggage – of unintended bias in their writing. At the same time, these assistants also run the risk of erasing individual and cultural differences.
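The suggestion behaviour described above can be approximated with a deliberately simple, rule-based pass; real writing assistants use far richer context than a lookup table. The term list below includes the substitutions named in the abstract plus one extra illustrative entry.

```python
import re

SUGGESTIONS = {
    "whitelist": "allow-list",
    "blacklist": "deny-list",
    "landlord": "property owner",
}

def suggest_edits(text: str) -> list[str]:
    """Return inclusive-language suggestions for terms found in text."""
    return [
        f'consider "{alt}" instead of "{term}"'
        for term, alt in SUGGESTIONS.items()
        if re.search(rf"\b{term}\b", text, flags=re.IGNORECASE)
    ]

print(suggest_edits("Add the domain to the whitelist and email the landlord."))
```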
Apart from the words we speak or write, nonverbal communication – such as tone of voice, facial expressions, eye contact, and gestures – also differs across cultures. For example, travel guides for Italy like to warn against using the 🤌 hand gesture commonly signaling “wait” in many countries, because Italians interpret this gesture as, “What the hell are you saying?” Tech companies are now dipping their toes into analyzing users’ behavior as expressed in nonverbal communication. For example, Zoom is providing business customers with AI tools that can determine users’ emotions during video calls based on facial expressions and tone of voice. Unless companies carefully consider cultural differences, the ramifications could be more algorithmic bias and discrimination.
While what is said can be difficult to understand, what is not said may pose an even bigger challenge. Language is efficient, so what goes without saying is often simply not said. It is left to the reader or listener to interpret underspecified language and resolve ambiguities, a task we perform seamlessly using our personal experience, knowledge about the world, and commonsense reasoning abilities. In many cases, commonsense knowledge helps EFL learners compensate for low language proficiency. However, what is considered “commonsense” is not always universal. Some commonsense knowledge, especially pertaining to social norms, differs between cultures. Can language technologies help bridge this cultural gap? It depends. Chatbots like ChatGPT seem to have broad knowledge about every possible topic in the world. However, ChatGPT learned about the world by reading the English text on the web, which comes primarily from the US, and thus it has a North American lens. In addition, despite being “book smart,” it still lacks the basic commonsense reasoning abilities that we employ to understand social interactions and navigate the world around us.
Automatic translation tools like Google Translate have improved immensely in recent years. Older translation technology generated multiple prospective word-by-word translations and selected the candidate that sounded most natural in the target language. Current tools, in contrast, learn a sentence-level translation function from human translations. Although they are very useful, automatic translation tools don’t work equally well for every pair of languages and every genre and topic. For this reason, automatic translation has not yet made second language acquisition obsolete. Mastering English means being able to think in English rather than translating your thoughts from your native language. The language of our thoughts affects our word choice and grammatical constructions, so going through another language might result in incorrect or unnatural sentences. Choosing the right English words involves obstacles such as mispronunciation, malapropism, and inappropriate contexts.
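A schematic sketch of the older approach: generate several candidate translations, then keep the one a target-language model scores as most fluent. The candidates and scores below are invented stand-ins for a real language model’s log-probabilities.

```python
# Higher score means "sounds more natural" in the target language.
TOY_LM_SCORES = {
    "I have cold": -9.2,
    "I am cold": -3.1,
    "It is cold to me": -6.8,
}

def lm_score(sentence: str) -> float:
    return TOY_LM_SCORES[sentence]

candidates = list(TOY_LM_SCORES)      # word-by-word translation guesses
best = max(candidates, key=lm_score)  # keep the most fluent candidate
print(best)  # "I am cold"
```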
Although the internet has removed geographical boundaries, transforming the world into a global village, English is still the dominant language online. New forms of online communication such as emoji and memes have become an integral part of internet language. While it’s tempting to think of such visual communication formats as removing cultural barriers – after all, emoji look like a universal alphabet – their interpretation may rely on cultural references.
This paper examines how providers of specialized Large Language Models (LLMs) pre-trained and/or fine-tuned on medical data conduct risk management: defining, estimating, mitigating, and monitoring safety risks under the EU Medical Device Regulation (MDR). Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current risk management process in the MDR, which entails a “forward-walking” approach: providers articulate the medical device’s clear intended use and then move sequentially through the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement to articulate an intended use and sidesteps providers’ reasoning about the risks of specialised LLMs. It inadvertently introduces different intended users, new hazards for risk control, and new use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is to show that the MDR risk management framework requires a backward-walking logic. This concept, similar to the notion of “backward reasoning” in computer science, entails sub-goals for providers: examine a system’s intended user(s), the risks of new hazards, and the different use cases, and then reason about the task-specific options, inherent risks at scale, and trade-offs for risk management.
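For readers unfamiliar with “backward reasoning,” a minimal backward-chaining sketch illustrates the goal-to-sub-goal logic the paper borrows from computer science. The rules and facts below are invented for illustration and are not the paper’s actual sub-goals.

```python
# Each goal is proved by proving one of its bodies (a conjunction of
# sub-goals), starting from the goal and recursing backwards.
RULES = {
    "safe_for_intended_use": [["intended_users_defined", "hazards_controlled"]],
    "hazards_controlled": [["risks_estimated", "mitigations_in_place"]],
}
FACTS = {"intended_users_defined", "risks_estimated", "mitigations_in_place"}

def prove(goal: str) -> bool:
    """Backward chaining: recurse from the goal into its sub-goals."""
    if goal in FACTS:
        return True
    return any(all(prove(sub) for sub in body) for body in RULES.get(goal, []))

print(prove("safe_for_intended_use"))  # True
```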
The rise of GenAI (generative AI) tools such as ChatGPT has transformed the research environment, yet most legal researchers remain untrained in the theory, mechanics, and epistemic structure of such systems. The public was introduced to GenAI through Generative Pre-trained Transformer tools such as ChatGPT and Claude. Although AI is a decades-old academic discipline, it is now rapidly expanding, and LLM-based AI tools (called Legal Research AI Tools, or LRATs hereinafter, such as Lexis+ AI) sit at AI’s cutting edge within legal research. These LRATs rely on theoretical and informational concepts and technologies from outside the law to function. Legal researchers often struggle to understand how AI-enabled tools work, which makes effective, reliable use of them more difficult. Without proper orientation, legal professionals risk using LRATs with misplaced confidence and insufficient clarity, the implications of which will be addressed in a future article. This article, written by Ryan Marcotte, Reference, Instruction, & Scholarship law librarian at DePaul University’s College of Law in Chicago, Illinois, defines and explains AI-assisted legal research (AIALR) as a third phase of research logic following the traditional book-based legal research (BLR) and computer-assisted legal research (CALR) phases. It also introduces a definition of AI tailored for legal research, outlines the key conceptual structures underpinning LRATs, and explains how they interpret human input. From this grounding, the article offers two frameworks: (1) the Five Ps Research Plan and (2) the four prompt engineering methodologies of Retrieval Augmented Generation, Few-Shot Prompting, Chain-of-Thought/Chain-of-Logic, and Prompt Chaining. Together, these frameworks equip legal researchers with the understanding and skills to plan, shape, and evaluate their research interactions with LRATs in the age of GenAI.
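As an illustration of one of the four methodologies named above, here is a prompt-chaining skeleton. The ask function is a hypothetical stand-in for any chat-completion API; this is a sketch of the pattern, not code from any LRAT.

```python
def ask(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call to any provider."""
    raise NotImplementedError("wire this to your model of choice")

def chained_research(question: str) -> str:
    """Prompt chaining: each step's output feeds the next prompt."""
    issues = ask(f"List the legal issues raised by: {question}")
    terms = ask("For each issue below, suggest search terms for "
                f"controlling authority:\n{issues}")
    return ask(f"Draft a research plan for {question!r} using these "
               f"issues and search terms:\n{issues}\n{terms}")
```

Breaking the task into chained steps lets the researcher inspect and correct each intermediate output before it shapes the next prompt.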
There is no doubt that we are now in the midst of an AI-driven revolution in how organisations and their employees work with information. The power of recent GenAI and other deep learning technologies to absorb and process massive amounts of data, and to generate new information in response to natural language prompts, has obvious implications for knowledge work. Current developments in more autonomous agentic AI systems, alongside the commodification of large language models (LLMs) and reduced barriers to entry for application developers, will drive a second wave of innovation over the coming five years. This will cause disruption for many organisations and the workers within them, but such changes seem inevitable. Preparing now to work with these technologies, to seize the opportunities they present, and to mitigate the problems they bring is essential. The opportunities for many information professionals are significant, as effectively managing data assets holds the key to competitive advantage in this rapidly changing environment. Here Dr Martin De Saulles, a technology analyst and writer (see page 123 for a review of his new book The AI and Data Revolution: Understanding the New Data Landscape), goes through some of the key points relating to the evolution of AI for those who work with information.
In recent years, the rapid convergence of artificial intelligence (AI) and low-altitude flight technology has driven significant transformations across various industries. These advancements have shown immense potential in areas such as logistics distribution, urban air mobility (UAM), and national defense. By adopting AI, low-altitude flight technology can achieve high levels of automation and operate in coordinated swarms, thereby enhancing efficiency and precision. However, as these technologies become more pervasive, they also raise pressing ethical concerns, particularly regarding privacy and public safety, as well as the risks of militarisation and weaponisation. These issues have sparked extensive debate. While the integration of AI and low-altitude flight presents revolutionary opportunities, it also introduces complex ethical challenges. This article explores these opportunities and challenges in depth, focusing on privacy protection, public safety, military applications, and legal regulation, and proposes strategies to ensure that technological advancements remain aligned with ethical principles.
Generative AI tools such as ChatGPT have demonstrated impressive capabilities in summarisation and content generation. However, they are infamously prone to hallucination: fabricating plausible information and presenting it as fact. In the context of legal research, this poses significant risks. This paper, written by Sally McLaren and Lily Rowe, examines how widely available AI applications respond to fabricated case citations and assesses their ability to identify false cases, the nature of their summaries, and any commonalities in their outputs. Using a non-existent citation, we analysed responses from multiple AI models, evaluating accuracy, detail, structure, and the inclusion of references. Results revealed that while some models flagged our case as fictitious, others generated convincing but erroneous legal content, occasionally citing real cases or legislation. The experiment underscores concerns about AI’s credibility in legal research and highlights the role of legal information professionals in mitigating risks through user education and AI literacy training. Practical engagement with these tools is crucial to understanding the user experience. Our findings serve as a foundation for improving AI literacy in legal research.
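The probe the paper describes can be expressed schematically as follows. query_model is a hypothetical stand-in for whichever model is under test, and the hedge markers are illustrative; the authors’ actual evaluation criteria were richer than a keyword check.

```python
# Phrases suggesting the model declined to treat the citation as real.
HEDGE_MARKERS = ("cannot find", "no record", "fictitious", "unable to verify")

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for the model under test."""
    raise NotImplementedError("wire this to the model being evaluated")

def flags_fabricated_case(citation: str) -> bool:
    """Ask for a case summary and check whether the reply hedges."""
    reply = query_model(f"Summarise the facts and holding of {citation}.")
    return any(marker in reply.lower() for marker in HEDGE_MARKERS)
```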
Large Language Models (LLMs) could facilitate more efficient administrative decision-making on the one hand, and better access to legal explanations and remedies for individuals affected by administrative decisions on the other. However, how performant such domain-specific models could be remains an open research question. Furthermore, they pose legal challenges, touching especially upon administrative law, fundamental rights, data protection law, AI regulation, and copyright law. The article provides an introduction to LLMs, outlines potential use cases for such models in the context of administrative decisions, and presents a non-exhaustive introduction to the practical and legal challenges that require in-depth interdisciplinary research. A focus lies on open practical and legal challenges with respect to legal reasoning through LLMs. The article sets out the circumstances under which administrations can fulfil their duty to provide reasons with LLM-generated reasons. It highlights the importance of human oversight and the need to design LLM-based systems in a way that enables users such as administrative decision-makers to oversee them effectively. Furthermore, the article addresses the protection of training data and trade-offs with model performance, bias prevention, and explainability, highlighting the need for interdisciplinary research projects.