Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
This chapter characterizes violent extremism as an ideology, and associated communication-based or overt behavior, that protects, promotes, advances, and defines a group’s social identity, and is implicitly or actually violent. It presents a social identity theory and, primarily, an uncertainty-identity theory account of how normal social identity-based group and intergroup behaviors can become violently extreme. Social identity processes are driven by people’s motivation to (a) secure a favorable sense of self through belonging to high-status groups, and (b) reduce uncertainty about themselves and who they are through identification with distinctive groups with unambiguously defined identities. In the former case, people strive to protect or improve their group’s status relative to other groups, and when moderate nonviolent strategies are continuously thwarted, they can reconfigure their group’s identity to incorporate and promote violent extremism. In the latter case, people strive to resolve feelings of self-uncertainty by identifying with distinctive groups, and when intergroup distinctiveness is blurred and their group’s social identity becomes fuzzy, they are attracted to ethnocentrism, populist ideology, autocratic leaders, and ultimately violent extremism. The chapter ends by identifying warning signs of radicalization and intervention principles.
This chapter provides an outline analysis of the evolving governance framework for Artificial Intelligence (AI) in the island city-state of Singapore. In broad terms, Singapore’s signature approach to AI governance reflects both its broader governance culture, which harnesses the productive energy of free-market capitalism within clear guardrails, and the dual nature (as both regulator and development authority) of Singapore’s lead public agency in AI policy formulation. Singapore’s approach holds interest for other jurisdictions in the region and around the world, and it can already be observed to have influenced the recent Association of Southeast Asian Nations (ASEAN) Guide on AI Governance and Ethics, which was promulgated in early 2024.
We offer an integration of temporal approaches to the psychology of violent extremism. Focusing on the role of remembering, we draw attention to how memories and perceptions of the past motivate the use of violence in the present. Reminiscing about a glorious past elicits nostalgia, which, in turn, may increase present-day feelings of relative deprivation, collective angst, and threat. Furthermore, remembering historical perpetrators instills threat perceptions and negative intergroup emotions, whereas remembering past victimization elicits moral entitlement, thereby justifying more extreme means. We explore how different imaginings of the future – for the self and community – function as a double-edged sword, either fueling or preventing radicalization in the present. Imagining can stimulate utopian or dystopian visions, which, in turn, may encourage mobilization of more extreme means by instilling a sense of legitimacy and hope in the case of utopias, and of moral obligation and urgency in the case of dystopias to be prevented. However, imagining can also elicit a realistic, positive future outlook for the self and wider community, instead functioning as a protective shield against radicalization into violent extremism. We conclude with primary, secondary, and tertiary prevention recommendations based on our temporal approach, aimed at policymakers and key stakeholders, and with avenues for future research.
This chapter explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. It examines the technical characteristics and capabilities of generative AIs that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, and algorithmic bias. It surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of a patchwork approach to privacy regulation, the overreliance on notice and choice, the barriers to transparency and accountability, and the inadequacy of individual rights and recourse. The chapter outlines critical elements of a new paradigm for generative AI privacy governance that recognizes collective and systemic privacy harms, institutes proactive measures, and imposes precautionary safeguards, emphasizing the need to recognize privacy as a public good and collective responsibility. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States, most notably the polarization that prevents the enactment of comprehensive federal privacy legislation, the strong commitment to free speech under the First Amendment, and the “permissionless” innovation approach that has historically characterized U.S. technology policy.
John Goodlad’s work and life energies have had a profound impact on public and private education in America. His influence has been far reaching. This chapter presents a brief account of his life and major accomplishments for the purpose of helping all of us who work in school and university partnerships to better understand and appreciate his many contributions. It should be clear, after reading this, how indebted the field is to him and how inspiring his efforts remain for those of us who continue the struggle to provide a quality education for all children and youth.
Through school–university partnerships (SUPs), individuals and organizations collaborate across the long-standing boundaries that exist between preschool through high school (p-12) and postsecondary education. Partnerships between institutions of higher education and schools take many forms and exist for many purposes; SUPs are boundary-spanning collaborative efforts that require individuals and groups to cross systemic divides in the United States educational system (Burns & Baker, 2016; Zeichner, 2010). In the first half of this chapter, we explore a broad definition for SUPs, define types of SUPs and briefly trace their development since the late 1800s. In the second half of this chapter, we apply three aspects of critical race theory (CRT) to SUPs, considering how SUPs might be facilitated to intentionally pursue racial equity.
The chapter examines the legal regulation and governance of ‘generative AI,’ ‘foundation AI,’ ‘large language models’ (LLMs), and the ‘general-purpose’ AI models of the AI Act. Attention is drawn to two potential sorcerer’s apprentices, namely, in the spirit of J. W. Goethe’s poem, people who were unable to control a situation they created. Focus is on developers and producers of such technologies, such as LLMs that bring about risks of discrimination and information hazards, malicious uses, and environmental harms; furthermore, the analysis dwells on the normative attempt of EU legislators to govern misuses and overuses of LLMs with the AI Act. Scholars, private companies, and organisations have stressed the limits of such normative attempts. In addition to issues of competitiveness and legal certainty, bureaucratic burdens, and standard development, the threat is over-frequent revision of the law to keep pace with advancements of technology. The chapter traces this threat since the inception of the AI Act and recommends ways in which the challenges of technological innovation can be addressed without continuously amending the law.
We explore the promise and possibility of innovation in professional development schools (PDSs). Based on a systematic review of 351 articles on school–university partnerships, this chapter provides an analysis as well as illustrations of professional development school innovation. Our analysis points to three gears of innovation: the PDS itself as the initial innovation, the infusion of inquiry and research within the PDS as a second level of innovation, and a third level characterized as innovative outcomes. These outcomes relate to innovation (1) as collaboration that fills a PK-12 learning gap and complements PK-12 instruction, (2) that supports the redesign of teacher education to strengthen learning through clinical practice and build program coherence, (3) in job-embedded professional learning, and (4) related to expanding the scope of partnerships. We conclude by highlighting a series of insights gained from the analysis and identifying future possibilities and challenges for PDSs.
Generative AI offers a new lever for re-enchanting public administration, with the potential to contribute to a turning point in the project to ‘reinvent government’ through technology. Its deployment and use in public administration raise the question of its regulation. Adopting an empirical perspective, this chapter analyses how the United States of America and the European Union have regulated the deployment and use of this technology within their administrations. This transatlantic perspective is justified by the fact that these two entities have been very quick to regulate the deployment and use of this technology within their administrations. They are also considered emblematic actors in the regulation of AI. Finally, they share a common basis in terms of public law, namely their adherence to the rule of law. In this context, the chapter highlights four approaches to regulating the deployment and use of generative AI in public administration: command and control, the risk-based approach, the experimental approach, and the management-based approach. It also highlights the main legal issues raised by the use of such technology in public administration and the key administrative principles and values that need to be safeguarded.
The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking the further development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures that impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.
Generative artificial intelligence (GenAI) raises ethical and social challenges that can be examined according to a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. The main pros concern technological innovation, economic development, and the achievement of social goals and values. The disadvantages mainly concern cases of abuse, use, or underuse of GenAI. The epistemological approach investigates the specific way in which GenAI produces information, knowledge, and a representation of reality that differs from that of human beings. To fully realise the impact of GenAI, our paper contends that both these approaches should be pursued: an identification of the risks and opportunities of GenAI also depends on considering how this form of AI works from an epistemological viewpoint and on our ability to interact with it. Our analysis compares the epistemology of GenAI with that of law, to highlight four problematic issues in terms of: (i) qualification; (ii) reliability; (iii) pluralism and novelty; (iv) technological dependence. The epistemological analysis of these issues leads to a better framing of the social and ethical aspects resulting from the use, abuse, or underuse of GenAI.
Grow Your Own (GYO) programs have been lauded as innovative pathways for the recruitment of teachers into the field of education. This chapter focuses specifically on how GYOs at the pre-collegiate level can be conceptualized as innovative partnerships between PK-12 schools and universities that serve as a pipeline into the teaching profession. We used the National Association for Professional Development Schools (NAPDS) Nine Essentials as a lens to analyze the GYO literature. The Nine Essentials outline the fundamental qualities of professional development schools (PDSs), which serve as exemplars of school–university partnerships. PDSs are lauded as contexts that “embrace a culture of innovation.” We describe the relationship between GYOs and each of the Nine Essentials, including areas of strength and possible opportunities for future innovation. Finally, we offer implications for viewing and designing GYOs as innovative, in-depth partnerships between PK-12 schools and universities.
When the National Association for Professional Development Schools (NAPDS) decided to change its name, it also broadened its scope, widening its draw to include all school–university partnerships (SUPs). This handbook will capture the essence of what had been all things professional development school (PDS) but will also begin to assume responsibility for the ideas related to this broader realm. School–university partnerships can range from the ephemeral, created for one grant project or one university class activity, to long-term committed relationships that may or may not be teacher education related. This commentary addresses the chapters pertaining to the all-important history and conceptual foundations of this work for future partnership scholars, considering each author’s thoughtful efforts and perspective and adding my own as a second-generation PDS researcher and participant.
Agency is fundamental to the work of all professionals, and attempts to improve or reform education and schools must attend to teacher agency. This chapter provides a conceptual understanding, beginning with an examination of terms used to describe the ways teachers act or are positioned, including agency, empowerment, autonomy, identity, self-efficacy, and voice, and exploring the interrelationships among these terms. Contextual factors that impact teacher agency, such as school culture, administrative style, practitioner inquiry, collaboration, measures of accountability, time constraints, and prior experience, are reviewed. The chapter then explores how teacher agency may be expressed through professional attitudes and action, leadership, curriculum curation, and resistance to imposed mandates, and finally the authors highlight the benefits of agentic teachers to schools and students. School–university partnerships provide a unique opportunity to support teachers as agentic professionals, and the chapter concludes with a set of specific recommendations to facilitate such an endeavor.