This chapter sets Leviticus within its narrative context in the Pentateuch and discusses historical-critical approaches to its composition. It further addresses a theology of holiness and the decentralization of cultic worship, which gives greater importance to purity in the home.
As the use of computational text analysis in the social sciences has increased, topic modeling has emerged as a popular method for identifying latent themes in textual data. Nevertheless, because the method is largely automated and inductive, concerns have been raised about the validity of its results, and scholars have identified the lack of clear guidelines for validating topic models as an area of concern. In response, we conducted a comprehensive systematic review of 789 studies that employ topic modeling. Our goal is to investigate whether the field is moving toward a common framework for validating these models. The findings of our review indicate a notable absence of standardized validation practices and a lack of convergence toward specific methods of validation. This gap may be attributed to the inherent incompatibility between the inductive, qualitative approach of topic modeling and the deductive, quantitative tradition that favors standardized validation. To address this, we advocate for incorporating qualitative validation approaches, emphasizing transparency and detailed reporting to improve the credibility of findings in computational social science research when using topic modeling.
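For readers unfamiliar with the method, the sketch below shows how a topic model is typically fit and informally inspected for face validity. It is not the validation framework surveyed in the review; the corpus, topic count, and library choice are illustrative assumptions.

```python
# Minimal, illustrative topic-model sketch (not the review's method).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; a real study would use thousands of documents.
docs = [
    "standardized tests shape classroom practice",
    "survey data reveal attitudes toward testing",
    "topic models summarize large text corpora",
    "validation of computational text analysis remains ad hoc",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit a small LDA model; the number of topics is an analyst's choice and is
# exactly the kind of decision the review argues should be reported and justified.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Inspect the top words per topic -- a common, informal face-validity check.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```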
Immunohistochemistry is a powerful diagnostic tool for practicing pathologists. Over the past few decades, the techniques used in immunohistochemistry have become exponentially more complex. By exploiting the specificity of antibody-antigen interactions, we can use commercially available labeled antibodies to determine the presence and dispersion of various macromolecules within tissue. Antigen retrieval techniques, tissue preservation, and standardization have broadened the utility of immunohistochemistry from a diagnostic ancillary test to screening for hereditary syndromes and serving as biomarkers in the era of personalized medicine. This chapter will describe the conceptual framework of immunohistochemistry, outline technical mechanisms, and explain its clinical relevance.
This paper explores the implementation and enduring significance of the German language program in Milwaukee Public Schools between 1867 and 1918. Although the program faced challenges, notably the Bennett Law of 1889, which sought to restrict foreign language instruction statewide, it persisted, highlighting the tension between local identity and state mandates. This study argues that the creation of the German course initiated a process of consolidation and standardization in Milwaukee Public Schools, shifting decision-making to school administrators who sought to accommodate the largest cultural group in Milwaukee. This case study of the Milwaukee Public Schools’ German Language Program reveals how school policies prioritized a multilingual approach to Americanization. The paper is structured in three sections, examining the evolution of language policy, the political implications of the Bennett Law, and the post-Bennett landscape of language education, ultimately demonstrating the interplay between consolidation and cultural inclusivity.
Experimental stone tool replication is an important method for understanding the context and production of prehistoric technologies. Experimental control is valuable for restricting the influence of confounding variables. Researchers can exert control in studies related to cognition and behavior by standardizing the type, form, and size of raw materials. Although standardization measures are already part of archaeological practice, specific protocols—let alone comparisons between standardization techniques—are rarely openly reported. Consequently, independent laboratories often repeat the costly trial-and-error process for selecting usable raw material types or forms. Here, we investigated various techniques and raw materials (such as hand-knapped flint, machine-cut basalt, manufactured glass, and porcelain) and evaluated them for validity, reliability, and standardizability. We describe the tests we performed, providing information on the individual approaches, as well as comparisons between the techniques and materials according to validity and reliability, along with relative costs. We end by providing recommendations. This is intended as a serviceable guide on raw material standardization for knapping experiments, covering existing strategies and ones so far undescribed in the experimental archaeology literature. The future of this field would benefit from developments in the relevant technologies and methodologies, especially those that are not yet widely available or affordable.
Drawing on extensive ethnographic engagement with the social world of the UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage, this Element explores the mainstreaming of sustainable development principles in the heritage field. It illustrates how, while deeply entwined in the UN standardizing framework, sustainability narratives are expanding the frontiers of heritage and unsettling conventional understandings of its social and political functions. Ethnographic description of UNESCO administrative practices and case studies explain how the sustainabilization of intangible cultural heritage entails a fundamental shift in perspective: heritage is no longer nostalgically regarded as a fragile relic in need of preservation but as a resource for the future with new purposes and the potential to address broader concerns and anxieties of our times, ranging from water shortages to mental health. This might ultimately mean that the safeguarding endeavor is no longer about us protecting heritage but about heritage protecting us.
In spring 2024, the European Union formally adopted the AI Act, aimed at creating a comprehensive legal regime to regulate AI systems. In so doing, the Union sought to maintain a harmonized and competitive single market for AI in Europe while demonstrating its commitment to protect core EU values against AI’s adverse effects. In this chapter, we question whether this new regulation will succeed in translating its noble aspirations into meaningful and effective protection for people whose lives are affected by AI systems. By critically examining the proposed conceptual vehicles and regulatory architecture upon which the AI Act relies, we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.
Recent work in sociolinguistics criticizes labeling sets of linguistic practices as languages and varieties. A focal concept is translanguaging: while opening productive perspectives on linguistic behavior, this approach often claims that, linguistically speaking, there is no such thing as a language. In this chapter we argue that this ontological claim is too strong and that the bottom-up approach to activism that follows in its trail is insufficient as a response to linguistically embedded social hierarchies and power inequalities. Linguistics has a checkered history; the labeling of varieties and the construction of language standards have served dubious ends. However, using Norway as a case in point and alluding to other cases of standardization and norm regulation, we argue that effective linguistic activism aimed at social justice sometimes requires the identification of varieties as linguistic objects. We reject a generalized language suspicion, because the anti-language approach to activism pushes out of theoretical reach a level of organization where social and political hierarchies are instituted and maintained, but where such hierarchies may also be challenged and altered. We conclude that socially engaged language scholars must grapple with the concrete contextual assessments that languages and varieties confront us with, and face the normative dilemmas that top-down political intervention on languages allegedly entails. Otherwise, important means of social justice are lost.
The Taxonomy Regulation establishes common and science-based definitions to determine whether an economic activity is environmentally sustainable. It aims to make sustainable finance more accessible to investors, while also protecting them from false or misleading claims about a financial product’s sustainability. In doing so it shifts a large part of the burden of preventing greenwashing onto the legislator. This chapter therefore critically analyses the Taxonomy’s ability to protect investors from greenwashing and identifies a number of pitfalls. The market for sustainable financial products is growing rapidly, yet the Taxonomy so far covers only environmental sustainability in selected sectors. Gathering and disclosing the necessary environmental data can be challenging and costly and may discourage companies and financial market participants from offering Taxonomy-aligned products, possibly turning them into niche products. The Taxonomy’s binary approach makes it difficult for investors to assess the sustainability of complex financial products and limits incentives for non-aligned companies to improve their performance. This chapter therefore argues for an extension of the Taxonomy that distinguishes between positive, intermediary, and harmful activities and provides definitions for the social and governance aspects of sustainability.
The authors in this collection start with the insight that not all instances of semiotic indeterminacy are produced in the same way, that they can be located differently in the process of semiosis, and that this fact shapes how and when semiotic indeterminacy is deployed by formulators and interpreters. The authors explore patterned uses of semiotic indeterminacy in Brazil, Bulgaria, Iran, and the United States to examine the role indeterminacy plays in institutional attempts at control and persuasion.
SIBTEST, a model-based modification of the standardization index grounded in a multidimensional IRT bias model, is presented; it detects and estimates DIF or item bias simultaneously for several items. A distinction between DIF and bias is proposed. SIBTEST detects bias/DIF without the usual Type I error inflation due to group target ability differences. In simulations, SIBTEST performs comparably to Mantel-Haenszel in the one-item case. SIBTEST also investigates bias/DIF for several items at the test score level (multiple-item DIF, called differential test functioning: DTF), thereby allowing the study of test bias/DIF, in particular bias/DIF amplification or cancellation and the cognitive bases of bias/DIF.
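The abstract does not reproduce the estimator. As a point of reference only, the sketch below implements the classical standardization index (STD P-DIF) that SIBTEST modifies, not SIBTEST itself, which adds a regression-based correction for group ability differences; variable names and the focal-group weighting are illustrative assumptions.

```python
import numpy as np

def std_p_dif(focal_scores, focal_correct, ref_scores, ref_correct):
    """Classical standardization index (STD P-DIF) for one studied item.

    focal_scores / ref_scores   : matching (rest) scores per examinee
    focal_correct / ref_correct : 0/1 responses to the studied item
    Weights are the focal-group frequencies at each matching-score level.
    """
    focal_scores = np.asarray(focal_scores)
    ref_scores = np.asarray(ref_scores)
    focal_correct = np.asarray(focal_correct)
    ref_correct = np.asarray(ref_correct)

    index = 0.0
    n_focal = len(focal_scores)
    # Iterate over score levels observed in both groups.
    for k in np.intersect1d(focal_scores, ref_scores):
        f_mask = focal_scores == k
        r_mask = ref_scores == k
        w_k = f_mask.sum() / n_focal          # focal-group weight at level k
        p_f = focal_correct[f_mask].mean()    # focal proportion correct
        p_r = ref_correct[r_mask].mean()      # reference proportion correct
        index += w_k * (p_f - p_r)
    return index
```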
Influence curves of some parameters under various methods of factor analysis have been given in the literature. These influence curves depend on the influence curves for either the covariance or the correlation matrix used in the analysis. The differences between the influence curves based on the covariance and the correlation matrices are derived in this paper. Simple formulas for the differences of the influence curves based on the two matrices are obtained for the unique variance matrix, the factor loadings, and some other parameters under scale-invariant estimation methods, though the influence curves themselves take complex forms.
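The paper's formulas are not reproduced in the abstract; the following standard delta-method identity, included only as an illustration, shows how the influence curve of a correlation is tied to the influence curves of the corresponding covariances:

```latex
% Influence curve of a correlation, by the chain rule applied to
% \rho_{ij} = \sigma_{ij} / \sqrt{\sigma_{ii}\sigma_{jj}}:
\operatorname{IF}(x;\rho_{ij})
  = \frac{\operatorname{IF}(x;\sigma_{ij})}{\sqrt{\sigma_{ii}\sigma_{jj}}}
  - \frac{\rho_{ij}}{2}
    \left(
      \frac{\operatorname{IF}(x;\sigma_{ii})}{\sigma_{ii}}
      + \frac{\operatorname{IF}(x;\sigma_{jj})}{\sigma_{jj}}
    \right)
```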
This chapter considers standardization in molecular communication. Two IEEE standards, 1906.1 and 1906.1.1, have been developed for nanonetworks, in general, and molecular communication, in particular; these standards and their development are described.
Chapter 2 delves into the intricate interactional dynamics of administering cognitive assessments, with a focus on the Addenbrooke’s Cognitive Examination-III (ACE-III). The chapter critically examines the standardization challenges faced by clinicians in specialized memory assessment services, highlighting the nuanced reasons for non-standardized practices. While cognitive assessments play a pivotal role in diagnosing cognitive impairments, the study questions the assumed standardization of the testing process. Drawing on Conversation Analysis (CA), the authors analyze 40 video recordings of the ACE-III being administered in clinical practice to reveal variations from standardized procedures. The chapter expands on earlier findings to show how clinicians employ recipient-design strategies during the assessment. It introduces new analyses of practitioner utterances in the third turn, suggesting that deviations could be associated with practitioners’ working diagnoses. The chapter contends that non-standard administration is a nuanced response to the interactional and social challenges inherent in cognitive assessments. It argues that clinicians navigate a delicate balance between adhering to standardized procedures and tailoring interactions to individual patient needs, highlighting the complex interplay between clinical demands and recipient design. Ultimately, the chapter emphasizes the importance of understanding the social nature of cognitive assessments and provides insights into the valuable reasons for non-standardized practices in clinical settings.
Many academic and media accounts of the massive spread of English across the globe since the mid-twentieth century rely on simplistic notions of globalization mostly driven by technology and economic developments. Such approaches neglect the role of states across the globe in the increased usage of English and even declare individual choice a key factor (e.g., De Swaan, 2001; Crystal, 2003; Van Parijs, 2011; Northrup, 2013). This chapter challenges these accounts by using and extending the state traditions and language regimes (STLR) framework (Cardinal & Sonntag, 2015). Presenting empirical findings that 142 countries mandate English language education as part of their national education systems, the chapter suggests there are important similarities with the standardization of national languages at the nation-state level, especially in the nineteenth and early twentieth centuries. This work reveals severe limitations of other approaches in political science to global English, including linguistic justice. It is shown how, in the case of global English, the convergence of diverse language regimes must be distinguished from state traditions but cannot be separated from them. With the severe challenges to global liberal cosmopolitanism, the role of individual states’ language education policies will become increasingly important.
This article presents a comprehensive evaluation of two nuclear-rated bilateral telerobotic systems, Telbot and Dexter, focusing on critical performance metrics such as effort transparency, stiffness, and backdrivability. Despite the absence of standardized evaluation methodologies for these systems, this study identifies key gaps by experimentally assessing the quantitative performance of both systems under controlled conditions. The results reveal that Telbot exhibits higher stiffness, but at the cost of greater effort transmission, whereas Dexter offers smoother backdrivability. Furthermore, positional discrepancies were observed during the tests, particularly in nonlinear positional displacements. These findings highlight the need for standardized evaluation methods, contributing to the development, manufacturing, and procurement processes of future bilateral telerobotic systems.
Plant names carry a significant amount of information without providing a lengthy description. This is an efficient shorthand for scientists and stakeholders to communicate about a plant, but only when the name is based on a common understanding. It is standard to think of each plant as having just two names, a common name and a scientific name, yet both can be a source of confusion. There are often many common names that refer to the same plant, or a single common name that refers to multiple different species, and some plants have no common name at all. Scientific names are based upon international standards; however, when the taxonomy is not agreed upon, two scientific names may be used to describe the same species. Weed scientists and practitioners can easily memorize multiple plant names and know that they refer to the same species, but when we consider global communication and far-reaching databases, two sides of this shift become very relevant: (1) the need for greater standardization (driven by database management and the risk of data lost through dropped cross-referencing); and (2) the loss of local heritage, which provides useful meaning through various common names. In addition, weed scientists can be resistant to changing names that they learned or frequently use. Developments in online databases and the reclassification of plant taxonomy by phylogenetic relationships have changed the accessibility and role of the list of standardized plant names compiled by the Weed Science Society of America (WSSA). As part of an attempt to reconcile WSSA and USDA common names for weedy plants, the WSSA Standardized Plant Names Committee recently concluded an extensive review of the Composite List of Weeds common names and secured approval of small changes to about 10% of the more than 2,800 distinct species on the list.
The question of how to balance free data flows and national policy objectives, especially data privacy and security, is key to advancing the benefits of the digital economy. After establishing that new digital technologies have further integrated physical and digital activities, and thus, more and more of our social interactions are being sensed and datafied, Chapter 6 argues that innovative regulatory approaches are needed to respond to the impact of big data analytics on existing privacy and cybersecurity regimes. At the crossroads, where multistakeholderism meets multilateralism, the roles of the public and private sectors should be reconfigured for a datafied world. Looking to the future, rapid technological developments and market changes call for further public–private convergence in data governance, allowing both public authorities and private actors to jointly reshape the norms of cross-border data flows. Under such an umbrella, the appropriate role of multilateral, state-based norm-setting in Internet governance includes the oversight of the balance between the free flow of data and other legitimate public policies, as well as engagement in the coordination of international standards.
Mass gatherings are events in which many people come together at a specific location, for a specific purpose, and within a certain period of time; examples include concerts, sports events, and religious gatherings. In mass-gathering studies, many rates and ratios are used to assess the demand for medical resources. Understanding such metrics is crucial for effective planning and intervention efforts. Therefore, this systematic review aims to investigate the usage of rates and ratios reported in mass-gathering studies.
Methods:
In this systematic review, the PRISMA guidelines were followed. Articles published through December 2023 were searched on Web of Science, Scopus, Cochrane, and PubMed using the specified keywords. Subsequently, articles were screened based on titles, abstracts, and full texts to determine their eligibility for inclusion in the study. Finally, the articles that were related to the study’s aim were evaluated.
Results:
Out of 745 articles screened, 55 were deemed relevant for inclusion in the study. These included 45 original research articles, three special reports, three case presentations, two brief reports, one short paper, and one field report. A total of 15 metrics were identified, which were subsequently classified into three categories: assessment of population density, assessment of in-event health services, and assessment of out-of-event health services.
Conclusion:
The findings of this study revealed notable inconsistencies in the reporting of rates and ratios in mass-gathering studies. To address these inconsistencies and to standardize the information reported in mass-gathering studies, a Metrics and Essential Ratios for Gathering Events (MERGE) table was proposed. Future research should promote consistency in terminology and adopt standardized methods for presenting rates and ratios. This would not only enhance comparability but would also contribute to a more nuanced understanding of the dynamics associated with mass gatherings.
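The MERGE table itself is not reproduced in the abstract. Purely as an illustration of the kind of metric at issue, the sketch below computes two of the most commonly reported rates, the patient presentation rate (PPR) and the transport-to-hospital rate (TTHR), both conventionally expressed per 1,000 attendees; the example figures are hypothetical.

```python
def patient_presentation_rate(presentations: int, attendance: int) -> float:
    """PPR: patient presentations per 1,000 attendees."""
    return presentations / attendance * 1000

def transport_to_hospital_rate(transports: int, attendance: int) -> float:
    """TTHR: ambulance transports to hospital per 1,000 attendees."""
    return transports / attendance * 1000

# Hypothetical example: a 60,000-person event with 150 presentations and 12 transports.
print(patient_presentation_rate(150, 60_000))   # 2.5 per 1,000 attendees
print(transport_to_hospital_rate(12, 60_000))   # 0.2 per 1,000 attendees
```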
This chapter examines how ethical frameworks for education have been displaced through processes of standardization both historically and contemporarily. Before turning to current examples, the chapter begins with an analysis of twentieth-century movements in philosophy of education and curriculum to illustrate how processes of standardization and educational “narrowing” emerged as the dominant educational vision for American schooling, corresponding with the push for accountability and neoliberal reform in the last few decades of the twentieth century. How this narrowing exists in today’s K-12 and higher education environment, as well as its impact on historically marginalized groups, is then explored. The chapter then turns to how the contemporary emphasis on educational technology, datafication, and digitalization reinforces educational standardization to the detriment of ethical educational possibilities. The chapter concludes with considerations of how ethical educational visions might be revived in our current era.