What is a focus group? Why do we use them? When should we use them? When should we not? Focus Groups for the Social Science Researcher provides a step-by-step guide to undertaking focus groups, whether as a stand-alone method or alongside other qualitative or quantitative methods. It recognizes the challenges that focus group research encounters and provides tips to address them. The book highlights three unique, inter-related characteristics of focus groups. First, they are inherently social in form. Second, the data emerge organically through conversation; they are emic in nature. Finally, focus groups generate data at three levels of analysis: the individual, group, and interactive levels. The book builds from these three characteristics to explain when focus groups can usefully be employed in different research designs. This is an essential text for students and researchers looking for a concise and accessible introduction to this important approach to data collection.
Item response theory (IRT) represents an alternative measurement approach to classical test theory (CTT), developed to address some of CTT's key limitations. IRT uses a logistic function to jointly scale person characteristics (e.g., ability) and task characteristics (e.g., difficulty) along a common metric, and is grounded in the notion that different item sets should not result in different scaling solutions. This gives IRT a number of advantages over CTT; namely, the performances of individuals who are administered different sets of tasks may still be justifiably compared. From this basis, the IRT approach has established its utility through applications in such varied contexts as adaptive testing, cognitive diagnostic modeling, item difficulty modeling, and latent class analysis. The current chapter focuses on applied issues in measurement with IRT, with emphasis on its distinct advantages over traditional approaches.
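As a brief illustration of the joint scaling the abstract describes (the notation here is generic and not drawn from the chapter itself), the widely used two-parameter logistic (2PL) model expresses the probability that person i answers item j correctly as a function of the person's ability \theta_i and the item's difficulty b_j and discrimination a_j:

P(X_{ij} = 1 \mid \theta_i, a_j, b_j) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}

Because \theta and b are expressed on the same latent scale, an examinee's estimated ability does not depend on which particular items were administered, which is the property that allows scores from different item sets to be compared.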
In this chapter, the authors review concepts related to the reliability and validity of measurement in the social sciences. Necessarily eschewing detailed reviews of statistical methods for examining reliability and validity, they focus on a terse discussion of key concepts and elementary methods. In addition, given the immense literatures on topics they could only discuss briefly, they point to sources that provide helpful, focused treatments of ideas and techniques not covered here. They conclude that, because reliable and valid measurement is crucial to scientific research, social scientists ought to prioritize establishing evidence of reliability and validity for their measures and experimental methods.
The chapter begins with a description of the historical roots of comparative research, followed by an account of its spectacular growth over the last 50 years. The core methodological issues of the field are then described: Does an instrument administered in different groups or countries constitute an adequate measure of the underlying construct in each application? If so, can we compare scores across groups and countries? A taxonomy of equivalence (similarity of meaning) and bias (presence of nuisance factors) to be considered in comparative studies is described. Special emphasis is given to test adaptation as a tool for providing culture-informed measures. The gap between the advanced statistical procedures available for analyzing comparative data and the much less advanced state of our theories of cross-cultural similarities and differences is identified as a key challenge for the future.