SPARK launched in 2016 to build a US cohort of autistic individuals and their family members. Enrollment includes online consent to share data and optional consent to provide saliva for genomic analysis. SPARK’s recruitment strategies include social media outreach and support of a nationwide network of clinical sites. This study evaluates SPARK’s recruitment strategies for enrolling a core study population.
Methods:
Individuals who joined between January 31, 2018, and May 29, 2019, were included in the analysis. Data include sociodemographic characteristics, clinical site referral, the website URL used to join, how the participant heard about SPARK, enrollment completion (online registration, study consents, and return of a saliva sample), and completion of the baseline questionnaire. Logistic regressions were performed to evaluate the odds of core participant status (completing enrollment and the baseline questionnaire) by recruitment strategy.
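To make the analytic approach concrete, here is a minimal sketch of such a logistic regression in Python using statsmodels. The file name and column names (core_participant, recruitment_strategy) are hypothetical stand-ins for illustration, not variables from the SPARK dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dataset: one row per person who joined, with a binary
    # (0/1) indicator for core participant status and a categorical
    # variable for recruitment strategy (names are illustrative only).
    df = pd.read_csv("spark_participants.csv")

    # Logistic regression of core participant status on recruitment strategy.
    model = smf.logit("core_participant ~ C(recruitment_strategy)", data=df).fit()
    print(model.summary())

    # Exponentiated coefficients give odds ratios by recruitment strategy.
    print(np.exp(model.params))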
Results:
In total, 31,715 individuals joined during the study period, 40% of them through a clinical site. Overall, 88% completed online registration, 46% returned a saliva sample, and 38% were core participants. Those referred by a clinical site were almost twice as likely to be core participants. Those who directly visited the SPARK website or arrived via a Google search were more likely to be core participants than those who joined through social media.
Discussion:
Being a core participant may be associated with the “personal” connection and support provided by a clinical site and/or site staff, as well as greater motivation to seek research opportunities. Findings from this study underscore the value of adopting a multimodal recruitment approach that combines social media and a physical presence.
Edited by Cait Lamberton, Wharton School, University of Pennsylvania; Derek D. Rucker, Kellogg School, Northwestern University, Illinois; and Stephen A. Spiller, Anderson School, University of California, Los Angeles
Online platforms such as Amazon’s Mechanical Turk (MTurk), CloudResearch, and Prolific have become a common source of data for behavioral researchers and consumer psychologists alike. This chapter reviews contemporary issues in online panel research, first discussing how the COVID-19 pandemic affected both the extent to which researchers use online panels and the composition of the workers participating on them. The chapter then explores how factors as incidental as a TikTok video can shape who uses these online panels and why. A longitudinal study of researcher perceptions and data quality practices finds that many practices do not align with current recommendations. The authors offer several recommendations for conducting high-quality behavioral research online, including applying appropriate prescreens before data collection, preregistering data-analysis plans, and avoiding post-screens after data collection that were not preregistered. Finally, the authors recommend that researchers thoroughly report details on recruitment, restrictions, completion rates, and any differences in dropout rates across conditions.
Edited by Cait Lamberton, Wharton School, University of Pennsylvania; Derek D. Rucker, Kellogg School, Northwestern University, Illinois; and Stephen A. Spiller, Anderson School, University of California, Los Angeles
Netnography is a specific set of related data collection, analysis, ethical, and representational research practices derived from ethnography. Unlike in ethnography, a significant amount of the data in netnography is collected naturalistically through researcher engagement with a digital experience, such as interacting with a virtual world or communicating with others via social media. This chapter explains netnography and illustrates how it can be useful, as a stand-alone method or as part of a multi-method approach, in helping psychological consumer researchers investigate a range of important real-world phenomena.
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by subjects recruited from online labor markets. We address these potential concerns by presenting new demographic data about the Mechanical Turk subject population, reviewing the strengths of Mechanical Turk relative to other online and offline methods of recruiting subjects, and comparing the magnitude of effects obtained using Mechanical Turk and traditional subject pools. We further discuss additional benefits, such as the possibility of longitudinal, cross-cultural, and prescreening designs, and offer advice on how best to manage a common subject pool.
Edited by Ruth Kircher, Mercator European Research Centre on Multilingualism and Language Learning, and Fryske Akademy, Netherlands; and Lena Zipp, Universität Zürich
This chapter provides an overview of how to use focus groups to elicit language attitudes. Focus groups give access to the collective discourse practices of a specified group of participants and can elicit comparatively natural and spontaneous responses. However, participants may feed off each other’s ideas rather than express their own original thoughts, and certain minority opinions may be downplayed, repressed, or withheld by the participants. Nevertheless, the method can be viewed as an attempt to analyse salient social representations in a communicative, conversational situation and can yield otherwise unrevealed strands of research participants’ narratives. After exploring the advantages and disadvantages of using focus groups to investigate language attitudes, the chapter offers an overview of key practical issues in planning and research design. The analysis of data resulting from focus group discussions is then explored, particularly from a critical sociolinguistic perspective, involving mapping/categorisation of the data, tracing the circulation of people and resources over space and time, finding meaningful connections, and making valid claims. The chapter concludes with a case study of attitudes towards Breton and Yiddish in a variety of settings.
Virtual platforms can provide a socially distanced mechanism for sustaining research progress in the coronavirus disease 2019 (COVID-19) era and may change our approach to online research in the future. Understanding how best to utilise online research is therefore an important task for our field.
The COVID-19 pandemic imposed new constraints on empirical research, and online data collection by social scientists increased. Generalizing from experiments conducted during this period of persistent crisis may be challenging due to changes in how participants respond to treatments or in the composition of online samples. We investigate the generalizability of COVID-era survey experiments with 33 replications of 12 pre-pandemic designs, fielded across 13 quota samples of Americans between March and July 2020. We find strong evidence that pre-pandemic experiments replicate in sign and significance, though at somewhat reduced magnitudes. Indirect evidence suggests an increased share of inattentive subjects on online platforms during this period, which may have contributed to the smaller estimated treatment effects. Overall, we conclude that the pandemic does not pose a fundamental threat to the generalizability of online experiments to other time periods.
Amazon's Mechanical Turk is widely used for data collection; however, data quality may be declining due to the use of virtual private servers (VPS) to fraudulently gain access to studies. Unfortunately, we know little about the scale and consequences of this fraud, and tools for social scientists to detect and prevent it are underdeveloped. We first analyze 38 studies and show that this fraud is not new but has increased recently. We then show that these fraudulent respondents provide particularly low-quality data and can weaken treatment effects. Finally, we provide two solutions: an easy-to-use application for identifying fraud in existing datasets and a method for blocking fraudulent respondents in Qualtrics surveys.
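To make the screening idea concrete, here is a minimal sketch in Python of flagging respondents whose IP addresses fall inside known datacenter/VPS ranges. This is not the authors' application: the CIDR blocks, file name, and column name below are hypothetical placeholders, and a real workflow would draw its ranges from a maintained blocklist rather than hard-coding them.

    import ipaddress
    import pandas as pd

    # Hypothetical datacenter/VPS CIDR blocks (illustrative only; these
    # are reserved documentation ranges, not a real blocklist).
    VPS_RANGES = [ipaddress.ip_network(cidr)
                  for cidr in ["192.0.2.0/24", "198.51.100.0/24"]]

    def is_vps(ip_string):
        """Return True if the IP falls inside any listed VPS range."""
        try:
            ip = ipaddress.ip_address(ip_string)
        except ValueError:
            return False  # malformed IPs are left unflagged
        return any(ip in net for net in VPS_RANGES)

    # Hypothetical survey export with one row per respondent and an
    # 'ip_address' column (column name is illustrative).
    df = pd.read_csv("survey_responses.csv")
    df["suspected_vps"] = df["ip_address"].apply(is_vps)
    print(df["suspected_vps"].value_counts())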
While Canada has long criminalized aspects of sex work, the specific act of purchasing sexual services was not against the law per se. In 2014, however, the then Conservative government implemented new legislation targeting sex work clients. Given the criminalization and persistent stigmatization of their activities, assessing clients’ changing actions, perceptions, and knowledge of the new legislation is challenging. We thus turned to a major Canadian online sex work review forum to examine postings on forum threads. This paper examines the risk knowledge practices in which clients engage as they try to make sense of the modified legal regime and avoid new legal risks. Our findings illuminate clients’ varied understandings of their own criminalization.