Researchers are increasingly reliant on online, opt-in surveys. But prior benchmarking exercises employ national samples, making it unclear whether such surveys can effectively represent Black respondents and other minorities nationwide. This paper presents the results of uncompensated online and in-person surveys administered chiefly in Philadelphia, a racially diverse American city, during its 2023 mayoral primary. The participation rate for online surveys promoted via Facebook and Instagram was 0.4%, with White residents and those with college degrees more likely to respond. Such biases help explain why neither our surveys nor public polls correctly identified the Democratic primary's winner, an establishment-backed Black Democrat. Even weighted, geographically stratified online surveys typically underestimate the winner's support, although an in-person exit poll does not. We identify similar patterns in Chicago. These results indicate important gaps in the populations represented in contemporary opt-in surveys and suggest that alternative survey modes help reduce them.
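The "weighted, geographically stratified" adjustment mentioned above is described only at a high level in the abstract. As a rough illustration of one common weighting technique, the sketch below rakes a toy opt-in sample to known population margins via iterative proportional fitting. Every detail here is an assumption: the variable names, the target shares, and the sample data are invented for illustration and are not drawn from the study.

# Minimal raking (iterative proportional fitting) sketch in Python/pandas.
# All categories, margins, and data below are hypothetical, not the paper's.
import pandas as pd

# Toy opt-in sample that over-represents college-educated White respondents,
# mirroring the response biases the abstract describes.
sample = pd.DataFrame({
    "race":    ["White"] * 60 + ["Black"] * 25 + ["Other"] * 15,
    "college": [True] * 55 + [False] * 5    # White rows
             + [True] * 10 + [False] * 15   # Black rows
             + [True] * 5 + [False] * 10,   # Other rows
})

# Hypothetical population margins (in practice, e.g., census figures for the city).
targets = {
    "race":    {"White": 0.40, "Black": 0.40, "Other": 0.20},
    "college": {True: 0.35, False: 0.65},
}

weights = pd.Series(1.0, index=sample.index)
for _ in range(50):  # iterate until the weighted margins match the targets
    for var, target in targets.items():
        share = weights.groupby(sample[var]).sum() / weights.sum()
        # Scale each category's weights by (target share / current weighted share).
        weights = weights * sample[var].map(lambda v: target[v] / share[v])

for var in targets:
    achieved = (weights.groupby(sample[var]).sum() / weights.sum()).round(3)
    print(var, achieved.to_dict())

As the abstract notes, weighting of this kind did not fully correct the error in the authors' setting: respondents can differ from nonrespondents in ways the weighting variables do not capture.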
This chapter examines four prominent online research methods – online surveys, online experiments, online content analysis, and qualitative approaches – along with the issues and best practices that scholars across many disciplines have identified for each. The chapter also identifies several platforms for conducting online research, including online survey and experimental design platforms, online content capture programs, and related quantitative and qualitative data analysis tools. The advantages (e.g., time savings, lower cost) and disadvantages (e.g., sampling, validity, privacy, and ethical issues) of each method are then discussed, along with best practices for applying them in online research.
A strong participant recruitment plan is a major determinant of the success of human subjects research. The plan researchers adopt determines the kinds of inferences that can be drawn from the collected data and how much the data will cost to collect. Studies with weak or non-existent recruitment plans risk recruiting too few participants, or the wrong kind of participants, to answer the question that motivated them. This chapter outlines key considerations for researchers developing recruitment plans and offers suggestions for making recruiting more efficient.
In January 2022, Fiji was hit by multiple natural disasters: a cyclone that caused flooding, an underwater volcanic eruption, and a tsunami. This study aimed to investigate perceived needs among disaster-affected people in Fiji and to evaluate the feasibility of the Humanitarian Emergency Settings Perceived Needs Scale (HESPER Web) during the early stage after multiple natural disasters.
Methods:
A cross-sectional study was conducted with a self-selected, non-representative sample; data were collected using the HESPER Web.
Results:
In all, 242 people participated. The number of perceived serious needs ranged from 2 to 14 (out of a possible 26), with a mean of 6 (SD = 3). The three most frequently reported needs were access to toilets (60%), care for people in the community who are on their own (55%), and distress (51%). Volunteers reported fewer needs than the general public.
Conclusions:
The three most reported needs concerned water and sanitation and psychosocial wellbeing. Such needs should not be underestimated in the emergency phase after natural disasters and may require more attention from responding actors. The HESPER Web was considered a usable tool for needs assessment in a sudden-onset disaster.
Prior research demonstrates that responses to surveys can vary depending on the race, gender, or ethnicity of the investigator asking the question. We build on this research by empirically testing how information about researcher identity in online surveys affects subjects' responses. We conduct an experiment on Amazon's Mechanical Turk in which we vary the name of the researcher in the advertisement for the experiment and on the informed consent page in order to cue different racial and gender identities. We fail to reject the null hypothesis of no difference in how respondents answer questions when assigned to a putatively Black or White, male or female researcher.
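As a minimal illustration of the comparison such a design implies (not the authors' actual code, measures, or data), the sketch below simulates responses under two randomly assigned researcher-name conditions and runs a two-sample t-test; a large p-value corresponds to failing to reject the null of no difference.

# Hypothetical sketch: compare mean responses across two randomly assigned
# researcher-identity cues with a two-sample t-test. The conditions, sample
# sizes, and simulated 1-5 scale outcome are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate under the null: the cued identity has no effect on responses.
black_cue = rng.normal(loc=3.0, scale=1.0, size=200)
white_cue = rng.normal(loc=3.0, scale=1.0, size=200)

t_stat, p_value = stats.ttest_ind(black_cue, white_cue)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # large p: fail to reject the null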