Effects and their sizes are proposed as a key way to overcome the limitations of significance testing in modern statistics. Traditionally, however, effect sizes are simply added on to the statistical testing procedure. This chapter proposes instead thinking first about what effects to look for, in the context of the research and its maturity within the discipline of HCI. The main types of effect are described: changes in location, stochastic dominance and (co)variation. Considering which of these effects current best knowledge can predict leads to choosing the test best suited to finding them.
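Two of the effect types named above can be sketched numerically: a change in location as standardised mean difference (Cohen's d), and stochastic dominance as the probability that a value drawn from one condition exceeds a value drawn from the other. The data below are hypothetical task times, not from the book.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Change in location as a standardised mean difference
    (pooled-SD version of Cohen's d)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2
                  + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

def prob_superiority(a, b):
    """Stochastic dominance as the probability that a random value
    from a exceeds one from b (ties count as half)."""
    wins = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return wins / (len(a) * len(b))

# hypothetical task times (seconds) for two interface designs
times_ui_a = [12, 14, 15, 16, 18, 21]
times_ui_b = [15, 17, 18, 20, 22, 25]

print(cohens_d(times_ui_a, times_ui_b))        # negative: A is faster
print(prob_superiority(times_ui_a, times_ui_b))
```

A probability of superiority of 0.5 would indicate no dominance of either condition; values nearer 0 or 1 indicate stronger dominance.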
Cronbach's alpha is often used as a key measure of reliability in questionnaires developed in HCI. However, the state-of-the-art literature on Cronbach's alpha shows that it is itself not a reliable measure. This chapter demonstrates these problems and develops a more nuanced interpretation of reliability, using both Cronbach's alpha and more modern alternatives.
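For concreteness, Cronbach's alpha is the ratio-of-variances formula below, computed here from hypothetical Likert-scale responses (the data and function name are illustrative, not from the chapter):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire: `items` is a list of
    columns, each holding one item's scores across all respondents."""
    k = len(items)
    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# hypothetical 5-point Likert responses: 4 items, 6 respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [3, 5, 3, 4, 2, 4],
    [4, 4, 2, 5, 3, 5],
    [3, 5, 3, 3, 2, 4],
]
print(round(cronbach_alpha(items), 2))
```

Note that a high alpha here reflects how strongly the items covary in this one sample; as the chapter argues, that is not by itself a guarantee of reliability.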
The focus of statistical testing on significance can lead to the two types of error discussed in this chapter: finding a significant result when there is no underlying real effect (Type I), and failing to find significance when there is in fact a real underlying effect (Type II). Though a blinkered approach to significance can be problematic for good data analysis, consideration of the two types of error is useful: first, to help think about sample sizes in studies and, second, to assess the quality and robustness of statistical tests.
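The link between the two error types and sample size can be sketched with the standard normal-approximation formula for a two-sided, two-sample comparison; the function name and effect sizes are illustrative assumptions, not the chapter's own calculation.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison at standardised effect size d (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # controls the Type I error rate
    z_beta = nd.inv_cdf(power)           # controls the Type II error rate
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # a 'medium' effect needs ~63 per group
print(n_per_group(0.2))  # a small effect needs far more
```

Halving the expected effect size roughly quadruples the required sample, which is why thinking about plausible effects before testing matters.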
Factor analysis is regularly used in HCI to validate questionnaires, particularly those that measure user experience. This chapter describes and illustrates the process of developing a questionnaire to measure a particular concept, from item generation through to factor analysis and interpretation. It introduces bifactor analysis as a more recent development in factor analysis that could help the interpretation of questionnaires made up of multiple factors.
A directional hypothesis predicts the specific way in which data are affected by an experimental manipulation. It therefore fits well with severe testing by making a concrete, explicit prediction. However, the one-tailed testing associated with directional hypotheses is regularly criticised because it can look like a device for making statistical significance easier to achieve. This chapter describes why this is a problem and makes the case that, despite the fit with severe testing, two-tailed testing is still the better way to do statistical analysis.
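The criticism can be seen directly in how the two p-values are computed: for the same test statistic, the one-tailed p-value is half the two-tailed one. The sketch below uses a z statistic for simplicity (a t distribution would need an external library); the chosen statistic of 1.8 is an illustrative assumption.

```python
from statistics import NormalDist

def p_values(z_stat):
    """One- and two-tailed p-values for a z statistic under the
    normal approximation."""
    nd = NormalDist()
    one_tailed = 1 - nd.cdf(z_stat)
    two_tailed = 2 * (1 - nd.cdf(abs(z_stat)))
    return one_tailed, two_tailed

one, two = p_values(1.8)
print(one < 0.05, two < 0.05)  # 'significant' one-tailed, not two-tailed
```

A result like this, significant only under the one-tailed test, is exactly the situation that invites suspicion of the directional choice being made after seeing the data.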
Each chapter of this book covers a specific topic in statistical analysis, such as robust alternatives to t-tests or how to develop a questionnaire. The chapters also address particular questions on these topics that human-computer interaction (HCI) researchers commonly ask when planning or completing the analysis of their data. The book presents current best practice in statistics, drawing on state-of-the-art literature that is rarely presented in HCI, and does so through strong arguments for good statistical analysis rather than mathematical explanations. It additionally offers some philosophical underpinnings for statistics, so that readers can see how statistics fits with experimental design and the fundamental goal of discovering new HCI knowledge.