In Chapter 7, the author introduces both content analysis and basic statistical analysis to help evaluate the effectiveness of assessments. The focus of the chapter is on guidelines for creating and evaluating reading and listening inputs and selected-response item types, particularly multiple-choice items that accompany these inputs. The author guides readers through detailed evaluations of reading passages and accompanying multiple-choice items that need major revisions. The author discusses generative artificial intelligence as an aid for drafting inputs and creating items and includes an appendix that guides readers through the use of ChatGPT for this purpose. The author also introduces test-level statistics, including minimum, maximum, range, mean, variance, standard deviation, skewness, and kurtosis. The author shows how to calculate these statistics for an actual grammar tense test and includes an appendix with detailed guidelines for conducting these analyses using Excel software.
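The test-level statistics listed above can be computed with a few lines of code. The chapter demonstrates them in Excel; the sketch below is an equivalent plain-Python computation. The scores are invented for illustration, not data from the book's grammar test, and the skewness and kurtosis use the simple moment-based (population) formulas, so they will differ slightly from Excel's SKEW and KURT, which apply small-sample corrections.

```python
# Invented scores for ten hypothetical test takers.
scores = [12, 15, 15, 17, 18, 20, 21, 23, 25, 30]

n = len(scores)
minimum, maximum = min(scores), max(scores)
score_range = maximum - minimum
mean = sum(scores) / n

# Population variance and standard deviation (Excel: VAR.P / STDEV.P).
variance = sum((x - mean) ** 2 for x in scores) / n
sd = variance ** 0.5

# Moment-based population skewness and excess kurtosis.
skewness = sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)
kurtosis = sum((x - mean) ** 4 for x in scores) / (n * sd ** 4) - 3

print(minimum, maximum, score_range, mean, variance, sd, skewness, kurtosis)
```

A positive skewness here would indicate a tail of high scorers; negative excess kurtosis indicates a flatter-than-normal distribution.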
This chapter discusses feature engineering techniques that look at the feature set holistically, replacing or enhancing features based on their relation to the whole set of instances and features. Techniques covered include normalization and scaling, discretization and binning, descriptive features, and outlier detection and treatment. Scaling and normalization are the most common: they involve finding the maximum and minimum of a feature and rescaling its values so that they lie in a given interval (e.g., [0, 1] or [−1, 1]). Discretization and binning involve, for example, analysing an integer feature (any number from −1 trillion to +1 trillion), realizing that it only takes the values 0, 1, and 10, and simplifying it into a symbolic feature with three values (value0, value1, and value10). Descriptive features gather information about the shape of the data; the discussion centres on using tables of counts (histograms) and general descriptive statistics such as maximum, minimum, and averages. Outlier detection and treatment refers to examining feature values across many instances and identifying values that lie very far from the rest.
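Two of the techniques described above can be sketched in a few lines: min-max scaling to [0, 1], and simplifying a sparse integer feature into symbolic values. This is a minimal illustration with invented values, not code from the chapter.

```python
def min_max_scale(values):
    """Rescale values linearly so they lie in the interval [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant feature: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def to_symbolic(values):
    """Replace each observed integer value with a symbolic label."""
    return ["value%d" % v for v in values]

# An integer feature that, despite its huge declared range, only ever
# takes the values 0, 1 and 10 -- so it can be treated symbolically.
feature = [0, 10, 1, 0, 10]
print(min_max_scale(feature))   # [0.0, 1.0, 0.1, 0.0, 1.0]
print(to_symbolic(feature))     # ['value0', 'value10', 'value1', 'value0', 'value10']
```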
Pencil, paper and a scientific calculator are often adequate when analysing small amounts of data. However, spreadsheets are favoured especially when large amounts of data are involved. This chapter explores some of the basic features of spreadsheets and shows how they may be applied to the analysis and presentation of experimental data. The Excel spreadsheet program by Microsoft is highlighted due to its availability, power, and longevity. Specific features of Excel are discussed, such as the LINEST function for fitting equations, and the Analysis ToolPak, which contains a number of useful analysis tools.
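For a straight line, Excel's LINEST returns the least-squares slope and intercept. The same quantities follow from the standard least-squares formulas; the sketch below shows that calculation in Python, using invented data points rather than anything from the chapter.

```python
def linest(xs, ys):
    """Return (slope, intercept) of the least-squares line y = m*x + c."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)                     # sum of squares in x
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]        # invented readings, roughly y = 2x
slope, intercept = linest(xs, ys)
print(slope, intercept)
```

Excel's LINEST can additionally return uncertainties in the fitted parameters and goodness-of-fit statistics, which is one reason the chapter highlights it.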
No matter how much care is taken during an experiment, or how sophisticated the equipment used, values obtained through measurement are influenced by errors. Errors can be thought of as acting to conceal the true value of the quantity sought through experiment. Random errors cause values obtained through measurement to occur above and below the true value. This chapter considers statistically based methods for dealing with variability in experimental data, such as that caused by random errors. As statistics can be described as the science of assembling, organising and interpreting numerical data, it is an ideal tool for assisting in the analysis of experimental data.
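Because random errors scatter measured values above and below the true value, the usual approach is to take the mean of repeated measurements as the best estimate and the standard error of the mean as its uncertainty. A minimal sketch, using invented repeated readings rather than data from the chapter:

```python
# Invented repeated measurements of the same quantity.
readings = [9.78, 9.83, 9.80, 9.79, 9.85, 9.81]

n = len(readings)
mean = sum(readings) / n               # best estimate of the true value

# Sample standard deviation (n - 1 in the denominator).
s = (sum((x - mean) ** 2 for x in readings) / (n - 1)) ** 0.5

# Standard error of the mean: the uncertainty in the estimate,
# which shrinks as more measurements are taken.
sem = s / n ** 0.5
print(mean, s, sem)
```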