In this chapter, there are two types of probabilities that can be estimated: empirical probability and theoretical probability. Empirical probability is calculated by conducting a number of trials and finding the proportion that resulted in each outcome. Theoretical probability is calculated by dividing the number of ways of obtaining an outcome by the total number of possible outcomes. Adding together the probabilities of two mutually exclusive events produces the probability that either one will occur. Multiplying together the probabilities of two independent events produces the probability that both will occur at the same time or in succession. As the number of trials increases, the empirical probability converges on the theoretical probability.
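As a minimal sketch of the distinction (the die-rolling setup and trial counts are hypothetical illustrations, not taken from the chapter), the following Python snippet estimates the empirical probability of rolling a six and compares it with the theoretical value of 1/6:

```python
import random

# Minimal sketch: empirical vs. theoretical probability of rolling a six.
# The trial counts below are arbitrary illustrative choices.

def empirical_probability(n_trials: int, outcome: int = 6) -> float:
    """Proportion of simulated die rolls that produced `outcome`."""
    hits = sum(1 for _ in range(n_trials) if random.randint(1, 6) == outcome)
    return hits / n_trials

theoretical = 1 / 6  # one way to obtain a six out of six possible outcomes

for n in (10, 1_000, 100_000):
    print(f"n = {n:>6}: empirical = {empirical_probability(n):.4f}, "
          f"theoretical = {theoretical:.4f}")
```

As the chapter notes, the empirical estimate wanders for small n and settles near the theoretical value as n grows.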
It is possible to build a histogram of empirical or theoretical probabilities. As the number of trials increases, the empirical and theoretical probability distributions converge. If an outcome is produced by adding together (or averaging) the results of many events, the resulting probability distribution is approximately normal (the central limit theorem). Because of this, it is possible to make inferences about the population based on sample data – a process called generalization. The mean of sample means converges to the population mean, and the standard deviation of sample means (the standard error) converges on the value σ/√n, the population standard deviation divided by the square root of the sample size.
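A small simulation can make this convergence concrete. In this hypothetical Python sketch (the fair-die population, sample size, and number of samples are illustrative assumptions), the mean of many sample means approaches the population mean and their standard deviation approaches σ/√n:

```python
import random
import statistics

# Hypothetical population: rolls of a fair die (mean 3.5, SD ~1.708).
population_sd = statistics.pstdev(range(1, 7))

def sample_mean(n: int) -> float:
    return statistics.mean(random.randint(1, 6) for _ in range(n))

n, n_samples = 30, 10_000
means = [sample_mean(n) for _ in range(n_samples)]

print("mean of sample means:", round(statistics.mean(means), 3))   # ~3.5
print("SD of sample means:  ", round(statistics.stdev(means), 3))  # empirical standard error
print("sigma / sqrt(n):     ", round(population_sd / n**0.5, 3))   # ~0.312
```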
This chapter discusses two types of descriptive statistics: models of central tendency and models of variability. Models of central tendency describe the location of the middle of the distribution, and models of variability describe the degree to which scores are spread out from one another. There are four models of central tendency in this chapter. Listed in ascending order of the complexity of their calculations, these are the mode, median, mean, and trimmed mean. There are also four principal models of variability discussed in this chapter: the range, interquartile range, standard deviation, and variance. For the latter two statistics, students are shown three possible formulas (sample standard deviation and variance, population standard deviation and variance, and population standard deviation and variance estimated from sample data), along with an explanation of when it is appropriate to use each formula. No statistical model of central tendency or variability tells you everything you may need to know about your data. Only by using multiple models in conjunction with each other can you have a thorough understanding of your data.
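These models can be computed with Python's standard library. The scores below are hypothetical, and the trimmed_mean helper (trimming 10% of scores from each end) is an illustrative implementation rather than the chapter's own definition:

```python
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 6, 7, 40]  # hypothetical data with one outlier

def trimmed_mean(data, proportion=0.1):
    """Mean after dropping the top and bottom `proportion` of scores."""
    k = int(len(data) * proportion)
    trimmed = sorted(data)[k:len(data) - k]
    return statistics.mean(trimmed)

# Models of central tendency.
print("mode:         ", statistics.mode(scores))    # 5
print("median:       ", statistics.median(scores))  # 5.0
print("mean:         ", statistics.mean(scores))    # 8.0 (pulled up by the outlier)
print("trimmed mean: ", trimmed_mean(scores))       # 4.75

# Models of variability.
q1, _, q3 = statistics.quantiles(scores, n=4)
print("range:        ", max(scores) - min(scores))
print("IQR:          ", q3 - q1)
print("sample SD:    ", statistics.stdev(scores))   # divides by n - 1
print("population SD:", statistics.pstdev(scores))  # divides by n
```

Note how the mean is dragged toward the outlier while the median and trimmed mean are not, which is one reason no single model tells the whole story.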
Pearson’s correlation describes the relationship between two interval- or ratio-level variables. Positive correlation values indicate that individuals who have high X scores tend to have high Y scores (and that individuals with low X scores tend to have low Y scores). A negative correlation indicates that individuals with high X scores tend to have low Y scores (and that individuals with low X scores tend to have high Y scores). Correlation values closer to +1 or –1 indicate stronger relationships between the variables; values close to zero indicate weaker relationships. A correlation between two variables does not imply a causal relationship between them.
It is also possible to test a correlation coefficient for statistical significance, where the null hypothesis is that the population correlation is zero (ρ = 0). This follows the same steps as all NHSTs. The effect size for Pearson’s r is calculated by squaring the r value (r²).
A correlation is visualized with a scatterplot. Scatterplots for strong correlations have dots that are closely grouped together; scatterplots showing weak correlations have widely spaced dots. Positive correlations have dots that cluster in the lower-left and upper-right quadrants of a scatterplot. Negative correlations have the reverse pattern.
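As a rough sketch of how r, its significance test, and r² fit together: the data below are hypothetical, statistics.correlation requires Python 3.10+, and the observed value uses the standard conversion t = r√(n – 2)/√(1 – r²), which may differ in presentation from the chapter's exact procedure:

```python
import math
import statistics

# Hypothetical interval-level scores; each X is paired with one Y.
x = [2, 4, 5, 7, 9, 10, 12, 15]
y = [1, 3, 6, 6, 10, 9, 13, 14]

r = statistics.correlation(x, y)  # Pearson's r (Python 3.10+)
n = len(x)
t_obs = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # observed value for H0: rho = 0

print(f"r = {r:.3f}, r^2 = {r**2:.3f}, t({n - 2}) = {t_obs:.3f}")
# Compare t_obs against the critical value for the chosen alpha (df = n - 2)
# to decide whether to reject the null hypothesis.
```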
One of the most frequent research designs in the social sciences is to compare two groups’ scores. When there is no pairing between sets of scores, it is necessary to conduct an unpaired two-sample t-test. The steps of this NHST are the same eight steps as the previous statistical tests because all of these tests are members of the GLM. A few modifications make the unpaired two-sample t-test unique:
• The null hypothesis for an unpaired two-sample t-test is always H0: μ1 = μ2 (equivalently, μ1 – μ2 = 0).
• The equation for the number of degrees of freedom is different: df = (n1 – 1) + (n2 – 1).
• The formula to calculate the observed value is more complex: t_obs = (M1 – M2) / (s_pooled × √(1/n1 + 1/n2)), where M1 and M2 are the two sample means and s_pooled is the pooled standard deviation.
• The effect size for an unpaired two-sample t-test is still Cohen’s d, but the formula has been modified: d = (M1 – M2) / s_pooled.
The unpaired two-sample t-test shares many characteristics with other NHSTs: the sensitivity to sample size, the necessity of calculating an effect size, the arbitrary nature of selecting an α value, and the need to worry about Type I and Type II errors.
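A hedged Python sketch of these modifications, using hypothetical group scores and the pooled-standard-deviation form of the test given above:

```python
import math
import statistics

# Hypothetical scores for two independent (unpaired) groups.
group1 = [12, 15, 11, 14, 13, 16, 12]
group2 = [10, 9, 12, 11, 8, 10, 11]

n1, n2 = len(group1), len(group2)
m1, m2 = statistics.mean(group1), statistics.mean(group2)
s1, s2 = statistics.stdev(group1), statistics.stdev(group2)

# Pooled standard deviation weights each group by its degrees of freedom.
s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

df = (n1 - 1) + (n2 - 1)
t_obs = (m1 - m2) / (s_pooled * math.sqrt(1 / n1 + 1 / n2))
cohens_d = (m1 - m2) / s_pooled

print(f"t({df}) = {t_obs:.3f}, Cohen's d = {cohens_d:.3f}")
# Compare t_obs with the critical value for the chosen alpha to decide
# whether to reject H0: mu1 = mu2.
```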
Paired scores occur when each score in one sample corresponds to a single score in another sample. Whenever paired scores occur, researchers can measure whether the difference between them is statistically significant through a paired-samples t-test. A paired-samples t-test is similar to a one-sample t-test; the major difference is that the sampling distribution consists of differences between group means. This necessitates a few minor changes to the t-test procedure:
• The null hypothesis for a paired-samples t-test is now H0: μD = 0, where μD is the average difference between paired scores.
• n now refers to the number of score pairs in the data.
• The Cohen’s d formula is now d = (M1 – M2) / s_pooled, where the numerator is the difference between the two sample means and s_pooled is the pooled standard deviation.
All of the other aspects of a paired-samples t-test are identical to a one-sample t-test, including the default α value, the rationale for selecting a one- or a two-tailed test, how to compare the observed and critical values in order to determine whether to reject the null hypothesis, and the interpretation of the effect size. The caveats of all NHSTs apply.
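A minimal Python sketch of the procedure, using hypothetical pre/post scores; the test reduces to a one-sample t-test on the pairwise differences:

```python
import math
import statistics

# Hypothetical pre/post scores; each pre score pairs with one post score.
pre = [20, 18, 25, 22, 19, 24]
post = [23, 20, 27, 22, 22, 26]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)                   # n is the number of score pairs
d_bar = statistics.mean(diffs)   # average difference; H0 says this is zero
s_d = statistics.stdev(diffs)

t_obs = d_bar / (s_d / math.sqrt(n))  # same form as a one-sample t-test
print(f"t({n - 1}) = {t_obs:.3f}, mean difference = {d_bar:.2f}")
```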
Assuming no prior linguistics background, this introductory textbook summarises key topics and issues from workplace discourse research in a clear and accessible manner. The topics covered include how people issue directives, use humour and social talk, and manage conflict and disagreement. The role of language in the enactment of identity is also explored, in particular leadership, gender, and cultural identity, along with the implications and applications of workplace research for training and communication skills development. Over 160 international examples, drawn from a wide range of workplace settings, countries, and languages, are provided as illustration. The examples focus on authentic spoken discourse, to demonstrate how theory captures the patterns found in everyday interaction. Introducing Language in the Workplace provides an excellent, up-to-date resource for linguistics courses as well as other courses that cover workplace discourse, such as business communication or management studies.
This exciting new textbook introduces the concepts and tools essential for upper-level undergraduate study in water resources and hydraulics. Tailored specifically to fit the length of a typical one-semester course, it will prove a valuable resource to students in civil engineering, water resources engineering, and environmental engineering. It will also serve as a reference textbook for researchers, practicing water engineers, consultants, and managers. The book facilitates students' understanding of both hydrologic analysis and hydraulic design. Example problems are carefully selected and solved clearly in a step-by-step manner, allowing students to follow along and gain mastery of relevant principles and concepts. These examples are comparable in terms of difficulty level and content with the end-of-chapter student exercises, so students will become well equipped to handle relevant problems on their own. Physical phenomena are visualized in engaging photos, annotated equations, graphical illustrations, flowcharts, videos, and tables.
With a machine learning approach and less focus on linguistic details, this gentle introduction to natural language processing develops fundamental mathematical and deep learning models for NLP under a unified framework. NLP problems are systematically organised by their machine learning nature, including classification, sequence labelling, and sequence-to-sequence problems. Topics covered include statistical machine learning and deep learning models, text classification and structured prediction models, generative and discriminative models, supervised and unsupervised learning with latent variables, neural networks, and transition-based methods. Rich connections are drawn between concepts throughout the book, equipping students with the tools needed to establish a deep understanding of NLP solutions, adapt existing models, and confidently develop innovative models of their own. Featuring a host of examples, intuition, and end-of-chapter exercises, plus sample code available as an online resource, this textbook is an invaluable tool for the upper undergraduate and graduate student.
Community and primary health care nursing is a rapidly growing field. Founded on the social model of health, the primary health care approach explores how social, environmental, economic and political factors affect the health of the individual and communities, and the role of nurses and other health care practitioners in facilitating an equitable and collaborative health care process. An Introduction to Community and Primary Health Care provides an engaging introduction to the theory, skills and range of professional roles in community settings. This edition has been fully revised to include current research and practice, and includes three new chapters on health informatics, refugee health nursing and developing a career in primary health care. Written by an expert team, this highly readable text is an indispensable resource for any reader undertaking a course in community and primary health care and developing their career in the community.
An Introduction to Japanese Society provides a highly readable introduction to Japanese society by internationally renowned scholar Yoshio Sugimoto. Taking a sociological approach, the text examines the multifaceted nature of contemporary Japanese society with chapters covering class, geographical and generational variation, work, education, gender, ethnicity, religion, popular culture, and the establishment. This edition begins with a new historical introduction placing the sociological analysis of contemporary Japan in context, and includes a new chapter on religion and belief systems. Comprehensively revised to include current research and statistics, the text covers changes to the labor market, evolving conceptions of family and gender, demographic shifts in an aging society, and the emergence of new social movements. Each chapter now contains illustrative case examples, research questions, recommended further readings and useful online resources. Written in a lively and engaging style, An Introduction to Japanese Society remains essential reading for all students of Japanese society.
Deterministic evolution is a hallmark of classical mechanics. Given a set of exact initial conditions, differential equations evolve the trajectories of particles into the future and can exactly predict the location of every particle at any instant in time. So what happens if our uncertainties in the initial position or velocity of a particle are tiny? Does that mean that our uncertainties about the subsequent motion of the particle are necessarily tiny as well? Or are there situations in which a very slight change in initial conditions leads to huge changes in the later motion? For example, can you really balance a pencil on its point? What has been learned in relatively recent years is that, in contrast to Laplace’s vision of a clock-like universe, deterministic systems are not necessarily predictable. What are the attributes of chaos and how can we quantify it? We begin our discussion with the notion of integrability, which ensures the absence of chaos.
In this chapter we describe motion caused by central forces, especially the orbits of planets, moons, and artificial satellites due to central gravitational forces. Historically, this is the most important testing ground of Newtonian mechanics. In fact, it is not clear how the science of mechanics would have developed if the earth had been covered with permanent clouds, obscuring the moon and planets from view. And Newton’s laws of motion with central gravitational forces are still very much in use today, such as in designing spacecraft trajectories to other planets. Our treatment here of motion in central gravitational forces is followed in the next chapter with a look at motion due to electromagnetic forces, which can also be central in special cases, but are commonly much more varied, partly because they involve both electric and magnetic forces. Throughout this chapter we focus on nonrelativistic regimes. The setting where large speeds are involved and gravitational forces are particularly large is the realm of general relativity, where Newtonian gravity fails to capture the correct physics. We explore such extreme scenarios in the capstone Chapter 10.
In this final chapter we introduce Hamilton--Jacobi theory along with its special insights into classical mechanics, and then go on to show how Erwin Schrödinger used the Hamilton--Jacobi equation to learn how to write his famous quantum-mechanical wave equation. In doing so, we will have introduced the reader to two of the ways classical mechanics served as a stepping stone to the world of quantum mechanics. Back in Chapter 5 we showed how Feynman’s sum-over-paths method is related to the principle of least action and the Lagrangian, and here we will show how Schrödinger used the Hamilton--Jacobi equations to invent wave mechanics. These two approaches, along with a third approach developed by Werner Heisenberg called “matrix mechanics,” turn out to be quantum-mechanical analogues of the classical mechanical theories of Newton, Lagrange, Hamilton, and Hamilton and Jacobi, in that they are describing the same thing in different ways, each with its own advantages and disadvantages.
As we saw in Chapter 1, Newton’s laws are valid only for observers at rest in an inertial frame of reference. But to an observer in a non-inertial frame, like an accelerating car or a rotating carnival ride, the same object will generally move in accelerated curved paths even when no forces act upon it. How then can we do mechanics from the vantage point of actual, non-inertial frames? In many tabletop situations, the effects of the non-inertial perspective are small and can be neglected. Yet even in these situations we often still need to quantify how small these effects are. Furthermore, learning how to study dynamics from the non-inertial vantage point turns out to be critical in understanding many other interesting phenomena, including the directions of large-scale ocean currents, the formation of weather patterns -- including hurricanes and tornadoes, life inside rotating space colonies or accelerating spacecraft, and rendezvousing with orbiting space stations. There is an infinity of ways a frame might accelerate relative to an inertial frame. Two stand out as particularly interesting and useful: linearly uniformly accelerating frames, and rotating frames.