In this part of the book, we repeat the material in Part II, but this time we focus on continuous random variables, which can take on an uncountable number of values. Continuous random variables are very relevant to computer systems – how else can we model response time, for example? Working in continuous time also allows us to leverage everything we know about calculus.
In Chapter 16, we defined an estimator of some unknown quantity based on experimentally sampled data. This estimator is called a maximum likelihood (ML) estimator, because it returns the value of the unknown quantity that produces the highest likelihood of witnessing the particular sampled data.
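As a small illustration of this idea (a standard example, not one taken from the chapter): suppose the sampled data are coin flips from a coin with unknown bias p. Maximizing the likelihood p^k (1-p)^(n-k) over p shows that the ML estimator is simply the sample mean. A minimal sketch, with illustrative values:

```python
import random

def mle_bernoulli(samples):
    """ML estimate of p for Bernoulli(p) data.

    The likelihood of observing k ones in n flips is p^k (1-p)^(n-k);
    maximizing over p gives p_hat = k/n, the sample mean.
    """
    return sum(samples) / len(samples)

# Simulated data from a coin with true p = 0.3 (illustrative value).
random.seed(1)
data = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
p_hat = mle_bernoulli(data)   # should be close to 0.3
```

With 10,000 samples, the estimate lands within a couple of percentage points of the true bias.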
The goal of this part of the book is to learn how to run simulations of computer systems. Simulations are an important part of evaluating computer system performance. For example, we might have a new load-balancing algorithm, and we’re trying to understand whether it reduces the mean job response time or improves utilization. Or we might have a queueing network, where we want to understand the fraction of packet drops when we double the arrival rate of packets. Being able to simulate the computer system is an easy way to get answers to such questions.
Until now, we have typically talked about the mean, variance, or higher moments of a random variable (r.v.). In this chapter, we will be concerned specifically with the tail probability of a r.v., that is, the probability that the r.v. exceeds some given value.
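To make the notion concrete (an illustrative example with a distribution and threshold chosen here, not taken from the chapter): for an Exponential r.v. X with rate 1, the tail probability P(X > t) equals e^(-t), which we can check against an empirical frequency:

```python
import math
import random

# Tail probability P(X > t) for X ~ Exp(rate=1); the exact value is e^{-t}.
random.seed(0)
t = 2.0
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]

empirical_tail = sum(x > t for x in samples) / n   # fraction exceeding t
exact_tail = math.exp(-t)                          # about 0.135
```

The empirical fraction agrees with e^(-2) to about two decimal places at this sample size.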
In the last chapter we studied randomized algorithms of the Las Vegas variety. This chapter is devoted to randomized algorithms of the Monte Carlo variety.
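The classic textbook example of a Monte Carlo algorithm (included here as a generic illustration, not as an example from this chapter) is estimating pi by throwing random darts at the unit square: the algorithm always runs quickly, but its answer is only approximately correct.

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi.

    Drop n uniform points in the unit square; the fraction landing
    inside the quarter disk x^2 + y^2 <= 1 estimates pi/4.
    """
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4 * inside / n

pi_hat = estimate_pi(1_000_000)   # close to 3.14159, but not exact
```

This contrasts with a Las Vegas algorithm, whose answer is always correct but whose running time is random.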
In Part I, we saw that experiments are classified as either having a discrete sample space, with a countable number of possible outcomes, or a continuous sample space, with an uncountable number of possible outcomes. In this part, our focus will be on the discrete world. In Part III we will focus on the continuous world.
This chapter is a very brief introduction to the wonderful world of transforms. Transforms come in many varieties. There are z-transforms, moment-generating functions, characteristic functions, Fourier transforms, Laplace transforms, and more. All are very similar in their function. In this chapter, we will study z-transforms, a variant particularly well suited to common discrete random variables. In Chapter 11, we will study Laplace transforms, a variant ideally suited to common continuous random variables.
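As a small worked example of what a z-transform buys us (a standard computation, not reproduced from the chapter): for a Geometric(p) r.v. X on {1, 2, 3, ...}, the z-transform is a closed-form function of z, and differentiating it at z = 1 recovers the mean.

```latex
% z-transform (probability generating function) of X ~ Geometric(p):
\widehat{X}(z) \;=\; E\!\left[z^{X}\right]
  \;=\; \sum_{k=1}^{\infty} z^{k}\,(1-p)^{k-1}p
  \;=\; \frac{pz}{1-(1-p)z},
  \qquad |z| < \frac{1}{1-p}.

% Differentiating and evaluating at z = 1 recovers the mean:
\left.\frac{d}{dz}\,\widehat{X}(z)\right|_{z=1}
  \;=\; \left.\frac{p}{\bigl(1-(1-p)z\bigr)^{2}}\right|_{z=1}
  \;=\; \frac{p}{p^{2}}
  \;=\; \frac{1}{p}
  \;=\; E[X].
```

Higher derivatives at z = 1 yield higher factorial moments in the same way, which is what makes transforms such a convenient bookkeeping device.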
Having covered how to generate random variables in the previous chapter, we are now in good shape to move on to the topic of creating an event-driven simulation. The goal of simulation is to predict the performance of a computer system under various workloads. A big part of simulation is modeling the computer system as a queueing network.
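To give a feel for the structure of an event-driven simulation (a generic sketch of the technique, assuming an M/M/1 FCFS queue with hypothetical rates, not code from the book): the simulation clock jumps between events, here the next arrival and the next departure, rather than advancing in fixed time steps.

```python
import random

def simulate_mm1(lam, mu, num_jobs, seed=0):
    """Event-driven simulation of an M/M/1 FCFS queue.

    Two pending events drive the clock: the next arrival and the next
    departure. We repeatedly jump to whichever comes first.
    Returns the mean response time over the completed jobs.
    """
    rng = random.Random(seed)
    clock = 0.0
    next_arrival = rng.expovariate(lam)
    next_departure = float("inf")    # server currently idle
    queue = []                       # arrival times of jobs in system
    total_response = 0.0
    completed = 0

    while completed < num_jobs:
        if next_arrival <= next_departure:
            # Arrival event: advance clock, admit the job.
            clock = next_arrival
            queue.append(clock)
            if len(queue) == 1:      # server was idle: begin service
                next_departure = clock + rng.expovariate(mu)
            next_arrival = clock + rng.expovariate(lam)
        else:
            # Departure event: record the job's response time.
            clock = next_departure
            arrival_time = queue.pop(0)
            total_response += clock - arrival_time
            completed += 1
            if queue:                # start serving the next job
                next_departure = clock + rng.expovariate(mu)
            else:
                next_departure = float("inf")

    return total_response / completed

# Queueing theory predicts E[T] = 1/(mu - lambda) = 2 for these rates.
mean_T = simulate_mm1(lam=0.5, mu=1.0, num_jobs=100_000)
```

Checking the simulated mean response time against the known M/M/1 formula is a useful way to validate the simulator before applying it to systems with no closed-form answer.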
In this first part of the book we focus on some basic tools that we will need throughout the book. We start, in Chapter 1, with a review of some mathematical basics: series, limits, integrals, counting, and asymptotic notation. Rather than attempting an exhaustive coverage, we instead focus on a select “toolbox” of techniques and tricks that will come up over and over again in the exercises throughout the book. Thus, while none of this chapter deals with probability, it is worth taking the time to master its contents.
Until now we have only studied discrete random variables. These are defined by a probability mass function (p.m.f.). This chapter introduces continuous random variables, which are defined by a probability density function (p.d.f.).
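The defining property of a density, with the Exponential density as a standard example (stated here for orientation, not quoted from the chapter): probabilities come from integrating the density, and the density must integrate to one.

```latex
% A continuous r.v. X with density f satisfies
P(a \le X \le b) \;=\; \int_{a}^{b} f(x)\, dx,
\qquad
\int_{-\infty}^{\infty} f(x)\, dx \;=\; 1 .

% Example: the Exponential(\lambda) density
f(x) \;=\; \lambda e^{-\lambda x}, \quad x \ge 0,
\qquad
\int_{0}^{\infty} \lambda e^{-\lambda x}\, dx
  \;=\; \Bigl[-e^{-\lambda x}\Bigr]_{0}^{\infty}
  \;=\; 1 .
```

Note that for a continuous r.v., any single point has probability zero; only intervals carry positive probability.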
The focus until now in the book has been on probability. We can think of probability as defined by a probabilistic model, or distribution, which governs an “experiment,” through which one generates samples, or events, from this distribution. One might ask questions about the probability of a certain event occurring, under the known probabilistic model.
At this point in our discussion of discrete-time Markov chains (DTMCs), we have defined the notion of a limiting probability of being in a given state of the chain.
In Chapter 18 we saw several powerful tail bounds, including the Chebyshev bound and the Chernoff bound. These are particularly useful when bounding the tail of a sum of independent random variables. We also reviewed the application of the Central Limit Theorem (CLT) to approximating the tail of a sum of independent and identically distributed (i.i.d.) random variables.
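As a reminder of how such bounds are used in practice (an illustrative computation with parameters chosen here, using one common form of the Chernoff bound for sums of i.i.d. Bernoullis): P(X >= (1+d)*m) <= exp(-d^2 * m / 3) for 0 < d <= 1, where m = E[X]. We can compare the bound against a simulated frequency:

```python
import math
import random

# X = sum of n i.i.d. Bernoulli(p) r.v.s, with mean m = n*p.
# One standard Chernoff form: P(X >= (1+d)m) <= exp(-d^2 m / 3), 0 < d <= 1.
n, p, d = 1000, 0.5, 0.2
m = n * p
bound = math.exp(-d * d * m / 3)       # about 0.0013 here

# Empirical check: the observed frequency should not exceed the bound.
random.seed(0)
trials = 2_000
threshold = (1 + d) * m                # 600 successes out of 1000
exceed = sum(
    sum(random.random() < p for _ in range(n)) >= threshold
    for _ in range(trials)
)
empirical = exceed / trials
```

Note how loose the bound typically is: the true probability of 600+ heads in 1000 fair flips is astronomically smaller than the Chernoff bound, yet the bound is still small enough to be useful.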
This book assumes some mathematical skills. The reader should be comfortable with high school algebra, including logarithms. Basic calculus (integration, differentiation, limits, and series evaluation) is also assumed, including nested (3D) integrals and sums. We also assume that the reader is comfortable with sets and with simple combinatorics and counting (as covered in a discrete math class).
In Chapter 15, we focused on estimating the mean and variance of a distribution given observed samples. In this chapter and the next, we look at the more general question of statistical inference, where this time we are estimating the parameter(s) of a distribution or some other quantity. We will continue to use the notation for estimators given in Definition 15.1.
In Chapter 4 we devoted a lot of time to computing the expectation of random variables. As we explained, the expectation is useful because it provides us with a single summary value when trading off different options. For example, in Example 4.1, we used the “expected earnings” in choosing between two startups.
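The "expected earnings" comparison can be sketched in a few lines. The payoff numbers below are hypothetical stand-ins (the actual figures from Example 4.1 are not reproduced here); the point is only the mechanics of E[X] as a probability-weighted sum:

```python
def expected_value(dist):
    """E[X] for a finite distribution given as (probability, value) pairs."""
    assert abs(sum(p for p, _ in dist) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * x for p, x in dist)

# Hypothetical earnings distributions (illustrative values only):
startup_a = [(0.9, 0), (0.1, 1_000_000)]   # risky: usually pays nothing
startup_b = [(1.0, 90_000)]                # safe: guaranteed salary

ev_a = expected_value(startup_a)           # close to 100000
ev_b = expected_value(startup_b)           # close to 90000
```

Here the risky option has the higher expectation, even though it pays nothing 90% of the time, which is exactly the kind of trade-off a single summary value can hide.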