The mean or expected value of a random variable can be thought of as the “long-term average”: the average of the outcomes over an ever-increasing number of trials of the experiment.
Denoted $E(X)$ or $\mu_X$.
If the variable $X$ takes the values $x_1, x_2, \ldots, x_n$ with probabilities $p_1, p_2, \ldots, p_n$ respectively, then the mean is defined as:

$$E(X) = \mu_X = p_1 x_1 + p_2 x_2 + \cdots + p_n x_n$$
You can think of this as a weighted average of the values, with their probabilities as weights. This makes sense: we want to take all values into account, but values with higher probability tend to appear more often, and so should contribute more. Each value contributes an amount proportional to its relative frequency.
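This weighted average is straightforward to compute directly. Here is a minimal sketch in Python, assuming the values and probabilities are stored in plain lists (the function name `expected_value` is just illustrative, not from the text):

```python
def expected_value(values, probs):
    """Weighted average of the values, using the probabilities as weights."""
    return sum(p * x for x, p in zip(values, probs))
```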
As a simple example, consider the game from the last section, with probability table:

| $X$    | 0   | 1   | 2   |
|--------|-----|-----|-----|
| $P(X)$ | 1/2 | 1/4 | 1/4 |
Then for the mean we would have:
$$E(X) = \frac{1}{2}\cdot 0 + \frac{1}{4}\cdot 1 + \frac{1}{4}\cdot 2 = \frac{3}{4} = 0.75$$
We can think of this as saying that if you were to play that game repeatedly, you would gain on average $0.75 per game. You can also think of it as the “fair price to pay to play the game”.
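To check the arithmetic, here is a short sketch using the hypothetical `expected_value` helper from above (the variable names are illustrative):

```python
values = [0, 1, 2]
probs = [1/2, 1/4, 1/4]

mean = expected_value(values, probs)  # 1/2*0 + 1/4*1 + 1/4*2
print(mean)                           # 0.75
```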
We examined a number of games in the previous section. Compute the mean of the random variables in each of those games.
The standard deviation follows a similar formula. We first compute its square, the variance:
$$\sigma_X^2 = p_1(x_1 - \mu_X)^2 + p_2(x_2 - \mu_X)^2 + \cdots + p_n(x_n - \mu_X)^2$$
So we look at how far each value is from the mean, square to remove the signs, average while accounting for the different probabilities, and finally take the square root to obtain the standard deviation $\sigma_X$.
The square of the standard deviation, typically called the variance $\operatorname{Var}(X)$, is often written as $E\left((X - \mu_X)^2\right)$.
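As a quick illustration, here is a minimal sketch that computes the variance and standard deviation for the table above, reusing the illustrative `values`, `probs`, and `mean` names from the earlier snippet:

```python
import math

# Weighted average of the squared deviations from the mean
variance = sum(p * (x - mean) ** 2 for x, p in zip(values, probs))
std_dev = math.sqrt(variance)

print(variance)  # 1/2*(0-0.75)^2 + 1/4*(1-0.75)^2 + 1/4*(2-0.75)^2 = 0.6875
print(std_dev)   # ≈ 0.829
```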
Compute the standard deviation for each of the examples discussed so far.