Random Variables: Distribution and Expectation¶
A random variable \(X\) on a sample space \(\Omega\) is a function \(X: \Omega \rightarrow \mathbb{R}\) that assigns to each sample point \(\omega\in\Omega\) a real number \(X(\omega)\).
In this chapter, we discuss discrete random variables, which take values in a range that is finite or countably infinite. This means that even though we define \(X\) to map \(\Omega\) to \(\mathbb{R}\), the actual set of values \(\{X(\omega):\omega\in \Omega\}\) that \(X\) takes is a discrete subset of \(\mathbb{R}\).
The term “random variable” is a bit of a misnomer: it is a function and there is nothing random about it, nor is it a variable! What is random is which sample point of the experiment is realized, and hence the value that the random variable maps the sample point to.
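To make the "function, not variable" point concrete, here is a minimal Python sketch; the die-roll sample space and the indicator-of-even choice of \(X\) are illustrative assumptions, not from the text.

```python
# Sample space for one roll of a fair six-sided die.
omega = [1, 2, 3, 4, 5, 6]

# A random variable is just an ordinary function from sample points to reals.
# Here X maps each outcome to 1 if it is even, and 0 otherwise.
def X(sample_point):
    return 1 if sample_point % 2 == 0 else 0

print([X(w) for w in omega])  # [0, 1, 0, 1, 0, 1]
```

Nothing here is random: the randomness enters only when the experiment selects a particular sample point \(\omega\).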
Distributions¶
As with a basic probability space, the most important things to know about any random variable are:
The set of values it can take.
The probabilities with which it takes on the values.
Since a random variable is defined on a probability space, we can calculate these probabilities given the probabilities of the sample points. Let \(a\) be any number in the range of a random variable \(X\). Then, the set

\[\{\omega\in\Omega : X(\omega)=a\}\]
is an event in the sample space because it is a subset of \(\Omega\). We usually abbreviate this event to simply “\(X=a\)”, and we discuss its probability \(\mathbb {P}[X=a]\). The collection of these probabilities for all possible values of \(a\) is known as the distribution of the random variable \(X\).
Formally, the distribution of a discrete random variable \(X\) is the collection of values \(\{(a, \mathbb{P}[X=a]): a\in\mathscr{A}\}\), where \(\mathscr{A}\) is the set of all possible values taken by \(X\).
Note that the collection of events \(X=a\), for \(a\in\mathscr{A}\), satisfies two key properties:
Any two events \(X=a_1\) and \(X=a_2\) with \(a_1\neq a_2\) are disjoint.
The union of these events is equal to the entire sample space \(\Omega\).
The collection of events thus forms a partition of the sample space.
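A short Python sketch of how a distribution arises from sample-point probabilities; the two-fair-coin-flips setup is an illustrative assumption. Because the events \(X=a\) partition \(\Omega\), the probabilities \(\mathbb{P}[X=a]\) must sum to 1.

```python
from itertools import product

# Sample space: two fair coin flips; each sample point has probability 1/4.
omega = list(product("HT", repeat=2))
prob = {w: 1 / 4 for w in omega}

def X(w):
    """Number of heads in the outcome w."""
    return w.count("H")

# Distribution of X: sum sample-point probabilities within each event X = a.
dist = {}
for w in omega:
    dist[X(w)] = dist.get(X(w), 0) + prob[w]

print(dist)                # {2: 0.25, 1: 0.5, 0: 0.25}
print(sum(dist.values()))  # 1.0, since the events X = a partition Omega
```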
Bernoulli Distribution¶
A simple yet useful probability distribution is the Bernoulli distribution of a random variable that takes values in \(\{0,1\}\):

\[\mathbb{P}[X=1]=p \qquad \text{and} \qquad \mathbb{P}[X=0]=1-p,\]

where \(0\leq p\leq1\). We say that \(X\) is distributed as a Bernoulli random variable with parameter \(p\), and write \(X \sim Bernoulli(p)\).
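A minimal way to simulate a \(Bernoulli(p)\) draw, sketched in Python; the parameter value 0.3 is an arbitrary illustration.

```python
import random

def bernoulli(p):
    """Return 1 with probability p and 0 with probability 1 - p."""
    return 1 if random.random() < p else 0

samples = [bernoulli(0.3) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 0.3
```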
Binomial Distribution¶
The binomial distribution is the discrete probability distribution of the number of successes in a sequence of \(n\) independent experiments, each asking a yes–no question and each with its own Boolean-valued outcome: success with probability \(p\), or failure with probability \(1-p\).
A random variable with this distribution is called a binomial random variable, and we write \(X \sim Bin(n,p)\).
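Concretely, the standard formula is \(\mathbb{P}[X=k]=\binom{n}{k}p^k(1-p)^{n-k}\) for \(k=0,\dots,n\). Here is a short Python check that these probabilities sum to 1; the values \(n=5\), \(p=0.4\) are illustrative.

```python
from math import comb

def binomial_pmf(k, n, p):
    # P[X = k] = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 5, 0.4
dist = {k: binomial_pmf(k, n, p) for k in range(n + 1)}
print(dist[2])             # 0.3456
print(sum(dist.values()))  # 1.0
```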
Hypergeometric Distribution¶
The hypergeometric distribution is a discrete probability distribution that describes the probability of \(k\) successes (random draws for which the object drawn has a specified feature) in \(n\) draws, without replacement, from a finite population of size \(N\) that contains exactly \(B\) objects with that feature, wherein each draw is either a success or a failure.
A random variable with this distribution is called a hypergeometric random variable with parameters \(N, B, n\), and we write \(Y \sim Hypergeometric(N,B,n)\).
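The corresponding probability is \(\mathbb{P}[Y=k]=\binom{B}{k}\binom{N-B}{n-k}\big/\binom{N}{n}\). A small Python sketch, with the illustrative parameters \(N=20\), \(B=8\), \(n=5\):

```python
from math import comb

def hypergeometric_pmf(k, N, B, n):
    # P[Y = k] = C(B, k) * C(N - B, n - k) / C(N, n)
    return comb(B, k) * comb(N - B, n - k) / comb(N, n)

N, B, n = 20, 8, 5
dist = {k: hypergeometric_pmf(k, N, B, n) for k in range(min(n, B) + 1)}
print(sum(dist.values()))  # 1.0
```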
Joint Distributions¶
We are often interested in multiple random variables on the same sample space. The joint distribution for two discrete random variables \(X\) and \(Y\) is the collection of values \(\{((a, b),\mathbb{P}[X=a,Y=b]):a\in\mathscr{A},b\in \mathscr{B}\}\), where \(\mathscr{A}\) is the set of all possible values taken by \(X\), and \(\mathscr{B}\) is the set of all possible values taken by \(Y\).
When we have a joint distribution for \(X\) and \(Y\), the distribution \(\mathbb{P}[X=a]\) for \(X\) is called the marginal distribution for \(X\), and can be found by “summing” over the values of \(Y\):

\[\mathbb{P}[X=a]=\sum_{b\in\mathscr{B}}\mathbb{P}[X=a,Y=b].\]
The marginal distribution for \(Y\) is defined analogously. With more than two random variables, we sum over all possible values of all the other variables. In some cases, the marginal may be obtained more simply.
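A short Python sketch of marginalization; the joint table below, for two \(\{0,1\}\)-valued random variables, is an illustrative assumption.

```python
# Joint distribution P[X = a, Y = b], stored as a dict keyed by (a, b).
joint = {
    (0, 0): 0.30, (0, 1): 0.20,
    (1, 0): 0.10, (1, 1): 0.40,
}

# Marginal distribution of X: sum the joint probabilities over all b.
marginal_X = {}
for (a, b), pr in joint.items():
    marginal_X[a] = marginal_X.get(a, 0) + pr

print(marginal_X)  # {0: 0.5, 1: 0.5}
```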
Random variables \(X\) and \(Y\) are said to be independent if the events \(X=a\) and \(Y=b\) are independent for all values \(a, b\). Equivalently, the joint distribution of independent random variables decomposes as

\[\mathbb{P}[X=a,Y=b]=\mathbb{P}[X=a]\,\mathbb{P}[Y=b] \qquad \text{for all } a\in\mathscr{A},\, b\in\mathscr{B}.\]
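Continuing the sketch above, independence can be checked by comparing each joint probability against the product of the marginals; the uniform table here is again an illustrative assumption (and happens to be independent).

```python
joint = {
    (0, 0): 0.25, (0, 1): 0.25,
    (1, 0): 0.25, (1, 1): 0.25,
}
px = {a: sum(pr for (a2, b), pr in joint.items() if a2 == a) for a in (0, 1)}
py = {b: sum(pr for (a, b2), pr in joint.items() if b2 == b) for b in (0, 1)}

# Independent iff P[X=a, Y=b] == P[X=a] * P[Y=b] for every pair (a, b).
independent = all(
    abs(pr - px[a] * py[b]) < 1e-12 for (a, b), pr in joint.items()
)
print(independent)  # True
```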
Expectation¶
The expectation of a discrete random variable \(X\) is defined as

\[\mathbb{E}[X]=\sum_{a\in\mathscr{A}} a\times\mathbb{P}[X=a],\]

where the sum is over all possible values taken by the random variable.
The expectation summarizes the distribution into a more compact, convenient form that is also easier to compute: it can be thought of as a “typical value” of the random variable (though note that it may not be a value that \(X\) can actually take).
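For example, with a fair six-sided die (an illustrative choice), the expectation of the face value is 3.5, which is not itself a possible value of the die:

```python
from fractions import Fraction

# Fair die: each face a in {1, ..., 6} has probability 1/6.
dist = {a: Fraction(1, 6) for a in range(1, 7)}

# E[X] = sum over a of a * P[X = a].
expectation = sum(a * p for a, p in dist.items())
print(expectation)  # 7/2, i.e. 3.5 -- not a value the die can actually show
```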
Linearity of Expectation¶
For any two random variables \(X\) and \(Y\) on the same probability space, we have

\[\mathbb{E}[X+Y]=\mathbb{E}[X]+\mathbb{E}[Y].\]
For any constant \(c\), we have

\[\mathbb{E}[cX]=c\,\mathbb{E}[X].\]
Proof
Consider a particular term \(a\times\mathbb{P}[X=a]\) in the definition of expectation. By definition, \(\mathbb{P}[X=a]\) is the sum of \(\mathbb{P}[\omega]\) over those sample points \(\omega\) for which \(X(\omega)=a\). Since every sample point \(\omega\in\Omega\) is in exactly one of the events \(X=a\), we can write out the definition as

\[\mathbb{E}[X]=\sum_{a\in\mathscr{A}} a\times\mathbb{P}[X=a]=\sum_{a\in\mathscr{A}}\;\sum_{\omega\in\Omega:\,X(\omega)=a} X(\omega)\,\mathbb{P}[\omega]=\sum_{\omega\in\Omega} X(\omega)\,\mathbb{P}[\omega].\]
Now we apply this to \(\mathbb{E}[X+Y]\):

\[\mathbb{E}[X+Y]=\sum_{\omega\in\Omega}\bigl(X(\omega)+Y(\omega)\bigr)\,\mathbb{P}[\omega]=\sum_{\omega\in\Omega}X(\omega)\,\mathbb{P}[\omega]+\sum_{\omega\in\Omega}Y(\omega)\,\mathbb{P}[\omega]=\mathbb{E}[X]+\mathbb{E}[Y].\]
This completes the proof of the first equality; the proof of the second is left as an exercise.
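As a sanity check, here is a brute-force numerical verification of both identities on a small probability space; the two-fair-dice setup and the constant \(c=3\) are illustrative assumptions.

```python
from itertools import product
from fractions import Fraction

# Sample space: two fair dice; each sample point has probability 1/36.
omega = list(product(range(1, 7), repeat=2))
prob = Fraction(1, 36)

def expect(f):
    # E[f] = sum over sample points of f(omega) * P[omega].
    return sum(f(w) * prob for w in omega)

X = lambda w: w[0]  # value of the first die
Y = lambda w: w[1]  # value of the second die
c = 3

print(expect(lambda w: X(w) + Y(w)) == expect(X) + expect(Y))  # True
print(expect(lambda w: c * X(w)) == c * expect(X))             # True
```

Note that the check works even though \(X\) and \(Y\) here happen to be independent; linearity of expectation requires no independence at all, which is what makes it so useful.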