Student's t-distribution, t-test
June 20, 2021 15 min read
Here I discuss how to derive Student's t-distribution, an important statistical distribution that serves as the basis for the t-test.
Student’s t-distribution is yet another important distribution, closely associated with the chi-squared distribution. For the derivation of the chi-squared distribution, see one of my previous posts.
William Sealy Gosset, known as “Student”, worked at the Guinness brewery and was interested in working with small datasets. He was not allowed to publish his findings under his real name, but was allowed to publish the results of his research under a pseudonym. Having attended Karl Pearson’s courses, he chose the pen name “Student”.
Student’s t-distribution derivation
Suppose that you have sampled $n$ times from a normally distributed random variable $X \sim \mathcal{N}(\mu, \sigma^2)$, but you don’t know the mean $\mu$ and variance $\sigma^2$ of that variable.
The best you can do is to substitute the unknown mean and variance with their unbiased sample estimates: sample mean $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ and sample variance $S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$.
It is intuitive to substitute the sample mean and sample variance into the formula of the normal distribution instead of the true ones. It turns out that if we consider a very similar random variable, the ratio between the sample mean’s deviation from the true mean and the square root of the sample variance, normalized by $\sqrt{n}$:

$T = \frac{\bar{X} - \mu}{S / \sqrt{n}}$,

it is said to be t-Student distributed.
You may notice that its square $T^2 = \frac{(\bar{X} - \mu)^2 / (\sigma^2 / n)}{S^2 / \sigma^2}$ looks very much like a ratio of two chi-squared-distributed random variables. Therefore, $T^2$ would be a Fisher-Snedecor F-distributed random variable, if we managed to prove that the numerator and denominator were independent and that the denominator is chi-squared-distributed.
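Before diving into the derivation, here is a quick Monte-Carlo sanity check (a sketch of my own, not part of the derivation; all parameters are chosen arbitrarily) that for small $n$ the statistic $T$ really does have heavier tails than a standard normal variable:

```python
# Sanity check: T = (x_bar - mu) / (s / sqrt(n)) has heavier tails than
# a standard normal for small n; that is what the t-distribution captures.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 5.0, 2.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(trials, n))
x_bar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)           # Bessel-corrected estimate
t_stat = (x_bar - mu) / (s / np.sqrt(n))

print((np.abs(t_stat) > 2).mean())        # noticeably above the normal tail mass
```

For a standard normal variable $p(|Z| > 2) \approx 0.046$, while this simulation gives roughly $0.12$, which matches $p(|T| > 2)$ for a t-distribution with 4 degrees of freedom.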
We shall look deeper into the properties of both of these estimators, as important facts arise from their analysis.
Sample mean and its distribution
The sample mean has a normal distribution $\bar{X} \sim \mathcal{N}(\mu, \frac{\sigma^2}{n})$. Let us show this fact:
Recall that by the rule of summation of normally distributed random variables, a sum of independent normally distributed random variables $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ (not necessarily infinitely many!) is exactly normally distributed with mean $\sum_i \mu_i$ and variance $\sum_i \sigma_i^2$: $\sum_i X_i \sim \mathcal{N}(\sum_i \mu_i, \sum_i \sigma_i^2)$.
Therefore, the sample mean $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i \sim \mathcal{N}(\mu, \frac{\sigma^2}{n})$, because if $X \sim \mathcal{N}(\mu, \sigma^2)$, then $\frac{X}{n} \sim \mathcal{N}(\frac{\mu}{n}, \frac{\sigma^2}{n^2})$.
Hence, $\frac{\bar{X} - \mu}{\sigma / \sqrt{n}}$ is a standard normal random variable, and its square $\frac{(\bar{X} - \mu)^2}{\sigma^2 / n}$ is a chi-squared-distributed variable with 1 degree of freedom.
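If you want to check this numerically, here is a minimal numpy sketch (my own illustration, with arbitrarily chosen $\mu$, $\sigma$ and $n$):

```python
# Empirical check that the sample mean of n draws from N(mu, sigma^2)
# is distributed as N(mu, sigma^2 / n).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 5.0, 2.0, 10
means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
print(means.mean())   # ≈ mu = 5.0
print(means.var())    # ≈ sigma**2 / n = 0.4
```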
Sample variance, unbiased (Bessel) estimator and its expectation
The best estimate of the variance of a random variable that we can get from our experiment is the sample variance $S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$.
Note that $S^2$ is normalized by $n - 1$, not $n$, which is not intuitive; this normalization is called Bessel’s correction. It is needed because the naive sample variance on small samples is on average somewhat smaller than the exact variance (called the distribution variance). To show this fact, let us follow the logic of this post from StackOverflow, which is very similar to the derivation of the bias-variance tradeoff in machine learning books, e.g. Hastie-Tibshirani.
Denote $\bar{X}$ the sample mean and $\mu$ the true (distribution) mean.
Let us denote $S_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$ the naive (biased) sample variance and $S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$ the unbiased sample variance.
Then the expectation of the naive sample variance is:

$\mathbb{E}[S_n^2] = \mathbb{E}\left[ \frac{1}{n} \sum_{i=1}^{n} \big( (X_i - \mu) - (\bar{X} - \mu) \big)^2 \right] = \mathbb{E}\left[ \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu)^2 \right] - \mathbb{E}\left[ (\bar{X} - \mu)^2 \right] = \sigma^2 - \frac{\sigma^2}{n} = \frac{n-1}{n} \sigma^2$

(the middle equality holds because $\frac{1}{n} \sum_{i=1}^{n} (X_i - \mu) = \bar{X} - \mu$, so the cross term collapses into $-2 (\bar{X} - \mu)^2$ and partially cancels with $(\bar{X} - \mu)^2$).
Thus, the unbiased sample variance is $S^2 = \frac{n}{n-1} S_n^2$, so that $\mathbb{E}[S^2] = \sigma^2$.
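The effect of Bessel’s correction is easy to see in simulation; here is a small sketch (again my own illustration, with arbitrary parameters), using numpy's `ddof` argument to switch between the biased and unbiased estimators:

```python
# The naive variance (ddof=0) underestimates sigma^2 by a factor of
# (n-1)/n on average; the Bessel-corrected estimator (ddof=1) is unbiased.
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 2.0, 5
samples = rng.normal(0.0, sigma, size=(200_000, n))
print(samples.var(axis=1, ddof=0).mean())  # ≈ (n-1)/n * sigma**2 = 3.2
print(samples.var(axis=1, ddof=1).mean())  # ≈ sigma**2 = 4.0
```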
Sample variance consists of sum of squares of non-independent normal random variables
Now, what we are aiming to do is construct a ratio between the squared normalized deviation of the sample mean and the sample variance that would follow the Fisher-Snedecor F-distribution, which is a ratio of two chi-squared-distributed random variables.
We want the sample variance divided by the exact variance (times $n-1$), $\frac{(n-1) S^2}{\sigma^2} = \sum_{i=1}^{n} \frac{(X_i - \bar{X})^2}{\sigma^2}$, to be a $\chi^2$-distributed random variable.
It is tempting to assume that it is a sum of $n$ squares of standard normal variables and, thus, would be $\chi^2_n$-distributed, with $n$ degrees of freedom.
Indeed:
- each summand $X_i - \bar{X}$ is a normal random variable: $X_i - \bar{X} = \frac{n-1}{n} X_i - \frac{1}{n} \sum_{j \neq i} X_j$, and $Var\left(\frac{1}{n} \sum_{j \neq i} X_j\right) = \frac{n-1}{n^2} \sigma^2$ (variance of a sum is the sum of variances, variance of a variable divided by $n$ is its variance divided by $n^2$)
- $Var(X_i - \bar{X}) = \frac{(n-1)^2}{n^2} \sigma^2 + \frac{n-1}{n^2} \sigma^2 = \frac{n-1}{n} \sigma^2$ (variance of a difference of independent variables is the sum of their variances)
However, there is a huge problem: the summands $X_i - \bar{X}$ are normal variables, but NOT independent normal variables! E.g. if $n = 2$, they are exactly the opposite of each other ($X_1 - \bar{X} = -(X_2 - \bar{X})$), and the number of degrees of freedom equals 1. If $n = 3$, two of them can take arbitrary values, but the third one is fixed, because the deviations sum to zero. This sounds very much like the argument in Pearson’s goodness of fit test, right? Let us prove this one, too.
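The constraint responsible for the lost degree of freedom is easy to demonstrate (a tiny sketch of mine, with made-up numbers):

```python
# Deviations from the sample mean always sum to zero, so only n-1 of
# them are free: knowing any n-1 deviations fixes the remaining one.
import numpy as np

x = np.array([1.7, 3.2, 0.4])   # any sample with n = 3
deviations = x - x.mean()
print(deviations.sum())          # 0.0 (up to floating-point error)
```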
Sample variance is distributed as a chi-squared random variable with n-1 degrees of freedom
I am following the logic of this well-written article from PennState.
Ok, suppose that we knew the exact expectation $\mu$. Then let us construct the sum of squares of our samples:

$W = \sum_{i=1}^{n} \left( \frac{X_i - \mu}{\sigma} \right)^2 \sim \chi^2_n$.

Again, let us add and subtract the sample mean $\bar{X}$ in this sum of squares:

$W = \sum_{i=1}^{n} \left( \frac{(X_i - \bar{X}) + (\bar{X} - \mu)}{\sigma} \right)^2 = \sum_{i=1}^{n} \frac{(X_i - \bar{X})^2}{\sigma^2} + \frac{n (\bar{X} - \mu)^2}{\sigma^2}$

(the cross term $\frac{2 (\bar{X} - \mu)}{\sigma^2} \sum_{i=1}^{n} (X_i - \bar{X})$ vanishes, because deviations from the sample mean sum to zero), or:

$W = U + V$, where $U = \frac{(n-1) S^2}{\sigma^2}$, $V = \left( \frac{\bar{X} - \mu}{\sigma / \sqrt{n}} \right)^2 \sim \chi^2_1$.
By Cochran’s theorem the sample variance $S^2$ is independent of the sample mean $\bar{X}$; thus, the probability density function of $W$ is a convolution of the probability density functions of $U$ and $V$.
Now, to derive the distribution of $U$, we can either use the convolution formula directly, or apply one of the spectral analysis tools: moment-generating functions/cumulants or characteristic functions/Fourier transforms.
The Fourier transform of a convolution is a product of Fourier transforms. Thus, $\phi_W(t) = \phi_U(t) \cdot \phi_V(t)$.
The characteristic function of a chi-squared distribution with $k$ degrees of freedom is $\phi_{\chi^2_k}(t) = (1 - 2it)^{-k/2}$.
Thus, the characteristic function $\phi_U(t) = \frac{\phi_W(t)}{\phi_V(t)} = \frac{(1 - 2it)^{-n/2}}{(1 - 2it)^{-1/2}} = (1 - 2it)^{-\frac{n-1}{2}}$. But this is the characteristic function of $\chi^2_{n-1}$ (characteristic functions are mostly reversible, so that correspondence of characteristic functions implies correspondence of distributions).
Hence, $U = \frac{(n-1) S^2}{\sigma^2} \sim \chi^2_{n-1}$.
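This result is easy to verify by simulation; here is a quick numpy/scipy sketch of mine (parameters are arbitrary):

```python
# Monte-Carlo check that (n-1) * S^2 / sigma^2 follows a chi-squared
# distribution with n-1 degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sigma, n = 2.0, 6
samples = rng.normal(0.0, sigma, size=(100_000, n))
u = (n - 1) * samples.var(axis=1, ddof=1) / sigma**2

print(u.mean(), u.var())  # ≈ n-1 = 5 and ≈ 2*(n-1) = 10
# Kolmogorov-Smirnov statistic against chi2(n-1): close to 0 if they match
print(stats.kstest(u, stats.chi2(df=n - 1).cdf).statistic)
```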
Cochran’s theorem: Independence of sample mean and sample variance
Moreover, it is not obvious that our numerator (sample mean) and denominator (sample variance) are independent. To deal with this problem, we need one more tool in our pocket.
A general argument, called Cochran’s theorem, exists, that can be used to prove independence of these two.
I will consider Cochran’s theorem in detail in the next post.
t-statistic distribution derivation from F-distribution
Let us derive the t-Student distribution from Fisher-Snedecor’s F.
We know that $T^2$ is Fisher-Snedecor F-distributed, so the cumulative distribution functions are related as: $F_F(x) = p(T^2 \leq x) = p(-\sqrt{x} \leq T \leq \sqrt{x}) = F_T(\sqrt{x}) - F_T(-\sqrt{x})$.
Hence, differentiating, we get: $f_F(x) = \frac{1}{2\sqrt{x}} \left( f_T(\sqrt{x}) + f_T(-\sqrt{x}) \right)$.
The probability density function of $T$ is symmetric, since the underlying distributions are symmetric: $f_T(\sqrt{x}) = f_T(-\sqrt{x})$.
Thus, $f_F(x) = \frac{f_T(\sqrt{x})}{\sqrt{x}}$ or, equivalently, $f_T(t) = |t| \cdot f_F(t^2)$.
Substituting the probability density function of the F distribution with 1 and $n-1$ degrees of freedom into the last formula, we obtain the t-Student probability density function with $n-1$ degrees of freedom:

$f_T(t) = \frac{\Gamma(\frac{n}{2})}{\sqrt{(n-1)\pi}\, \Gamma(\frac{n-1}{2})} \left( 1 + \frac{t^2}{n-1} \right)^{-\frac{n}{2}}$.
In this section I was following the logic of this post.
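The relation $f_T(t) = |t| \cdot f_F(t^2)$ and the resulting formula are easy to double-check numerically (a sketch of mine, using scipy's implementations of both distributions):

```python
# Check that |t| * f_F(t^2), with F(1, n-1), reproduces the pdf of
# Student's t with n-1 degrees of freedom.
import numpy as np
from scipy import stats

n = 8
t = np.linspace(0.1, 4.0, 50)   # avoid t = 0, where |t|*f_F(t^2) is a 0*inf limit
lhs = stats.t(df=n - 1).pdf(t)
rhs = t * stats.f(dfn=1, dfd=n - 1).pdf(t**2)
print(np.max(np.abs(lhs - rhs)))  # ≈ 0
```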
Student’s t-test
Student’s t-test is a family of statistical tests, based on application of t-distribution.
Paired data and unpaired data
Suppose that you have $n$ pairs of data from 2 measurements, e.g. the same person’s temperature without treatment and with treatment.
Our null-hypothesis is that the treatment doesn’t work. Denote $D_i$ the difference within the $i$-th pair, $\bar{D}$ the mean difference and $S_D$ the sample standard deviation of the differences. Then the random variable:

$T = \frac{\bar{D}}{S_D / \sqrt{n}}$

is supposed to be t-Student distributed with $n-1$ degrees of freedom. We will reject the null-hypothesis if the p-value for the obtained value of $T$ is sufficiently low.
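Here is how the paired test looks in practice, on a made-up dataset of mine (both the numbers and the scenario are hypothetical), computed by hand and with scipy's `ttest_rel`:

```python
# Paired t-test: T = D_bar / (S_D / sqrt(n)) on the within-pair differences.
import numpy as np
from scipy import stats

without = np.array([36.6, 37.2, 38.1, 36.9, 37.5, 38.0])  # hypothetical data
with_tr = np.array([36.5, 36.8, 37.3, 36.9, 37.0, 37.2])

d = with_tr - without
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
p_value = 2 * stats.t(df=len(d) - 1).sf(abs(t_stat))
print(t_stat, p_value)

print(stats.ttest_rel(with_tr, without))  # same test in one call
```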
TODO: unpaired data
Equal and unequal variance
TODO
Confidence intervals estimation
t-Student’s distribution can be helpful for estimation of confidence intervals (see wikipedia 1 and 2) for the estimate of the mean.
Suppose that we need to calculate the range of reasonably probable values of the mean $\mu$ of our normal distribution.
Pick a value $A$ that corresponds to the probability level of 90% or 95% of the t-Student distribution: $p(-A < T < A) = 0.9$ (or 0.95). Using this level $A$ we can calculate the confidence interval for $\mu$:

$-A \leq \frac{\bar{X} - \mu}{S / \sqrt{n}} \leq A$, thus, $\bar{X} - A \frac{S}{\sqrt{n}} \leq \mu \leq \bar{X} + A \frac{S}{\sqrt{n}}$.

So, our confidence interval for $\mu$ is $\left[ \bar{X} - A \frac{S}{\sqrt{n}},\ \bar{X} + A \frac{S}{\sqrt{n}} \right]$.
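In code, the whole calculation takes a few lines (a sketch of mine; the sample is made up and $A$ is taken at the 95% level):

```python
# t-based confidence interval for the mean of a normal sample.
import numpy as np
from scipy import stats

x = np.array([4.8, 5.1, 5.6, 4.9, 5.3])   # hypothetical sample
n = len(x)
A = stats.t(df=n - 1).ppf(0.975)           # p(-A < T < A) = 0.95
half_width = A * x.std(ddof=1) / np.sqrt(n)
print(x.mean() - half_width, x.mean() + half_width)
```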
Written by Boris Burkov who lives in Moscow, Russia, loves to take part in development of cutting-edge technologies, reflects on how the world works and admires the giants of the past. You can follow me on Telegram