Pearson's Chi-square test - intuition and derivation
June 17, 2021 · 23 min read
Here I discuss how an average mathematically inclined person like myself could stumble upon Karl Pearson's chi-squared test (it doesn't seem intuitive at all at first glance). I demonstrate the intuition behind it and then prove its applicability to the multinomial distribution.
In one of my previous posts I derived the chi-square distribution for the sum of squares of Gaussian random variables and showed that it is a special case of the Gamma distribution and very similar to the Erlang distribution. You can look it up for reference.
A motivational example
Suppose you've taken 5 samples of a random variable that you assumed to be standard normal (Gaussian) and received suspicious results: you have a feeling that your points have landed too far away from the peak of the distribution.
A sample of 5 points from the standard normal distribution. In this example the sampled points landed very far from the distribution's mode, which is very, very unlikely.
Given the probability density function (abbreviated pdf) of a Gaussian distribution and the knowledge that the standard deviation is 1, you would expect your samples to cluster much closer to the mode, something like this:
Another sample of 5 points from the standard normal distribution. This result is much more typical.
You could ask yourself a question: what is the average value of the probability density function that I would observe when sampling a point from the standard normal distribution?
Well, you can actually calculate it, following the expectation formula:

$$\mathbb{E}_\xi[\mathrm{pdf}] = \int_{-\infty}^{+\infty} \underbrace{\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}}_{\text{this is the pdf, i.e. the random variable you're averaging}} \cdot \underbrace{\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}}_{\text{this is the pdf of } \xi \text{, over which you're averaging}} \, dx = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-x^2} \, dx.$$
So, the probability density function at an average sampled point is expected to be $\frac{1}{2\pi}\sqrt{\pi} = \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2\pi}} = 0.707106 \cdot 0.398942 = 0.282095$.
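As a quick sanity check (my own sketch, not part of the original derivation), the same number can be obtained numerically with scipy:

```python
# Numerical check of the expectation above: E[pdf] = integral of f(x)^2 dx for f = N(0,1) pdf
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

expected_pdf, _ = quad(lambda x: norm.pdf(x) ** 2, -np.inf, np.inf)
print(expected_pdf)              # ~0.282095
print(1 / (2 * np.sqrt(np.pi)))  # the closed form 1/(2*sqrt(pi)), same value
```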
Let us calculate the pdf at several points:
if $x=0$ (which is the most probable point), $f_\xi(0) = \frac{1}{\sqrt{2\pi}} e^{-0/2} = 0.398942$, a bit more probable than average;
if $x=1$ (one standard deviation), $f_\xi(1) = \frac{1}{\sqrt{2\pi}} e^{-1/2} = 0.241970$, a bit less probable than average;
if $x=2$ (two standard deviations), $f_\xi(2) = \frac{1}{\sqrt{2\pi}} e^{-4/2} = 0.053989$, not much;
if $x=3$ (three standard deviations), $f_\xi(3) = \frac{1}{\sqrt{2\pi}} e^{-9/2} = 0.004432$, very small.
With these numbers, let's return to the 5 points I've observed. Out of those five points, two are at 3 standard deviations, two are at 2 standard deviations and one is at 1 standard deviation. So the probability density of such an observation is $f_\xi^2(3) \cdot f_\xi^2(2) \cdot f_\xi(1) = 0.004432^2 \cdot 0.053989^2 \cdot 0.241970 = 1.38 \cdot 10^{-8}$.
At the same time, the expected pdf of observing five average points is $0.282095^5 = 0.0017863$.
It seems like our observation was a really improbable one: it is over 100 000 times less probable than the average.
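These numbers are easy to reproduce; here is a small sketch (assuming the five points sit exactly at 3, 3, 2, 2 and 1 standard deviations):

```python
# Reproducing the comparison above with scipy's normal pdf
from scipy.stats import norm

points = [3, 3, 2, 2, 1]                    # the five suspicious observations
observed_pdf = norm.pdf(points).prod()      # ~1.38e-08
average_pdf = 0.282095 ** 5                 # ~0.0017863
print(observed_pdf, average_pdf, average_pdf / observed_pdf)  # ratio > 100 000
```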
Intuition behind Pearson’s chi-square test
We need a more rigorous tool than simply comparing the probability of our observed five points with the average.
Let us call our vector of observations $X = (x_1, x_2, x_3, x_4, x_5)^T$. The combined pdf of observing exactly our 5 points is the 5-dimensional standard normal density $\frac{1}{\sqrt{(2\pi)^5}} e^{-\frac{x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2}{2}}$.
But note that we don't want exactly our point; any "equiprobable" point will do. For instance, $e^{-\frac{1+1+1+1+1}{2}} = e^{-\frac{0+2+1+1+1}{2}}$: the 5-dimensional points $(1,1,1,1,1)^T$ and $(0,\sqrt{2},1,1,1)^T$ are "equiprobable", and we want to group them into one.
So we are actually interested in the distribution of the sum $x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2$, since for identical values of this sum the pdfs of the observation vector $X = (x_1, x_2, x_3, x_4, x_5)^T$ are identical.
Each $x_i \sim \mathcal{N}(0,1)$ is a standard Gaussian random variable, so the sum in question is a chi-square-distributed random variable: $\sum_{i=1}^{N} x_i^2 \sim \chi^2_N$. Please refer to my older post on the Gamma/Erlang/Chi-square distributions for the proof.
That's why the chi-square distribution is the right tool to answer the question we're interested in: is it likely that the observed set of points was sampled from a standard normal distribution?
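In code this boils down to a single survival-function call (a sketch of my own, applied to the same 5 suspicious points):

```python
# Chi-square check of the suspicious sample: sum of squares vs chi^2 with 5 d.o.f.
from scipy.stats import chi2

sample = [3, 3, 2, 2, 1]                  # the same 5 suspicious points
stat = sum(x ** 2 for x in sample)        # 9 + 9 + 4 + 4 + 1 = 27
p_value = chi2.sf(stat, df=5)             # P(chi^2_5 >= 27), a tiny number
print(stat, p_value)
```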
Derivation of Pearson’s goodness of fit test statistic
The chi-square test is widely used to validate the hypothesis that a number of samples were taken from a multinomial distribution.
Suppose you've rolled a $k=6$-sided dice $n=120$ times, and you expect it to be fair. You would expect $E_i = 20$ occurrences of each face $i \in \{1,2,3,4,5,6\}$ (row E, expected); instead you see somewhat different outcomes $O_i$ (row O, observed):
     1    2    3    4    5    6
E   20   20   20   20   20   20
O   15   14   22   21   25   23
We need to estimate the likelihood of an outcome like this if the dice were fair. It turns out that a certain statistic based on these data follows the chi-square distribution: $\chi^2_{k-1} = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$. I'll prove this fact here by induction on the number of dice sides $k$, loosely following some constructs from the de Moivre-Laplace theorem (a special case of the Central Limit Theorem, proven before the adoption of much more convenient Fourier-analysis techniques).
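For the dice table above, the arithmetic looks like this (a sketch; scipy also ships a ready-made chisquare function that gives the same result):

```python
# Pearson's goodness-of-fit statistic for the dice table above
from scipy.stats import chi2, chisquare

observed = [15, 14, 22, 21, 25, 23]
expected = [20] * 6

stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))  # = 5.0
p_value = chi2.sf(stat, df=len(observed) - 1)                     # k - 1 = 5 d.o.f.
print(stat, p_value)                  # p ~ 0.42, so this dice looks perfectly fair

print(chisquare(observed, expected))  # scipy's built-in test, same numbers
```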
Induction step: (k+1)-sided dice from k-sided dice
In order to perform the induction step, we need to show that if we had a k-dice described by a k-nomial distribution and could apply the $\chi^2_{k-1}$ test to it, there is a way to construct a dice with $k+1$ sides from it, so that it can be described by a $(k+1)$-nomial distribution and validated by a $\chi^2_k$ test.
Here's an example to illustrate the process of creating an additional side of our dice.
Imagine that we had a continuous random variable representing the distribution of human heights.
We transformed this real-valued random variable into a discrete one by assigning every person to one of two bins, "height < 170cm" and "height >= 170cm", with probabilities $p_1$ and $p_2$, so that $p_1 + p_2 = 1$.
The chi-square test works for our data according to the induction base. We can view sampling a value from this random variable as a 2-dice roll (or a coin flip).
Now we decided to split the second bin (“height >= 170cm”) into two separate bins: “170cm <= height < 180cm” and “height >= 180cm”.
So, what used to be one side of our 2-dice has become 2 sides, and now we have a 3-dice. Let's show that the chi-square test will just gain another degree of freedom (it's going to be $\chi^2_2$ instead of $\chi^2_1$ now), but will still work for our new 3-dice.
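Before diving into the algebra, here is a small Monte Carlo sketch of that claim (the bin probabilities below are arbitrary values I picked for illustration):

```python
# Monte Carlo sketch: Pearson's statistic for a 3-bin ("3-dice") sample
# is approximately chi-square with 2 degrees of freedom
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, p = 1000, np.array([0.5, 0.3, 0.2])       # p1, p2, p3 of the three bins (illustrative)
trials = 100_000

counts = rng.multinomial(n, p, size=trials)             # O1, O2, O3 for each trial
stats = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)   # Pearson's statistic per trial

# Empirical quantiles should roughly match the chi^2_2 quantiles
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(stats, q), chi2.ppf(q, df=2))
```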
Statement 1. Pearson's test statistic is a sum of a chi-square-distributed first term and a simple second term
Let's write down the formula of Pearson's test for a 3-nomial distribution and split it into a binomial part and an additional term:
$$\sum_{i=1}^{3} \frac{(O_i - np_i)^2}{np_i} = \frac{(O_1 - np_1)^2}{np_1} + \frac{(O_2 - np_2)^2}{np_2} + \frac{(O_3 - np_3)^2}{np_3} = \underbrace{\frac{(O_1 - np_1)^2}{np_1} + \frac{(O_2 + O_3 - n(p_2 + p_3))^2}{n(p_2 + p_3)}}_{\sum_{j=1}^{2} \frac{(O'_j - np_j)^2}{np_j} \sim \chi^2_1 \text{ for a sum of } k=2 \text{ terms by the induction base}} + \underbrace{\left(-\frac{(O_2 + O_3 - n(p_2 + p_3))^2}{n(p_2 + p_3)} + \frac{(O_2 - np_2)^2}{np_2} + \frac{(O_3 - np_3)^2}{np_3}\right)}_{\text{this part should also be } \sim \chi^2_1 \text{, let's prove this}}$$
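The regrouping above is a purely algebraic identity; here is a quick symbolic check (a sketch of my own using sympy) that the bracketed leftover really equals the $\xi^2$ introduced in Statement 2 below:

```python
# Symbolic check: the leftover bracket equals (O2*p3 - O3*p2)^2 / (n*p2*p3*(p2 + p3))
import sympy as sp

n, p2, p3, O2, O3 = sp.symbols('n p2 p3 O2 O3', positive=True)

leftover = (-(O2 + O3 - n * (p2 + p3)) ** 2 / (n * (p2 + p3))
            + (O2 - n * p2) ** 2 / (n * p2)
            + (O3 - n * p3) ** 2 / (n * p3))
xi_squared = (O2 * p3 - O3 * p2) ** 2 / (n * p2 * p3 * (p2 + p3))

print(sp.simplify(leftover - xi_squared))  # should print 0
```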
Statement 2. The second term is the square of a standard normal random variable
The random variable that we've obtained has a $\chi^2_1$ distribution because it is the square of $\xi = \frac{O_2 p_3 - O_3 p_2}{\sqrt{n p_2 p_3 (p_2 + p_3)}}$, which is a standard normal random variable.
Let's show this fact: indeed, $O_2$ and $O_3$ are Gaussian random variables (by de Moivre-Laplace/C.L.T.) with expectations $\mathbb{E}[O_2] = np_2$ and $\mathbb{E}[O_3] = np_3$ and variances $\mathrm{Var}[O_2] = np_2(1 - p_2)$ and $\mathrm{Var}[O_3] = np_3(1 - p_3)$ respectively; moreover, their covariance is $\mathrm{Cov}[O_2, O_3] = -np_2p_3$, as for any pair of multinomial counts.
A sum of two jointly Gaussian random variables is Gaussian, with expectation equal to the sum of expectations and variance equal to the sum of variances plus twice the covariance: $\mathrm{Var}[X + Y] = \sigma_X^2 + \sigma_Y^2 + 2\rho\sigma_X\sigma_Y$. This fact can be proved using either convolutions or the Fourier transform (traditionally known as characteristic functions in probability theory).
Applying this to the numerator of $\xi$: $\mathbb{E}[O_2p_3 - O_3p_2] = np_2p_3 - np_3p_2 = 0$, and $\mathrm{Var}[O_2p_3 - O_3p_2] = p_3^2\mathrm{Var}[O_2] + p_2^2\mathrm{Var}[O_3] - 2p_2p_3\mathrm{Cov}[O_2, O_3] = np_2p_3\big(p_3(1-p_2) + p_2(1-p_3) + 2p_2p_3\big) = np_2p_3(p_2 + p_3)$. Thus, after dividing the numerator by $\sqrt{np_2p_3(p_2 + p_3)}$, we've shown that our normal random variable $\xi$ has zero expectation and unit variance: $\xi \sim \mathcal{N}(0,1)$. Hence, $\xi^2 \sim \chi^2_1$.
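A quick Monte Carlo sketch of this statement (with the same arbitrary illustrative probabilities as before):

```python
# Monte Carlo sketch: xi = (O2*p3 - O3*p2) / sqrt(n*p2*p3*(p2 + p3)) is ~ N(0, 1)
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, np.array([0.5, 0.3, 0.2])      # p1, p2, p3 (illustrative)
trials = 100_000

O = rng.multinomial(n, p, size=trials)
xi = (O[:, 1] * p[2] - O[:, 2] * p[1]) / np.sqrt(n * p[1] * p[2] * (p[1] + p[2]))

print(xi.mean(), xi.std())   # should be close to 0 and 1
```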
Statement 3. Independence of the second term and the first term
Finally, we need to show the independence of the second term $\xi^2 \sim \chi^2_1$ and the first term $\sum_{j=1}^{k} \frac{(O'_j - np_j)^2}{np_j} \sim \chi^2_{k-1}$, so that their sum is distributed as $\chi^2_k$.
Let me remind you the notation: here $O'_j = O_j$ for $j = 1..k-1$ and $O'_j = O_j + O_{j+1}$ for $j = k$, so that the final component of the sum in the first term is $\frac{(O_k + O_{k+1} - n(p_k + p_{k+1}))^2}{n(p_k + p_{k+1})}$.
For every $j < k$ (in the case of the first induction step, $k=2$, there's only one such index, $j=1$) consider $O_j$ and the numerator $(O_2p_3 - O_3p_2)$ of our $\xi$ (written here for the $k=2$ case; for general $k$ read $O_k$, $O_{k+1}$ in place of $O_2$, $O_3$). Their covariance vanishes: $\mathrm{Cov}[O_j, O_2p_3 - O_3p_2] = p_3\mathrm{Cov}[O_j, O_2] - p_2\mathrm{Cov}[O_j, O_3] = p_3(-np_jp_2) - p_2(-np_jp_3) = 0$, and since the counts are asymptotically jointly Gaussian, zero covariance implies independence,
so that the first $k-1$ components $\frac{(O_j - np_j)^2}{np_j}$ of the first term are independent of our $\xi^2 = \frac{(O_2p_3 - O_3p_2)^2}{np_2p_3(p_2 + p_3)}$.
What's left is to show the independence of the final component $\frac{(O_k + O_{k+1} - n(p_k + p_{k+1}))^2}{n(p_k + p_{k+1})}$ of the first term from our $\xi^2 = \frac{(O_2p_3 - O_3p_2)^2}{np_2p_3(p_2 + p_3)}$ (or, equivalently, from its numerator). Again the covariance is zero: $\mathrm{Cov}[O_2 + O_3, O_2p_3 - O_3p_2] = p_3\mathrm{Var}[O_2] - p_2\mathrm{Cov}[O_2, O_3] + p_3\mathrm{Cov}[O_3, O_2] - p_2\mathrm{Var}[O_3] = np_2p_3\big((1-p_2) + p_2 - p_3 - (1-p_3)\big) = 0$, so by the joint asymptotic normality of the counts the two terms are independent, and the whole statistic is distributed as $\chi^2_k$. This completes the induction step.
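And a numeric sketch of the two covariance claims of this statement (for the 3-bin case, with the same illustrative probabilities):

```python
# Monte Carlo sketch: the numerator of xi is uncorrelated with O1 (the untouched bin)
# and with O2 + O3 (the merged bin); with asymptotic joint normality this gives independence
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, np.array([0.5, 0.3, 0.2])      # p1, p2, p3 (illustrative)
trials = 200_000

O = rng.multinomial(n, p, size=trials)
numerator = O[:, 1] * p[2] - O[:, 2] * p[1]     # O2*p3 - O3*p2

print(np.corrcoef(O[:, 0], numerator)[0, 1])             # ~0
print(np.corrcoef(O[:, 1] + O[:, 2], numerator)[0, 1])   # ~0
```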
Many thanks to Ming Zhang, Justin Cen and RoyalDiscusser for finding errors in this post and suggesting improvements.
Written by Boris Burkov who lives in Moscow, Russia, loves to take part in development of cutting-edge technologies, reflects on how the world works and admires the giants of the past. You can follow me in Telegram