Gauss–Markov Theorem / BLUE Properties

Given the assumptions of the classical linear regression model, the least-squares estimators have minimum variance within the class of linear unbiased estimators; that is, they are BLUE. In other words, the Gauss–Markov theorem states that the ordinary least squares (OLS) estimators are the Best Linear Unbiased Estimators.

The following properties and assumptions are used in the mathematical derivation of the Gauss–Markov theorem:

Properties of \(k_i\), where \(k_{i} = \frac{x_{i}}{\sum x_{i}^{2}}\) and \(x_{i} = X_{i} - \bar{X}\):

1. \(\sum_{}^{}{k_i}=\; 0\)

2. \(\sum_{}^{}{k_ix_i}=\; \sum_{}^{}{k_iX_i}\; =\; 1\)

3. \(\sum_{}^{}{k_i^{2}}=\; \frac{1}{\sum_{}^{}{x_i^{2}}}\)
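
As a quick numerical check of these three properties, the sketch below (using NumPy and a hypothetical fixed regressor \(X\); the particular values are illustrative only) computes the weights \(k_i\) and evaluates each sum:

```python
import numpy as np

# Hypothetical fixed (non-stochastic) regressor; any non-constant X works.
X = np.linspace(1.0, 10.0, 50)
x = X - X.mean()                  # deviations from the mean, x_i = X_i - X-bar
k = x / np.sum(x**2)              # OLS weights, k_i = x_i / sum(x_i^2)

print(np.sum(k))                              # property 1: sum k_i = 0 (up to rounding)
print(np.sum(k * x), np.sum(k * X))           # property 2: sum k_i x_i = sum k_i X_i = 1
print(np.sum(k**2), 1.0 / np.sum(x**2))       # property 3: sum k_i^2 = 1 / sum x_i^2
```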

Assumptions about the disturbance term \(u_i\):

1. \(E\left( u_i \right)\; =\; 0\)

2. \(Var\left( u_i \right)\; = E\left( u_i^{2} \right)\; =\; \sigma _{u}^{2}\; =\;Constant\)

3. \(u_i\; \sim \; N\left( 0, \;\sigma _{u}^{2} \right)\)

4. \(E\left( u_{i}u_{j} \right) = 0\) for \(i \neq j\) (no autocorrelation)

5. \(E\left( u_{i}X_{i} \right) = X_{i}E\left( u_{i} \right) = 0\) (the regressor \(X_{i}\) is non-stochastic)
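
These assumptions can be mimicked in a small simulation by holding \(X\) fixed and drawing i.i.d. normal errors with mean zero and constant variance; the later sketches reuse this kind of setup. A minimal example (the values of \(X\) and \(\sigma_u\) are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # fixed (non-stochastic) regressor
sigma_u = 1.0                             # constant error standard deviation

u = rng.normal(0.0, sigma_u, X.size)      # u_i ~ N(0, sigma_u^2), independent draws
print(u.mean())                           # sample analogue of E(u_i) = 0 (approx. 0)
print(u.var())                            # approx. sigma_u^2 (homoscedasticity)
print(np.corrcoef(u[:-1], u[1:])[0, 1])   # approx. 0: no serial correlation
print(np.corrcoef(X, u)[0, 1])            # approx. 0: errors unrelated to X
```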

Properties of BLUE (Best Linear Unbiased Estimators)

Let us take the regression line \(Y_{i} = \alpha + \beta X_{i} + u_{i}\).

1. Linearity, i.e., \(\hat{\beta}\) is a linear function of the observations \(Y_{i}\),

As we know,

\(\hat{\beta} = \frac{\sum x_{i}y_{i}}{\sum x_{i}^{2}}\),

\(= \frac{\sum \left( X_{i} - \bar{X} \right)\left( Y_{i} - \bar{Y} \right)}{\sum x_{i}^{2}}\),

\(= \frac{\sum \left( X_{i} - \bar{X} \right)Y_{i} - \bar{Y}\sum \left( X_{i} - \bar{X} \right)}{\sum x_{i}^{2}}\),     and \(\sum \left( X_{i} - \bar{X} \right) = 0\),

\(= \frac{\sum x_{i}Y_{i}}{\sum x_{i}^{2}}\),

\(= \sum k_{i}Y_{i}\),         because     \([\frac{x_{i}}{\sum x_{i}^{2}} = k_{i}]\). So, \(\hat\beta\) is linear in the \(Y_{i}\).
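
The sketch below checks numerically that the ratio form and the linear form \(\sum k_i Y_i\) give the same \(\hat\beta\) on a single simulated sample (the data-generating values \(\alpha = 2\), \(\beta = 0.5\), \(\sigma_u = 1\) and the \(X\) grid are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)                      # hypothetical fixed regressor
Y = 2.0 + 0.5 * X + rng.normal(0.0, 1.0, X.size)    # one sample from the model

x, y = X - X.mean(), Y - Y.mean()
beta_ratio = np.sum(x * y) / np.sum(x**2)           # beta-hat = sum(x_i y_i) / sum(x_i^2)
k = x / np.sum(x**2)
beta_linear = np.sum(k * Y)                         # beta-hat = sum(k_i Y_i), linear in Y_i

print(beta_ratio, beta_linear)                      # identical up to rounding
```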

2. Unbiasedness, i.e., \(\hat\beta\) is unbiased,

From Linearity,

\(\hat{\beta}=\; \sum_{}^{}{k_{i}Y_{i}}\),

\(=\; \sum_{}^{}{k_{i}\; \left(\alpha \; +\; \beta X_{i}\; +\; u_{i} \right)}\),

\(=\; \alpha \sum_{}^{}{k_{i}}+\; \beta \sum_{}^{}{k_{i}X_{i}\; +\; \sum_{}^{}{k_{i}u_{i}}}\),

\(= \beta \sum k_{i}X_{i} + \sum k_{i}u_{i}\),     because \([\sum k_{i} = 0]\),

\(= \beta + \sum k_{i}u_{i}\),     because \([\sum k_{i}X_{i} = 1]\),

Taking expectations both sides,

\(E\left( \hat{\beta} \right) = \beta\),         because        \([E\left( u_{i} \right) = 0]\). Thus \(\hat\beta\) is unbiased.
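
The identity behind this step, \(\hat\beta = \beta + \sum k_i u_i\), holds exactly for every sample, not only in expectation; a minimal check on one simulated data set (the true parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
alpha, beta, sigma = 2.0, 0.5, 1.0        # hypothetical true parameters
u = rng.normal(0.0, sigma, X.size)
Y = alpha + beta * X + u

x = X - X.mean()
k = x / np.sum(x**2)
beta_hat = np.sum(k * Y)

# beta-hat = beta + sum(k_i u_i) exactly, so E(beta-hat) = beta once E(u_i) = 0.
print(beta_hat, beta + np.sum(k * u))
```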

3. Best, i.e., \(\hat\beta\) is best, or \(\hat\beta\) has minimum variance,

As we know,

\(Var\left(\hat \beta \right)\; =\; E\left\{ \hat \beta \; -\beta \right\}^{2}\),

\(= E\left\{ \beta + \sum k_{i}u_{i} - \beta \right\}^{2}\),

\(=\;E\; \left\{ \sum_{}^{}{k_{i}u_{i}} \right\}^{2}\),

\(= E\left\{ k_{1}^{2}u_{1}^{2} + k_{2}^{2}u_{2}^{2} + k_{3}^{2}u_{3}^{2} + \ldots \right\} + 2E\left\{ k_{1}k_{2}u_{1}u_{2} + k_{2}k_{3}u_{2}u_{3} + \ldots \right\}\),

\(= E\left\{ \sum k_{i}^{2}u_{i}^{2} \right\} + 2E\left\{ \sum_{i<j} k_{i}k_{j}u_{i}u_{j} \right\}\),

\(= E\left\{ \sum k_{i}^{2}u_{i}^{2} \right\}\),     because     \([E\left( u_{i}u_{j} \right) = 0 \text{ for } i \neq j]\),

\(= \sum k_{i}^{2}\, E\left( u_{i}^{2} \right)\),     because the \(k_{i}\) are non-stochastic,

\(= \sigma _{u}^{2}\sum k_{i}^{2} = \frac{\sigma _{u}^{2}}{\sum x_{i}^{2}}\),     because     \([\sum k_{i}^{2} = \frac{1}{\sum x_{i}^{2}}]\),
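
A Monte Carlo sketch, under the same hypothetical setup, showing that the sampling variance of \(\hat\beta\) over repeated samples is close to \(\sigma_u^2 \sum k_i^2 = \sigma_u^2 / \sum x_i^2\):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
alpha, beta, sigma = 2.0, 0.5, 1.0        # hypothetical true parameters
x = X - X.mean()
k = x / np.sum(x**2)

# Draw many samples, re-estimate beta each time, and compare variances.
betas = [np.sum(k * (alpha + beta * X + rng.normal(0.0, sigma, X.size)))
         for _ in range(20_000)]
print(np.var(betas))                      # simulated Var(beta-hat)
print(sigma**2 * np.sum(k**2))            # sigma_u^2 * sum(k_i^2)
print(sigma**2 / np.sum(x**2))            # = sigma_u^2 / sum(x_i^2)
```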

Now suppose there is another linear unbiased estimator, \(\beta^\ast = \sum w_{i}Y_{i}\). By the same steps as above, its variance is

\(Var\left( \beta^\ast \right) = \sigma _{u}^{2}\sum w_{i}^{2}\),

Where \(k_{i}\; \neq \; w_{i}\),

Let \(w_{i} = k_{i} + c_{i}\). For \(\beta^\ast\) to remain unbiased we need \(\sum w_{i} = 0\) and \(\sum w_{i}X_{i} = 1\), which implies \(\sum c_{i} = 0\) and \(\sum c_{i}X_{i} = 0\).

So,

\(Var\left( \beta^\ast \right) = \sum \left( k_{i} + c_{i} \right)^{2} \sigma _{u}^{2}\),

\(= \sigma _{u}^{2}\sum \left\{ k_{i}^{2} + c_{i}^{2} + 2k_{i}c_{i} \right\}\),

\(= \sigma _{u}^{2}\sum k_{i}^{2} + \sigma _{u}^{2}\sum c_{i}^{2}\),     as \([\sum k_{i}c_{i} = \frac{\sum c_{i}x_{i}}{\sum x_{i}^{2}} = 0]\),

\(= Var\left( \hat{\beta} \right) + \sigma _{u}^{2}\sum c_{i}^{2}\),

So, \(Var\left( \hat\beta \right) < Var\left( \beta^\ast \right)\), since \(\sum c_{i}^{2} > 0\) whenever \(w_{i} \neq k_{i}\). Hence \(\hat\beta\) is the best (minimum-variance) linear unbiased estimator.
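
To illustrate the comparison, one can build alternative weights \(w_i = k_i + c_i\) with \(\sum c_i = 0\) and \(\sum c_i X_i = 0\) (so that \(\beta^\ast = \sum w_i Y_i\) stays linear and unbiased) and evaluate both variance formulas; the choice of \(c_i\) below is arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
sigma = 1.0                               # hypothetical error standard deviation
x = X - X.mean()
k = x / np.sum(x**2)

# Any c orthogonal to both a constant and X keeps beta* unbiased:
# take an arbitrary vector and remove its projection on (1, X).
z = rng.normal(size=X.size)
A = np.column_stack([np.ones_like(X), X])
c = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
w = k + c

print(sigma**2 * np.sum(k**2))            # Var(beta-hat) = sigma_u^2 * sum(k_i^2)
print(sigma**2 * np.sum(w**2))            # Var(beta*)    = sigma_u^2 * sum(w_i^2), larger
print(np.sum(k * c))                      # approx. 0: the cross term that drops out
```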

Now for \(\hat\alpha\):

1. Linearity, i.e., \(\hat\alpha\) is linear,

As we know,

\(\hat\alpha \; =\; \bar{Y}\; -\; \hat\beta \bar{X}\),

\(=\; \frac{\sum_{}^{}{Y_{i}}}{n}-\hat\beta \bar{X}\),

\(=\; \frac{\sum_{}^{}{Y_{i}}}{n}-\; \sum_{}^{}{k_{i}Y_{i}\bar{X}}\),

\(=\; \sum_{}^{}{\left\{ \frac{1}{n}\; -\; k_{i}\bar{X} \right\}} Y_i\),

So, \(\hat\alpha\) is linear.
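
A quick check that the linear form \(\sum \left( \frac{1}{n} - k_i \bar{X} \right) Y_i\) reproduces \(\bar{Y} - \hat\beta \bar{X}\) on a simulated sample (parameter values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)                      # hypothetical fixed regressor
Y = 2.0 + 0.5 * X + rng.normal(0.0, 1.0, X.size)    # one sample from the model

x = X - X.mean()
k = x / np.sum(x**2)
beta_hat = np.sum(k * Y)
alpha_hat = Y.mean() - beta_hat * X.mean()          # alpha-hat = Y-bar - beta-hat * X-bar

# The same number written as a linear combination of the Y_i.
alpha_linear = np.sum((1.0 / X.size - k * X.mean()) * Y)
print(alpha_hat, alpha_linear)
```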

2. Unbiasedness, i.e., \(\hat\alpha\) is unbiased,

From linearity,

\(\hat\alpha=\; \sum_{}^{}{\left\{ \frac{1}{n}\; -\; k_{i}\bar{X} \right\}} Y_i\),

\(=\; \sum_{}^{}{\left\{ \frac{1}{n}\; -\; k_{i}\; \bar{X} \right\}\; \left( \alpha \; +\; \beta X_{i}\; +\; u_{i} \right)}\),

\(=\; \sum_{}^{}{\left\{ \frac{\alpha }{n}\; -\; \alpha k_{i}\bar{X}\; +\; \frac{\beta X_{i}}{n}\; -\; \beta X_{i}k_{i}\bar{X}\; +\; \frac{u_{i}}{n}\; -\; u_{i}k_{i}\bar{X} \right\}}\),

\(= \frac{n\alpha }{n} - \alpha \bar{X}\sum k_{i} + \beta \sum \frac{X_{i}}{n} - \beta \bar{X}\sum k_{i}X_{i} + \frac{\sum u_{i}}{n} - \bar{X}\sum k_{i}u_{i}\),

\(= \alpha + \beta \bar{X} - \beta \bar{X} + \frac{\sum u_{i}}{n} - \bar{X}\sum k_{i}u_{i}\),     because \([\sum k_{i} = 0,\ \sum k_{i}X_{i} = 1]\),

Now taking expectations both sides,

\(E\left( \hat\alpha \right) = \alpha + \frac{1}{n}\sum E\left( u_{i} \right) - \bar{X}\sum k_{i}E\left( u_{i} \right)\),

\(E\left( \hat\alpha \right) = \alpha\),     because \([E\left( u_{i} \right) = 0]\). Thus \(\hat\alpha\) is unbiased.
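
As with \(\hat\beta\), the identity \(\hat\alpha = \alpha + \frac{\sum u_i}{n} - \bar{X} \sum k_i u_i\) holds exactly sample by sample, so averaging \(\hat\alpha\) over many samples recovers \(\alpha\); a minimal sketch under the same hypothetical setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
alpha, beta, sigma = 2.0, 0.5, 1.0        # hypothetical true parameters
x = X - X.mean()
k = x / np.sum(x**2)

def alpha_hat(u):
    Y = alpha + beta * X + u
    return Y.mean() - np.sum(k * Y) * X.mean()

u = rng.normal(0.0, sigma, X.size)
# Exact identity for a single sample: alpha-hat = alpha + mean(u) - X-bar * sum(k_i u_i).
print(alpha_hat(u), alpha + u.mean() - X.mean() * np.sum(k * u))

# Averaging over many samples gives approximately alpha = 2.
print(np.mean([alpha_hat(rng.normal(0.0, sigma, X.size)) for _ in range(20_000)]))
```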

3. Best, i.e., \(\hat\alpha\) is best, or \(\hat\alpha\) has minimum variance,

From the unbiasedness step,

\(\hat\alpha =\; \alpha \; +\;\; \frac{\sum{u_i}}{n} -\; \bar{X}\; \sum_{}^{}{k_{i}u_{i}}\),

We know,

\(Var\left( \hat\alpha \right) = E\left\{ \hat\alpha - \alpha \right\}^{2}\),

\(=\; E\left\{ \alpha +\frac{\sum_{}^{}{u_{i}}}{n} -\; \bar{X}\; \sum_{}^{}{k_{i}u_{i}}-\alpha \right\}^2\),

\(= E\left\{ \sum \left( \frac{1}{n} - k_{i}\bar{X} \right)u_{i} \right\}^{2}\),

\(= E\left\{ \sum \left( \frac{1}{n} - k_{i}\bar{X} \right)^{2}u_{i}^{2} \right\}\),     because \([E\left( u_{i}u_{j} \right) = 0 \text{ for } i \neq j]\),

\(= \sigma _{u}^{2}\left\{ \sum \left( \frac{1}{n^{2}} + k_{i}^{2}\bar{X}^{2} - \frac{2}{n}k_{i}\bar{X} \right) \right\}\),

\(= \sigma _{u}^{2}\left\{ \frac{n}{n^{2}} + \bar{X}^{2}\sum k_{i}^{2} - \frac{2}{n}\bar{X}\sum k_{i} \right\}\),

\(= \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum k_{i}^{2} \right\} = \sigma _{u}^{2}\left\{ \frac{1}{n} + \frac{\bar{X}^{2}}{\sum x_{i}^{2}} \right\}\),     as \([\sum k_{i} = 0]\),
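
A Monte Carlo sketch comparing the simulated sampling variance of \(\hat\alpha\) with \(\sigma_u^2 \left\{ \frac{1}{n} + \bar{X}^2 \sum k_i^2 \right\}\), again under hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
alpha, beta, sigma = 2.0, 0.5, 1.0        # hypothetical true parameters
n = X.size
x = X - X.mean()
k = x / np.sum(x**2)

a_hats = []
for _ in range(20_000):
    Y = alpha + beta * X + rng.normal(0.0, sigma, n)
    a_hats.append(Y.mean() - np.sum(k * Y) * X.mean())

print(np.var(a_hats))                                     # simulated Var(alpha-hat)
print(sigma**2 * (1.0 / n + X.mean()**2 * np.sum(k**2)))  # sigma_u^2 (1/n + X-bar^2 sum k_i^2)
```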

Let us take another linear unbiased estimator \(\alpha^\ast = \sum \left( \frac{1}{n} - w_{i}\bar{X} \right)Y_{i}\); by the same steps,

\(Var\left( \alpha^\ast \right) = \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum w_{i}^{2} \right\}\),

Where \(k_{i}\; \neq \; w_{i}\) and let \(w_i=k_i\;+\;c_i\),

Now,

\(Var\left( \alpha^\ast \right) = \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum \left( k_{i} + c_{i} \right)^{2} \right\}\),

\(= \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum \left( k_{i}^{2} + c_{i}^{2} + 2k_{i}c_{i} \right) \right\}\),

\(= \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum k_{i}^{2} + \bar{X}^{2}\sum c_{i}^{2} \right\}\),           as \([\sum k_{i}c_{i} = 0]\),

\(= \sigma _{u}^{2}\frac{1}{n} + \sigma _{u}^{2}\bar{X}^{2}\sum k_{i}^{2} + \sigma _{u}^{2}\bar{X}^{2}\sum c_{i}^{2}\),

\(= \sigma _{u}^{2}\left\{ \frac{1}{n} + \bar{X}^{2}\sum k_{i}^{2} \right\} + \sigma _{u}^{2}\bar{X}^{2}\sum c_{i}^{2} = Var\left( \hat\alpha \right) + \sigma _{u}^{2}\bar{X}^{2}\sum c_{i}^{2}\),

So, \(Var\left( \hat\alpha \right) < Var\left( \alpha^\ast \right)\), since \(\sum c_{i}^{2} > 0\) whenever \(w_{i} \neq k_{i}\). Hence \(\hat\alpha\) is also the best (minimum-variance) linear unbiased estimator.
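
As for the slope, the comparison can be illustrated by constructing alternative unbiased weights \(w_i = k_i + c_i\) and evaluating both variance formulas; the \(c_i\) below are an arbitrary, purely illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(1.0, 10.0, 50)            # hypothetical fixed regressor
sigma = 1.0                               # hypothetical error standard deviation
n = X.size
x = X - X.mean()
k = x / np.sum(x**2)

# Alternative weights w_i = k_i + c_i with sum(c_i) = 0 and sum(c_i X_i) = 0,
# so alpha* = sum((1/n - w_i X-bar) Y_i) remains linear and unbiased.
z = rng.normal(size=n)
A = np.column_stack([np.ones(n), X])
c = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
w = k + c

print(sigma**2 * (1.0 / n + X.mean()**2 * np.sum(k**2)))  # Var(alpha-hat)
print(sigma**2 * (1.0 / n + X.mean()**2 * np.sum(w**2)))  # Var(alpha*), strictly larger
```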
