lecture 5 (1).pptx economic modeling regression

halanajeeb181 · 13 views · 17 slides · Oct 12, 2024

About This Presentation

Economic modeling: properties of the OLS estimators.


Slide Content

Properties of the OLS Estimators (Lecture 5)

OLS Properties. The primary property of the OLS estimators is that they minimize the sum of squared residuals. They have several other properties as well. These properties do not depend on any assumptions; they hold whenever the estimates are computed in the manner just shown. Recall the normal equations, which give β̂ = (X′X)⁻¹X′y. This follows from writing y = Xβ̂ + e.
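The normal equations can be sketched numerically. This is a minimal illustration on synthetic data; the seed, sample size, and coefficient values are assumptions made for the example:

```python
import numpy as np

# Compute beta_hat = (X'X)^{-1} X'y on synthetic data.
rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # constant + 2 regressors
beta_true = np.array([1.0, 2.0, -0.5])                          # illustrative coefficients
y = X @ beta_true + rng.normal(size=n)

# Solve (X'X) beta_hat = X'y; solving the linear system is preferable
# to explicitly forming the inverse.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's built-in least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```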

1. The observed values of X are uncorrelated with the residuals.
X′e = 0 implies that for every column xₖ of X, xₖ′e = 0. In other words, each regressor has zero sample correlation with the residuals. Note that this does not mean that X is uncorrelated with the disturbances; that will have to be assumed. From β̂ = (X′X)⁻¹X′y we get (X′X)β̂ = X′y. Now substitute in y = Xβ̂ + e to get (X′X)β̂ = X′(Xβ̂ + e), so (X′X)β̂ = (X′X)β̂ + X′e, which implies X′e = 0.

What does X′e look like?
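Written out (restating the orthogonality condition above), X′e stacks one inner product per regressor, where X_ji denotes the i-th observation of the j-th regressor:

```latex
X'e \;=\;
\begin{pmatrix}
\sum_{i=1}^{n} X_{1i}\, e_i \\[2pt]
\sum_{i=1}^{n} X_{2i}\, e_i \\
\vdots \\
\sum_{i=1}^{n} X_{ki}\, e_i
\end{pmatrix}
\;=\;
\begin{pmatrix}
0 \\ 0 \\ \vdots \\ 0
\end{pmatrix}
```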

2. The sum of the residuals is zero. If there is a constant, the first column of X (i.e. X₁) will be a column of ones. This means that for the first element of the X′e vector (i.e. X₁₁ × e₁ + X₁₂ × e₂ + . . . + X₁ₙ × eₙ) to be zero, it must be the case that ∑ eᵢ = 0.

3. The sample mean of the residuals is zero. This follows directly from the previous property: ē = (1/n) ∑ eᵢ = 0.
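Properties 1 through 3 can be checked numerically. The data below are illustrative assumptions; the properties hold for any y once X contains a constant:

```python
import numpy as np

# Verify: X'e = 0, sum(e) = 0, and mean(e) = 0 when X contains a constant.
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant + 2 regressors
y = rng.normal(size=n)                                      # arbitrary illustrative y

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ beta_hat  # residual vector

print(np.allclose(X.T @ e, 0))  # True: each regressor is orthogonal to e
print(np.isclose(e.sum(), 0))   # True: residuals sum to zero (constant present)
print(np.isclose(e.mean(), 0))  # True: hence the sample mean is zero
```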

The Gauss-Markov Assumptions
1. y = Xβ + e. This assumption states that there is a linear relationship between y and X.
2. X is an n × k matrix of full rank. This assumption states that there is no perfect multicollinearity: the columns of X are linearly independent. This assumption is known as the identification condition.

3. E(e|X) = 0. This assumption, the zero conditional mean assumption, states that the disturbances average out to 0 for any value of X. Put differently, no observation of the independent variables conveys any information about the expected value of the disturbance. The assumption implies that E(y) = Xβ. This is important since it essentially says that we get the mean function right.

4. Disturbances that meet the two assumptions of homoskedasticity and no autocorrelation are referred to as spherical disturbances. The Gauss-Markov assumptions about the disturbances can then be written compactly as E(e|X) = 0 and E(ee′|X) = σ²Iₙ.
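Expanding the compact statement (the standard Gauss-Markov form, with e here denoting the disturbance vector of assumption 1) shows both conditions at once:

```latex
E(e \mid X) = 0, \qquad
E(e e' \mid X) = \sigma^2 I_n =
\begin{pmatrix}
\sigma^2 & 0 & \cdots & 0 \\
0 & \sigma^2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sigma^2
\end{pmatrix}
```

The equal diagonal entries express homoskedasticity; the zero off-diagonal entries express the absence of autocorrelation.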

5. The predicted values of y are uncorrelated with the residuals. The predicted values of y are ŷ = Xβ̂. From this we have ŷ′e = (Xβ̂)′e = β̂′X′e = 0. This last step uses the fact that X′e = 0.
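This orthogonality of fitted values and residuals can also be checked numerically (illustrative data, as above):

```python
import numpy as np

# Verify y_hat' e = beta_hat' X' e = 0 on illustrative data.
rng = np.random.default_rng(2)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat      # fitted values
e = y - y_hat             # residuals

print(np.isclose(y_hat @ e, 0))  # True: predictions are orthogonal to residuals
```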

β̂ is an unbiased estimator of β. The Gauss-Markov Theorem states that, conditional on the assumptions above, there is no other linear and unbiased estimator of the β coefficients with a smaller sampling variance. In other words, the OLS estimator is the Best Linear Unbiased Estimator (BLUE). How do we know this?

Proof that β̂ is an unbiased estimator of β.
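The slide title announces a proof without a body; a standard sketch of the argument, filled in here from assumptions 1-3 (linearity, full rank, zero conditional mean), is:

```latex
\hat{\beta} = (X'X)^{-1}X'y
            = (X'X)^{-1}X'(X\beta + e)
            = \beta + (X'X)^{-1}X'e
```

Taking expectations conditional on X:

```latex
E(\hat{\beta} \mid X) = \beta + (X'X)^{-1}X'\,E(e \mid X) = \beta
```

so β̂ is unbiased whenever E(e|X) = 0.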

The Variance-Covariance Matrix of the OLS Estimates

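The heading above has no accompanying body; the standard derivation, under spherical disturbances and using β̂ − β = (X′X)⁻¹X′e from the unbiasedness argument, is:

```latex
\operatorname{Var}(\hat{\beta} \mid X)
 = E\big[(\hat{\beta}-\beta)(\hat{\beta}-\beta)' \mid X\big]
 = (X'X)^{-1}X'\,E(e e' \mid X)\,X(X'X)^{-1}
 = \sigma^2 (X'X)^{-1}
```

The diagonal entries of σ²(X′X)⁻¹ are the sampling variances of the individual coefficient estimates; the off-diagonal entries are their covariances.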