Biased & inconsistent coefficient estimates

Hi Guys!

I’ve been reading AF since I started studying for Level 1 earlier this year, but just never signed up until now. Figured that Level 2 is probably a good time to start getting involved over here. :) Thanks to everyone who’s posted on here (especially s2000magician, tickersu, and many others, lifesavers that you all are), because it really helped me with my exam prep!

Right now I’m a little stuck on why using lagged dependent variables in a regression with serially correlated errors causes biased and inconsistent coefficient estimates. I understand that this setup necessarily makes the error term correlated with an independent variable (the lagged dependent variable), which violates one of the assumptions of multiple regression — exogeneity. What I don’t understand is why that correlation makes the coefficient estimates biased and inconsistent.

I also don’t understand this: the CFAI text states that when the expected value of the error term is not 0 (which would be the case in this example), the coefficient estimates are biased and inconsistent. The best reasoning I can come up with is that because the expected error is not 0, the estimates are systematically off, but that reasoning doesn’t really satisfy me.

Probably just missing something really obvious, but hopefully someone can help.

Anyway, glad to be joining the AF community and hopefully I can make some positive contributions!

Glad to hear you’ve found the place helpful! It’s pretty well beyond the scope of the exam, but your mini explanation is a good way to think about it. Think about what’s being estimated incorrectly: the coefficients and the variance of the error term. The standard errors of the coefficients are a function of the error variance (this is part of where the inconsistency shows up-- you’ve lost a nice asymptotic property).

Another helpful thing to remember: aside from normality and homoscedasticity of the errors, you need the remaining assumptions to hold in order to get unbiased and consistent OLS estimators. If one is violated, you’ve compromised those properties.
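If it helps, here is the one line in the standard unbiasedness proof where that exogeneity assumption does the work (generic matrix notation, not taken from the CFAI text):

```latex
\begin{aligned}
\hat{\beta} &= (X'X)^{-1}X'y
             = (X'X)^{-1}X'(X\beta + \varepsilon)
             = \beta + (X'X)^{-1}X'\varepsilon \\
E[\hat{\beta} \mid X] &= \beta + (X'X)^{-1}X'\,E[\varepsilon \mid X]
\end{aligned}
```

When E[ε | X] = 0 (exogeneity), the second term vanishes and the estimator is unbiased. With a lagged dependent variable and serially correlated errors, E[ε | X] ≠ 0, so the extra term is systematically nonzero (bias), and its probability limit is nonzero too, so it doesn’t fade away as the sample grows (inconsistency).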

It’s not that you’re missing something really obvious-- it’s just helpful to see the calculation to demonstrate the bias and inconsistency. If you look up the proofs for unbiasedness and consistency for OLS estimators, it’ll probably be a little easier to see why you can lose these properties when certain assumptions are violated.
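You can also just see it in a quick simulation. This is a minimal sketch with made-up parameter values: the true lag coefficient is 0.5 and the errors follow a hypothetical AR(1) process, so the lagged dependent variable is correlated with the current error:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data-generating process (parameter values are made up):
#   y_t = b * y_{t-1} + u_t,  with serially correlated errors u_t = rho * u_{t-1} + e_t
b_true, rho, T = 0.5, 0.5, 10_000

e = rng.standard_normal(T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]      # AR(1) errors: u_t depends on u_{t-1}
    y[t] = b_true * y[t - 1] + u[t]   # y_{t-1} contains u_{t-1}, so it's correlated with u_t

# OLS slope of y_t on y_{t-1} (no intercept, since the process has mean zero)
x, z = y[:-1], y[1:]
b_hat = (x @ z) / (x @ x)

print(f"true b = {b_true}, OLS estimate = {b_hat:.3f}")
# The estimate lands well above the true 0.5 even with 10,000 observations --
# the gap doesn't shrink as the sample grows, which is the inconsistency.
```

If you set rho = 0 (no serial correlation), the same code recovers a b_hat very close to 0.5, which is a nice way to convince yourself that it’s really the combination of the lagged dependent variable and the serially correlated errors that creates the problem.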

Hope this helps, let me know if you’re having difficulty finding a link for those.

Thanks so much tickersu! I have come to regard you as one of the “unofficial tutors” on AF. :)

I just looked up the proofs for unbiasedness and consistency for OLS estimators, and it did a really nice job of tying everything together. I now have a hugely improved understanding of the whole multiple regression topic. Thanks again.

Glad it was helpful! I typically find it beneficial to go through the background calculations while paying close attention to the steps where the assumptions come into play. It becomes much clearer why you lose certain properties when issues arise.