It says that autoregressive conditional heteroskedasticity (ARCH) is “where the variance of the residuals in one time period of a time series is dependent on the variance of the residuals in another period.” I thought heteroskedasticity is where the variance of the error terms is related to the independent variable, and serial correlation is where the error terms are correlated with one another. If you’re testing the error terms against each other across the time series, isn’t that serial correlation rather than heteroskedasticity? I’m finding it difficult to keep all the tests and definitions involving error terms straight.
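To try to pin it down, I put together a toy simulation (my own sketch, just numpy, with arbitrary parameter values, nothing from the curriculum). The ARCH(1) errors themselves come out uncorrelated with their own lags, but their squares are correlated, which I take to be why it is classed as a form of heteroskedasticity rather than serial correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
a, b = 1.0, 0.5          # ARCH(1) parameters, chosen arbitrarily
e = np.zeros(T)
u = rng.standard_normal(T)

# ARCH(1): e_t = u_t * sqrt(a + b * e_{t-1}^2), with u_t iid N(0, 1)
for t in range(1, T):
    e[t] = u[t] * np.sqrt(a + b * e[t - 1] ** 2)

def lag1_corr(x):
    """Sample correlation between x_t and x_{t-1}."""
    return np.corrcoef(x[1:], x[:-1])[0, 1]

print("corr(e_t, e_{t-1})    :", round(lag1_corr(e), 3))       # ~0: no serial correlation
print("corr(e_t^2, e_{t-1}^2):", round(lag1_corr(e ** 2), 3))  # clearly > 0: ARCH effect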
What would we do without maratikus’s Wikipedia expertise?
I posted this on another thread started by rainy but it went unanswered; I figured it might help in answering Bradleyz’s question. Bradley, if you can follow the basic statistics rules (I think the most important one here is the Law of Total Variance), you should be able to answer your question. However, I do find that the statement that the variance of the residuals in one time period of a time series is dependent on the variance of the residuals in another period isn’t particularly accurate. That applies only to the conditional variance. The unconditional variance does not depend on the residuals in the previous period at all; it is actually a constant. Joey, I would like some of your input on this. Thanks!

*********************************************************************

Joey, I was reading through the Schweser notes and found something that struck me as odd, and it made me wonder if there is something wrong with the material. Honestly, this is the first time I have come across ARCH being discussed as a problem:

“If a time-series model has been found to contain ARCH errors, regression procedures that correct for heteroskedasticity, e.g. GLS, must be applied for the predictive model. Otherwise, the standard errors of the model’s coefficients will be wrong, leading to incorrect predictions.”

The question is, why should an ARCH model be corrected for heteroskedasticity using GLS? My understanding is that if we have yt = B’Xt + Et with Et = Ut*(a + b*(Et-1)^2)^0.5, then Var(Et | Et-1) = a + b*(Et-1)^2, i.e. Et is CONDITIONALLY HETEROSKEDASTIC. However, the UNCONDITIONAL variance is HOMOSKEDASTIC, i.e. var(Et) = a/(1-b), provided the process generating the error term is weakly (covariance) stationary. Therefore the model satisfies the classical assumptions and OLS will work fine, although there are other estimators that are more efficient. You need only the unconditional variance because you are doing the prediction on y.

Thanks for your help Joey!
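In case it helps, here is a quick numerical check of the unconditional-variance claim (my own sketch, numpy only, with arbitrary parameter values): the sample variance of a simulated ARCH(1) series comes out very close to a/(1-b), even though the conditional variance moves around every period.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500_000
a, b = 1.0, 0.5                 # weakly stationary since b < 1
e = np.zeros(T)
u = rng.standard_normal(T)

# Simulate Et = Ut * (a + b * Et-1^2)^0.5
for t in range(1, T):
    e[t] = u[t] * np.sqrt(a + b * e[t - 1] ** 2)

print("sample var(e_t):", e.var())      # close to 2.0
print("a / (1 - b)    :", a / (1 - b))  # 2.0, the constant unconditional variance
```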
All of that is correct right up to the last sentence. First, it’s not clear that you are doing just prediction. Second, even if you are, more efficient estimation of your parameters gives better predictions. There is no question, though, that the OLS estimators are not catastrophically bad (or even nearly as bad as fashion would have you believe). There was an awful lot of econometric modelling done before 1982 (and even after) that had never heard of ARCH and worked okay.
So is there any possible reason why ARCH errors need to be corrected using GLS, as the Schweser notes say? That statement seemed kind of off to me and I am not sure whether there is a mistake in the Schweser notes. If you want, you can refer to an earlier thread on Dickey-Fuller started by Rainyliu.
Sure - more efficient estimation - the whole point of ARCH.
I see. Thanks, because from what was written in the Schweser notes it seemed like a problem that led to disastrous effects. So am I right to say that OLS works and is perfectly OK to use, but GLS is simply the more efficient estimator?
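Just to convince myself, I ran a quick check (my own sketch in Python with numpy and statsmodels, arbitrary parameter values, not anything from Schweser): an AR(1) in y with ARCH(1) errors. The OLS slope lands essentially on the true value, Engle’s LM test flags the ARCH, and the White (robust) standard error differs from the classical one, so it is the inference, not the point estimate, that the correction is really about.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(2)
T = 20_000
a, b, phi = 1.0, 0.5, 0.5   # arbitrary ARCH(1) and AR(1) parameters
e = np.zeros(T)
y = np.zeros(T)
u = rng.standard_normal(T)

# AR(1) in y with ARCH(1) errors
for t in range(1, T):
    e[t] = u[t] * np.sqrt(a + b * e[t - 1] ** 2)
    y[t] = phi * y[t - 1] + e[t]

X = sm.add_constant(y[:-1])                  # regress y_t on y_{t-1}
ols = sm.OLS(y[1:], X).fit()                 # classical OLS standard errors
rob = sm.OLS(y[1:], X).fit(cov_type="HC0")   # White (robust) standard errors

lm_stat, lm_pval, _, _ = het_arch(ols.resid, nlags=1)  # Engle's ARCH LM test

print("OLS slope estimate   :", ols.params[1])  # close to the true 0.5
print("classical SE of slope:", ols.bse[1])
print("robust SE of slope   :", rob.bse[1])     # differs, since Var(e_t | y_{t-1}) is not constant
print("ARCH LM p-value      :", lm_pval)        # ~0: ARCH clearly detected
```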
Well, I would agree but I bet I can find someone who doesn’t.
And what would be the reasons offered by the person who doesn’t agree?
Hmmm… maybe we should ask them? I guess they would be of the same ilk as Clive Granger’s work on cointegration, where you can show that traditional methods lead to some wrong conclusions. This case is not as strong as that one, though, because the OLS estimators are closer to the truth than they are in the cointegrated case.