I can’t seem to understand what an ARCH model is… Can anybody explain this in an easy way? Thanks.
There is no easy way to understand it. Either you get it or you don’t. I have read it three times now and I still don’t get it.
Conditional heteroskedasticity is a serious problem that can invalidate a regression model’s results. ARCH tests whether past variance is correlated with current variance. Remember that the e_t terms are the residuals from the ORIGINAL AR(1) model! Create a new variable from the squared residuals of that original model, then regress it on its own lag:

e_t^2 = b0 + b1 * e_(t-1)^2 + error_t

If this new regression shows significant correlation, i.e. a high t-stat on b1, then the ORIGINAL model exhibits ARCH(1). Do you then toss it out? Well, maybe you can use it to predict variance one period out, although I find this somewhat fishy: if you can predict variance on the basis of a faulty model, then why is the original model faulty? Ah well, that’s just the way it is. Put a spin on it and try to sell a new idea; that’s what quant is about, I think: selling normal distributions even though tail events regularly invalidate the normal, big time.
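To make the two-step procedure concrete, here is a minimal sketch in Python/NumPy. All the numbers (the AR coefficient 0.7, the ARCH parameters a0 = 0.5 and a1 = 0.5, the sample size) are made up purely for illustration; the point is the mechanics: fit the original AR(1), square its residuals, regress the squared residuals on their own lag, and look at the t-stat on b1.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series whose shocks have ARCH(1) variance:
# sigma_t^2 = a0 + a1 * e_{t-1}^2  (hypothetical parameters).
n, a0, a1 = 1000, 0.5, 0.5
e = np.zeros(n)
for t in range(1, n):
    sigma2 = a0 + a1 * e[t - 1] ** 2
    e[t] = np.sqrt(sigma2) * rng.standard_normal()
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + e[t]

# Step 1: fit the ORIGINAL AR(1) model by OLS and keep its residuals.
X = np.column_stack([np.ones(n - 1), y[:-1]])
b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ b

# Step 2: regress squared residuals on their own lag:
# e_t^2 = b0 + b1 * e_{t-1}^2 + error_t
r2 = resid ** 2
Z = np.column_stack([np.ones(len(r2) - 1), r2[:-1]])
g, *_ = np.linalg.lstsq(Z, r2[1:], rcond=None)

# Step 3: t-stat on b1; a large value flags ARCH(1) effects.
u = r2[1:] - Z @ g
s2 = u @ u / (len(u) - 2)
cov = s2 * np.linalg.inv(Z.T @ Z)
t_b1 = g[1] / np.sqrt(cov[1, 1])
print(f"b1 = {g[1]:.3f}, t-stat = {t_b1:.2f}")
```

Because the shocks were simulated with genuine ARCH(1) variance, the t-stat on b1 should come out well above the usual critical value, so the test correctly flags the original model as ARCH(1).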
What ARCH tests is whether there is conditional heteroskedasticity, which is when the variance of the error terms depends on past error terms. Looking at a graph is the best way to grasp what conditional heteroskedasticity is. Basically, ARCH is used to see whether conditional heteroskedasticity is present. How ARCH does this is that it runs a regression with the squared error term at period t as the dependent variable. The independent variable is the squared error term at period t-1. So it is basically testing how good last period’s squared error term is at predicting this period’s squared error term. If the regression is significant and last period’s squared error does a good job predicting this period’s squared error, conditional heteroskedasticity is present. If the test is insignificant, then the squared error at t-1 does a poor job of predicting the squared error at period t; in that case the error variance is constant and no conditional heteroskedasticity is present. It’s actually a pretty smart way to test for this problem when you think about it. You basically run a regression of the squared error terms of your original regression against their own lagged values.
I remember learning the Breusch-Pagan test (nR^2) for conditional heteroskedasticity; is ARCH another alternative? Also, one question I did said to use generalized least squares to correct for ARCH. Can someone please explain this?
Linear regression:
Issue: conditional heteroskedasticity, which means the variance of the residuals is nonconstant and related to the independent variables.
Detect conditional H: Breusch-Pagan test.
Correct conditional H: robust standard errors.

Time-series regression:
Issue: ARCH, which means the variance of the residuals in one period depends on the variance of the residuals in the previous period.
Detect ARCH: test whether the coefficient a1 (on the lagged squared residual) is statistically significantly different from zero.
Correct ARCH: generalized least squares.

Although not completely correct, you can think of ARCH as simply conditional heteroskedasticity for a time-series model.
Bump. Good question.