Hi all. I just finished quant and have a pretty good grip on it; however, I'm sometimes confused by the CFA text's references to autocorrelation (AC) and autocorrelation of residuals (ACoR). Most of the text refers to ACoR, but sometimes "just" to AC (e.g., p. 478). On page 470, regarding the unit root test of nonstationarity, the CFA text states: "for a stationary time series, either AC at all lags are statistically indistinguishable from zero, or the ACs drop off rapidly to zero as the number of lags becomes large". Are they referring to ACs or ACoRs? I thought this statement applied to ACoRs. Am I missing something? AC = autocorrelation of the time series with itself; ACoR = autocorrelation of the residuals… same? Not the same? Is the text just leaving out "of residuals" as a convenience? Just when I think I understand, I start losing the concept and questioning my understanding. Could someone please clarify? Thanks in advance!
The first step to straightening this out is to figure out what kind of model you are looking at.

a) A time series model is one in which we are modelling the value of something at time t by looking at its value at time t-1, t-2, etc. An example: you are looking at R(t) = return of a hedge fund in period t. For almost all hedge funds and hedge fund indexes, you can fit statistically significant models of the form R(t) = a1*R(t-1) + a2*R(t-2) + … + error. The return of a hedge fund in month t is highly correlated with its return in month t-1, t-2, etc. (hmm, why might that be?). This is autocorrelation, not autocorrelation of the residuals. If you fit a time series model and still have autocorrelation of the residuals, it means that you need to add terms to your model.

b) A regression model is one in which you are regressing some variable X(t) on DIFFERENT variables Y(t), S(t), etc. In this case, you want to test whether you have serial correlation of the residuals. If you do, it means that the assumptions of your tests are violated and your statistics are biased or not well estimated. In that case, you need to do some more modelling, perhaps by adding in lagged terms like X(t-1).

Then there is the issue of stationarity. For a time series like we have in a), stationarity means that the autocorrelation depends only on the lag: no matter which time I sample, the autocorrelation between X(t) and X(t-k) depends only on k and not on t. For "well-behaved" processes, the autocorrelation function a(k) decays exponentially in k. The "AC at all lags are statistically indistinguishable from zero" case is a bit bogus, because then we don't have a very interesting time series (although it is stationary). A string of independent events, like the return on a stock index on day t, would be an example of a time series in which there is no significant autocorrelation. (A short simulation below makes the distinction concrete.)

Edit: I am not saying in that last example that stock index returns are stationary, because their variance surely changes and their means maybe change too. In finance, it's pretty hard to find stationary processes if you watch them for very long.
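To make the AC vs. ACoR distinction concrete, here is a minimal sketch in Python (numpy and statsmodels assumed; the AR coefficient of 0.6, the lag counts, and the seed are illustrative choices of mine, not from the text). It simulates a stationary AR(1), shows that the ACF of the series itself decays roughly exponentially in the lag, and then shows that the residuals of a correctly specified AR(1) fit have autocorrelations statistically indistinguishable from zero.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(42)

# Simulate a stationary AR(1): R(t) = 0.6 * R(t-1) + noise (coefficient is illustrative).
n = 1000
r = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    r[t] = 0.6 * r[t - 1] + eps[t]

# AC of the series itself: decays roughly like 0.6**k, i.e. exponentially in the lag k.
print("series ACF, lags 1-5:  ", np.round(acf(r, nlags=5)[1:], 3))

# Fit the correctly specified AR(1) and look at the residuals: their ACs should be
# statistically indistinguishable from zero. If they were not, that would be the
# point in a) above -- the model is missing terms.
fit = AutoReg(r, lags=1).fit()
print("residual ACF, lags 1-5:", np.round(acf(fit.resid, nlags=5)[1:], 3))

So "AC" is a property of the raw series (what the unit root / stationarity discussion is about), while "ACoR" is a diagnostic of a fitted model's leftovers.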
Good explanation, JD. Can you clarify whether serial correlation is related to conditional heteroskedasticity? It seems to me that if the error terms are correlated in some way (positive or negative), then that would be the same as having a conditionally heteroskedastic situation, no? Conditional heteroskedasticity occurs when the error terms are not constant and not random, but follow some kind of pattern… isn't that what serial correlation is?
Thanks JoeyDV! Dreary - conditional heteroskedasticity is when the variance of the error term is related to (conditioned on) the values (the LEVEL) of the independent variables - i.e., when the independent variable takes on a large value, the error term tends to be larger. It introduces bias into the standard errors (usually underestimating them), so the t-tests are not valid, although the consistency of the regression coefficients is NOT affected. Failing to detect it would mean you'd "find" statistical significance when there is none.
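So serial correlation and conditional heteroskedasticity are related in that both violate the OLS error assumptions and distort the standard errors, but they are not the same thing, and you can have one without the other. Here is a minimal sketch in Python (numpy and statsmodels assumed; the data-generating process, coefficients, and seed are illustrative choices of mine, not from the curriculum): errors whose variance grows with the level of the independent variable trip the Breusch-Pagan test for heteroskedasticity, while a Durbin-Watson statistic near 2 shows those same residuals have no first-order serial correlation.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, size=n)

# Error variance scales with the LEVEL of x: large x -> bigger errors on average,
# but each error is drawn independently, so there is no serial pattern.
e = rng.normal(scale=x)
y = 2.0 + 3.0 * x + e

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Breusch-Pagan regresses the squared residuals on X; a low p-value flags
# conditional heteroskedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print("Breusch-Pagan p-value:", round(lm_pvalue, 4))  # tiny -> heteroskedastic

# Durbin-Watson near 2 means no first-order serial correlation -- the same
# residuals fail one diagnostic and pass the other.
print("Durbin-Watson:", round(durbin_watson(fit.resid), 3))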