Time series (lagged residuals) - what is the idea?

Could someone explain what exactly it means when we have a table with information on 12 lagged residuals (Lag / Autocorrelation / t-stat)? For example, if we are taking the 12th lag, does that mean the 12th lag is regressed against the 1st lag, or what?

Let's denote the residual by u_t. Then the 12th lag of the residual would be u_(t-12).

In an autocorrelation table (showing lag, autocorrelation and t-stat), lag 12 denotes corr(u_t, u_(t-12)), i.e. the correlation between the current residual and its value 12 periods ago.

In general, lag k denotes corr(u_t, u_(t-k)), i.e. the correlation between the current residual and its value k periods ago.
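If it helps to see the definition in code, here is a minimal sketch of the sample autocorrelation at lag k. The array `resid` and the random residuals are just placeholders; in practice you would use the residuals from your own fitted model.

```python
import numpy as np

def sample_autocorrelation(u, k):
    """Sample autocorrelation of the series u at lag k: corr(u_t, u_(t-k))."""
    u = np.asarray(u, dtype=float)
    ubar = u.mean()
    num = np.sum((u[k:] - ubar) * (u[:-k] - ubar))
    den = np.sum((u - ubar) ** 2)
    return num / den

# Hypothetical residuals for illustration; replace with your model's residuals.
rng = np.random.default_rng(42)
resid = rng.normal(size=200)

for k in range(1, 13):
    print(k, round(sample_autocorrelation(resid, k), 3))
```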

The t-stat next to lag 12 tests whether this 12th-order autocorrelation is statistically different from zero at some significance level (alpha).

So the null hypothesis (H0) is that it is not different from zero, while the alternative is that it is. As usual, if the p-value < alpha, or the t-stat is more extreme than the t-critical values, we reject H0 and conclude that the autocorrelation coefficient is indeed statistically different from zero at the given alpha.
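As a rough illustration of where that t-stat comes from: under H0 (no autocorrelation), the standard error of r_k is approximately 1/sqrt(n), so the t-stat is roughly r_k * sqrt(n). The sample size and autocorrelation value below are made up for the example.

```python
import numpy as np

n = 200        # assumed sample size
r12 = 0.15     # assumed 12th-order sample autocorrelation

# Under H0, SE(r_k) is approximately 1/sqrt(n), so t is roughly r_k * sqrt(n).
t_stat = r12 * np.sqrt(n)
print(t_stat)              # about 2.12
print(abs(t_stat) > 1.96)  # reject H0 at alpha = 0.05 (approximate critical value)
```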

Say you estimate this regression: u_t = constant + beta1 * u_(t-12) + {other variables}

To test whether autocorrelation is significant at lag 12, the two-tailed hypothesis test would be on beta1.

My understanding is that the test on beta1 merely tells you whether or not there is autocorrelation at lag 12. It doesn't give you the 12th-order autocorrelation value itself, which is what the autocorrelation table reports.
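A sketch of that regression approach, using statsmodels OLS. The residual series is simulated here purely for illustration; with your own data you would plug in the residuals from the original model.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical residual series; use the residuals from your fitted model in practice.
rng = np.random.default_rng(0)
u = rng.normal(size=500)

y = u[12:]                     # u_t
x = sm.add_constant(u[:-12])   # constant + u_(t-12)

fit = sm.OLS(y, x).fit()
print(fit.params[1])    # beta1 estimate
print(fit.tvalues[1])   # t-stat for H0: beta1 = 0
print(fit.pvalues[1])   # two-tailed p-value
```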

When you run a time-series regression you may have failed to specify the model correctly, so the errors will show a pattern known as serial correlation. If the serial correlation is positive (the common case) the problem is serious, and you need to re-specify your model: add variables, change the current ones, transform them, etc.

The lagged autocorrelation table shows the correlation coefficients between the error series at T and the error series at T-k, lagging one period at a time. There is no general rule for how many lags to analyze, but a practical rule of thumb is the cube root of the number of observations in your sample. For example, if your sample has 1000 observations, you would calculate 10 lagged autocorrelations of the error series.
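Just to make the rule of thumb concrete (the 1000-observation figure is the example from above):

```python
n_obs = 1000
max_lag = int(round(n_obs ** (1/3)))  # cube-root rule of thumb for the number of lags
print(max_lag)                        # 10
```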

A high autocorrelation at any lag would indicate badly behaved errors, meaning your model is not correctly specified. The H0 of each of these tests is that r(t, t-k) equals zero, so to argue that your model is correctly specified you should fail to reject r = 0 in every case; that is, all your t-stats must be smaller than the critical value, or all the p-values higher than alpha.
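Here is a minimal sketch of that lag-by-lag check, using the cube-root rule for the number of lags and the approximate t-stat r_k * sqrt(n). The residuals are simulated for illustration; substitute the residuals from your own model.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# Hypothetical residuals; use the residuals from your own fitted model.
rng = np.random.default_rng(1)
resid = rng.normal(size=1000)

n = len(resid)
max_lag = int(round(n ** (1/3)))          # cube-root rule of thumb

r = acf(resid, nlags=max_lag, fft=False)  # r[0] = 1, r[k] = autocorrelation at lag k
for k in range(1, max_lag + 1):
    t_stat = r[k] * np.sqrt(n)            # approximate t-stat under H0: r_k = 0
    verdict = "reject H0" if abs(t_stat) > 1.96 else "fail to reject"
    print(k, round(r[k], 3), round(t_stat, 2), verdict)
```

If every lag fails to reject, the errors look like white noise and there is no evidence of misspecification from this table.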

This table shows autocorrelations, not regressions of the error at T on the error at T-k, so do not confuse the two procedures.

Regards