t-test for correlation vs. t-test for correlation in time series

Usually the t-test for correlation is:

t = r * ((n-2)/(1-r^2))^(1/2)

while when considering time series (testing the autocorrelation of residual errors) it is:

t = r / (1 / n^(1/2))

where

r --> correlation coefficient

n --> number of observations
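For concreteness, here is a minimal Python sketch of the two formulas, with hypothetical values r = 0.4 and n = 50 chosen purely for illustration:

```python
import math

r = 0.4   # correlation coefficient (hypothetical value)
n = 50    # number of observations (hypothetical value)

# Ordinary t-test for a correlation coefficient (n - 2 degrees of freedom)
t_correlation = r * math.sqrt((n - 2) / (1 - r**2))

# Test for autocorrelation of residual errors: standard error is 1/sqrt(n)
t_autocorrelation = r / (1 / math.sqrt(n))  # same as r * sqrt(n)

print(t_correlation, t_autocorrelation)
```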

Why?

I don’t know exactly why, but what I can see here is that these tests are testing different things.

The first one is for two series of data, testing whether the correlation coefficient is statistically significant, i.e., statistically different from zero. So that is the appropriate test for that case.

In the second one you have one series, and the second is the same series lagged by one period, for example:

x_t   x_(t-1)
1     -
2     1
3     2
4     3

and so on.
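Here is a sketch of that lagging step, assuming the series lives in a NumPy array (the data and names are illustrative):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)  # toy series

# Pair each observation with the previous one (drop the first, which has no lag)
current = x[1:]   # 2, 3, 4, 5
lagged = x[:-1]   # 1, 2, 3, 4

# A simple estimate of the lag-1 autocorrelation: correlate the two columns
r1 = np.corrcoef(current, lagged)[0, 1]
print(r1)
```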

So I think that, since you are in a different scenario, the t-test must be adjusted accordingly. These formulas have been proved statistically, and in some way we rely on them. If you want the derivation of the formula, consult an advanced statistics book.

Regards

Yes, it may be this…

The t-test calculations aren’t adjusted at all. You standardize an observed value by subtracting the assumed true value of the parameter and dividing the difference by the standard error of the sampling distribution for the parameter estimator. The standard errors are different for the two kinds of correlations.

In the first, the standard error is [(1-r^2)/(n-2)]^0.5. To calculate the test statistic (let r be the sample correlation): (r - 0) / standard error of the estimator’s sampling distribution. When we plug in that standard error, it’s equivalent to:

r*[(n-2)/(1-r^2)]^0.5 for the first test statistic.
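A quick numeric check (a sketch assuming SciPy is available, with simulated toy data) shows the standardized form and the compact form agree, and that they reproduce the p-value from scipy.stats.pearsonr:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)  # toy data with some true correlation

r, p_scipy = stats.pearsonr(x, y)

se = np.sqrt((1 - r**2) / (n - 2))             # standard error of the estimator
t_standardized = (r - 0) / se                  # (estimate - hypothesized) / SE
t_compact = r * np.sqrt((n - 2) / (1 - r**2))  # the compact form above

p_manual = 2 * stats.t.sf(abs(t_standardized), df=n - 2)
print(t_standardized, t_compact)  # identical
print(p_manual, p_scipy)          # identical (up to floating point)
```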

For the autocorrelation-of-errors test, the standard error is T^(-0.5), so the test statistic is (p - 0) / T^(-0.5) = p * T^(0.5), where p is the sample residual autocorrelation and T is the number of observations in the sample over time (the sample size, n).

The calculations of the test statistics are the same:

(observed value of estimator - hypothesized value of parameter) / (standard error for the estimator)

Again, only the standard errors are different.
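To make that shared template concrete, here is a short sketch with hypothetical numbers (r, n, p1, and T are illustrative values, not from the thread):

```python
import math

def t_statistic(estimate, hypothesized, standard_error):
    # The common template: standardize the estimate by its standard error
    return (estimate - hypothesized) / standard_error

# Ordinary correlation: SE depends on r and n
r, n = 0.4, 50  # hypothetical values
t_corr = t_statistic(r, 0, math.sqrt((1 - r**2) / (n - 2)))

# Residual autocorrelation: SE is 1/sqrt(T)
p1, T = 0.3, 100  # hypothetical lag-1 residual autocorrelation and sample size
t_acf = t_statistic(p1, 0, T ** -0.5)

print(t_corr, t_acf)
```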

Oh, nice explanation, Tickersu, it is clear now.

In the autocorrelation test’s standard error, you mean T = n, right?

You got it! T is the sample size (some people use T instead of n as a way to emphasize the time series nature of the data). I just clarified in the previous post to help clear things up.