Dickey-Fuller test for cointegration of time series with unit roots

Hello,

I have difficulty understanding the logic behind this test.

The book states that we can test for cointegration between two time series that have a unit root. The steps are below (I’ve put a rough code sketch of these steps after the list):

  1. Estimate the regression of one time series on the other.

  2. Use the Dickey-Fuller test to check whether the time series have a unit root.

  3. If the DF test concludes that the error terms have a unit root, then the time series are not cointegrated.

  4. If the DF test concludes that the error terms have no unit root, then the time series are cointegrated.
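Here’s roughly how I’d sketch those steps in Python with statsmodels (x and y are just simulated series, and I’m assuming the DF test is applied to the error terms from the step-1 regression, since that’s what steps 3 and 4 refer to):

```python
# Sketch of the book's steps with statsmodels; x and y are simulated stand-ins.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))   # a random walk (has a unit root)
y = 0.8 * x + rng.normal(size=500)    # tracks x, so the error terms should be stationary

# Step 1: estimate the regression of y on x.
fit = sm.OLS(y, sm.add_constant(x)).fit()

# Step 2: Dickey-Fuller test for a unit root, applied to the error terms (residuals).
adf_stat, p_value = adfuller(fit.resid)[:2]

# Steps 3-4: fail to reject the unit root -> not cointegrated; reject it -> cointegrated.
print(f"ADF statistic on the error terms: {adf_stat:.2f} (p-value {p_value:.3f})")
```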

Question: if the test is for cointegration between time series that have a unit root, why are we testing whether the time series have a unit root? Isn’t having a unit root a precondition in this scenario?

Thanks in advance for your input.

I think that you’re missing some things:

  1. We’re regressing one time series against another time series.
  2. If both are stationary (i.e., neither has a unit root), then the regression is reliable.
  3. If only one is stationary (i.e., one has a unit root, the other doesn’t), then the regression is not reliable.
  4. If neither is stationary (i.e., each has a unit root), then you need to check for cointegration (see the sketch below this list):
    • If they don’t diverge long-term, they’re cointegrated: the regression is likely reliable in the long run, but may not be reliable in the short run.
    • If they do diverge long-term, they’re not cointegrated: the regression is unreliable.
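If it helps, here’s a minimal sketch of that decision tree in Python with statsmodels (the arrays y and x and the 5% significance level are just placeholders):

```python
from statsmodels.tsa.stattools import adfuller, coint

def check_regression_reliability(y, x, alpha=0.05):
    # ADF null hypothesis: the series has a unit root, so a large p-value
    # means we cannot reject a unit root (the series looks non-stationary).
    y_unit_root = adfuller(y)[1] > alpha
    x_unit_root = adfuller(x)[1] > alpha

    if not y_unit_root and not x_unit_root:
        return "both stationary: the regression is reliable"
    if y_unit_root != x_unit_root:
        return "only one is stationary: the regression is not reliable"

    # Both have unit roots: check for cointegration (Engle-Granger test).
    if coint(y, x)[1] < alpha:
        return "cointegrated: likely reliable long-run, maybe not short-run"
    return "not cointegrated: the regression is unreliable"
```

Under the hood, coint runs essentially the regress-then-test-the-residuals procedure from the original post, with appropriately adjusted critical values.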

Thanks, S2000magician! It’s much clearer the way you break it down.