Hypothesis testing for the Durbin-Watson test

Hi all, could someone help me with the decision rule for hypothesis testing? I got confused by some examples.

So, for quant methods the rule is that Ho is what I want to reject, so regression models are usually better specified (in various circumstances) when I am able to reject Ho (because I want my model to be well specified).

Well, apparently in some cases this rule does not work. For example, Durbin-Watson is used to test whether the errors are serially correlated (I don't want the errors to be correlated, per assumption 5). The test is stated as follows: Ho: no positive serial correlation, and Ha: positive serial correlation. Here, though, if I reject Ho (supposedly what I want), I find out that the errors are correlated, and therefore my model is not well specified… That makes no sense to me…

So does the rule of "Ho is what I want to reject" not work 100% of the time, or am I misunderstanding something?

Thank you!!

I’d consider the choice of null and alternative hypotheses in the Durbin-Watson test to be an anomaly. Or else simply forget about naming the hypotheses: if you get a statistic near zero, you have positive serial correlation; if you get a statistic near four, you have negative serial correlation; if you get a statistic near two, you have neither positive nor negative serial correlation; and if you get values near one or three, you don’t know what you have.
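For concreteness, here's a minimal sketch of how the Durbin-Watson statistic is computed and why values near 0, 2, and 4 map to those conclusions. The residual series below are made up purely for illustration:

```python
def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences of the
    residuals divided by the sum of squared residuals. Ranges from 0 to 4;
    ~2 suggests no first-order serial correlation, near 0 suggests positive
    serial correlation, near 4 suggests negative serial correlation."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# Hypothetical residual series (not from any real regression):
drifting = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]        # neighbors move together -> DW near 0
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # neighbors flip sign -> DW near 4
```

Intuitively, positively correlated residuals change little from one observation to the next, so the numerator (and hence the statistic) is small; sign-flipping residuals make each successive difference large, pushing the statistic toward 4.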

The rule of “Hₒ is what you want to reject” does, in fact, work 100% of the time, except for the few cases where it doesn’t. D-W is one of those cases.

This isn’t the best way to look at it. Suppose the true regression function is E(Y) = b0 + b1X.

You can fit E(Y) = b0 + b1X + b2Z + b3X^2 (and a bunch of other terms too, possibly), and you can reject Ho on all the individual hypothesis tests, yet the model is not specified well (in fact, it’s likely overspecified).

There are other examples I could use, but this gets the point across. Just remember that you need to think about things in more than a black-and-white way to determine whether the specification is appropriate. Another example: you test at alpha = 0.05 but a coefficient's p-value is 0.07. If theory tells you that variable should be left in, the model is likely better specified by keeping the variable and somewhat ignoring the 0.07 p-value (or you would forgo that test in the first place, since you know it should be in the model).

Another anomaly I can think of is autocorrelation of the residuals, where we set H0: b1 = 0; typically we want the error terms not to be correlated and expect the autocorrelation coefficient to be zero.