Question 5 Dennehy topic test
Can anybody teach me how to read the table for the Dickey-Fuller test and when to reject the null? I don’t understand what the answer is saying: “based on the t-ratios and significance levels…”
Thanks.
DF is used to test whether a time series is covariance stationary. The model is x_t = b0 + b1*x_(t-1) + error term. If b1 = 1, there is a random walk pattern in the time series, because the change from x_(t-1) to x_t is then determined solely by the drift b0 and the error term--that is a random walk.
In the DF test, let g = b1 - 1; the null is g = 0 (which is equivalent to b1 = 1). If the null is rejected, we conclude that b1 is significantly different from 1.
In the question, the reported test statistic is the test value for g; you compare it with the critical value, and the rest is all the same.
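If you want to see this mechanically, here is a minimal sketch in Python using statsmodels’ adfuller; the `prices` series is simulated here, so swap in your own data:

```python
# A minimal sketch, not the exact exam setup: simulate a random walk and run
# the (augmented) Dickey-Fuller test on it. `prices` is a made-up series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=200))  # random walk: b1 = 1 by construction

stat, pvalue, *_ = adfuller(prices, regression="c")
print(f"test statistic: {stat:.3f}, p-value: {pvalue:.4f}")
# A large p-value means we cannot reject the null of a unit root (g = 0).
```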
So which numbers do I use to test the hypothesis? For the dependent variable, is it 0.0036 < 0.1123? So reject? Don’t reject?
The Dickey-Fuller test isn’t a special case here; the approach generalizes to other t-tests as well.
So you come up with a regression equation with b0 and b1 as the two coefficients. These have their t-statistics. If the t-stats are not given, you can calculate them using the formula t = b / std error (where b is b0, b1, etc.).
Then you compare this number to the critical t-value, which is normally given. So at a 5% significance level, say the critical t-value is 1.96 and your two t-stats are 2.56 and 8.96 respectively. These are greater than the critical value, so you can say they are statistically significant, i.e. your regression is quite good.
A more standardized approach is to state a null hypothesis H0: b1 = 0, which is saying b1 is equal to zero, i.e. your regression sucks. Your regression says b1 is, say, 0.86 (i.e. not zero). Should this be trusted? At a confidence level of 95%, based on the t-test, we can say it is not zero because 0.86 is statistically significant (remember 8.96 > 1.96), so we reject the null hypothesis that b1 is zero.
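If it helps to see the arithmetic, here is the same comparison in Python; the coefficient and standard error are made-up numbers chosen so t comes out near 8.96, and the degrees of freedom are assumed:

```python
# Hypothetical numbers for illustration only.
from scipy import stats

b1, se = 0.86, 0.096
t_stat = b1 / se                     # t = b / std error -> about 8.96
crit = stats.t.ppf(0.975, df=100)    # two-tailed 5% critical value, df assumed
print(t_stat, crit)                  # 8.96 > ~1.98, so reject H0: b1 = 0
```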
There is yet another way to judge coefficients, and that is the p-value. If a coefficient has a very small p-value (close to zero), then you can again claim your findings are significant (p is a probability of being wrong, so a small probability, usually up to 0.05 as a rule of thumb, means a small chance of being wrong). If your p-value is greater than 0.05, you may start doubting whether your regression can be trusted. Whenever you doubt your findings, you can’t reject the null hypothesis.
There are some further complications to the above, but this is the most basic and simple explanation. For instance, what if H0: b1 = 1? Or 2? Or 3?
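For the H0: b1 = 1 (or 2, or 3) case, you just subtract the hypothesized value before dividing by the standard error. A quick sketch with the same made-up numbers:

```python
# General form of the t-test: t = (b - hypothesized value) / SE.
from scipy import stats

b1, se, hypothesized = 0.86, 0.096, 1.0
t_stat = (b1 - hypothesized) / se
p_value = 2 * stats.t.sf(abs(t_stat), df=100)  # two-tailed p-value, df assumed
print(t_stat, p_value)                         # fail to reject if p > 0.05
```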
0.1123 is a large p-value, so the coefficient 0.0036 may not be reliable and might as well be equal to zero. This is what the null hypothesis was saying, that H0: b1 = 0, so you go with the null, i.e. you can’t reject it…
We are not comparing 0.1123 with 0.0036. 0.1123 means there is an 11.23% chance our model sucks, and that probability alone is enough to doubt our findings. Note the other two p-values are 0 (or very small), which means they are good findings.
Oh, so the significance of t is also known as the p-value? That might be the confusion I was having.
So for the t-test it is 0.0036 < 1.591, so don’t reject.
And for the next one it is -0.432 > -5.846, so reject.
Am I right?
1.591 is the t-value, and it doesn’t say much on its own. You need to compare it to the critical t-value, which is not given.
The p-value, on the other hand, if given, says a lot on its own: if it is greater than 0.05, then the coefficient is not statistically significant.
So that means we have to just use the p-value to answer this question?
Yes. Defective assemblies per hour has a large p-value, which means its t-stat (1.591) is insignificant.
This is a common misunderstanding of a p-value; it is not the probability that you are wrong. It does not tell you the probability that the null is true given the data, and similarly, 1 - p-value does not tell you the probability that the alternative is true (or that the null is false). The p-value is the probability of observing a test statistic at least as extreme as the current one, assuming that the null is true. This is quite different from saying that the p-value is the probability that you are wrong. The p-value by itself cannot answer questions regarding probabilities surrounding hypotheses.
Again, see my comment above. A p-value of 0.1123 indicates that if the null is true, then there is an 11.23% chance of observing results at least as extreme as the current ones. It does not tell you the chance that “our model sucks” or that we are making a mistake.
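One way to convince yourself of this definition is a quick simulation: generate data where the null really is true and count how often the test statistic comes out at least as extreme as the one observed. The observed t of 1.591 and the sample size below are just illustrative:

```python
# Under a true null (slope = 0), how often is |t| at least as large as 1.591?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed_t, trials, n = 1.591, 10_000, 30
count = 0
for _ in range(trials):
    x = rng.normal(size=n)
    y = rng.normal(size=n)        # y is unrelated to x, so H0 is true
    res = stats.linregress(x, y)
    count += abs(res.slope / res.stderr) >= abs(observed_t)
print(count / trials)             # approximates the two-tailed p-value
```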
This is another thing to be wary of when looking at p-values. A small p-value doesn’t indicate how “good” a finding is-- it only indicates whether that result is statistically significant at the pre-specified level of alpha. A very small p-value can be attached to a result that has little practical value (i.e. small p-value, “bad” result).
Thanks for the clarifications. Just as statisticians try to simplify everything and cram it all into a straight line (whereas the relationship may be far from a straight line), I too try to simplify the thought process.
Thanks guys, appreciate it.
No problem… Real statisticians (or even people who have a decent background) can use much more than straight lines-- they try to actually model what’s going on in a process. The CFAI books are very, very limited in what they cover. Linearity in OLS refers only to the parameters (i.e. the beta estimates), but you can introduce nonlinearity by adding x-squared, ln(x), 1/x, and many others!
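To make that concrete, here is a small sketch (all data simulated) showing that OLS happily fits x-squared and ln(x) terms while staying linear in the betas:

```python
# "Linear" in OLS means linear in the betas, not in x.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=200)
y = 2 + 0.5 * x - 0.03 * x**2 + 1.5 * np.log(x) + rng.normal(size=200)

X = sm.add_constant(np.column_stack([x, x**2, np.log(x)]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # four beta estimates; the model is still linear in them
```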
If you want to “simplify” p-values, think of them (informally) as telling you how strongly the data disagree with the null. In comparison to a larger p-value, a smaller p-value indicates the data are more strongly “against” the null hypothesis. We determine “how small” by using the pre-selected alpha.
Anarchist.
LOL. I remember during my undergrad econometrics class I asked the professor why we force the data into a straight line, why not “let the findings speak for themselves.” He said, “Shut up and sit down, you fool.”
You can’t blame him for being annoyed, but there is an inherent problem with all these models and their many assumptions and simplifications: when things go south, everybody suddenly becomes smart at explaining the outcomes, while reality is infinitely complex.