Quant Null Hypothesis testing

Has anyone found a good way to remember all of the tests/rejection rules for all of the stats?

T-test, F-test, P-test...

Thanks!

If the calculated test statistic is larger in absolute value than the critical value, then you reject the null.

If you are given a p-value, reject the null if the p-value is smaller than your alpha level.
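If it helps to see that the two rules always agree, here is a minimal Python sketch (the sample data, hypothesized mean, and alpha are made up purely for illustration):

```python
import numpy as np
from scipy import stats

# hypothetical sample and significance level
sample = np.array([2.1, 1.8, 2.5, 2.9, 1.7, 2.3, 2.0, 2.6])
alpha = 0.05

# two-tailed one-sample t-test of H0: mu = 2.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=2.0)
t_crit = stats.t.ppf(1 - alpha / 2, df=len(sample) - 1)

# the critical-value rule and the p-value rule give the same decision
reject_by_critical_value = abs(t_stat) > t_crit
reject_by_p_value = p_value < alpha
print(reject_by_critical_value, reject_by_p_value)
```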

Doing a bunch of short practice problems will help you internalize the calculations and the conventions for your conclusion.

I am not sure what a P-test is (I assume you mean p-value, which is common to each of the classic hypothesis tests).

The rules of thumb are thus:

T-test (one-sample, two-tailed): tests whether a parameter is different from a hypothesized value (e.g. H0: beta = 0; Ha: beta != 0). The test statistic follows a t-distribution, which approaches the normal distribution as N increases (for all practical purposes when N > 30). Degrees of freedom: n-k-1. Reject when the calculated t-value is larger in absolute value than the critical t-value, or equivalently when the p-value is less than alpha. Conclude that the parameter is different from the hypothesized value in the population at the (1-alpha) confidence level.
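As a concrete illustration (the data and variable names are made up), here is what the two-tailed coefficient t-test looks like in Python with statsmodels; the reported t-value and p-value use exactly the n-k-1 degrees of freedom described above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# hypothetical data: one explanatory variable, 40 observations
x = rng.normal(size=40)
y = 0.5 * x + rng.normal(size=40)

X = sm.add_constant(x)          # adds the intercept column
model = sm.OLS(y, X).fit()

beta_hat = model.params[1]      # estimated slope
t_value = model.tvalues[1]      # tests H0: beta = 0
p_value = model.pvalues[1]      # two-tailed p-value, df = n - k - 1
print(beta_hat, t_value, p_value)
```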

T-test (one-sample, one-tailed): tests whether a parameter is less than (or greater than) a hypothesized value (e.g. H0: beta >= 1; Ha: beta < 1). Degrees of freedom: n-k-1. Reject when the calculated t-value falls beyond the critical t-value in the tail of the alternative (e.g. t < -t-critical for Ha: beta < 1). For 95% confidence, reject the null when the p-value is less than 0.05. Conclude that the parameter is less than (or greater than) the hypothesized value in the population with 95% confidence.
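For the one-tailed version, the only change is which tail you look at. A small sketch with made-up data (recent versions of scipy accept an alternative argument):

```python
import numpy as np
from scipy import stats

alpha = 0.05
sample = np.array([0.7, 0.9, 0.8, 1.1, 0.6, 0.85, 0.95, 0.75, 0.9, 0.8])

# H0: mu >= 1, Ha: mu < 1 (lower-tailed test)
t_stat, p_value = stats.ttest_1samp(sample, popmean=1.0, alternative='less')

# the critical value sits in the lower tail: reject when t_stat < t_crit
t_crit = stats.t.ppf(alpha, df=len(sample) - 1)
print(t_stat < t_crit, p_value < alpha)   # both rules give the same answer
```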

T-test (two-sample, two-tailed): tests whether two population parameters are different from one another (e.g. H0: mu1 = mu2; Ha: mu1 != mu2). Degrees of freedom: n1 + n2 - 2 for a pooled (equal-variance) test of two means. Reject when the calculated t-value is larger in absolute value than the critical t-value. For 95% confidence, reject the null when the p-value is less than 0.05. Conclude that the population parameters are not the same with 95% confidence.
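A two-sample sketch with made-up data, assuming equal variances so the pooled n1 + n2 - 2 degrees of freedom apply:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group1 = rng.normal(loc=5.0, scale=1.0, size=25)
group2 = rng.normal(loc=5.5, scale=1.0, size=30)

# H0: mu1 = mu2, Ha: mu1 != mu2, pooled variance (df = n1 + n2 - 2)
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(t_stat, p_value, p_value < 0.05)
```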

T-test (two-sample, one-tailed): tests whether one parameter is less than (or greater than) another parameter (e.g. H0: beta1 >= beta2; Ha: beta1 < beta2). Degrees of freedom: n-k-1. Reject when the calculated t-value falls beyond the critical t-value in the tail of the alternative. For 95% confidence, reject the null when the p-value is less than 0.05. Conclude that one parameter is less than (or greater than) the other in the population with 95% confidence.
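And the one-tailed two-sample version, again with hypothetical data (the alternative argument needs a reasonably recent scipy):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample_a = rng.normal(loc=1.0, scale=0.5, size=30)
sample_b = rng.normal(loc=1.3, scale=0.5, size=30)

# H0: mu_a >= mu_b, Ha: mu_a < mu_b (lower-tailed)
t_stat, p_value = stats.ttest_ind(sample_a, sample_b,
                                  equal_var=True, alternative='less')
print(t_stat, p_value, p_value < 0.05)
```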

F-test (regression test, one-tailed): tests whether at least one parameter in a model is different from zero (e.g. H0: beta1 = beta2 = beta3 = ... = betaK = 0; Ha: at least one parameter is different from zero). This tests the overall utility of your model. The test statistic follows the F-distribution. Degrees of freedom: numerator k, denominator n-k-1. Reject the null hypothesis if F observed is larger than F critical. For 95% confidence, reject the null when the p-value is less than 0.05. Conclude that at least one parameter in the model is statistically significantly different from zero (i.e. at least one parameter has a statistically significant ceteris paribus relationship with the DV on average).
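To see where the k and n-k-1 degrees of freedom go, here is a sketch of the regression F-test with made-up data; statsmodels reports the same test through model.fvalue and model.f_pvalue:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, k = 50, 3                                   # hypothetical sample size and number of slopes
X = rng.normal(size=(n, k))
y = X @ np.array([0.4, 0.0, -0.2]) + rng.normal(size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()

# F = (explained SS / k) / (residual SS / (n - k - 1))
f_obs = (model.ess / k) / (model.ssr / (n - k - 1))
f_crit = stats.f.ppf(0.95, dfn=k, dfd=n - k - 1)

print(f_obs, model.fvalue)                     # same number
print(f_obs > f_crit, model.f_pvalue < 0.05)   # same decision
```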

F-test (equality of variances): tests whether two (normally distributed) populations have different (or greater or lesser) variances (e.g. H0: variance1 = variance2; Ha: variance1 != variance2), where F observed = larger sample variance / smaller sample variance. The test statistic follows the F-distribution. Degrees of freedom: numerator sample size minus 1, denominator sample size minus 1. Reject the null hypothesis if F observed is larger than F critical. For 95% confidence, reject the null when the p-value is less than 0.05. Conclude that the populations have different variances with 95% confidence. Because the larger variance goes in the numerator, the rejection region is in the upper tail only, so for the two-tailed test use alpha/2 to get the critical value.
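The variance-ratio test is easy to do by hand; a minimal sketch with hypothetical samples (note the sample variances use ddof=1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sample1 = rng.normal(scale=2.0, size=21)
sample2 = rng.normal(scale=1.5, size=16)

var1, var2 = np.var(sample1, ddof=1), np.var(sample2, ddof=1)

# put the larger variance in the numerator
if var1 >= var2:
    f_obs, dfn, dfd = var1 / var2, len(sample1) - 1, len(sample2) - 1
else:
    f_obs, dfn, dfd = var2 / var1, len(sample2) - 1, len(sample1) - 1

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha / 2, dfn, dfd)   # alpha/2 for the two-tailed version
print(f_obs, f_crit, f_obs > f_crit)
```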

Thanks for the responses but… any easier way to remember this?