Sample size and T stat

Q1: Why does a small sample size decrease the value of a T-stat and increase the probability of not rejecting the null?

Q2: What is the difference between a Type 1 and 2 error?

Thanks

:neutral_face:

When you have a small sample, any unusual, extreme value in the sample will have a lot of weight (much more than in the overall population); therefore, to avoid being misled, the t-critical values are larger: you need much stronger evidence to reject the null hypothesis.
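Here's a quick sketch of that effect (assuming SciPy is available; the df values and alpha level are just for illustration), showing the two-tailed 5% critical value shrinking as the degrees of freedom grow:

```python
# Illustrative sketch: two-tailed 5% critical t-values at several degrees of freedom.
# Uses scipy.stats.t.ppf; the df values and alpha are arbitrary choices for the demo.
from scipy.stats import t

for df in (2, 5, 10, 30, 100):
    crit = t.ppf(0.975, df)  # upper 2.5% point, i.e. two-tailed alpha = 0.05
    print(f"df = {df:>3}: critical |t| = {crit:.3f}")
# Small df (small samples) -> larger critical values: stronger evidence is needed to reject Ho.
```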

A Type I error is rejecting the null hypothesis when it is true.

A Type II error is failing to reject the null hypothesis when it is false.


All else constant, a smaller sample size will increase the standard error used in the denominator of the test statistic. This will result in a smaller magnitude for the test statistic. This would imply less evidence contradicting the null hypothesis, and therefore, it’s harder to reject the null hypothesis.
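A minimal sketch of that, with made-up numbers for the sample mean, hypothesized mean, and sample standard deviation:

```python
# Illustrative sketch with made-up numbers: same sample mean and standard deviation,
# different sample sizes. The standard error s / sqrt(n) grows as n shrinks, so |t| falls.
import math

x_bar, mu_0, s = 102.0, 100.0, 8.0  # assumed sample mean, hypothesized mean, sample std dev

for n in (10, 30, 100):
    se = s / math.sqrt(n)         # standard error of the mean
    t_stat = (x_bar - mu_0) / se  # one-sample t statistic
    print(f"n = {n:>3}: SE = {se:.3f}, t = {t_stat:.3f}")
# Smaller n -> larger standard error -> smaller t statistic -> harder to reject Ho.
```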

Here is a fun fact. A Type II error is, strictly speaking, committed only by accepting a false null hypothesis. This occurs when someone actually concludes the null is true (but it happens to be false) or makes a practical decision congruent with accepting the null (when it's actually false… they may erroneously say "FTR", but their actions are clearly "accept Ho").

The phrase “fail to reject Ho” was coined to mark a distinct outcome that is not equivalent to accepting Ho. Many people act in a way that implies they’ve accepted Ho (failing to reject Ho on the Dickey-Fuller test, for example, and then acting as if there is no unit root would be a situation where someone has accepted Ho in practical terms).

Most non-stats books conflate the ideas of FTR Ho and accepting Ho. They’re separate outcomes, but people often don’t see the distinction.

Thanks guys

Cool.

Here’s another fun fact: you can now use subscripts here: H₀.

:wink:

I’ve been waiting a long time for this! Thanks for pointing it out :grin:

Have a look at the t-table: as you move from df 1 to 100, that is, increasing sample size, the critical t-values decrease (high probability of a Type 1 error: rejecting a true null). Small sample size, high critical t-values; huge sample size, small critical t-values. More so as you move from right to left (decreasing confidence level: the critical t-values decrease) and as you move from left to right (increasing confidence level: t-values increase, which also increases the probability of Type 2 errors: failing to reject a false null).

Ehhh, no?

If you go down a column in a t-table, the confidence/alpha level is fixed. It’s showing you that larger degrees of freedom reduce the critical value needed. The probability of a Type I error, alpha, and the confidence level, are the same within the column.

If you set the row (the degrees of freedom) to any particular value, you can see how the critical value would change with changes in the confidence level and alpha.
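For what it's worth, here's a sketch that prints a miniature t-table (two-tailed, using SciPy; the df and alpha values are arbitrary picks), so you can see both movements at once: down a column, alpha is fixed and the critical value falls as df rises; across a row, df is fixed and the critical value changes with alpha.

```python
# Illustrative sketch of a miniature t-table (two-tailed critical values).
# Rows are degrees of freedom, columns are significance levels; both sets are arbitrary picks.
from scipy.stats import t

alphas = (0.10, 0.05, 0.01)
dfs = (1, 5, 10, 30, 100)

print("df  " + "".join(f"{a:>10.2f}" for a in alphas))
for df in dfs:
    print(f"{df:<4}" + "".join(f"{t.ppf(1 - a / 2, df):>10.3f}" for a in alphas))
# Down a column: alpha fixed, critical value falls as df rises.
# Across a row: df fixed, critical value changes with alpha.
```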

Increasing the sample size doesn’t increase the probability of a Type I error-- that’s fixed by the researcher and is independent of the sample size. P-values are partially dependent on the sample size, though.
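To illustrate that last point (hypothetical numbers again): alpha stays at whatever the researcher chose, regardless of n, but the p-value for the same observed effect shrinks as the sample size grows.

```python
# Illustrative sketch (hypothetical numbers): alpha is whatever the researcher set,
# regardless of n, but the p-value for the same observed effect shrinks as n grows.
import math
from scipy.stats import t

x_bar, mu_0, s, alpha = 102.0, 100.0, 8.0, 0.05  # all assumed for the demo

for n in (10, 30, 100):
    t_stat = (x_bar - mu_0) / (s / math.sqrt(n))
    p_value = 2 * t.sf(abs(t_stat), df=n - 1)  # two-tailed p-value
    print(f"n = {n:>3}: t = {t_stat:.3f}, p = {p_value:.4f}, alpha = {alpha}")
```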

I’m surprised that you didn’t jump on this one as well.

I thought you were the only one who could handle the distinction! In seriousness, though, I saw the first few things were wrong and didn’t read any further, which is somewhat of a bad habit. However, it’s slightly silly that not only did I overlook it in the same thread :blush:, but also that the other poster must not have seen my earlier post. I think I’ll read the whole post before replying next time.