Decreasing the level of significance decreases the probability of a Type I error but increases the probability of a Type II error. Can someone explain why this is, please? Thanks!
Decreasing the level of significance decreases the size of the rejection region and increases the size of the acceptance region; therefore, you’re less likely to reject the null hypothesis (whether it’s true or not), and more likely to accept the null hypothesis (whether it’s false or not).
So . . . you’re less likely to reject a true null hypothesis (perpetrate a Type I error), and more likely to accept a false null hypothesis (perpetrate a Type II error).
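To make that concrete, here's a quick Monte Carlo sketch (my own illustration, not part of the explanation above; it assumes a one-sided z-test of H0: mu = 0 vs. H1: mu > 0 with known sigma and an arbitrary true mean of 0.5 under H1). It shows that as alpha shrinks, the simulated Type I error rate falls while the Type II error rate rises:

```python
# Illustrative sketch only: lowering alpha trades Type I errors for Type II errors.
# Assumed setup: one-sided z-test of H0: mu = 0 vs H1: mu > 0, known sigma = 1,
# true mean under H1 set to 0.5, sample size n = 25.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, sigma, true_mu_h1, trials = 25, 1.0, 0.5, 20_000

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)  # smaller alpha -> larger critical value -> smaller rejection region

    # Type I error rate: H0 is true (mu = 0) but we reject it anyway.
    samples_h0 = rng.normal(0.0, sigma, size=(trials, n))
    z_h0 = samples_h0.mean(axis=1) / (sigma / np.sqrt(n))
    type1 = np.mean(z_h0 > z_crit)

    # Type II error rate: H0 is false (true mu = 0.5) but we fail to reject it.
    samples_h1 = rng.normal(true_mu_h1, sigma, size=(trials, n))
    z_h1 = samples_h1.mean(axis=1) / (sigma / np.sqrt(n))
    type2 = np.mean(z_h1 <= z_crit)

    print(f"alpha={alpha:.2f}  Type I ~ {type1:.3f}  Type II ~ {type2:.3f}")
```

The simulated Type I rate tracks alpha itself, while the Type II rate climbs as alpha drops, which is exactly the tradeoff described above.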
A crisp and concise explanation for a cornerstone topic.
If you aren’t leaving enough room in the distribution to find outliers, you might be committing a Type 1 error. Conversely, if you are finding too many outliers, you may be committing a Type 2 error.
Other way round:
- If you aren’t leaving enough room in the distribution to find outliers, you might be committing a Type II error. If your level of significance is 0%, you’ll always fail to reject the null hypothesis.
- Conversely, if you are finding too many outliers, you may be committing a Type I error. If your level of significance is 100%, you’ll always reject the null hypothesis. (See the quick numerical check below.)
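A quick numerical check of those two extremes (my own sketch, again assuming a one-sided z-test where you reject when z exceeds the critical value):

```python
# Illustrative only: critical values at the extreme significance levels mentioned above.
from scipy.stats import norm

for alpha in (0.0, 0.05, 1.0):
    z_crit = norm.ppf(1 - alpha)
    print(f"alpha={alpha:.2f}  reject H0 when z > {z_crit}")

# alpha = 0%  -> critical value is +inf, so you can never reject H0:
#                every false H0 survives (a guaranteed Type II error).
# alpha = 100% -> critical value is -inf, so you always reject H0:
#                every true H0 is rejected (a guaranteed Type I error).
```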