Type I, II Error

For a hypothesis test with a probability of a Type II error of 60% and a probability of a Type I error of 5%, which of the following statements is most accurate?

A. The power of the test is 40%, and there is a 5% probability that the test statistic will exceed the critical value(s).
B. There is a 95% probability that the test statistic will be between the critical values if this is a two-tailed test.
C. There is a 5% probability that the null hypothesis will be rejected when actually true, and the probability of rejecting the null when it is false is 40%.

They claim that the answer is C, on the basis that in A and B the null hypothesis could be false, which would make those claims invalid since the probability of rejection would be unknown. How does this make sense? I do not get their reasoning. Is the probability of rejecting the null hypothesis already incorporated into the probabilities? Thanks!

A isn’t necessarily true because the null could be true or false; the statement would need to say “assuming the null is true, there is a 5% probability…”

B isn’t necessarily true for the same reason above.

C is always true assuming the probability of each error is as described in the question; it’s definitional for the chosen significance level (alpha = P(Type I error)) and the power of the test (1 − P(Type II error) = P(rejecting H0 | H0 is false)). You may see beta used instead of P(Type II error).

I don’t think this is a good question, but it is accurate. It may be helpful to review the definitions of Type I and Type II errors as well as power of a test.
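In symbols, restating those definitions with the numbers from the question (nothing here beyond what is stated above):

```latex
\begin{aligned}
\alpha &= P(\text{Type I error}) = P(\text{reject } H_0 \mid H_0 \text{ is true}) = 0.05 \\
\beta  &= P(\text{Type II error}) = P(\text{fail to reject } H_0 \mid H_0 \text{ is false}) = 0.60 \\
\text{power} &= 1 - \beta = P(\text{reject } H_0 \mid H_0 \text{ is false}) = 1 - 0.60 = 0.40
\end{aligned}
```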

α is the probability of rejecting _a true null hypothesis_ (i.e., the probability of a Type I error).

α _is not_ the probability of rejecting the null hypothesis.

So does it mean the Type I error rate defines the significance level I need in the hypothesis test?

But it doesn’t mean the test statistic will actually fall outside the 95% confidence interval built around the mean value given in the null hypothesis.

Am I right?

The _probability of_ a Type I error defines the significance level you _choose_ in the hypothesis test.

It does mean that if and only if the distribution of the (sample) test statistic is exactly the same as the distribution you use to create the confidence interval.
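If a concrete picture helps, here’s a rough simulation sketch of that last point (my own made-up numbers, not from any curriculum): the nominal 5% only holds when the test statistic really has the distribution used to set the critical values. Here a z-test assumes σ = 1, but the data actually have σ = 1.5, so the realized Type I error rate is well above 5% even though the null is true.

```python
import numpy as np

# Rough sketch with made-up numbers: a two-tailed z-test of H0: mu = 0 that
# (wrongly) assumes sigma = 1 when the true sigma is 1.5. H0 is true in every
# simulated sample, yet the realized rejection rate is far from the nominal 5%.

rng = np.random.default_rng(1)
n, trials = 25, 200_000
assumed_sigma, true_sigma = 1.0, 1.5
z_crit = 1.96  # two-tailed 5% critical value from the standard normal

data = rng.normal(0.0, true_sigma, size=(trials, n))   # H0 (mu = 0) is true
z = data.mean(axis=1) / (assumed_sigma / np.sqrt(n))    # statistic built with the wrong sigma

print("Nominal Type I error rate: 0.05")
print("Realized Type I error rate:", np.mean(np.abs(z) > z_crit))  # roughly 0.19 here
```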

Thank you very much.

My pleasure.

Type I error is when you keep a manager that provides no value.

Type II error is when you fire a manager that is providing value.

Null is true, but rejected.

That’s the error for me.

SchweserNotes: “The significance level is the probability of making a Type I error (rejecting the null when it is true) and is designated by the Greek letter alpha (α). For instance, a significance level of 5% (α = 0.05) means there is a 5% chance of rejecting a true null hypothesis.”
So the question’s “Type I error of 5%” means α = 5%, and thus I think “there is a 5% chance of rejecting a true null hypothesis” is the same as “(A) there is a 5% probability that the test statistic will exceed the critical value(s)”. That would also mean “(B) there is a 95% probability that the test statistic will be between the critical values, if this is a two-tailed test.”
I think the phrase “a probability of a Type I error of 5%” already contains the meaning that “the probability of rejecting a true null hypothesis is 5%”.
Where is my error?

Where is my error?

in not reading the earlier posts
The second post in this thread (by @tickersu) tells you where you’ve erred

And a general point: if you see a post by s2000magician, read it.
Same goes for posts by MikeyF and breadmaker (and my apologies if I’ve missed any other names I should have included)

There is often a difference between what you think the question says and what it actually says. As we reviewed earlier, left unqualified, the first statements allow the null to be true or false, and so the unconditional probability of rejecting (and of committing a Type I error) also depends on the probability that the null is true, which means it can’t simply be a 5% chance of a Type I error.
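To spell that step out (my own added illustration, not part of the original question): by the law of total probability, the unconditional chance of rejecting mixes the two conditional probabilities, weighted by how likely the null is to be true or false.

```latex
P(\text{reject } H_0)
  = \underbrace{P(\text{reject } H_0 \mid H_0 \text{ true})}_{\alpha \,=\, 0.05}\, P(H_0 \text{ true})
  + \underbrace{P(\text{reject } H_0 \mid H_0 \text{ false})}_{\text{power} \,=\, 0.40}\, P(H_0 \text{ false})
```

Unless you also know P(H0 true), that unconditional probability is unknown, which is exactly why the unqualified wording in A and B doesn’t pin it down.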

Answer choice C explicitly qualifies that the null is true, which lets us say the probability of a Type I error is 5%, and then explicitly qualifies that the null is false, which lets us make a statement about the power of the test (and the probability of a Type II error).

The trick here is to write out the definitions of the Type I and Type II error rates and the power. Then the question is only testing your knowledge of the definitions.

P(Type I Error) = probability of rejecting the null when it is true/given that it is true/…other wordings which qualify that the null is true
P(Type II Error) = probability of accepting the null when it is false/given that it is false/… other wordings which qualify that the null is false
Power = probability of rejecting the null when it is false/given that it is false/…other verbiage which clarifies the null is false

Then you can see that A and B do not fit the definitions (and the given numbers) as well as choice C does.
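If it helps to see those definitions in action, here’s a small simulation sketch (my own made-up test and numbers, chosen so the power comes out near the question’s 40%): a two-tailed z-test at α = 5%, with the rejection rate computed separately given a true null and given a false null.

```python
import numpy as np

# Sketch with made-up numbers: one-sample two-tailed z-test of H0: mu = 0,
# known sigma = 1, n = 25, alpha = 0.05. The true mean under "H0 false" is
# set to 0.34 so the power works out to roughly 40%, matching the question.

rng = np.random.default_rng(42)
n, sigma, z_crit = 25, 1.0, 1.96

def rejection_rate(true_mu, trials=200_000):
    """Fraction of simulated samples whose |z| exceeds the critical value."""
    samples = rng.normal(true_mu, sigma, size=(trials, n))
    z = samples.mean(axis=1) / (sigma / np.sqrt(n))
    return np.mean(np.abs(z) > z_crit)

print("P(reject | H0 true)  =", rejection_rate(true_mu=0.0))   # about 0.05 = alpha
print("P(reject | H0 false) =", rejection_rate(true_mu=0.34))  # about 0.40 = power
```

Notice that neither number by itself tells you the unconditional probability of rejecting; for that you would also need to know how likely the null is to be true, which is the point made above about choices A and B.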
