When we are testing the regression coefficients, typically our hypotheses would be H0: b0 = 0 or H0: b1 = 1.
Is b0 considered the risk-free rate, and are we testing that b0 is not equal to 0 because otherwise we get no return? And for b1 = 1, are we hoping to reject this hypothesis and get a market beta different from 1, because otherwise the stock will be exactly as sensitive as the market?
You say, "when we are testing the regression coefficients".
What regression coefficients?
Are you regressing equity returns of one stock vs. the returns of the overall market, or are you regressing the prices of bonds vs. their date of issuance, or are you regressing the implied volatility of call options vs. their strike prices, or . . . ?
I can’t tell you what the coefficients mean if I don’t know what your _X_s and _Y_s are.
@S2000magician I think I may have got it. The hypothesized values of b0 and b1 both depend on what we are trying to test.
For example, if we think that a stock is more sensitive than the market, then we would set H0: b1 = 1 and Ha: b1 > 1, hoping that we can reject our null.
I think I got confused because I had read that when we set up our null for either b0 or b1, it should always be H0: b0 = 0 or H0: b1 = 1, which wasn't making much sense to me.