Why do mortgage rates and the level of GDP both explain a significant amount of the variation in BuildCo’s annual sales at the 5% significance level when using the F-test? I can see why mortgage rates do, because their p-value is less than 5%, but why GDP?
This is a result of multicollinearity (check pg 200 in Schweser Book 1), although I’m not sure whether the effect is significant in this case. Nevertheless, multicollinearity refers to the instance where coefficients are not significant alone, but when combined, the regression is significant. It’s a result of high correlation between the independent variables. In other words, if BuildCo’s regression as a whole were significant, but the GDP and mortgage-rate coefficients were not significant individually, this would suggest that GDP and mortgage rates are highly correlated… that is, if I understand it correctly.
Clark_CFA is correct. A regression analysis may appear reliable and statistically significant even though the independent variables do not appear to be significant predictors individually. When this happens, we say the regression exhibits multicollinearity: the independent variables alone contribute little toward explaining the variation in the dependent variable, but combined they explain a lot. This is because they are correlated with each other, not because they are good predictors of the dependent variable, and this is why multicollinearity is an issue that should be corrected.

Your example matches how the book says you should go about detecting multicollinearity: if R^2 and the F-test appear significant but the individual t-tests (or p-values) of the independent variables appear statistically insignificant, then multicollinearity may exist. This makes sense because the F-test measures whether the group of independent variables jointly explains the variation in the dependent variable, while the individual t-tests do not. If the independent variables are reliable, the F-test should echo the indication from the t-tests; if it does not, multicollinearity might exist. Hope this helps.
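For anyone who wants to see this pattern numerically, here is a minimal Python sketch (my own illustration, not from the curriculum or Schweser) using simulated data, where two near-duplicate regressors stand in for mortgage rates and GDP. The point is that the joint F-test comes out significant while the individual t-tests do not:

```python
# Illustrative simulation: two highly correlated predictors (hypothetical
# stand-ins for mortgage rates and GDP) driving a dependent variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 40

x1 = rng.normal(0, 1, n)            # "mortgage rates" (made-up data)
x2 = x1 + rng.normal(0, 0.05, n)    # "GDP", nearly a copy of x1
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print(f"R^2            : {fit.rsquared:.3f}")
# Expect the joint F p-value to be tiny (well below 5%)...
print(f"F-test p-value : {fit.f_pvalue:.6f}")
# ...while the individual coefficient p-values are large (well above 5%).
print(f"t-test p-values (const, x1, x2): {np.round(fit.pvalues, 4)}")
```

Neither coefficient can be pinned down on its own because the two regressors carry essentially the same information, but together they clearly explain the variation in y, which is exactly the significant-F / insignificant-t signature described above.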
In general, Clark and BMiller are correct; however, I would like to clarify a couple of things. Clark said, “multicollinearity refers to the instance where coefficients are not significant alone, but when combined, the regression is significant.” This is not accurate: Clark is describing the result of multicollinearity, not what it refers to. What it refers to is the situation where independent variables (or combinations of them) are correlated with each other. It results in standard errors that are too large, and since the standard error is the denominator in calculating t-stats, it makes the t-stats too small and you fail to reject the null that the coefficient is equal to zero. Thankfully, CFAI does not provide a formal test for it, so, as Clark et al. have said, you spot it by seeing low t-stats alongside a high F-stat and a high R^2. I know that is a bit nitpicky, but I like to be precise.
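To make the standard-error inflation concrete, here is a self-contained sketch along the same lines as the one above (again my own illustration; the data and names are hypothetical). It compares the standard error on the first coefficient with and without the collinear regressor, and also prints variance inflation factors (VIFs), a standard diagnostic that sits outside the CFAI curriculum, where values above roughly 10 are commonly taken as a red flag:

```python
# Show how a collinear regressor blows up the standard error on a coefficient.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 40
x1 = rng.normal(0, 1, n)
x2 = x1 + rng.normal(0, 0.05, n)    # near-duplicate of x1
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(0, 1, n)

X_both = sm.add_constant(np.column_stack([x1, x2]))
X_one = sm.add_constant(x1)

se_both = sm.OLS(y, X_both).fit().bse[1]   # se on x1 with x2 included
se_one = sm.OLS(y, X_one).fit().bse[1]     # se on x1 alone

print(f"se(b1) with x2 in the model : {se_both:.3f}")  # expect much larger
print(f"se(b1) without x2           : {se_one:.3f}")

# VIF for each regressor (skipping the constant); expect very large values.
for i, name in [(1, "x1"), (2, "x2")]:
    print(f"VIF {name}: {variance_inflation_factor(X_both, i):.1f}")
```

The inflated standard error is precisely why the t-stats come out too small: t = coefficient / standard error, so a denominator that is, say, twenty times larger drags an otherwise healthy t-stat below the critical value.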