It is possible, though. The question can be read a couple of ways, so I will answer both readings.
First, if the question is: “If the overall test is significant, is it possible that some coefficients, individually, are nonsignificant?” The short answer is yes, because the overall F test only asks, “Is this group of predictors statistically useful for predicting Y? In other words, is at least one of the listed coefficients different from zero?” The individual t tests each ask, “Does Xi add anything to predicting Y if you already have all the other X variables in the model?” Once you remember which question each test asks, this case is straightforward.
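To make this concrete, here is a minimal simulation sketch in Python (using numpy and statsmodels; the setup and variable names are my own illustration, not from the question). X1 strongly predicts Y while X2 is pure noise, so the overall F test is significant but the t test for X2 typically is not:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)            # unrelated to Y
y = 2.0 * x1 + rng.normal(size=n)  # only x1 truly matters

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
print(fit.f_pvalue)   # overall F test: very small (significant)
print(fit.pvalues)    # t test p-value for x2 is typically > 0.05
```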
Second, if the question is, “Is it possible to have the overall model F test significant, but NONE of the individual tests significant?” then the answer is also yes; this is possible.
This can happen under multicollinearity. Consider a simple case where Y is predicted by X1 and X2, with X1 and X2 highly correlated with each other (though not so nearly identical that the model cannot be fit) and each a good predictor of Y.
The overall model F test tests the hypothesis B1 = B2 = 0, and it is significant because the explanatory variables X1 and X2 are jointly useful for predicting Y. Yet with a sufficient degree of multicollinearity, the individual tests for B1 and B2 can each come out nonsignificant.
Again, the reason is that the tests on individual coefficients ask a different question than the F test for the overall model does. The test on the individual coefficient Bi asks, “If we account for all the other X variables first, does adding Xi improve our ability to explain variation in Y?”
The answer can be no for X1: once you have X2, X1 does not add much (in a statistical sense) for explaining Y. And the same holds when you ask the question of X2: it may add little beyond X1 for predicting Y.
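Here is a small simulation sketch of this second case (again with numpy and statsmodels; the specific setup is my own illustration). X2 is nearly a copy of X1, both truly affect Y, and the overall F test comes out significant while the individual t tests typically do not:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)  # x2 is nearly a copy of x1
y = x1 + x2 + rng.normal(size=n)          # both truly affect Y

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
print(fit.f_pvalue)   # overall F test: significant
print(fit.pvalues)    # t tests for both slopes: typically nonsignificant
```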
Another way to see this is through the variance inflation factor (VIF). In any linear regression with more than one non-intercept coefficient, each predictor Xj has a VIF equal to 1/(1 - R-squaredj), where R-squaredj is the R-squared from regressing Xj on the other predictors in the model. The standard error of the coefficient on Xj is inflated by a factor of sqrt(VIFj) relative to the case where the predictors are uncorrelated, which shrinks the t statistic for Xj accordingly. The VIF quantifies how much redundant information the predictors share; this is why multicollinearity makes standard errors larger and t statistics “artificially small”.
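A hedged sketch of computing the VIF directly from its auxiliary-regression definition (the `vif` helper below is my own, not a standard API; statsmodels also ships a built-in version):

```python
import numpy as np
import statsmodels.api as sm

def vif(X, j):
    """VIF_j = 1/(1 - R-squared_j), where R-squared_j comes from
    regressing column j of X on the remaining columns (with intercept)."""
    others = np.delete(X, j, axis=1)
    r2 = sm.OLS(X[:, j], sm.add_constant(others)).fit().rsquared
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.05, size=50)  # nearly redundant with x1
X = np.column_stack([x1, x2])
print([vif(X, j) for j in range(X.shape[1])])  # both VIFs are very large
```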
Long story short: you absolutely can have a significant overall model F test while all the non-intercept coefficients show up “nonsignificant”. But this is only apparently contradictory.