Hi. I can’t find an exact answer, though I think I know it. When calculating the correlation used in computing beta, should we be using risk-adjusted returns (i.e., excess returns, Ri − Rf) rather than raw returns? I ask because, in calculating various performance measures, we take care to specify Ri − Rf. I just want to verify that we take the correlation of the excess returns, and that the resulting correlation coefficient is what is then used in calculating beta.
Thank you!
Because beta is the slope of a regression line, it doesn’t matter whether you use actual returns or returns in excess of the risk-free rate; the slope will be the same either way.
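A one-line justification (here Ri is the asset return, Rm the market return, and Rf is treated as a single constant):

```latex
\beta \;=\; \frac{\operatorname{Cov}(R_i - R_f,\; R_m - R_f)}{\operatorname{Var}(R_m - R_f)}
      \;=\; \frac{\operatorname{Cov}(R_i,\, R_m)}{\operatorname{Var}(R_m)}
```

Both the covariance and the variance are unchanged when the same constant is subtracted from each series, which is why the two slopes coincide.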
Thanks for answering. In my case, I am calculating from monthly investment returns, and I get slightly different slopes depending on whether or not I adjust for the risk-free rate (say, 3-month T-bills).
If I make a silly example in Excel (pasted screenshot below) with random arrays of X, Y, and a smaller random Rf (completely random numbers), the slope of (X, Y) comes out to 0.303545, while the slope of (X − Rf, Y − Rf) comes out to 0.299347. Both would show as 0.30 in a report, but I would prefer the more accurate methodology. Note that the correlation also comes out slightly different. I’m inclined to believe I should use the correlation and beta of the excess returns. Or am I completely missing something? Please share your thoughts – thanks a lot.
For the slopes to be identical, the risk-free rate has to be a constant. Varying it from one observation to the next is the source of the difference.
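A quick numerical sketch of the point (made-up numbers, loosely mirroring the Excel experiment above; all series here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                   # e.g., five years of monthly returns
x = rng.normal(0.008, 0.04, n)           # hypothetical "market" returns
y = 0.9 * x + rng.normal(0.0, 0.02, n)   # hypothetical fund returns

def slope(x, y):
    """OLS slope of y on x, i.e., Cov(x, y) / Var(x)."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

rf_const = np.full(n, 0.003)             # constant risk-free rate
rf_vary = rng.normal(0.003, 0.001, n)    # time-varying risk-free rate

print(slope(x, y))                       # raw returns
print(slope(x - rf_const, y - rf_const)) # identical: constant shift drops out
print(slope(x - rf_vary, y - rf_vary))   # slightly different: Rf varies
```

With the constant rate, the first two slopes match exactly; with the varying series, you get the small discrepancy described above.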
Thanks again. The 3-month T-bill rate changes monthly, so the risk-free rate varies from one observation to the next. At this point I’m asking more for practical, real-world purposes than for CFA exam purposes.
So, with that in mind, is it better to calculate the correlation and beta from the excess returns?