Why do we always use excess returns in the numerator when comparing returns? I understand we want to see how much additional return we are getting per unit of risk but why not just compare how much total return we get per unit of risk?
Can someone please explain this mathematically, without just saying we want to see performance relative to benchmark or required goal?
Why can I not just look at two portfolios and say R_p1 / σ_p1 > R_p2 / σ_p2, conclude that portfolio 1 gives me more bang per unit of risk, and always go with portfolio 1?
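For concreteness, here is a quick sketch of the two candidate ratios with made-up numbers. Note that they can rank the same two portfolios differently once a positive risk-free rate exists, which is the crux of the replies below:

```python
# Hypothetical numbers: ranking by raw return/risk vs. excess return/risk
# (the Sharpe ratio) can disagree whenever rf > 0.
rf = 0.03  # assumed risk-free rate

portfolios = {
    "P1": {"ret": 0.05, "vol": 0.06},  # 5% return, 6% volatility
    "P2": {"ret": 0.08, "vol": 0.10},  # 8% return, 10% volatility
}

for name, p in portfolios.items():
    raw = p["ret"] / p["vol"]            # total return per unit of risk
    sharpe = (p["ret"] - rf) / p["vol"]  # excess return per unit of risk
    print(f"{name}: raw ratio = {raw:.3f}, Sharpe = {sharpe:.3f}")

# Output:
# P1: raw ratio = 0.833, Sharpe = 0.333
# P2: raw ratio = 0.800, Sharpe = 0.500
# The raw ratio prefers P1; the Sharpe ratio prefers P2.
```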
I think I can try to answer this… not sure though…
Imagine you have 2 portfolios: one gives you 5% return for 1 unit of risk and the other gives you 6% return for 1 unit of risk. You compare the two and you’d obviously say that the second one is better.
But then you realize that the risk-free asset is giving you 7% returns, with no risk at all! So actually both of the portfolios above should be discarded.
If you hadn’t considered the risk-free rate, you would simply have invested in the portfolio giving just 6% returns despite taking on risk!
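Using the numbers in this example, the excess-return ratios make that explicit: (5% - 7%)/1 = -2% and (6% - 7%)/1 = -1% per unit of risk. Both are negative, so any money put into either portfolio is dominated by simply holding the 7% risk-free asset.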
But (using the numbers from your example) what if we have a return goal of 7%? We aren’t saying that the risk-free rate is 7%; we are just saying our goal is to return 7%. So when comparing which portfolio will achieve this better, we have to take a risk-adjusted return, e.g. (6% - 7%)/risk, to compare which portfolio is better at achieving a “goal return.” I guess the whole assumption here is that the “goal return” is being treated as a risk-free return; otherwise, we would need to calculate an information ratio. So why, for goals-based investing, do we compare risk-adjusted returns assuming that the “target return” is risk-free?
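One way to ground this: (R_p - target)/σ_p is exactly Roy’s safety-first ratio, and under an assumption of normally distributed returns, maximizing it is equivalent to minimizing the probability of falling short of the goal. The target doesn’t need to be risk-free for the ratio to be meaningful; it just needs to be the threshold you care about missing. A quick sketch with made-up portfolios:

```python
from scipy.stats import norm

target = 0.07  # the "goal" return from the example above

# Two hypothetical portfolios: (expected return, volatility)
portfolios = {"A": (0.09, 0.12), "B": (0.085, 0.06)}

for name, (mu, sigma) in portfolios.items():
    sf = (mu - target) / sigma  # Roy's safety-first ratio: excess over the TARGET per unit of risk
    p_miss = norm.cdf(-sf)      # P(return < target), assuming normality
    print(f"{name}: SF ratio = {sf:.2f}, P(miss goal) = {p_miss:.1%}")

# Output:
# A: SF ratio = 0.17, P(miss goal) = 43.4%
# B: SF ratio = 0.25, P(miss goal) = 40.1%
```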
I’d like to try answering this again, not sure though…
If we are taking a goal-based return, let’s say 10%, first we need to ask whether an asset exists that provides a 10% return for the same amount of risk we are taking. If it does, then the comparison seems rational, because how can we compare two different securities’ returns if each carries a different amount of risk?
If we take both the risk and the return into consideration, we’re back at the information ratio, like you mentioned. I’m not sure if that answers your question. Nice question btw, it got me thinking!
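For reference, the information ratio mentioned here is the same “excess per unit of risk” idea, with the benchmark in place of rf and tracking error in place of total risk: IR = (R_p - R_b) / σ(R_p - R_b). A toy sketch on simulated data (all numbers made up):

```python
import numpy as np

# Simulated monthly returns, purely illustrative (seed is arbitrary)
rng = np.random.default_rng(0)
bench = rng.normal(0.006, 0.03, 36)         # benchmark returns
port = bench + rng.normal(0.001, 0.01, 36)  # portfolio = benchmark + active bets

active = port - bench                    # active (excess-over-benchmark) returns
ir = active.mean() / active.std(ddof=1)  # information ratio, per period
print(f"monthly IR: {ir:.2f}")
```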
Think of the minimum-variance frontier (not just the efficient frontier, but the whole curve that curls back on itself a little, like a U rotated 90 degrees clockwise).
If you compare two portfolios using return/risk, you are comparing two points on this curve by the slope of the line from the origin to each point.
But what we know from the CAPM is that a risk-free asset is available.
The presence of the risk-free asset alters the set of “best options”: we should only be willing to take on risk if we get a return above the risk-free rate.
Comparing using (R - rf)/risk is really about the CALs (capital allocation lines): each portfolio defines a CAL running from the risk-free rate through that portfolio, and we want the CAL with the highest gradient.
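To spell out the math behind that (with hypothetical numbers): putting weight w in portfolio p and (1 - w) in the risk-free asset gives return rf + w(R_p - rf) at risk w·σ_p, so at any chosen risk level σ the achievable return along p’s CAL is rf + σ·(R_p - rf)/σ_p. The portfolio with the higher Sharpe ratio therefore dominates at every risk level, which a plain return/risk ratio cannot tell you. A minimal sketch:

```python
# Hypothetical numbers: once you can blend any portfolio with the risk-free
# asset, the achievable return at risk sigma is rf + Sharpe * sigma, so the
# highest-Sharpe portfolio wins at EVERY risk level.
rf = 0.03
p1 = {"ret": 0.05, "vol": 0.06}  # higher raw return/risk ratio
p2 = {"ret": 0.08, "vol": 0.10}  # higher Sharpe ratio

def achievable_return(p, sigma, rf=rf):
    """Return of a risk-free/portfolio mix with total risk sigma (the CAL through p)."""
    sharpe = (p["ret"] - rf) / p["vol"]
    return rf + sharpe * sigma

for sigma in (0.02, 0.06, 0.10, 0.15):  # levels above a portfolio's own vol require borrowing at rf
    r1 = achievable_return(p1, sigma)
    r2 = achievable_return(p2, sigma)
    print(f"risk {sigma:.0%}: via P1 -> {r1:.2%}, via P2 -> {r2:.2%}")
```

Note that P1 has the higher raw return/risk ratio (0.83 vs 0.80), yet P2’s CAL delivers more return at every risk level. That leverage argument is the mathematical reason the numerator has to be the excess return.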
There are no dumb questions when it comes to understanding complex financial concepts like risk-adjusted returns. Feel free to ask, and let’s break it down together!