Why are calculations typically performed on excess returns vs. returns?

Ha ha ha! That’s good!

@maratikus Sorry for the (relatively) late reply. I’m not sure what you mean by a high condition number. I assume that since the risk-free rate has a standard deviation of 0 and 0 correlation with the rest of the portfolio, the covariance matrix is the same whether you include the risk-free rate or not. Assuming you use standard MVO with weights constrained to sum to 1, the optimal portfolio is w = wmin + (1/lambda)*sig^-1*(I - 1*wmin’)*u, where lambda is a risk-aversion coefficient, sig^-1 is the inverse of the covariance matrix, I is an nxn identity matrix, 1 is a vector of 1s, u is the expected-return vector, and wmin is the minimum-variance portfolio, (sig^-1*1)/(1’*sig^-1*1). I know it may seem counter-intuitive, but because the minimum-variance weights sum to 1, (1*wmin’)*(rf*1) = rf*1, so (I - 1*wmin’)*u = (I - 1*wmin’)*(u - rf*1), and it doesn’t matter what the risk-free rate is or whether you subtract it out. The portfolio weights are the same regardless.
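To make this concrete, here’s a quick numpy sketch (the covariance matrix, expected returns, risk aversion, and risk-free rate are made-up toy numbers) checking that the constrained-MVO weights come out identical whether or not you subtract the risk-free rate:

```python
import numpy as np

sig = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])   # toy covariance matrix
u = np.array([0.06, 0.08, 0.10])       # toy expected returns
lam = 3.0                              # risk-aversion coefficient
rf = 0.02                              # risk-free rate

sig_inv = np.linalg.inv(sig)
ones = np.ones(len(u))
wmin = sig_inv @ ones / (ones @ sig_inv @ ones)   # minimum-variance weights

def mvo_weights(mu):
    """w = wmin + (1/lambda) * sig^-1 * (I - 1*wmin') * mu"""
    I = np.eye(len(mu))
    return wmin + (1.0 / lam) * sig_inv @ (I - np.outer(ones, wmin)) @ mu

print(mvo_weights(u))        # weights from raw expected returns
print(mvo_weights(u - rf))   # identical weights from excess returns
```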

jmh, I see what you are saying. As the RFR goes up, the efficient frontier shifts down, and since the indifference curves are parallel, the same weights would be optimal. I didn’t quite communicate what I was trying to. In my head, when I think of mean-variance optimization, I automatically jump to Sharpe maximization. For a given efficient frontier, you draw a tangent line from the point (vol = 0, return = RFR). Then, because of leverage (borrowing or lending), you get your optimal portfolio. As a result, you really want to maximize Sharpe (given that borrowing and lending are possible). As the RFR goes up, the tangent line changes and the portfolio weights change. The optimal weights become (sig^-1)*(u-rf)/(1’*(sig^-1)*(u-rf)). Those weights are sensitive to the RFR. It’s evident in the example that I showed earlier (RFR changes from 0 to 2%). When you start performing numerical calculations, the condition number shows up. http://en.wikipedia.org/wiki/Condition_number
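To see that sensitivity numerically, here’s a small numpy sketch (toy covariance matrix and expected returns, not real data) computing the tangency weights at two different risk-free rates:

```python
import numpy as np

sig = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])   # toy covariance matrix
u = np.array([0.06, 0.08, 0.10])       # toy expected returns
sig_inv = np.linalg.inv(sig)
ones = np.ones(len(u))

def tangency_weights(mu, rf):
    """w = sig^-1*(u - rf) / (1'*sig^-1*(u - rf)), the max-Sharpe portfolio."""
    raw = sig_inv @ (mu - rf)
    return raw / (ones @ raw)

print(tangency_weights(u, 0.00))  # weights at RFR = 0
print(tangency_weights(u, 0.02))  # noticeably different weights at RFR = 2%
```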

@maratikus I see what you’re saying. I guess the way I do things is different. For instance, I tend to generate expected-return forecasts on excess-return data. If you did the forecast in terms of normal returns, then you’re right, of course. So if I were doing Sharpe-ratio optimization, I would add the risk-free rate back to my mean-return forecast, and the formula in your post would essentially simplify to (sig^-1)*(ux)/(1’*(sig^-1)*(ux)), where ux is my initial excess-return forecast. So in this sense, it wouldn’t matter what the RFR is. Another interesting point is whether the weights from mean-variance optimization are more sensitive to a change in the expected return than the weights from Sharpe-ratio optimization. If I’m not mistaken, the mean-variance weights are more sensitive.

jmh, I see what you are saying. Of course, if you estimate excess returns, you don’t care about the RFR. The initial post was about adjusting returns for the RFR, which is what I was addressing. No disagreements here. Regarding the sensitivity of the weights to the vector of mean returns: that’s a relatively simple calculation (you have a formula for the weights, so just differentiate with respect to u and you get a matrix of sensitivities). It becomes more complicated when you start using Bayesian methods and introduce constraints, but I don’t want to get into that because it would be work-related.
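If you don’t want to grind through the algebra, a finite-difference version of that sensitivity matrix is only a few lines. A rough numpy sketch (same toy covariance and returns as above, using the tangency-weight formula as the formula for the weights):

```python
import numpy as np

sig = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])   # toy covariance matrix
u = np.array([0.06, 0.08, 0.10])       # toy expected returns
rf = 0.02
sig_inv = np.linalg.inv(sig)
ones = np.ones(len(u))

def tangency_weights(mu):
    raw = sig_inv @ (mu - rf)
    return raw / (ones @ raw)

def weight_sensitivities(mu, h=1e-6):
    """Finite-difference Jacobian: entry (i, j) is d w_i / d u_j."""
    n = len(mu)
    J = np.zeros((n, n))
    for j in range(n):
        bump = np.zeros(n)
        bump[j] = h
        J[:, j] = (tangency_weights(mu + bump) - tangency_weights(mu - bump)) / (2 * h)
    return J

print(weight_sensitivities(u))
```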

Remember also that if you are using historical returns, the risk-free rate is seldom constant in reality. The standard deviation in the Sharpe ratio should really be SD(Port_R[i] - Rfr[i]). For equities, this may not be a huge issue because equity volatility is typically so much larger that the contribution of RFR volatility to risk premium volatility is negligible. But for fixed income investments, it may be more important.

I mostly agree, bchadwick, but I think the historical standard deviation of the risk-free rate (in the US at least) is so much lower than that of all but very low-duration fixed-income instruments that it shouldn’t matter (it’s normally something like 0.1% annualized, while long-term bonds might be 8%). It might be better to think about it in terms of the expected volatility of the risk-free asset, rather than the historical fact that the risk-free asset hasn’t had 0 volatility. If you expect the risk-free asset to have 0 volatility, you can exclude it from consideration. There might be times when it doesn’t have 0 volatility, like if the Fed changed rates sharply over the next month or two, when you might want to consider it, but it’s a reasonable assumption.
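To put rough numbers on bchadwick’s point, here’s a toy simulation (simulated, uncorrelated return series, not real data) comparing SD(r) with SD(r - rf) for a high-volatility and a low-volatility asset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 520  # roughly 10 years of weekly observations

rf = 0.0006 + 0.0002 * rng.standard_normal(n)          # "risk-free" series with a little volatility
equity = 0.0015 + 0.0200 * rng.standard_normal(n)      # high-vol equity returns
short_bond = 0.0007 + 0.0010 * rng.standard_normal(n)  # low-vol short-duration bond returns

for name, r in [("equity", equity), ("short bond", short_bond)]:
    # For equities the two numbers are nearly identical; for the low-vol asset
    # the risk-free volatility makes a proportionally bigger difference.
    print(f"{name}: SD(r) = {r.std():.5f}, SD(r - rf) = {(r - rf).std():.5f}")
```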

jmh, have you done any work on estimating betas for international stocks?

Though I work on the international side of my firm, I haven’t really needed to estimate betas on individual international stocks. I recall there being some snags when doing this, such as making sure you use the right risk-free rate for the investor. My group focuses more on global strategy rather than bottom-up research, so I have done the regressions for sectors internationally, but we assume the investor has hedged their currency exposure, so I can just use the US$ returns with the US risk-free rate.

I don’t have experience with that, but I’d like to tackle this problem at some point. I can think of a few problems with calculating betas (or running regressions) on international stocks or indices. One: different settlement times. The daily correlation between the S&P 500 and the DAX would be understated if I used settlement prices. If I only dealt with European and US markets, I could just measure all prices at the same time (for example, 10:30 am Eastern time). However, if I deal with the US, Europe, and Asia, I can’t find an overlapping interval when all markets are open. Two: correctly adjusting for currencies, but that doesn’t seem to be a problem for you since you deal with hedged investments. Three: choice of benchmark for the beta calculations. For example, the S&P 500 is easy to deal with, but it’s probably not the best benchmark. There would be similar problems with calculating the duration of global portfolios. I’d appreciate advice.

I like to do weekly returns to avoid the issue of closing hours; it makes the periods overlap better. The S&P is not so bad, particularly with developed-country benchmarks. I’ve used the MSCI All-Cap World Index, which is 96% correlated to the S&P on a weekly basis. I probably ought to be using currency returns in there too. I haven’t, because I’ve been doing my return analysis as hedged dollar returns, but technically the currency should matter anyway. Since what I write has a word budget, I don’t have the space to go into all the complexities that introduces, but I’ve been thinking about how to include it.
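The mechanics are simple enough; something like this pandas sketch (stock_px and bench_px are placeholder daily price series, nothing specific to my setup):

```python
import pandas as pd

def weekly_beta(stock_px: pd.Series, bench_px: pd.Series) -> float:
    """Beta from weekly returns, which mutes the non-overlapping closing-hours problem."""
    px = pd.concat([stock_px, bench_px], axis=1, keys=["stock", "bench"])
    weekly = px.resample("W-FRI").last().pct_change().dropna()  # Friday-to-Friday returns
    return weekly["stock"].cov(weekly["bench"]) / weekly["bench"].var()
```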

Thanks, bchadwick. Very practical suggestions. Have you been pretty happy with the estimates based on weekly data? Do you manage a long-only or long/short portfolio? I’m asking because if you manage longs and shorts (especially a market-neutral strategy), then the accuracy of beta becomes very important, whereas a long-only portfolio allows for a larger error.

I just do analysis. I would love to manage a tactical asset allocation portfolio, or a sector/country-selection equity portfolio, but since I am a mid-career transitioner, I haven’t had the track record to do that. I would love to talk to / meet people who do, and if it leads to something good, I’d be happy to cut in a referral fee.

For me, I don’t do a fully quantitative process, mostly because I don’t fully trust traditional MVO, and my mathematics isn’t really deep enough to invent my own robust optimizer (the math is within my reach, but I just haven’t had time to go through it fully). I call what I do “quantitative-assisted,” which is where I use quantitative analyses to sanity-check discretionary decisions and to identify things I should look at. I’ll take a look at how an optimizer would design the portfolio, but oftentimes I’ll just use a simple equal-variance weighting.

What I’ve been trying to think through in my off time is how to combine a portfolio that uses an optimization strategy (some variant of MPT) with a portfolio that uses trailing stops. This is because many of my macro ideas are directional, and fundamental views also need some kind of time frame before you can turn them into an expected return; often I have a view on direction without a real price or return target. It’s hard to use MPT if you have a directional view without a return expectation. I think both MPT and many trading methodologies can have value, but they aren’t fully compatible. I think of it as a less extreme version of trying to build a bridge between relativity and quantum mechanics (obviously less extreme, but a similar idea of trying to bridge two seemingly incompatible ways of looking at things). Obviously momentum plus daily rebalancing might be one way to do it, but I still need to work out the details.

maratikus Wrote:
-------------------------------------------------------
> Thanks, bchadwick. Very practical suggestions.
> Have you been pretty happy with the estimates based
> on weekly data?

I forgot to address this… For analysis, this has been fine. For portfolio management, I think it depends on your time horizon and your specific process. I’ve always been a bit skeptical about whether daily data truly gives you better estimates of beta. Sure, the standard errors are smaller, but the chance of picking up something spurious and acting on it seems greater too, particularly if you are doing daily data with thousands of securities. If you find spurious stuff with a smaller standard error, the result will be that the optimizer loads up on it. Oops! So although I haven’t gone and proved it mathematically, I’m still suspicious about whether going to daily data really gives you better information. Obviously if you are data-constrained, sometimes you have no choice but to use it.

If you’re really concerned about that, I guess you could do 2-day returns, which would help with increasing period overlap. It’s always seemed a little odd because the week doesn’t divide evenly into 2-day periods, but that may not be that important in the end.

I tend to do analyses on broad indices that have long histories, so weekly data is generally OK. The big issue is that one might end up spanning a regime change by using a lot of weekly data instead of daily data from a shorter period. On the other hand, you could estimate using daily data, and then the regime change will hit you harder, because you thought your estimates were even more precise… so it’s a toss-up. The main point is to know where your weaknesses are and be sure to use your brain to know if the market is sticking it to you there. And markets have this way of trying out all the different ways to humiliate everyone. I think of it like an immune system, trying to poke and prod until every strategy gets taken down at some point.
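Back to the 2-day-return idea above: mechanically it is just this (a pandas sketch; daily_px is an assumed daily price series):

```python
import pandas as pd

def two_day_returns(daily_px: pd.Series) -> pd.Series:
    """Keep every second trading day's close, then compute the return over each 2-day step."""
    px_2d = daily_px.iloc[::2]
    return px_2d.pct_change().dropna()
```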

maratikus, I forgot the time difference; that’s a big one. I agree with what bchadwick says. Daily data makes sense for trading strategies with 1-5 day horizons, but if you plan on holding longer than that, you don’t get much additional value. Some strategies are better at that horizon, like pairs trading or some other stuff, but you might want to do that analysis with the US data first and then expand it to international. Alternatively, some providers will give you the price at, say, noon around the world or something.

bchadwick, resampling provides some benefits for creating diversified portfolios, but whenever I’ve programmed it, it tends to take a while. You can also try incorporating turnover, transaction-cost, or tracking-error constraints. I have found that the Black-Litterman method is a much better approach than standard mean-variance. The derivation is a bit much, but the actual formula is quite intuitive when you break it down (I hate to admit it, but I nearly have it memorized at this point). One thing I would note is that actual return targets aren’t what is supposed to go into mean-variance optimization; that implies a standard deviation of near 0 and a whole lot of confidence in your view. If you think of it more in terms of the distribution of possible returns over the next week or month, then you won’t worry so much about these sorts of specific views. This is one benefit of the BL approach: by backing out what the market is implying, you have better room to judge what returns are appropriate rather than just pulling numbers out of nowhere. Using some sort of regression model to estimate returns normally gives quite a low R^2, but that’s because the number you estimate is some form of conditional mean and not a specific estimate of what will happen that month. The very fact that these estimates don’t change that much is precisely why they may have value in an optimization; changes in the expected mean that are too large do lead to large deviations. A new paper called the Augmented Black-Litterman model, from Cheung (on SSRN), is a bit mathier, but it lets you incorporate factor views into your portfolio. So if you think the expected return on the S&P 500 is 1.5% instead of 1% in equilibrium, your whole portfolio will change (you can also incorporate idiosyncratic and stock-specific views).
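On the basic BL formula: it really is short enough to write out. Here’s a numpy sketch with toy numbers (the covariance, equilibrium returns, and the single view are all made up; this is just the textbook posterior-mean formula, not anyone’s production implementation):

```python
import numpy as np

def bl_posterior_mean(Pi, Sigma, P, Q, Omega, tau=0.05):
    """Black-Litterman posterior mean:
    E[R] = [(tau*Sigma)^-1 + P'*Omega^-1*P]^-1 * [(tau*Sigma)^-1*Pi + P'*Omega^-1*Q]"""
    ts_inv = np.linalg.inv(tau * Sigma)
    om_inv = np.linalg.inv(Omega)
    A = ts_inv + P.T @ om_inv @ P
    b = ts_inv @ Pi + P.T @ om_inv @ Q
    return np.linalg.solve(A, b)

# Toy example: 3 assets, one absolute view that asset 0 returns 1.5% instead of
# its 1% equilibrium value; the whole expected-return vector shifts in response.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
Pi = np.array([0.010, 0.012, 0.015])   # equilibrium (implied) returns
P = np.array([[1.0, 0.0, 0.0]])        # view picks out asset 0
Q = np.array([0.015])                  # view: asset 0 returns 1.5%
Omega = np.array([[0.0005]])           # uncertainty in the view
print(bl_posterior_mean(Pi, Sigma, P, Q, Omega))
```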

bchadwick and jmh, thanks for your suggestions. I am interested in estimating betas for stress-testing (daily loss) purposes. I understand that it’s better to be approximately right than precisely wrong, but I am still trying to figure out what would work best for me. bchadwick, your approach is very practical. I think you will underperform most quant-driven systems in the long run. jmh, which providers offer prices at a fixed time around the world? Does Bloomberg provide such a service? I’m interested in portfolios of futures, options, and FX. Another question about BL: do you use opinions derived from quant data? Two decent papers on BL and its extensions: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1117574 http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1213325

Ouch, maratikus, telling bchadwick that he will underperform, haha. If you’re doing this strictly for daily loss, then it wouldn’t make much sense to regress Japanese stocks against a US index or factor score or something. This can help avoid the time-inconsistency issue: just do the regressions with the data available for that market at that time. If you’re using the Bloomberg add-in in Excel, you can try to use the import-data wizard for intraday bars to pull data for a specific time. I’m not sure how far back the data goes or about the quality, but I know it’s there. It is fairly common for FX to be quoted at one fixed time around the world; also LIBOR fixings. And maratikus, my group’s approach is different than the approach I would use if I were managing money, but we do use quantitative models to inform our strategy (though it is not the only input).

jmh530 Wrote:
-------------------------------------------------------
> Ouch, maratikus, telling bchadwick that he will
> underperform, haha.

bchadwick, I’m so sorry. I meant the opposite. I guess I changed the sentence from the quant approaches underperforming you to you outperforming quant approaches and messed up in the middle. My apologies! I truly believe that you will outperform quant methods because your approach is very solid.

jmh530 Wrote:
-------------------------------------------------------
> If you’re doing this strictly for daily loss, then
> it wouldn’t make much sense to regress Japanese
> stocks against a US index or factor score or
> something. This can help avoid the time-inconsistency
> issue: just do the regressions with the data
> available for that market at that time.

Let’s say I invest in a market-neutral manager who is beta-neutral and trades US vs. Japan. I am not very concerned about his position if markets move 10%. However, if I calculated his beta using daily settlement prices, I would be way off. What do you mean by doing the regression with the data available for that market at that time? Since I work with portfolios of thousands of different instruments, I have to automate the process.

> If you’re using the Bloomberg add-in in Excel, you
> can try to use the import-data wizard for intraday
> bars to pull data for a specific time. I’m not
> sure how far back the data goes or about the quality,
> but I know it’s there. It is fairly common for FX
> to be quoted at one fixed time around the world;
> also LIBOR fixings.

Thanks for your suggestions. I will explore that option.

> And maratikus, my group’s approach is different
> than the approach I would use if I were managing
> money, but we do use quantitative models to inform
> our strategy (though it is not the only input).

Quant-driven opinions are an interesting topic. It would be fun to talk about that some other time.

maratikus Wrote:
-------------------------------------------------------
> jmh530 Wrote:
> -------------------------------------------------------
> > Ouch, maratikus, telling bchadwick that he will
> > underperform, haha.
>
> bchadwick, I’m so sorry. I meant the opposite. I
> guess I changed the sentence from the quant
> approaches underperforming you to you
> outperforming quant approaches and messed up in
> the middle. My apologies! I truly believe that
> you will outperform quant methods because your
> approach is very solid.

No problem. I could tell that “it’s so practical you will underperform” didn’t quite fit. Also, I saw your other post, so I came prepared. By the way, this is one of the more interesting threads we’ve had in a long time. It’s great to have threads that make you revisit your assumptions and think… that’s how process improvements happen.