Standard deviation vs VAR

Question says: standard deviation is more useful than VAR in evaluating new managers and new portfolio strategies. Correct or incorrect?

Answer: standard deviation typically requires several years of the manager's return history before it is available, which limits its use in determining the effectiveness of new managers and strategies.

Can someone please explain why standard deviation requires several years of the manager's return history before it is available?

If you want to use the standard deviation of returns as the measure of the new manager's risk, then you need to have data for it.

Since they are new, you cannot derive a standard deviation from returns that never happened.

However, I do not think VAR is appropriate either, for the same reasons, unless you somehow estimate the standard deviation, either through the holdings' history or peer comparison, but this isn't mentioned in the material AFAIR.

Exactly, how can VAR be more effective when it also requires previous returns?

VaR can be calculated numerous ways, such as the variance-covariance method, historical, and Monte Carlo, which require no history.

They all do.

The difference is that historical methods use the historical distribution, while the rest still need standard deviation (or some other measure of dispersion) as an input.
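To make that concrete, here is a minimal sketch (not from the curriculum, and the returns are made up): both VaR flavours below start from the same return history; the parametric one just compresses it into a mean and a standard deviation first.

```python
# Rough illustration only: simulated monthly returns stand in for a real track record.
import numpy as np

rng = np.random.default_rng(0)
monthly_returns = rng.normal(0.006, 0.03, size=120)  # 10 years of hypothetical monthly returns

# Parametric (variance-covariance) 5% VaR: needs mean and std dev as inputs
mu, sigma = monthly_returns.mean(), monthly_returns.std(ddof=1)
z = 1.645  # one-tailed 5% z-score
parametric_var = -(mu - z * sigma)  # expressed as a positive loss, in return terms

# Historical 5% VaR: needs the full historical distribution
historical_var = -np.percentile(monthly_returns, 5)

print(f"Parametric 5% VaR: {parametric_var:.2%}")
print(f"Historical 5% VaR: {historical_var:.2%}")
```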

Hope someone can clear things up better than I can.

It's a very strange question.

If I were selling a 60/40 strategy as a new manager, then I could be asked:

  1. What is the standard deviation of your strategy?

I would take the last 25 years of MSCI FI & EQ data, calculate the correlation, and then calculate the expected std. dev. (see the sketch below).

  2. What is the value at risk?

Take the same data, pick out the worst 5% of months, and state the VaR.
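For what it's worth, here is a rough sketch of those two steps; the weights, vols, correlation, and the stand-in for the MSCI series are all made-up assumptions, not real data.

```python
import numpy as np

w_eq, w_fi = 0.60, 0.40
sigma_eq, sigma_fi = 0.15, 0.05      # assumed annualized std devs
rho = 0.20                           # assumed EQ/FI correlation

# Expected std dev of the 60/40 strategy from the two-asset formula
port_sigma = np.sqrt((w_eq * sigma_eq) ** 2
                     + (w_fi * sigma_fi) ** 2
                     + 2 * w_eq * w_fi * sigma_eq * sigma_fi * rho)

# Historical VaR: simulate a stand-in for 25 years of monthly 60/40 returns,
# then read off the 5th percentile (the worst 5% of months)
rng = np.random.default_rng(1)
monthly_6040 = rng.normal(0.005, port_sigma / np.sqrt(12), size=300)
var_5pct = -np.percentile(monthly_6040, 5)

print(f"Expected annualized std dev: {port_sigma:.2%}")
print(f"5% monthly historical VaR:   {var_5pct:.2%}")
```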

What I think the question could be asking is whether, rather than being NEW new, the manager is 3-5 years new, so maybe VaR is more meaningful than std dev?

Any other explanations as to why VAR is more useful than standard deviation in evaluating new managers and new portfolio strategies? I don't see why standard deviation requires several years of the manager's return history before it is available.

What happened to all the Quant you studied in Levels 1 and 2?

How is standard deviation calculated? Can you go back to basics and then decipher whether you are on the right track in what you state above?
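Back to basics, then: a minimal sketch of the sample standard deviation of a return series, just to underline that it cannot be computed without realized returns to plug in (the returns below are hypothetical).

```python
import numpy as np

def sample_std(returns):
    """Sample standard deviation: s = sqrt( sum((r_i - mean)^2) / (n - 1) )."""
    returns = np.asarray(returns)
    mean = returns.mean()
    return np.sqrt(((returns - mean) ** 2).sum() / (len(returns) - 1))

print(sample_std([0.02, -0.01, 0.03, 0.01]))  # made-up monthly returns
```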

That still doesn't answer the OP: how does VAR measure risk better than SD for a new portfolio manager?

My answer is now 'VaR position limits'.

i.e., the manager has no history, and we have little clue what they will actually do, so the only choice is notional limits, actual $ limits, or VaR limits.

…but then VaR limits come with a huge caveat: they are only as good as the calculation method.

Any other answers as to why VAR is a better measure of risk than standard deviation in evaluating new managers and new portfolio strategies?

This is my reason: it's similar to saying NPV is better than IRR. Apart from the multiple benefits of NPV, it gives you a value, unlike a percentage like IRR. Similarly, VaR gives you a loss as the result rather than a % like SD. It's an absolute number, which makes it a better comparable than a % figure. Check out the net for why percentages may not be the right comparison metric! Hope this answer is satisfactory.
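A tiny sketch of that point, with an assumed portfolio value and made-up return stats: the SD stays a percentage, while a (parametric) VaR comes out as a currency loss.

```python
portfolio_value = 10_000_000   # assumed $10m portfolio
mu, sigma = 0.005, 0.03        # assumed monthly mean and std dev (as decimals)
z = 1.645                      # 5% one-tailed z-score

sd_as_pct = sigma                                     # still just a percentage
var_in_dollars = portfolio_value * (z * sigma - mu)   # a concrete dollar loss

print(f"Std dev:        {sd_as_pct:.1%}")
print(f"5% monthly VaR: ${var_in_dollars:,.0f}")
```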

But why does standard deviation typically require several years of the manager's return history, which limits its use in determining the effectiveness of new managers and strategies? VAR also requires past returns.

I think a key point here is that VAR is typically used for short-term periods of risk analysis. While you can calculate annual VAR, most of the time it is used monthly, weekly, or daily. I don't know where the question is from and can't reference the answer key, but that would be my best guess, and probably what I would answer.

I too thought about that but realized later that you still need the standard deviation input. You might have an annualized SD input which you convert to a daily, weekly, etc. figure. And taking the SD of little data (without a sufficient sample) is sort of misleading.
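The conversion I mean is just the square-root-of-time rule (which itself assumes i.i.d. returns, so it is a simplification); something like this, with an assumed annualized figure:

```python
import numpy as np

annual_sigma = 0.12  # assumed 12% annualized std dev

monthly_sigma = annual_sigma / np.sqrt(12)
weekly_sigma  = annual_sigma / np.sqrt(52)
daily_sigma   = annual_sigma / np.sqrt(252)

print(f"monthly: {monthly_sigma:.2%}, weekly: {weekly_sigma:.2%}, daily: {daily_sigma:.2%}")
```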

You also do not evaluate a manager based on the volatility (std deviation) of the returns produced. You evaluate him based on whether he generates profits/losses, and for that VAR is the right number.

Std deviation is a % number, while VAR is easily convertible to a $ profit/loss.

This thread has dragged too long with no real answers to the OP.

Both VAR and SD are useless in measuring new manager performance.

Unless you want to measure expected performance based on a certain strategy, but that wasn’t the question.

In theory, to calculate an analytical VaR you don't actually need any money manager performance history; all you need is price history for the assets currently held in the portfolio. Analytical VaR is based on historical prices of the individual securities held in a portfolio at a point in time. On day one of a new strategy, if the manager buys some equities and holds them for a second, a parametric/analytical VaR can easily be calculated, whereas the standard deviation of money manager returns cannot, as there is no manager performance history. In terms of VaR, only a historical VaR requires a money manager track record to exist before it can be calculated. Even a Monte Carlo simulated VaR can be calculated without a money manager track record.
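A quick sketch of that "day one" analytical VaR, using simulated price history and hypothetical weights in place of the real securities the manager just bought:

```python
import numpy as np

rng = np.random.default_rng(2)
# Pretend daily price history for the three securities currently held
prices = np.cumprod(1 + rng.normal(0.0004, 0.012, size=(500, 3)), axis=0) * 100
weights = np.array([0.5, 0.3, 0.2])              # current portfolio weights (assumed)

daily_returns = prices[1:] / prices[:-1] - 1
cov = np.cov(daily_returns, rowvar=False)        # covariance of the holdings
port_sigma = np.sqrt(weights @ cov @ weights)    # portfolio daily std dev
port_mu = daily_returns.mean(axis=0) @ weights

z = 1.645
analytical_var = -(port_mu - z * port_sigma)     # 5% one-day VaR in return terms
print(f"Day-one analytical 5% VaR: {analytical_var:.2%}")
```

No manager track record appears anywhere above, only the price history of what is held right now.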

So this is surely why VaR is more useful than standard deviation when assessing new money managers.

The exact same methods can also apply to standard deviation; VaR just quantifies the risk in terms of dollars.

The answer given in the OP ("standard deviation typically requires several years before the manager's return history is available, which limits its use in determining the effectiveness of new managers and strategies") states they are concerning themselves with the standard deviation of the manager's returns, not the standard deviation of the portfolio holdings, which are two very different things!