An investor using a normal distribution to estimate VaR will underestimate it at low levels of significance, but will overestimate it at high levels of significance if the return distribution is leptokurtic.
Why is that so? Does anyone get it?
Leptokurtic = higher peak and thinner tails, so a smaller standard-deviation multiple out at the tails (when the level of significance is high). And since you compute VaR as mean − k × std dev, the VaR number would come out higher than under a standard normal distribution.
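As a minimal sketch of that formula (the mean, volatility, and quantile multiples below are placeholder numbers, not from the thread):

```python
# Parametric VaR under a normal assumption: the cutoff return is
# mean - k * sigma; VaR is often quoted as the loss, -(mean - k * sigma).
def var_cutoff(mean, sigma, k):
    return mean - k * sigma

mu, sigma = 0.001, 0.02              # hypothetical daily mean and volatility
print(var_cutoff(mu, sigma, 1.645))  # 5% normal multiple: ~ -0.0319 (-3.19%)
print(var_cutoff(mu, sigma, 1.561))  # smaller multiple -> higher (less negative) cutoff
```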
Fixed that for you.
It’s in the midrange – say, from +1σ to +2.5σ, and from −1σ to −2.5σ – that a leptokurtic distribution is lower than a normal distribution.
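To put numbers on that, here is a minimal sketch (my own illustration, not from the curriculum) using a Student-t with 5 degrees of freedom, rescaled to unit variance, as the leptokurtic stand-in:

```python
from scipy import stats

df = 5                                  # t with 5 df is leptokurtic
scale = ((df - 2) / df) ** 0.5          # rescale so its variance is 1, like N(0,1)

for alpha in (0.01, 0.05, 0.10, 0.15):  # significance levels
    z_norm = stats.norm.ppf(alpha)            # standard normal quantile
    z_fat = stats.t.ppf(alpha, df) * scale    # unit-variance t quantile
    print(f"alpha={alpha:.0%}  normal={z_norm:+.3f}  leptokurtic={z_fat:+.3f}")
```

With these numbers the crossover sits between the 1% and 5% levels: at 1% the leptokurtic quantile is more extreme (about −2.61 vs −2.33), so the normal underestimates VaR; at 5%, 10%, and 15% it is less extreme, so the normal overestimates. Where exactly the crossover falls depends on how fat the tails are.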
Does anyone get it? No, not me.
Fat tails or thin tails, the two distributions always cross over somewhere, so you can't tell in general which side of the crossover the 5% (or 10%, or 1%) significance level falls on.
(I don't see this in the CFAI texts, by the way.)
But at lower levels of confidence, say 85%, VaR is lower for the distribution with the heavier tails. Why is it an overestimate then?
Words are overrated. This helps me:
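A minimal sketch to draw that comparison yourself (assuming, as above, a unit-variance Student-t with 5 degrees of freedom as the leptokurtic example):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

df = 5
scale = np.sqrt((df - 2) / df)      # rescale the t to unit variance
x = np.linspace(-4, 4, 401)

plt.plot(x, stats.norm.pdf(x), label="normal")
# density of scale * T, where T ~ t(df): pdf(x / scale) / scale
plt.plot(x, stats.t.pdf(x / scale, df) / scale, label="leptokurtic (t, 5 df)")
plt.axvspan(-2.5, -1.0, alpha=0.15)  # midrange where the leptokurtic curve sits lower
plt.legend()
plt.show()
```

The shaded band marks the roughly 1σ to 2.5σ midrange from the earlier post: the leptokurtic density sits below the normal there, and only overtakes it again deeper in the tail.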