Reducing Sample Size by 1 When Computing Variance

Hi all

I ran into this question today

A CFA Level I practice question (12/2014), Study Session 12, Reading 43: http://analystnotes.com/cfa/cfa_exam_prep.php?page=0zt2b6dAEB

My question for you is one of basic statistics.

Why do we reduce the sample size by 1 when computing the variance?

It has to do with degrees of freedom: you’re using the sample mean in the calculation, not the population mean, so you lose one degree of freedom when you compute (and then use) the sample mean.
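To make the degrees-of-freedom point concrete, here's a small Python sketch (the sample values are made up, purely for illustration): once the sample mean has been computed from the data, the deviations from it are forced to sum to zero, so only n – 1 of them are free to vary.

```python
# Hypothetical sample values, purely for illustration.
sample = [4.2, 5.1, 3.8, 6.0, 4.9]
n = len(sample)
mean = sum(sample) / n

# The deviations from the sample mean always sum to (essentially) zero,
# so knowing any n - 1 of them pins down the last one: one degree of
# freedom is used up by the sample mean itself.
deviations = [x - mean for x in sample]
print(sum(deviations))          # ~0, up to floating-point rounding

# The sample variance therefore divides by n - 1, not n.
sample_variance = sum(d ** 2 for d in deviations) / (n - 1)
print(sample_variance)
```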

I wrote an article on this that may be of some help: http://financialexamhelp123.com/sample-standard-deviation/.

Thanks, S2000magician!

A population has extremely large and small values that are less likely to show up in any given sample (for bell-like distributions, anyway). So the sample you collect will usually cluster more tightly around its mean, which underestimates the SD and variance.
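Here is a quick simulation sketch of that underestimation (the normal population, its parameters, and the sample size are just assumptions for illustration): deviations measured from the sample mean come out smaller, on average, than deviations measured from the true population mean, because the sample mean always sits in the middle of the particular sample you drew.

```python
import random

random.seed(1)
mu, sigma, n, trials = 0.0, 2.0, 5, 100_000   # assumed population and sample size

sum_sq_from_xbar = 0.0   # squared deviations from the sample mean
sum_sq_from_mu = 0.0     # squared deviations from the true population mean

for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    sum_sq_from_xbar += sum((x - xbar) ** 2 for x in xs)
    sum_sq_from_mu += sum((x - mu) ** 2 for x in xs)

# The averages come out near (n - 1) * sigma^2 = 16 and n * sigma^2 = 20,
# which is exactly the shortfall the n - 1 divisor compensates for.
print(sum_sq_from_xbar / trials)
print(sum_sq_from_mu / trials)
```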

What I don’t know is whether n – 1 is derived from some mathematical principle or whether it’s just a makeshift adjustment everyone uses to ‘fix’ the bias… anyone know?

It is derived from a mathematical principle: dividing by n – 1 ensures that the expected value of the sample variance equals the population variance. (Its square root, the sample standard deviation, still carries a small residual bias, but the variance estimator itself is exactly unbiased.)
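For anyone who wants the principle spelled out, here is a short derivation sketch of Bessel's correction, assuming the observations are i.i.d. draws with mean μ and variance σ²:

```latex
% Why dividing by n - 1 is unbiased (Bessel's correction), assuming
% X_1, ..., X_n are i.i.d. with mean mu and variance sigma^2.
% The first equality uses the identity
%   sum_i (X_i - Xbar)^2 = sum_i (X_i - mu)^2 - n (Xbar - mu)^2,
% and the second uses Var(Xbar) = sigma^2 / n.
\[
\mathbb{E}\!\left[\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^{2}\right]
  = \sum_{i=1}^{n}\mathbb{E}\!\left[(X_i-\mu)^{2}\right]
    - n\,\mathbb{E}\!\left[(\bar{X}-\mu)^{2}\right]
  = n\sigma^{2} - n\cdot\frac{\sigma^{2}}{n}
  = (n-1)\,\sigma^{2},
\]
\[
\text{so}\quad
\mathbb{E}\!\left[s^{2}\right]
  = \mathbb{E}\!\left[\frac{1}{n-1}\sum_{i=1}^{n}\bigl(X_i-\bar{X}\bigr)^{2}\right]
  = \sigma^{2}.
\]
```

In other words, the n – 1 divisor isn't a fudge factor: it is exactly the correction that makes the expected value of the sample variance come out equal to the population variance.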

Great, thanks :]

My pleasure.