Confidence interval formula

Can anyone tell me when to use mean ± 1.65 * standard deviation and when to use mean ± 1.65 * (standard deviation / √n)?

Thanks for your help.

The former creates a confidence interval for a single item (e.g., next month’s return on a bond fund); the latter creates a confidence interval for the mean.

I wrote an article on confidence intervals (http://www.financialexamhelp123.com/confidence-intervals/) that covers this and a whole lot more.

(Note: as of April 25, 2016, there is a charge to read the articles on my website.)

^=/

For exam purposes, I would think of it as magician said: the first one is for a single observation (although it's not a real confidence interval). That said, when making an interval as mean +/- 1.65*S, I prefer to say it's describing the sampled data, assuming the data are approximately normal. Here's how that works.

Pretend we are talking about the price of a candy bar, where x-bar = 2 dollars and S = 1 dollar.

—> Our interval is (2 - 1.65, 2 + 1.65), which simplifies to (0.35, 3.65). What does this mean? Since we used +/- 1.65, the endpoints correspond to the 5th and 95th percentiles, respectively; that is, 90% of the distribution falls within our interval (which isn't necessarily true for the confidence interval mentioned later). We can say that about 90% of the sampled data fall within 1.65 dollars of the mean (or, equivalently, that 90% of the sample data fall between 0.35 dollars and 3.65 dollars). They (CFAI, that is; I would never…) might ask you to calculate a 90% CI (think of it like a prediction interval) for the price of a single candy bar (or a stock's return for a year, in the case of finance).
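The arithmetic above can be sketched in a few lines. This uses the candy-bar numbers from the post (x-bar = 2 dollars, S = 1 dollar); the variable names are just for illustration:

```python
# Descriptive interval for a single observation: x-bar +/- z * S.
x_bar = 2.0   # sample mean (dollars)
s = 1.0       # sample standard deviation (dollars)
z = 1.65      # critical value for a 90% interval (5th/95th percentiles)

lower = x_bar - z * s
upper = x_bar + z * s
print(round(lower, 2), round(upper, 2))  # 0.35 3.65
```

Note that this interval uses S directly, not S/√n, because it describes where individual observations fall.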

The second calculation, x-bar +/- 1.65*(S/[n^0.5]), is a confidence interval for the mean. It's most commonly used to make an inference about the value of the true mean. When they want you to make a statement about the population mean, remember that you need the point estimate of the true mean, which is the sample mean (x-bar), and a measure of variability for the sample mean, which is the standard error of the mean (S/[n^0.5]).
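For contrast, here is the same candy-bar example with the second formula. The sample size n = 100 is an assumed value for illustration (the post doesn't give one):

```python
import math

# Confidence interval for the MEAN: x-bar +/- z * (S / sqrt(n)).
x_bar = 2.0   # sample mean (dollars)
s = 1.0       # sample standard deviation (dollars)
n = 100       # assumed sample size for illustration
z = 1.65      # critical value for a 90% confidence interval

se = s / math.sqrt(n)   # standard error of the mean = S / n^0.5
lower = x_bar - z * se
upper = x_bar + z * se
print(round(lower, 3), round(upper, 3))  # 1.835 2.165
```

Notice how much narrower this interval is than the single-observation one: dividing by √n shrinks the variability, because the sample mean is a far more stable quantity than any individual observation.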

Essentially, you can recognize which formula is needed based on whether they want you to make a statement about the population mean. If they want a statement or inference about the population mean, make sure you use the standard error of the mean (S/[n^0.5]). Otherwise, just use S (with the appropriate critical value, of course).

Hope this helps!

Edited for additional clarity. I don't really consider the first interval a CI/PI (and I haven't seen any real statistics books that do, either), but I've seen the Institute phrase it in a way that blurs the line between a CI and the more descriptive kind of interval discussed first (keep in mind they are a lightweight authority on anything statistical compared with a true statistics source). S2000 has a good example just below of what I mean when I say it's an interval to describe the data rather than to make an inference. If you know the true mean and true standard deviation, you can make a descriptive statement about the population, though. With sample information, it describes the sample unless you're using the estimated standard error (at least as far as I've seen; look up the empirical rule and Chebyshev's rule to see more on how mean +/- 2S is a descriptive method).

To amplify what tickersu (who knows a heck of a lot more about statistics than I do) wrote, the first formula is also used when constructing an interval for a given percentage of the population. So, if they ask you for an interval in which you expect 95% of the data points to fall, use the first formula.
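A quick sketch of that last point, reusing the same candy-bar numbers from above for illustration. The only change from the 90% case is the critical value, 1.96 instead of 1.65:

```python
# Interval expected to contain ~95% of the data points:
# first formula (plain S, no sqrt(n)) with z = 1.96 (2.5th/97.5th percentiles).
x_bar = 2.0   # sample mean (dollars)
s = 1.0       # sample standard deviation (dollars)
z = 1.96      # critical value for 95%

lower = x_bar - z * s
upper = x_bar + z * s
print(round(lower, 2), round(upper, 2))  # 0.04 3.96
```

The key cue is the phrase "95% of the data points": that asks where individual observations fall, so S is used directly rather than the standard error S/√n.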