Hello all,
I’ve been reviewing quant and I can’t seem to get this right: when do we use the standard error vs. the standard deviation when calculating confidence intervals?
That is, Xbar ± SE * critical value vs. Xbar ± SD * critical value.
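To make it concrete, here’s roughly what I mean (a quick sketch with made-up numbers; I’m only using scipy to pull the t critical value):

```python
# Hypothetical sample: n = 25 monthly returns, mean 1.0%, sample SD 4.0%
from scipy import stats

n = 25
xbar = 0.01        # sample mean return
s = 0.04           # sample standard deviation
se = s / n**0.5    # standard error of the sample mean

t_crit = stats.t.ppf(0.975, df=n - 1)   # two-tailed 95% critical value

# Interval built with the standard error (Xbar ± SE * critical value)
interval_se = (xbar - t_crit * se, xbar + t_crit * se)

# Interval built with the standard deviation (Xbar ± SD * critical value)
interval_sd = (xbar - t_crit * s, xbar + t_crit * s)

print(interval_se)  # narrow interval
print(interval_sd)  # much wider interval
```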
Another question on this: sometimes confidence intervals seem to be based purely on critical values, and other times on SE * critical value or SD * critical value. When does each apply?
I have been reading financial exam help, and a distinction is drawn there. Are we only using the SE when we are estimating where the population mean will lie? If so, how does the above work with hypothesis testing?
Edit: This is making sense now. Use the SE when estimating where the population mean will lie, and use the SD when we want the range within which a single observation will lie (e.g., a 90% range for one period’s return).
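For the hypothesis-testing part, my understanding (again just a sketch with the same made-up numbers) is that the test statistic also uses the SE, since we’re testing a claim about the population mean:

```python
# One-sample t-test on the same hypothetical sample:
# H0: population mean return = 0 vs. Ha: it isn't
from scipy import stats

n, xbar, s = 25, 0.01, 0.04
se = s / n**0.5                 # standard error, same as in the CI for the mean

t_stat = (xbar - 0.0) / se      # hypothesized mean under H0 is 0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

print(t_stat, p_value)          # reject H0 if |t_stat| > critical value (or p < alpha)
```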
Thank you!
*I searched and there was some discussion about knowing the population SD and so on, but that didn’t match what I found in my readings.