The first is used for a confidence interval for an arbitrary (single) observation (e.g., a confidence interval for next month’s return on a mutual fund). It is also used for an interval expected to contain a given percentage of all observations.
The second is used for a confidence interval for the mean of a population.
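To make the distinction concrete, here is a rough Python sketch with made-up numbers (a hypothetical sample of monthly returns, assuming approximate normality). The key difference is that the first interval uses the standard deviation of the observations themselves, while the second uses the standard error of the mean, so it is much narrower:

```python
import math
from scipy import stats

# Hypothetical inputs (illustration only): sample mean, sample std dev, sample size
x_bar, s, n = 0.8, 5.0, 36           # e.g., monthly fund returns in %
z = stats.norm.ppf(0.975)            # two-sided 95% critical value

# (1) Interval expected to contain a single future observation:
#     uses the standard deviation of the observations themselves.
single_lo = x_bar - z * s
single_hi = x_bar + z * s

# (2) Confidence interval for the population mean:
#     uses the standard error, s / sqrt(n).
se = s / math.sqrt(n)
mean_lo = x_bar - z * se
mean_hi = x_bar + z * se

print(f"Interval for one observation: ({single_lo:.2f}, {single_hi:.2f})")
print(f"CI for the mean:              ({mean_lo:.2f}, {mean_hi:.2f})")
```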
Adding to this, you can (more or less) think of a confidence interval as the set of (null hypothesis) values for which we would fail to reject the null hypothesis at a given level of significance. The two approaches will always give you the same answer, which further illustrates how confidence intervals and hypothesis tests are interconnected.
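Here is a quick sketch of that duality using simulated (made-up) data: null values that fall inside the 95% confidence interval give two-sided p-values above 0.05, and values outside give p-values below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=1.0, scale=2.0, size=40)   # made-up sample

# 95% t-based confidence interval for the mean
n = len(x)
x_bar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (x_bar - t_crit * se, x_bar + t_crit * se)
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")

# Test several null-hypothesis values for the mean:
# values inside the CI fail to be rejected at the 5% level (p > 0.05),
# values outside are rejected (p < 0.05).
for mu0 in (0.0, ci[0] + 0.01, x_bar, ci[1] - 0.01, 3.0):
    p = stats.ttest_1samp(x, popmean=mu0).pvalue
    inside = ci[0] <= mu0 <= ci[1]
    print(f"H0: mu = {mu0:6.3f}   p = {p:.4f}   inside CI: {inside}")
```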
I wasn’t going to put too fine a point on this, but since it’s gotten follow-up, I will.
The Z- and T-distributions are never exactly the same for any finite sample size (let me know when you get to infinity), but the T-distribution becomes essentially indistinguishable from the Z-distribution for a large enough sample size.
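One quick way to see this is to compare the two-sided 95% critical values: the t critical value creeps toward 1.96 as the degrees of freedom grow, but never quite gets there for any finite df.

```python
from scipy import stats

z = stats.norm.ppf(0.975)   # two-sided 95% critical value for the normal
print(f"z: {z:.4f}")

# t critical values approach z as degrees of freedom grow,
# but never reach it exactly for any finite df.
for df in (5, 29, 30, 120, 1000, 100000):
    t = stats.t.ppf(0.975, df)
    print(f"df = {df:>6}: t = {t:.4f}   (t - z = {t - z:.4f})")
```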
The rule of thumb of 30 for “large” is purely that-- a rule of thumb. It’s an idea for when you might be able to rely on the central limit theorem (you should see this in the L1 curriculum, I believe). There is no exact and 100% agreed-upon number for a “large” sample (in this context), but people who have studied the topic have noticed that 30 seems to be “good enough”. A sample size of 31 isn’t necessarily any better than a sample size of 30, and a sample of 29 isn’t necessarily any worse than a sample of 30.
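If you want to convince yourself there is nothing magic about 30, a rough simulation like the one below (the exponential population is an arbitrary choice of a skewed distribution) shows that the coverage of a nominal 95% t-interval changes smoothly as you go from 29 to 31 observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean = 1.0            # mean of an exponential(scale=1) population (skewed)
n_sims = 20_000

# Estimate empirical coverage of a 95% t-interval for n = 29, 30, 31:
# there is no jump at 30 -- coverage changes smoothly with n.
for n in (29, 30, 31):
    samples = rng.exponential(scale=1.0, size=(n_sims, n))
    means = samples.mean(axis=1)
    ses = samples.std(axis=1, ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    covered = np.abs(means - true_mean) <= t_crit * ses
    print(f"n = {n}: empirical coverage = {covered.mean():.3f}")
```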
I’m not sure the CFAI book says that 30 observations constitute a large sample; I would think that isn’t true at all. I agree with Tickersu that 30 observations is about the minimum sample size to use if you want to avoid distorting the results with too small a sample. As far as I know, 30 observations are enough, but with fewer than 30 you are in a warning zone, and above 30 you are increasing the efficiency and consistency of the estimators, so you are good.
What I wanted to say in my previous post is that the T-value and Z-value for a sample of 120 are very close.
I’m going to further clarify-- 30 is a rule of thumb that you will see very frequently (including in statistical texts), but 30 isn’t a magic number. Many (authoritative) sources agree on ~30, which is why it’s a rule of thumb, but there is no hard and fast rule that 30 is good and 29 is suddenly bad.
As I said, 25 can be sufficiently “large”, but it might not be in some cases. If you’re using a smaller sample you might have to make extra assumptions (such as those required for a t-distribution), since you can’t rely on the central limit theorem.
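As a sketch of why the count alone doesn’t settle it, here are two hypothetical populations with the same n = 25: a symmetric one where a nominal 95% t-interval covers about as advertised, and a heavily skewed one where it falls noticeably short. Both populations are made up purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims = 25, 20_000
t_crit = stats.t.ppf(0.975, df=n - 1)

# Two hypothetical populations, same sample size n = 25:
# a symmetric one (coverage close to 95%) and a heavily skewed one
# (coverage falls short). "Large enough" depends on the population
# shape, not just on the count.
populations = {
    "uniform (symmetric)":    (lambda size: rng.uniform(0, 1, size), 0.5),
    "lognormal (heavy skew)": (lambda size: rng.lognormal(0.0, 1.5, size),
                               float(np.exp(1.5**2 / 2))),
}

for name, (draw, true_mean) in populations.items():
    samples = draw((n_sims, n))
    means = samples.mean(axis=1)
    ses = samples.std(axis=1, ddof=1) / np.sqrt(n)
    covered = np.abs(means - true_mean) <= t_crit * ses
    print(f"{name:24s} coverage = {covered.mean():.3f}")
```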
I agree with that; some time-series analyses don’t have much more than 25 or 30 observations and are indeed good enough; however, that is not a general rule. This is why I don’t think the CFAI could state that 30 observations are considered a large sample without clarifying the analysis context.