How to Calculate a Confidence Interval
The core idea behind a confidence interval is surprisingly simple: take your sample estimate, then add and subtract a buffer that accounts for sampling uncertainty. That buffer — the margin of error — depends on how much variability is in your data, how many observations you collected, and how confident you want to be.
For a population mean, the formula is x̄ ± z* × (s / √n), where x̄ is the sample mean, s is the sample standard deviation, n is the sample size, and z* is the critical value for your chosen confidence level. When your sample is small (fewer than about 30 observations), swap z* for t* from the t-distribution — it produces wider intervals that honestly reflect the extra uncertainty from estimating the standard deviation with limited data.
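The formula above can be sketched in a few lines of Python using only the standard library (the sample numbers here are made up purely for illustration):

```python
from math import sqrt
from statistics import NormalDist

def mean_ci(xbar, s, n, confidence=0.95):
    """z-based confidence interval for a mean: xbar ± z* · s/√n."""
    # Two-sided critical value: the z that leaves (1 - confidence)/2 in each tail
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    margin = z_star * s / sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: mean 50, sample standard deviation 10, n = 100
lo, hi = mean_ci(50, 10, 100)
print(lo, hi)  # roughly 48.04 and 51.96
```

With s = 10 and n = 100 the standard error is exactly 1, so the margin of error equals z* itself: about 1.96 at 95% confidence.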
For a proportion, the setup changes slightly. The formula becomes p̂ ± z* × √(p̂(1 − p̂) / n), where p̂ is the sample proportion. This version works well when both np̂ and n(1 − p̂) exceed 10 — below that threshold, the normal approximation gets shaky and you should consider exact binomial methods instead.
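The proportion version (often called the Wald interval) translates just as directly; this sketch uses a made-up poll result:

```python
from math import sqrt
from statistics import NormalDist

def proportion_ci(p_hat, n, confidence=0.95):
    """Wald interval for a proportion: p_hat ± z* · sqrt(p_hat(1 - p_hat)/n)."""
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    margin = z_star * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical poll: 52% support among 1,000 respondents
lo, hi = proportion_ci(0.52, 1000)
print(lo, hi)  # about 0.489 and 0.551
```

Note that both n·p̂ = 520 and n·(1 − p̂) = 480 comfortably exceed 10 here, so the normal approximation is on safe ground.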
This calculator handles both modes automatically. Enter your sample statistics, pick a confidence level, and the tool computes the interval along with a step-by-step breakdown showing every intermediate value so you can follow the math or check it against your own work.
Choosing the Right Confidence Level
The confidence level you pick creates a direct tradeoff between how sure you want to be and how precise your interval ends up being. Higher confidence means a wider interval — you are casting a bigger net to make sure you catch the true value. Lower confidence gives a tighter range but increases the chance you miss.
| Confidence Level | z* Critical Value | Typical Use Cases |
|---|---|---|
| 90% | 1.645 | Exploratory research, pilot studies, quick estimates |
| 95% | 1.960 | Standard for most published research and polling |
| 99% | 2.576 | Medical trials, safety-critical engineering, regulatory work |
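The z* column can be reproduced from the standard normal quantile function; a quick check in Python:

```python
from statistics import NormalDist

for level in (0.90, 0.95, 0.99):
    # Two-sided critical value: leaves (1 - level)/2 probability in each tail
    z_star = NormalDist().inv_cdf((1 + level) / 2)
    print(f"{level:.0%}: z* = {z_star:.3f}")
# 90%: z* = 1.645
# 95%: z* = 1.960
# 99%: z* = 2.576
```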
Most coursework and journal articles default to 95% because it has become the de facto standard over decades of use — not because 95% has any special mathematical property. The real question is always about consequences: if being wrong costs lives or millions of dollars, move to 99%. If you are running a quick A/B test to decide which button color to try next week, 90% is perfectly reasonable.
Confidence Interval vs. Margin of Error
People use these terms interchangeably in casual conversation, but they mean different things. The margin of error is a single number — it is the distance from your sample estimate to either edge of the interval. The confidence interval is the full range: estimate minus margin of error to estimate plus margin of error.
When a news report says a candidate polls at 52% with a margin of error of ±3%, the confidence interval runs from 49% to 55%. That distinction matters because "within the margin of error" means the interval includes the other candidate's number too — so the race is genuinely too close to call, not that the poll is unreliable. Keeping the two terms straight prevents the most common misinterpretation of polling data.
Frequently Asked Questions
What is a confidence interval?
It gives you a range where the true population value probably sits, based on your sample data. A 95% CI means the method you used would capture the true value in about 95 out of 100 repeated studies.
The part that trips most students up: a 95% CI does not mean there is a 95% probability the true value is inside this particular interval. The true value either is in there or it is not. The 95% describes how often the method works across many repetitions, not the odds for any single interval.
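The "method works about 95% of the time" reading can be checked by simulation. This sketch draws many samples from a population with a known mean (the population parameters are arbitrary) and counts how often the interval captures it:

```python
import random
from math import sqrt
from statistics import NormalDist, mean

random.seed(42)
MU, SIGMA, N, TRIALS = 100.0, 15.0, 30, 1000
z_star = NormalDist().inv_cdf(0.975)   # 95% two-sided critical value
margin = z_star * SIGMA / sqrt(N)      # known sigma keeps the sketch simple

covered = 0
for _ in range(TRIALS):
    xbar = mean(random.gauss(MU, SIGMA) for _ in range(N))
    if xbar - margin <= MU <= xbar + margin:
        covered += 1

print(covered / TRIALS)  # close to 0.95
```

Across the 1,000 simulated studies, roughly 95% of the intervals contain MU — but any single interval either contains it or does not.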
Should I use the t-distribution or z-distribution?
If you know the population standard deviation (rare outside textbooks), use z. If you are estimating it from your sample — which is almost every real scenario — use t.
The t-distribution has heavier tails than z, so it produces wider intervals that account for the extra uncertainty. Once your sample hits about 30 observations, the t and z values converge so closely that the choice barely matters in practice.
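The convergence is easy to see by comparing critical values at a few sample sizes. This sketch assumes SciPy is available, since the standard library has no t-distribution:

```python
from statistics import NormalDist
from scipy.stats import t

z_star = NormalDist().inv_cdf(0.975)   # about 1.960 at 95% confidence
for n in (5, 10, 30, 100):
    # t critical value with n - 1 degrees of freedom
    t_star = t.ppf(0.975, df=n - 1)
    print(f"n={n:3d}: t* = {t_star:.3f} vs z* = {z_star:.3f}")
```

At n = 5 the t critical value is about 2.776, noticeably wider than 1.960; by n = 100 it has fallen to about 1.984, essentially indistinguishable from z*.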
What confidence level should I use?
Go with 95% unless you have a specific reason not to. That is what reviewers expect, what textbooks teach, and what most software defaults to.
Bump to 99% when mistakes are expensive — drug trials, bridge engineering, regulatory submissions. Drop to 90% when you need a quick directional answer and the cost of being slightly off is low.
What does margin of error mean?
It is half the width of the confidence interval — the distance from your point estimate to the edge of the range. A poll at 48% with ±3% margin of error means the full interval runs from 45% to 51%.
Bigger samples shrink the margin of error. Doubling your sample does not halve it, though: because the standard error scales with 1/√n, you need to quadruple the sample size to cut the margin in half.
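The 1/√n scaling is quick to verify numerically (s = 10 here is an arbitrary example value):

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(s, n, confidence=0.95):
    """Half-width of a z-based confidence interval for a mean."""
    z_star = NormalDist().inv_cdf((1 + confidence) / 2)
    return z_star * s / sqrt(n)

me_100 = margin_of_error(10, 100)
me_200 = margin_of_error(10, 200)   # doubled sample
me_400 = margin_of_error(10, 400)   # quadrupled sample

print(me_100 / me_200)  # about 1.414 (sqrt of 2), not 2
print(me_100 / me_400)  # exactly 2.0: quadrupling n halves the margin
```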
How does sample size affect the confidence interval?
More data means a narrower interval, full stop. The standard error — which drives the interval width — shrinks proportionally to the square root of the sample size.
The practical takeaway: going from 25 to 100 observations cuts your interval width in half. Going from 100 to 400 halves it again. There are diminishing returns, which is why most surveys land between 1,000 and 2,000 respondents — beyond that, the improvement per additional person is tiny relative to the cost.
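Those diminishing returns show up directly in the proportion formula. Using p̂ = 0.5, the worst case for a poll's margin of error, a short sketch:

```python
from math import sqrt
from statistics import NormalDist

z_star = NormalDist().inv_cdf(0.975)  # 95% confidence

def poll_margin(n, p=0.5):
    """Worst-case margin of error for a proportion at 95% confidence."""
    return z_star * sqrt(p * (1 - p) / n)

for n in (250, 1000, 2000, 4000):
    print(f"n={n:4d}: ±{poll_margin(n):.1%}")
# n= 250: ±6.2%
# n=1000: ±3.1%
# n=2000: ±2.2%
# n=4000: ±1.5%
```

Going from 250 to 1,000 respondents cuts the margin from about ±6.2% to ±3.1%; quadrupling again to 4,000 only buys another 1.6 points.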