How to Perform a T-Test
Learn how to perform one-sample, two-sample, and paired t-tests. This guide covers the t-test formula, degrees of freedom, p-values, and when to use each type of t-test.
What Is a T-Test?
A t-test is a parametric hypothesis test used to determine whether there is a statistically significant difference between means. It uses the t-distribution, which accounts for the extra uncertainty introduced when estimating the population variance from a sample. Because the t-distribution approaches the standard normal distribution as sample size increases, a t-test is appropriate whenever the population standard deviation is unknown; the difference from a z-test is most pronounced for small samples (roughly n < 30) and becomes negligible as n grows.
Types of T-Tests
A one-sample t-test compares a sample mean to a known or hypothesized population mean. A two-sample (independent) t-test compares the means of two separate, independent groups. A paired t-test compares means from the same group measured at two different times or under two conditions, using the differences between paired observations. Choosing the correct type depends entirely on your study design.
The One-Sample T-Test Formula
The test statistic is: t = (x̄ − μ₀) / (s / √n), where x̄ is the sample mean, μ₀ is the hypothesized population mean, s is the sample standard deviation, and n is the sample size. The degrees of freedom are df = n − 1. Compare the calculated t to the critical value from a t-table at your chosen significance level α, or compute a p-value directly.
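As a sketch, the one-sample formula can be computed by hand and cross-checked against SciPy's built-in test. The sample values and hypothesized mean below are invented for illustration:

```python
import math
from scipy import stats

# Hypothetical sample: weights (g) of 8 parts; test H0: mu = 50
sample = [50.2, 49.8, 50.5, 51.0, 49.6, 50.3, 50.8, 49.9]
mu0 = 50.0

n = len(sample)
xbar = sum(sample) / n                                    # sample mean
s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))  # sample SD
t_stat = (xbar - mu0) / (s / math.sqrt(n))                # t = (x̄ − μ₀) / (s/√n)
df = n - 1                                                # degrees of freedom

# Cross-check with SciPy's one-sample t-test (two-tailed p-value)
t_scipy, p_value = stats.ttest_1samp(sample, mu0)
print(f"t = {t_stat:.4f}, df = {df}, p = {p_value:.4f}")
```

The hand computation and `stats.ttest_1samp` should agree to floating-point precision.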
Two-Sample Independent T-Test
For two independent groups with means x̄₁ and x̄₂, sample sizes n₁ and n₂, and sample standard deviations s₁ and s₂, the test statistic is: t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂). This is Welch's t-test, which does not assume equal variances and is preferred in most settings. The degrees of freedom are approximated using the Welch–Satterthwaite equation, which is typically computed by software.
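In SciPy, passing `equal_var=False` to `ttest_ind` selects Welch's test with Satterthwaite degrees of freedom. The two groups below are hypothetical reaction times, invented for illustration:

```python
from scipy import stats

# Hypothetical reaction times (ms) for two independent groups
group_a = [310, 295, 320, 305, 315, 298, 312]
group_b = [330, 325, 340, 318, 335, 329]

# equal_var=False selects Welch's t-test (no equal-variance assumption);
# the Welch-Satterthwaite df is handled internally
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Note that unequal sample sizes (7 vs. 6 here) are fine for Welch's test, which is one reason it is the recommended default.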
Paired T-Test
For paired data, compute the difference dᵢ = x₁ᵢ − x₂ᵢ for each pair. Then calculate the mean difference d̄ and its standard deviation s_d. The test statistic is: t = d̄ / (s_d / √n), with df = n − 1 where n is the number of pairs. Pairing eliminates between-subject variability and generally produces a more powerful test than an independent samples design.
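A paired t-test is equivalent to a one-sample t-test on the per-pair differences against zero, which the sketch below verifies on invented before/after scores:

```python
from scipy import stats

# Hypothetical before/after scores for the same 6 subjects
before = [72, 68, 75, 80, 64, 70]
after = [75, 70, 78, 83, 66, 74]

# Paired t-test on the matched observations
t_stat, p_value = stats.ttest_rel(after, before)

# Equivalent: one-sample t-test of the differences d_i against 0
diffs = [a - b for a, b in zip(after, before)]
t_one, p_one = stats.ttest_1samp(diffs, 0.0)
print(f"paired: t = {t_stat:.4f}, one-sample on diffs: t = {t_one:.4f}")
```

Both calls produce identical statistics, which makes the "difference scores" view of pairing concrete.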
Interpreting p-Values and Critical Values
After computing t, find the p-value — the probability of observing a t-statistic as extreme or more extreme than the one calculated, assuming H₀ is true. If p ≤ α (commonly 0.05), reject the null hypothesis. Equivalently, if |t_calculated| > t_critical (from the t-table at the chosen α and df), reject H₀. A two-tailed test divides α across both tails; a one-tailed test places all α in one tail.
Assumptions and Checks
T-tests assume the data are approximately normally distributed (or n is large enough for the Central Limit Theorem to apply), observations are independent within and between groups, and, for the pooled two-sample test, that the two variances are equal (check with Levene's test; Welch's test drops this assumption). Outliers can heavily influence t-test results, so inspect data with box plots before testing. If normality is severely violated with a small sample, consider a non-parametric alternative: the Mann-Whitney U test for independent samples, or the Wilcoxon signed-rank test for paired or one-sample designs.
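These checks can be run in a few lines before testing; the groups below are hypothetical measurements, invented for illustration:

```python
from scipy import stats

# Hypothetical measurements for two groups
group_a = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.4, 4.7]
group_b = [5.6, 5.9, 5.5, 6.0, 5.8, 5.7, 5.4, 6.1]

# Normality check per group (Shapiro-Wilk); small p suggests non-normality
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Equal-variance check (Levene's test) before choosing a pooled t-test;
# small p suggests unequal variances, favoring Welch's test
_, p_levene = stats.levene(group_a, group_b)

print(f"Shapiro p: {p_norm_a:.3f}, {p_norm_b:.3f}; Levene p: {p_levene:.3f}")
```

These are screening tools, not gatekeepers: with very small samples the tests have little power, so visual inspection still matters.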
Related Guides
How to Calculate Confidence Intervals
Step-by-step guide to calculating confidence intervals. Learn when to use z-intervals vs. t-intervals, how to choose a confidence level, and how to interpret the results.
How to Calculate Z-Score
Learn how to calculate a z-score step by step. Understand the z-score formula, what it means, and how to use it to compare data points across different distributions.
How to Calculate Chi-Square Test
Learn how to perform a chi-square test of independence and goodness-of-fit. This guide explains the chi-square formula, how to build a contingency table, and how to interpret results using degrees of freedom.