How to Calculate Standard Deviation

Learn how to calculate standard deviation step by step for both population and sample data sets. Understand what standard deviation measures and how to interpret results.

What Is Standard Deviation?

Standard deviation measures how spread out the values in a data set are around the mean. A small standard deviation means the data points cluster closely around the mean, while a large standard deviation indicates greater variability. It is one of the most widely used statistics in science, finance, and quality control.

Population vs. Sample Standard Deviation

Population standard deviation (σ) is used when you have data for an entire population and divides by N (the total count). Sample standard deviation (s) is used when your data is a sample from a larger population and divides by N − 1 (Bessel's correction) to reduce bias. In practice, sample standard deviation is more commonly used because we rarely have complete population data.
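As a quick sketch of this distinction, Python's standard-library statistics module exposes both versions directly (pstdev divides by N, stdev by N − 1):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation: divides by N
sigma = statistics.pstdev(data)

# Sample standard deviation: divides by N - 1 (Bessel's correction)
s = statistics.stdev(data)

print(sigma)  # 2.0
print(s)      # ~2.138, slightly larger because of the smaller divisor
```

Note that the sample value is always slightly larger than the population value for the same data, reflecting the extra uncertainty of estimating from a sample.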

Step 1: Find the Mean

Start by calculating the mean of your data set. For {2, 4, 4, 4, 5, 5, 7, 9}, the mean is (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5. This mean will be the reference point from which deviations are measured.
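In code, the mean is just the sum divided by the count; a minimal sketch using the example data:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]

# Mean = sum of values / number of values
mean = sum(data) / len(data)

print(mean)  # 5.0
```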

Step 2: Calculate Squared Deviations

Subtract the mean from each data point to get the deviation, then square each deviation to make them all positive. For the example above with mean = 5, the deviations are {−3, −1, −1, −1, 0, 0, 2, 4} and the squared deviations are {9, 1, 1, 1, 0, 0, 4, 16}.
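This step translates directly into two list comprehensions (a sketch, reusing the example data and its mean of 5):

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)  # 5.0

# Deviation of each point from the mean
deviations = [x - mean for x in data]

# Squaring makes every deviation non-negative
squared = [d ** 2 for d in deviations]

print(deviations)  # [-3.0, -1.0, -1.0, -1.0, 0.0, 0.0, 2.0, 4.0]
print(squared)     # [9.0, 1.0, 1.0, 1.0, 0.0, 0.0, 4.0, 16.0]
```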

Step 3: Calculate Variance

Variance is the average of the squared deviations. For population variance, divide the sum of squared deviations by N. For sample variance, divide by N − 1. In the example, the sum of squared deviations is 32. As a population, variance = 32 / 8 = 4. As a sample, variance = 32 / 7 ≈ 4.57.
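Both variances can be computed from the same sum of squared deviations; a sketch continuing the example:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)

# Sum of squared deviations
ss = sum((x - mean) ** 2 for x in data)  # 32.0

pop_variance = ss / len(data)          # divide by N
samp_variance = ss / (len(data) - 1)   # divide by N - 1

print(pop_variance)   # 4.0
print(samp_variance)  # ~4.571
```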

Step 4: Take the Square Root

Standard deviation is simply the square root of the variance. For the population, σ = √4 = 2. For the sample, s = √4.57 ≈ 2.14. The result is in the same units as your original data, making it directly interpretable, unlike variance, which is expressed in squared units.
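The four steps combine into one small function; a sketch (the name std_dev and the sample flag are illustrative, not from any particular library):

```python
import math

def std_dev(data, sample=True):
    """Standard deviation: sample (N - 1 divisor) or population (N divisor)."""
    n = len(data)
    mean = sum(data) / n                      # Step 1: mean
    ss = sum((x - mean) ** 2 for x in data)   # Step 2: sum of squared deviations
    variance = ss / (n - 1 if sample else n)  # Step 3: variance
    return math.sqrt(variance)                # Step 4: square root

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(data, sample=False))  # 2.0
print(std_dev(data))                # ~2.138
```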

Interpreting Standard Deviation

In a normal distribution, approximately 68% of values fall within one standard deviation of the mean, 95% fall within two standard deviations, and 99.7% fall within three (the 68-95-99.7 rule). For a data set with mean 100 and standard deviation 15 (like IQ scores), about 68% of values fall between 85 and 115.
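The 68-95-99.7 rule can be checked empirically by simulation; a sketch that draws normally distributed values with mean 100 and standard deviation 15 and counts how many land within one standard deviation:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# 100,000 draws from a normal distribution with mean 100, sd 15
values = [random.gauss(100, 15) for _ in range(100_000)]

# Fraction within one standard deviation of the mean (85 to 115)
within_1sd = sum(85 <= v <= 115 for v in values) / len(values)

print(round(within_1sd, 3))  # close to 0.68, as the rule predicts
```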
