How to Calculate Probability - Complete Guide
Learn the fundamentals of probability including basic probability, compound events, conditional probability, Bayes' theorem, and expected value with worked examples.
What Is Probability?
Probability is the mathematical study of uncertainty and randomness. It assigns a number between 0 and 1 to an event, where 0 means the event is impossible and 1 means it is certain. A probability of 0.5 (or 50%) means the event is equally likely to happen or not. Probability originated from gambling problems in the 17th century but has since become fundamental to science, engineering, medicine, finance, and artificial intelligence. Whether you are predicting the weather, assessing medical test accuracy, or analyzing financial risk, probability provides the framework for reasoning about uncertain outcomes.
Basic Probability Formula
For equally likely outcomes, the probability of an event is the number of favorable outcomes divided by the total number of possible outcomes: P(A) = favorable outcomes / total outcomes. For example, the probability of rolling a 4 on a fair six-sided die is 1/6, because there is one favorable outcome out of six possibilities. The probability of drawing a heart from a standard 52-card deck is 13/52 = 1/4. This counting approach works when every outcome in the sample space is equally probable, which is the case for fair dice, fair coins, well-shuffled cards, and random selections from a uniform population.
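The counting formula above can be sketched in a few lines of Python (the function name is my own, and exact fractions are used so results like 13/52 simplify automatically):

```python
from fractions import Fraction

def basic_probability(favorable: int, total: int) -> Fraction:
    """P(A) = favorable outcomes / total outcomes, for equally likely outcomes."""
    return Fraction(favorable, total)

# Rolling a 4 on a fair six-sided die: 1 favorable outcome out of 6
p_four = basic_probability(1, 6)

# Drawing a heart from a standard 52-card deck: 13 hearts out of 52 cards
p_heart = basic_probability(13, 52)

print(p_four, p_heart)  # 1/6 1/4
```

Using Fraction instead of floating-point division keeps the answers exact, which mirrors how these problems are worked by hand.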
Complementary Events
The complement of an event A, written A', consists of all outcomes that are not in A. The probabilities of an event and its complement always sum to 1: P(A) + P(A') = 1. This means P(A') = 1 - P(A). This rule is extremely useful when it is easier to calculate the probability of something not happening. For example, the probability of rolling at least one 6 in four rolls of a die is easier to compute as 1 minus the probability of no 6 at all: 1 - (5/6)⁴ = 1 - 625/1296 = approximately 0.518. Without the complement rule, you would need to consider every combination of rolls that includes at least one 6.
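The "at least one 6 in four rolls" example follows the same pattern for any number of trials, so a small helper (my own naming) makes the complement rule concrete:

```python
from fractions import Fraction

def p_at_least_one(p_single: Fraction, trials: int) -> Fraction:
    """P(at least one occurrence) = 1 - P(none) = 1 - (1 - p)^trials."""
    return 1 - (1 - p_single) ** trials

# At least one 6 in four rolls of a fair die
p = p_at_least_one(Fraction(1, 6), 4)
print(p, float(p))  # 671/1296 0.5177469135802469
```

Note that 1 - 625/1296 = 671/1296, which is the roughly 0.518 quoted above.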
Compound Events: AND and OR
When combining events, the addition rule gives the probability of A or B: P(A or B) = P(A) + P(B) - P(A and B). The subtraction of P(A and B) prevents double-counting outcomes that belong to both events. If A and B are mutually exclusive (they cannot both occur), then P(A and B) = 0, and the formula simplifies to P(A or B) = P(A) + P(B). The multiplication rule gives the probability of A and B: for independent events, P(A and B) = P(A) x P(B). For example, the probability of flipping heads twice in a row is (1/2) x (1/2) = 1/4. Independence means the occurrence of one event does not affect the probability of the other.
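Both rules translate directly into code. The sketch below (function names are my own) applies the addition rule to a card example where the events overlap, and the multiplication rule to the two-heads example:

```python
def p_or(p_a: float, p_b: float, p_a_and_b: float) -> float:
    """Addition rule: P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_a_and_b

def p_and_independent(p_a: float, p_b: float) -> float:
    """Multiplication rule for independent events: P(A and B) = P(A) x P(B)."""
    return p_a * p_b

# Drawing a heart OR a king from one card; the overlap is the king of hearts
p_heart_or_king = p_or(13/52, 4/52, 1/52)   # 16/52

# Flipping heads twice in a row with a fair coin
p_two_heads = p_and_independent(1/2, 1/2)

print(p_two_heads)  # 0.25
```

Subtracting the 1/52 overlap in the first call is exactly the double-counting correction described above: the king of hearts would otherwise be counted once as a heart and once as a king.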
Conditional Probability
Conditional probability measures the probability of event A given that event B has already occurred, written P(A|B). The formula is P(A|B) = P(A and B) / P(B). For example, if a bag contains 3 red and 5 blue marbles and your first draw (made without replacement) is a red marble, the probability that the second draw is also red is P(red2|red1) = 2/7, because only 2 red marbles remain among the 7 marbles left in the bag. Conditional probability is the key to understanding how new information updates our beliefs. It is essential in medical diagnosis, quality control, spam filtering, and any scenario where prior knowledge influences the likelihood of outcomes.
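The marble example can be checked against the formula directly. This sketch (names are my own) computes P(A and B) and P(B) for the bag of 3 red and 5 blue marbles, then divides:

```python
from fractions import Fraction

def conditional(p_a_and_b: Fraction, p_b: Fraction) -> Fraction:
    """P(A|B) = P(A and B) / P(B)."""
    return p_a_and_b / p_b

# Bag with 3 red and 5 blue marbles, drawn without replacement
p_red1 = Fraction(3, 8)                             # first draw is red
p_both_red = Fraction(3, 8) * Fraction(2, 7)        # first AND second are red
p_red2_given_red1 = conditional(p_both_red, p_red1)

print(p_red2_given_red1)  # 2/7
```

Dividing out P(red1) recovers the intuitive answer: 2 red marbles out of the 7 remaining.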
Bayes' Theorem
Bayes' theorem provides a way to reverse conditional probabilities. It states: P(A|B) = P(B|A) x P(A) / P(B). This lets you update the probability of a hypothesis A given new evidence B. A classic example is medical testing: suppose a disease affects 1% of the population and a test has 95% sensitivity (P(positive|disease) = 0.95) and 90% specificity (P(negative|no disease) = 0.90, which means a false positive rate of 1 - 0.90 = 0.10). Then P(disease|positive) = (0.95 x 0.01) / (0.95 x 0.01 + 0.10 x 0.99) = 0.0095 / 0.1085 = approximately 8.8%. Despite the positive test, the actual probability of having the disease is less than 9%, a counterintuitive but important result.
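The medical-test calculation generalizes to any prior, sensitivity, and specificity. A minimal sketch (function and parameter names are my own):

```python
def bayes_posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive test) via Bayes' theorem.

    prior        -- P(disease) in the population
    sensitivity  -- P(positive | disease)
    specificity  -- P(negative | no disease)
    """
    false_positive_rate = 1 - specificity
    # Total probability of testing positive (law of total probability)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = bayes_posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(p, 3))  # 0.088
```

The denominator is the law of total probability applied to P(B): a positive result can come from a true positive or a false positive, and the rarity of the disease means false positives dominate.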
Expected Value
The expected value of a random variable is the long-run average outcome if an experiment is repeated many times. It is calculated as the sum of each outcome multiplied by its probability: E(X) = sum of (x_i x P(x_i)). For example, in a game where you roll a die and win $10 for a 6 but pay $2 for any other number, the expected value is (1/6 x $10) + (5/6 x (-$2)) = $1.67 - $1.67 = $0.00, making it a fair game. Expected value is the foundation of decision theory, insurance pricing, gambling analysis, and risk management. It tells you what to expect "on average," even though any single trial may differ substantially.
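The dice-game calculation is just a weighted sum, which a short sketch (my own naming, using exact fractions to avoid rounding) makes explicit:

```python
from fractions import Fraction

def expected_value(outcomes):
    """E(X) = sum of x_i * P(x_i) over all (value, probability) pairs."""
    return sum(x * p for x, p in outcomes)

# Win $10 on a 6, pay $2 on anything else
game = [(Fraction(10), Fraction(1, 6)), (Fraction(-2), Fraction(5, 6))]

print(expected_value(game))  # 0 -- a fair game
```

Exact arithmetic shows the two terms cancel precisely: 10/6 - 10/6 = 0.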
Common Probability Distributions
Several probability distributions appear repeatedly in practice. The binomial distribution models the number of successes in a fixed number of independent trials, like the number of heads in 10 coin flips. The normal (Gaussian) distribution describes many natural phenomena and is characterized by its bell shape, with mean and standard deviation as its two parameters. The Poisson distribution models the number of rare events in a fixed interval, like the number of customer arrivals per hour. Understanding these distributions allows you to model uncertainty mathematically and make predictions with quantified confidence.
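The binomial and Poisson probability mass functions described above can be computed with the standard library alone; this sketch (function names are my own) evaluates the two examples from the text:

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k successes in n independent trials, success probability p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k: int, lam: float) -> float:
    """P(exactly k events in an interval with average rate lam)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Exactly 5 heads in 10 fair coin flips
print(round(binomial_pmf(5, 10, 0.5), 4))  # 0.2461

# Exactly 3 customer arrivals in an hour averaging 4 arrivals
print(round(poisson_pmf(3, 4.0), 4))
```

The normal distribution, being continuous, has a density rather than a probability mass function, so it is omitted here.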
Related Guides
How to Calculate Permutations and Combinations - Complete Guide
Learn how to calculate permutations and combinations with clear formulas and examples. Understand when order matters, factorials, and real-world counting problems.
Understanding Standard Deviation - Complete Guide
Learn what standard deviation is, how to calculate it step by step, and why it matters. Covers population vs. sample standard deviation with clear examples.
How to Calculate Mean, Median, and Mode - Complete Guide
Learn how to calculate mean, median, and mode with clear explanations and examples. Understand when to use each measure of central tendency.