Standard Deviation vs Standard Error -- What's the Difference?
Everyone remembers high school math teachers droning on about standard deviation whenever a statistics lesson came up.
Any time one observes or tracks a large group of numbers - measuring anything from IQ or bone length to nose hair count or the size of animals in a herd - those numbers tend to cluster around an average. For normally distributed data, the number of observations that fall on or very near the average is not at all random: about 34% fall just below the average and about 34% just above it. Thus, roughly 68% of those measured will fall within one standard deviation of the average observation.
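The 68% figure is easy to check with a quick simulation. The sketch below (the mean of 100 and standard deviation of 15 are arbitrary, IQ-like values chosen just for illustration) draws many values from a normal distribution and counts what fraction land within one standard deviation of the mean:

```python
import numpy as np

# Hypothetical illustration: draw 100,000 values from a normal
# distribution and measure the share within one standard deviation.
rng = np.random.default_rng(seed=0)
samples = rng.normal(loc=100, scale=15, size=100_000)  # IQ-like scores

mean = samples.mean()
sd = samples.std()
within_one_sd = np.mean(np.abs(samples - mean) <= sd)
print(f"Fraction within one standard deviation: {within_one_sd:.3f}")
```

With this many samples the printed fraction comes out very close to the theoretical 0.683.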
If a herd of 100 elk averages five feet tall, and one standard deviation of the sample of elk measured is one foot, then about 68% of those elk will be between four and six feet tall.
Going further, about 95% of all elk measured will fall within two standard deviations of the average. In the case of the elk, two standard deviations is two feet, so 95% of that elk population will range from three to seven feet tall.
The last 5% not included are the extremes - the very, very tall elk and the very, very short elk. Because that 5% is split between the tallest and shortest, we know that about 2.5% of the elk will be more than two standard deviations below the average height - so less than three feet tall. The remaining 2.5% will be very, very tall - more than two standard deviations above the average, at seven feet or taller.
Statisticians know that their estimates are usually close to the truth, but rarely perfect. Standard error is therefore built into a statistician's estimates to account for the possibility that the numbers are slightly off. Typically, the more information a statistician has - the more samples measured - the lower the standard error.
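The standard relationship here is that the standard error of the mean equals the standard deviation divided by the square root of the sample size. A minimal sketch, using the one-foot standard deviation from the elk example:

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: SD divided by sqrt(sample size)."""
    return sd / math.sqrt(n)

# With SD = 1 ft, measuring more elk shrinks the standard error.
print(standard_error(1.0, 10))       # ~0.316 ft
print(standard_error(1.0, 100))      # 0.1 ft
print(standard_error(1.0, 100_000))  # ~0.003 ft
```

Note that the error shrinks with the square root of the sample size, so measuring ten times as many elk only cuts the standard error by about a factor of three.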
The Law of Large Numbers
If a herd of 10 elk is measured, a statistician can begin to estimate the average elk size, but that estimate will not be perfect. This is because the more samples you have - the more elk measured - the more certain a statistician can be that their numbers are correct. If those 10 elk average out to five feet tall, the statistician might think the true average is somewhere near five feet, but they would know their number is not perfect and could change drastically with more data. If the statistician measures 100,000 elk and the average is five feet, they can be very confident that five feet is close to the real, true average height of an elk. Every time the statistician measures another elk, they increase their sample size and decrease the standard error they expect to encounter.
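A short simulation makes this concrete. Assuming, as in the example, a true average height of five feet with a standard deviation of one foot, the sample mean settles toward five feet and the standard error shrinks as the herd of measured elk grows:

```python
import numpy as np

# Hypothetical simulation: true average elk height 5 ft, SD 1 ft.
rng = np.random.default_rng(seed=1)

ses = []
for n in (10, 1_000, 100_000):
    heights = rng.normal(loc=5.0, scale=1.0, size=n)
    se = heights.std(ddof=1) / np.sqrt(n)  # estimated standard error
    ses.append(se)
    print(f"n={n:>7}: sample mean={heights.mean():.3f} ft, "
          f"standard error={se:.4f} ft")
```

With 10 elk the standard error is around a third of a foot; with 100,000 it is a tiny fraction of an inch, which is why the statistician can be so confident in the larger sample.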
By Travis Lindsay