Accuracy and precision are terms used to describe the sources of error in a data set. Accuracy describes how close a measurement is to the correct (accepted) value. Precision describes the spread of the data, or how close the measurements are to each other.
To determine the accuracy of a measurement, the correct or accepted value must be known. The most common calculation associated with accuracy is percent error.
percent error = (|accepted value - experimental value| / accepted value) x 100
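As a quick illustration, here is a minimal Python sketch of the percent error calculation. The function name and the aluminum-density numbers are hypothetical, chosen only to show the arithmetic.

```python
def percent_error(accepted, experimental):
    """Percent error: how far an experimental value is from the accepted value."""
    return abs(accepted - experimental) / accepted * 100

# Hypothetical example: the accepted density of aluminum is 2.70 g/cm^3,
# and a student measures 2.59 g/cm^3.
print(percent_error(2.70, 2.59))  # about 4.1%
```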
The precision of a data set can be determined in a number of ways, including range, standard deviation and percent deviation. Range is determined by subtracting the smallest value from the largest value in a data set.
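For example, the range of a small, made-up data set can be found in one line of Python:

```python
measurements = [1.20, 2.45, 2.98, 3.56]  # hypothetical data set
data_range = max(measurements) - min(measurements)
print(data_range)  # 2.36
```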
Deviation literally means difference, so we can calculate it using subtraction. By finding the difference between an individual measurement and the average of all the measurements in a data set, we can find how "off" that single measurement is from all the others. A very basic way of looking at standard deviation is to think of it as the average of all the deviations of the individual measurements from the average of the data set (strictly speaking, it is the square root of the average of the squared deviations).
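The deviations and the standard deviation can be computed directly with Python's standard library. The measurement values below are made up for illustration.

```python
from statistics import mean, stdev

measurements = [2.65, 2.70, 2.59, 2.74]  # hypothetical repeated measurements
avg = mean(measurements)

# How "off" each individual measurement is from the average of the data set
deviations = [x - avg for x in measurements]
print(deviations)           # approximately [-0.02, 0.03, -0.08, 0.07]
print(stdev(measurements))  # sample standard deviation, about 0.065
```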
The problem with simply using standard deviation to determine precision is magnitude (the size of the numbers). It is hard to judge whether a standard deviation of 1.00 is large or small without some idea of the magnitude of the measurements in the data set. If your measurements range from 1.20 to 3.56, it is huge! But if the data range from 1000.0 to 1002.0, it would be much more acceptable.
A percent compares the part to the whole, so it removes the question of magnitude. Percent deviation allows us to compare the standard deviation to the average of the data set. The lower the percentage, the less the individual measurements differ from the average of the data set, and the better the precision.
percent deviation = (standard deviation / average of the data set) x 100
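A short Python sketch of this formula, reusing the made-up measurements from above:

```python
from statistics import mean, stdev

measurements = [2.65, 2.70, 2.59, 2.74]  # same hypothetical data set as above
percent_deviation = stdev(measurements) / mean(measurements) * 100
print(percent_deviation)  # about 2.4%
```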