Wednesday, August 22, 2012

Measurement: Precision & Accuracy


Accuracy and precision are terms used to describe the sources of error in a data set. Accuracy describes how close a measurement is to the accepted or true value. Precision describes the spread of the data, or how close the measurements are to each other.

To determine the accuracy of a measurement, the correct or accepted value must be known.  The most common calculation associated with accuracy is percent error.

percent error = |accepted value - experimental value| / accepted value  x  100
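
As a quick sketch of the calculation in Python (the density values below are assumed example numbers, using the accepted density of aluminum, 2.70 g/mL):

    def percent_error(accepted, experimental):
        # Difference from the accepted value, expressed as a percent of the accepted value
        return abs(accepted - experimental) / accepted * 100

    # Assumed example: aluminum's density measured as 2.58 g/mL vs. the accepted 2.70 g/mL
    print(percent_error(2.70, 2.58))  # about 4.4%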

The precision of a data set can be determined in a number of ways, including range, standard deviation and percent deviation. Range is determined by subtracting the smallest value from the largest value in a data set.
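
For example, a minimal Python sketch with made-up data:

    measurements = [1.20, 1.35, 1.28, 1.31]  # made-up sample data
    data_range = max(measurements) - min(measurements)
    print(round(data_range, 2))  # 0.15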

Deviation literally means difference, so we can calculate it using subtraction. By finding the difference between an individual measurement and the average of all the measurements in a data set, we can find how "off" that single measurement is from all the others. A very basic way of looking at standard deviation is to think of it as roughly the average size of the deviations of the individual measurements from the average of the data set (strictly, it is the root-mean-square of those deviations).
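
A short Python sketch (again with made-up data) shows the individual deviations alongside the standard deviation; note that Python's statistics module computes the strict root-mean-square form:

    from statistics import mean, pstdev

    measurements = [2.45, 2.51, 2.48, 2.56]  # made-up sample data
    avg = mean(measurements)                 # 2.50

    # Deviation of each individual measurement from the average
    deviations = [m - avg for m in measurements]

    # Population standard deviation: the root-mean-square of the deviations
    print(pstdev(measurements))  # about 0.041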

The problem with simply using standard deviation to determine precision is magnitude (the size of the numbers). A standard deviation of 1.00 is hard to judge as large or small without some idea of the magnitude of the measurements in the data set. If your measurements range from 1.20 to 3.56, it is huge! But if the range of the data is 1000.0 to 1002.0, it would be much more acceptable.

Percent compares a part to the whole, so it removes the problem of magnitude. Percent deviation compares the standard deviation to the average of the data set. The lower the percentage by which each individual measurement differs from the average, the better the precision.

percent deviation = (standard deviation / average of the data set)  x  100
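
Continuing the Python sketch from above with the same made-up data:

    from statistics import mean, pstdev

    measurements = [2.45, 2.51, 2.48, 2.56]  # same made-up data as above
    percent_deviation = pstdev(measurements) / mean(measurements) * 100
    print(percent_deviation)  # roughly 1.6%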

Measurement: Significant Digits


There are two types of numbers in science: exact (counting) numbers and inexact numbers (measurements and calculated quantities). Exact or counting numbers represent counted objects. For instance, a dozen eggs has exactly 12 eggs. You can’t have 12.01 eggs. Measurements and numbers based on calculations will always have some uncertainty. Significant digits are used to represent that uncertainty, or the amount of confidence you have in a measurement.
Uncertainty occurs because we use equipment to make measurements. You can only measure a length as precisely as the increments on the ruler you are using allow. Significant digits are the numbers we know with certainty plus one more that is estimated. On a ruler marked in 0.1 cm increments, for example, you can read 4.5 cm with certainty and estimate one more digit, recording something like 4.57 cm.

Basic Rules:
  1. All non-zero digits are significant
  2. All zeros between non-zero digits are significant
  3. Zeros to the right of the decimal and to the right of a non-zero digit are significant
  4. Zeros to the right of the decimal, but to the left of all non-zero digits are not significant
  5. If there is no decimal, zeros to the right of the last non-zero digit are not significant
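
As an illustration only, the five rules translate into a small Python function; it assumes the measurement is written as a plain decimal string with no exponent:

    def count_sig_figs(measurement: str) -> int:
        # Assumes a plain decimal string like "0.00420" or "1200" (no exponents)
        s = measurement.lstrip('+-')
        if '.' in s:
            digits = s.replace('.', '')
            digits = digits.lstrip('0')  # Rule 4: leading zeros are not significant
            return len(digits)           # Rules 1-3: all remaining digits count
        digits = s.lstrip('0')           # leading zeros are not significant
        digits = digits.rstrip('0')      # Rule 5: trailing zeros with no decimal do not count
        return len(digits)               # Rules 1-2: non-zero digits and sandwiched zeros

    print(count_sig_figs("0.00420"))  # 3 (rules 3 and 4)
    print(count_sig_figs("1002"))     # 4 (rule 2)
    print(count_sig_figs("1200"))     # 2 (rule 5)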

Rules for Calculations:
  1. In addition and subtraction, use the LEAST number of DECIMALS.
  2. In multiplication and division, use the LEAST number of SIGNIFICANT DIGITS.
  3. In multi-step calculations, apply the appropriate rule at each step, following the order of operations.
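
A worked sketch in Python (the round_to_sig_figs helper below is hypothetical, not a standard library function):

    from math import floor, log10

    def round_to_sig_figs(x, n):
        # Hypothetical helper: round x to n significant digits
        if x == 0:
            return 0.0
        return round(x, -int(floor(log10(abs(x)))) + (n - 1))

    # Addition: 18.0 has the fewest decimal places (one), so keep one decimal
    print(round(12.11 + 18.0 + 1.013, 1))    # 31.1

    # Multiplication: 1.4 has the fewest significant digits (two), so keep two
    print(round_to_sig_figs(4.56 * 1.4, 2))  # 6.4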

Measurement: Introduction


All science is based on analyzing data. There are two types of data in chemistry. Qualitative data is based on descriptions such as color, state and luster. Quantitative data is based on numerical measurements.
Chemistry represents its quantitative data using the metric system. Mass is measured in grams, volume in liters, length in meters, and temperature in degrees Celsius or kelvins. EVERY QUANTITATIVE MEASUREMENT MUST HAVE BOTH A QUANTITY AND A UNIT.
Numbers mean nothing without a unit!