Systematic error
Systematic errors are biases in measurement which lead to the mean of many separate measurements differing significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of several different types. Sources of systematic error include imperfect calibration of measurement instruments (zero error), changes in the environment which interfere with the measurement process, and imperfect methods of observation; these can produce either a constant (zero) error or a percentage error. For example, consider an experimenter timing the period of a pendulum swinging past a fiducial mark: if the stop-watch or timer starts with 1 second already on the clock, then all of the results will be off by 1 second (zero error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), the same offset carries through into the calculated average, so the final result will be slightly larger than the true period; relative to the true period, this constant offset amounts to a percentage error. Distance measured by radar will be systematically overestimated if the slight slowing of the waves in air is not accounted for. Incorrect zeroing of an instrument, leading to a zero error, is an example of systematic error in instrumentation.
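As a rough illustration of the stopwatch example, the following Python sketch shows that averaging removes the random scatter between trials but leaves the 1 second zero error fully intact. The 2 s true period, the twenty trials, and the 0.05 s reaction-time jitter are illustrative assumptions, not values from the text.

```python
import random

TRUE_PERIOD = 2.0   # hypothetical true pendulum period, in seconds
ZERO_ERROR = 1.0    # stopwatch starts at 1 s, so every reading is offset

# Twenty simulated trials: each reading is the true period plus the
# constant zero error plus a small random reaction-time fluctuation.
readings = [TRUE_PERIOD + ZERO_ERROR + random.gauss(0, 0.05) for _ in range(20)]

mean = sum(readings) / len(readings)
print(f"mean reading = {mean:.3f} s")         # close to 3.0 s, not 2.0 s
print(f"bias = {mean - TRUE_PERIOD:+.3f} s")  # averaging leaves the +1 s offset
```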
Systematic errors may also be present in the result of an estimate based on a mathematical model or physical law. For instance, the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted for.
Systematic errors can be constant, or they can be related (e.g. proportional, or a percentage) to the actual value of the measured quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environmental temperature). When they are constant, they are simply due to incorrect zeroing of the instrument. When they are not constant, they can change sign. For instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, then for actual temperatures of 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error), or −102° (systematic error = −2°), respectively. Thus the temperature is overestimated when it is above zero and underestimated when it is below zero.
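The thermometer arithmetic can be checked with a few lines of Python; `measured_temperature` is a hypothetical helper written only for this illustration.

```python
def measured_temperature(true_temp, proportional_error=0.02):
    """Reading from a thermometer with a proportional systematic error."""
    return true_temp * (1 + proportional_error)

for t in (200.0, 0.0, -100.0):
    m = measured_temperature(t)
    print(f"true = {t:+6.1f}°, measured = {m:+6.1f}°, error = {m - t:+.1f}°")
# true = +200.0°, measured = +204.0°, error = +4.0°
# true =   +0.0°, measured =   +0.0°, error = +0.0°
# true = -100.0°, measured = -102.0°, error = -2.0°
```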
Constant systematic errors are very difficult to deal with: their effects are observable only if they can be removed, for instance by comparison against a reference. Such errors cannot be removed by repeating measurements or averaging large numbers of results. A common method of removing systematic error is calibration of the measurement instrument.
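In symbols: if each reading is modelled as the true value plus a constant bias plus zero-mean random noise, the expected value of the average retains the full bias, which is why averaging alone cannot remove it.

```latex
x_i = \mu + b + \varepsilon_i, \qquad \mathbb{E}[\varepsilon_i] = 0
\quad\Longrightarrow\quad
\mathbb{E}[\bar{x}] = \mathbb{E}\!\left[\frac{1}{N}\sum_{i=1}^{N} x_i\right] = \mu + b .
```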
In a statistical context, the term systematic error usually arises where the sizes and directions of possible errors are unknown.
Drift
Systematic errors which change during an experiment (drift) are easier to detect. Measurements show trends with time rather than varying randomly about a mean.
Drift is evident if a measurement of a constant quantity is repeated several times and the measurements drift one way during the experiment, for example if each measurement is higher than the previous one, as could occur if an instrument becomes warmer during the experiment. If the measured quantity is variable, a drift can be detected by checking the zero reading during the experiment as well as at the start (indeed, the zero reading is itself a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If it cannot be eliminated, for instance by resetting the instrument immediately before the experiment, it needs to be allowed for by subtracting its (possibly time-varying) value from the readings, and by taking it into account when assessing the accuracy of the measurement.
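One simple way to quantify such a drift is to fit a least-squares slope to the repeated zero readings, as in the Python sketch below; the readings and the warming-instrument scenario are hypothetical.

```python
def drift_rate(zero_readings):
    """Least-squares slope of repeated zero readings against reading index.

    A slope well away from zero indicates drift; a constant nonzero mean
    indicates a fixed zero error.
    """
    n = len(zero_readings)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(zero_readings) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, zero_readings))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Zero readings taken during a hypothetical experiment: the instrument
# warms up and its zero point creeps steadily upward.
zeros = [0.00, 0.02, 0.05, 0.07, 0.10, 0.12]
print(f"drift per reading = {drift_rate(zeros):+.3f} units")  # about +0.025
```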
If no pattern in a series of repeated measurements is evident, the presence of fixed systematic errors can only be found by checking the measurements, either by measuring a known quantity or by comparing the readings with those made using a different apparatus known to be more accurate. For example, suppose that timing a pendulum several times with a stopwatch gives readings randomly distributed about the mean. A systematic error is nevertheless present if the stopwatch, checked against the 'speaking clock' of the telephone system, is found to be running slow or fast. The pendulum timings then need to be corrected according to how fast or slow the stopwatch was found to be running, as in the sketch below. Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.
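A minimal sketch of the resulting correction, with invented numbers: a stopwatch that shows 598.8 s over a true 600 s reference interval, i.e. one that runs slow.

```python
def correct_timing(reading, watch_interval, reference_interval):
    """Rescale a stopwatch reading using a check against a reference clock.

    watch_interval:     what the stopwatch showed over the check period
    reference_interval: what the reference ('speaking clock') showed
    """
    return reading * reference_interval / watch_interval

# The slow stopwatch underestimates every duration; rescaling corrects it.
print(f"{correct_timing(2.000, 598.8, 600.0):.4f} s")  # 2.0040 s
```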
Systematic errors can also be detected by measuring already known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelength of the D-lines of the sodium emission spectrum, which are at 589.0 nm and 589.6 nm. The measurements may be used to determine the number of lines per millimetre of the diffraction grating, which can then be used to measure the wavelength of any other spectral line.
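A short sketch of that calculation using the grating equation d·sin(θ) = m·λ; the 20.7° first-order angle is an invented measurement, chosen so the answer comes out near a common 600 lines/mm grating.

```python
import math

def lines_per_mm(wavelength_nm, angle_deg, order=1):
    """Lines per millimetre from the grating equation d*sin(theta) = m*lambda."""
    d_nm = order * wavelength_nm / math.sin(math.radians(angle_deg))
    return 1e6 / d_nm  # 1 mm = 1e6 nm

# Hypothetical first-order measurement: the 589.0 nm sodium D-line is
# observed at 20.7 degrees from the straight-through direction.
print(f"{lines_per_mm(589.0, 20.7):.0f} lines/mm")  # about 600 lines/mm
```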
Systematic versus random error
Measurement errors can be divided into two components: random error and systematic error.[1] Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter's interpretation of the instrumental reading. Random errors show up as different results for ostensibly the same repeated measurement. Systematic error cannot be discovered this way because it always pushes the results in the same direction. If the cause of a systematic error can be identified, then it can usually be eliminated.
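A small simulation makes the asymmetry concrete: repetition exposes the random error as scatter, while the systematic error hides inside the mean and is invisible without an independent reference. The bias and noise values below are arbitrary assumptions.

```python
import random
import statistics

TRUE_VALUE = 10.00
BIAS = 0.15      # hidden systematic error
NOISE_SD = 0.05  # random error

readings = [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD) for _ in range(100)]

# Repetition exposes the random error as scatter...
print(f"sample sd   = {statistics.stdev(readings):.3f}")  # near 0.05
# ...but the systematic error sits inside the mean and cannot be seen
# without an independent reference for TRUE_VALUE.
print(f"sample mean = {statistics.mean(readings):.3f}")   # near 10.15, not 10.00
```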
See also
- Experimental uncertainty analysis
- Biased sample
- Errors and residuals in statistics
- Observational error
References
- John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. p. 94, §4.1. ISBN 093570275X. http://books.google.com/books?id=giFQcZub80oC&pg=PA94.