The correct use of terms such as error of measurement, precision, trueness, accuracy, uncertainty of measurement and other expressions is essential for a professional approach to analytical measurement. This article is the first of a two-part series on analytical measurement terminology. It explains what these terms mean in a laboratory environment and how you can strengthen confidence in the quality of your own measurement results.
Measurement results should reflect reality and serve as a basis for making decisions. You should be able to trust the result of a single measurement without having to perform replicate (i.e. repeat) measurements. To achieve this goal, it is essential to quantify and, as far as possible, minimize systematic and random errors. This in turn means that you need a thorough understanding of the possible sources of measurement error before you can set about optimizing an analytical method with regard to trueness and precision.
This first article discusses different sources of measurement errors in thermal analysis and shows how such errors can be identified and avoided. Examples are taken from DSC (differential scanning calorimetry), TGA (thermogravimetric analysis), TMA (thermomechanical analysis) and DMA (dynamic mechanical analysis). Part 2 (UserCom 30) describes how a concept for determining the uncertainty of measurement is developed.
The results of measurements performed under identical conditions are never free of error but are scattered around a mean value (B). Depending on the quality of the measurement procedure, the mean value deviates more or less from a generally accepted reference value that is considered to be the true value (A) (Figure 1). The difference between an individual measurement value (Ci) and the true value (A) is made up of two parts: the systematic error and the random error. In general, a systematic error remains constant in magnitude and sign within a measurement series and applies to all measurement values: it causes the values to be too high or too low to the same extent. A systematic error is often referred to as bias and is also known as a determinate error.
Systematic errors are often difficult to detect and eliminate. In contrast, the scatter (or spread) of individual values (Ci) around the mean value is due to random errors of measurement: some values are too high while others are too low. Random errors are also called indeterminate errors. They can be described with the aid of statistical parameters such as the standard deviation.
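This decomposition can be made concrete with a short simulation. The following Python sketch generates one hundred fictitious measurement values from an assumed true value, a constant bias, and Gaussian noise, and then recovers the two error components; all numbers are illustrative and not taken from this article.

```python
# Minimal sketch: decomposing measurement error into a systematic part
# (constant bias) and a random part (Gaussian scatter). All values here
# are assumed for illustration only.
import random
import statistics

TRUE_VALUE = 100.0       # accepted reference value (A); illustrative
SYSTEMATIC_ERROR = 0.8   # constant bias affecting every value; assumed
RANDOM_SPREAD = 0.5      # standard deviation of the random error; assumed

random.seed(42)
measurements = [
    TRUE_VALUE + SYSTEMATIC_ERROR + random.gauss(0.0, RANDOM_SPREAD)
    for _ in range(100)
]

mean_value = statistics.mean(measurements)   # estimates the mean (B)
spread = statistics.stdev(measurements)      # reflects random error only

print(f"mean value (B):     {mean_value:.2f}")
print(f"systematic error:   {mean_value - TRUE_VALUE:+.2f}")
print(f"standard deviation: {spread:.2f}")
```

The mean of the series reproduces the true value plus the bias, while the standard deviation reflects only the random scatter. This is why the two types of error must be identified and treated separately.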
A typical example of a systematic error of measurement in thermogravimetric analysis (TGA) is buoyancy: if a sample is heated in air at atmospheric pressure, the density of the air in the furnace decreases with increasing temperature. As a result, the buoyancy experienced by the sample, crucible, and crucible support decreases and the apparent mass of the sample increases. This systematic error is particularly relevant for small mass losses. If it is not corrected, the measured mass loss deviates from the true value by a certain amount.
In practice, this systematic error is corrected by performing a blank measurement in which an empty crucible is heated under identical conditions. The resulting blank curve is then subtracted from the sample measurement curve (see the sketch following this passage).

An example of a random error of measurement is the scatter of the measured values of the enthalpy of fusion of indium. The values in Table 1 (in J/g) are the results of one hundred DSC measurements of the same test specimen. The mean of all the measured values was 28.45 J/g. The deviation from the conventional true value, or accepted reference value, of 28.51 J/g is the systematic error of measurement. The spread of the measured values due to random errors of measurement is 0.12 J/g (standard deviation).
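A minimal sketch of the blank-curve correction described above is shown below. It assumes the sample and blank runs are available as temperature and mass arrays; the function name and data layout are hypothetical and not part of any instrument software.

```python
# Minimal sketch of a TGA blank-curve correction (hypothetical names).
import numpy as np

def blank_corrected_mass(temp_sample, mass_sample, temp_blank, mass_blank):
    """Subtract the blank (empty-crucible) curve from the sample curve.

    The blank curve is interpolated onto the temperature points of the
    sample run so the two curves can be subtracted point by point.
    temp_blank must be monotonically increasing for np.interp.
    """
    blank_on_sample_grid = np.interp(temp_sample, temp_blank, mass_blank)
    return mass_sample - blank_on_sample_grid
```

Because the buoyancy effect depends only on the furnace atmosphere, the crucible, and the temperature program, the blank run reproduces the same apparent mass gain as the sample run, and the subtraction removes this systematic contribution from the sample curve. Identical conditions (crucible, heating rate, gas flow) are therefore essential.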
The terms trueness, precision, and accuracy can be explained by displaying a set of measurement values on a target board (Figure 2). The true value is assumed to be at the center of the target. The smaller the systematic error of measurement of the individual values, the better the trueness of the mean value of the measurement series.
Accuracy is a term that involves both trueness and precision. Trueness describes the closeness of agreement between the mean value of a series of measurements and an accepted reference value or conventional true value; it is a measure of the systematic error of measurement. Precision describes the closeness of agreement between the individual values obtained in a measurement series, that is, the scatter or spread of the values; it is a measure of the random error of measurement.
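Under these definitions, trueness and precision for a measurement series such as the indium example above can be quantified in a few lines. The sketch below uses a short illustrative list in place of the one hundred values of Table 1, which are not reproduced here.

```python
# Minimal sketch: quantifying trueness (bias) and precision (spread)
# for a measurement series against an accepted reference value.
import statistics

REFERENCE_VALUE = 28.51  # accepted reference value for indium, in J/g

# Illustrative stand-in for the 100 results of Table 1 (values assumed).
values = [28.31, 28.52, 28.44, 28.60, 28.38, 28.49, 28.55, 28.41]

mean_value = statistics.mean(values)
bias = mean_value - REFERENCE_VALUE   # trueness: systematic error
spread = statistics.stdev(values)     # precision: random error

print(f"mean value: {mean_value:.2f} J/g")
print(f"bias:       {bias:+.2f} J/g")
print(f"spread:     {spread:.2f} J/g (standard deviation)")
```

Note that the bias can only be evaluated when a reference value is available, whereas the spread can always be estimated from the series itself.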
The most important causes of measurement errors are influences of the measurement procedure, instrumental influences, sampling and sample preparation, environmental influences, the choice of experimental parameters, the evaluation methodology, time-dependent factors, and shortcomings of the operator.
The accuracy of analytical measurement results can be improved through detailed understanding of the measurement process and competent method development.