This article sheds light on the importance of ruling out error by properly calibrating the analyser in an analytical instrumentation system. By Doug Nordstrom and Tony Waters.
In many analytical instrumentation systems, the analyser does not provide an absolute measurement. Rather, it provides a relative response based on settings established during calibration, which is a critical process subject to significant error.
To calibrate an analyser, a calibration fluid of known contents and quantities is passed through it, producing measurements of component concentration. If these measurements are not consistent with the known quantities in the calibration fluid, the analyser is adjusted accordingly. Later, when process samples are analysed, the accuracy of the analyser’s reading will depend on the accuracy of the calibration process.
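To make that dependence concrete, here is a minimal sketch, in Python, of a single-point span calibration; the certified concentration, readings and function names are hypothetical, chosen for illustration rather than taken from any particular analyser.

```python
# Minimal sketch of a single-point span calibration (hypothetical values).
# The analyser's raw response is scaled so that the calibration gas reads
# at its certified concentration; later sample readings reuse that factor.

CERTIFIED_PPM = 500.0  # known concentration of the calibration gas

def span_factor(measured_ppm: float) -> float:
    """Correction factor mapping the measured reading onto the certified value."""
    return CERTIFIED_PPM / measured_ppm

def corrected_reading(raw_ppm: float, factor: float) -> float:
    """Apply the span factor to a subsequent process-sample reading."""
    return raw_ppm * factor

factor = span_factor(487.0)              # analyser read 487 ppm on a 500 ppm standard
print(corrected_reading(210.0, factor))  # a later sample reading, corrected
```

Every subsequent sample reading inherits this factor, which is why any error that enters during calibration propagates into routine measurements.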
It is therefore imperative that we understand how error or contamination can be introduced through calibration; when calibration can — and cannot — address a perceived performance issue with the analyser; how atmospheric pressure or temperature fluctuations can undo the work of calibration; and when and when not to calibrate.
System design
One common problem in calibration is incorrect system configuration. In many cases, the calibration fluid is mistakenly introduced downstream of the stream selection valve system, without the benefits of a double block and bleed configuration (Figure 1). A better place to introduce the calibration fluid is through the sample stream selection system, as in Figure 2. The purpose of a sample stream selection system is to enable rapid switching from one sample stream to another without the risk of cross-contamination.
In figures 1 and 2, each stream in the sample stream selection system is outfitted with two block valves and a bleed valve (to vent) to ensure that one stream — and only one stream — is making its way to the analyser at one time. Over the years, stream selection systems have evolved from double block and bleed (DBB) configurations composed of conventional components to modular, miniaturised systems (ANSI/ISA 76.00.02, developed under the New Sampling/Sensor Initiative). The most efficient systems provide fast purge times, low valve actuation pressures and enhanced safety characteristics, together with high flow capacity and a consistent pressure drop from stream to stream for a predictable delivery time to the analyser.
A stream selection system provides the greatest insurance against the possibility of the calibration fluid leaking into a sample stream. Nevertheless, some technicians will bypass this assembly and locate the calibration fluid as close as possible to the analyser with the intent of conserving this expensive fluid. If only a single ball valve is employed, as in Figure 1, the attempt to conserve calibration gas may result in biased analyser readings. The analyser may be properly calibrated, but there is always the risk that a small amount of calibration gas could leak into the sample stream and throw off the measurements.
Limitations of calibration
To effectively calibrate an analyser, the operator, technician or engineer should understand what calibration is in principle, what it can correct and what it cannot. Let’s start with the difference between precision and accuracy.
A shooter’s target is a good metaphor for explanatory purposes. In Figure 3, the shooter has produced a series of hits (in red) on the target. Since the hits form one tight cluster, the shooter can rightly be called precise: time and again, the shots land in the same place. Precision yields repeatable outcomes. However, the shots are not hitting the centre of the target, so the shooter is not accurate. If the shooter makes an adjustment and lands all of the hits in the centre of the target, the shooting will be both precise and accurate.
The same terms can be applied to analysers. An analyser must first be precise. It must yield repeatable results when presented with a known quantity in the form of a calibration fluid. If it does not, then the analyser is malfunctioning or the system is not maintaining the sample at constant conditions. Calibration cannot correct for imprecision.
If the analyser produces consistent results but the results are not the same as the known composition of the calibration fluid, then the analyser is said to be inaccurate. This situation can and should be addressed through calibration. This is called correcting the bias.
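The distinction can be put in numbers. The sketch below, using made-up readings of a calibration gas, computes repeatability (the spread of repeated measurements) and bias (the consistent offset from the certified value); calibration can remove the bias, but it cannot shrink the spread.

```python
# Sketch separating precision from accuracy with repeated readings of a
# calibration gas (illustrative numbers only).

from statistics import mean, stdev

CERTIFIED_PPM = 500.0
readings = [483.9, 484.2, 484.0, 484.1, 483.8]  # tight cluster: precise

repeatability = stdev(readings)        # small spread -> precise
bias = mean(readings) - CERTIFIED_PPM  # consistent offset -> inaccurate

print(f"repeatability (std dev): {repeatability:.2f} ppm")
print(f"bias: {bias:.1f} ppm")  # calibration corrects the bias, not the spread
```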
Even if the analyser is found to be precise and accurate when tested with calibration fluids, it is still possible that it will yield inaccurate results when analysing the sample stream. If the analyser is asked to count red molecules and it encounters pink ones, what does it do? The pink molecules look red to the analyser so it counts them as red, resulting in an inflated red count. This is called positive interference: A molecule that should not be counted is counted because, to the analyser, it looks similar to the molecule that should be counted. For example, in a system designed to count propane molecules, propylene molecules may show up. It’s possible that the analyser will count them as propane because it was not configured to make a distinction between the two.
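A toy model makes the effect easy to see. In the sketch below, the cross-sensitivity factor is an invented number standing in for an analyser that responds partially to propylene while counting propane.

```python
# Toy model of positive interference (hypothetical cross-sensitivity).
# The analyser reports propane plus a fraction of the propylene it cannot
# distinguish from propane, inflating the apparent propane reading.

K_CROSS = 0.8  # assumed relative response to propylene; illustrative only

def reported_propane(propane_ppm: float, propylene_ppm: float) -> float:
    return propane_ppm + K_CROSS * propylene_ppm

print(reported_propane(100.0, 20.0))  # reads 116 ppm for a true 100 ppm
```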
No analyser is perfect, but they all strive for ‘selectivity’, which means they respond to just the molecules you want them to and not to anything else. Some analysers are more complex and are programmed to chemically inhibit certain types of interference.
Atmospheric changes in gas analysers
Gas analysers are essentially molecule counters. When they are calibrated, a known concentration of gas is introduced, and the analyser’s output is checked to ensure that it is counting correctly. But what happens when the atmospheric pressure changes by five to ten per cent, as it is known to do in some climates?
The number of molecules in a given volume will vary with the change in atmospheric pressure, and as a result the analyser’s count will change. There is a common misperception that atmospheric pressure is a constant 14.7 psia (1.01 bar absolute), but, depending on the weather, it may fluctuate by as much as 1 psi (0.07 bar) up or down.
In order for the calibration process to be effective, absolute pressure in the sampling system during calibration and during analysis of samples must be the same. Absolute pressure may be defined as the total pressure above a perfect vacuum. In a sampling system, it would be the system pressure as measured by a gauge, plus atmospheric pressure.
To understand the degree of fluctuation in measurement that may be brought about by changes in absolute pressure, let’s refer to the perfect gas law: PV = nRT, where P = absolute pressure, psia; V = volume, cubic inches; n = number of moles (a count of molecules); R = the gas constant; and T = absolute temperature, °R (Rankine).
Rearranging this equation to read n = PV/RT shows that as temperature and pressure change, the number of molecules present in a fixed volume also changes. Pressure changes are more critical than temperature fluctuations. One atmosphere of pressure is defined as 14.7 psi, so a 1 psi variation in pressure changes the number of molecules in the analyser volume by about seven per cent. Temperature, on the other hand, must be reckoned on the absolute scale, where absolute zero is -460°F (-273°C), so a 1°F (0.5°C) variation at ambient conditions changes the number of molecules by only about 0.2 per cent. In short, a large percentage change in pressure is quite likely; a large percentage change in absolute temperature is not.
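These percentages fall straight out of n = PV/RT, as the short sketch below shows; the nominal base pressure and ambient temperature are assumptions chosen for illustration.

```python
# Sketch of how the molecule count in a fixed volume shifts with absolute
# pressure and temperature, from n = PV/RT (nominal ambient values assumed).

BASE_PSIA = 14.7         # nominal atmospheric pressure, psia
BASE_RANKINE = 77 + 460  # 77 degF on the absolute (Rankine) scale

# A 1 psi weather-driven swing changes n in direct proportion to P:
pressure_shift = 1.0 / BASE_PSIA        # about 0.068, i.e. roughly 7 per cent

# A 1 degF swing changes n in inverse proportion to absolute T:
temperature_shift = 1.0 / BASE_RANKINE  # about 0.0019, i.e. roughly 0.2 per cent

print(f"1 psi swing:  {pressure_shift:.1%} change in molecule count")
print(f"1 degF swing: {temperature_shift:.2%} change in molecule count")
```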
Validation versus calibration
The best method for calibration is one that employs an automated system of regular validation with statistical process control. Validation is the process of checking the analyser at regular intervals to determine whether it is on or off target. A reading is taken and recorded; it is the same process as calibration, except that no correction is made.
An automated system will run a validation check at regular intervals, usually once a day, and analyse the outcome for any problem that would require an adjustment or re-calibration. The system will allow for inevitable ups and downs but if it observes a consistent trend — one that is not correcting itself — then it alerts the operator that the system could be going catastrophically wrong.
A human being can manually validate a system at regular intervals, just like an automated system, but, more often than not, the human being will also make an adjustment to the analyser, even if the system is just one per cent off. The result is a series of occasional and minor adjustments that introduce additional variance and make it difficult to analyse trends and determine when the system is truly running off course. It is better to allow an automated system to run unattended until a statistical analysis of the results suggests that attention is required.
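As a rough illustration of the statistical approach, the sketch below flags a validation history only when a reading breaks classic three-sigma control limits or when several consecutive readings drift in the same direction; the data, limits and trend rule are hypothetical, and a real system would apply site-specific rules.

```python
# Sketch of a statistical-process-control check on daily validation readings
# (hypothetical data and rules; real systems use site-specific criteria).

from statistics import mean, stdev

def needs_attention(history: list[float], latest: float,
                    trend_len: int = 5) -> bool:
    """Flag the analyser when a validation reading falls outside control
    limits, or when trend_len readings in a row move in one direction."""
    centre = mean(history)
    limit = 3 * stdev(history)  # classic 3-sigma control limits
    if abs(latest - centre) > limit:
        return True
    recent = history[-(trend_len - 1):] + [latest]
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)

history = [500.2, 499.8, 500.1, 499.9, 500.3, 500.6, 500.9, 501.2]
print(needs_attention(history, 501.5))  # steady upward drift -> True
```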
Conclusion
Calibration is an important process and an absolute requirement in analytical systems, but care must be taken to perform this process properly. The operator, technician or engineer should understand how best to introduce the calibration gas into the system (i.e., through a DBB configuration so the possibility of cross-stream contamination is minimised) and how to control for atmospheric fluctuations in gas analysers (i.e., through an absolute pressure regulator).
Further, the technician or operator should understand the limitations of calibration — what problems it can address and what problems it cannot — and how frequent adjustments to the analyser based on incomplete data can introduce error. If the analyser is regularly validated with an automated system and is properly calibrated when a statistical analysis justifies it, then calibration will function as it should, and provide an important service in enabling the analyser to provide accurate measurements.
[Doug Nordstrom is market manager for analytical instrumentation at Swagelok Company, while Tony Waters is a US-based process engineer who holds process analyser training courses.]