We’re covering pressure measurement next because it is required in most industrial process control systems and is the next-most measured process parameter after temperature. We shall see that many different types of pressure-sensing and pressure-measurement systems are available to satisfy this requirement. However, before considering these in detail, it’s important to understand that pressure can be quantified in three alternative ways: as absolute pressure, gauge pressure, or differential pressure.
The formal definitions of these are as follows.
Absolute pressure: This is the difference between the pressure of the fluid and the absolute zero of pressure.
Gauge pressure: This describes the difference between the pressure of a fluid and atmospheric pressure. Absolute and gauge pressures are therefore related by the expression:
Absolute pressure = Gauge pressure + Atmospheric pressure
A typical value of atmospheric pressure is 1.013 bar. However, because atmospheric pressure varies with altitude as well as with weather conditions, it’s not a fixed quantity.
Therefore, because gauge pressure is related to atmospheric pressure, it also is not a fixed quantity.
Differential pressure: This term is used to describe the difference between two absolute pressure values, such as the pressures at two different points within the same fluid (often between the two sides of a flow restrictor in a system measuring volume flow rate).
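As a quick sketch, the relationships between these three conventions can be expressed in a few lines of Python; the function names and the 1.013 bar atmospheric value are illustrative assumptions, not standard API names.

```python
# Illustrative sketch of the three pressure conventions defined above.

ATMOSPHERIC_BAR = 1.013  # typical sea-level value; varies with altitude and weather

def absolute_from_gauge(gauge_bar, atmospheric_bar=ATMOSPHERIC_BAR):
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return gauge_bar + atmospheric_bar

def differential(p1_abs_bar, p2_abs_bar):
    """Differential pressure: the difference between two absolute pressures."""
    return p1_abs_bar - p2_abs_bar

# A tire gauge reading 2.2 bar (g) corresponds to an absolute pressure of:
p_abs = absolute_from_gauge(2.2)   # about 3.213 bar (abs)
```

Note that because the true atmospheric value drifts with the weather, a gauge reading converted this way is only as good as the atmospheric figure used.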
Pressure is a quantity derived from the fundamental quantities of force and area and is usually measured in terms of the force acting on a known area. The SI unit of pressure is the pascal, which can alternatively be expressed as newtons per square meter. The bar, which is equal to 100,000 pascal, is a related metric unit that is more suited to the range of pressures most commonly met in practice. The pound per square inch is not an SI unit, but is still in widespread use, especially in the United States and Canada. Pressures are also sometimes expressed as inches of mercury or inches of water, particularly when measuring blood pressure or pressures in gas pipelines. These two units derive from the height of the liquid column in manometers, which were a very common method of pressure measurement in the past. The torr is a further unit of measurement, used particularly to express low pressures (1 torr = 133.3 pascal).
To avoid ambiguity in pressure measurements, it’s usual to append one or more letters in parentheses after the pressure value to indicate whether it’s an absolute, gauge, or differential pressure: (a) or (abs) indicates absolute pressure, (g) indicates gauge pressure, and (d) specifies differential pressure. Thus, 2.57 bar (g) means that the pressure is 2.57 bar measured as gauge pressure. In the case of the pounds per square inch unit of pressure measurement, which is still in widespread use, it’s usual to express absolute, gauge, and differential pressure as psia, psig, and psid, respectively.
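The unit relationships quoted above can be collected into a small conversion helper. This is an illustrative sketch; the factors are the standard definitions (1 bar = 100,000 Pa, 1 psi = 6894.757 Pa, 1 torr = 133.322 Pa), and the function name is an arbitrary choice.

```python
# Conversion factors for the pressure units discussed above,
# expressed relative to the pascal.
PA_PER_UNIT = {
    "Pa":   1.0,
    "bar":  1.0e5,      # 1 bar = 100,000 Pa
    "psi":  6894.757,   # pound-force per square inch
    "torr": 133.322,    # approximately 1 mmHg
}

def convert(value, from_unit, to_unit):
    """Convert a pressure value between any two of the units above."""
    return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

# 2.57 bar (g) expressed in psi; the gauge suffix carries over unchanged,
# so this reading would be quoted as about 37.3 psig:
p_psig = convert(2.57, "bar", "psi")
```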
Absolute pressure measurements are made for such purposes as aircraft altitude measurement (in instruments known as altimeters) and when quantifying atmospheric pressure. Very low pressures are also normally measured as absolute pressure values. Gauge pressure measurements are made by instruments such as those measuring the pressure in vehicle tires and those measuring pressure at various points in industrial processes. Differential pressure is measured for some purposes in industrial processes, especially as part of some fluid flow rate-measuring devices.
In most applications, typical values of pressure measured range from 1.013 bar (the mean atmospheric pressure) up to 7000 bar. This is considered to be the "normal" pressure range, and a large number of pressure sensors are available that can measure pressures in this range.
Measurement requirements outside this range are much less common. While some of the pressure sensors developed for the "normal" range can also measure pressures that are lower or higher than this, it’s preferable to use instruments specially designed to satisfy such low- and high-pressure measurement requirements. In the case of low pressures, such special instruments are commonly known as vacuum gauges.
Our discussion summarizes the main types of pressure sensors in use. This discussion is concerned primarily only with the measurement of static pressure, because the measurement of dynamic pressure is a very specialized area that is not of general interest. In general, dynamic pressure measurement requires special instruments, although modified versions of diaphragm-type sensors can also be used if they contain a suitable displacement sensor (usually either a piezoelectric crystal or a capacitive element).
The diaphragm is one of three types of elastic-element pressure transducers. Applied pressure causes displacement of the diaphragm and this movement is measured by a displacement transducer. Different versions of diaphragm sensors can measure both absolute pressure (up to 50 bar) and gauge pressure (up to 2000 bar) according to whether the space on one side of the diaphragm is, respectively, evacuated or open to the atmosphere. A diaphragm can also be used to measure differential pressure (up to 2.5 bar) by applying the two pressures to the two sides of the diaphragm. The diaphragm can be plastic, metal alloy, stainless steel, or ceramic. Plastic diaphragms are the least expensive, but metal diaphragms give better accuracy. Stainless steel is normally used in high temperature or corrosive environments.
Ceramic diaphragms are resistant even to strong acids and alkalis and are used when the operating environment is particularly harsh. The name aneroid gauge is sometimes used to describe this type of gauge when the diaphragm is metallic.
The typical magnitude of diaphragm displacement is 0.1 mm, which is well suited to a strain gauge type of displacement-measuring transducer, although other forms of displacement measurements are also used in some kinds of diaphragm-based sensors. If the displacement is measured with strain gauges, it’s normal to use four strain gauges arranged in a bridge circuit configuration. The output voltage from the bridge is a function of the resistance change due to the strain in the diaphragm. This arrangement automatically provides compensation for environmental temperature changes. Older pressure transducers of this type used metallic strain gauges bonded to a diaphragm typically made of stainless steel. However, apart from manufacturing difficulties arising from the problem of bonding the gauges, metallic strain gauges have a low gauge factor, which means that the low output from the strain gauge bridge has to be amplified by an expensive d.c. amplifier. The development of semiconductor (piezoresistive) strain gauges provided a solution to the low-output problem, as they have gauge factors up to 100 times greater than metallic gauges. However, the difficulty of bonding gauges to the diaphragm remained and a new problem emerged regarding the highly nonlinear characteristic of the strain-output relationship.
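The benefit of the higher gauge factor can be seen with a rough full-bridge calculation. This is a sketch under the small-strain approximation, assuming four active gauges (two in tension, two in compression); the function name and numeric values are illustrative, not from the text.

```python
# Approximate output of a four-active-arm Wheatstone strain-gauge bridge:
# Vout = Vex * GF * strain (small-strain approximation, equal and opposite
# strains on the two gauge pairs).

def full_bridge_output(v_excitation, gauge_factor, strain):
    """Approximate output voltage of a full strain-gauge bridge."""
    return v_excitation * gauge_factor * strain

# Metallic gauges (GF about 2) versus semiconductor gauges (GF up to ~200):
v_metal = full_bridge_output(5.0, 2.0, 200e-6)    # 2 mV: needs d.c. amplification
v_semi  = full_bridge_output(5.0, 200.0, 200e-6)  # 0.2 V: roughly 100x larger
```

The hundred-fold difference in output is the reason semiconductor gauges removed the need for the expensive d.c. amplifier mentioned above.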
The problem of strain-gauge bonding was solved with the emergence of monolithic piezoresistive pressure transducers. These have a typical measurement uncertainty of ±0.5% and are now the most commonly used type of diaphragm pressure transducer. The monolithic cell consists of a diaphragm made of a silicon sheet into which resistors are diffused during the manufacturing process. Such pressure transducers can be made very small and are often known as micro-sensors.
Also, in addition to avoiding the difficulty with bonding, such monolithic silicon measuring cells have the advantage of being very inexpensive to manufacture in large quantities. Although the inconvenience of a nonlinear characteristic remains, this is normally overcome by processing the output signal with an active linearization circuit or incorporating the cell into a microprocessor-based intelligent measuring transducer. The latter usually provide analogue-to-digital conversion and interrupt facilities within a single chip and give a digital output that is integrated readily into computer control schemes. Such instruments can also offer automatic temperature compensation, built-in diagnostics, and simple calibration procedures. These features allow measurement inaccuracy to be reduced to a value as low as ±0.1% of full-scale reading.
Capacitive Pressure Sensor
A capacitive pressure sensor is simply a diaphragm-type device in which diaphragm displacement is determined by measuring the capacitance change between the diaphragm and a metal plate that is close to it. Such devices are in common use and are sometimes known as Baratron gauges. It’s also possible to fabricate capacitive elements in a silicon chip and thus form very small micro-sensors. These have a typical measurement uncertainty of ±0.2%.
Fiber-Optic Pressure Sensors
Fiber-optic sensors, also known as optical pressure sensors, provide an alternative method of measuring displacements in diaphragm and Bourdon tube pressure sensors by optoelectronic means and enable the resulting sensors to have a lower mass and size compared with sensors in which displacement is measured by other methods. The shutter sensor described earlier in Section 13 is one form of fiber-optic displacement sensor. Another form is the fotonic sensor, in which light travels from a light source down an optical fiber, is reflected back from a diaphragm, and then travels back along a second fiber to a photodetector.
There is a characteristic relationship between the light reflected and the distance from the fiber ends to the diaphragm, thus making the amount of reflected light dependent on the diaphragm displacement and hence the measured pressure.
Apart from the mass and size advantages of fiber-optic displacement sensors, the output signal is immune to electromagnetic noise. However, measurement accuracy is usually inferior to that provided by alternative displacement sensors, and the choice of such sensors also incurs a cost penalty. Thus, sensors using fiber optics to measure diaphragm or Bourdon tube displacement tend to be limited to applications where their small size, low mass, and immunity to electromagnetic noise are particularly advantageous.
Apart from their limited use with diaphragm and Bourdon tube sensors, fiber-optic cables are also used in several other ways to measure pressure. One such device is the microbend sensor, in which the refractive index of the fiber (and hence the intensity of light transmitted) varies according to the mechanical deformation of the fiber caused by pressure. The sensitivity of pressure measurement can be optimized by applying pressure via a roller chain such that the bending is applied periodically. The optimal pitch for the chain varies according to the radius, refractive index, and type of cable involved. Microbend sensors are typically used to measure the small pressure changes generated in vortex-shedding flowmeters. When fiber-optic sensors are used in this flow-measurement role, an alternative arrangement can be used in which a fiber-optic cable is simply stretched across the pipe. This often simplifies the detection of vortices.
Phase-modulating fiber-optic pressure sensors also exist. The mode of operation of these was discussed earlier.
Bellows, illustrated schematically, are another elastic-element type of pressure sensor that operates on very similar principles to the diaphragm pressure sensor.
Pressure changes within the bellows, which are typically fabricated as a seamless tube of either metal or metal alloy, produce translational motion of the end of the bellows that can be measured by capacitive, inductive (LVDT), or potentiometric transducers.
Different versions can measure either absolute pressure (up to 2.5 bar) or gauge pressure (up to 150 bar). Double-bellows versions also exist that are designed to measure differential pressures of up to 30 bar.
Bellows have a typical measurement uncertainty of only ±0.5%, but have a relatively high manufacturing cost and are prone to failure. Their principal attribute in the past has been their greater measurement sensitivity compared with diaphragm sensors.
However, advances in electronics mean that the high-sensitivity requirement can usually be satisfied now by diaphragm-type devices, and usage of bellows is therefore falling.
[Figure: Bourdon tube configurations, showing (b) the spiral type and (c) the helical type, each with the unknown pressure applied at the fixed end.]
The Bourdon tube is also an elastic element type of pressure transducer. It’s relatively inexpensive and is used commonly for measuring the gauge pressure of both gaseous and liquid fluids. It consists of a specially shaped piece of oval-section, flexible, metal tube that is fixed at one end and free to move at the other end. When pressure is applied at the open, fixed end of the tube, the oval cross section becomes more circular. In consequence, there is displacement of the free end of the tube. This displacement is measured by some form of displacement transducer, which is commonly a potentiometer or LVDT. Capacitive and optical sensors are also sometimes used to measure the displacement.
The three common shapes of Bourdon tubes are shown. The maximum possible deflection of the free end of the tube is proportional to the angle subtended by the arc through which the tube is bent. For a C-type tube, the maximum value for this arc is somewhat less than 360°.
Where greater measurement sensitivity and resolution are required, spiral and helical tubes are used. These both give much greater deflection at the free end for a given applied pressure. However, this increased measurement performance is only gained at the expense of a substantial increase in manufacturing difficulty and cost compared with C-type tubes and is also associated with a large decrease in the maximum pressure that can be measured. Spiral and helical types are sometimes provided with a rotating pointer that moves against a scale to give a visual indication of the measured pressure.
C-type tubes are available for measuring pressures up to 6000 bar. A typical C-type tube of 25 mm radius has a maximum displacement travel of 4 mm, giving a moderate level of measurement resolution. Measurement inaccuracy is typically quoted at ±1% of full-scale deflection. Similar accuracy is available from helical and spiral types, but while the measurement resolution is higher, the maximum pressure measurable is only 700 bar.
The existence of one potentially major source of error in Bourdon tube pressure measurement has not been widely documented, and few manufacturers of Bourdon tubes make any attempt to warn users of their products appropriately. The problem is concerned with the relationship between the fluid being measured and the fluid used for calibration. The pointer of Bourdon tubes is normally set at zero during manufacture, using air as the calibration medium. However, if a different fluid, especially a liquid, is subsequently used with a Bourdon tube, the fluid in the tube will cause a nonzero deflection according to its weight compared with air, resulting in a reading error of up to 6%. This can be avoided by calibrating the Bourdon tube with the fluid to be measured instead of with air, assuming of course that the user is aware of the problem.
Alternatively, correction can be made according to the calculated weight of the fluid in the tube.
Unfortunately, difficulties arise with both of these solutions if air is trapped in the tube, as this will prevent the tube being filled completely by the fluid. Then, the amount of fluid actually in the tube, and its weight, will be unknown.
In conclusion, therefore, Bourdon tubes only have guaranteed accuracy limits when measuring gaseous pressures. Their use for accurate measurement of liquid pressures poses great difficulty unless the gauge can be totally filled with liquid during both calibration and measurement, a condition that is very difficult to fulfill practically.
Manometers are passive instruments that give a visual indication of pressure values. Various types exist.
The U-tube manometer, shown, is the most common form of manometer.
Applied pressure causes a displacement of liquid inside the U-shaped glass tube, and the output pressure reading, P, is made by observing the difference, h, between the levels of liquid in the two halves of the tube, A and B, according to the equation P = hρg, where ρ is the mass density of the manometer liquid and g is the acceleration due to gravity. If an unknown pressure is applied to side A, and side B is open to the atmosphere, the output reading is gauge pressure. Alternatively, if side B of the tube is sealed and evacuated, the output reading is absolute pressure. The U-tube manometer also measures differential pressure, (p1 − p2), according to the expression (p1 − p2) = hρg, if two unknown pressures p1 and p2 are applied, respectively, to sides A and B of the tube.
Output readings from U-tube manometers are subject to error, principally because it’s very difficult to judge exactly where the meniscus levels of the liquid are in the two halves of the tube. In absolute pressure measurement, an additional error occurs because it’s impossible to totally evacuate the closed end of the tube.
U-tube manometers are typically used to measure gauge and differential pressures up to about 2 bar. The type of liquid used in the instrument depends on the pressure and characteristics of the fluid being measured. Water is an inexpensive and convenient choice, but it evaporates easily and is difficult to see. Nevertheless, it’s used extensively, with the major obstacles to its use being overcome by using colored water and by regularly topping up the tube to counteract evaporation. However, water is definitely not used when measuring the pressure of fluids that react with or dissolve in water. Water is also unsuitable when high pressure measurements are required. In such circumstances, liquids such as aniline, carbon tetrachloride, bromoform, mercury, or transformer oil are used instead.
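The manometer relation P = hρg can be evaluated directly. This is a minimal sketch, assuming standard approximate liquid densities; it also shows why a dense liquid such as mercury suits higher pressures, since the column height becomes small.

```python
G = 9.81  # acceleration due to gravity, m/s^2

def manometer_pressure(h_m, density_kg_m3):
    """Pressure difference indicated by a column-height difference h:
    P = h * rho * g, where rho is the mass density of the liquid."""
    return h_m * density_kg_m3 * G

# A 200 mm level difference with water (rho approx. 1000 kg/m^3):
p_water = manometer_pressure(0.200, 1000.0)    # 1962 Pa, about 19.6 mbar

# The same pressure read with mercury (rho approx. 13,546 kg/m^3) gives a
# much smaller, harder-to-read column height:
h_mercury = p_water / (13546.0 * G)            # about 14.8 mm
```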
Well-Type Manometer (Cistern Manometer)
The well-type or cistern manometer, shown, is similar to a U-tube manometer but one-half of the tube is made very large so that it forms a well. The change in the level of the well as the measured pressure varies is negligible. Therefore, the liquid level in only one tube has to be measured, which makes the instrument much easier to use than the U-tube manometer.
If an unknown pressure, p1, is applied to port A and port B is open to the atmosphere, the gauge pressure is given by p1 = hρg. It might appear that the instrument would give better measurement accuracy than the U-tube manometer because the need to subtract two liquid-level measurements in order to arrive at the pressure value is avoided. However, this benefit is swamped by errors that arise due to typical cross-sectional area variations in the glass used to make the tube. Such variations don’t affect the accuracy of the U-tube manometer to the same extent.
Inclined Manometer (Draft Gauge)
The inclined manometer or draft gauge shown is a variation on the well-type manometer in which one leg of the tube is inclined to increase measurement sensitivity.
However, similar comments to those given earlier apply about accuracy.
Resonant Wire Devices
A typical resonant wire device is shown. A wire is stretched, within a magnetic field, across a chamber containing fluid at unknown pressure. The wire resonates at its natural frequency, which depends on its tension, and the tension in turn varies with pressure. Thus pressure is calculated by measuring the frequency of vibration of the wire. Such frequency measurement is normally carried out by electronics integrated into the cell. Such devices are highly accurate, with a typical inaccuracy figure of ±0.2% of full-scale reading. They are also particularly insensitive to ambient condition changes and can measure pressures between 5 mbar and 2 bar.
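The frequency-tension relationship underlying the device is the standard stretched-wire formula f = (1/2L)·sqrt(T/μ). The wire dimensions below are purely illustrative assumptions, and the tension-pressure relationship of a real cell is device-specific and not given in the text.

```python
import math

def wire_resonant_frequency(tension_N, length_m, mass_per_length_kg_m):
    """Fundamental resonant frequency of a stretched wire:
    f = (1 / (2 * L)) * sqrt(T / mu).
    The tension T varies with the applied pressure, so measuring f
    gives a measure of pressure."""
    return (1.0 / (2.0 * length_m)) * math.sqrt(tension_N / mass_per_length_kg_m)

# Illustrative values only: a 50 mm wire of 1 g/m, with pressure raising
# the tension from 10 N to 11 N:
f_low  = wire_resonant_frequency(10.0, 0.050, 1.0e-3)   # 1000 Hz
f_high = wire_resonant_frequency(11.0, 0.050, 1.0e-3)   # about 1049 Hz
```

Because frequency can be measured very precisely by the integrated electronics, even small tension changes translate into a resolvable output.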
Electronic Pressure Gauges
This section is included because many instrument manufacturers' catalogues have a section entitled "electronic pressure gauges." However, in reality, electronic pressure gauges are merely special forms of the pressure gauges described earlier in which electronic techniques are applied to improve performance. All of the following commonly appear in instrument catalogues under the heading "electronic pressure gauges."
Piezoresistive pressure transducer: This diaphragm-type sensor uses piezoresistive strain gauges to measure diaphragm displacement.
Piezoelectric pressure transducer: This diaphragm-type sensor uses a piezoelectric crystal to measure diaphragm displacement.
Magnetic pressure transducer: This class of diaphragm-type device measures diaphragm displacement magnetically using inductive, variable reluctance, or eddy current sensors.
Capacitive pressure transducer: This diaphragm-type sensor measures variation in capacitance between the diaphragm and a fixed metal plate close to it.
Fiber-optic pressure sensor: Known alternatively as an optical pressure sensor, this uses a fiber-optic sensor to measure the displacement of either a diaphragm or a Bourdon tube pressure sensor.
Potentiometric pressure sensor: This is a device where the translational motion of a bellows-type pressure sensor is connected to the sliding element of an electrical potentiometer.
Resonant pressure transducer: This is a form of resonant wire pressure-measuring device in which the pressure-induced frequency change is measured by electronics integrated into the device.
Special Measurement Devices for Low Pressures
The term vacuum gauge is applied commonly to describe any pressure sensor designed to measure pressures in the vacuum range (pressures less than atmospheric pressure, i.e., below 1.013 bar). Many special versions of the types of pressure transducers described earlier have been developed for measurement in the vacuum range. The typical minimum pressures measurable by these special forms of "normal" pressure-measuring instruments are 10 mbar (Bourdon tubes), 0.1 mbar (manometers and bellows-type instruments), and 0.001 mbar (diaphragms). However, in addition to these special versions of normal instruments, a number of other devices have been specifically developed for measurement of pressures below atmospheric pressure. These special devices include the thermocouple gauge, the Pirani gauge, the thermistor gauge, the McLeod gauge, and the ionization gauge, and they are covered in more detail next. Unfortunately, all of these specialized instruments are quite expensive.
The thermocouple gauge is one of a group of gauges working on the thermal conductivity principle. At low pressure, the kinetic theory of gases predicts a linear relationship between pressure and thermal conductivity. Thus measurement of thermal conductivity gives an indication of pressure. Fig. shows a sketch of a thermocouple gauge. Operation of the gauge depends on the thermal conduction of heat between a thin hot metal strip in the center and the cold outer surface of a glass tube (which is normally at room temperature). The metal strip is heated by passing a current through it, and its temperature is measured by a thermocouple.
The temperature measured depends on the thermal conductivity of the gas in the tube and hence on its pressure. A source of error in this instrument is the fact that heat is also transferred by radiation as well as conduction. This error is of a constant magnitude, independent of pressure. Hence, it can be measured, and thus correction can be made for it. However, it’s usually more convenient to design for low radiation loss by choosing a heated element with low emissivity. Thermocouple gauges are typically used to measure pressures in the range 10^-4 mbar up to 1 mbar.
The thermistor gauge is identical in its mode of operation to the thermocouple gauge except that a thermistor, rather than a thermocouple, is used to measure the temperature of the heated metal strip. It’s commonly marketed under the name electronic vacuum gauge in a form that includes a digital light-emitting diode display and switchable output ranges.
A typical form of Pirani gauge is shown. This is similar to a thermocouple gauge but has a heated element that consists of four coiled tungsten wires connected in parallel. Two identical tubes are normally used, connected in a bridge circuit, as shown, with one containing the gas at unknown pressure and the other evacuated to a very low pressure. Current is passed through the tungsten element, which attains a certain temperature according to the thermal conductivity of the gas. The resistance of the element changes with temperature and causes an imbalance of the measurement bridge. Thus, the Pirani gauge avoids the use of a thermocouple to measure temperature (as in the thermocouple gauge) by effectively using a resistance thermometer as the heated element. Such gauges cover the pressure range 10^-5 to 1 mbar.
Fig. shows the general form of a McLeod gauge in which low-pressure fluid is compressed to a higher pressure that is then read by manometer techniques. In essence, the gauge can be visualized as a U-tube manometer that is sealed at one end and where the bottom of the U can be blocked at will. To operate the gauge, the piston is first withdrawn. This causes the level of mercury in the lower part of the gauge to fall below the level of junction J between the two tubes marked Y and Z in the gauge. Fluid at unknown pressure Pu is then introduced via the tube marked Z, from where it also flows into the tube of cross-sectional area A marked Y. Next, the piston is pushed in, moving the mercury level up to block junction J.
At the stage where J is just blocked, the fluid in tube Y is at pressure Pu and is contained in a known volume, Vu. Further movement of the piston compresses the fluid in tube Y and this process continues until the mercury level in tube Z reaches a zero mark. Measurement of the height (h) above the mercury column in tube Y then allows calculation of the compressed volume of the fluid, Vc, as Vc=hA.
Then, by Boyle's law: PuVu=PcVc.
Also, applying the normal manometer equation, Pc = Pu + hρg, where ρ is the mass density of mercury, the pressure, Pu, can be calculated as:
Pu = Ah^2ρg / (Vu - Ah)
The compressed volume, Vc, is often very much smaller than the original volume, in which case the equation approximates to:
Pu ≈ Ah^2ρg / Vu
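The exact and approximate expressions can be compared numerically. The gauge dimensions below are illustrative assumptions, not values from the text.

```python
G = 9.81          # acceleration due to gravity, m/s^2
RHO_HG = 13546.0  # mass density of mercury, kg/m^3

def mcleod_pressure(h_m, area_m2, v_u_m3):
    """Exact McLeod gauge formula: Pu = A h^2 rho g / (Vu - A h), in Pa."""
    return (area_m2 * h_m**2 * RHO_HG * G) / (v_u_m3 - area_m2 * h_m)

def mcleod_pressure_approx(h_m, area_m2, v_u_m3):
    """Approximation valid when the compressed volume Vc = A h << Vu."""
    return (area_m2 * h_m**2 * RHO_HG * G) / v_u_m3

# Illustrative dimensions: 1 mm^2 tube bore, 100 cm^3 initial volume,
# mercury height difference h = 20 mm:
p_exact  = mcleod_pressure(0.020, 1.0e-6, 1.0e-4)         # about 0.53 Pa
p_approx = mcleod_pressure_approx(0.020, 1.0e-6, 1.0e-4)
# The two agree to better than 0.1% because A*h (2e-8 m^3) is tiny
# compared with Vu (1e-4 m^3).
```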
Although the smallest inaccuracy achievable with McLeod gauges is ±1%, this is still better than that achievable with most other gauges available for measuring pressures in this range.
Therefore, the McLeod gauge is often used as a standard against which other gauges are calibrated. The minimum pressure normally measurable is 10^-1 mbar, although lower pressures can be measured if pressure-dividing techniques are applied.
The ionization gauge is a special type of instrument used for measuring very low pressures in the range 10^-10 to 1 mbar. Normally, they are only used in laboratory conditions because their calibration is very sensitive to the composition of the gases in which they operate, and use of a mass spectrometer is often necessary to determine the gas composition around them. They exist in two forms known as a hot cathode and a cold cathode. The hot cathode form is shown schematically. In this, gas of unknown pressure is introduced into a glass vessel containing free electrons discharged from a heated filament, as shown.
Gas pressure is determined by measuring the current flowing between an anode and a cathode within the vessel. This current is proportional to the number of ions per unit volume, which in turn is proportional to the gas pressure. Cold cathode ionization gauges operate in a similar fashion except that the stream of electrons is produced by a high voltage electrical discharge.
High-Pressure Measurement (Greater than 7000 bar)
Measurement of pressures above 7000 bar is normally carried out electrically by monitoring the change of resistance of wires of special materials. Materials having resistance-pressure characteristics that are suitably linear and sensitive include manganin and gold-chromium alloys. A coil of such wire is enclosed in a sealed, kerosene-filled, flexible bellows. The unknown pressure is applied to one end of the bellows, which transmits the pressure to the coil. The magnitude of the applied pressure is then determined by measuring the coil resistance. Devices are often named according to the metal used in them, for example, the manganin wire pressure sensor and the gold-chromium wire pressure sensor. Pressures up to 30,000 bar can be measured by devices such as the manganin wire pressure sensor, with a typical inaccuracy of ±0.5%.
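As an illustrative sketch of the resistance-based approach: assuming a roughly linear characteristic, the applied pressure can be recovered by inverting R = R0·(1 + k·P). The coefficient below is an assumed order-of-magnitude value for manganin, not a calibrated figure; real sensors rely on individual calibration data.

```python
# Assumed order-of-magnitude pressure coefficient for manganin
# (roughly 2.5e-6 fractional resistance change per bar); illustrative only.
K_MANGANIN_PER_BAR = 2.5e-6

def pressure_from_resistance(r_measured_ohm, r_zero_ohm, k_per_bar=K_MANGANIN_PER_BAR):
    """Invert the approximately linear relation R = R0 * (1 + k * P)
    to recover the applied pressure in bar."""
    return (r_measured_ohm / r_zero_ohm - 1.0) / k_per_bar

# A 100-ohm coil reading 105 ohms would imply roughly 20,000 bar
# under this assumed coefficient:
p_bar = pressure_from_resistance(105.0, 100.0)
```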
Intelligent Pressure Transducers
Adding microprocessor power to pressure transducers brings about substantial improvements in their characteristics. Measurement sensitivity improvement, extended measurement range, compensation for hysteresis and other nonlinearities, and correction for ambient temperature and pressure changes are just some of the facilities offered by intelligent pressure transducers.
For example, inaccuracy values as low as ±0.1% can be achieved with silicon piezoresistive bridge devices.
Inclusion of microprocessors has also enabled the use of novel displacement-measurement techniques, for example, the optical method of displacement measurement. In this, the motion is transmitted to a vane that progressively shades one of two monolithic photodiodes exposed to infrared radiation. The second photodiode acts as a reference, enabling the microprocessor to compute a ratio signal that is linearized and is available as either an analogue or a digital measurement of pressure. The typical measurement inaccuracy is ±0.1%. Versions of both diaphragms and Bourdon tubes that use this technique are available.
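One plausible form of the ratio computation is sketched below; dividing the shaded measurement photodiode output by the reference output cancels common-mode variations in source intensity. The actual linearization performed by any real device is manufacturer-specific, so this is an assumption, not the documented algorithm.

```python
def shading_ratio(d_measurement, d_reference):
    """Ratio of measurement photodiode output to reference photodiode
    output. Common-mode changes in source brightness affect both diodes
    equally and cancel out of the ratio (illustrative sketch only)."""
    return d_measurement / d_reference

# If the infrared source dims by 10%, both photodiode outputs fall
# together and the ratio is unchanged:
r_bright = shading_ratio(0.60, 1.00)
r_dim    = shading_ratio(0.54, 0.90)   # same ratio as r_bright
```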
Differential Pressure-Measuring Devices
Differential pressure-measuring devices have two input ports. One unknown pressure is applied to each port, and instrument output is the difference between the two pressures. An alternative way to measure differential pressure would be to measure each pressure with a separate instrument and then subtract one reading from the other. However, this would produce a far less accurate measurement of the differential pressure because of the well-known problem that the process of subtracting measurements amplifies the inherent inaccuracy in each individual measurement. This is a particular problem when measuring differential pressures of low magnitude.
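The error-amplification argument can be made concrete with a small numerical example (the values are illustrative):

```python
# Why subtracting two separate readings degrades accuracy: absolute errors
# add in the difference, so the relative error of a small difference
# between two large readings is greatly amplified.

p1, p2 = 10.00, 9.90   # true pressures at the two points, bar
err = 0.05             # worst-case error of each separate instrument, bar

diff_true = p1 - p2                    # 0.10 bar
diff_worst = (p1 + err) - (p2 - err)   # 0.20 bar in the worst case
relative_error = (diff_worst - diff_true) / diff_true
# Each reading was accurate to 0.5% of its value, yet the subtracted
# result can be in error by 100% of the true differential pressure.
```

A true differential sensor, measuring the difference directly across one diaphragm, avoids this amplification entirely.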
Differential pressure can be measured by special forms of many of the pressure-measuring devices described earlier. Diaphragm pressure sensors, and their piezoresistive, piezoelectric, magnetic, capacitive, and fiber-optic variants, are all commonly available in a differential-pressure-measuring form in which the two pressures to be subtracted are applied to either side of the diaphragm. Double-bellows pressure transducers (including devices known as potentiometric pressure transducers) are also used, but are much less common than diaphragm-based sensors. A special form of U-tube manometer is also sometimes used when a visual indication of differential pressure values is required. This has the advantage of being a passive instrument that does not require a power supply, and it’s used commonly in liquid flow-rate indicators.
Selection of Pressure Sensors
Choice between the various types of instruments available for measuring midrange pressures (1.013-7000 bar) is usually strongly influenced by the intended application. Manometers are used commonly when just a visual indication of pressure level is required, and dead-weight gauges, because of their superior accuracy, are used in calibration procedures of other pressure-measuring devices. When an electrical form of output is required, the choice is usually either one out of the several types of diaphragm sensors (strain gauge, piezoresistive, piezoelectric, magnetic, capacitive, or fiber optic) or, less commonly, a Bourdon tube.
Bellows-type instruments are also sometimes used for this purpose, but much less frequently.
If very high measurement accuracy is required, the resonant wire device is a popular choice.
In the case of pressure measurement in the vacuum range (less than atmospheric pressure, i.e., below 1.013 bar), adaptations of most of the types of pressure transducers described earlier can be used. Special forms of Bourdon tubes measure pressures down to 10 mbar, manometers and bellows-type instruments measure pressures down to 0.1 mbar, and diaphragms can be designed to measure pressures down to 0.001 mbar. However, a number of more specialized instruments have also been developed to measure vacuum pressures, as discussed in Section 15.10. These generally give better measurement accuracy and sensitivity compared with instruments that are designed primarily for measuring midrange pressures. This improved accuracy is particularly evident at low pressures. Therefore, only the special instruments described in Section 15.10 are used to measure pressures below 10^-4 mbar.
At high pressures (>7000 bar), the only devices in common use are the manganin wire sensor and similar devices based on alternative alloys to manganin.
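The three measurement regimes discussed above can be summarized in a small helper. This is only an illustrative sketch: the boundary values (1.013 bar and 7000 bar) and the device lists in the comments come from the text, while the function itself is hypothetical.

```python
def pressure_range(p_bar):
    """Classify a pressure (in bar) into the three regimes used in this
    section. Boundaries: atmospheric pressure (1.013 bar) and 7000 bar."""
    if p_bar < 1.013:
        # Vacuum range: special Bourdon tubes, manometers, bellows,
        # diaphragms, plus dedicated vacuum gauges at the lowest pressures.
        return "vacuum"
    elif p_bar <= 7000:
        # Midrange: diaphragm sensors, Bourdon tubes, bellows, manometers,
        # resonant wire devices.
        return "midrange"
    else:
        # High pressure: manganin wire sensors and similar alloy devices.
        return "high"
```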
For differential pressure measurement, diaphragm-type sensors are the preferred option, with double-bellows sensors being used occasionally. Manometers are also sometimes used to give visual indication of differential pressure values (especially in liquid flow-rate indicators).
These are passive instruments that have the advantage of not needing a power supply.
Calibration of Pressure Sensors
Different types of reference instruments are used according to the range of the pressure-measuring instrument being calibrated. In the midrange of pressures from 0.1 mbar to 20 bar, U-tube manometers, dead-weight gauges, and barometers can all be used as reference instruments for calibration purposes. The vibrating cylinder gauge also provides a very accurate reference standard over part of this range. At high pressures above 20 bar, a gold-chrome alloy resistance reference instrument is normally used. For low pressures in the range of 10^-1 to 10^-3 mbar, both the McLeod gauge and various forms of micromanometers are used as a pressure-measuring standard. At even lower pressures below 10^-3 mbar, a pressure-dividing technique is used to establish calibration. This involves setting up a series of orifices of an accurately known pressure ratio and measuring the upstream pressure with a McLeod gauge or micromanometer.
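The arithmetic behind the pressure-dividing technique is straightforward: each orifice stage reduces the pressure by its known ratio, so the downstream calibration pressure follows from a single upstream measurement. The sketch below is a minimal illustration; the stage ratios and pressure values are hypothetical.

```python
import math

def divided_pressure(p_upstream_mbar, stage_ratios):
    """Pressure-dividing calibration: a series of orifices with accurately
    known pressure ratios reduces a measurable upstream pressure (e.g. from
    a McLeod gauge) to a much lower, calculable downstream pressure."""
    return p_upstream_mbar / math.prod(stage_ratios)

# Hypothetical example: 1e-3 mbar measured upstream by a McLeod gauge,
# passed through three stages each dividing the pressure by 10.
p_cal = divided_pressure(1e-3, [10, 10, 10])  # ~1e-6 mbar
```

The overall accuracy of the technique depends on how accurately each stage ratio is known, since the ratios multiply together.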
The limits of accuracy with which pressure can be measured by presently known techniques are as follows:
Reference calibration instruments and their accuracy limits:

    10^-7 mbar: ±4%
    10^-5 mbar: ±2%
    10^-3 mbar: ±1%
    10^-1 mbar: ±0.1%
    1 bar:      ±0.001%
    10^4 bar:   ±0.1%
[Figure: Schematic of a dead-weight gauge, showing the weights, piston, reference mark, and measured pressure.]
Dead-weight gauge (pressure balance)
The dead-weight gauge, also known by the alternative names of piston gauge and pressure balance, is shown in schematic form. It's a null-reading type of measuring instrument in which weights are added to the piston platform until the piston is adjacent to a fixed reference mark, at which time the downward force of the weights on top of the piston is balanced by the pressure exerted by the fluid beneath the piston. The fluid pressure is therefore calculated from the weight added to the platform and the known area of the piston. The instrument offers the ability to measure pressures to a high degree of accuracy and is widely used as a reference instrument against which other pressure-measuring instruments are calibrated in the midrange of pressures. Unfortunately, its mode of measurement makes it inconvenient to use, and it's therefore rarely used except for calibration duties.
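The balance condition described above reduces to P = F/A = mg/A. The short sketch below computes this for illustrative numbers that are not taken from the text.

```python
import math

def dead_weight_pressure(mass_kg, piston_diameter_m, g=9.80665):
    """Fluid pressure balancing the weights on a dead-weight gauge:
    P = F / A = m * g / (pi * d**2 / 4). Returns pressure in pascal."""
    area = math.pi * piston_diameter_m ** 2 / 4.0
    return mass_kg * g / area

# Illustrative numbers: 5 kg of weights on a piston of 10 mm diameter
# give roughly 6.24 bar (1 bar = 100,000 Pa).
p_pa = dead_weight_pressure(5.0, 0.010)
```

In practice the local value of g and the effective piston area (including its deformation under pressure) must be known accurately for the calculation to hold at the quoted ±0.01% level.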
Special precautions are necessary in the manufacture and use of dead-weight gauges. Friction between the piston and the cylinder must be reduced to a very low level, as otherwise a significant measurement error would result. Friction reduction is accomplished by designing for a small clearance gap between the piston and the cylinder by machining the cylinder to a slightly greater diameter than the piston. The piston and cylinder are also designed so that they can be turned relative to one another, which reduces friction still further. Unfortunately, as a result of the small gap between the piston and the cylinder, there is a finite flow of fluid past the seals. This produces a viscous shear force, which partly balances the dead weight on the platform. A theoretical formula exists for calculating the magnitude of this shear force, suggesting that exact correction can be made for it. In practice, however, the piston deforms under pressure and alters the piston/cylinder gap and so shear force calculation and correction can only be approximate.
Despite these difficulties, the instrument gives a typical measurement inaccuracy of only ±0.01%. It's normally used for calibrating pressures in the range of 20 mbar up to 20 bar.
However, special versions can measure pressures down to 0.1 mbar or up to 7000 bar.
In addition to its use for normal process measurements, the U-tube manometer is also used as a reference instrument for calibrating instruments measuring midrange pressures. Although it's a deflection rather than a null type of instrument, it manages to achieve similar degrees of measurement accuracy to the dead-weight gauge because of the error sources in the latter noted earlier.
The major source of error in U-tube manometers arises out of the difficulty in estimating the meniscus level of the liquid column accurately. There is also a tendency for the liquid level to creep up the tube by capillary action, which creates an additional source of error.
U tubes for measuring high pressures become unwieldy because of the long lengths of liquid column and tube required. Consequently, U-tube manometers are normally used only for calibrating pressures at the lower end of the mid-pressure range.
The most commonly used type of barometer for calibration duties is the Fortin barometer. This is a highly accurate instrument that provides measurement inaccuracy levels of between ±0.03% and ±0.001% of full-scale reading depending on the measurement range. To achieve such levels of accuracy, the instrument has to be used under very carefully controlled conditions of lighting, temperature, and vertical alignment. It must also be manufactured to exacting standards and is therefore very expensive to buy. Corrections have to be made to the output reading according to ambient temperature, local value of gravity, and atmospheric pressure.
Because of its expense and difficulties in using it, the barometer is not normally used for calibration other than as a primary reference standard at the top of the calibration chain.
[Figure: Centrifugal micromanometer, showing the unknown pressure, the amplified reference pressure, and the known reference pressure.]
Vibrating cylinder gauge
The vibrating cylinder gauge acts as a reference standard instrument for calibrating pressure measurements up to 3.5 bar. It consists of a cylinder in which vibrations at the resonant frequency are excited by a current-carrying coil. The pressure-dependent oscillation frequency is monitored by a pickup coil, and this frequency measurement is converted to a voltage signal by a microprocessor and signal conditioning circuitry contained within the package. By evacuating the space on the outer side of the cylinder, the instrument is able to measure the absolute pressure of the fluid inside the cylinder. Measurement errors are less than 0.005% over the absolute pressure range up to 3.5 bar.
Gold-chrome alloy resistance instruments
For measuring pressures above 7000 bar, an instrument based on measuring the resistance change of a metal coil as the pressure varies is used, and the same type of instrument is also used for calibration purposes. Such instruments commonly use manganin or gold-chrome alloys for the coil. Gold-chrome has a significantly lower temperature coefficient (i.e., its pressure/resistance characteristic is less affected by temperature changes) and is therefore the normal choice for calibration instruments, despite its higher cost. An inaccuracy of only ±0.1% is achievable in such devices.
The McLeod gauge, which has already been discussed in Section 15.10, can be used for the calibration of instruments designed to measure low pressures between 10^-4 and 0.1 mbar (10^-7 to 10^-4 bar).
An ionization gauge is used to calibrate instruments measuring very low pressures in the range 10^-13 to 10^-3 bar. It has the advantage of a straight-line relationship between output reading and pressure. Unfortunately, its inherent accuracy is relatively poor, and specific points on its output characteristic have to be calibrated against a McLeod gauge.
Micromanometers are instruments that work on the manometer principle but are specially designed to minimize capillary effects and meniscus reading errors. The centrifugal form of micromanometer is the most accurate type for use as a calibration standard down to pressures of 10^-3 mbar. In this, a rotating disc serves to amplify a reference pressure, with the speed of rotation being adjusted until the amplified pressure just balances the unknown pressure. This null position is detected by observing when oil droplets sprayed into a glass chamber cease to move. Measurement inaccuracy is ±1%.
Other types of micromanometers also exist, which give similar levels of accuracy, but only at somewhat higher pressure levels. These can be used as calibration standards at pressures up to 50 mbar.
Calculating Frequency of Calibration Checks
Some pressure-measuring instruments are very stable and unlikely to suffer drift in characteristics with time. Devices in this class include resonant wire devices, ionization gauges, and high-pressure instruments (those working on the principle of resistance change with pressure). All forms of manometers are similarly stable, although small errors can develop in these through volumetric changes in the glass in the longer term. Therefore, for all these instruments, only annual calibration checks are recommended, unless of course something happens to the instrument that puts its calibration into question.
However, most instruments used to measure pressure consist of an elastic element and a displacement transducer that measures its movement. Both of these component parts are mechanical in nature. Devices of this type include diaphragms, bellows, and Bourdon tubes. Such instruments can suffer changes in characteristics for a number of reasons. One factor is the characteristics of the operating environment and the degree to which the instrument is exposed to it. Another reason is the amount of mishandling they receive. These parameters are entirely dependent on the particular application the instrument is used in and the frequency with which it's used and exposed to the operating environment. A suitable calibration frequency can therefore only be determined on an experimental basis.
A third class of instrument from the calibration requirements viewpoint is the range of devices working on the thermal conductivity principle. This range includes the thermocouple gauge, Pirani gauge, and thermistor gauge. Such instruments have characteristics that vary with the nature of the gas being measured and must therefore be calibrated each time that they are used.
Procedures for Calibration
Pressure calibration requires the output reading of the instrument being calibrated to be compared with the output reading of a reference standard instrument when the same pressure is applied to both. This necessitates designing a suitable leak-proof seal to connect the pressure measuring chambers of the two instruments.
The calibration of pressure transducers used for process measurements often has to be carried out in situ in order to avoid serious production delays. Such devices are often remote from the nearest calibration laboratory, and to transport them there for calibration would take an unacceptably long time. Because of this, portable reference instruments have been developed for calibration at this level in the calibration chain. These use a standard air supply connected to an accurate pressure regulator to provide a range of reference pressures. An inaccuracy of ±0.025% is achieved when calibrating midrange pressures in this manner. Calibration at higher levels in the calibration chain must, of course, be carried out in a proper calibration laboratory maintained in the correct manner. However, irrespective of where calibration is carried out, several special precautions are necessary when calibrating certain types of instrument, as described in the following paragraphs.
U-tube manometers must have their vertical alignment set up carefully before use. Particular care must also be taken to ensure that there are no temperature gradients across the two halves of the tube. Such temperature differences would cause local variations in the specific weight of the manometer fluid, resulting in measurement errors. Correction must also be made for the local value of g (acceleration due to gravity). These comments apply similarly to the use of other types of manometers and micromanometers.
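The corrections just described enter through the basic manometer relationship P = ρgh. The sketch below shows how the local value of g (and an assumed fluid density) affects the computed pressure; the density default is an assumed value for mercury near 20 °C, and the numbers are illustrative.

```python
def manometer_pressure(height_m, density_kg_m3=13545.0, g_local=9.80665):
    """Pressure indicated by a liquid column: P = rho * g * h, in pascal.
    Both the local value of g and the fluid density (which varies with
    temperature) must be corrected for, as noted in the text."""
    return density_kg_m3 * g_local * height_m

# A 100 mm mercury column evaluated at standard gravity:
p_std = manometer_pressure(0.100)
# The same column at a site where local g = 9.78 m/s^2 represents a
# pressure roughly 0.3% lower -- far larger than the instrument's
# potential accuracy, hence the need for the correction.
p_local = manometer_pressure(0.100, g_local=9.78)
```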
The existence of one potentially major source of error in Bourdon tube pressure measurement has not been widely documented, and few manufacturers of Bourdon tubes make any attempt to warn users of their products appropriately. This problem is concerned with the relationship between the fluid being measured and the fluid used for calibration. The pointers of Bourdon tubes are normally set at zero during manufacture, using air as the calibration medium.
However, if a different fluid, especially a liquid, is subsequently used with a Bourdon tube, the fluid in the tube will cause a nonzero deflection according to its weight compared with air, resulting in a reading error of up to 6% of full-scale deflection.
This can be avoided by calibrating the Bourdon tube with the fluid to be measured instead of with air. Alternatively, correction can be made according to the calculated weight of the fluid in the tube. Unfortunately, difficulties arise with both of these solutions if air is trapped in the tube, as this will prevent the tube being filled completely by the fluid. Then, the amount of fluid actually in the tube, and its weight, will be unknown. To avoid this problem, at least one manufacturer now provides a bleed facility in the tube, allowing measurement uncertainties of less than 0.1% to be achieved.
When using a McLeod gauge, care must be taken to ensure that the measured gas does not contain any vapor. This would be condensed during the compression process, causing a measurement error. A further recommendation is insertion of a liquid air cold trap between the gauge and the instrument being calibrated to prevent the passage of mercury vapor into the latter.
We started the section off by looking at the formal definitions of the three ways in which pressure is quantified: absolute, gauge, and differential pressure. We then went on to look at the devices used for measuring pressure in three distinct ranges: normal (midrange) pressures between 1.013 bar (the mean atmospheric pressure) and 7000 bar, low or vacuum pressures below 1.013 bar, and high pressures above 7000 bar.
We saw that a large number of devices are available for measurements in the "normal" range.
Of these, sensors containing a diaphragm are used most commonly. We looked at the type of material used for the diaphragm in diaphragm-based sensors and also examined the different ways in which diaphragm movement can be measured. These different ways of measuring diaphragm displacement give rise to a number of different names for diaphragm-based sensors, such as capacitive and fiber-optic (optical) sensors.
Moving on, we examined various other devices used to measure midrange pressures. These included bellows sensors, Bourdon tubes, several types of manometers, and resonant wire sensors. We also looked at the range of devices commonly called electronic pressure gauges.
Many of these are diaphragm-based sensors that use an electronic means of measuring diaphragm displacement, with names such as piezoresistive pressure sensor, piezoelectric pressure sensor, magnetic pressure sensor, and potentiometric pressure sensor.
We then went on to study the measurement of low pressures. To start with, we observed that special forms of instruments used commonly to measure midrange pressures can measure pressures below atmospheric pressure (Bourdon tubes down to 10 mbar, bellows-type instruments down to 0.1 mbar, manometers down to 0.1 mbar, and diaphragm-based sensors down to 0.001 mbar). As well as these special versions of midrange instruments, several other instruments have been specially developed to measure in the low-pressure range. These include thermocouple and thermistor gauges (measuring between 10^-4 and 1 mbar), the Pirani gauge (measuring between 10^-5 and 1 mbar), the McLeod gauge (measuring down to 10^-1 mbar, or even lower pressures if it's used in conjunction with pressure-dividing techniques), and the ionization gauge (measuring between 10^-10 and 1 mbar).
When we looked at measurement of high pressures, we found that our choice of instrument was much more limited. All currently available instruments for this pressure range involve monitoring the change of resistance in a coil of wire made from special materials. The two most common devices of this type are the manganin wire pressure sensor and gold-chromium wire pressure sensor.
The following three sections were devoted to intelligent pressure-measuring devices, instruments measuring differential pressure, and some guidance about which type of device to use in particular circumstances.
Then our final subject of study in the section was the means of calibrating pressure-measuring devices. We looked at various instruments used for calibration, including the dead-weight gauge, special forms of the U-tube manometers, barometers, the vibrating cylinder gauge, gold-chrome alloy resistance instruments, the McLeod gauge, the ionization gauge, and micro manometers. We then considered how the frequency of recalibration should be determined for various kinds of pressure-measuring devices. Finally, we looked in more detail at the appropriate practical procedures and precautions that should be taken for calibrating different types of instruments.
--1. Explain the difference among absolute pressure, gauge pressure, and differential pressure. When pressure readings are being written down, what is the mechanism for defining whether the value is a gauge, absolute, or differential pressure?
--2. Give examples of situations where pressure measurements are normally given as (a) absolute pressure, (b) gauge pressure, and (c) differential pressure.
--3. Summarize the main classes of devices used for measuring absolute pressure.
--4. Summarize the main classes of devices used for measuring gauge pressure.
--5. Summarize the main classes of devices used for measuring differential pressure.
--6. Explain what a diaphragm pressure sensor is. What are the different materials used in construction of a diaphragm pressure sensor and what are their relative merits?
--7. Strain gauges are used commonly to measure displacement in a diaphragm pressure sensor. What are the difficulties in bonding a standard strain gauge to the diaphragm and how are these difficulties usually solved?
--8. What are the advantages in using a monolithic piezoresistive displacement transducer with diaphragm pressure sensors?
--9. What other types of devices apart from strain gauges are used to measure displacement in a diaphragm strain gauge? Summarize the main features of each of these alternative types of displacement sensors.
--10. Discuss the mode of operation of fiber-optic pressure sensors. What are their principal advantages over other forms of pressure sensors?
--11. What are bellows pressure sensors? How do they work? Describe some typical applications.
--12. How does a Bourdon tube work? What are the three common shapes of Bourdon tubes and what is the typical measurement range of each type?
--13. Describe the three types of manometers available. What is the typical measurement range of each type?
--14. What is a resonant wire pressure-measuring device and what is it typically used for?
--15. What is an electronic pressure gauge? Discuss the different types of electronic gauges that exist.
--16. Discuss the range of instruments available for measuring very low pressures (pressures below atmospheric pressure).
--17. How are high pressures (pressures above 7000 bar) normally measured?
--18. What advantages do intelligent pressure transducers have over their non-intelligent counterparts?
--19. A differential pressure can be measured by subtracting the readings from two separate pressure transducers. What is the problem with this? Suggest a better way of measuring differential pressures.
--20. How are pressure transducers calibrated? How is a suitable frequency of calibration determined?
--21. Which instruments are used as a reference standard in the calibration of pressure sensors?
Updated: Tuesday, December 31, 2019 9:01 PST