2 Review of Instrument Types
3 Static Characteristics of Instruments
4 Dynamic Characteristics of Instruments
5 Necessity for Calibration
7 Quiz / Problems
Two of the important aspects of measurement covered in the opening section concerned how to choose appropriate instruments for a particular application and a review of the main applications of measurement. Both of these activities require knowledge of the characteristics of different classes of instruments and, in particular, how these different classes of instrument perform in different applications and operating environments.
We therefore start this section by reviewing the various classes of instruments that exist. We see first of all that instruments can be divided between active and passive ones according to whether they have an energy source contained within them. The next distinction is between null-type instruments that require adjustment until a datum level is reached and deflection-type instruments that give an output measurement in the form of either a deflection of a pointer against a scale or a numerical display. The third distinction covered is between analogue and digital instruments, which differ according to whether the output varies continuously (analogue instrument) or in discrete steps (digital instrument). Fourth, we look at the distinction between instruments that are merely indicators and those that have a signal output.
Indicators give some visual or audio indication of the magnitude of the measured quantity and are commonly found in the process industries. Instruments with a signal output are commonly found as part of automatic control systems. The final distinction we consider is between smart and non-smart instruments. Smart, often known as intelligent, instruments are very important today and predominate in most measurement applications. Because of their importance, they are given more detailed consideration later in Section 11.
The second part of this section looks at the various attributes of instruments that determine their performance and suitability for different measurement requirements and applications. We look first of all at the static characteristics of instruments. These are their steady-state attributes (when the output measurement value has settled to a constant reading after any initial varying output) such as accuracy, measurement sensitivity, and resistance to errors caused by variations in their operating environment. We then go on to look at the dynamic characteristics of instruments. This describes their behavior following the time that the measured quantity changes value up until the time when the output reading attains a steady value. Various kinds of dynamic behavior can be observed in different instruments ranging from an output that varies slowly until it reaches a final constant value to an output that oscillates about the final value until a steady reading is obtained. The dynamic characteristics are a very important factor in deciding on the suitability of an instrument for a particular measurement application. Finally, at the end of the section, we also briefly consider the issue of instrument calibration, although this is considered in much greater detail later in Section 4.
Fgr. 1 --- Passive pressure gauge (spring, piston, fluid, pointer moving against a scale, pivot)
Review of Instrument Types
Instruments can be subdivided into separate classes according to several criteria. These sub-classifications are useful in broadly establishing several attributes of particular instruments such as accuracy, cost, and general applicability to different applications.
Active and Passive Instruments
Instruments are divided into active or passive ones according to whether instrument output is produced entirely by the quantity being measured or whether the quantity being measured simply modulates the magnitude of some external power source. This is illustrated by examples.
An example of a passive instrument is the pressure-measuring device shown in Fgr. 1.
The pressure of the fluid is translated into movement of a pointer against a scale. The energy expended in moving the pointer is derived entirely from the change in pressure measured:
there are no other energy inputs to the system.
An example of an active instrument is a float-type petrol tank level indicator as sketched in Fgr. 2. Here, the change in petrol level moves a potentiometer arm, and the output signal consists of a proportion of the external voltage source applied across the two ends of the potentiometer. The energy in the output signal comes from the external power source: the primary transducer float system is merely modulating the value of the voltage from this external power source.
In active instruments, the external power source is usually in electrical form, but in some cases, it can be other forms of energy, such as a pneumatic or hydraulic one.
One very important difference between active and passive instruments is the level of measurement resolution that can be obtained. With the simple pressure gauge shown, the amount of movement made by the pointer for a particular pressure change is closely defined by the nature of the instrument. While it’s possible to increase measurement resolution by making the pointer longer, such that the pointer tip moves through a longer arc, the scope for such improvement is clearly restricted by the practical limit of how long the pointer can conveniently be. In an active instrument, however, adjustment of the magnitude of the external energy input allows much greater control over measurement resolution. While the scope for improving measurement resolution is therefore much greater, it’s not infinite because of limitations placed on the magnitude of the external energy input, in consideration of heating effects and for safety reasons.
In terms of cost, passive instruments are normally of a more simple construction than active ones and are therefore less expensive to manufacture. Therefore, a choice between active and passive instruments for a particular application involves carefully balancing the measurement resolution requirements against cost.
Null-Type and Deflection-Type Instruments
The pressure gauge just mentioned is a good example of a deflection type of instrument, where the value of the quantity being measured is displayed in terms of the amount of movement of a pointer. An alternative type of pressure gauge is the dead-weight gauge shown in Fgr. 3, which is a null-type instrument. Here, weights are put on top of the piston until the downward force balances the fluid pressure. Weights are added until the piston reaches a datum level, known as the null point. Pressure measurement is made in terms of the value of the weights needed to reach this null position.
The accuracy of these two instruments depends on different things. For the first one it depends on the linearity and calibration of the spring, whereas for the second it relies on the calibration of the weights. As calibration of weights is much easier than careful choice and calibration of a linear-characteristic spring, the second type of instrument will normally be the more accurate. This is in accordance with the general rule that null-type instruments are more accurate than deflection types.
In terms of usage, a deflection-type instrument is clearly more convenient. It’s far simpler to read the position of a pointer against a scale than to add and subtract weights until a null point is reached. A deflection-type instrument is therefore the one that would normally be used in the workplace. However, for calibration duties, a null-type instrument is preferable because of its superior accuracy. The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations.
Fgr. 4 --- Rev counter (cam, switch, electronic counter)
Analogue and Digital Instruments
An analogue instrument gives an output that varies continuously as the quantity being measured changes. The output can have an infinite number of values within the range that the instrument is designed to measure. The deflection-type of pressure gauge described earlier in this section ( Fgr. 1) is a good example of an analogue instrument. As the input value changes, the pointer moves with a smooth continuous motion. While the pointer can therefore be in an infinite number of positions within its range of movement, the number of different positions that the eye can discriminate between is strictly limited; this discrimination is dependent on how large the scale is and how finely it’s divided.
A digital instrument has an output that varies in discrete steps and so can only have a finite number of values. The rev counter sketched in Fgr. 4 is an example of a digital instrument.
A cam is attached to the revolving body whose motion is being measured, and on each revolution the cam opens and closes a switch. The switching operations are counted by an electronic counter. This system can only count whole revolutions and cannot discriminate any motion that is less than a full revolution.
The distinction between analogue and digital instruments has become particularly important with rapid growth in the application of microcomputers to automatic control systems. Any digital computer system, of which the microcomputer is but one example, performs its computations in digital form. An instrument whose output is in digital form is therefore particularly advantageous in such applications, as it can be interfaced directly to the control computer. Analogue instruments must be interfaced to the microcomputer by an analogue-to-digital (A/D) converter, which converts the analogue output signal from the instrument into an equivalent digital quantity that can be read into the computer. This conversion has several disadvantages. First, the A/D converter adds a significant cost to the system. Second, a finite time is involved in the process of converting an analogue signal to a digital quantity, and this time can be critical in the control of fast processes where the accuracy of control depends on the speed of the controlling computer. Degrading the speed of operation of the control computer by imposing a requirement for A/D conversion thus impairs the accuracy by which the process is controlled.
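As a concrete illustration of the finite number of values a digital representation can take, the following Python sketch models an idealized A/D conversion. The function name, the 5 V reference, and the 8-bit word length are illustrative assumptions, not properties of any particular converter.

```python
def adc_quantize(voltage, v_ref=5.0, bits=8):
    """Map an analogue voltage onto one of 2**bits discrete codes (idealized A/D)."""
    levels = 2 ** bits
    # Clamp to the converter's input range, then scale to the highest code.
    fraction = min(max(voltage / v_ref, 0.0), 1.0)
    return int(fraction * (levels - 1))

# An 8-bit converter distinguishes only 256 levels over a 0-5 V range,
# so input changes smaller than about 5/256 = 19.5 mV can be lost.
print(adc_quantize(2.5))  # -> 127
print(adc_quantize(5.0))  # -> 255
```

A real converter also takes a finite conversion time, which is the source of the speed penalty discussed above.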
Indicating Instruments and Instruments with a Signal Output
The final way in which instruments can be divided is between those that merely give an audio or visual indication of the magnitude of the physical quantity measured and those that give an output in the form of a measurement signal whose magnitude is proportional to the measured quantity.
The class of indicating instruments normally includes all null-type instruments and most passive ones. Indicators can also be further divided into those that have an analogue output and those that have a digital display. A common analogue indicator is the liquid-in-glass thermometer. Another common indicating device, which exists in both analogue and digital forms, is the bathroom scale. The older mechanical form of this is an analogue type of instrument that gives an output consisting of a rotating pointer moving against a scale (or sometimes a rotating scale moving against a pointer). More recent electronic forms of bathroom scales have a digital output consisting of numbers presented on an electronic display. One major drawback with indicating devices is that human intervention is required to read and record a measurement. This process is particularly prone to error in the case of analogue output displays, although digital displays are not very prone to error unless the human reader is careless.
Instruments that have a signal-type output are used commonly as part of automatic control systems. In other circumstances, they can also be found in measurement systems where the output measurement signal is recorded in some way for later use. This subject is covered in later sections. Usually, the measurement signal involved is an electrical voltage, but it can take other forms in some systems, such as an electrical current, an optical signal, or a pneumatic signal.
Smart and Non-smart Instruments
The advent of the microprocessor has created a new division in instruments between those that do incorporate a microprocessor (smart) and those that don't. Smart devices are considered in detail in Section 11.
Static Characteristics of Instruments
If we have a thermometer in a room and its reading shows a temperature of 20 deg. C, then it does not really matter whether the true temperature of the room is 19.5 or 20.5 deg. C.
Such small variations around 20 deg. C are too small to affect whether we feel warm enough or not. Our bodies cannot discriminate between such close levels of temperature and therefore a thermometer with an inaccuracy of ±0.5 deg. C is perfectly adequate. If we had to measure the temperature of certain chemical processes, however, a variation of 0.5 deg. C might have a significant effect on the rate of reaction or even the products of a process. A measurement inaccuracy much less than ±0.5 deg. C is therefore clearly required.
Accuracy of measurement is thus one consideration in the choice of instrument for a particular application. Other parameters, such as sensitivity, linearity, and the reaction to ambient temperature changes, are further considerations. These attributes are collectively known as the static characteristics of instruments and are given in the data sheet for a particular instrument. It’s important to note that values quoted for instrument characteristics in such a data sheet only apply when the instrument is used under specified standard calibration conditions. Due allowance must be made for variations in the characteristics when the instrument is used in other conditions.
The various static characteristics are defined in the following paragraphs.
Accuracy and Inaccuracy (Measurement Uncertainty)
The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. In practice, it’s more usual to quote the inaccuracy or measurement uncertainty value rather than the accuracy value for an instrument.
Inaccuracy or measurement uncertainty is the extent to which a reading might be wrong and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument.
The following example carries a very important message. Because the maximum measurement error in an instrument is usually related to the full-scale reading of the instrument, measuring quantities that are substantially less than the full-scale reading means that the possible measurement error is amplified. For this reason, it’s an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured in order that the best possible accuracy is maintained in instrument readings. Clearly, if we are measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a measurement range of 0-10 bar.
Example 2.1: A pressure gauge with a measurement range of 0-10 bar has a quoted inaccuracy of ±1.0% f.s. (±1% of full-scale reading).
(a) What is the maximum measurement error expected for this instrument? (b) What is the likely measurement error expressed as a percentage of the output reading if this pressure gauge is measuring a pressure of 1 bar?
(a) The maximum error expected in any measurement reading is 1.0% of the full-scale reading, which is 10 bar for this particular instrument. Hence, the maximum likely error is 1.0% × 10 bar = 0.1 bar.
(b) The maximum measurement error is a constant value related to the full-scale reading of the instrument, irrespective of the magnitude of the quantity that the instrument is actually measuring. In this case, as worked out earlier, the magnitude of the error is 0.1 bar. Thus, when measuring a pressure of 1 bar, the maximum possible error of 0.1 bar is 10% of the measurement value.
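The arithmetic of this example can be expressed as a short Python sketch; the helper function names are my own and are not standard terminology.

```python
def max_error(full_scale, inaccuracy_pct_fs):
    """Maximum absolute error implied by a +/- percentage-of-f.s. inaccuracy figure."""
    return full_scale * inaccuracy_pct_fs / 100.0

def error_pct_of_reading(full_scale, inaccuracy_pct_fs, reading):
    """The same absolute error expressed as a percentage of the actual reading."""
    return max_error(full_scale, inaccuracy_pct_fs) / reading * 100.0

# 0-10 bar gauge with +/-1.0% f.s. inaccuracy, reading only 1 bar:
print(max_error(10, 1.0))                            # -> 0.1 (bar)
print(round(error_pct_of_reading(10, 1.0, 1.0), 1))  # -> 10.0 (% of reading)
```

Evaluating the second function over a range of readings makes the design rule above vivid: the smaller the reading relative to full scale, the larger the possible error as a percentage of that reading.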
Precision is a term that describes an instrument's degree of freedom from random errors.
If a large number of readings are taken of the same quantity by a high-precision instrument, then the spread of readings will be very small. Precision is often, although incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy. A high-precision instrument may have a low accuracy. Low accuracy measurements from a high-precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
The terms repeatability and reproducibility mean approximately the same but are applied in different contexts, as given later. Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location, and same conditions of use maintained throughout. Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use, and time of measurement. Both terms thus describe the spread of output readings for the same input. This spread is referred to as repeatability if the measurement conditions are constant and as reproducibility if the measurement conditions vary. The degree of repeatability or reproducibility in measurements from an instrument is an alternative way of expressing its precision. Fgr. 5 illustrates this more clearly by showing results of tests on three industrial robots programmed to place components at a particular point on a table. The target point was at the center of the concentric circles shown, and black dots represent points where each robot actually deposited components at each attempt. Both the accuracy and the precision of Robot 1 are shown to be low in this trial. Robot 2 consistently puts the component down at approximately the same place but this is the wrong point.
Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high accuracy because it consistently places the component at the correct target position.
Fgr. 5 --- Comparison of accuracy and precision: (a) Robot 1, low precision, low accuracy; (b) Robot 2, high precision, low accuracy; (c) Robot 3, high precision, high accuracy
Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value. While it’s not, strictly speaking, a static characteristic of measuring instruments, it’s mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance value. When used correctly, tolerance describes the maximum deviation of a manufactured component from some specified value. For instance, crankshafts are machined with a diameter tolerance quoted as so many micrometers (10^-6 m), and electric circuit components such as resistors have tolerances of perhaps 5%.
Example 2.2: A packet of resistors bought in an electronics component shop gives the nominal resistance value as 1000 Ω and the manufacturing tolerance as ±5%. If one resistor is chosen at random from the packet, what are the minimum and maximum resistance values that this particular resistor is likely to have?
The minimum likely value is 1000 Ω - 5% = 950 Ω.
The maximum likely value is 1000 Ω + 5% = 1050 Ω.
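The same calculation can be written in a couple of lines of Python; the function name is illustrative only.

```python
def tolerance_bounds(nominal, tol_pct):
    """Minimum and maximum values implied by a +/- percentage tolerance."""
    delta = nominal * tol_pct / 100.0
    return nominal - delta, nominal + delta

low, high = tolerance_bounds(1000, 5)  # 1000-ohm resistor, +/-5% tolerance
print(low, high)  # -> 950.0 1050.0
```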
Range or Span
The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.
Linearity
It’s normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The Xs marked on Fgr. 6 show a plot of typical output readings of an instrument when a sequence of input quantities is applied to it. Normal procedure is to draw a good fit straight line through the Xs, as shown in Fgr. 6.
(While this can often be done with reasonable accuracy by eye, it’s always preferable to apply a mathematical least-squares line-fitting technique, as described in Section 8.) Nonlinearity is then defined as the maximum deviation of any of the output readings marked X from this straight line. Nonlinearity is usually expressed as a percentage of full-scale reading.
Fgr. 6 --- Instrument output characteristic: output reading versus measured quantity; the gradient of the fitted straight line is the sensitivity of measurement
Sensitivity of Measurement
The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Thus, sensitivity is the ratio:
sensitivity = scale deflection / value of measurand producing deflection

The sensitivity of measurement is therefore the slope of the straight line drawn on Fgr. 6.
If, For example, a pressure of 2 bar produces a deflection of 10 degrees in a pressure transducer, the sensitivity of the instrument is 5 degrees/bar (assuming that the deflection is zero with zero pressure applied).
Example 2.3: The following resistance values of a platinum resistance thermometer were measured at a range of temperatures. Determine the measurement sensitivity of the instrument in ohms/deg. C.
Resistance (Ω)   Temperature (deg. C)
307   200
314   230
321   260
328   290
If these values are plotted on a graph, the straight-line relationship between resistance change and temperature change is obvious.
For a change in temperature of 30 deg. C, the change in resistance is 7 Ω. Hence the measurement sensitivity = 7/30 = 0.233 Ω/deg. C.
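The least-squares slope recommended for line fitting can be computed directly from the table's values; this short Python sketch assumes nothing beyond the data given above.

```python
temps = [200, 230, 260, 290]   # deg C
resist = [307, 314, 321, 328]  # ohms

n = len(temps)
mean_t = sum(temps) / n
mean_r = sum(resist) / n
# Least-squares slope: sum((t - mean_t)(r - mean_r)) / sum((t - mean_t)^2).
# For perfectly linear data this equals the simple ratio of changes, 7/30.
slope = sum((t - mean_t) * (r - mean_r) for t, r in zip(temps, resist)) \
        / sum((t - mean_t) ** 2 for t in temps)
print(round(slope, 3))  # -> 0.233 (ohms per deg C)
```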
Threshold
If the input to an instrument is increased gradually from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold for instruments. Some quote absolute values, whereas others quote threshold as a percentage of full-scale readings. As an illustration, a car speedometer typically has a threshold of about 15 km/h. This means that, if the vehicle starts from rest and accelerates, no output reading is observed on the speedometer until the speed reaches 15 km/h.
Resolution
When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. This lower limit is known as the resolution of the instrument. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of f.s. deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions.
Using a car speedometer as an example again, this has subdivisions of typically 20 km/h. This means that when the needle is between the scale markings, we cannot estimate speed more accurately than to the nearest 5 km/h. This value of 5 km/h thus represents the resolution of the instrument.
Sensitivity to Disturbance
All calibrations and specifications of an instrument are only valid under controlled conditions of temperature, pressure, and so on. These standard ambient conditions are usually defined in the instrument specification. As variations occur in the ambient temperature, certain static instrument characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this change. Such environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift. Zero drift is sometimes known by the alternative term, bias.
Zero drift or bias describes the effect where the zero reading of an instrument is modified by a change in ambient conditions. This causes a constant error that exists over the full range of measurement of the instrument. The mechanical form of a bathroom scale is a common example of an instrument prone to zero drift. It’s quite usual to find that there is a reading of perhaps 1 kg with no one on the scale. If someone of known weight 70 kg were to get on the scale, the reading would be 71 kg, and if someone of known weight 100 kg were to get on the scale, the reading would be 101 kg. Zero drift is normally removable by calibration. In the case of the bathroom scale just described, a thumbwheel is usually provided that can be turned until the reading is zero with the scales unloaded, thus removing zero drift.
The typical unit by which such zero drift is measured is volts/deg. C. This is often called the zero drift coefficient related to temperature changes. If the characteristic of an instrument is sensitive to several environmental parameters, then it will have several zero drift coefficients, one for each environmental parameter. A typical change in the output characteristic of a pressure gauge subject to zero drift is shown in Fgr. 7a.
Fgr. 7 --- Effects of disturbance on a pressure gauge characteristic (scale reading versus pressure): (a) zero drift, (b) sensitivity drift, (c) zero drift plus sensitivity drift, each shown against the nominal characteristic
Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument's sensitivity of measurement varies as ambient conditions change. It’s quantified by sensitivity drift coefficients that define how much drift there is for a unit change in each environmental parameter that the instrument characteristics are sensitive to. Many components within an instrument are affected by environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a spring is temperature dependent. Fgr. 7b shows what effect sensitivity drift can have on the output characteristic of an instrument. Sensitivity drift is measured in units of the form (angular degree/bar)/deg. C. If an instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification of the output characteristic is shown in Fgr. 7c.
Example 2.4: The following table shows output measurements of a voltmeter under two sets of conditions:
(a) Use in an environment kept at 20 deg. C, which is the temperature that it was calibrated at.
(b) Use in an environment at a temperature of 50 deg. C.

Voltage readings at calibration temperature of 20 deg. C (assumed correct)   Voltage readings at temperature of 50 deg. C
10.2   10.5
20.3   20.6
30.7   31.0
40.8   41.1
Determine the zero drift when it’s used in the 50 deg. C environment, assuming that the measurement values when it was used in the 20 deg. C environment are correct. Also calculate the zero drift coefficient.
Zero drift at the temperature of 50 deg. C is the constant difference between the pairs of output readings, that is, 0.3 volts.
The zero drift coefficient is the magnitude of drift (0.3 volts) divided by the magnitude of the temperature change causing the drift (30 deg. C). Thus the zero drift coefficient is 0.3/30 = 0.01 volts/deg. C.
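The worked answer can be checked numerically. In the sketch below, the 50 deg. C readings are taken as the 20 deg. C readings plus the constant 0.3 V offset found in the answer.

```python
readings_20 = [10.2, 20.3, 30.7, 40.8]  # volts, assumed correct (calibration temperature)
readings_50 = [10.5, 20.6, 31.0, 41.1]  # volts, same inputs measured at 50 deg C

# Zero drift is the same constant offset at every point on the scale.
drifts = [r50 - r20 for r20, r50 in zip(readings_20, readings_50)]
zero_drift = sum(drifts) / len(drifts)
zero_drift_coeff = zero_drift / (50 - 20)  # per deg C of temperature change

print(round(zero_drift, 2))        # -> 0.3 (volts)
print(round(zero_drift_coeff, 3))  # -> 0.01 (volts per deg C)
```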
Example 2.5: A spring balance is calibrated in an environment at a temperature of 20 deg. C and has the following deflection/load characteristic:
Load (kg) 0 1 2 3
Deflection (mm) 0 20 40 60
It’s then used in an environment at a temperature of 30 deg. C, and the following deflection/load characteristic is measured:
Load (kg) 0 1 2 3
Deflection (mm) 5 27 49 71
Determine the zero drift and sensitivity drift per deg. C change in ambient temperature.
At 20 deg. C, the deflection/load characteristic is a straight line. Sensitivity = 20 mm/kg.
At 30 deg. C, the deflection/load characteristic is still a straight line. Sensitivity = 22 mm/kg.
Zero drift (bias) = 5 mm (the no-load deflection)
Sensitivity drift = 2 mm/kg
Zero drift/deg. C = 5/10 = 0.5 mm/deg. C
Sensitivity drift/deg. C = 2/10 = 0.2 (mm/kg)/deg. C
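The whole of Example 2.5 can be reproduced in a few lines of Python, using only the two measured characteristics.

```python
loads = [0, 1, 2, 3]       # kg
defl_20 = [0, 20, 40, 60]  # mm, at the 20 deg C calibration temperature
defl_30 = [5, 27, 49, 71]  # mm, at 30 deg C

def slope(xs, ys):
    # Both characteristics are straight lines, so the end points give the slope.
    return (ys[-1] - ys[0]) / (xs[-1] - xs[0])

sens_20 = slope(loads, defl_20)                 # 20.0 mm/kg
sens_30 = slope(loads, defl_30)                 # 22.0 mm/kg
zero_drift = defl_30[0] - defl_20[0]            # 5 mm (no-load deflection)
zero_drift_per_degC = zero_drift / (30 - 20)    # 0.5 mm per deg C
sens_drift_per_degC = (sens_30 - sens_20) / 10  # 0.2 (mm/kg) per deg C
print(zero_drift_per_degC, sens_drift_per_degC)
```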
Fgr. 8 --- Instrument characteristic exhibiting hysteresis: output reading versus measured variable; curve A (variable increasing), curve B (variable decreasing), with maximum input hysteresis, maximum output hysteresis, and dead space marked
Hysteresis Effects
Fgr. 8 illustrates the output characteristic of an instrument that exhibits hysteresis. If the input measured quantity to the instrument is increased steadily from a negative value, the output reading varies in the manner shown in curve A. If the input variable is then decreased steadily, the output varies in the manner shown in curve B. The non-coincidence between these loading and unloading curves is known as hysteresis. Two quantities are defined, maximum input hysteresis and maximum output hysteresis, as shown in Fgr. 8. These are normally expressed as a percentage of the full-scale input or output reading, respectively.
Hysteresis is found most commonly in instruments that contain springs, such as a passive pressure gauge ( Fgr. 1) and a Prony brake (used for measuring torque). It’s also evident when friction forces in a system have different magnitudes depending on the direction of movement, such as in the pendulum-scale mass-measuring device. Devices such as the mechanical flyball (a device for measuring rotational velocity) suffer hysteresis from both of the aforementioned sources because they have friction in moving parts and also contain a spring. Hysteresis can also occur in instruments that contain electrical windings formed round an iron core, due to magnetic hysteresis in the iron. This occurs in devices such as the variable inductance displacement transducer, the linear variable differential transformer, and the rotary differential transformer.
Dead Space
Dead space is defined as the range of different input values over which there is no change in output value. Any instrument that exhibits hysteresis also displays dead space, as marked on Fgr. 8. Some instruments that don’t suffer from any significant hysteresis can still exhibit a dead space in their output characteristics, however. Backlash in gears is a typical cause of dead space and results in the sort of instrument output characteristic shown in Fgr. 9. Backlash is commonly experienced in gear sets used to convert between translational and rotational motion (which is a common technique used to measure translational velocity).
Fgr. 9 --- Instrument characteristic with dead space: output reading versus measured variable
Dynamic Characteristics of Instruments
The static characteristics of measuring instruments are concerned only with the steady-state reading that the instrument settles down to, such as accuracy of the reading.
The dynamic characteristics of a measuring instrument describe its behavior between the time a measured quantity changes value and the time when the instrument output attains a steady value in response. As with static characteristics, any values for dynamic characteristics quoted in instrument data sheets only apply when the instrument is used under specified environmental conditions.
Outside these calibration conditions, some variation in the dynamic parameters can be expected.
In any linear, time-invariant measuring system, the following general relation can be written between input and output for time t > 0:

an*(d^n qo/dt^n) + a(n-1)*(d^(n-1) qo/dt^(n-1)) + ... + a1*(dqo/dt) + a0*qo = bm*(d^m qi/dt^m) + ... + b1*(dqi/dt) + b0*qi   (2.1)

where qi is the measured quantity, qo is the output reading, and a0 ... an, b0 ... bm are constants.
The reader whose mathematical background is such that Equation (2.1) appears daunting should not worry unduly, as only certain special, simplified cases of it are applicable in normal measurement situations. The major point of importance is to have a practical appreciation of the manner in which the various types of instruments respond when the measurand applied to them varies.
If we limit consideration to that of step changes in the measured quantity only, then Equation (2.1) reduces to

an*(d^n qo/dt^n) + ... + a1*(dqo/dt) + a0*qo = b0*qi   (2.2)
Further simplification can be made by taking certain special cases of Equation (2.2), which collectively apply to nearly all measurement systems.
If all the coefficients a1 ... an other than a0 in Equation (2.2) are assumed zero, then

a0*qo = b0*qi, or qo = (b0/a0)*qi = K*qi   (2.3)

where K is a constant known as the instrument sensitivity as defined earlier.
Any instrument that behaves according to Equation (2.3) is said to be of a zero-order type.
Following a step change in the measured quantity at time t, the instrument output moves immediately to a new value at the same time instant t, as shown in Fgr. 10. A potentiometer, which measures motion, is a good example of such an instrument, where the output voltage changes instantaneously as the slider is displaced along the potentiometer track.
Fig. 10 --- Zero-order instrument: following a step in the measured quantity, the instrument output changes at the same instant.
Fig. 11 --- First-order instrument: step response, showing the output reaching 63% of its final value after one time constant τ.
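The zero-order relation of Equation (3) can be expressed in a few lines of Python. This is an illustrative sketch only, not from the text; the gain value K = 2.0 is an arbitrary assumption:

```python
# A zero-order instrument is a pure gain: the output tracks a step
# input with no lag at all.

def zero_order_output(q_i, K=2.0):
    """Output of a zero-order instrument: q_o = K * q_i."""
    return K * q_i

# A step from 0 to 5 units at t = 0 appears in the output immediately.
readings = [zero_order_output(0.0), zero_order_output(5.0)]
print(readings)  # [0.0, 10.0]
```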
If all the coefficients $a_2 \ldots a_n$ except for $a_0$ and $a_1$ are assumed zero in Equation (2), then:

$$a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i \qquad (4)$$
Any instrument that behaves according to Equation (4) is known as a first-order instrument.
If $d/dt$ is replaced by the $D$ operator in Equation (4), we get:

$$a_1 D q_o + a_0 q_o = b_0 q_i \qquad (5)$$

and rearranging this gives:

$$q_o = \frac{(b_0/a_0)\, q_i}{1 + (a_1/a_0) D} \qquad (6)$$
Defining $K = b_0/a_0$ as the static sensitivity and $\tau = a_1/a_0$ as the time constant, Equation (6) becomes $q_o = K q_i / (1 + \tau D)$. If Equation (6) is solved analytically, the output quantity $q_o$ in response to a step change in $q_i$ at time t varies with time in the manner shown in Fig. 11. The time constant τ of the step response is the time taken for the output quantity $q_o$ to reach 63% of its final value.
The thermocouple is a good example of a first-order instrument. It is well known that if a thermocouple at room temperature is plunged into boiling water, the output e.m.f. does not rise instantaneously to a level indicating 100°C, but instead approaches a reading indicating 100°C in a manner similar to that shown in Fig. 11.
A large number of other instruments also belong to this first-order class. This is of particular importance in control systems, where it is necessary to take account of the time lag that occurs between a measured quantity changing in value and the measuring instrument indicating the change. Fortunately, because the time constant of many first-order instruments is small relative to the dynamics of the process being measured, no serious problems are created.
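The 63% figure follows directly from the analytic step response $q_o(t) = K q_i (1 - e^{-t/\tau})$. The short Python check below is an illustration only; the 15-second time constant is assumed for the example:

```python
import math

def first_order_step(t, tau=15.0, K=1.0, step=1.0):
    """Analytic response of a first-order instrument to a step input:
    q_o(t) = K * step * (1 - exp(-t / tau))."""
    return K * step * (1.0 - math.exp(-t / tau))

# After one time constant, the output has covered 1 - 1/e of the change:
print(round(100.0 * first_order_step(15.0), 1), "%")  # 63.2 %
```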
A balloon is equipped with temperature- and altitude-measuring instruments and has radio equipment that can transmit the output readings of these instruments back to the ground. The balloon is initially anchored to the ground with the instrument output readings in steady state. The altitude-measuring instrument is approximately zero order, and the temperature transducer is first order with a time constant of 15 seconds. The temperature on the ground, T0, is 10°C, and the temperature Tx at an altitude of x meters is given by the relation: Tx = T0 − 0.01x.
(a) If the balloon is released at time zero, and thereafter rises upward at a velocity of 5 meters/second, draw a table showing the temperature and altitude measurements reported at intervals of 10 seconds over the first 50 seconds of travel. Show also in the table the error in each temperature reading.
(b) What temperature does the balloon report at an altitude of 5000 meters?
In order to answer this question, it’s assumed that the solution of a first-order differential equation has been presented to the reader in a mathematics course. If the reader is not so equipped, the following solution will be difficult to follow.
Let the temperature reported by the balloon at some general time t be Tr. Then, since the transducer is first order with a time constant of 15 seconds, Tx is related to Tr by the relation:

$$T_x = T_r + 15 \frac{dT_r}{dt} \qquad (7)$$
This result might have been inferred from the table given earlier, where it can be seen that the error is converging toward a value of 0.75. For large values of t, the transducer reading lags the true temperature value by a period of time equal to the time constant of 15 seconds. In this time, the balloon travels a distance of 75 meters and the temperature falls by 0.75°C. Thus for large values of t, the output reading always corresponds to the temperature 75 meters below the balloon's current position, and is therefore 0.75°C away from the true temperature at its altitude.
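The table and the limiting error can be reproduced numerically. The sketch below is an illustration, not part of the original solution; it uses the analytic solution of the first-order equation with the initial condition Tr(0) = 10°C:

```python
import math

TAU = 15.0    # transducer time constant (s)
T0 = 10.0     # ground temperature (deg C)
RATE = 0.05   # cooling rate seen by the balloon: 0.01 deg C/m * 5 m/s

def true_temp(t):
    """True temperature at the balloon's altitude after t seconds."""
    return T0 - RATE * t

def reported_temp(t):
    """Solution of TAU*dTr/dt + Tr = T0 - RATE*t with Tr(0) = T0."""
    return T0 - RATE * t + RATE * TAU * (1.0 - math.exp(-t / TAU))

for t in range(0, 51, 10):
    err = reported_temp(t) - true_temp(t)
    print(f"t={t:2d}s  altitude={5*t:4d}m  true={true_temp(t):5.2f}  "
          f"reported={reported_temp(t):5.2f}  error={err:+.3f}")

# (b) At 5000 m (t = 1000 s) the error has converged to RATE*TAU = 0.75,
# so the balloon reports about -40 + 0.75 = -39.25 deg C.
```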
If all coefficients $a_3 \ldots a_n$ other than $a_0$, $a_1$, and $a_2$ in Equation (2) are assumed zero, then we get:

$$a_2 \frac{d^2 q_o}{dt^2} + a_1 \frac{dq_o}{dt} + a_0 q_o = b_0 q_i \qquad (8)$$
It is convenient to re-express the variables $a_0$, $a_1$, $a_2$, and $b_0$ in Equation (8) in terms of three parameters: K (static sensitivity), ω (undamped natural frequency), and ξ (damping ratio), where:

$$K = \frac{b_0}{a_0}; \qquad \omega = \sqrt{\frac{a_0}{a_2}}; \qquad \xi = \frac{a_1}{2\sqrt{a_0 a_2}}$$

Equation (8) can then be written as:

$$\frac{q_o}{q_i} = \frac{K}{D^2/\omega^2 + 2\xi D/\omega + 1} \qquad (9)$$
This is the standard equation for a second-order system, and any instrument whose response can be described by it is known as a second-order instrument. If Equation (9) is solved analytically, the shape of the step response obtained depends on the value of the damping ratio parameter ξ.
The output responses of a second-order instrument for various values of ξ following a step change in the value of the measured quantity at time t are shown in Fig. 12. For case A, where ξ = 0, there is no damping and the instrument output exhibits constant-amplitude oscillations when disturbed by any change in the physical quantity measured. For light damping of ξ = 0.2, represented by case B, the response to a step change in input is still oscillatory, but the oscillations die down gradually. A further increase in the value of ξ reduces oscillations and overshoot still more, as shown by curves C and D, and finally the response becomes very overdamped, as shown by curve E, where the output reading creeps up slowly toward the correct reading. Clearly, the extreme response curves A and E are grossly unsuitable for any measuring instrument. If an instrument were only ever subjected to step inputs, then the design strategy would be to aim toward a damping ratio of 0.707, which gives the response shown by curve C.

Unfortunately, most of the physical quantities that instruments are required to measure do not change in the mathematically convenient form of steps, but rather in the form of ramps of varying slopes. As the form of the input variable changes, so the best value for ξ varies, and the choice of ξ becomes a compromise between those values that are best for each type of input variable behavior anticipated. Commercial second-order instruments, of which the accelerometer is a common example, are generally designed to have a damping ratio ξ somewhere in the range of 0.6-0.8.
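The effect of ξ on the step response can be reproduced numerically. The following sketch is illustrative only; ω = 1 rad/s and K = 1 are arbitrary assumptions, and a simple Euler scheme integrates the second-order equation for light, moderate, and heavy damping:

```python
# Integrate  q_o'' = w^2 * (K*q_i - q_o) - 2*xi*w * q_o'
# for a unit step input q_i = 1, starting from rest.

def step_response(xi, w=1.0, K=1.0, t_end=20.0, dt=0.001):
    """Output q_o at time t_end, by semi-implicit Euler integration."""
    q, dq = 0.0, 0.0                 # output and its derivative, at rest
    for _ in range(int(t_end / dt)):
        ddq = w * w * (K * 1.0 - q) - 2.0 * xi * w * dq
        dq += ddq * dt
        q += dq * dt
    return q

# Lightly damped responses overshoot and ring; heavier damping creeps up.
for xi in (0.2, 0.707, 1.5):
    print(f"xi={xi}: q_o(20 s) = {step_response(xi):.3f}")
```

By 20 seconds all three responses have settled close to the final value of 1.0; rerunning with a smaller `t_end` shows the overshoot for ξ = 0.2 and the slow creep for ξ = 1.5.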
Necessity for Calibration
The foregoing discussion has described the static and dynamic characteristics of measuring instruments in some detail. However, an important qualification that has been omitted from this discussion is that an instrument only conforms to stated static and dynamic patterns of behavior after it has been calibrated. It can normally be assumed that a new instrument will have been calibrated when it’s obtained from an instrument manufacturer and will therefore initially behave according to the characteristics stated in the specifications. During use, however, its behavior will gradually diverge from the stated specification for a variety of reasons. Such reasons include mechanical wear and the effects of dirt, dust, fumes, and chemicals in the operating environment.
The rate of divergence from standard specifications varies according to the type of instrument, the frequency of usage, and the severity of the operating conditions. However, there will come a time, determined by practical knowledge, when the characteristics of the instrument will have drifted from the standard specification by an unacceptable amount. When this situation is reached, it’s necessary to recalibrate the instrument back to standard specifications. Such recalibration is performed by adjusting the instrument at each point in its output range until its output readings are the same as those of a second standard instrument to which the same inputs are applied. This second instrument is one kept solely for calibration purposes whose specifications are accurately known. Calibration procedures are discussed more fully in Section 4.
This section began by reviewing various different classes of instruments and considering how these differences affect their typical usage. We saw, for example, that null-type instruments are favored for calibration duties because of their superior accuracy, whereas deflection-type instruments are easier to use for routine measurements. We also looked at the distinction between active and passive instruments, analogue and digital instruments, indicators and signal output-type instruments, and, finally, smart and nonsmart instruments. Following this, we went on to look at the various static characteristics of instruments. These define the quality of measurements when an instrument output has settled to a steady reading. Several important lessons arose out of this coverage. In particular, we saw the important distinction between accuracy and precision, which are often equated incorrectly as meaning the same thing. We saw that high precision does not promise anything at all about measurement accuracy; in fact, a high-precision instrument can sometimes give very poor measurement accuracy. The final topic covered in this section was the dynamic characteristics of instruments. We saw that there are three kinds of dynamic characteristics: zero order, first order, and second order. Analysis of these showed that both first- and second-order instruments take time to settle to a steady-state reading when the measured quantity changes. It is therefore necessary to wait until the dynamic motion has ended before a reading is recorded. This places a serious limitation on the use of first- and second-order instruments to make repeated measurements. Clearly, the frequency of repeated measurements is limited by the time taken by the instrument to settle to a steady-state reading.
QUIZ / Problems
1. Briefly explain four ways in which measuring instruments can be subdivided into different classes according to their mode of operation, giving examples of instruments that fall into each class.
2. Explain what is meant by:
(a) active instruments
(b) passive instruments
Give examples of each and discuss the relative merits of these two classes of instruments.
3. Discuss the advantages and disadvantages of null and deflection types of measuring instruments. What are null types of instruments mainly used for and why?
4. What are the differences between analogue and digital instruments? What advantages do digital instruments have over analogue ones?
5. Explain the difference between static and dynamic characteristics of measuring instruments.
6. Briefly define and explain all the static characteristics of measuring instruments.
7. How is the accuracy of an instrument usually defined? What is the difference between accuracy and precision?
8. Draw sketches to illustrate the dynamic characteristics of the following:
(a) zero-order instrument
(b) first-order instrument
(c) second-order instrument
In the case of a second-order instrument, indicate the effect of different degrees of damping on the time response.
9. State briefly how the dynamic characteristics of an instrument affect its usage.
10. A tungsten resistance thermometer with a range of −270 to +1100°C has a quoted inaccuracy of ±1.5% of full-scale reading. What is the likely measurement error when it is reading a temperature of 950°C?
11. A batch of steel rods is manufactured to a nominal length of 5 meters with a quoted tolerance of ±2%. What are the longest and shortest lengths of rod to be expected in the batch?
12. What is the measurement range for a micrometer designed to measure diameters between 5.0 and 7.5 cm?
13. A tungsten/5% rhenium-tungsten/26% rhenium thermocouple has an output e.m.f.
as shown in the following table when its hot (measuring) junction is at the temperatures shown. Determine the sensitivity of measurement for the thermocouple in mV/°C.
14. Define sensitivity drift and zero drift. What factors can cause sensitivity drift and zero drift in instrument characteristics?
15. (a) An instrument is calibrated in an environment at a temperature of 20°C and the following output readings y are obtained for various input values x: Determine the measurement sensitivity, expressed as the ratio y/x.
(b) When the instrument is subsequently used in an environment at a temperature of 50°C, the input/output characteristic changes to the following:
Determine the new measurement sensitivity. Hence determine the sensitivity drift due to the change in ambient temperature of 30°C.
16. The following temperature measurements were taken with an infrared thermometer that produced biased measurements due to the instrument being out of calibration.
Calculate the bias in the measurements.
Values measured by uncalibrated instrument (°C) | Correct value of temperature (°C)
17. A load cell is calibrated in an environment at a temperature of 21°C and has the following deflection/load characteristic:
When used in an environment at 35°C, its characteristic changes to the following:
(a) Determine the sensitivity at 21°C and 35°C.
(b) Calculate the total zero drift and sensitivity drift at 35°C.
(c) Hence determine the zero drift and sensitivity drift coefficients (in units of mm/°C and (mm per kg)/°C).
18. An unmanned submarine is equipped with temperature- and depth-measuring instruments and has radio equipment that can transmit the output readings of these instruments back to the surface. The submarine is initially floating on the surface of the sea with the instrument output readings in steady state. The depth-measuring instrument is approximately zero order, and the temperature transducer is first order with a time constant of 50 seconds. The water temperature on the sea surface, T0, is 20°C, and the temperature Tx at a depth of x meters is given by the relation:
Tx = T0 − 0.01x
(a) If the submarine starts diving at time zero, and thereafter goes down at a velocity of 0.5 meters/second, draw a table showing the temperature and depth measurements reported at intervals of 100 seconds over the first 500 seconds of travel. Show also in the table the error in each temperature reading.
(b) What temperature does the submarine report at a depth of 1000 meters?
19. Write down the general differential equation describing the dynamic response of a second-order measuring instrument and state the expressions relating the static sensitivity, undamped natural frequency, and damping ratio to the parameters in this differential equation. Sketch the instrument response for cases of heavy damping, critical damping, and light damping and state which of these is the usual target when a second-order instrument is being designed.
Updated: Sunday, 2014-03-30 4:58 PST