Ins&Meas_Static Charactersistics of Measurement Systems.pptx

RamanKhanal2 13 views 43 slides Feb 26, 2025

Slide Content

Chapter 3: Static Characteristics of Measurement Systems
TRIBHUVAN UNIVERSITY, INSTITUTE OF ENGINEERING
Learning Objectives: Understand the different static characteristics of measurement systems.

I. INTRODUCTION Learning Objectives: Understand the different characteristics of measurement systems.

Characteristics of Measurement Systems The treatment of instrument and measurement system characteristics can be divided into two distinct categories: Static characteristics and Dynamic characteristics. Some applications involve the measurement of quantities that are either constant or vary only slowly with time. Under these circumstances it is possible to define a set of criteria that gives a meaningful description of the quality of measurement without involving the dynamic descriptions that require differential equations. These criteria are called Static Characteristics. The static characteristics of a measurement system are, in general, those that must be considered when the system or instrument is used to measure a condition that does not vary with time.

Characteristics of Measurement Systems However, many measurements are concerned with rapidly varying quantities, and for such cases we must examine the dynamic relations that exist between the output and the input. This is normally done with the help of differential equations. Performance criteria based upon dynamic relations constitute the Dynamic Characteristics. Some of the important static characteristics are: accuracy and precision, tolerance, range or span, linearity, sensitivity of measurement, threshold, resolution, sensitivity to disturbance, hysteresis effects, and dead space.

REVIEW QUESTIONS Differentiate between static and dynamic characteristics of measuring systems. List the common static characteristics of a measurement system.

II. ACCURACY AND PRECISION Learning Objectives: Understand the concepts of accuracy and precision. Differentiate between accuracy and precision.

Accuracy and Precision Accuracy It is the closeness with which an instrument reading approaches the true value of the quantity being measured. Thus, accuracy of a measurement means conformity to truth. Accuracy may be specified in terms of inaccuracy or limits of error, and can be expressed in the following ways:
Accuracy as Percentage of Full Scale:
Error = ± (a/100) × (full-scale reading) … (3.1)
Accuracy as Percentage of Actual Value:
Error = ± (a/100) × (actual value) … (3.2)
where a is the accuracy figure quoted in percent.

Accuracy as Percentage of Full Scale If an instrument has accuracy specified as % FS, then the error has a fixed absolute value. Take, for example, an instrument calibrated for a flow of 100 lit/min with a stated accuracy of 1.0% FS. At a flow of 100 lit/min (full scale) the error is 1% of full scale, or ±1 lit/min. As the flow moves away from full scale the error remains ±1 lit/min, so at a flow of 50 lit/min that error becomes a larger percentage (±2%) of the flow. Moving further from full scale increases the error as a percentage of flow still more; at a flow of 10 lit/min the ±1 lit/min error is ±10% of the flow.
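The arithmetic above can be sketched in a few lines of Python (an illustrative sketch, not part of the original slides), showing how a fixed 1% FS error grows as a percentage of the reading at lower flows:

```python
# Sketch: a 1% FS accuracy spec on a 100 lit/min flow meter gives a
# fixed absolute error; its share of the reading grows at low flow.
FULL_SCALE = 100.0        # lit/min
ACCURACY_FS = 0.01        # 1.0% of full scale

fixed_error = ACCURACY_FS * FULL_SCALE    # +/-1 lit/min at every reading

for flow in (100.0, 50.0, 10.0):
    pct_of_reading = fixed_error / flow * 100
    print(f"{flow:5.0f} lit/min: +/-{fixed_error:.0f} lit/min "
          f"= +/-{pct_of_reading:.0f}% of reading")
```

Running it reproduces the ±1%, ±2% and ±10% figures quoted in the text.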

Accuracy as Percentage of Actual Value If, however, an instrument has accuracy specified as % Actual Value, then the error will always be the same percentage of the actual flow. Using the 100 lit/min instrument again as the example, but this time with a stated accuracy of 1% of Actual Value, at 10 lit/min of flow the error is only ±1% of the flow, better by ten times.

Accuracy and Precision Precision It is a measure of the reproducibility of the measurements, i.e. given a fixed value of a quantity, precision is a measure of agreement within a group of measurements. The term precise means clearly or sharply defined.

Difference Between Accuracy and Precision As an example of the difference in meaning of accuracy and precision, suppose that we have an ammeter which possesses a high degree of precision by virtue of its clearly legible, finely divided, distinct scale, and a knife-edge pointer with a mirror arrangement to remove parallax. Its readings can be taken to a small fraction of an ampere. At the same time, its zero adjustment is wrong. Now every time we take a reading the ammeter is as precise as ever, and the readings are consistent and clearly defined. However, the readings taken with this ammeter are not accurate, since they do not conform to truth on account of its faulty zero adjustment.

Difference Between Accuracy and Precision Thus, we say that a set of readings shows precision if the results agree among themselves. Agreement, however, is no guarantee of accuracy, as there may be some systematic disturbing effect that causes all the values to be in error.
[Figure: target diagrams illustrating (a) high precision with poor accuracy, (b) good average accuracy with poor precision, (c) high accuracy with high precision, (d) poor accuracy with poor precision.]

REVIEW QUESTIONS Define accuracy. How can accuracy be specified in terms of full-scale range and actual value? Define precision. Differentiate between accuracy and precision with an example.

III. TOLERANCE, SPAN OR RANGE, LINEARITY, SENSITIVITY, THRESHOLD AND RESOLUTION Learning Objectives: Define tolerance, span or range, linearity, sensitivity, threshold and resolution.

Tolerance Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value. Whilst it is not, strictly speaking, a static characteristic of measuring instruments, it is mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance figure. When used correctly, tolerance describes the maximum deviation of a manufactured component from some specified value. For instance, crankshafts are machined with a diameter tolerance quoted as so many microns, and electric circuit components such as resistors have stated tolerances. A resistor chosen at random from a batch having a nominal value of 1000 Ω might have an actual value anywhere within the quoted tolerance band about that nominal value.

Range or Span The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure. Linearity It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The figure shows a plot of the typical output readings of an instrument when a sequence of input quantities is applied to it.

Linearity Normal procedure is to draw a good-fit straight line through the plotted points, as shown in the figure. Whilst this can often be done with reasonable accuracy by eye, it is always preferable to apply a mathematical least-squares line-fitting technique. The non-linearity is then defined as the maximum deviation of any of the output readings from this straight line. Non-linearity is usually expressed as a percentage of full-scale reading.
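The least-squares procedure above can be sketched in Python (the calibration data below are hypothetical, chosen only to illustrate the steps):

```python
# Sketch: least-squares straight-line fit to calibration points, then
# non-linearity as the maximum deviation expressed as % of full scale.
xs = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # input quantity (hypothetical)
ys = [0.0, 2.1, 4.0, 5.8, 8.1, 10.0]   # instrument output (hypothetical)

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
# Standard least-squares slope and intercept.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Non-linearity: largest deviation of any reading from the fitted line.
max_dev = max(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys))
nonlinearity_pct = max_dev / max(ys) * 100   # as % of full-scale reading
print(f"slope={slope:.3f}, non-linearity={nonlinearity_pct:.1f}% FS")
```

The fitted slope is also the measurement sensitivity discussed in the next slide.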

Sensitivity of Measurement The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Thus, sensitivity is the ratio of the scale deflection produced to the value of the measurand producing that deflection. The sensitivity of measurement is therefore the slope of the straight line drawn in the figure. If, for example, a given pressure applied to a pressure transducer produces a certain deflection, the sensitivity of the instrument is the ratio of that deflection to the applied pressure (assuming that the deflection is zero with zero pressure applied).

Threshold of Measurement If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold for instruments. Some quote absolute values, whereas others quote threshold as a percentage of full-scale reading. As an illustration, a car speedometer typically has a small but non-zero threshold. This means that, if the vehicle starts from rest and accelerates, no output reading is observed on the speedometer until the speed reaches that threshold value.

Resolution of a Measurement Instrument When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. This lower limit on detectable input change is known as the resolution of the instrument. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of full-scale deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions.

Resolution of a Measurement Instrument Using a car speedometer as an example again, its scale has subdivisions of a few units of speed. This means that when the needle is between the scale markings, we cannot estimate the speed more accurately than to the nearest subdivision. This subdivision thus represents the resolution of the instrument.

REVIEW QUESTIONS Define: tolerance, span or range. Define linearity. How is the linearity/nonlinearity of an instrument determined? Define sensitivity. How is the sensitivity of an instrument determined? Define: threshold and resolution.

IV. SENSITIVITY TO DISTURBANCE Learning Objectives: Understand sensitivity to disturbances. Define zero drift and sensitivity drift.

Sensitivity to Disturbance All calibrations and specifications of an instrument are only valid under controlled conditions of temperature, pressure etc. These standard ambient conditions are usually defined in the instrument specification. As variations occur in the ambient temperature etc., certain static instrument characteristics change, and the sensitivity to disturbance is a measure of the magnitude of this change. Such environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift .

Zero Drift or Bias Zero drift or bias describes the effect where the zero reading of an instrument is modified by a change in ambient conditions. This causes a constant error that exists over the full range of measurement of the instrument. The mechanical form of bathroom scale is a common example of an instrument that is prone to bias. It is quite usual to find a small non-zero reading with no one stood on the scale. If someone of known weight were to get on the scale, the reading would be their weight plus that bias. Zero drift is normally removable by calibration. In the case of the bathroom scale just described, a thumbwheel is usually provided that can be turned until the reading is zero with the scale unloaded, thus removing the bias.

Zero Drift or Bias Zero drift is also commonly found in instruments like voltmeters that are affected by ambient temperature changes. Typical units by which such zero drift is measured are volts per °C. This quantity is often called the zero-drift coefficient related to temperature changes. If the characteristic of an instrument is sensitive to several environmental parameters, then it will have several zero-drift coefficients, one for each environmental parameter. A typical change in the output characteristic of a pressure gauge subject to zero drift is shown in the figure.

Sensitivity Drift Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument’s sensitivity of measurement varies as ambient conditions change. It is quantified by sensitivity drift coefficients that define how much drift there is for a unit change in each environmental parameter that the instrument characteristics are sensitive to. Many components within an instrument are affected by environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a spring is temperature dependent.

Sensitivity Drift The figure shows what effect sensitivity drift can have on the output characteristic of an instrument.

Sensitivity Drift Sensitivity drift is measured in units of the form (output unit per measurand unit) per unit of environmental change. If an instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification of the output characteristic is as shown in the figure.
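The combined effect of the two drifts on a linear characteristic y = Kx + b can be sketched as follows (an illustrative model with hypothetical coefficients, not data from the slides):

```python
# Sketch: zero drift shifts the intercept, sensitivity drift tilts the
# slope of a linear characteristic y = K*x + b. Values are hypothetical.
K = 2.0          # nominal sensitivity (output units per input unit)
b = 0.0          # nominal zero reading
K_z = 0.05       # zero-drift coefficient (output units per deg C)
K_s = 0.01       # sensitivity-drift coefficient (per deg C)

def reading(x, delta_t):
    """Output for input x after an ambient change of delta_t deg C."""
    return (K + K_s * delta_t) * x + (b + K_z * delta_t)

# At reference conditions the characteristic is unchanged:
assert reading(10.0, 0.0) == 20.0
# A 20 deg C rise shifts the zero and steepens the slope:
print(reading(0.0, 20.0))    # zero reading: 0 + 0.05*20 = 1.0
print(reading(10.0, 20.0))   # (2 + 0.01*20)*10 + 1.0 = 23.0
```

This reproduces the picture in the figure: the intercept moves up by the zero drift while the line simultaneously steepens by the sensitivity drift.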

REVIEW QUESTIONS What is sensitivity to disturbance? How is it measured? Define zero drift and sensitivity drift.

V. HYSTERESIS EFFECT AND DEAD SPACE Learning Objectives: Understand the hysteresis effect on any measurement. Understand the dead space of a measurement system/instrument.

Hysteresis Effects The figure illustrates the output characteristic of an instrument that exhibits hysteresis. If the input measured quantity to the instrument is steadily increased from a negative value, the output reading varies in the manner shown in curve A. If the input variable is then steadily decreased, the output varies in the manner shown in curve B. The non-coincidence between these loading and unloading curves is known as hysteresis.

Hysteresis Effect Hysteresis can also occur in instruments that contain electrical windings formed round an iron core, due to magnetic hysteresis in the iron. This occurs in devices like the variable inductance displacement transducer, the LVDT and the rotary differential transformer.

Hysteresis Effects Two quantities are defined, maximum input hysteresis and maximum output hysteresis, as shown in the figure. These are normally expressed as a percentage of the full-scale input or output reading respectively. Hysteresis is most commonly found in instruments that contain springs, such as the passive pressure gauge and the Prony brake (used for measuring torque). It is also evident when friction forces in a system have different magnitudes depending on the direction of movement, such as in the pendulum-scale mass-measuring device. Devices like the mechanical flyball (a device for measuring rotational velocity) suffer hysteresis from both of the above sources, because they have friction in moving parts and also contain a spring.

Dead Space Dead space is defined as the range of different input values over which there is no change in the output value. Any instrument that exhibits hysteresis also displays dead space, as marked on the figure.

Dead Space Some instruments that do not suffer from any significant hysteresis can still exhibit a dead space in their output characteristics, however. Backlash in gears is a typical cause of dead space, and results in the sort of instrument output characteristic shown in the figure. Backlash is commonly experienced in gear-sets used to convert between translational and rotational motion (which is a common technique used to measure translational velocity).

REVIEW QUESTIONS What is the hysteresis effect in a measuring instrument? What are its common causes? Define the dead space of a measuring instrument. What are its common causes?

Solved Examples: Example 3.1: Determine the sensitivity of a pressure gauge as the ratio of scale length to pressure, given the radius of the scale line and the pressure range displayed over a stated arc of the scale. The gauge has a linear calibration curve, so the sensitivity is the arc length of the scale divided by the pressure span.

Solved Examples: Example 3.2: The individual sensitivities of the different elements comprising a temperature measuring system are given for the transducer, the Wheatstone bridge, the amplifier gain and the pen recorder. Determine the overall sensitivity, and the temperature change corresponding to a given recorder pen movement.
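Since the numerical sensitivities were lost in this transcript, the method can still be sketched with hypothetical stand-in values (clearly not the original data): the overall sensitivity of elements in series is the product of the individual sensitivities, and inverting it recovers the measurand from the recorder output.

```python
# Sketch of Example 3.2's method with HYPOTHETICAL sensitivities.
sensitivities = {
    "transducer":        0.40,   # ohm per deg C (hypothetical)
    "wheatstone_bridge": 0.01,   # V per ohm (hypothetical)
    "amplifier":         10.0,   # V per V (hypothetical)
    "pen_recorder":      10.0,   # mm per V (hypothetical)
}

overall = 1.0
for value in sensitivities.values():
    overall *= value                  # series sensitivities multiply

pen_movement_mm = 10.0                # hypothetical pen movement
temp_change = pen_movement_mm / overall   # invert to get temperature
print(overall, temp_change)           # mm per deg C, deg C
```

With these stand-in numbers the overall sensitivity is 0.4 mm/°C, so a 10 mm pen movement corresponds to a 25 °C change.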

Solved Examples: Example 3.3: A load cell calibrated at one ambient temperature has a given output/input characteristic. When it is used in a hotter environment, its characteristic changes. Determine the zero drift, the sensitivity drift, and the sensitivity drift per °C change in ambient temperature. If a stated fraction of a scale division can be read with a fair degree of certainty, determine the resolution of the instrument in both cases, i.e. at each of the two temperatures.

Solved Examples: Example 3.4: A Wheatstone bridge requires a change of 7 Ω in the unknown arm of the bridge to produce a change in deflection of 3 mm in the galvanometer. Determine the sensitivity. Also determine the deflection factor.
Sensitivity = deflection / resistance change = 3 mm / 7 Ω = 0.429 mm/Ω
Deflection factor = 1 / sensitivity = 7 Ω / 3 mm = 2.333 Ω/mm

Solved Examples: Example 3.5: (a) An instrument is calibrated in an environment at a temperature of 20⁰C and the following output readings y are obtained for various input values x:
x: 5 | 10 | 15 | 20 | 25 | 30
y: 13.1 | 26.2 | 39.3 | 52.4 | 65.5 | 78.6
Determine the measurement sensitivity, expressed as the ratio y/x.
(b) When the instrument is subsequently used in an environment at a temperature of 50⁰C, the input/output characteristic changes to the following:
x: 5 | 10 | 15 | 20 | 25 | 30
y: 14.7 | 29.4 | 44.1 | 58.8 | 73.5 | 88.2
Determine the new measurement sensitivity and hence the sensitivity drift per °C change in ambient temperature.
Sensitivity at 20⁰C = 13.1/5 = 2.62; sensitivity at 50⁰C = 14.7/5 = 2.94; sensitivity drift = (2.94 − 2.62)/(50 − 20) ≈ 0.0107 /⁰C
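The arithmetic in Example 3.5 can be checked directly from the tabulated data:

```python
# Worked check of Example 3.5 using the tabulated calibration data.
x   = [5, 10, 15, 20, 25, 30]
y20 = [13.1, 26.2, 39.3, 52.4, 65.5, 78.6]   # outputs at 20 deg C
y50 = [14.7, 29.4, 44.1, 58.8, 73.5, 88.2]   # outputs at 50 deg C

# Sensitivity as the mean ratio y/x at each temperature.
s20 = sum(yi / xi for xi, yi in zip(x, y20)) / len(x)
s50 = sum(yi / xi for xi, yi in zip(x, y50)) / len(x)

# Sensitivity drift per deg C of ambient temperature change.
drift = (s50 - s20) / (50 - 20)
print(round(s20, 2), round(s50, 2), round(drift, 4))
```

This confirms a sensitivity of 2.62 at 20 ⁰C, 2.94 at 50 ⁰C, and a sensitivity drift of about 0.0107 per ⁰C.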

Solved Examples: Example 3.6: The output of a temperature transducer is recorded over its full-scale range of 25⁰C as shown below:
Calibration Temperature (⁰C): 0.0 | 5.0 | 10.0 | 15.0 | 20.0 | 25.0
Output Reading (⁰C): 0.0 | 5.0 | 9.8 | 14.8 | 19.9 | 25.0
Determine (a) the static sensitivity of the device, and (b) the maximum nonlinearity of the device.
(a) Static sensitivity = 25.0/25.0 = 1.0 (output ⁰C per input ⁰C)
(b) The maximum deviation from the ideal line y = x is 0.2⁰C (at the 10⁰C and 15⁰C points), so the maximum nonlinearity = 0.2/25 × 100 = 0.8% of full scale.
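Example 3.6 can likewise be verified from its table:

```python
# Worked check of Example 3.6 using the tabulated readings.
temp     = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]   # calibration temperature
readings = [0.0, 5.0,  9.8, 14.8, 19.9, 25.0]   # instrument output

# (a) Static sensitivity over the full range: output change divided by
#     input change between the range end points.
sensitivity = (readings[-1] - readings[0]) / (temp[-1] - temp[0])

# (b) Maximum nonlinearity: largest deviation from the ideal line
#     y = sensitivity * x, as a percentage of the full-scale output.
max_dev = max(abs(r - sensitivity * t) for t, r in zip(temp, readings))
nonlinearity_pct = max_dev / readings[-1] * 100
print(sensitivity, round(nonlinearity_pct, 1))
```

The result matches the stated answers: sensitivity 1.0 and nonlinearity 0.8% of full scale.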