Instrument Ass. 2
CHAPTER 2:
Q1:- Explain what is meant by (a) active instruments and (b) passive instruments. Give examples of each and discuss the relative merits of these two classes of instruments.
Answer: Passive instruments are those that require an external power source to operate; the output is a measure of the variation that the measured quantity produces in some passive electrical component (e.g. resistance or capacitance). Examples of passive instruments:
Slide-wire resistor Resistance strain gauge Differential transformer
Capacitors A capacitor is a passive electrical component that can store energy in the electric field between a pair of conductors (called "plates"). The process of storing energy in the capacitor is known as "charging", and involves electric charges of equal magnitude, but opposite polarity, building up on each plate. A capacitor's ability to store charge is measured by its capacitance, in units of farads. Capacitors are often used by engineers in electric and electronic circuits as energystorage devices. They can also be used to differentiate between high-frequency and low-frequency signals. This property makes them useful in electronic filters. Practical capacitors have series resistance, internal leakage of charge, series inductance and other non-ideal properties not found in a theoretical, ideal, capacitor.
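The frequency-dependent behaviour and energy storage described above can be illustrated with a short calculation. This is only an illustrative sketch: the 100 nF value, the test frequencies and the 5 V charge level are arbitrary choices, not figures from the text.

```python
import math

def capacitive_reactance(freq_hz: float, capacitance_f: float) -> float:
    """Reactance Xc = 1 / (2*pi*f*C) in ohms: falls as frequency rises."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

def stored_energy(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in the electric field, E = 0.5 * C * V^2, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

C = 100e-9  # a hypothetical 100 nF capacitor

# Low-frequency signals see a high impedance and high-frequency signals a
# low impedance -- the basis of the filtering behaviour described above.
print(round(capacitive_reactance(50, C)))      # mains frequency: ~31831 ohm
print(round(capacitive_reactance(100e3, C), 1))  # 100 kHz: ~15.9 ohm
print(stored_energy(C, 5.0))                   # 1.25e-06 J when charged to 5 V
```

Comparing the two reactance figures shows why a capacitor can separate high-frequency from low-frequency signals in a filter.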
Active instruments are of the self-generating type. They require no external power, and produce an analogue voltage or current directly when stimulated by some physical form of energy. Examples of active instruments:
Thermocouple Photovoltaic cell Moving coil generator
Thermocouple A thermocouple is a device consisting of two different conductors (usually metal alloys) that produce a voltage, proportional to a temperature difference, between either ends of the two conductors. Thermocouples are a widely used type of temperature sensor for measurement and control and can also be used to convert a temperature gradient into electricity. They are inexpensive, interchangeable, are supplied with standard connectors, and can measure a wide range of temperatures. In contrast to most other methods of temperature measurement, thermocouples are self powered and require no external form of excitation. The main limitation with thermocouples is accuracy and system errors of less than one degree Celsius (C) can be difficult to achieve.
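The way a thermocouple calibration table is used in practice can be sketched in code: a measured e.m.f. is converted to temperature by linear interpolation between calibration points. The table values below are the tungsten/rhenium figures from Q5 later in this document; the interpolation routine itself is an illustrative assumption (real systems use standard reference tables and cold-junction compensation).

```python
# Calibration points as (e.m.f. in mV, temperature in degC) pairs,
# taken from the tungsten/rhenium thermocouple table in Q5.
TABLE = [(4.37, 250.0), (8.74, 500.0), (13.11, 750.0), (17.48, 1000.0)]

def emf_to_temperature(emf_mv: float) -> float:
    """Piecewise-linear interpolation of temperature (degC) from e.m.f. (mV)."""
    for (e0, t0), (e1, t1) in zip(TABLE, TABLE[1:]):
        if e0 <= emf_mv <= e1:
            return t0 + (t1 - t0) * (emf_mv - e0) / (e1 - e0)
    raise ValueError("e.m.f. outside calibration range")

print(round(emf_to_temperature(8.74), 1))   # a table point: 500.0
print(round(emf_to_temperature(6.555), 1))  # midway between points: 375.0
```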
Q2:- Discuss the advantages and disadvantages of null and deflection types of measuring instrument. What are null types of instrument mainly used for, and why?
Answer: Deflection-type measuring instruments: these are instruments in which a pointer system indicates the output of the system. Their accuracy depends on the linearity and calibration of the spring. A simple diagram of this system is as follows.
Advantages:
Because they are convenient to use, these instruments are widely preferred for everyday measurement duties.
It is very easy to read the output of deflection type instruments.
Disadvantages:
They are less accurate.
Null-type measuring instruments: in this measurement technique, known weights are applied until a null (balance) condition is reached, and the output of the system is determined from the weights used. Accuracy depends on the calibration of the weights. A simple diagram of this system is as follows.
Advantages:
They are very accurate, and for this reason they are mainly used for calibration duties.
Disadvantages:
Because the addition of different weights is involved, this method is somewhat more difficult to use than the deflection type.
However, for calibration duties, the null-type instrument is preferable because of its superior accuracy. The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations. Q3:-Briefly define and explain all the static characteristics of measuring instruments? Answer:Accuracy of measurement is thus one consideration in the choice of instrument for a particular application. Other parameters such as sensitivity, linearity and the reaction to ambient temperature changes are further considerations. These attributes are
collectively known as the static characteristics of instruments, and are given in the data sheet for a particular instrument. The various static characteristics are defined as follows.
Accuracy and inaccuracy: The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. In practice, it is more usual to quote the inaccuracy figure rather than the accuracy figure for an instrument. Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale reading of an instrument. The term measurement uncertainty is frequently used in place of inaccuracy.
Precision: Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very small. Precision is often, though incorrectly, confused with accuracy: high precision does not imply anything about measurement accuracy, and a high precision instrument may have a low accuracy. Low accuracy measurements from a high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration.
Tolerance: Tolerance is a term that is closely related to accuracy and defines the maximum error that is to be expected in some value. When used correctly, tolerance describes the maximum deviation of a manufactured component from some specified value.
Range or span: The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.
Linearity: It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. If the output readings are plotted against the corresponding input values, a straight line can be fitted to the plotted points; the non-linearity is then defined as the maximum deviation of any of the output readings from this straight line. Non-linearity is usually expressed as a percentage of full-scale reading.
Sensitivity of measurement: The sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Thus, sensitivity is the ratio: scale deflection / value of measurand producing the deflection.
Threshold:
If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable. This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold for instruments.
Resolution: When an instrument is showing a particular output reading, there is a lower limit on the magnitude of the change in the input measured quantity that produces an observable change in the instrument output. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of deflection. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions.
Dead space: Dead space is defined as the range of different input values over which there is no change in output value. Any instrument that exhibits hysteresis also displays dead space.
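The non-linearity definition above lends itself to a short computation: fit a least-squares straight line to the calibration readings and report the maximum deviation as a percentage of full-scale reading. The sample readings here are invented for illustration.

```python
def nonlinearity_percent_fs(x, y):
    """Max deviation of readings y from a least-squares line, as % of full scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Ordinary least-squares slope and intercept.
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    # Largest deviation of any reading from the fitted straight line.
    max_dev = max(abs(yi - (slope * xi + intercept)) for xi, yi in zip(x, y))
    return 100.0 * max_dev / max(y)  # expressed as % of full-scale reading

x = [0, 1, 2, 3, 4, 5]                 # input values (arbitrary units)
y = [0.0, 1.0, 2.1, 2.9, 4.0, 5.0]     # nearly linear output readings
print(round(nonlinearity_percent_fs(x, y), 2))  # ~1.94 % of full scale
```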
Q4:- Explain the difference between accuracy and precision in an instrument? Answer:Precision is a term that describes an instrument’s degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very small. Precision is often, though incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy. A high precision instrument may have a low accuracy. Low accuracy measurements from a high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration. The figure shows the results of tests on three industrial robots that were programmed to place components at a particular point on a table. The target point was at the centre of the concentric circles shown, and the black dots represent the points where each robot actually deposited components at each attempt. Both the accuracy and precision of Robot 1 are shown to be low in this trial. Robot 2 consistently puts the component down at approximately the same place but this is the wrong point. Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high accuracy, because it consistently places the component at the correct target position.
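The robot example can be made quantitative: for each robot, the distance of the mean placement from the target measures bias (inaccuracy), while the scatter of the placements about their own mean measures precision. The coordinates below are invented to mimic the three cases described; the target is taken as the origin.

```python
import math

# Hypothetical placement coordinates (metres from the target) for each robot.
robots = {
    "robot 1": [(2.0, 1.5), (-1.8, 2.2), (1.1, -2.5), (-2.3, -1.0)],  # scattered, off target
    "robot 2": [(3.0, 3.1), (3.1, 2.9), (2.9, 3.0), (3.0, 3.0)],      # tight cluster, wrong place
    "robot 3": [(0.05, -0.04), (-0.03, 0.06), (0.04, 0.02), (-0.06, -0.04)],  # tight, on target
}

for name, pts in robots.items():
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    bias = math.hypot(cx, cy)                                 # small value -> high accuracy
    spread = max(math.hypot(p[0] - cx, p[1] - cy) for p in pts)  # small value -> high precision
    print(name, round(bias, 2), round(spread, 2))
```

Robot 2 shows a large bias with a small spread (high precision, low accuracy), exactly the case the text describes.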
Q5:- A tungsten/5% rhenium–tungsten/26% rhenium thermocouple has an output e.m.f. as shown in the following table when its hot (measuring) junction is at the temperatures shown. Determine the sensitivity of measurement for the thermocouple in mV/°C.

e.m.f. output (mV)   Temperature (°C)
4.37                 250
8.74                 500
13.11                750
17.48                1000
Answer: For a change in e.m.f. of 4.37 mV over a temperature difference of 250°C, sensitivity = scale deflection / value of measurand producing the deflection = 4.37/250 = 0.01748 mV/°C.
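The calculation can be checked across the whole table in a few lines: every row gives the same ratio, confirming that the sensitivity is constant over this range.

```python
# e.m.f. and temperature columns from the Q5 table.
emf_mv = [4.37, 8.74, 13.11, 17.48]
temp_c = [250, 500, 750, 1000]

# Sensitivity = change in e.m.f. / change in temperature for each row.
sensitivities = [e / t for e, t in zip(emf_mv, temp_c)]
print(round(sensitivities[0], 5))                             # 0.01748 mV/degC
print(all(abs(s - 0.01748) < 1e-9 for s in sensitivities))    # True: constant sensitivity
```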
Q6:- Define sensitivity drift and zero drift. What factors can cause sensitivity drift and zero drift in instrument characteristics?
Answer: Zero drift or bias describes the effect where the zero reading of an instrument is modified by a change in ambient conditions. This causes a constant error that exists over the full range of measurement of the instrument. The mechanical form of bathroom scale is a common example of an instrument that is prone to bias. It is quite usual to find that there is a reading of perhaps 1 kg with no one standing on the scale. If someone
of known weight 70 kg were to get on the scale, the reading would be 71 kg, and if someone of known weight 100 kg were to get on the scale, the reading would be 101 kg. Zero drift is normally removable by calibration. In the case of the bathroom scale just described, a thumbwheel is usually provided that can be turned until the reading is zero with the scales unloaded, thus removing the bias. Zero drift is also commonly found in instruments like voltmeters that are affected by ambient temperature changes. Typical units by which such zero drift is measured are volts/°C. This is often called the zero drift coefficient related to temperature changes. If the characteristic of an instrument is sensitive to several environmental parameters, then it will have several zero drift coefficients, one for each environmental parameter. Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument's sensitivity of measurement varies as ambient conditions change. It is quantified by sensitivity drift coefficients that define how much drift there is for a unit change in each environmental parameter that the instrument characteristics are sensitive to. Many components within an instrument are affected by environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a spring is temperature dependent.

Q7:- (a) An instrument is calibrated in an environment at a temperature of 20°C and the following output readings y are obtained for various input values x:

x:   5      10     15     20     25     30
y:   13.1   26.2   39.3   52.4   65.5   78.6

Determine the measurement sensitivity, expressed as the ratio y/x. (b) When the instrument is subsequently used in an environment at a temperature of 50°C, the input/output characteristic changes to the following:

x:   5      10     15     20     25     30
y:   14.7   29.4   44.1   58.8   73.5   88.2

Determine the new measurement sensitivity. Hence determine the sensitivity drift due to the change in ambient temperature of 30°C.
Answer: (a) For a change in y of 13.1 when x changes by 5, the sensitivity is y/x = 13.1/5 = 2.62. (b) At 50°C, the sensitivity is y/x = 14.7/5 = 2.94. The change in sensitivity is 2.94 − 2.62 = 0.32 for an ambient temperature change of 30°C, so the sensitivity drift is 0.32/30 = 0.0107 per °C.

Q8:- A load cell is calibrated in an environment at a temperature of 21°C and has the following deflection/load characteristic:

Load (kg):        0     50    100   150   200
Deflection (mm):  0.0   1.0   2.0   3.0   4.0

When used in an environment at 35°C, its characteristic changes to the following:

Load (kg):        0     50    100   150   200
Deflection (mm):  0.2   1.3   2.4   3.5   4.6

(a) Determine the sensitivity at 21°C and 35°C. (b) Calculate the total zero drift and sensitivity drift at 35°C. (c) Hence determine the zero drift and sensitivity drift coefficients (in units of μm/°C and (μm per kg)/°C).

Answer: (a) At 21°C the sensitivity is 1.0/50 = 0.02 mm/kg; at 35°C it is (4.6 − 0.2)/200 = 0.022 mm/kg. (b) Total zero drift = 0.2 mm; total sensitivity drift = 0.022 − 0.02 = 0.002 mm/kg. (c) The temperature change is 35 − 21 = 14°C, so the zero drift coefficient is 0.2/14 = 0.0143 mm/°C = 14.3 μm/°C, and the sensitivity drift coefficient is 0.002/14 = 0.000143 (mm per kg)/°C = 0.143 (μm per kg)/°C.

Q10:- Write down the general differential equation describing the dynamic response of a second order measuring instrument and state the expressions relating the static sensitivity, undamped natural frequency and damping ratio to the parameters in this differential equation. Sketch the instrument response for the cases of heavy damping, critical damping and light damping, and state which of these is the usual target when a second order instrument is being designed.

Answer: If all coefficients a3 . . . an other than a0, a1 and a2 in equation (2.2) are assumed zero, then we get:

a2 (d²q0/dt²) + a1 (dq0/dt) + a0 q0 = b0 qi
Applying the D operator again: a2 D²q0 + a1 Dq0 + a0 q0 = b0 qi, and rearranging:

q0 = (b0 qi) / (a0 + a1 D + a2 D²)
It is convenient to re-express the variables a0, a1, a2 and b0 in equation (2.8) in terms of three parameters K (static sensitivity), ω (undamped natural frequency) and ξ (damping ratio), where:

K = b0/a0;   ω = √(a0/a2);   ξ = a1 / (2√(a0 a2))
Re-expressing equation (2.8) in terms of K, ω and ξ we get:

q0/qi = K / (D²/ω² + (2ξ/ω)D + 1)
This is the standard equation for a second order system, and any instrument whose response can be described by it is known as a second order instrument. If equation (2.9) is solved analytically, the shape of the step response obtained depends on the value of the damping ratio ξ. The output responses of a second order instrument following a step change in the measured quantity at time t fall into the following cases.
Case A: Where ξ = 0, there is no damping and the instrument output exhibits constant amplitude oscillations when disturbed by any change in the physical quantity measured.
Case B: For light damping of ξ = 0.2, the response to a step change in input is still oscillatory, but the oscillations gradually die down.
Cases C and D: Further increase in the value of ξ reduces oscillations and overshoot still more, as shown by curves (C) and (D), and finally the response becomes very heavily damped, creeping slowly towards the final value.
When a second order instrument is being designed, the usual target is a damping ratio close to critical damping (typically ξ of about 0.7), which gives the fastest settling without excessive oscillation.
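The damping cases described above can be reproduced with a small numerical integration of the second order equation rewritten in the time domain, (1/ω²)q0'' + (2ξ/ω)q0' + q0 = K·qi. This is an illustrative sketch using a simple semi-implicit Euler scheme with ω = 1 and K = 1; the step sizes and time span are arbitrary choices.

```python
def step_response_peak(xi, w=1.0, K=1.0, dt=1e-3, t_end=40.0):
    """Peak value of q0 for a unit step input qi = 1, from Euler integration."""
    q0, v = 0.0, 0.0   # output and its first derivative
    peak = 0.0
    for _ in range(int(t_end / dt)):
        # q0'' from the ODE: q0'' = w^2 * (K*qi - q0 - (2*xi/w)*q0')
        a = (K * 1.0 - q0 - (2 * xi / w) * v) * w * w
        v += a * dt        # semi-implicit Euler: update velocity first...
        q0 += v * dt       # ...then position, for numerical stability
        peak = max(peak, q0)
    return peak

print(round(step_response_peak(0.2), 2))    # light damping: pronounced overshoot
print(round(step_response_peak(0.707), 2))  # near-critical: very small overshoot
print(round(step_response_peak(2.0), 2))    # heavy damping: no overshoot
```

The peak values fall monotonically as ξ increases, matching the progression from case (B) to cases (C) and (D).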
CHAPTER 3: 3.1 Explain the difference between systematic and random errors. What are the typical sources of these two types of error? Errors arising during the
measurement process can be divided into two groups, known as systematic errors and random errors. Systematic errors describe errors in the output readings of a measurement system that are consistently on one side of the correct reading, i.e. either all the errors are positive or they are all negative. Two major sources of systematic errors are system disturbance during measurement and the effect of environmental changes (modifying inputs), as discussed in sections 3.2.1 and 3.2.2. Other sources of systematic error include bent meter needles, the use of uncalibrated instruments, drift in instrument characteristics and poor cabling practices. Even when systematic errors due to the above factors have been reduced or eliminated, some errors remain that are inherent in the manufacture of an instrument. These are quantified by the accuracy figure quoted in the published specifications contained in the instrument data sheet. Random errors are perturbations of the measurement either side of the true value caused by random and unpredictable effects, such that positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same quantity. Such perturbations are mainly small, but large perturbations occur from time to time, again unpredictably. Random errors often arise when measurements are taken by human observation of an analogue meter, especially where this involves interpolation between scale points. Electrical noise can also be a source of random errors. To a large extent, random errors can be overcome by taking the same measurement a number of times and extracting a value by averaging or other statistical techniques, as discussed in section 3.5. However, any quantification of the measurement value and statement of error bounds remains a statistical quantity. 
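The averaging technique mentioned above can be demonstrated in a few lines: the mean of a set of scattered readings is a much better estimate of the true value than any individual reading, and the standard deviation quantifies the size of the random scatter. The readings below are invented for illustration.

```python
import statistics

true_value = 100.0
# Repeated readings of the same quantity, perturbed by random errors that
# fall roughly equally on either side of the true value.
readings = [100.3, 99.8, 100.1, 99.6, 100.4, 99.9, 100.2, 99.7]

mean = statistics.mean(readings)     # best estimate of the true value
spread = statistics.stdev(readings)  # size of the random scatter
print(round(mean, 2))    # 100.0 -- the random errors largely cancel
print(round(spread, 2))  # ~0.29 -- individual readings err by this much
```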
Because of the nature of random errors and the fact that large perturbations in the measured quantity occur from time to time, the best that we can do is to express measurements in probabilistic terms: we may be able to assign a 95% or even 99% confidence level that the measurement is a certain value within error bounds of, say, ±1%, but we can never attach a 100% probability to measurement values that are subject to random errors. Finally, a word must be said about the distinction between systematic and random errors. Error sources in the measurement system must be examined carefully to determine what type of error is present, systematic or random, and to apply the appropriate treatment. In the case of manual data measurements, a human observer may make a different observation at each attempt, but it is often reasonable to assume that the errors are random and that the mean of these readings is likely to be close to the correct value. However, this is only true as long as the human observer is not introducing a parallax-induced systematic error as well by persistently reading the position of a needle against the scale of an analogue meter from one side rather than from directly above. In that case, correction would have to be made for this systematic error (bias) in the measurements before statistical techniques were applied to reduce the
effect of random errors. 3.2 In what ways can the act of measurement cause a disturbance in the system being measured? Systematic errors are introduced either by the effect of environmental disturbances or through the disturbance of the measured system by the act of measurement. Disturbance of the measured system by the act of measurement is a common source of systematic error. If we were to start with a beaker of hot water and wished to measure its temperature with a mercury-in-glass thermometer, then we would take the thermometer, which would initially be at room temperature, and plunge it into the water. In so doing, we would be introducing a relatively cold mass (the thermometer) into the hot water and a heat transfer would take place between the water and the thermometer. This heat transfer would lower the temperature of the water. Whilst the reduction in temperature in this case would be so small as to be undetectable by the limited measurement resolution of such a thermometer, the effect is finite and clearly establishes the principle that, in nearly all measurement situations, the process of measurement disturbs the system and alters the values of the physical quantities being measured. A particularly important example of this occurs with the orifice plate. This is placed into a fluid-carrying pipe to measure the flow rate, which is a function of the pressure that is measured either side of the orifice plate. This measurement procedure causes a permanent pressure loss in the flowing fluid. The disturbance of the measured system can often be very significant. Thus, as a general rule, the process of measurement always disturbs the system being measured. The magnitude of the disturbance varies from one measurement system to the next and is affected particularly by the type of instrument used for measurement. Minimizing the disturbance of measured systems is an important consideration in instrument design.
However, an accurate understanding of the mechanisms of system disturbance is a prerequisite for this. 3.7 What steps can be taken to reduce the effect of environmental inputs in measurement systems? An environmental input is defined as an apparently real input to a measurement system that is actually caused by a change in the environmental conditions surrounding the measurement system. The fact that the static and dynamic characteristics specified for measuring instruments are only valid for particular environmental conditions (e.g. of temperature and pressure) has already been discussed at considerable length in Chapter 2. These specified conditions must be reproduced as closely as possible during calibration exercises because, away from the specified calibration conditions, the characteristics of measuring instruments vary to some extent and cause measurement errors. The magnitude of this environment-induced variation is quantified by the two constants known as sensitivity drift and zero drift, both of which are generally included in the published specifications for an instrument. Such variations of environmental conditions
away from the calibration conditions are sometimes described as modifying inputs to the measurement system because they modify the output of the system. When such modifying inputs are present, it is often difficult to determine how much of the output change in a measurement system is due to a change in the measured variable and how much is due to a change in environmental conditions. This is illustrated by the following example. Suppose we are given a small closed box and told that it may contain either a mouse or a rat. We are also told that the box weighs 0.1 kg when empty. If we put the box onto bathroom scales and observe a reading of 1.0 kg, this does not immediately tell us what is in the box because the reading may be due to one of three things: (a) a 0.9 kg rat in the box (real input); (b) an empty box with a 0.9 kg bias on the scales due to a temperature change (environmental input); (c) a 0.4 kg mouse in the box together with a 0.5 kg bias (real + environmental inputs). Thus, the magnitude of any environmental input must be measured before the value of the measured quantity (the real input) can be determined from the output reading of an instrument. In any general measurement situation, it is very difficult to avoid environmental inputs, because it is either impractical or impossible to control the environmental conditions surrounding the measurement system. System designers are therefore charged with the task of either reducing the susceptibility of measuring instruments to environmental inputs or, alternatively, quantifying the effect of environmental inputs and correcting for them in the instrument output reading. The techniques used to deal with environmental inputs and minimize their effect on the final output measurement follow a number of routes as discussed below. 3.10 (a) What do you understand by the term probability density function?
If the height of the frequency distribution curve is normalized such that the area under it is unity, then the curve in this form is known as a probability curve, and the height F(D) at any particular deviation magnitude D is known as the probability density function (p.d.f.). The condition that the area under the curve is unity can be expressed mathematically as:

∫ F(D) dD = 1, with the integral taken over D from −∞ to +∞.
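The normalization condition can be verified numerically for a Gaussian p.d.f. (a common model for random measurement deviations) by integrating with the trapezium rule. The choice of a Gaussian F(D) and the ±8σ integration limits are illustrative assumptions; any properly normalized density would give the same result.

```python
import math

def gaussian_pdf(d, sigma=1.0):
    """Gaussian probability density F(D) with zero mean deviation."""
    return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def trapezoid_area(f, a, b, n=10000):
    """Trapezium-rule estimate of the area under f between a and b."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Integrating over +/-8 sigma captures essentially all of the area.
area = trapezoid_area(gaussian_pdf, -8.0, 8.0)
print(round(area, 6))  # 1.0 -- the area under a p.d.f. is unity
```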