Radio Frequency by Steve Winder and Joe Carr

20.1 Accuracy, resolution and stability

All measurements are subject to error, and two instruments applied to the same piece of equipment under test may give different answers. Tolerances must therefore be accepted. The errors arise from the following sources:

1. Human error, e.g. limited precision in reading a scale, or use of an incorrect instrument or range setting for the purpose.
2. The accuracy to which the instrument is able to display the result of a measurement or, in the case of a generator, the frequency or output level.
3. Accuracy of calibration.
4. Tolerances in the components used in the construction of the instrument, and variations in the load applied to the instrument.
5. Variations caused by long-term drift in the values of components.
6. Variations due to temperature and supply voltage fluctuations, and the warm-up time required by some instruments.
7. An effect on the circuit under test by the connection of the instrument.

There is an important difference between the accuracy and the resolution of an instrument. The accuracy is a statement of the maximum errors which may occur from the causes in statements 3 to 6 above. In instrument specifications, stability defined by statements 5 and 6 is usually quoted separately.

The accuracy of analogue measuring instruments is normally quoted as a percentage of full scale deflection (FSD). This is the accuracy of the instrument movement and components plus the scale calibration. The scale graduations, though, may not permit the user to determine the reading to the accuracy of the instrument perhaps because the graduations are cramped or parallax reading error occurs. These factors decide the resolution or precision of reading.
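The consequence of quoting accuracy as a percentage of FSD is that the absolute error is fixed across the scale, so the percentage error of the actual reading grows as the reading falls. A small worked sketch, using hypothetical figures (a meter specified at ±2% of FSD on a 100 V range; neither value comes from the text):

```python
FSD = 100.0          # full-scale deflection, volts (assumed for illustration)
ACCURACY_PCT = 2.0   # quoted accuracy, percent of FSD (assumed for illustration)

def absolute_error(fsd, accuracy_pct):
    """Maximum error in volts; the same at every point on the scale."""
    return fsd * accuracy_pct / 100.0

def relative_error_pct(reading, fsd, accuracy_pct):
    """Possible error expressed as a percentage of the actual reading."""
    return absolute_error(fsd, accuracy_pct) / reading * 100.0

# The absolute error is fixed at +/-2 V, so the lower the reading on
# the scale, the worse the possible percentage error of that reading:
for reading in (100.0, 50.0, 10.0):
    pct = relative_error_pct(reading, FSD, ACCURACY_PCT)
    print(f"{reading:5.1f} V reading: +/-{pct:.0f}% of reading")
```

This is why readings taken low on an analogue scale should be avoided where a lower range is available: at 10 V on this hypothetical 100 V range the possible error is ±20% of the reading.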

The accuracy of instruments with digital displays is usually quoted as a percentage of the reading plus or minus one count or one digit. While digital instruments are generally more accurate than their analogue counterparts, the fact that the least significant figure may be in error affects the resolution. Figure 20.1 shows how this can arise. Most digital instruments use a gating process to switch the input to the measuring circuitry for the appropriate period of time. The gating time itself may vary and affect the accuracy, but even with a perfectly accurate and stable gating time, the phase of the input signal at the time of switching affects the number of pulses passing through the gate and thus the resolution.

[Figure 20.1 Gating error introduced by signal phasing: signals A and B shown against the gating pulse, with the gate-open period marked]

Performance figures taken from a number of manufacturers' catalogues are listed below. They show only the more important features of the specifications and are typical of those for high-quality instruments used in radio work.