Society for Amateur Scientists

Design Your Experiments Part XI: Instruments

By Kevin Kilty

Instruments and sensors.

At some point experiment design leaves the realm of what to measure and enters that of how to measure. This brings me perilously close to subjects like lab technique and measurement, which are extremely specific to each scientific discipline. Discussing every possibility would expand this series beyond all reason; worse, I can't explain most of it easily -- I don't possess the knowledge, and it is better learned through experience anyway. Lab technique is as much an art as a science. However, experiments often depend on instruments and sensors, many of which are electronic, and it makes sense for me to summarize a few principles about them. I also have a few remarks of a more general nature.

Characteristics of instruments.

The following characteristics are important in deciding what sensors and instruments to use in an experiment. Manufacturers usually provide information on these characteristics to help in the design process.

  • Sensitivity is the change in output of the measuring system per unit change in input. Input to the sensor is usually the thing we are trying to measure. Sensitivity is intimately connected with the intensity of the signal in an experiment. For example, a typical one-half bridge strain gauge, powered with a 10V supply, has a sensitivity of 0.1mV per micro-strain. So, if I place a structure under stress and read a signal of 0.025V on my meter, the strain is 250x10^-6. If I need more sensitive strain measurements I can try to measure very small voltages more accurately (better resolution) or I can find a gauge with a better gauge factor (increased signal intensity). Generally a person tries to use the most sensitive sensor available, but doing so may conflict with other requirements.
  • Resolution is the minimum signal or change of signal that produces a change in output of the sensor or system. One way to think of resolution is that it is the value of the least significant digit in the display of an instrument. However the concept goes far beyond this. It also means the smallest possible separation of two signals that an instrument can detect. Therefore, a spectrometer has a smallest wavelength difference that it can separate, a multichannel analyzer has a smallest difference of radiation energy it can separate, a microscope or telescope has a smallest angular separation it can resolve, and so forth. Lack of resolution causes separate features to merge into a composite. In many cases I can find a means to sacrifice some intensity of signal and improve resolution. For lack of a better term I'll refer to this process as apodization, although this term is used only in telescope designs as far as I know. The decision to sacrifice some intensity for better resolution involves figuring out if the experiment improves with more intense signal or better resolution. This is not always obvious. I own a telescope with superb resolution, but I almost always wish its image were brighter.
  • Dynamic range is the range of values over which a measuring system will provide useful measurement. To some extent the desire for a wide range conflicts with the desire for great sensitivity, as most measuring systems have to decrease their signal resolution in order to display larger signal intensity.
  • Linearity is the tendency of sensitivity to remain constant independently of input to the system. In other words it measures the tendency of the sensor output to remain strictly proportional to input. A system lacking in linearity is both more complicated to calibrate, and will generate harmonics when I use it to measure dynamic signals. Usually linearity is possible only over small ranges of input.
  • Selectivity is the property of a sensor to respond to a desired input and reject spurious ones. Here is a very interesting example of lack of selectivity. A human hand contains specialized nerve endings to detect warmth, coolness, and pressure. The sensors for pressure lack selectivity, however, in that they respond to coolness also. This leads to a phenomenon known as Weber's illusion, in which a cold mass feels heavier to a person than an equal but warm mass.
  • Stability and drift describe the pace at which any important characteristic of the measuring system changes with time, temperature, or other factors. This determines how often a person has to re-calibrate a system, or whether it is necessary to repeat earlier measurements on a schedule to correct for drift.
  • Hysteresis is behavior of a sensor where over a range of input it has two possible output values for a single input depending on its recent input. A sensor with hysteresis has memory. Having some hysteresis (a dead band) is very important to the stability of control systems, but it is an irritating complication for measurement.
  • Response time or bandwidth refers to a combined property of the sensor, measuring instruments and experimental unit. Some systems measure static signals well, but cannot respond to rapidly changing ones because they lack bandwidth. On the other hand, a system with too much bandwidth will integrate noise from spurious sources. For example, reducing bandwidth is one way to reduce the effect of Johnson noise. A general rule is that a measuring system settles into a stable output in a time approximately equal to the reciprocal of its bandwidth.
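
The sensitivity example above lends itself to a quick arithmetic check. Here is a minimal sketch in Python, assuming the figures quoted (0.1mV per micro-strain for a half-bridge gauge on a 10V supply); the function name is my own, not a standard one:

```python
# Convert a strain-gauge bridge voltage to strain, using the assumed
# sensitivity from the example above: 0.1 mV per microstrain.
SENSITIVITY_V_PER_STRAIN = 0.1e-3 / 1e-6   # volts per unit strain

def strain_from_voltage(v_out):
    """Return strain (dimensionless) for a measured bridge output in volts."""
    return v_out / SENSITIVITY_V_PER_STRAIN

strain = strain_from_voltage(0.025)   # the 0.025 V meter reading above
print(f"strain = {strain:.1e}")       # 2.5e-04, i.e. 250 microstrain
```
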
Noise contributed by instruments, sensors, and systems.

Johnson Noise
Thermal fluctuations produce noise known as Johnson Noise in both electronic and mechanical instruments. The Maxwell noise density describes the statistics of the process. Johnson noise is very low level. Its power is proportional to bandwidth, input impedance, and temperature. An electrometer with 10^12 ohms impedance, operating at 300K (room temperature), with a 1-10Hz bandwidth, will exhibit Johnson noise of about 1mV. An ordinary digital voltmeter, which has a bandwidth of perhaps 40kHz and an input impedance of 100 MOhms, will exhibit Johnson noise of 1-10mV. If an experiment requires measuring signal levels of 1-100 microvolts, then Johnson noise might be an issue. The finest mechanical instrument I know of, the LaCoste and Romberg gravimeter, which I've mentioned before, displays noise in use that is 10 times or more above the limit that Johnson noise imposes. However, someday Johnson noise will set the limit on improving the gravimeter.
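
The standard formula for the RMS Johnson noise voltage is sqrt(4kTRB), with k Boltzmann's constant, T absolute temperature, R resistance, and B bandwidth. As a sketch, plugging in the electrometer figures above (10^12 ohms, 300K, 10Hz) gives a few tenths of a millivolt, the same order as the figure quoted:

```python
from math import sqrt

K_B = 1.380649e-23   # Boltzmann constant, J/K

def johnson_noise_vrms(resistance_ohm, temp_k, bandwidth_hz):
    """RMS Johnson (thermal) noise voltage: sqrt(4 k T R B)."""
    return sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

# Electrometer from the text: 1e12 ohm input, 300 K, 10 Hz bandwidth
print(johnson_noise_vrms(1e12, 300.0, 10.0))   # ~4.1e-4 V
```
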

Shot Noise
Electrical charges and photons come in discrete bundles. For this reason an electrical current or light intensity consists of a stream of particles which will fluctuate in number per unit time. A noise density describing this fluctuation is something like the Poisson density. However, for any reasonable signal level the number of particles is 100 or more per measurement and the noise density becomes Gaussian, or normal, for all practical purposes.
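
Since the standard deviation of a Poisson count N is sqrt(N), the signal-to-noise ratio of a counting measurement also grows as sqrt(N); quadrupling the counting time only doubles the SNR. A minimal sketch:

```python
from math import sqrt

def shot_noise_snr(count):
    """Signal-to-noise ratio of a Poisson counting measurement:
    mean count N divided by its standard deviation sqrt(N)."""
    return count / sqrt(count)

for n in (100, 10_000, 1_000_000):
    print(n, shot_noise_snr(n))   # SNR = 10, 100, 1000
```
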

ADC Noise
Most measurement systems have become fully digital nowadays, so an additional source of noise is digitization error. An ADC cannot provide unlimited resolution but rather truncates, or rounds, at something typically like 8-bit to 20-bit resolution. The noise contributed by this is of uniform density within the least significant bit or digit of the instrument.
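
The standard result for uniform quantization error is an RMS noise of q/sqrt(12), where q is the step size: the full-scale range divided by 2^bits. A sketch, assuming a hypothetical 10V, 12-bit converter:

```python
from math import sqrt

def adc_quantization(full_scale_v, bits):
    """Return (step size, RMS quantization noise) for an ideal ADC.
    A uniform error density over one step has std dev q / sqrt(12)."""
    q = full_scale_v / 2**bits
    return q, q / sqrt(12)

step, noise = adc_quantization(10.0, 12)
print(f"step = {step * 1e3:.2f} mV, rms noise = {noise * 1e3:.3f} mV")
# step = 2.44 mV, rms noise = 0.705 mV
```
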

1/f Noise
Flicker noise, more widely known as 1/f noise, is really fascinating. Sometimes it is actually the signal! The source of this noise varies, but it occurs in electronic devices, economic time series, and most geophysical data, such as river discharges, temperature, and so forth. It is difficult to deal with because it has statistical measures, mean and variance, which are not constant with time. Therefore, the methods used to handle stationary noise, such as averaging, box-car integration, and synchronous detection, may not work to reduce it at all.

EMI and RFI
Interference comes in bursts. This makes it non-stationary and very much like 1/f noise. Being non-stationary it is very difficult to process out of data. Because of this the most effective way to handle it is to find the source and eliminate it. Shielding the power supply to nearby motors is one example of a way to eliminate EMI.

Background
Background noise is a type of interference, except that background usually refers to a constant source of interference of a nature exactly like our signal. Thus, in audio measurements sound coming from nearby sources provides background interference; in radiation measurements background comes from naturally occurring radiation; and so forth. Background interference sets a lower limit for resolution of small differences. Coincidence detection, synchronous detection, box-car integration, and signal averaging are all methods to reduce background interference.

Commonly used measurements

Time

It is easy to purchase electronic clocks that are precise to 50 parts per million (ppm) without any compensation for spurious influences, particularly temperature. This amounts to a precision of about 1500 seconds per year. If you require more precision, or require accurate absolute time, then there is only one reasonable solution: use a receiver tuned to WWV. The NIST broadcasts clock pulses from Ft. Collins, Colorado that sound like one-second ticks of a clock, but are much more. They are one-second ticks with an absolute accuracy of 1 part in 10^8. The ticks themselves are highly accurate tones. The tones repeat each hour on a schedule. Finally, the carrier is modulated with a BCD code for the time so that it can be read by equipment. Use a counting circuit to measure the number of clock ticks between two events in an experiment, or use successive clock ticks to keep an oscillator in your laboratory on time, and you'll have a time standard precise to 1 part in 10 million without too much trouble. Get Doppler shift information from NIST and your accuracy could become 1 part in 100 million.
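
The connection between fractional frequency error and accumulated time error is simple multiplication by the number of seconds in a year; the 50 ppm figure comes out near the 1500 seconds quoted above:

```python
SECONDS_PER_YEAR = 365.25 * 86_400   # about 3.16e7 s

def drift_per_year(fractional_error):
    """Worst-case accumulated clock error over one year."""
    return fractional_error * SECONDS_PER_YEAR

print(drift_per_year(50e-6))   # ~1578 s/year for a free-running 50 ppm clock
print(drift_per_year(1e-8))    # ~0.3 s/year disciplined to WWV at 1 part in 10^8
```
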

The NIST time standard is so useful that I suggest always trying to design experiments so that your results involve a measurement of time. Time and frequency are the two items that are easiest to measure with great precision and accuracy. Radio Shack once sold a product called the "Time Cube" which we used in graduate school as the clock for portable seismic systems. I have no idea if the product is still available.

Temperature

You can purchase thermometers off-the-shelf which use a thermocouple or a PN junction as a sensor. These have a precision of 0.1F (about 0.05K). However, the indicated temperature might be biased by as much as a couple of kelvins unless the sensor is calibrated on occasion. Thermocouples present an unusual challenge in this regard because thermal voltages (thermal EMFs) involve an implied comparison between a hot junction, the measuring junction, and a cold (reference) junction. Most thermocouple instruments use a thermistor to measure the temperature of the cold junction, or they use a cold junction compensator. There are many different kinds of thermocouples, known by their ANSI designations as types E, J, K, R, S, T, and so forth, each with a different sensitivity. Some are confusingly similar to others. Sophisticated instruments have to compensate for thermal voltages being non-linear functions of temperature. The stories I have of troubles with thermocouples would fill a small book.

Another type of highly accurate lab thermometer makes use of measurements of the electrical resistance of platinum or nickel wire. These are called resistance temperature detectors (RTDs). A well-constructed unit might have a precision of 0.0001K, but the calibration required to achieve accuracy like this is quite expensive.

Thermistors are very sensitive sensors, allowing typical resolution of a millikelvin, and even a microkelvin under the best of circumstances. However, calibration is important and very time consuming. All temperature sensors which use a measuring current of any sort suffer from internal heating, which always biases their temperature readings to be slightly too high.

Pyrometers allow a person to merely point at an object and read its temperature. These are handy but not particularly accurate devices. You need to point them accurately. The hot object has to fill the field of view and be of uniform temperature over the view (unless an average temperature is useful). Pyrometers require calibration. To be accurate they require an emissivity value for the hot object, although a few pyrometer models use a laser beam to measure emissivity separately. Also they are more accurate if you know the background radiosity because the background is often reflected into the pyrometer from the foreground.

Another measuring method involves temperature sensitive coatings, like paints and liquid crystals, which have a precision of about 1K. There are even coatings, usually supplied as crayons, which don't reverse color when their temperature goes back down. These are handy for measuring peak temperature in very inconvenient locations.

In summary, it is inexpensive to measure temperature to a precision of 0.05K. It costs a bit more in terms of calibration to be accurate to 0.05K, it costs quite a lot more yet to be precise to 0.005K, and much more to be accurate to 0.005K. Eventually you have to wonder about the temperature uniformity of whatever you are measuring.

Displacement

The three most useful distance measuring instruments for any amateur are: 1) hand-held optical comparators, which include reticles with standard scales, diameters, and angles inscribed on them, 2) vernier or dial calipers, and 3) an optical flat. If you require absolute distance standards, you can purchase a set of gage blocks. There are inexpensive sets of blocks from China, covering the distance range from 0.02m to 0.1m or so, which are accurate to 8 ppm, and which cost less than $100. If distance standards have to be mounted on an instrument directly, you can superimpose two sets of ruled lines on glass (Ronchi rulings) and interpolate to a precision of 10mm without much trouble. You can mount these two rulings at a slight angle and have them act as their own vernier. An interferometer acts as a wonderful absolute distance standard, but an amateur will have to build one. As a source of light for a home-made interferometer I suggest a gas laser. A diode laser has so little coherence length that an interferometer using one has a dynamic range of only about 25-50mm. An interferometer will supply precision of 100 parts per billion (ppb), though to realize it you must correct for temperature and pressure variation of the air.

An engineers' level and theodolite are also useful to have around, especially for large scale experiments or for leveling measuring systems.

One last displacement instrument worthy of mention is the optical lever. Many amateur designs use a front-surface mirror glued to a fine wire, from which they bounce a beam of light, as an angular displacement amplifier. This is a time-honored system. However, to achieve its maximum sensitivity it will often employ a very compliant wire. With increasing compliance such a system eventually shows random fluctuations from all sorts of external influences, including even Johnson noise.

Electrical quantities

The precision which people can obtain in electrical measurements depends on how much money they spend on their instruments. Modestly expensive ($200-$300) digital multimeters will measure voltage to a precision of about 0.025% of full scale. On the smallest scale this translates into a precision of 10 microvolts, and an electrometer costing 10 times as much won't really do any better, except that its precision is scale independent. However, the electrometer has enormous input impedance, so it won't load high-impedance sensors and circuits, and it will provide current measurements to a precision of 10^-16 amperes or charge measurements to a precision of 10^-14 coulombs. Accuracy depends on having these devices calibrated once a year or so.

A few principles of measurement. Differential versus absolute measurements

I have presented several examples of physical measurements through this series, such as the time-of-fall gravimeter, and the speed of light from capacitance measurements. These were all absolute measurements in which I was trying to obtain an absolute value, g in the one case and c in the other. Absolute measurements are difficult to make because they have to be both accurate and precise; therefore an absolute measurement requires an accurate reference of some sort. Propagation of error shows that absolute measurements often require that I measure several factors in the measurement equation with the same or better relative precision than the thing I am after. In my free-fall apparatus, for instance, I had to measure both length and time to slightly better relative precision than the result I aimed at.
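
For the free-fall case the measurement equation is g = 2d/t^2, and the usual propagation-of-error rule combines independent relative errors in quadrature, with the time error counted twice because of the square. A sketch, with hypothetical error figures rather than ones from my apparatus:

```python
from math import sqrt

def freefall_g_relative_error(rel_err_distance, rel_err_time):
    """Relative error in g = 2 d / t^2 from independent errors in d and t.
    The factor of 2 on the time term comes from the exponent of t."""
    return sqrt(rel_err_distance**2 + (2 * rel_err_time)**2)

# Hypothetical: distance good to 5 parts in 1e5, time to 4 parts in 1e5
print(freefall_g_relative_error(5e-5, 4e-5))   # ~9.4e-5, just under 1 part in 1e4
```
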

Differential measurements, in contrast, are a comparison of two measurements. If one of these happens to be an accurate reference, then the differential measure provides an absolute measure, but often just having a precise measurement of a difference is useful enough. One advantage of differential measurements is that they can be designed to be null measurements. Nulling presents many advantages. For example, a null measurement is less sensitive to extraneous influences like power supply variation and lack of sensor linearity. An example of a nulling system is a bridge circuit (Wheatstone bridge, Kelvin bridge, impedance bridge, etc.).
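
A Wheatstone bridge shows why nulling rejects supply variation: the output is the difference of two voltage-divider midpoints, and at balance that difference is zero no matter what the supply voltage does. A minimal sketch (the resistor values are arbitrary illustrations):

```python
def bridge_output(v_supply, r1, r2, r3, rx):
    """Output of a Wheatstone bridge: the difference between the midpoints
    of the r1/r2 and r3/rx dividers. The null condition is r1/r2 == r3/rx."""
    return v_supply * (r2 / (r1 + r2) - rx / (r3 + rx))

print(bridge_output(10.0, 1000, 1000, 500, 500))   # 0.0 at balance
print(bridge_output(5.0, 1000, 1000, 500, 500))    # still 0.0: supply drops out at null
print(bridge_output(10.0, 1000, 1000, 500, 505))   # small off-null signal
```
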

Internal versus external consistency

An internally consistent experiment is one which will stand as logically consistent. A well-designed experiment provides such results. However, the conditions of a well-designed experiment might be so stringent that the results, despite their logical consistency, are not widely applicable. People refer to breadth of applicability as external consistency. Studies which simply gather data, like those in economics, anthropology, geology, and so forth often lack internal consistency while being externally consistent. Studies like this are often called quasi-experiments as a result. Although many people might argue with me, I think it is possible, though not simple, to achieve both internal and external consistency in most experiments. I won't cover quasi-experiments in this installment, but delay the topic until Part XIV.

The Hawthorne effect

Psychologists have learned that their experiments turn out differently if the experimental units (groups of people generally) and measurement systems (other people) know of the experiment. The effect is called the Hawthorne effect, and it is the reason for using blind experiments. Single blind experiments are those where the experimental units do not know what treatment they receive; double blind experiments also hide treatments from the observers.

Quite a few physicists worried about this effect before it was widely recognized in the soft sciences. As Fretter explained in his book An Introduction to Experimental Physics, Dover, 1968, p. 330, Dunnington hid exact results even from himself during a 5-year long effort to measure the ratio of charge to mass (e/m) of the electron! What a wonderful skeptic he must have been.

Blanks and Spikes

When a scientist takes field samples for later analysis there is always a danger of contamination. A well designed experiment usually requires field blanks, which follow all sampling procedures in the field, but which are filled with a sample known to be clean. When these blanks are evaluated in a laboratory they should show an analysis result that is below detection limit (BDL). If they do not, then the field samples might be contaminated by some aspect of field work. Most laboratories will run lab blanks to ensure that no contamination has taken place within the laboratory. You can always add lab blanks yourself to the samples you send for analysis. Blank samples are a means of controlling for false positive results.

Spiked samples (lab spikes) are samples with a known amount of a contaminant added. They are a means of making sure that laboratory equipment is working properly, or, in other words, they are used to control for false negative results. It is possible to use field spikes in an experiment, which are spiked with a known contaminant in the field. A person might do this if they are concerned that some aspect of their field sampling is suppressing results. However, it is always risky to take a contaminant to the field during sampling, because it increases the danger of accidental contamination. Most labs will run lab spikes themselves for internal control.

The Hawthorne effect suggests that blanks and spikes should look exactly like real field samples so that lab workers will treat them exactly like they would any other sample.

A closing remark

Instruments are rapidly becoming consumer items, which means the price of increasingly better instrumentation constantly falls. Moreover, used instruments find their way onto a surplus market in many places. Check out the flea market, or advertisements on the SAS Forum, or auctions that accompany failures of businesses! Yes, even the downturn of an economy can be a boon for amateur science.
