Design Your Experiments Part XI: Instruments
By Kevin Kilty

Instruments and sensors

At some point experiment design leaves the realm of what to measure and enters that of how to measure. This brings me perilously close to subjects like lab technique and measurement, which are extremely specific to each scientific discipline. To discuss every possibility would expand this series beyond all reason, and, even worse, I can't explain most of it easily -- I don't possess the knowledge, and it is better learned through experience anyway. Lab technique is as much an art as a science. However, experiments often depend on instruments and sensors, many of which are electronic, and it makes sense for me to summarize a few principles about them. I also have a few remarks of a more general nature.

Characteristics of instruments

The following characteristics are important in deciding what sensors and instruments to use in an experiment. Manufacturers usually provide information on these characteristics to help in the design process.
Johnson noise
Shot noise
ADC noise
1/f noise
EMI and RFI
Background

Time

It is easy to purchase electronic clocks that are precise to 50 parts per million (ppm) without any compensation for spurious influences, particularly temperature. This amounts to a precision of 1500 seconds per year. If you require more precision, or require accurate absolute time, then there is only one reasonable solution: use a receiver tuned to WWV. NIST broadcasts clock pulses from Ft. Collins, Colorado that sound like one-second ticks of a clock, but are much more. They are one-second ticks with an absolute accuracy of 1 part in 10^8. The ticks themselves are highly accurate tones, and the tones repeat each hour on a schedule. Finally, the carrier is modulated with a BCD code for the time so that it can be read by equipment. Use a counting circuit to measure the number of clock ticks between two events in an experiment, or use successive clock ticks to keep an oscillator in your laboratory on time, and you'll have a time standard precise to 1 part in 10 million without too much trouble. Get Doppler shift information from NIST and your accuracy could become 1 part in 100 million. The NIST time standard is so useful that I suggest always trying to design experiments so that your results involve a measurement of time. Time and frequency are the two quantities that are easiest to measure with great precision and accuracy. Radio Shack once sold a product called the "Time Cube" which we used in graduate school as the clock for portable seismic systems. I have no idea if the product is still available.

Temperature

You can purchase thermometers off-the-shelf which use a thermocouple or a PN junction as a sensor. These have a precision of 0.1F (0.05K). However, the indicated temperature might be biased by as much as a couple of kelvin unless the sensor is calibrated on occasion.
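As a sketch of what such an occasional calibration might look like in software, a simple two-point linear correction against the ice point and the boiling point of water removes most of a fixed bias and gain error. The function name and the raw readings below are invented for illustration, and remember that the actual boiling point depends on barometric pressure:

```python
def two_point_calibration(raw_ice, raw_boil, t_ice=0.0, t_boil=100.0):
    """Build a correction function from readings taken at two known
    reference temperatures (ice bath and boiling water, in deg C)."""
    gain = (t_boil - t_ice) / (raw_boil - raw_ice)
    offset = t_ice - gain * raw_ice
    return lambda raw: gain * raw + offset

# Hypothetical uncalibrated readings: 1.8 C in an ice bath,
# 103.1 C in boiling water.
correct = two_point_calibration(1.8, 103.1)
print(correct(1.8))    # ~0.0
print(correct(103.1))  # ~100.0
```

A correction like this only removes linear errors; a sensor with a non-linear response (a thermistor, say) needs more calibration points.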
Thermocouples present an unusual challenge in this regard because thermal voltages (thermal EMFs) involve an implied comparison between a hot junction, the measuring junction, and a cold (reference) junction. Most thermocouple instruments use a thermistor to measure the temperature of the cold junction, or they use a cold junction compensator. There are many different kinds of thermocouples, known by their ANSI designations as type E, J, K, R, S, T, and so forth, each with a different sensitivity. Some are confusingly similar to others. Sophisticated instruments have to compensate for thermal voltages being non-linear functions of temperature. The stories I have of troubles with thermocouples would fill a small book.

Another type of highly accurate lab thermometer makes use of measurements of the electrical resistance of platinum or nickel wire. These are called resistance temperature detectors (RTDs). A well-constructed unit might have a precision of 0.0001K, but the calibration required to achieve an accuracy like this is quite expensive. Thermistors are very sensitive sensors, allowing typical resolution of a millikelvin, and even a microkelvin under the best of circumstances. However, calibration is important and very time consuming. All temperature sensors which use a measuring current of any sort suffer from internal heating, which always biases their temperature readings to be slightly too high.

Pyrometers allow a person to merely point at an object and read its temperature. These are handy but not particularly accurate devices. You need to point them accurately. The hot object has to fill the field of view and be of uniform temperature over the view (unless an average temperature is useful). Pyrometers require calibration. To be accurate they require an emissivity value for the hot object, although a few pyrometer models use a laser beam to measure emissivity separately.
Also, they are more accurate if you know the background radiosity, because the background is often reflected off the object into the pyrometer. Another measuring method involves temperature sensitive coatings, like paints and liquid crystals, which have a precision of about 1K. There are even coatings, usually supplied as crayons, which don't reverse color when their temperature goes back down. These are handy for measuring peak temperature in very inconvenient locations.

In summary, it is inexpensive to measure temperature to a precision of 0.05K. It costs a bit more in terms of calibration to be accurate to 0.05K, quite a lot more to be precise to 0.005K, and much more still to be accurate to 0.005K. Eventually you have to wonder about the temperature uniformity of whatever you are measuring.

Displacement

The three most useful distance measuring instruments for any amateur are: 1) hand-held optical comparators, which include reticles with standard scales, diameters, and angles inscribed on them; 2) vernier or dial calipers; and 3) an optical flat. If you require absolute distance standards, you can purchase a set of gage blocks. There are inexpensive sets of blocks from China, covering the distance range from 0.02m to 0.1m or so, which are accurate to 8 ppm, and which cost less than $100. If distance standards have to be mounted on an instrument directly, you can superimpose two sets of ruled lines on glass (Ronchi rulings) and interpolate to a precision of 10 µm without much trouble. You can mount these two rulings at a slight angle and have them act as their own vernier. An interferometer acts as a wonderful absolute distance standard, but an amateur will have to build one. As a source of light for a home-made interferometer I suggest a gas laser.
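The reason an interferometer makes such a good absolute standard is that it turns a length measurement into counting: in a Michelson arrangement, each fringe that passes corresponds to the moving mirror travelling half a wavelength of the light. A minimal sketch of the conversion, using the standard red helium-neon line (the fringe count in the example is invented):

```python
HE_NE_WAVELENGTH_M = 632.8e-9  # red HeNe laser line, approximate value

def displacement_from_fringes(fringe_count, wavelength_m=HE_NE_WAVELENGTH_M):
    """Mirror displacement in a Michelson interferometer: one full
    fringe corresponds to a round-trip path change of one wavelength,
    i.e. a mirror movement of half a wavelength."""
    return fringe_count * wavelength_m / 2.0

# A hypothetical count of 31,606 fringes corresponds to about 1 cm
# of mirror travel:
d = displacement_from_fringes(31606)
print(f"{d * 1000:.4f} mm")  # ~10 mm
```

The counting itself can be done electronically with a photodetector, which connects this back to the earlier point about time and frequency being the quantities easiest to measure precisely.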
A diode laser has so little coherence length that an interferometer using one has a dynamic range of only about 25-50mm. An interferometer will supply precision of 100 parts per billion (ppb) without having to correct for temperature and pressure variation. An engineer's level and theodolite are also useful to have around, especially for large scale experiments or for leveling measuring systems.

One last displacement instrument worthy of mention is the optical lever. Many amateur designs use a front-surface mirror glued to a fine wire, from which they bounce a beam of light, as an angular displacement amplifier. This is a time-honored system. However, to achieve its maximum sensitivity it will often employ a very compliant wire. With increasing compliance such a system eventually shows random fluctuations from all sorts of external influences, including even Johnson noise.

Electrical quantities
I have presented several examples of physical measurements through this series, such as the time-of-fall gravimeter and the speed of light from capacitance measurements. These were all absolute measurements, in which I was trying to obtain an absolute value: g in the one case and c in the other. Absolute measurements are difficult to make because they have to be both accurate and precise, and therefore they require an accurate reference of some sort. Propagation of error shows that absolute measurements often require that I measure several factors in the measurement equation with the same or better relative precision as the thing I am after. In my free-fall apparatus, for instance, I had to measure both length and time to slightly better relative precision than the result I aimed at.

Differential measurements, in contrast, are a comparison of two measurements. If one of these happens to be an accurate reference then the differential measure provides an absolute measure, but often just having a precise measurement of a difference is useful enough. One advantage of differential measurements is that they can be designed to be null measurements. Nulling presents many advantages; for example, a null measurement is less sensitive to extraneous influences like power supply variation and lack of sensor linearity. An example of a nulling system is a bridge circuit (Wheatstone bridge, Kelvin bridge, impedance bridge, etc.).

Internal versus external consistency

An internally consistent experiment is one which will stand as logically consistent. A well-designed experiment provides such results. However, the conditions of a well-designed experiment might be so stringent that the results, despite their logical consistency, are not widely applicable. People refer to breadth of applicability as external consistency. Studies which simply gather data, like those in economics, anthropology, geology, and so forth, often lack internal consistency while being externally consistent.
Studies like this are often called quasi-experiments as a result. Although many people might argue with me, I think it is possible, though not simple, to achieve both internal and external consistency in most experiments. I won't cover quasi-experiments in this installment, but will delay the topic until Part XIV.

The Hawthorne effect

Psychologists have learned that their experiments turn out differently if the experimental units (generally groups of people) and measurement systems (other people) know of the experiment. The effect is called the Hawthorne effect, and it is the reason for using blind experiments. Single blind experiments are those where the experimental units do not know what treatment they receive; double blind experiments also hide treatments from the observers. Quite a few physicists worried about this effect before it was widely recognized in the soft sciences. As Fretter explained in his book An Introduction to Experimental Physics (Dover, 1968, p. 330), Dunnington hid exact results even from himself during a 5-year-long effort to measure the ratio of charge to mass (e/m) of the electron! What a wonderful skeptic he must have been.

Blanks and Spikes

When a scientist takes field samples for later analysis there is always a danger of contamination. A well designed experiment usually requires field blanks, which follow all sampling procedures in the field, but which are filled with a sample known to be clean. When these blanks are evaluated in a laboratory they should show an analysis result that is below detection limit (BDL). If they do not, then the field samples might be contaminated by some aspect of the field work. Most laboratories will run lab blanks to ensure that no contamination has taken place within the laboratory. You can always add lab blanks yourself to the samples you send for analysis. Blank samples are a means of controlling for false positive results. Spiked samples (lab spikes) are samples with a known amount of a contaminant added.
They are a means of making sure that laboratory equipment is working properly; in other words, they are used to control for false negative results. It is possible to use field spikes in an experiment, which are spiked with a known contaminant in the field. A person might do this if they are concerned that some aspect of their field sampling is suppressing results. However, it is always risky to take a contaminant into the field during sampling, because it increases the danger of accidental contamination. Most labs will run lab spikes themselves for internal control. The Hawthorne effect suggests that blanks and spikes should look exactly like real field samples, so that lab workers will treat them exactly as they would any other sample.

A closing remark

Instruments are rapidly becoming consumer items, which means the price of increasingly better instrumentation constantly falls. Moreover, used instruments find their way onto a surplus market in many places. Check out the flea market, or advertisements on the SAS Forum, or auctions that accompany failures of businesses! Yes, even the downturn of an economy can be a boon for amateur science.