Society for Amateur Scientists

Design Your Experiments Part XV: Wrapping Things Up

by Kevin Kilty

I know that I promised that the previous installment, number XIV, was the last. I fibbed. It has occurred to me, thanks to the many of you who send me e-mail, that several things remain unresolved in this series on the design of experiments, and I should add one more installment as an epilogue of sorts.

My query regarding circularity in the speed-of-light experiment back in Part XI was answered so quickly, and so well, by Peter Baum that I can't do better than to quote him...

"The short answer: With the 1983 definition of the meter as "...the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second" (see http://physics.nist.gov/cuu/Units/meter.html), we can no longer experimentally determine the speed of light but rather, all such experiments effectively determine length."

Succinctly put, Peter. Until 1983 the standard meter was an artifact in a vault in Paris. Now it follows directly from the speed of light, which is defined as exactly 299,792,458 meters per second and therefore has no uncertainty at all. Prior to this definition, scientists had measured the speed of light to within 1 meter per second, or in other words to 1 part in 300 million, using the then-current standard meter artifact as a reference. The new definition therefore made no perceptible change in the value of any constant, and it replaced an artifact with a pure physical measurement.
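Peter's point can be made concrete with a few lines of arithmetic. In this sketch (the constant is the defined SI value; the function name is my own, for illustration), the exactly defined speed of light turns any vacuum time-of-flight measurement directly into a length:

```python
# Since 1983 the speed of light in vacuum is exact by definition,
# so measuring a light travel time is measuring a length.
C = 299_792_458  # m/s, exact by definition

def length_from_time(t_seconds):
    """Length of the path light travels in vacuum during t_seconds."""
    return C * t_seconds

# The meter itself: the distance light covers in 1/299,792,458 s.
one_meter = length_from_time(1 / 299_792_458)

# A timing resolution of 1 ns corresponds to roughly 0.3 m of path.
resolution = length_from_time(1e-9)
print(one_meter, resolution)
```

This is why any "measurement of c" after 1983 is really a measurement of length: the time standard and the defined constant do all the work.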

The new definitions of the meter and the speed of light tie any local measurement of the meter more securely to the number of wavelengths of a standard radiation. Before the change, defining length in terms of wavelengths of some radiation constituted a secondary standard, which required that someone compare the artifact standard meter against wavelengths using an interferometer. Now anyone can make a precise measurement of length anywhere, using a stable source of radiation, an interferometer, and a time standard. As I mentioned in several places, a time standard is easy to come by and is probably more accurate than any other reference we have access to.
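The recipe above can be sketched in a few lines. This is a minimal illustration, not a metrology procedure: it assumes a Michelson-type interferometer, in which each fringe corresponds to a half-wavelength of mirror travel, and the laser frequency below is merely an illustrative value near that of an iodine-stabilized HeNe laser (the function name is mine):

```python
# Length from a fringe count, given a radiation source whose frequency
# is known against a time standard. Illustrative values only.
C = 299_792_458            # m/s, exact by definition
f_laser = 473.612e12       # Hz, illustrative stabilized-laser frequency
wavelength = C / f_laser   # about 633 nm

def length_from_fringes(n_fringes):
    """Mirror displacement in a Michelson interferometer:
    each fringe corresponds to half a wavelength of travel."""
    return n_fringes * wavelength / 2

# Counting 100,000 fringes corresponds to about 32 mm of displacement.
print(length_from_fringes(100_000))
```

The only inputs are a frequency (traceable to a time standard) and an integer count, which is exactly why the post-1983 definition lets anyone realize the meter locally.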

As Peter also mentions, "...there is a kind of circularity but not one of false circular reasoning." Indeed, if I actually ran the experiment I described and found a discrepancy between the speed of light in the experiment and the accepted definition, the first thing I would conclude is that whatever standard of length I used to assign a value to the geometric factor of the capacitor was in fact wrong.

A consolidated list of references

Several people have asked for references. There are no references for my various opinions, but I'll list several sources that I used in writing the series and that I feel explain some of the issues well. These sources are often old, but that reflects the particular path I've taken in life, not an indictment of current texts. I endorse Dover books often because they are inexpensive and are often reprints of classic texts. A relatively complete list of sources of information, and better still, wisdom, is as follows...

  • Reading in current journals is always educational. I suggest journals dealing with epidemiology (Epidemiology and The American Journal of Epidemiology are examples), statistical methods (Biometrika, Statistical Science, and publications of the American Statistical Association and the American Mathematical Society), reliability (e.g., Microelectronics Reliability), and even general science (Nature, Science, and so forth).
  • E. Bright Wilson. An Introduction to Scientific Research. Dover. 1990. The original copyright is 1952, which gives you an idea of its vintage, but Dover made an affordable edition in 1990. Parts of the text look quaint now; the universe of information processing has exploded since 1952. But there is solid material on regression, analysis of variance, design of experiments, and so forth. Wilson uses the Yates order rotated by 90°, which makes it seem awkward to me, and he also shows a sort of matrix design mnemonic that is useful in other ways, such as in nested designs. I highly recommend the book.
  • Thomas B. Barker. Quality by Experimental Design. Marcel Dekker, Inc. 1985. Although it is a tad goofy in places, this is a very useful book and I highly recommend it. Barker provides examples that explain the motivation behind experimental design, along with a wealth of advice on analysis. His algorithms are in BASIC, of all things; however, few books list algorithms useful in the design of experiments at all, which makes them unique.
  • John Mandel. The Statistical Analysis of Experimental Data. Dover. 1984. The original edition was published in 1964. This book has a very enlightening discussion of the effect that errors in the independent variable have on regression analysis, and a very good discussion of order statistics, probability plots, and so forth.
  • The National Institute of Standards and Technology (I still prefer Bureau of Standards) is a fantastic source of information. The NIST has several different servers providing very different sorts of information; you can reach them all from the home web site, http://www.nist.gov .
  • Bob Bond had a really nice article on the Platinum Thermometer in the SAS Bulletin, and you may refer to his work for useful information, including links to Scientific American articles regarding calibration.
  • G.G. Vining. Introduction to Engineering Statistics. Duxbury Press. This is a modern text, which means it is expensive. The most unfortunate thing about Vining's book, beyond its price, is that it concentrates on engineering issues, which are somewhat different from scientific issues. He doesn't cover questions of why one does something, but rather how one ought to do it.
  • S. Twomey. Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements. Dover. 1996. Although I promised to discuss the debate between inversion and inference, I never got to it. This book covers the methods and issues in inversion. Its pompous title is fortunately not an indication of how cogently Twomey presents his material; it is possibly the clearest discussion of inversion available.
  • Wm. Press and others. Numerical Recipes in C. Cambridge University Press. 1988. A wonderful source of C algorithms, or at least the important fragments of them.
  • Jacob Fraden. Handbook of Modern Sensors. AIP Press. 1997. No book can offer a comprehensive discussion of sensors, because the subject area is so enormous, but this one contains a wealth of information on selected sensors and technology.
  • Snedecor and Cochran. Statistical Methods. Iowa State University Press. 1967. The first edition appeared in 1937, and for at least 30 years it was the bible on experiments and analysis for anyone doing applied statistics, especially in agronomy. It has deep insight into issues of sampling, design, hypothesis testing, and analysis of variance. I think it is downright wrong in a very few instances on models and model building, but it is well worth reading. There are always used copies of one edition or another at Powell's Books in Portland. Don't bother with editions before the 5th (1956), however.
  • Philip Bevington. Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill. 1969. A good text for discussions of analysis and a good source of algorithms. Unfortunately, Bevington's algorithms are in FORTRAN.
  • Beck and Arnold. Parameter Estimation in Engineering and Science. Wiley. Provides a good explanation of model building, regression, and least-squares and maximum-likelihood methods of estimation. This is probably a hard text to find as a used book.
  • Draper and Smith. Applied Regression Analysis. 1981. Wiley. Contains not only material about regression and model building, but lots of advice on the design of experiments.
