Design Your Experiments Part IV: Propagation of Noise (Error)
by Kevin Kilty

Sources of noise working together

When many sources of noise or random error act simultaneously, the combined noise level follows from the definition of variance. Expected value is linear, and variance is nearly so. Let E() denote expected value and V() denote variance, let a and b be random variables, and let c be a constant. They obey the following properties:

E(a + b) = E(a) + E(b)
E(ca) = cE(a)
V(a + b) = V(a) + V(b), provided a and b are uncorrelated
V(ca) = c^2 V(a)

Using these properties, a person designing an experiment can make a table summarizing all of the sources of noise, even if some are only estimates, then calculate a combined noise and analyze how it affects the design. For example...

[Table: Example Noise Budget for Thermal Experiment (Celsius degrees)]
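The variance rules above are easy to check numerically. Here is a small Monte Carlo sketch, with invented noise levels, showing that for independent sources V(a + b) = V(a) + V(b) and V(ca) = c^2 V(a):

```python
import random

random.seed(1)
N = 200_000

# Two hypothetical independent noise sources (standard deviations are invented)
a = [random.gauss(0.0, 0.03) for _ in range(N)]  # e.g. sensor noise, sigma = 0.03
b = [random.gauss(0.0, 0.04) for _ in range(N)]  # e.g. amplifier noise, sigma = 0.04

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# V(a + b) = V(a) + V(b): both near 0.03^2 + 0.04^2 = 0.0025
combined = var([x + y for x, y in zip(a, b)])
print(combined, var(a) + var(b))

# V(ca) = c^2 V(a): near 3^2 * 0.03^2 = 0.0081
scaled = var([3.0 * x for x in a])
print(scaled)
```

Note that the combined standard deviation is the square root of the summed variances (here 0.05, not 0.03 + 0.04 = 0.07); this is why a noise budget adds variances rather than standard deviations.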
I apologize in advance for presenting an ugly formula in this installment, but nature forces my hand. You see, it is not common to measure the thing that interests us directly. More commonly we measure some other quantity related to our objective and convert one to the other. For example, we do not measure the wavelength of a spectral line directly; rather, we measure the displacement of a mechanical part in a spectrometer and then convert this displacement to wavelength. On the other hand, even when we can measure directly, we often have correction factors to apply to a raw value to make it useful. This means that uncertainties (noise or error) in the value of one thing propagate into a value we calculate. This brings me to propagation of error.

If we make measurements, convert a measurement to a value, and apply corrections, then we must have a formula that relates one thing to another. Suppose the equation

Y = f(x, z, T, ...)

provides the value we seek (Y) if we can measure the independent parameters x, z, T, and so forth. This equation is called the measurement equation. Its parameters include output from an experimental apparatus, or perhaps just counts done by hand, as well as other values needed to correct the result. Each parameter has some uncertainty associated with it, and this uncertainty propagates into the result. How can we calculate it? The way to do so is to approximate the measurement equation with a Taylor series, substitute it into the definition of variance, and obtain the following equation:

u^2 = (f_x^2 u_x^2 + f_z^2 u_z^2 + ...) + (f_x f_z u_xz + ...)

where f_x is the partial derivative of f with respect to x, u_x^2 is the variance of the mean of parameter x, and u_xz is the covariance of x with z. This is the formula I apologized about, but it is not as bad as it seems. For one thing, uncertainties in different parameters are seldom correlated, which means there is no covariance between parameters, and that removes the second sum entirely.
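With the covariance sum dropped, the propagation formula is straightforward to evaluate numerically, even when the partial derivatives are tedious by hand. The helper below is my own sketch, not from the article; it approximates each f_x with a central finite difference:

```python
import math

def propagate(f, params, sigmas, h=1e-6):
    """Combined standard uncertainty of Y = f(params), assuming the
    parameter uncertainties are uncorrelated (no covariance terms)."""
    total = 0.0
    for i in range(len(params)):
        step = h * max(abs(params[i]), 1.0)
        lo = list(params); hi = list(params)
        lo[i] -= step
        hi[i] += step
        f_i = (f(hi) - f(lo)) / (2.0 * step)  # partial derivative f_x
        total += (f_i * sigmas[i]) ** 2       # f_x^2 * u_x^2
    return math.sqrt(total)

# Example: Y = x*z with x = 2 +/- 0.1 and z = 3 +/- 0.2.
# Analytically u^2 = z^2 u_x^2 + x^2 u_z^2 = 0.09 + 0.16, so u = 0.5.
u = propagate(lambda p: p[0] * p[1], [2.0, 3.0], [0.1, 0.2])
print(u)
```

Passing the parameters as a list keeps the helper generic: the same function handles a measurement equation with any number of parameters.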
The remainder is very useful for experiment design. Once more, an example is in order. Suppose I wish to measure the absolute acceleration of gravity (g). I decide to use a time-of-fall apparatus: I let a body fall a vertical distance (L) and measure the time it falls (t). I calculate g as 2L/t^2. There are a couple of correction factors, though. I plan to perform this in an evacuated tube, so I need not worry about air resistance, but temperature causes the tube to expand. With this correction, the measurement equation becomes

g = 2L(1 + e(T - T0)) / t^2

where e is the linear expansion coefficient of the chamber and T0 is the temperature at which I measured L precisely. I measure temperature and time of fall, plug these values into the formula, and calculate g. How uncertain is this value, and where does most of the uncertainty arise?

Uncertainty and Experimental Design

There are four sources of uncertainty in my measurement equation: time, temperature, expansion coefficient, and length. Each contributes to the overall uncertainty in my resulting g. I'll do the partial differentiation for you here. Any variable with a subscript is a partial derivative with respect to that subscript, except for the variable u, which is always an uncertainty. Since I am in planning mode, all estimates of u must be obtained by other means, as I have no measurements yet. The contribution from length, for example, is

g_L^2 u_L^2 = (2(1 + e(T - T0))/t^2)^2 u_L^2 = (g/L)^2 u_L^2.

Taking the other derivatives and evaluating them similarly, I get

(u_g/g)^2 = (u_L/L)^2 + 4(u_t/t)^2 + ((T - T0) u_e)^2 + (e u_T)^2.

This equation tells me quite a lot about the relative uncertainty of gravity (u_g/g) in my proposed experiment, and I would expect to perform such a calculation in advance to help guide my design. For one thing, it tells me that relative uncertainty in the time interval (t) contributes four times as much variance as relative uncertainty in length.
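Evaluating the relative-uncertainty equation term by term gives a design budget in a few lines of arithmetic. The numbers below are illustrative guesses for a 1 m drop, not values from the article:

```python
import math

# Hypothetical one-sigma design estimates
L, uL = 1.000, 1.0e-6         # drop length [m]
t, ut = 0.4515, 5.0e-8        # fall time [s] for a 1 m drop
T, T0, uT = 20.5, 20.0, 0.01  # temperatures [C]
e, ue = 1.2e-5, 1.0e-6        # linear expansion coefficient [1/C]

# (u_g/g)^2 = (u_L/L)^2 + 4(u_t/t)^2 + ((T - T0) u_e)^2 + (e u_T)^2
terms = {
    "length":      (uL / L) ** 2,
    "time":        4.0 * (ut / t) ** 2,
    "expansion":   ((T - T0) * ue) ** 2,
    "temperature": (e * uT) ** 2,
}
for name, v in terms.items():
    print(f"{name:12s} contributes {math.sqrt(v):.2e} (relative)")

rel_ug = math.sqrt(sum(terms.values()))
print(f"combined u_g/g = {rel_ug:.2e}")
```

A printout like this makes the dominant term obvious at a glance; with these made-up numbers, length measurement limits the design, so improving it pays off before anything else does.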
If I expect to measure g to a relative uncertainty of 10^-6, I must work on reducing the relative uncertainty in t to much less than half this. The equation also indicates that I can best mitigate the effect of uncertainty in the expansion coefficient by making my measurements at T = T0; that is, by making no correction at all. The last term is very instructive. It indicates that whether I make a temperature correction or not, there is a contribution to uncertainty equal to the product of the expansion coefficient and the temperature uncertainty. It is darned difficult to measure the temperature of an apparatus to an uncertainty of 10^-2 K without careful design. Since e for most metals is above 10^-5, I don't have to demand relative uncertainty much below 10^-6 before I find myself having to improve temperature measurement, improve temperature control, use low-expansion materials, or all three.

By the way, the experiment I just described measures the absolute value of g, which we currently know to a precision of about 1 part in 10^8. This is substantially better than what I planned for. I can measure relative changes in g from one place to another to the astounding precision of about 1 part in 10^11, and I can do it with a purely mechanical device called a gravimeter. The story of the exploration gravimeter, like the LaCoste and Romberg Model G, is a heroic tale of design and planning for noise.

In closing this part of the series on experimental design, I suggest that everyone bookmark the following NIST (National Institute of Standards and Technology) web sites. They are a stupendous resource of practical advice and examples about using statistics and probability in the science of measurement.