How to Win a Science Fair

By Joseph J. Carr
Adapted from The Art of Science.

SCIENCE FAIRS are a popular annual event for aspiring amateur scientists—many of whom are future professional scientists—in grades 7 through 12. When I was in junior and senior high school, I entered the science fairs every year. For the past several years I've been privileged to serve as a judge at local middle school and area-wide science fairs—and my interest seems more intense today than when I was a student participant. It is a real thrill to see youngsters chipping away at the edges of science, venting some of their inherent curiosity, and possibly preparing themselves for a career in science, medicine, or engineering.

The science fair sequence in most localities starts at the local school level with a schoolwide science fair where outside judges and possibly some teachers select first-place, second-place, third-place and honorable-mention awardees. The first-place awardees are sent forward to an area-wide science fair where they compete against students from other schools in the locality. The winners of the area events meet in statewide or regional science fairs, and the winners of those fairs go on to a national event.

The science fair must, unfortunately, have both winners and losers. Of course, we don't call anyone a "loser." We give all of them a certificate of attendance to reflect their hard work.

The purpose of the fair is educational, after all. But it's still nice to take home at least an honorable mention award—and of course a first place award is golden!

Some Do's and Don'ts

Over the past several years I've noticed a few patterns among the winners, and these observations may help you win your next science fair. Let me tell you about two different students. I'll call them John and Joan (not their real names), and I'll disguise their projects enough to preserve their privacy.

John is a bright 12th grader who (he told us proudly) had taken all four of the science courses offered in his school: biology, chemistry, physics, and advanced biology. He built a project that demonstrated a well-known scientific principle; it basically repeated, in more sophisticated form, one of the experiments performed in high school physics classes. John's display included a computer with color monitor (although the computer turned out not to be needed), and a very costly (about $4,000) cathode ray tube oscilloscope. John's display poster panels were well made, using the same kind of styrofoam-backed poster board that's used in professional advertising displays. The panels of the posters were produced on an Apple Macintosh computer and printed out on a LaserWriter printer. They really looked sharp.

Joan's entry was a bit different. She had investigated a new science called "fractals" in an experiment (not a computer simulation) that showed the irregular patterns of the fractals, and demonstrated how changing certain well-defined parameters of the experiment resulted in wildly different patterns. Joan's apparatus was homemade, and was held together with black electrical tape. It looked, I'm afraid, a little bit sloppy. Her posters were made of ordinary poster paper that had to be propped up with flat wooden sticks. The panels were typed on a moderately good, but old-fashioned, typewriter that obviously needed a little mechanical attention. . . and two of the eight sheets were handwritten. She had the laboratory notebook that showed her earlier work, and all of the false starts (several of which were abject failures).

At the morning judges' committee meeting, we found that we could send any number of students ahead to the afternoon judging (not true every year). That second judging (in the afternoon) would determine the winners who would carry the local banner forward to the state or regional science fair. (In past years, the rules limited us to one entry per scientific category, but this year we had no limit on how many could go forward from the morning session to the afternoon session.)

After all of the thinking and talking was over, we sent two students ahead to the afternoon judging. Joan was one of them, but John didn't even place. He left the science fair with only a certificate of attendance. Despite his computer-generated graphics and high-priced electronic equipment, he didn't even get an honorable mention award. Why?

The judges' comments were illuminating. It was noted that John's computer was being used as little more than a high-priced audio oscillator, and that the oscilloscope could easily be replaced with a simple AC voltmeter. Both instruments, it was felt, were intended to impress the judges, and were not necessary to the experiment. We weren't impressed. The glitz alone would not have hurt John's case, however. What did him in was deeper than surface glitter.

Furthermore, John didn't know how the computer generated the audio tones, and he was all but totally in the dark on how to operate the oscilloscope. He showed real distress when one judge (me) tried to adjust the horizontal time base controls on the 'scope. He seemed painfully aware that he lacked the technical expertise to reset the controls if I didn't return them to the right settings. Someone apparently had set them for him. It was also noted that John had an attitude problem. His cocky arrogance made him seem totally disdainful of the science fair process in general and the judges in particular. One judge (not me) reported that John made a remark about his project being a "shoo-in" for first or at least second place. Not so. . .he lost utterly.

Originality Counts...

But all that aside, John could still have won at least a third-place or honorable-mention award even with his attitude problem—if he hadn't made some really terrible mistakes. First, his project was a mere copy of an ordinary classroom demonstration. That is a bad approach for any student, although it is not always fatal. If you ape some other experiment, then at least extend it a little bit, do it a little better than usual, or somehow make your contribution more than merely following a teacher's lab notes. Only in the lowest grades, where students have only limited exposure to science, is copying an experiment reasonable. If John had been an 8th grader, rather than a high school senior, then we might have been impressed.

Understand Your Experiment

Second, John was unable to explain just what the experiment was doing! He had at least some faint idea, but not at a level that one would expect from a high school senior. Furthermore, he failed to take into account, or at least mention, some easily observed critical elements of the experiment that would explain why his results departed from the theoretically predicted results. He would have had a shot at a second place award if he had known about sound-wave scattering and the effects of nonsinusoidal waveforms on the experiment.

The discussion of Joan's work was a little different. Although the judges privately criticized her apparently sloppy display presentation, they also waved it aside in favor of the positive aspects of the experiment. It was noted, for example, that she had read the two basic books on chaos theory and fractal geometry that are easily available in local libraries. While it would have been better if she had also read some of the more advanced texts, no one thought ill of her for the failure. Those books are hard to find in local libraries, expensive to purchase ($75 for one!), and were probably mathematically beyond all but the very brightest—and rarest—of high school students. It was also noted that her written report mentioned that she had sought the advice of a local mathematician who taught a fractals course in a university.

Joan had created a clever experiment that investigated a complex phenomenon. While the project would not win a Nobel Prize, it was within the grasp of a high schooler, and yet was nonetheless clever and innovative. She had asked a question of nature, and then created an experiment that would answer that question "yes" or "no."

Do Your Own Work

Another plus in Joan's favor, noted by several judges, was that she had failed in a number of attempts at making the experiment. However, all of her failures were documented in her notebook, proving that the final result—clever as it was—was her own work. It also showed tenacity, and the fact that she was interested in the experiment, not simply in winning the prize. You see, there are no failures in science. There are failed experiments, to be sure, but each of these tells us something about nature. . . even if only, "That wasn't the right approach!"

Joan was also able to answer more than the most basic questions about fractals. She could either explain some anomalies seen in the experiment, or describe a method for doing an experiment to examine the issue. Some of the judges had only heard a little about this new field, so they asked her some probing questions. Her prompt answers made sense to the judges. One judge, who knew more about fractals than Joan, asked some even deeper questions. To one difficult question she gave a perfectly valid answer: "I don't know". . . but, she averred, "I'm going to look into that next week. . . it sounds interesting." Don't try to snow the judges with a guessed answer. . . it won't work. Any qualified judge knows that the very root of scientific investigation is those three little pregnant words, I don't know: the Confession of Ignorance. It is the energy and curiosity to resolve that ignorance that science fair winners and Nobel Laureates are made of. The integrity to confess ignorance, plus a bit of hard work, leads one to profound truths.

Ask a Question of Nature

Good science research calls for the experimenter to formulate a hypothesis (which means "ask a question of nature"), and then create an experiment that will yield one of two answers: yes or no (or true/false). The question should be narrowly and clearly formulated. "Global" questions tend to be difficult to pin down in an experiment, so they are likely to fail in the end.

It is not necessary to make the question asked of nature profoundly deep, for no one expects high schoolers to break totally new ground (it could happen, however!). For example, you could ask nature whether or not intense ultraviolet light affects the growth of poison ivy plants. Phrase the question something like this: "Intense ultraviolet light [slows/speeds up] the growth of poison ivy plants: True or false". Or, alternatively, use the null hypothesis: "Intense UV light does not [slow/speed up]...."

Expect to address some issues in designing the experiment. For example, what wavelength of ultraviolet radiation is used? Is it filtered daylight, UV-A, UV-B, or the spectrum generated by a certain brand of UV lamp? Second, ask yourself how the intensity of the UV light is to be kept constant, and/or measured. With a modified photographic light meter? A special UV meter? With a photographic light meter equipped with a filter that passes UV but screens out other light? How much time each day are the plants exposed to UV light? In how many exposure sessions, and for how long each?

Be sure to include at least one control group. A control group is one that is subjected to all conditions that the test group is subjected to, except the condition being examined (for example, in our hypothetical situation, UV light). Make all of the same observations on both control and test groups.
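To make the control-group idea concrete, here is a minimal Python sketch (the growth figures are invented for illustration) of recording the same observation on both groups and reducing each to the same summary number:

```python
# Hypothetical weekly growth measurements (in millimeters) for the
# poison-ivy example; the numbers are invented for illustration.
control_growth = [4.2, 3.9, 4.5, 4.1, 4.3]  # plants with no UV exposure
test_growth = [3.1, 2.8, 3.4, 3.0, 3.2]     # plants exposed to UV light

def mean_growth(measurements):
    """Average growth per plant, applied identically to both groups."""
    return sum(measurements) / len(measurements)

# Because the only difference between the groups is the UV exposure,
# any difference in the means can be attributed to that one factor.
difference = mean_growth(control_growth) - mean_growth(test_growth)
```

The point is that the measurement and the summary calculation are identical for both groups; only the condition under test differs.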

Sometimes, two control groups are used. When I worked in a medical center, I noticed that our physician researchers often used a test group and two control groups to test new treatments. One group received the new drug, one group received the normal "drug of choice," while the third group received a placebo. The results of all three groups were compared.

There is also a "double blind" protocol in science that is very desirable. In order to prevent the person who measures the progress from allowing his or her own preconceived opinions to affect the outcome, no one who either administers the drugs or evaluates the patient responses is allowed to know just who was in which group. Double blind protocols cannot always be followed in a science fair level experiment, but it is desirable if possible.

The idea in a scientific experiment is to keep all factors constant, except the one under test. For example, in the poison ivy experiment, the UV light is the variable. The experimenter has to be certain that the temperatures of the two plant beds are the same, that the watering and feeding are the same, and that all factors, other than the amount and wavelength of light impinging on the test plants, are the same for both populations.

Keep a Notebook

Keep a bound laboratory notebook to document your work. If you want to be really classy, then go to a professional engineering supply store, university book store, or drafting supplies outlet and buy a real scientist's lab notebook. If that is not to your liking, or too expensive, then use an ordinary bound composition notebook from the school supplies section of the local drug store. The idea of using a "bound" book is to be able to demonstrate that no embarrassing results were removed from the notebook. . . a good scientist documents the failures as well as the successes.

Write out the question that you are asking of nature on the first page of the notebook. Next, write a description of the experiment (include any drawings or charts needed). Describe a protocol of how the experiment is to proceed. It is important to remain consistent throughout the experiment regarding data collection, maintenance of the experiment (for example, feeding or watering the poison ivy), and in some cases the timing of observations (how do you know how many millimeters of growth are seen per week if you don't measure the growth at the same time every week?).

As questions occur to you, enter them in the notebook and try to make some effort to find the answers . . . but don't interrupt the flow of the experiment to answer new questions. It is axiomatic in science (where real labs depend on grant money to survive) that the experiment should raise as many questions for further research as it answers. . . otherwise you are out of business next year.

When preparing your report and display, keep in mind that judges are not overly impressed by well-done graphics . . . but they help. My colleagues criticized Joan's display for its sloppiness. One comment was that "perhaps she was as sloppy in some of her observations." It is unlikely that the really good project will be denied a prize because of a little sloppiness on the graphics, but it is always good to put your best foot forward. In those years when the judges are limited in the number of entries they can recommend for the second judging, those little extra touches might make the difference between two students with roughly equal projects. The main idea here is to not give the judges any reason to shave a point or two off your score. While the scoring tends to be ad hoc, it's nonetheless real. . . and a sloppy poster or illegible report costs points.

Perhaps most important is that the display, the report, and your talk with the judges should be conservative in the claims that are made. Don't make any claim for your experiment that cannot be supported with data from your laboratory notebook, other pertinent records, or from "the literature" on the subject. Results are valid only when documented!

Statistics Are Important

Make sure that the number of data points taken in the experiment is sufficient to make the result statistically significant. For example, don't expect to find the effects of UV light on poison ivy plants when the control group and test group consist of one plant each. Make the groups as large as possible. One of the main mistakes seen in experiments is generalizing to the whole population what only applies to a few individuals. The way to guard against this problem is to examine many individuals to ensure that the sample is typical of the "universe" of individuals.
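One way to see why group size matters is the standard error of the mean, which shrinks as the sample grows. A small Python sketch, using invented numbers:

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

small_group = [3.0, 3.4, 2.8, 3.6]   # four plants
large_group = small_group * 5        # same spread of values, twenty plants

# With more data points the mean is pinned down more tightly, so a real
# difference between groups is easier to distinguish from chance.
```

Here `standard_error(large_group)` comes out smaller than `standard_error(small_group)` even though the individual measurements are no more precise.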

Also, please, please, please, if your experiment results in numerical data, or requires calculations, then be sure to understand the concept of significant figures. A calculator with a 12-digit display does not mean that either accuracy or precision results from actually displaying all of those digits. One of John's mistakes was that he listed a value as "13.4344359635" when "13.4" was the limit of his experimental apparatus.
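If you need to trim a calculator result to the precision your apparatus actually supports, a small helper like this (a sketch, not a standard-library function) does the job:

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# John's value, trimmed to the three significant figures his
# apparatus could actually justify:
trimmed = round_sig(13.4344359635, 3)  # 13.4
```

The trick is that `floor(log10(abs(x)))` locates the leading digit, so the rounding position follows the magnitude of the number rather than a fixed decimal place.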

If your experiment results in statistical data, then learn a little bit about statistics. Don't depend solely on mean or median calculations unless you have good reason. Look into the relevant chapters of this book to learn a bit about standard deviation, variance, and other factors that professional scientists use in analyzing data. Almost any local library will have elementary statistics textbooks, and most high school science teachers know more than enough to advise you in these matters.
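Python's standard library already covers the basic descriptive statistics mentioned above, so there is no need to compute them by hand. A minimal sketch, with invented measurements:

```python
import statistics

growth = [3.1, 2.8, 3.4, 3.0, 3.2, 2.9, 3.3]  # invented growth data (mm)

center = statistics.mean(growth)     # arithmetic mean
middle = statistics.median(growth)   # middle value when sorted
spread = statistics.stdev(growth)    # sample standard deviation
var = statistics.variance(growth)    # sample variance (stdev squared)
```

Reporting a spread measure such as the standard deviation alongside the mean tells the judges how much your measurements varied, not just where they centered.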

The Judges Are Your Friends!

Finally, on the BIG DAY when you actually set up the display and confront—groan, shiver—the JUDGES, try to remain calm. . . and get some sleep the night before. The judges are not ogres, and they are not going to bite you (even a little bit). Every judge that I've met over the past several years has been interested in seeing youngsters succeed in science. . .for youngsters are the future of the profession. They may make suggestions for your future research or study. Don't be afraid to ask them questions! Science is as much an exchange of ideas between people as it is experiments in a solitary laboratory. . . and the process can be invigorating.

Conclusion

The successful science fair experiment will be well focused, properly carried out, documented in painstaking detail, and properly described in the attached report. You will find that the judges are favorably impressed by good scientific procedure, and are not impressed at all by a project that is well presented—but either improperly executed or too trivial. A simple, but properly executed, grade or age-appropriate project can be a winner if you do it right. Good luck.

Copyright 1992, Lewis Lewis and Helms, LLC. All rights reserved.