Sorting the Sound Science From the Junk
by Kristine Bradof
This article originally appeared in the July 1996 issue of the Wellspring newsletter, published by the MTU Regional Groundwater Education in Michigan (GEM) Center, now the Center for Science and Environmental Outreach at Michigan Technological University.

A photograph of three children looking at a globe carries the message, "If you worry that too much junk food can damage their health, consider what junk science could do to their planet." So begins an advertisement that appeared early in 1996 in The New York Times and The Washington Post. The nonprofit Union of Concerned Scientists (UCS) placed the ad in response to the recent spread of "junk science" in popular outlets such as radio talk shows. Environmental science has been a major target of scientific-sounding misinformation that obscures the research findings of respected scientists. The intent of such misinformation is often to protect political or financial interests.
Peddlers of junk science can make very persuasive arguments because most Americans lack training in the methods of science. The danger is that unsubstantiated opinions, stated often enough, will be mistaken for facts. A study can be designed to produce almost any desired result, but such studies are by definition not science. Yet, it is very difficult for nonscientists to judge the quality of "scientific" information presented.
The nature of scientific inquiry itself can be frustrating because it is human nature to want immediate and permanent answers to questions. Especially in the environmental sciences, answers may be long in coming. It is far easier to conduct controlled experiments in a laboratory than in nature. Effects often take many years to become measurable. Even then, the causes can be very hard to pinpoint. Uncertainty is a natural part of science that doesn't translate well to public policy decisions. Still, the better we understand how science works, the better informed our decisions will be.
An introduction to the scientific method
Scientific research doesn't follow any one "recipe." Each problem or question requires a different approach. However, certain methods are basic to science. A study typically begins with observations that lead to questions. For each question, a scientist develops one or more possible explanations, called hypotheses, which can be thought of as educated guesses. (They are not the same as theories, however, which are established only after overwhelming evidence accumulates to support them.) Hypotheses must be tested by controlled experiments or predictive models. A control is a standard against which the effect of changing one or more variables, or influences, can be measured.
For example, suppose we want to test a hypothesis that bean plants exposed to "grow lights" for 8 hours each day (the control) will grow faster than beans receiving 4 hours but slower than those receiving 12 hours of light (the variable to be tested). Three sets of bean plants are needed, each exposed to light for 4, 8, or 12 hours. All other variables, such as water, temperature, soil, and nutrients, must be the same for each set of plants. A properly designed experiment ideally includes replicates. In this case, we might decide to have three plots (replicates) of beans for each of the light exposure levels. The location of each plot in the greenhouse is determined randomly to avoid any bias.
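The randomized, replicated layout described above can be sketched in a few lines of Python. The numbers here (three light levels, three replicates, a fixed random seed) are purely illustrative, not part of the original example:

```python
import random

# Illustrative sketch of the bean-plant design: three daily light
# exposures (hours), with three replicate plots per exposure.
light_levels = [4, 8, 12]   # 8 hours is the control
replicates = 3

# One treatment label per plot: [4, 4, 4, 8, 8, 8, 12, 12, 12]
treatments = [hours for hours in light_levels for _ in range(replicates)]

# Randomly assign each treatment to a greenhouse position so that
# location effects (e.g., one bench getting more ambient light)
# cannot bias any one treatment group.
random.seed(1996)  # fixed seed only so the example layout is reproducible
positions = list(range(len(treatments)))
random.shuffle(positions)
layout = dict(zip(positions, treatments))

for pos in sorted(layout):
    print(f"plot {pos}: {layout[pos]} hours of light per day")
```

Running the sketch prints a randomized plot-to-treatment assignment; every other variable (water, temperature, soil, nutrients) is assumed to be held constant outside the code.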
Based on the data, or results, from our experiment, we may have to change our hypothesis. We may also want to keep the amount of light constant and change one of the other variables in a follow-up experiment. Scientists must always be prepared to discard a hypothesis not supported by careful experiments. It is never acceptable to discard good data because it doesn't support the hypothesis. The ideal is to try to determine and keep in mind all possible explanations for a phenomenon—multiple working hypotheses—then design tests to narrow the choices.
When the results of a study are submitted to a scientific journal, they are usually subject to peer review. The peer review system gives experts in the field the opportunity to examine the methods, assumptions, and conclusions of the study before it is made public. After publication, the competitive nature of scientific inquiry encourages other scientists to challenge the results of the study. They may cite experiments that contradict the results, or they may repeat the experiment themselves to see if their results are the same. Repeatability of results is another requirement of sound scientific methods.
As in any other profession, science has unscrupulous practitioners who may alter, ignore, or create data to support their views. A few years ago, researchers from two laboratories claimed to have demonstrated cold fusion, touted as a promising clean energy source. Their work was discredited, however, upon the discovery that one team of researchers found excuses to discard data that did not support cold fusion. A graduate student in the other lab admitted that he had added tritium, a byproduct of cold fusion, to his samples to make it appear that cold fusion had occurred.
A consumer's guide to sound science
As you attempt to evaluate information presented by various sides in the public debate over scientific issues, here are some guidelines to keep in mind.
• Consider the source of the information, no matter which side of an issue presents it. People can be vocal without being well informed. A spokesperson should be able to cite credible scientific sources to support his or her position or to refute an opposing view. Mere opinions don't count. Talk show hosts or opinion polls should not dictate policy. Neither source represents sound science.
• Views opposing the general consensus on an issue cannot simply be dismissed. After all, general consensus once held that the Earth was flat! Science by its nature is anything but static.