An Essay on Science by a Practitioner

- - Responses to an Anti-scientist * - -

By J. T. Enright

(* Footnote: In April of 1998, I received an e-mail message from a David Anderson, challenging the negative outcome of my statistical analysis of the Munich dowsing experiments of Professor H.-D. Betz and colleagues. From the content of his message, I gather that David may be a believer in many aspects of the paranormal, including water dowsing, with a negative attitude toward science and scientists. I responded to him with commentary on several sentences in his message, and that e-mail response almost became a complete (but disjointed) essay on my views about the entire scientific enterprise: why and how scientists do what they do. Without their context, those responses would probably seem to most members of SDARI to be a restatement of the obvious, and a bit boring. Rather than attempting to reformulate the entire text of that response as a more coherent essay, I have chosen to provide my comments essentially verbatim, preceded by the Anderson sentences to which I was responding.)

D.A.: Statistical methods are the wrong way to attain irrefutable truth. One can't really prove anything by statistics!

J.T.E: I agree entirely that one can't prove anything by statistics, but the hope for "proof" or "irrefutable truth" seems to me an unreasonable expectation except in mathematics or other logical systems in which all the rules of the game are specified in advance and mutually agreed to. The basic approach in ordinary science is to propose a tentative explanation or interpretation or hypothesis, decide what the expected consequences of that hypothesis are, and then do experiments or make observations in an effort to test the hypothesis. And the critical aspect of such testing is not to try to CONFIRM or PROVE the hypothesis but to attempt to DISPROVE it. But while the term "disprove" may sound as though absolute proof is implied, all a scientist can usually hope to accomplish is to make the hypothesis seem unlikely (or very unlikely) to be correct. Sure, some hypotheses can be rigorously shown to be false: if the hypothesis is that all birds are black, then seeing even a single exception disproves the hypothesis, just as a single counterexample can disprove a mathematical conjecture; but most scientific hypotheses are not that clear-cut or general, in the sense of invoking absolute generalizations with the word "all". And statistics is one of the primary tools by which a scientist can attempt to decide HOW unlikely a particular hypothesis should be considered. One specifies a "null hypothesis", determines its expected consequences, collects relevant data, and then a statistical test can quantify how improbable the observed data would be if the null hypothesis were true. (Any such interpretation requires certain assumptions about how the data were collected and the like.) Anyone who understands statistics recognizes that he cannot say that "X" has been "proven" by statistical testing. 
Either the conclusion is that - if everything has been done according to the rules - data this extreme would arise only, say, 1 time in 100 if the null hypothesis were true, which makes that hypothesis look very doubtful; or, if the results come out otherwise, the conclusion is not that one has shown that the hypothesis IS true, but only that with these data, one cannot conclude that it is false - and that's a very different conclusion from a logical point of view.
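The null-hypothesis reasoning just described can be sketched as a toy calculation. The numbers below are invented purely for illustration (they are not taken from any actual dowsing test): suppose a dowser must pick the correct one of 10 positions, and scores 9 hits in 30 trials. The null hypothesis is "pure guessing, success rate 1/10", and the test asks how probable a result at least this good would be if that null hypothesis were true.

```python
# A minimal sketch of a one-sided binomial test, with hypothetical numbers.
# The p-value is the probability of data at least this extreme GIVEN the
# null hypothesis -- NOT the probability that the null hypothesis is true.
from math import comb

def binomial_p_value(successes, trials, p_null):
    """P(X >= successes) when X ~ Binomial(trials, p_null)."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical result: 9 correct choices in 30 trials, chance rate 1/10.
p = binomial_p_value(9, 30, 0.10)
print(f"P(9 or more hits in 30 guesses, if purely guessing) = {p:.4f}")

# A small p-value makes "pure guessing" look unlikely; a large one means
# only that these data give no grounds to reject it -- it never PROVES
# that the dowser has a real skill.
```

With these invented numbers the p-value comes out around 0.002, so guessing would look like a poor explanation; had the dowser scored 4 hits in 30, the same calculation would simply fail to rule guessing out.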

D.A.: On many occasions scientific methods are not more accurate or valid than personal observation.

J.T.E.: I'm not sure what you mean by "accurate" or "valid", but I guess that doesn't matter much here. If personal observation means that one person has observed something once, then it doesn't really matter that much whether the observer is a scientist or not. Given equally good eyesight, the background of the observer may affect his ability to interpret what has been seen - a magician can better evaluate a demonstration of stage magic than I can, even though we may both have equivalent vision - but the ability to observe, in its most primitive sense, shouldn't depend on who is seeing. The difference between scientific observation and personal observation usually lies in the scientist's expectation of dozens or hundreds of measurements or observations before he's convinced of the validity of a generalization (except in cases like "All birds are black", where a single counterexample is sufficient), while the layman may be convinced by a single case. That's why I am, like most scientists, skeptical about anecdotal evidence - single case histories that have not been replicated. For a phenomenon to belong in the texture of science, the usual expectation is that it be REPRODUCIBLE, both by the person who first reports the phenomenon and by others in other labs. In my opinion, a single personal observation may well be a fine basis for formulating a testable hypothesis, but it's a lousy basis for firm convictions about how the world works.

D.A.: Science is wrong when it limits the validation of personal experiences only to instances that it can already explain.

J.T.E.: I don't think I've ever encountered a practicing scientist who would advance such a proposition. The day-to-day practice of science is a matter of personal experiences - observations or measurements made by one person or perhaps a group of people. Science involves the attempt to achieve ever deeper understanding of the world around us, and if one limited his research to instances that science can already explain, there would be nothing new to discover, there would be no scientific progress, and science would be no fun at all.

D.A.: . . .The tendency to confuse the acquisition of data with the possession of wisdom.

J.T.E.: Science - or at least science as I understand it - doesn't involve the claim or the hope of achieving "wisdom." The objective, the hope, is to achieve ever closer approximations - APPROXIMATIONS - in understanding how the world works. And all scientific conclusions are, or should be, regarded as "provisional", subject to possible revision on the basis of further data or observations. Some "scientific" conclusions are sufficiently well established that it appears very unlikely that they will ever need major revision - the effects of gravity on the motion of the planets, for example; and others are extremely tentative interpretations that are very likely to be wrong, but they are the best guesses presently available.

D.A.: . . . Awareness of the true meaning of facts and the truth from which they spring.

J.T.E.: Most practicing scientists would, I think, avoid using the term "truth". The philosopher seeks "truth", the scientist seeks provisional interpretations. There are observations, and there are interpretations, and some interpretations have a greater probability of remaining unscathed by future observations than others, but ordinarily, a scientific interpretation deserves nothing more than tentative, provisional acceptance. Scientists are, or should be, deeply convinced skeptics, about essentially everything, including the foundations of their own convictions: insistent on the principle that any novel and interesting claim about nature should be accompanied by adequate, rigorous and persuasive evidence before it is accepted into the framework of our knowledge.

D.A.: . . . "Cause" in the idea of cause and effect is simply an assumption. There is no absolute way to determine whether a sequence of individual events is truly related or not.

J.T.E.: Granted, many observed correlations are accidental, fortuitous. That's why the scientist seeks reproducibility. If B follows every time that someone does A, both in London and in Tokyo, then it begins to appear that the hypothesis of a causal relationship is more tenable than its alternative of purely chance association. That's what statistics is all about. An "absolute" way? Science doesn't deal with absolutes, only with provisional interpretations.

D.A.: Taking this idea into the dowsing experiment, what is the possibility again - that some unknown forces disrupted the experiment, "causing" results that were not expected?

J.T.E.: That's a worry of the sort that plagues all scientific experimentation, and part of the reason that INDEPENDENT replication in other labs is so essential. In terms of the Munich dowsing experiments, my interpretation is that it is difficult to imagine a set of data that would represent a more convincing case AGAINST the ability of dowsers to do what they claim they can. The experiments were undertaken by people who were convinced that dowsing is a real phenomenon, so one can't blame poor results on the frame of mind of the experimenters; and all the participants in the experiments were convinced that they had the necessary skills, so bias because the subjects didn't WANT to do well seems unlikely. And even the very "best" of the Munich dowsers were unable to reproduce their good results in additional test series, so the phenomenon was not reproducible in any sense of the word. It's difficult for me to imagine anyone undertaking a more extensive series of tests than those done near Munich, so these represent the best data available, and it seems clear to me that those data don't support the interpretation that dowsers really have any exceptional skills. The results don't demonstrate, of course, that dowsing skills don't exist - only that these observations do not make a convincing case in favor of dowsing. And a qualitatively similar sort of interpretation could probably be made if research were to be undertaken to demonstrate the reality of guardian angels: suppose that the tests were to come out negative. That's no PROOF of non-existence, just evidence that belief in angels lacks adequate foundation at this time, no matter how many individual people were to assert that they have seen angels (anecdotal evidence). 
If someone, on the basis of his own observation, wants to believe in angels, that must be his own, personal decision; but failed experiments, if they were to be undertaken, would mean that the believer has no right to insist that his belief is demonstrably true and should be accepted by the rest of the world. Some of what I've written here is at the forefront of my mind due to recent reading of the first chapters in a book by science historian Michael Shermer, entitled Why People Believe Weird Things (Ed.: W. H. Freeman and Company, New York, 1997). Shermer puts some of the thoughts expressed here much more elegantly than I can. I could try recommending that book to you, but I suspect that even the title may convince you that you don't want to have anything to do with it.

Note by the Editor:

The Munich study of dowsing, which was supported by a German government agency, concluded that some people, referred to as dowsers, had a special ability to find underground water in the absence of any physical clues. This claim was widely quoted in popular science magazines and the news media. Dr. Enright's formal critique of the Munich experiments was published in the scientific journal Naturwissenschaften, Vol. 82 (1995), pages 360-369. A less formal version of that critique, with some updating, appeared in the Skeptical Inquirer, Jan./Feb. 1999 issue, pages 39-46, under the title "Testing Dowsing: The Failure of the Munich Experiments". Dr. Enright also has been the speaker at two of SDARI's public meetings (May 26, 1996 and March 28, 1999), discussing many aspects of this subject.


James T. Enright is a Professor at Scripps Institution of Oceanography, UCSD, and a member of SDARI.