The Scientific Method and Earth Sciences

Larry W. Lake, Steven L. Bryant

Department of Petroleum and Geosystems Engineering, The University of Texas at Austin, 1 University Station CO300, Austin, TX 78712-0228

J. Energy Resour. Technol. 128(4), 245–246 (May 10, 2006) (2 pages) doi:10.1115/1.2358150 History: Received January 25, 2006; Revised May 10, 2006

The 17th Century, the so-called Age of Reason, is distant from us in nearly every respect: dress, politics, thought, and, of course, time. The collection of philosophers who wrote then can seem out of touch today. But perhaps we just need a refresher.

One of the philosophers, Rene Descartes (1596–1650 CE), in particular, seems dreadfully mundane. Cartesian coordinates seem obvious to us and, though volumes have been written about it, cogito ergo sum seems like a celebration of the trivial. The Scientific Method, the subject of this short piece, also seems commonplace. But the Method is as relevant today as it was then. We write to extol the Method here as a way of making research into the Earth sciences, particularly research into hydrocarbon extraction, more effective.

The Method consists of three sequential steps: hypothesis, validation, and rejection/modification.

Hypothesis. The first step is hypothesis, the easiest step in the Method to state, yet the one that often takes the most thought and, because it dictates what follows, the most important. Unfortunately, it is also the most often overlooked: fewer than 10% of the scientific papers or proposals we have reviewed contain a hypothesis, explicit or implied.

A hypothesis is a statement of the form “if…then…”. Hypotheses are easy to formulate; any existing research project can be restated as one. For example,

“If parameters in my computer model have been tuned to match observations of past behavior, then the model with these parameters will predict future behavior accurately.”


“If the price of hydrocarbon goes up, then economic growth will decline.”


“If large amounts of carbon dioxide can be stored underground, then global warming will be arrested or even reversed.”


“If a particular field is developed with horizontal rather than vertical wells, then the ultimate worth of the resource will be larger.”


“If we write a good opening paragraph to this piece, then you will read the whole thing.”

Hypotheses determine the ultimate success of the research: a hypothesis that is too broad will lead to failed research; one that is too narrow will lead to no advancement. Perhaps the most important quality of a hypothesis is that it must be falsifiable, i.e., it must be possible to prove it wrong. Stephen Jay Gould says that this is the most important attribute of the Method. This comment points to the next step in the Method.

Validation. Validation constitutes the design and execution of experiments, data accumulation, numerical simulation, and theoretical analysis that corroborate the hypothesis. Validation is the heavy lifting of the Method; it is not for the faint of resolve, but without validation the hypothesis is and remains merely a guess.

An exercise in validation is most powerful when it shows a hypothesis to be false. Why would anyone want this to happen? Surely it is Truth that enables judgments and guides us away from Error? Perhaps Truth has that role in some realms of human endeavor, but (alas) in science and engineering, no amount of corroborating evidence can ever show that a hypothesis is true. We can certainly put more and more confidence in a hypothesis as successful predictions keep piling up. And this confidence can lead to some highly practical uses. But showing a hypothesis to be true is not the purpose of the Method.

Falsifying a hypothesis is not as easy as it sounds. In fact, there are several pitfalls.

Validation must address the subject of the hypothesis. Often we have completed a phase of research and then wondered what we set out to do in the first place. This is especially true when applying existing procedures or data to a new hypothesis. A related failure is not accounting for differences in distance or time scales; no hypothesis about field behavior of hydrocarbon recovery can be validated with small experiments alone.

Validation must be as objective as possible, which normally means that the results of the validation must be a number. The number must be reproducible or at least have an understood bias and precision. Few things are as intellectually debilitating as trying to validate a hypothesis with noisy results. Unfortunately, Earth science hypotheses are often validated with only a single number.

Physicists in particular view validation as a matter of how an observation changes with scale, say distance or time. They are unconcerned with point-by-point agreement as long as observations change as predicted by the hypothesis. This is a view that Earth scientists could readily adopt, since many of our observations have temporal or spatial character.

Validation must eliminate, as much as is possible, alternative explanations. In medical research the experimental field is divided into groups, each differing from the other only by the factor being tested. Often the identity of the groups is disguised (the double-blind test) so as to avoid unintentional bias. Any observed difference between the two groups must therefore be due to that factor. This perhaps is the single biggest difficulty in applying the Method to Earth sciences: given the heterogeneity of natural systems and the expense involved in doing experiments, it is virtually impossible to have two groups that differ by only one factor.

A specific manifestation of the last pitfall that is particularly relevant for Earth scientists is validation by numerical model. We are all aware of papers in which the results of a (field or laboratory) experiment are matched by a numerical model. The matching is usually excellent (perhaps inevitably so—editors and reviewers are unimpressed by a poor match). But the match has been brought about by adjusting groups of parameters. As is almost universally acknowledged—though rarely in the paper or report describing the model—other adjustments can bring about the same agreement; this constitutes curve fitting, not validation.

We can turn this around to provide a simple litmus test: it is a validation only if failure of the model is a possible outcome. The presence of adjustable parameters often means there is no way for the model not to fit the results. The matched model may well be useful, but it does not contribute to the workings of the Method. Rigorous validation is possible only if (a) all of the parameters in the fitted model are independently measured and/or (b) the fitted model predicts results that were not used in the fitting.
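The litmus test can be illustrated with a toy history match. The sketch below (in Python, using entirely made-up synthetic "observations", not any real field data) tunes a model with as many adjustable coefficients as history points. Such a model cannot fail to match the history, so the match tells us nothing; only its predictions of held-back observations constitute validation.

```python
import random

random.seed(1)

# Hypothetical synthetic "field history": a linear trend plus measurement noise.
times = [i / 11 for i in range(12)]
observed = [2.0 * t + random.gauss(0.0, 0.3) for t in times]

# Hold back the last four observations; the model is never tuned to them.
t_hist, obs_hist = times[:8], observed[:8]
t_future, obs_future = times[8:], observed[8:]

def tuned_model(x):
    """Polynomial through all eight history points (Lagrange form).

    With as many adjustable coefficients as data points, failure to
    match the history is not a possible outcome.
    """
    total = 0.0
    for i, yi in enumerate(obs_hist):
        term = yi
        for j, tj in enumerate(t_hist):
            if j != i:
                term *= (x - tj) / (t_hist[i] - tj)
        total += term
    return total

# The history match is perfect by construction...
match_err = max(abs(tuned_model(t) - y) for t, y in zip(t_hist, obs_hist))

# ...but the prediction of the held-back observations is not.
pred_err = max(abs(tuned_model(t) - y) for t, y in zip(t_future, obs_future))

print(f"history-match error: {match_err:.2e}, prediction error: {pred_err:.2e}")
```

The match error is zero by construction, while the prediction error is large: the tuned model has fit the noise, not the trend. Only the second number could have come out badly, so only the second number is validation.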

And finally, the Method is practiced by humans who have reputations, prejudices, vested interests, or wish to acquire them. We are naturally averse to letting go of a previously accepted theory or pet hypothesis—especially if it is one we put forward.

Rejection/modification. Outright rejection is uncommon. It is far more common for the hypothesis to be modified and the process begun anew with a different hypothesis and a revalidation. The new validation may use other data, or the hypothesis may be reformulated to conform to the existing data. Such a reformulation, however, requires more validation.

The continual restatement of hypothesis and revalidation is the procession of knowledge building. The process is intrinsically circular like an ever widening spiral. Progress consists not in reaching a particular destination, but in gaining a better understanding each time around.

As we have written it, the Method seems formulaic. This is not so, as our concluding paragraphs (and many philosophers of science writing during the last half century) illustrate.

Karl Popper’s various expositions of the Method often point to the 1919 measurements of bending of light from distant stars as exemplars. The idea was to test Einstein’s hypothesis about gravitational bending of light. His theory of relativity predicted twice as much bending as did Newtonian mechanics.

Popper was chiefly concerned with distinguishing science from pseudoscience, and his neat summary of the story (the observations corroborated Einstein, and classical mechanics gave way to relativistic) neglected the rather messy reality that Peter Coles has eloquently described.

Problems with the instruments made the data noisy; problems with the weather reduced the number of measurements; and problems with steamship operators prevented reference observations being made. When the data were presented at a special session of the Royal Society, the assembly did not unanimously check off the three steps of the Method and quickly adjourn to toast Einstein’s genius. Instead, there was considerable discussion between the skeptics and the convinced. Acceptance of relativity ultimately depended upon other observations that corroborated its implications. All of this is pretty much what is to be expected when humans apply the Method.

We have barely scratched the surface of the Method here. For more depth, consult the works of Popper, Kuhn, Gould, Coles, and, of course, Descartes himself.

Much of modern Earth science research largely ignores the Method, and we are the poorer for it. The lack shows up in the plague of incremental research, few real breakthroughs, the issuance of scientific papers that all seem to be the same, and the persistence of questions that never seem to be answered.

Fittingly, the final word of this piece is another hypothesis: “if more of our research is hypothesis-based, then the effectiveness of the research will improve.” The next steps in the Method are up to you.

Copyright © 2006 by American Society of Mechanical Engineers