Why is uncertainty important in science?

The concentration of radioactive 14C in the atmosphere is not constant, as Willard Libby assumed when he developed radiocarbon dating. Instead, it fluctuates with changes in Earth's magnetic field, the uptake of carbon by plants, and other factors. In addition, levels of radioactive 14C increased through the 20th century because nuclear weapons testing released large amounts of radiation into the atmosphere.

In the decades since Libby first published his method, researchers have recalibrated the radiocarbon dating method against tree-ring dates from bristlecone pine trees (Damon et al.).

As a result, both the precision and accuracy of radiocarbon dates have increased dramatically. For example, Xiaohong Wu and colleagues at Peking University in Beijing used radiocarbon dating on bones of the Marquises (lords) of Jin recovered from a cemetery in Shanxi Province, China (see Table 2) (Wu et al.).

As seen in Table 2, not only is the precision of the estimates (ranging from 18 to 44 years) much tighter than the error range Libby reported for the Douglas fir samples, but the radiocarbon dates are also highly accurate: the recorded death dates of the Jin lords (the theoretically correct values) fall within the reported statistical error ranges in all three cases. Karl Pearson first described mathematical methods for determining the probability distributions of scientific measurements, and these methods form the basis of statistical applications in scientific research (see our Data: Statistics module).

Statistical techniques allow us to estimate and report the error surrounding a value after repeated measurement of that value.

For example, both Libby and Wu reported their estimates as ranges of one standard deviation around the mean, or average, measurement. The standard deviation provides a measure of the variability of individual measurements; specifically, for normally distributed measurements, a range of one standard deviation on either side of the mean encompasses roughly 68% of the individual values. The standard deviation of a set of measurements can also be used to compute a confidence interval around the mean.
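To make the arithmetic concrete, here is a minimal sketch in Python. The measurement values are invented for illustration (we do not have the radiocarbon data from the studies above); it computes a mean, a sample standard deviation, and an approximate 95% confidence interval for the mean:

```python
import statistics

# Hypothetical repeated measurements of a single quantity
# (invented values, for illustration only).
measurements = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)  # sample standard deviation

# For normally distributed measurements, mean +/- 1 SD covers ~68%
# of individual values; mean +/- 1.96 * SD / sqrt(n) gives an
# approximate 95% confidence interval for the mean.
n = len(measurements)
half_width = 1.96 * sd / n ** 0.5

print(f"mean = {mean:.2f}, sd = {sd:.2f}")
print(f"95% CI for the mean: {mean - half_width:.2f} to {mean + half_width:.2f}")
```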

Confidence statements do not, as some people believe, provide a measure of how "correct" a measurement is. Instead, a confidence statement describes the probability that a measurement range will overlap the mean value of a measurement when a study is repeated. This may sound a bit confusing, but consider a study by Yoshikata Morimoto and colleagues, who examined the average pitch speed of eight college baseball players (Morimoto et al.).

Each of the pitchers was required to throw six pitches, and an average pitch speed was calculated along with a confidence interval around it. When Morimoto later repeated the study, requiring each of the eight pitchers to throw 18 pitches, the average speed he found was consistent with the first estimate. In this case, there is no "theoretically correct" value, but the confidence interval provides an estimate of the probability that a similar result will be found if the study is repeated. In science, an important indication of confidence within a measurement is the number of significant figures reported.
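What a confidence statement means can be checked by simulation. The sketch below uses invented parameters, not Morimoto's data: it repeats a hypothetical study many times, builds a 95% confidence interval from each sample, and counts how often the interval captures the true mean:

```python
import random

random.seed(42)

TRUE_MEAN, SD, N_PITCHES, N_STUDIES = 35.0, 1.5, 18, 10_000

covered = 0
for _ in range(N_STUDIES):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N_PITCHES)]
    m = sum(sample) / N_PITCHES
    var = sum((x - m) ** 2 for x in sample) / (N_PITCHES - 1)
    half = 1.96 * (var ** 0.5) / N_PITCHES ** 0.5
    if m - half <= TRUE_MEAN <= m + half:
        covered += 1

# Roughly 95% of the intervals should contain the true mean
# (slightly less here, since we use the normal z-value of 1.96
# with a small sample rather than a t-value).
print(f"coverage: {covered / N_STUDIES:.1%}")
```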

Morimoto reported his measurements to one decimal place: that was the finest difference between pitches his instrumentation could reliably distinguish, so reporting additional significant figures would have implied a precision the equipment did not support. Incorrectly reporting significant figures can introduce substantial error into a data set.
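As an illustration of the idea (a hypothetical helper, not anything from Morimoto's study), a few lines of Python can enforce a significant-figure limit when reporting values:

```python
import math

def round_sig(value: float, sig_figs: int) -> float:
    """Round value to the given number of significant figures."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))
    return round(value, sig_figs - 1 - exponent)

# A speed measured by an instrument good to three significant figures
# should not be reported with six.
print(round_sig(34.6478, 3))  # 34.6
```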

As Pearson recognized, uncertainty is inherent in scientific research, and for that reason it is critically important for scientists to recognize and account for the errors within a dataset. Disregarding the source of an error can result in the propagation and magnification of that error. For example, in the early 1960s, the American mathematician and meteorologist Edward Norton Lorenz was working on a mathematical model for predicting the weather (see our Modeling in Scientific Research module) (Gleick; Lorenz). Lorenz was using a Royal McBee computer to iteratively solve 12 equations that expressed relationships such as that between atmospheric pressure and wind speed.

Lorenz would input starting values for several variables into his computer, such as temperature, wind speed, and barometric pressure on a given day at a series of locations. The model would then calculate weather changes over a defined period of time.

The model recalculated a single day's worth of weather changes in single-minute increments and printed out the new parameters. On one occasion, Lorenz decided to rerun a particular model scenario. Instead of starting from the beginning, which would have taken many hours, he decided to pick up in the middle of the run, consulting the printout of parameters and re-entering these into his computer.

He then left his computer for the hour it would take to recalculate the model, expecting to return and find a weather pattern similar to the one predicted previously. Unexpectedly, Lorenz found that the resulting weather prediction was completely different from the original pattern he observed. What Lorenz did not realize at the time was that while his computer stored the numerical values of the model parameters to six decimal places (for example, 0.506127), the printout rounded them to three (for example, 0.506), and it was these shortened values that he re-entered.

The difference between the two numbers is minute, an absolute error of less than 0.001. However, with each iteration of his model (and there were thousands of iterations), this error was compounded, multiplying many times over, so that his end result was completely different from the first run of the model.
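Lorenz's compounding-error effect is easy to reproduce. The sketch below is a minimal illustration, not Lorenz's original 12-equation model: it integrates his later three-equation system with a simple Euler scheme from two starting points that differ only by the rounded-off digits, and prints how far apart the runs drift:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the three-variable Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two runs: the full-precision value vs. the printout's rounded value.
a = (0.506127, 1.0, 1.0)
b = (0.506, 1.0, 1.0)  # differs by only 0.000127

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: |x_a - x_b| = {abs(a[0] - b[0]):.6f}")

# The tiny initial difference grows with each iteration until the
# two runs bear no resemblance to one another.
```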

Lorenz published his observations in the now-classic work "Deterministic Nonperiodic Flow" (Lorenz, 1963). His observations led him to conclude that accurate weather prediction over a period of more than a few weeks was extremely difficult, perhaps impossible, because even infinitesimally small errors in the measurement of natural conditions were compounded and quickly reached levels equal to the measurements themselves.

Uncertainty can also be used to indicate how likely something is to occur. For example, climate scientists have documented climate change in many ways, but there is always some small amount of uncertainty about how much change is happening and where, and they include that uncertainty in their discussions.

When scientists study how much the nutritional value of a food changes over time, their results include the uncertainty around their measurements. Even though it may seem counterintuitive, scientists like to point out the level of uncertainty, both because they want to be as transparent as possible and because it shows how well certain phenomena are understood.

Decision makers in our society use scientific input all the time, but they could make critically wrong choices if the unknowns aren't taken into account. For instance, if uncertainty is understated, city planners could build a levee too low, or officials might not evacuate enough coastal communities along the expected landfall zone of a hurricane. For these reasons, uncertainty plays a key role in informing public policy.

Science and Uncertainty

The desire for certainty is a powerful human emotion, and dispensing with uncertainty is a prime motivation for many endeavors and decisions.

And it is because new evidence can challenge any particular claim that (a) science is an exciting thing to do and (b) science is a reliable source of knowledge. Science is reliable generally because scientists are out there looking for new evidence, testing theories in new ways, and challenging each other to do better, and they are doing all this because science is not certain.

So, we should embrace the uncertainty in science. But that uncertainty has implications that should be addressed: it means that science cannot, and should not, be value-free.

It is the problem of assessing evidential sufficiency that makes value-free science thoroughly elusive. In the scientific mode, we all want to have beliefs based on evidence.

But how much evidence is enough? When is the evidence sufficient for us to accept a belief? This is a core question for the practice of science advice.


