Unknowability and Public Discourse
There's a huge gap between how scientists talk about science among themselves and how science is understood by (and often how it's presented to) the public at large. The public thinks that science is based on the discovery of "laws" of nature. If most people's exposure to science ends with their high school biology or chemistry or physics class, this misunderstanding is not surprising. The mass public divides ideas about the world into two buckets: the fact bucket and the opinion bucket. Either something is a proven and inviolable scientific law (i.e., a fact), like the law of gravity, or it is merely an opinion, in which case it has no epistemic authority whatsoever.
What the public fundamentally lacks is the habit of thinking probabilistically. Psychologists have amassed quite a bit of data showing that humans are not naturally inclined to think accurately about probabilities (compared to, say, our naturally fantastic ability to recognize different faces), which makes the public's failure to apply probabilistic thinking to scientific questions fairly unsurprising. In truth, the way science is supposed to work, according to many pioneers of the scientific method, is by testing theories against observable data. In some rare cases, it is true that this method can completely rule out an explanation for an observable phenomenon: if the theory were true, the observed data would be impossible. Most of the time, however, the outcome of an analysis is not that black and white.
The outcome of most scientific analysis is either to lend support to a theory or to fail to lend support to it. Support, or the lack thereof, is determined by the probability of finding the observed data if the theory in question were untrue. That assumption that the theory is untrue is called the null hypothesis. Most quantitative studies set a pretty high bar there: they will only consider their theory supported if the observed data had less than a 5% chance of occurring if the null hypothesis were true. That is, assuming the theory being tested is wrong, is there less than a 5% chance of observing the data that was observed? If so, the scientific community has adopted a norm that says yes, the theory is supported (there are some variations on that 5% number that depend on things like sample size).
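The logic of that 5% bar can be made concrete with a simulation. Here is a minimal sketch using a hypothetical example (a coin suspected of being biased toward heads, with made-up numbers: 60 heads in 100 flips): we simulate a world where the null hypothesis is true (the coin is fair) and count how often chance alone produces data at least as extreme as what was observed.

```python
import random

random.seed(0)

# Hypothetical study: we suspect a coin is biased toward heads.
# Null hypothesis: the coin is fair (i.e., our theory is wrong).
# Observed data: 60 heads in 100 flips.
observed_heads = 60
n_flips = 100
n_simulations = 100_000

# Simulate the null many times and count how often a fair coin
# produces a result at least as extreme as the one observed.
at_least_as_extreme = 0
for _ in range(n_simulations):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        at_least_as_extreme += 1

p_value = at_least_as_extreme / n_simulations
print(f"estimated p-value: {p_value:.3f}")

# The conventional norm: the theory counts as "supported"
# only if p < 0.05.
print("supported" if p_value < 0.05 else "not supported")
```

For these particular numbers the p-value comes out around 0.03, so the result clears the 5% bar. Note what the simulation does and does not say: it never declares the theory true, it only says the data would be unusual in a world where the theory is false.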
Even if a study meets this fairly high bar of 5%, there are a lot of ways that scientific analysis can still go wrong. The chance of being wrong about a theory's validity is actually quite a bit higher than 5%, given the large family of possible mistakes that can occur in statistical analyses. There are many assumptions that must hold when conducting inferential statistics, and if those assumptions are violated, the conclusions may be wrong. Fortunately, a good number of those assumptions are testable, and the consequences of violating them are known. But some assumptions are not testable, and the consequences of some violations are not known (particularly when they occur in combination). Moreover, not every scientific journal holds the same standards for statistical rigor, and at least some portion of publications are willing to publish work that has, at best, untested assumptions and, at worst, obvious errors.
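One member of that family of mistakes is easy to quantify: multiple comparisons. This is an illustrative calculation, not a claim about any particular study. If an analysis runs many independent tests at the 5% threshold on data where every null hypothesis is actually true, the chance of at least one false positive grows quickly:

```python
# Illustrative arithmetic: each test has a 5% false-positive rate,
# so across k independent tests on true-null data, the probability
# of at least one false positive is 1 - 0.95**k.
for k in (1, 5, 20, 100):
    p_any = 1 - 0.95 ** k
    print(f"{k:3d} tests -> {p_any:.0%} chance of at least one false positive")
```

At 20 tests the chance of a spurious "significant" result is already about 64%, which is one concrete reason the real error rate of published science runs higher than the nominal 5%.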
In other words, in actual science, the state of knowledge on a given question is an overall assessment of an accumulation of work that uses a variety of statistical methods, data sources, and levels of rigor. A scientist understands that their opinion about a question is a probabilistic assessment of an entire body of evidence, each piece of which is in turn based on probabilistic techniques.
However, because the public lacks this understanding of how science actually works, communicating scientific findings to the public is a real conundrum. It could be that the evidence for something is very strong, but it will never be 100% certain in the vast majority of cases. This little window of doubt means that the public feels like the idea is mere opinion. It gets thrown into the opinion bucket, and they are free to believe it or not. Evolution? Just a theory. Climate change? They can't be certain. Vaccine safety? What about that one study in that one journal?
It sucks that governments, groups, and individuals have to make decisions in the face of uncertainty. It would be nice if we could be certain about things. But that's not what the world is like. The world does not just automatically bend to our epistemic desire for certainty. It stubbornly insists on being hard to understand and predict (almost as if our knowledge of it is not its highest priority! Imagine that!). Nevertheless, we still have to make these decisions.
But our failure to understand probability and the scientific process and our extreme discomfort with uncertainty means people lack the ability to soberly weigh evidence probabilistically. It means that our public debates are stupid. It means that the public believes all ideas held by anybody are equally valid (this is the horrible plague of false equivalency, where people feel entitled to demand that the World Is Flat Society get equal time and attention as The World is Round Society). It means we blame people for the results of their decisions rather than whether those decisions were the best bet at the time given the existing evidence. People who make good calls but catch a bad break get condemned, while people who make bad calls but get lucky are praised.
It also means that journalists who cover science, and sometimes the scientists themselves, often feel like they have to oversell the case for the theory they are presenting to the public. If they try to carefully discuss the uncertainties, assumptions, weaknesses in study design, and countervailing evidence against some theory, the public will only hear, "ok, so you're telling me it goes in the opinion bucket."
Scientists probably should get more comfortable and honest about the fact that some questions are simply unanswerable, and that's a damn shame, because we will probably still have to somehow come up with answers to those questions to make important life (and policy) decisions. But it would be nice if the public could be made to understand that there is a continuum of certainty about empirical questions, which means not only that the two-bucket approach is wrong, but also that some theories are a lot more worthy of our time and attention than others, and that just because some crackpot believes something doesn't mean I can't say that they're an idiot.