Billy Goats Gruff

Wednesday, August 18, 2010

Alpha and Beta

I recently discovered an interesting series of interviews over on Slate.com. A woman who wrote a book on being wrong interviews a random assortment of quasi-famous people.

In statistics, all matters of rightness and wrongness are dealt with probabilistically. In general, there are two classes of error: Type I and Type II (yes...statisticians really are THAT creative).

Science is the process of setting up hypotheses and then trying to disprove them. Whatever one posits as the status quo version of the world, the version the scientist wants to disprove, is set as the "null hypothesis." So, if you were trying to prove that chewing gum increases sperm count, the null hypothesis is that chewing gum does NOT increase sperm count. Most of our standard statistical techniques are set up to show the degree of certainty we can have (based on underlying probability distributions) about rejecting that null hypothesis.

However, because this is an inherently probabilistic enterprise, there is always a chance of error in these procedures. A Type I error occurs when we reject the null hypothesis when in fact it is true, i.e., we say that chewing gum DOES increase sperm count when in fact it does not. A Type II error occurs when we ACCEPT the null hypothesis when in fact it is false: we say that chewing gum has no relationship to sperm count when in fact it does.

It may not be apparent, but these two types of error are mirror images of each other. Holding the sample size constant, we cannot decrease the probability of one type of error without increasing the probability of the other type. In social science, we generally fear Type I errors more than we fear Type II. We would rather fail to uncover new truths about the world than accept new "truths" that are really falsehoods. To that end, we conventionally set our Type I error rate (referred to as alpha) at 5%. We are willing to be wrong 5 times out of 100. But that means the Type II error rate is going to be fairly high (calculating the Type II error rate is a bit more technical, since it depends on the true effect size and the sample size).
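
The alpha-versus-beta tradeoff is easy to see by simulation. The sketch below (an illustration, not anyone's actual study: the effect size, sample size, and number of trials are all invented) runs a two-sided z-test many times. When the null is true, the fraction of rejections is the Type I rate and should land near our chosen alpha of 5%; when the null is false, the fraction of failures to reject is the Type II rate.

```python
import random
import statistics

random.seed(0)

def z_test_rejects(sample, mu0=0.0, sigma=1.0, z_crit=1.96):
    """Two-sided z-test of H0: mean == mu0, at alpha = 0.05 (z_crit = 1.96)."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

def rejection_rate(true_mean, trials=20_000, n=30):
    """Fraction of simulated experiments (n observations each) that reject H0."""
    rejections = sum(
        z_test_rejects([random.gauss(true_mean, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return rejections / trials

type1 = rejection_rate(true_mean=0.0)    # H0 is true: every rejection is a Type I error
power = rejection_rate(true_mean=0.5)    # H0 is false: failures to reject are Type II errors
print(f"Type I rate:  {type1:.3f}")      # close to 0.05, the alpha we chose
print(f"Type II rate: {1 - power:.3f}")  # beta; shrinks as effect size or n grows
```

Lowering z_crit (a stricter alpha) drops the Type I rate but pushes the Type II rate up, which is exactly the mirror-image tradeoff described above.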

One of the things I really like about statistics is that it offers a precise language for describing what we do anyway. We ALL make decisions probabilistically, and we all make unconscious tradeoffs between types of error. Statistics just says, hey, let's be precise about it. I can't tell you what's "true." I can only tell you what's probable, given a certain set of assumptions. But I CAN tell you HOW probable it is. It is up to you to deal with that remainder of uncertainty.

So, I've been thinking about this in reference to the Iraq War. It appears from my fairly lazy following of the war in the media that things there have stabilized. It is not impossible that we might pull off some sort of stable, semi-democratic regime there. But, does that mean that the war was a good idea? Not necessarily. The quality of a decision has to be judged according to the probability of success that the actors faced at the time a decision was made. In the same way, a person might play the lottery and win. Does that mean it was a good idea to play the lottery? No: the expected value of a lottery ticket is negative. Playing the lottery is an objectively bad decision.
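
The lottery claim is just an expected-value calculation. The numbers below are made up for illustration (real lotteries vary), but any realistic ticket pencils out the same way:

```python
# A hypothetical $2 ticket: a 1-in-300-million shot at a $100M jackpot,
# plus a 1-in-25 shot at a $4 consolation prize. (Invented figures.)
ticket_price = 2.00
ev_winnings = (1 / 300_000_000) * 100_000_000 + (1 / 25) * 4
expected_value = ev_winnings - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # negative
```

The expected winnings come to about $0.49 against a $2 price, so each ticket is worth roughly negative $1.51 in expectation, regardless of whether any particular ticket happens to win.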

That's how we should judge policy choices as well. Let's presume for a second that the Iraq War could pass some sort of rigorous cost-benefit analysis (a big IF, but for the sake of argument, let's stipulate that). That is irrelevant for judging the decision to go to war. The question is, given the information available at the time, what was the most prudent course of action, probabilistically speaking? What was the expected value of the action? What was the variance? And what were the costs associated with those expected values and variances?
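
To make "expected value and variance" concrete, here is a toy comparison of two hypothetical policies with discrete outcome distributions. The payoffs and probabilities are entirely invented; the point is only that two options can differ in riskiness (variance) even when their expected values look close:

```python
# Each policy is a list of (probability, payoff) pairs. Invented numbers.
policies = {
    "cautious": [(0.9, 10), (0.1, -5)],
    "gamble":   [(0.5, 40), (0.5, -30)],
}

def ev_and_var(dist):
    """Expected value and variance of a discrete (probability, payoff) distribution."""
    ev = sum(p * x for p, x in dist)
    var = sum(p * (x - ev) ** 2 for p, x in dist)
    return ev, var

results = {name: ev_and_var(dist) for name, dist in policies.items()}
for name, (ev, var) in results.items():
    print(f"{name}: EV = {ev:.2f}, variance = {var:.2f}")
```

In this toy example the gamble has the lower expected value AND far higher variance, so it loses on both counts; judging a real policy means estimating those quantities from the information available at the time, not from how things happened to turn out.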
