I was told in grade school that the giraffe’s neck evolved to be long because taller giraffes could reach more tasty tree leaves in times of drought. It’s a lovely example of natural selection, and also completely wrong, as I discovered when researching an edit to the Wikipedia article. Eventually, someone just went and checked: it turns out that during times of drought or food scarcity, giraffes eat from low bushes.
There is an important lesson here about what it means to “explain something.”
Rudyard Kipling wrote a children’s book of myths about the origins of animals titled Just So Stories. In it he explains the origin of the elephant’s trunk, how the camel got his hump, and where the leopard’s spots came from (they were drawn by an Ethiopian from the leftover black of his own dark skin, so that the leopard would better blend into the background when they hunted zebra together). Clearly, making sense is not the criterion for truth. It’s very easy to forget this when someone gives you a complex explanation and you get that “aha! I understand” feeling. Human beings constantly confuse congruence with truth.
Sensible and false explanations are such a problem in science that the term “just-so story” has come to refer to any sort of explanation that fits the facts, but cannot be verified. Scientific theories are supposed to differ from literary criticism and other forms of creative writing by demanding explanations that are true. This means testing them against reality.
A crucial point here: you can’t test a theory against the same facts that you used to come up with the theory to begin with. Of course a theory is going to fit the facts that inspired it! Instead, a theory — an explanation of something — needs to predict things that haven’t been observed yet. Prediction is the essence of science; it is the ability to say what will happen before it happens that makes it possible to “design” a bicycle rather than just gluing random objects together until they roll. If our aim is to come up with a true theory about evolution, we need to use the length of the giraffe’s neck to make predictions about something else, something we can go check (repeatedly, if we are serious about testing the theory).
This seemingly philosophical notion is incredibly useful for spotting subtle bullshit that sounds like science.
Consider, for example, the trial of a vitamin for preventing the common cold. Let’s say it’s even a controlled trial: one hundred volunteers are given Vitamin Z daily, while another hundred are (unknowingly) given a placebo. At the end of the study, the Vitamin Z group had the same number of colds as the placebo group. But, the researchers discover as they analyze the data, it had fewer headaches. Does this mean Vitamin Z prevents headaches? Not necessarily, because the theory “Vitamin Z prevents headaches” was formulated by noticing a pattern, any pattern, then making up a story about how that pattern came to be. That doesn’t make the story true. And there will always be patterns. If the volunteers can suffer from hundreds of different ailments, then by sheer dumb chance the Vitamin Z group will turn out to suffer less from at least one of them. (Applied to controlled experiments, this notion can be made mathematically precise, by the way. See post-hoc analysis.)
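You can watch this trap spring in a few lines of code. Here is a minimal sketch in Python with made-up numbers (100 volunteers per group, 50 unrelated ailments, a flat 10% base rate for each — none of these figures come from any real trial): the “vitamin” does nothing at all, yet in virtually every simulated trial its group comes out ahead on at least one ailment purely by chance.

```python
import random

random.seed(42)

def ailment_counts(n_volunteers=100, n_ailments=50, p=0.10):
    """For each ailment, count how many volunteers suffer it, purely at random."""
    return [sum(random.random() < p for _ in range(n_volunteers))
            for _ in range(n_ailments)]

n_trials = 500
trials_with_spurious_win = 0
for _ in range(n_trials):
    placebo = ailment_counts()
    vitamin_z = ailment_counts()  # identical distribution: no real effect
    # Did the Vitamin Z group do strictly better on at least one ailment?
    if any(v < pl for v, pl in zip(vitamin_z, placebo)):
        trials_with_spurious_win += 1

# With 50 chances per trial, nearly 100% of trials produce
# at least one "Vitamin Z helps with X" pattern out of pure noise.
print(trials_with_spurious_win / n_trials)
```

The point of the simulation is not the exact fraction but that turning over fifty rocks per trial all but guarantees finding something under one of them.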
Put another way, if you keep turning over rocks you will eventually find something. The whole point of a theory — an explanation, a model, a statement of the causal relationships of reality — is to say what you will find before the rock is turned over. Otherwise you only have a story that fits the facts, a just-so story.
I have found just-so stories to be most common in alternative medicine, economics, and evolutionary explanations of human behavior. If nothing testable has been predicted, then nothing has been “explained.”