The truth value of a scientific principle is nothing more than the measure of its predictability. If you test it, you get a result. Next time you test it, how likely is it that you will get the same result? The more precise the results, the 'truer' the principle.

Imagine you've never seen a calculator before. Someone shows you how to use it. You type in 2+2. It provides an answer: 4. You try it again. It answers 4 again. You try other equations, and they all return correct answers. After enough repeated attempts you become comfortable with the expectation that the calculator will return a correct answer, ergo, "the calculator knows math." This is your principle, which has been experimentally proven true.

Note that this is not axiomatically true; all calculators have a known bit-error rate. Let's say that once in every ten million calculations, you can type in 2 + 2 and it will answer, I dunno... twelve. Does this make the principle invalid? NO. It just makes it imperfect, and still remarkably useful.
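To see why a tiny error rate doesn't sink the principle, here's a minimal sketch: a simulated calculator that glitches roughly once in ten million additions. The error rate and the specific wrong answer ("twelve") are assumptions taken from the story above, not real hardware figures.

```python
import random

# Assumed one-in-ten-million glitch rate, per the example above.
ERROR_RATE = 1 / 10_000_000

_rng = random.Random(42)  # seeded so the sketch is reproducible

def flaky_add(a, b):
    """Add two numbers, but very rarely return a corrupted result."""
    if _rng.random() < ERROR_RATE:
        return a + b + 8  # the rare glitch: 2 + 2 comes out as "twelve"
    return a + b

# Test the principle "the calculator knows math" many times over.
trials = 1_000_000
correct = sum(flaky_add(2, 2) == 4 for _ in range(trials))
print(f"{correct} correct out of {trials} trials")
```

Run it and the calculator is right the overwhelming majority of the time; the principle stays useful even though it isn't perfect.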

If you want to investigate why the calculator works, you might start with the hypothesis, "math elves live inside the calculator, awaiting your instructions. They then construct the answer and display it on the screen." Repeatedly using the calculator suggests that the elves do in fact come up with correct answers. But if you investigate further and take the cover off the calculator, it becomes apparent that there are no elves inside it. All you find is plastic and metal. You throw out the math elf hypothesis and form a new one regarding the materials you did observe. Let's say you eventually discover that the circuit board operates on the manipulation of electricity and the individual components behave consistently when voltage is applied to certain contacts.

The scientific method has a failure rate of zero. Not very small, not minuscule... ZERO.

How can I claim this when history documents all sorts of naive and incorrect scientific theories (e.g., phlogiston, ether, Lamarckian evolution)? Because the scientific method has nothing to do with identifying absolute truths.

It's about obtaining the most accurate model possible.

Functionally true according to the best information available at the time.

I can say the scientific method has a failure rate of zero simply because no scientific theory has ever been replaced by a competing theory that didn't fit the data as well.

Now imagine somebody comes up to you and proposes that it's not electrical interaction between the calculator's components, but that the calculator is actually a container that holds the mathematical spirit of the universe. All your electrical experiments still work. Is there any reason at all to throw those away in favor of a generic and vague explanation that ignores your data and has no explanatory power of its own?

Food for thought.

## Tuesday, October 30, 2007

### BLASPHEMY - EXPLICATION
