Tuesday, October 26, 2010

Does Intelligent Design Explain and Unify Scientific Data?

In the spirit of not trying to lure readers in by creating artificial suspense, the answer to the title question is "no."

In the spirit of getting something posted on the blog, though, I'm responding to the latest post by Casey Luskin on the Evolution News & Views blog.  Luskin is unhappy with the suggestion that "intelligent design" is based on mystery -- unsolved questions in biology and other sciences, to be met with a "god of the gaps" argument. Luskin argues that, in fact, there is a positive argument to be made for ID.  ID proponents infer design, he claims, because they find in nature the same sort of features which result, in our day-to-day experience, only from design: "A vast amount of information encoded in a biochemical language; a computer-like system of commands and codes that processes the information in order to produce molecular machines and multi-machine systems."

But this confuses drawing an analogy with actually finding the thing used as the analogy.  The genetic code can be compared to a language; it is not actually a language, but a set of chemical reactions.  The genome and gene expression can be compared to a computer program, but a gene that regulates another gene isn't really a command line that calls up a subroutine.  And organic molecular systems can be analogized to machines, but calling them machines assumes the very conclusion Luskin purports to be establishing.

In many respects, biological design does not resemble human design: consider the consistent nested hierarchy of homologies, the use of dissimilar structures for similar functions (analogy), or of similar structures for dissimilar functions (parahomology).  These dissimilarities have long been cited as phenomena that common descent can explain and that common design cannot (granted, as Luskin has previously noted, "ID ... does not intrinsically challenge common ancestry," but one wonders why, if a Designer is constantly intervening over the course of evolution, He refrains so consistently from copying design elements across lineages).  And then, of course, there are designs that work toward contrary ends: e.g. the keen eyes of predators and the camouflage of prey, or the infectious abilities of pathogens and the capabilities of hosts' immune systems.  These would imply, at the least, either a multiplicity of designers or a Designer working for a multiplicity of clients.  To argue for design in the face of so many features that differ from the way known designers work, ID must abjure any hypotheses about the motives, methods, and design philosophy of the Designer; that in turn means it cannot find confirmations of such testable hypotheses about the Designer, which means that ID proponents are indeed reduced to "god of the gaps" arguments: "non-intelligent causes cannot, so far as we know, do this."

Luskin goes on to list various ways in which ID supposedly illuminates a broad range of scientific fields (basically, by encouraging us to marvel at the "design" in all of them).  But he also makes some specific arguments about the scientific testability of intelligent design:

"ID begins with the observation that intelligent agents produce complex and specified information (CSI). Design theorists hypothesize that if a natural object was designed, it will contain high levels of CSI. Scientists then perform experimental tests upon natural objects to determine if they contain complex and specified information. One form of CSI is irreducible complexity, which can be tested for by experimentally reverse-engineering biological structures through genetic knockout experiments to determine if they require all of their parts to function. When experimental work uncovers irreducible complexity in biology, researchers conclude that such structures were designed."
This is really awe-inspiringly confused.  "Complex and specified information" is not notably well-defined, which makes it difficult to say whether it has really been observed or not.  Dembski's most famous criterion for identifying it, the "explanatory filter," tries to find it by ruling out regularities of nature, then ruling out contingent combinations of regularities of nature and random chance, and identifying anything that can't be so explained as "CSI" and hence designed.  And this procedure would work very well if we already knew every law of nature in the universe, and all the possible ways they could interact under all possible conditions.  But one would suppose that, were we possessed of such near-omniscience, we wouldn't actually need the explanatory filter.
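To make the objection concrete, here is a minimal sketch of the filter as a bare decision procedure.  This is my own illustration, not Dembski's formalism, and the names (explanatory_filter, known_laws, chance_models) are invented for the example.  The point is that the "design" verdict is reached purely by elimination, so it is only as reliable as the catalogue of laws and chance processes we happen to feed it.

```python
# Toy rendering of the "explanatory filter" as a decision procedure.
# My own illustrative sketch, not Dembski's formal definition.

def explanatory_filter(phenomenon, known_laws, chance_models):
    """Classify a phenomenon as 'law', 'chance', or 'design' by elimination."""
    if any(law(phenomenon) for law in known_laws):
        return "law"
    if any(model(phenomenon) for model in chance_models):
        return "chance"
    # Everything left over gets labeled "design" -- including anything
    # produced by laws, or law/chance interactions, we haven't discovered yet.
    return "design"

# With an incomplete catalogue, a perfectly natural phenomenon is
# misclassified as designed:
verdict = explanatory_filter("lightning", known_laws=[], chance_models=[])
print(verdict)  # -> "design", simply because our list of known laws was empty
```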

But even assuming that CSI can be defined, is found to exist, and is not defined simply as "complexity produced by intelligence and not by non-intelligent causes," identifying it does not, in fact, rule out the possibility that non-intelligent causes could produce it.  Conversely, it is perfectly possible for a designer to seek simplicity, or to seek complexity that serves no particular function except to look interesting.  So even a designed artifact will not necessarily contain "high levels of CSI" (note that it was William Dembski who first raised this point).

But in any case, "irreducible complexity," as Behe defined it, does not rule out explanations in terms of mutation and natural selection.  This was first shown, years before Behe was born, by the geneticist Hermann Muller, who used the term "interlocking complexity" for systems in which mutations had first built up and then eliminated redundancy.  Given that mutations can alter the functions of components of molecular systems (e.g. making them better at one function while robbing them of another) or delete components outright, it does not follow that a system which can be rendered nonfunctional by removing one component could not have been built up by mutation and natural selection.
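Muller's scenario is easy to illustrate with a toy example (my own sketch, not a biological model): a part is added redundantly, and a later change makes it indispensable, so the final system fails Behe's knockout test even though it was assembled one small step at a time.

```python
# Toy illustration (my own, not Muller's or Behe's formalism) of how a system
# in which every part is now necessary can nonetheless be built up stepwise.
# Each version is a rule for whether the system functions, given its parts.

# Step 1: one part, A, does the job alone.
def works_v1(parts): return "A" in parts

# Step 2: a duplicate, B, arises; either part alone still suffices (redundancy).
def works_v2(parts): return "A" in parts or "B" in parts

# Step 3: A degrades so that it only works with B's help; the system now needs
# both parts -- "irreducibly complex" by the knockout test -- yet no single
# step required both parts to appear at once.
def works_v3(parts): return "A" in parts and "B" in parts

for name, works in [("v1", works_v1), ("v2", works_v2), ("v3", works_v3)]:
    print(name, "full system works:", works({"A", "B"}),
          "| knock out B:", works({"A"}))
```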

In practice, testing of the ID hypothesis still comes down to arguments from personal incredulity and god of the gaps arguments.
