Wordtrade.com
Cosmology

Review Essays of Academic, Professional & Technical Books in the Humanities & Sciences

No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence by William A. Dembski (Rowman & Littlefield)  William Dembski outlines several key ideas in the ongoing origins controversy, elaborating and extending the work begun in The Design Inference and Intelligent Design (see reviews below). In chapter 4, he takes on the field of computer science and evolutionary algorithms, using the No Free Lunch theorems to argue that any information "generated" by a computer algorithm can be traced back to an intelligent cause: the programmer. The No Free Lunch theorems state that, averaged over all possible fitness landscapes, no search algorithm outperforms blind search. Therefore, for an evolutionary algorithm to outperform blind search, the programmer must select a suitable fitness function from the space of all possible fitness functions, a choice which, Dembski argues, injects at least as much complex specified information as the algorithm ultimately outputs.
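A minimal sketch of that claim's practical upshot (my own illustration, not code from the book; the toy landscapes and parameter choices are invented): a simple evolutionary hill climber handily beats blind sampling on a fitness landscape it happens to be matched to, but averaged over randomly assigned fitness landscapes its advantage evaporates.

```python
import random

N_BITS = 12                      # search space: bit strings of length 12 (4096 points)
EVALS = 60                       # fixed budget of fitness evaluations per run
TARGET = [1] * N_BITS            # target string used by the "smooth" landscape


def smooth_fitness(x):
    """Structured landscape: fitness = number of bits matching the target."""
    return sum(a == b for a, b in zip(x, TARGET))


def make_random_fitness():
    """Unstructured landscape: every point gets an independent random fitness."""
    table = {}
    def f(x):
        key = tuple(x)
        if key not in table:
            table[key] = random.random() * N_BITS
        return table[key]
    return f


def blind_search(fitness):
    """Evaluate EVALS random points; return the best fitness seen."""
    best = float("-inf")
    for _ in range(EVALS):
        x = [random.randint(0, 1) for _ in range(N_BITS)]
        best = max(best, fitness(x))
    return best


def evolutionary_search(fitness):
    """(1+1)-style hill climber: keep a mutant only if it is at least as fit."""
    x = [random.randint(0, 1) for _ in range(N_BITS)]
    best = fitness(x)
    for _ in range(EVALS - 1):
        y = list(x)
        i = random.randrange(N_BITS)
        y[i] = 1 - y[i]           # flip one random bit
        fy = fitness(y)
        if fy >= best:
            x, best = y, fy
    return best


def average(search, landscape_factory, runs=300):
    return sum(search(landscape_factory()) for _ in range(runs)) / runs


if __name__ == "__main__":
    # On the structured landscape the hill climber wins decisively...
    print("smooth landscape:  blind =", average(blind_search, lambda: smooth_fitness),
          " evolutionary =", average(evolutionary_search, lambda: smooth_fitness))
    # ...but averaged over random landscapes the advantage largely disappears.
    print("random landscapes: blind =", average(blind_search, make_random_fitness),
          " evolutionary =", average(evolutionary_search, make_random_fitness))
```

The point the example is meant to make is Dembski's: the hill climber's success on the first landscape comes from the choice of landscape, not from the algorithm itself.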
This relates to another of Dembski's ideas, a proposed fourth law of thermodynamics: the law of conservation of information. The law states that complex specified information is at best conserved, and otherwise lost, in natural (non-intelligent) processes. This implies that information flows can be traced back to their sources (information doesn't just spontaneously arise in natural processes), and that the source will always turn out to be an intelligent cause. Dembski describes the "shell game" whereby evolutionary programmers try to obscure the "smuggled in" information and claim that their programs have generated the output information from scratch. He illustrates the process of tracing the information trail with clear, understandable examples, notably his analysis of Thomas Schneider's evolutionary simulation.
Perhaps one of the most intriguing claims Dembski makes is that his proposed fourth law of thermodynamics (the law of conservation of information) can actually counteract the second law of thermodynamics. Information theorists often speak of the "reduction of uncertainty" given by a piece of information, and certainty and uncertainty in information theory are analogous to order and disorder in physics. The possibility that information may be used to decrease entropy (thereby locally reversing the second law of thermodynamics) sheds new light on puzzles like Maxwell's demon. Maxwell's paradox involves a room divided into two sections by a wall with a (frictionless) door in it. The door can be opened by a demon, who opens it only when a fast-moving air molecule is approaching. By judiciously choosing when to open the door, the demon decreases the entropy of the system without adding any energy, separating the room into a "hot" half and a "cool" half. The puzzle has always been to explain how the demon can apparently get around the second law, and no completely satisfactory answer has yet been given. Dembski's explanation is elegant and intriguing: in order to know when to open the door, the demon had to generate and use information. The second law wasn't violated; it was counteracted by an intelligent agent making use of the fourth law. This is just one example of how design-theoretic ideas can have profound and intriguing implications for many fields of science, and how intelligent design itself is fertile ground for scientific progress.
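For readers who want the quantitative bridge the review is gesturing at, the standard textbook conversion between information-theoretic uncertainty and thermodynamic entropy (a general result, not something taken from Dembski's argument) is:

```latex
% Boltzmann entropy of W equally likely microstates vs. Shannon uncertainty in bits
S = k_B \ln W, \qquad H = \log_2 W
\quad\Longrightarrow\quad
S = (k_B \ln 2)\, H \;\approx\; 9.57 \times 10^{-24}\ \mathrm{J\,K^{-1}}\ \text{per bit}
```

On this accounting, each bit of "fast or slow" information the demon acquires about an approaching molecule is worth at most k_B ln 2 of entropy reduction in the gas.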

Of course, one of the most intensely awaited features of Dembski's book is the probability calculation for the bacterial flagellum. Treating the bacterial flagellum as a discrete combinatorial object, Dembski derives an upper bound on the probability of its forming under a chance hypothesis. The calculation leaves no doubt that this biological feature falls far below the universal probability bound of 1 in 10^150 and does indeed constitute an instance of biological specified complexity.
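As a rough sketch of the calculation's structure (the three-factor decomposition is how the book frames chance formation of a discrete combinatorial object; the exact figures are omitted here and should be taken from the book itself):

```latex
% Structure of the chance-formation bound for a discrete combinatorial object
P(\text{flagellum by chance}) \;\le\; p_{\mathrm{orig}} \times p_{\mathrm{local}} \times p_{\mathrm{config}}
```

Here p_orig is the probability of originating the required protein building blocks, p_local the probability of localizing them in one cellular neighborhood, and p_config the probability of configuring them into a functional whole; design is inferred when the product falls below the universal probability bound.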

Author Summary:

The crucial question for science is whether design helps us understand the world, and especially the biological world, better than we do now when we systematically eschew teleological notions from our scientific theorizing. Thus, a scientist may view design and its appeal to a designer as simply a fruitful device for understanding the world, not attaching any significance to questions such as whether a theory of design is in some ultimate sense true or whether the designer actually exists. Philosophers of science would call this a constructive empiricist approach to design. Scientists in the business of manufacturing theoretical entities like quarks, strings, and cold dark matter could therefore view the designer as just one more theoretical entity to be added to the list. I follow here Ludwig Wittgenstein, who wrote, "What a Copernicus or a Darwin really achieved was not the discovery of a true theory but of a fertile new point of view." If design cannot be made into a fertile new point of view that inspires exciting new areas of scientific investigation, then it deserves to wither and die. Yet before that happens, it deserves a fair chance to succeed.

One of my main motivations in writing this book is to free science from arbitrary constraints that, in my view, stifle inquiry, undermine education, turn scientists into a secular priesthood, and in the end prevent intelligent design from receiving a fair hearing. The subtitle of Richard Dawkins's The Blind Watchmaker reads Why the Evidence of Evolution Reveals a Universe without Design. Dawkins may be right that design is absent from the universe. But science needs to address not only the evidence that reveals the universe to be without design but also the evidence that reveals the universe to be with design. Evidence is a two-edged sword: claims capable of being refuted by evidence are also capable of being supported by evidence. Even if design ends up being rejected as an unfruitful explanatory tool for science, such a negative outcome for design needs to result from the evidence for and against design being fairly considered. Darwin himself would have agreed: "A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question." Consequently, any rejection of design must not result from imposing arbitrary constraints on science that rule out design prior to any consideration of evidence.

Two main constraints have historically been used to keep design outside the natural sciences: methodological naturalism and dysteleology. According to methodological naturalism, in explaining any natural phenomenon the natural sciences are properly permitted to invoke only natural causes, to the exclusion of intelligent causes. Dysteleology, on the other hand, refers to inferior design, typically design that is either evil or incompetent. Dysteleology rules out design from the natural sciences on account of the inferior design that nature is said to exhibit. In this book, I will address methodological naturalism. Methodological naturalism is a regulative principle that purports to keep science on the straight and narrow by limiting science to natural causes. I intend to show that it does nothing of the sort but instead constitutes a straitjacket that actively impedes the progress of science.

On the other hand, I will not have anything to say about dysteleology. Dysteleology might present a problem if all design in nature were wicked or incompetent and continually flouted our moral and aesthetic yardsticks. But that is not the case. To be sure, there are microbes that seem designed to do a number on the mammalian nervous system and biological structures that look cobbled together by a long trial-and-error evolutionary process. But there are also biological examples of nano-engineering that surpass anything human engineers have concocted or entertain hopes of concocting. Dysteleology is primarily a theological problem. To exclude design from biology simply because not all examples of biological design live up to our expectations of what a designer should or should not have done is an evasion. The problem of design in biology is real and pervasive, and needs to be addressed head on and not sidestepped because our presuppositions about design happen to rule out imperfect design. Nature is a mixed bag. It is not William Paley's happy world of everything in delicate harmony and balance. It is not the widely caricatured Darwinian world of nature red in tooth and claw. Nature contains evil design, jerry-built design, and exquisite design. Science needs to come to terms with design as such and not dismiss it in the name of dysteleology.

A possible terminological confusion over the phrase "intelligent design" needs to be cleared up. The confusion centers on what the adjective "intelligent" is doing in the phrase "intelligent design." "Intelligent" can mean nothing more than being the result of an intelligent agent, even one who acts stupidly. On the other hand, it can mean that an intelligent agent acted with consummate skill and mastery. Critics of intelligent design often understand the "intelligent" in intelligent design in the latter sense and thus presume that intelligent design must entail optimal design. The intelligent design community, on the other hand, understands the "intelligent" in intelligent design simply to refer to intelligent agency (irrespective of skill, mastery, or cleverness) and thus separates intelligent design from optimality of design. But why then place the adjective intelligent in front of the noun design? Does not design already include the idea of intelligent agency, so that juxtaposing the two becomes redundant? Redundancy is avoided because intelligent design needs also to be distinguished from apparent design. Because design in biology so often connotes apparent design, putting intelligent in front of design ensures that the design we are talking about is not merely apparent but also actual. Whether that intelligence acts cleverly or stupidly, wisely or unwisely, optimally or suboptimally are separate questions…

Chapter 1: The Third Mode of Explanation. How is design empirically detectable and thus distinguishable from the two generally accepted modes of scientific explanation, chance and necessity? To detect design, two features must be present: complexity and specification. Complexity guarantees that the object in question is not so simple that it can readily be attributed to chance. Specification guarantees that the object exhibits the right sort of pattern associated with intelligent causes. Specified complexity thus becomes a criterion for detecting design empirically. Having proposed a theoretical apparatus for detecting design, I next consider the challenge that Darwin posed to design historically and indicate why his challenge is viewed among many scientists as counting decisively against design. Essentially, Darwin opposed to design the joint action of chance and necessity and therewith promised to explain the complex ordered structures in biology that prior to him were attributed to design.
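The complexity half of this criterion is cashed out with Dembski's universal probability bound, obtained by multiplying deliberately generous upper bounds on the universe's probabilistic resources, roughly:

```latex
% Generous upper bounds on the universe's probabilistic resources
\underbrace{10^{80}}_{\text{elementary particles}} \times
\underbrace{10^{45}}_{\text{state changes per second}} \times
\underbrace{10^{25}}_{\text{seconds}} \;=\; 10^{150}
```

A specified event whose probability under the relevant chance hypotheses falls below 1 in 10^150 is, on this reasoning, beyond the reach of chance anywhere in cosmic history.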

Chapter 2: Another Way to Detect Design? Many in the scientific and philosophical community have staked their hopes on explaining specified complexity by means of evolutionary algorithms. Yet even without evolutionary algorithms to explain specified complexity, few are prepared to embrace design. One approach, now increasingly championed by the philosopher of science Elliott Sober, is to attack specified complexity head-on and claim that it is a spurious concept, incapable of rendering design testable in the case of natural objects, and that a precise probabilistic and complexity-theoretic analysis of specified complexity vitiates the concept entirely. In critiquing my approach to detecting design, Sober has tied himself to a likelihood framework for probability that is itself highly problematic. This chapter demonstrates that specified complexity is a well-defined concept and that it readily withstands the criticisms raised by Sober and his colleagues.

Chapter 3: Specified Complexity as Information. Intelligent design can be formulated as a theory of information. Within such a theory, specified complexity becomes a form of information that reliably signals design. As a form of information, specified complexity also becomes a proper object for scientific investigation. This chapter takes the ideas of chapters 1 and 2 and translates them into an information-theoretic framework. This reframing of intelligent design within information theory powerfully extends the design-inferential framework developed in chapter 1 and makes it possible accurately to assess the power (or lack thereof) of the Darwinian mechanism. The upshot of this chapter is a conservation law governing the origin and flow of information. From this law it follows that specified complexity is not reducible to natural causes and that the origin of specified complexity is best sought in intelligent causes. Intelligent design thereby becomes a theory for detecting and measuring information, explaining its origin, and tracing its flow.

Chapter 4: Evolutionary Algorithms. This chapter is the climax of the book. Here I examine evolutionary algorithms, which constitute the mathematical underpinnings of Darwinism. I show that evolutionary algorithms are in principle incapable of generating specified complexity. Whereas this result follows immediately from the conservation of information law in chapter 3, this law involves a high level of abstraction, so that simply applying the law does not make clear just how limited evolutionary algorithms really are. In this chapter I therefore examine the nuts and bolts of evolutionary algorithms: phase spaces, fitness landscapes, and optimization algorithms. An elementary combinatorial analysis shows that evolutionary algorithms can no more generate specified complexity than can five letters fill ten mailboxes.

Chapter 5: The Emergence of Irreducibly Complex Systems. Specified complexity as a reliable empirical marker of intelligence is all fine and well, but if there are no complex specified systems in nature, what then? The previous chapters establish that specified complexity reliably signals design, not that specified complexity is actualized in any concrete physical system. This chapter examines how we determine whether a physical system exhibits specified complexity. The key to this determination, at least in biology, is Michael Behe's notion of irreducible complexity. Irreducibly complex biological systems exhibit specified complexity. Irreducible complexity is therefore a special case of specified complexity. Because specified complexity is a probabilistic notion, determining whether a physical system exhibits specified complexity requires being able to calculate probabilities. One of the objections to intelligent design becoming a viable scientific research program is that one cannot calculate the probabilities needed to confirm specified complexity for actual systems in nature. This chapter shows that even though precise calculations may not always be possible, setting bounds for the relevant probabilities is possible, and that this is adequate for establishing specified complexity in practice.

Chapter 6: Design as a Scientific Research Program. Having shown that specified complexity is a reliable empirical marker of intelligence and having overturned the main scientific objections raised against it, I conclude this book by examining what science will look like once design is readmitted to full scientific status. The worry is that attributing design to natural systems will stultify science in the sense that once a scientist concedes that some natural system is designed, all the scientist's work is over. But this is not the case. Design raises a host of novel and interesting research questions that it does not make sense to ask within a strictly Darwinian or naturalistic framework. One such question is teasing apart the effects of natural and intelligent causation. For instance, a rusted old Cadillac is clearly designed but also shows the effects of natural causes (i.e., weathering). Intelligent design is capable of accommodating the legitimate insights of Darwinian theory. In particular, intelligent design admits a place for the Darwinian mechanism of natural selection and random variation. But as a framework for doing science, intelligent design offers additional tools for investigating nature that render it conceptually more powerful than Darwinism.

Further review:

This book marvelously extends and expands Dembski's earlier work. I highly recommend it. My only hope is that critics will have the decency to actually engage the ideas presented here. Unfortunately, my observations of the way Dembski is usually treated in academia do not provide much optimism. The fact is, Dembski has (successfully, in my opinion) "taken on" the established Darwinian orthodoxy, demonstrating the insufficiency of non-intelligent processes to generate specified complexity. The challenge is out. The gauntlet is down. Now, is there anyone out there who is man (or scientist) enough to read the book, truly understand the ideas, and engage in real academic dialog? I hope there is, and I look forward to the day when these ideas get a fair hearing in the academic arena. The ideas are strong, and science now has a choice: will it ignore Dembski's revolutionary ideas in favor of comfortable, established orthodoxy, or will it engage him in a spirit of free academic inquiry, and risk discovering what is really true? I hope, for the sake of science itself, that it chooses the latter.

The Design Inference: Eliminating Chance Through Small Probabilities by William A. Dembski (Cambridge Studies in Probability, Induction and Decision Theory: Cambridge University Press) How can we identify events due to intelligent causes and distinguish them from events due to undirected natural causes? If we lack a causal theory, how can we determine whether an intelligent cause acted? This book presents a reliable method for detecting intelligent causes: the design inference. The design inference uncovers intelligent causes by isolating the key trademark of intelligent causes: specified events of small probability. Just about anything that happens is highly improbable, but when a highly improbable event is also specified (i.e., conforms to an independently given pattern) undirected natural causes lose their explanatory power. Design inferences can be found in a range of scientific pursuits from forensic science to research into the origins of life to the search for extraterrestrial intelligence. This challenging and provocative book shows how incomplete undirected causes are for science and breathes new life into classical design arguments. Philosophers of science and religion, other philosophers concerned with epistemology and logic, probability and complexity theorists, and statisticians will read The Design Inference with particular interest.

The Design Inference is Dembski's attempt to formalize valid inferences about design. That is, how can we validly infer, for any event E, that E is the product of intelligent design? Most people make such inferences all the time (how else does the average person explain Stonehenge?). What is the logical structure of such inferences?

Despite the math, the argument structure is actually quite simple. The way to infer that E is the product of design is to run it through what Dembski calls the 'explanatory filter.' Try to explain event E according to presently known regularities and chance hypotheses (e.g., Newton's laws, random variation). If event E cannot be credited to any such regularity or to chance, and it matches an independently given pattern, it passes through the explanatory filter and is therefore attributed to design.
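As a minimal sketch of that decision procedure (my own toy code, not Dembski's; the threshold is his universal probability bound, while the function names and example numbers are invented for illustration):

```python
UNIVERSAL_BOUND = 1e-150   # Dembski's "small probability" threshold

def explanatory_filter(prob_under_regularity, prob_under_chance, is_specified):
    """Toy version of the three-node filter.

    prob_under_regularity: probability the event follows from known law/necessity
    prob_under_chance:     probability of the event under the relevant chance hypothesis
    is_specified:          does the event match an independently given pattern?
    """
    if prob_under_regularity > 0.5:          # high probability -> attribute to regularity/law
        return "regularity"
    if prob_under_chance > UNIVERSAL_BOUND:  # intermediate probability -> attribute to chance
        return "chance"
    if is_specified:                         # small probability AND specified -> design
        return "design"
    return "chance"                          # small probability but unspecified -> still chance

# Illustrative numbers only:
print(explanatory_filter(0.99, 0.99, False))      # a dropped rock falling: regularity
print(explanatory_filter(0.0, 2 ** -100, False))  # some particular coin-flip sequence: chance
print(explanatory_filter(0.0, 2 ** -1000, True))  # long sequence matching an independent pattern: design
```

The point of contention in what follows is the last two branches: design is whatever survives the elimination of regularity and chance.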

This argument structure is the first main weakness in Dembski's book. In employing the explanatory filter, TDI elevates an old fallacy, the argument from ignorance, to an inferential imperative. Simply showing that we can't presently explain a phenomenon is not sufficient to show that it can never be explained! In the nineteenth century, the anomalous precession of Mercury's perihelion could not be explained within the well-confirmed Newtonian framework, but inferring design on that basis would not have been good science. The problems with this kind of reasoning become even clearer when we consider our early ancestors, who made poor design arguments about weather patterns and illnesses they could not explain by physical principles.

The inferential strategy outlined above sounds rather simple, so where does all the notorious math come in? It comes in as Dembski attempts to quantitatively unpack just how to demonstrate that an event cannot be explained by a statistical regularity. For those who know some statistics, this is essentially a detailed account of how to rationally generate a rejection region in a probability distribution. The formalism emerges because Dembski's account is idiosyncratic, as he tries to show that you can generate a rejection region even *after* you have already observed the event. Most scientists would balk at this, as it would allow you to retroactively put a rejection region over the event, which, to put it simply, is cheating (imagine drawing a bull's-eye around a randomly shot arrow and saying that you hit the bull's-eye by skill).
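The bull's-eye worry can be made concrete with a small simulation (my own illustration, not an example from the book): a rejection region drawn after the data are seen "rejects chance" for essentially every outcome, whereas a region fixed in advance fires at its stated probability.

```python
import random

FLIPS = 100        # length of each random coin-flip sequence
RUN = 10           # size of the pre-specified target: ten heads at positions 0-9
TRIALS = 10_000

def post_hoc_rejection(seq):
    """Draw the bull's-eye after the shot: circle the exact sequence observed.
    Its probability is 2**-100, far below any small-probability bound, so this
    'test' rejects chance no matter what was observed."""
    return True

def pre_specified_rejection(seq):
    """Honest test: the target (ten heads at the start) was fixed in advance."""
    return seq[:RUN] == [1] * RUN

post_hoc = pre = 0
for _ in range(TRIALS):
    seq = [random.randint(0, 1) for _ in range(FLIPS)]
    post_hoc += post_hoc_rejection(seq)
    pre += pre_specified_rejection(seq)

print("post-hoc 'rejections':   ", post_hoc / TRIALS)  # ~1.0: the test is vacuous
print("pre-specified rejections:", pre / TRIALS)        # ~2**-10, i.e. about 0.001
```

Dembski's claim, discussed next, is that a region drawn afterward can still be legitimate, provided the pattern could have been specified without reference to the observed event.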

Dembski claims that it is perfectly appropriate to retroactively generate rejection regions if it would have been *possible* to specify the region before the event E actually occurred. For example, say you see someone shoot an arrow that hits a tree at a seemingly random location where there happens to be a worm. Later, however, you find out that the person was actually hunting worms and was wearing infrared worm-hunting goggles. In such a case, you would rightly conclude that the worm was hit because of skill rather than blind luck. More importantly, it would have been possible to specify in advance that the arrow would land on a worm, even if you hadn't seen it happen.

While many people in our discussion group disagreed, I think this is a reasonable way to retroactively reject a chance-based explanation. However, I do *not* think that Dembski is simply describing the rejection of a hypothesis. Rather, he is describing the replacement of one hypothesis with a more reasonable alternative (in this example, the alternative to chance is that the person is a skilled worm-hunter). This leads to what I think is the second main weakness in *The Design Inference*: the engine driving the inference is not a positive theory of design, but simply the elimination of other theories. The problem is that this does not seem to conform to how people do (or should) perform design inferences. That is, people don't run through an explanatory filter, eliminating all possible statistical explanations of something, and then end up with 'design' as the last node in an explanatory filter (or explanatory sink, as I like to call it). Rather, people have a *positive theory* of intelligent agents (i.e., things with desires, beliefs, and certain capacities) and they apply this theory (or network of theories) to explain events in the world. Design inferences are not different in kind from explanations of physical, biological, social, or psychological phenomena. It is the development of such a theory and its predictions which should be the focus for Dembski.

A final note: to those interested in the debate about creationism and evolution, caveat emptor. This book contains very little direct discussion of that issue. Rather, it does what should have been done long ago: tries to outline the inferential strategy people should be employing in this debate.

Despite the two main problems outlined above, I still recommend this book to anyone seriously interested in how we make inferences about design, in particular those interested in the creation-evolution debate. While the book does no damage whatsoever to the evolutionist (partly because, as mentioned above, it does not directly address that debate), it at least makes for stimulating, thought-provoking reading. Most importantly, it will direct creationists to be more rigorous in their arguments about design.

Intelligent Design: The Bridge Between Science & Theology by William A. Dembski, Michael J. Behe (Intervarsity Press) In the movie Contact, an astronomer played by Jodie Foster discovers a radio signal with a discernable pattern, a sequence representing prime numbers from 2 to 101. Because the pattern is too specifically arranged to be mere random space noise, the scientists infer from this data that an extraterrestrial intelligence has transmitted this signal on purpose.
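That inference turns on the signal matching a pattern given independently of the observation. Here is a small, hypothetical sketch of such a specification check (the pulse-count encoding is invented for illustration; the prime sequence is the one the film describes):

```python
def primes_up_to(n):
    """Independently given pattern: the primes from 2 to n (simple sieve)."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def matches_prime_specification(pulse_counts, limit=101):
    """Does the received sequence of pulse counts match the primes 2..limit?"""
    return list(pulse_counts) == primes_up_to(limit)

# A signal matching the specification (as in the film) would be:
received = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53,
            59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
print(matches_prime_specification(received))   # True -> specified; random noise almost never is
```

A noise source almost never produces this sequence, and the pattern (the primes) is specifiable without reference to the received signal; that combination is what the following paragraphs call specified complexity.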

William Dembski sees in this illustration an instance of identifying specified complexity, and he argues that this criterion can be empirically applied to biology and the natural sciences. Dembski, one of the leading design theorists working today, demonstrates the viability of design theory with his criterion of "specified complexity."

Just as the coherent organization of Scrabble tiles on a board indicates arrangement by an intelligent agent, complexity in the genetic language of DNA and in other biological structures suggests design. In the same way that anthropologists, forensic scientists, cryptologists and the Search for Extra-Terrestrial Intelligence (SETI) project use design inferences to identify an intelligently caused event, so too can molecular biologists, geneticists and other scientists reliably infer design.

Dembski's position does not rely on belief in the Genesis account of creation. Rather, he demonstrates that intelligent design operates as a scientific theory of information even without any a priori commitment to Christian theism. The criterion of specified complexity can detect design in nature even if the researcher remains agnostic as to the identity of the designing agent.

This wide-ranging book argues that intelligent design has more epistemic support and provides greater explanatory power for the origins and development of life than Darwinist evolutionary theory. Dembski demonstrates the weaknesses of methodological naturalism and offers proposals for reinstating design within science. An appendix details Dembski's responses to common objections to design theory.

The essence of intelligent design is simple and compelling, and Dembski is possibly its best advocate. If it seems unlikely that natural processes could cobble together an information code as extraordinary as the one found in DNA, then we are left with a terrible explanatory gap.

Dembski and his intelligent design colleagues step into that gap with the idea that a supreme intelligence in nature must have designed it, while others in the materialist tradition have persistently tried to find natural processes that might somehow have accomplished the same feat. The success of intelligent design theory, and of the political movement behind it, reveals how difficult it is for any of us to comprehend the immense challenge that God or nature had in 'designing' such a thing as life.

We are left with the tough decision of judging whether Darwinian processes are really sufficient to bring life into being (science being a tentative exploration, after all), relying on our deep assumptions about nature: the theological assumption of a supreme being, or the naturalistic assumption of a purely material universe that seems devoid of meaning. The place where intelligent design seems to me to go astray is not in seeing intelligence in nature (for surely it is there in some sense!) but where it tries to displace naturalism as the foundation of scientific explanations. In shifting explanation to an existing intelligence that performs sudden miraculous acts rather than to comprehensible 'algorithmic' processes over time, Dembski's philosophy seemingly leaves us without the very foundation of the scientific method. But that part is actually a task more directly taken on by Phillip Johnson and his "wedge" strategy than by Dembski. Dembski focuses more on the miracle of information codes arising in nature.

Several reviewers compared intelligent design theorist Dembski with complexity theorist Stuart Kauffman, and the comparison is apt. I'd like to skip the preliminaries and go right to the punch line. Dembski and Kauffman both point out that we don't really know how evolution and the origin of life work in nature, and both come up with different alternatives to orthodox natural selection theory. Although I strongly suspect that natural selection does operate in nature, and that species do change over time, I also agree that both men make a persuasive point: we don't yet understand exactly how it all happens, how "order" (Kauffman) or "complex specified information" (Dembski) actually arises in nature.

The first difference is that Dembski's theory is more intuitive, more refined, and in some ways more persuasive, while Kauffman's is rougher and more speculative, requiring not one but at least two scientific heresies, in evolutionary biology and in thermodynamics. One reviewer correctly pointed out that no extant theory of self-organization completely explains the chemical evolution of the cell's information-storage mechanisms, while Dembski's version of intelligent design does. Kauffman points that out in his "Investigations," and readily admits that we simply don't know the answers yet. The key question is how we respond to this level of scientific uncertainty about the scope and limitations of the natural selection theory of origins. One approach is rough science; the other is a refined philosophical mixture of science and theology.

So the second difference is the difference that makes a difference, as they say. Kauffman's work is science, as in being provisional and generating testable and disconfirmable hypotheses. If there is any doubt of that, the tone and content of his "Investigations" make it quite clear.

Dembski's work is good philosophy and heavy scholarly research. However, when the rubber hits the road, it simply isn't science as we scientists have come to define science. It is founded at its root on articles of faith as an explanation for complex specified information, and it cannot be disconfirmed by evidence. That's fine and dandy for a theologian, and works for some philosophers as well (Ayn Rand comes to mind). But is that really where we want science and religion to head: to end up having to "prove" God, and to found our science on articles of (Christian) faith instead of provisional theory? I'm not so sure that members of other religions would be as happy with this objectification of Christian faith as many Christians seem to be. It could be that Dembski's version of creation is generic enough to satisfy other faiths, or that his is right and theirs is wrong. But the whole thing seems a little fishy to me from both a scientific and a spiritual perspective.

In other words, even if Dembski is right, which is of course possible, intelligent design is not a scientific theory. It thus makes a splendid and intelligent contribution to the rest of the excellent but theologically misleading "God of the Bible is proven by science" literature, indirectly showing why science and spirituality are separate domains of human life and should remain so, to avoid doing inadvertent violence to both.
