Inverted retina -- Another Darwinian icon bites the dust (PNAS May 2007)
One of the classic cases cited by evolutionists of supposedly poor design in nature is the inverted design of the vertebrate retina. Evolutionists have mocked the idea that any designer could have been so unintelligent as to get the wiring backwards – with the photoreceptors behind a jumble of light-scattering cells (see the quote from Dawkins below).
But a recent article in PNAS now indicates that there is a sophisticated network of high-performance optical fibers that funnels light into the photoreceptors without any loss. Concerning his research in this area, Andreas Reichenbach remarks (all emphases added):
- "Nature is so clever," Reichenbach says. "This means there is enough room in the eye for all the neurons and synapses and so on, but still the müller cells can capture and transmit as much light as possible."
- If the technique could be replicated with optical plates, it could mean engineers would be able to fit more into delicate sensors. "They could include lots of other things - computing elements for example," he adds.
- The research, entitled "Müller cells are living optical fibers in the vertebrate retina," is published in the 30 April edition of the Proceedings of the National Academy of Sciences (PNAS).
- [Lucy Sherriff, "Moving light past all those synapses", The Register, Tuesday 1st May 2007]
Let's start from the beginning. As explained by biochemist Michael Denton, in all non-vertebrate eyes, and in the pineal or dorsal eyes of primitive vertebrates, the photoreceptors point toward the light. However, in the vertebrate lateral eye, the photoreceptors point backwards away from the light towards the retinal epithelium and the choroidal blood sinuses. This arrangement necessitates the placement of the neural cell layer--which relays the visual image from the retina to the brain--between the photoreceptors and the light, and results in the blind spot where the axons of these neural cells leave the retina for the brain via the optic nerve.
Generations of Darwinists have seized on this apparently illogical arrangement and particularly the consequent “blind spot” as a case of maladaptation. The following comments by Dawkins are typical:
- Any engineer would naturally assume that the photocells would point towards the light, with their wires leading backwards towards the brain. He would laugh at any suggestion that the photocells might point away from the light, with their wires departing on the side nearest the light. Yet this is exactly what happens in all vertebrate eyes. Each photocell is, in effect, wired in backwards, with its wires sticking out on the side nearest to the light. This means that the light, instead of being granted an unrestricted passage to the photocells, has to pass through a forest of connecting wires, presumably suffering at least some attenuation and distortion (actually probably not much but, still, it is the principle of the thing that would offend any tidy-minded engineer!)
- [Richard Dawkins, The Blind Watchmaker, Penguin Books 1986, pp. 93-94.]
Vision is such an important adaptation in higher vertebrates that if the retina is indeed “wired wrongly” or “badly designed” it would certainly pose, as Dawkins implies, a considerable challenge to any teleological interpretation of nature. Denton has already pointed out why Dawkins is mistaken.
But the article in PNAS now indicates that living optical fibers create a clear passage for light to the light-sensitive cells at the back of the eye. The Register (see above) describes this research as follows:
- It is an old question: how does light make its way through all the retinal layers to finally strike the light sensitive cells at the back of the eye? A group of researchers ... have demonstrated that light is collected and funnelled through long cells called Müller cells. These work almost exactly like a fibre optic plate: a "zero-length window" that optical engineers can use to transmit an image without using a lens.
- "Light has to go through all retinal layers to get to the photosensitive cells. This is not a problem for the octopus or the starfish which both have different eye structures. But it is a problem for all vertebrates," explains Andreas Reichenbach, who worked on the research.
- "The layers in front of the rods and cones act as a diffusing screen. They have a half micron diameter which is roughly the same as the incoming light, so there must be lots of scattering. So we thought, could there be a way round this?
- "We put unstained, living tissue on a microscope and focused through the layers. We found lots of light reflecting in synaptic and nerve layers, but with regular patterns of empty holes with no scattering."
- The team then built up a cross section of the eye and found that the holes were in fact tubes, running all the way through. They were able to confirm that these were the Müller cells by running tests with lasers.
- "Everyone thinks lasers are perfectly parallel, but this is not so," Reichenbach continues. "They do diverge. The Müller cells behave as a lens, and collect all the light without any loss, just like an optical plate."
- But normal optical plates have simple bundles of optical fibres that collect and transmit the light. The researchers have discovered that the vertebrate eye has gone one step further and created a funnel shaped cell that allows more light to be collected at the surface of the eye.
- The discovery doesn't have any direct medical applications, but it could pave the way for dramatic improvements in various pieces of sensing equipment.
"Nature is so clever," Reichenbach says. "This means there is enough room in the eye for all the neurons and synapses and so on, but still the m?ller cells can capture and transmit as much light as possible."
Even the pro-Darwinian AAAS news site Science Now was impressed.
- Optical solution. For an organ that delivers such crystal-clear images, the eye is curiously designed. Its light-sensing rods and cones lie hidden behind a blanket of nerve cells that carry visual information to the brain. So what prevents those neurons from obscuring our vision? The answer may be surprisingly high-tech. The entire retina is held together by a network of elongated Müller cells, and these act like optic fibers, funneling light straight through the neural veil to the rods and cones, according to a study published online the week of 30 April in Proceedings of the National Academy of Sciences. Not a bad trick for a camera designed 500 million years ago.
"Not a bad trick for a camera designed 500 million years ago"? Despite having misred the marvel of the eye, evolutionists do not question their other speculative assumption (that the marvel of the eye could arise suddenly by natural processes 500 million years ago). Strangely, however, the "D" word ('designed') somehow crept in.
Here is the abstract from the PNAS paper (emphasis added):
- Müller cells are living optical fibers in the vertebrate retina, Kristian Franze, Jens Grosche, Serguei N. Skatchkov, Stefan Schinkinger, Christian Foja, Detlev Schild, Ortrud Uckermann, Kort Travis, Andreas Reichenbach, and Jochen Guck, Proceedings of the National Academy of Sciences USA, 10.1073/pnas.0611180104, published online before print May 7, 2007
- Abstract: Although biological cells are mostly transparent, they are phase objects that differ in shape and refractive index. Any image that is projected through layers of randomly oriented cells will normally be distorted by refraction, reflection, and scattering. Counterintuitively, the retina of the vertebrate eye is inverted with respect to its optical function and light must pass through several tissue layers before reaching the light-detecting photoreceptor cells. Here we report on the specific optical properties of glial cells present in the retina, which might contribute to optimize this apparently unfavorable situation. We investigated intact retinal tissue and individual Müller cells, which are radial glial cells spanning the entire retinal thickness. Müller cells have an extended funnel shape, a higher refractive index than their surrounding tissue, and are oriented along the direction of light propagation. Transmission and reflection confocal microscopy of retinal tissue in vitro and in vivo showed that these cells provide a low-scattering passage for light from the retinal surface to the photoreceptor cells. Using a modified dual-beam laser trap we could also demonstrate that individual Müller cells act as optical fibers. Furthermore, their parallel array in the retina is reminiscent of fiberoptic plates used for low-distortion image transfer. Thus, Müller cells seem to mediate the image transfer through the vertebrate retina with minimal distortion and low loss. This finding elucidates a fundamental feature of the inverted retina as an optical system and ascribes a new function to glial cells.
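The light-guiding the abstract describes follows from a simple refractive-index step. As a rough sketch (the index values below are assumed for illustration only; the paper reports that Müller cells have a higher refractive index than the surrounding tissue, not these specific numbers), the standard step-index fiber formulas give the numerical aperture and acceptance angle such a cell-sized "fiber" would have:

```python
import math

# Illustrative (assumed) refractive indices: a Muller cell "core"
# slightly denser than the surrounding retinal tissue "cladding".
n_core = 1.38   # assumed index inside the Muller cell
n_clad = 1.35   # assumed index of surrounding tissue

# Standard step-index fiber formulas: numerical aperture, and the
# acceptance half-angle within which incoming light is guided down
# the fiber by total internal reflection.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
acceptance_half_angle = math.degrees(math.asin(numerical_aperture))

print(f"NA = {numerical_aperture:.3f}")
print(f"acceptance half-angle = {acceptance_half_angle:.1f} degrees")
```

Any positive index step yields a nonzero acceptance cone, which is the sense in which a funnel-shaped cell of higher refractive index can collect light at the retinal surface and guide it like a fiber-optic element.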
Physicist Freeman Dyson (Princeton) -- Scientists are unwise to dismiss intelligent design (March 2007)
Freeman Dyson (now retired) is a professor of physics at the Institute for Advanced Study in Princeton. As a student at Cornell University he worked with Richard Feynman and made major contributions to the unification of the three versions of quantum electrodynamics invented by Feynman and others. Dyson is a fellow of the American Physical Society, a member of the US National Academy of Sciences, and a fellow of the Royal Society of London. Dyson is quoted as saying the following (emphasis added):
- Christopher Morbey: Dear Professor Dyson: Thanks for taking time to answer questions! I’m wondering if you have an opinion regarding the new interest in “intelligent design” as an independent mode of explaining an event. Typically, pervading opinion demands that events occur only by chance and/or necessity. ...
- Freeman Dyson: My opinion is that most people believe in intelligent design as a reasonable explanation of the universe, and this belief is entirely compatible with science. So it is unwise for scientists to make a big fight against the idea of intelligent design. The fight should be only for the freedom of teachers to teach science as they see fit, independent of political or religious control. It should be a fight for intellectual freedom, not a fight for science against religion.
- [Quoted by W. Dembski, from Newsletter CCNet 66/2007 (27 March 2007)]
Dyson once wrote:
- "The more I examine the universe and the details of its architecture, the more evidence I find that the universe in some sense must have known we were coming." [Freeman Dyson, Disturbing the Universe, New York: Harper & Row, 1979, p. 250]
Eukaryotic cell irreducibly complex - Its origin one of the greatest enigmas (Nature & Science April 2007)
In biology, there are two major groupings (superkingdoms) into which all organisms are divided: prokaryotes (e.g. bacteria and blue-green algae) and the more complex eukaryotes (e.g. plants and animals). The cells of eukaryotes possess a clearly defined nucleus, bounded by a membrane, within which DNA is formed into distinct chromosomes. Eukaryotic cells also contain mitochondria, chloroplasts, and other structures (organelles) that, together with a defined nucleus, are mostly lacking in the cells of prokaryotes.
The Darwinian hope was always that mechanisms would be found to show how prokaryotes could evolve into eukaryotes. However, absolutely no links between the two have been found -- not in the fossil record and not by comparing their structures. Lynn Margulis and Karlene Schwartz approvingly quote Stanier et al. (emphasis added):
- "this basic divergence in cellular structure which separates the bacteria and the blue green algae from all other cellular organisms, probably represents the greatest single evolutionary discontinuity to be found in the present-day world".
- [Lynn Margulis and Karlene V. Schwartz, Five Kingdoms: An Illustrated Guide to the Phyla of Life on Earth, W.H. Freeman & Company; 3rd edition (January 1998), p10, quoting: R. Y. Stanier, E. A. Adelberg, and M. Doudoroff, The microbial world, 3d ed., Prentice-Hall, Englewood Cliffs, NJ; 1963.]
James A. Lake (distinguished professor in the Departments of Biology and Human Genetics, University of California) near the conclusion of his article in Nature writes (all emphases below added):
- How the eukaryotic cell came to be is one of the greatest enigmas in biology.
- [Nature 446, 983, (26 April 2007) | doi:10.1038/446983a]
Lake describes the problem as follows:
- Eukaryotic evolution is something of a Gordian knot. Using single genes to unravel it won't work, as the genomes of eukaryotes (animals, plants, fungi and protists) are derived from those of several prokaryotes (eubacteria and archaebacteria). ...
- Until recently, everyone assumed, based on a single ribosomal RNA gene, that eukaryotes descended from archaebacteria — extremophilic prokaryotes distinct from 'true' bacteria, or eubacteria. Now we know that's not the case. More than two-thirds of the nuclear genes of the yeast Saccharomyces cerevisiae, for instance, are derived from eubacteria, and the balance from archaebacteria. What we know of gene losses and gains also indicates that the eukaryotic genome probably resulted from the fusion of archaebacterial and eubacterial genomes, effectively turning the tree of life into a ring of life. But how did evolution come up with the strange distribution of eubacterial and archaebacterial genes we see in eukaryotes today?
Note that Lake assumes that it must have happened by evolution despite the difficulties. Lake continues:
- In prokaryotes there are two major gene classes: operational and informational. Operational genes are involved mainly in day-to-day processes of cell maintenance, and code for amino-acid and nucleotide biosynthesis as well as related functions. Informational genes feature primarily in transcription, protein synthesis, DNA replication and other processes to convert information from DNA into proteins.
- Because eukaryotes are derived from archaebacteria and eubacteria, one might expect to find an archaebacterial and a eubacterial copy of each nuclear gene. But strangely, archaebacterial operational and eubacterial informational genes are almost completely absent from eukaryotes, even though the first eukaryote contained two sets of informational and operational genes.
- This well-documented correlation between phylogenetic origin and gene disappearance is paradoxical because no one understands how these classes of genes left the scene. I call this correlation, which provides an important clue to the early evolution of eukaryotes, the Janus paradox. Like the two faces of the Roman god Janus, thought to represent the Moon and the Sun, the phylogenetic origins of informational and operational genes in eukaryotes are as different as night and day. Finding a gene distribution such as this is the statistical equivalent of finding that a coin tossed at night (Janus's archaebacterial face) always comes up heads (informational genes), and tossed during the day (Janus's eubacterial face) always comes up tails (operational genes).
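Lake's coin-toss analogy can be made quantitative. Under a null model in which each gene independently kept either its archaebacterial or its eubacterial copy with probability 1/2, the chance of the observed perfect split is (1/2)^n. The gene counts below are hypothetical, chosen only to illustrate the scale; Lake's article does not give these numbers:

```python
from math import log10

# Hypothetical gene counts, purely for illustration.
n_informational = 1000  # assumed number of informational genes
n_operational = 2000    # assumed number of operational genes

# If each gene were a fair coin flip between its archaebacterial and
# eubacterial copy, the probability that every informational gene is
# archaebacterial AND every operational gene is eubacterial is
# (1/2)^(n_informational + n_operational).
n = n_informational + n_operational
log10_prob = n * log10(0.5)

print(f"P(perfect split) is about 10^{log10_prob:.0f}")
```

Even for modest gene counts the probability is astronomically small, which is why Lake treats the correlation as a paradox demanding explanation rather than a coincidence.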
After some attempts to answer these questions, Lake admits:
- Unfortunately, I have no good suggestion for why the archaebacterial operational genes were eliminated. I hope that this will motivate some readers to think of hypotheses and experiments. It could be that somehow, the ready availability of operational genes within the eubacterially derived cellular organelles led to the preponderance of eubacterial operational genes.
- Whatever explanations of the Janus paradox are unearthed, it will be exciting to follow the quest. How the eukaryotic cell came to be is one of the greatest enigmas in biology. It is a story so complex that no single gene can tell it. Only entire genomes can.
The day after James Lake's interesting essay appeared in Nature, acknowledging that the origin of eukaryotes is one of the greatest enigmas in biology, he appeared as a co-signatory to correspondence attacking a paper published the previous year in Science. The paper maintained that hypotheses invoking genome fusion to explain the characteristic features of the eukaryote cell have failed. The authors reviewed recent data from proteomics and genome sequences and suggested that "eukaryotes are a unique primordial lineage". The critics, including Lake, declared:
- Finally, and most disturbing, if contemporary eukaryotic cells are truly of “irreducible nature,” as Kurland et al.’s title declares, then no stepwise evolutionary process could have possibly brought about their origin, and processes other than evolution must be invoked. Is there a hidden message in their paper?
The paper in question is: Kurland, C.G., Collins, L.J. and Penny, D., Genomics and the Irreducible Nature of Eukaryote Cells, Science 312, 19 May 2006: 1011-1014.
The authors of the paper responded in a non-inflammatory way, standing their ground:
- Our view is that cellular and molecular biology, especially genomics, reveals signs of an ancient complexity of the eukaryotic cell. ... our primary conclusion is that there is good progress on understanding the complexity of the ancestral eukaryote cell. [The Evolution of Eukaryotes (Letter), William Martin, Tal Dagan, Eugene V. Koonin, Jonathan L. Dipippo, J. Peter Gogarten, and James A. Lake; response by C. G. Kurland, Lesley J. Collins, and David Penny. Science 316, 27 April 2007: 542-543.]
Researchers at Oxford/MIT unable to duplicate the strength & flexibility of silk made by spiders (Science News April 2007)
Creation-Evolution Headlines (04/18/2007) reports: Spiders still maintain the edge in a technology humans want: a material that absorbs huge amounts of energy without breaking. The dragline silk spun by spiders is extremely robust – ounce for ounce stronger than steel, yet more flexible than Kevlar. If a web the size of a football field could be erected in the air with strands one centimeter thick arranged in concentric circles 4 cm apart, it could stop a jumbo jet in flight.
Fascinating facts about spider silk made the cover story of Science News (171:15, p. 231, 04/14/2007). Aimee Cunningham told about teams like that of Nikola Kojic (MIT) that are trying to replicate this ideal material but have not yet succeeded in matching its strength. Human versions require high temperatures, high pressures and toxic substances to make. Your humble garden spider has no such limitations:
- In contrast, natural spider silk is produced at room temperature with water as a solvent, says Chris Holland, a zoologist at the University of Oxford in England. “It’s made in the spider, and with the spider eating flies. That produces a fiber that we can’t even come close to.”
The formula for synthetic dragline silk is a prize humans eagerly seek. Such a tough and flexible material would find many applications, from bulletproof vests to suspension cables for bridges. Maybe even Spiderman toys will come from it. “The spider hasn’t given us all the secrets,” said one researcher.
Somehow, the spider extrudes a silk dope through ducts in its abdomen, and this goop solidifies into a strand that is stretchy and very tough. “A silk thread contains hundreds of thousands of protein chains, each of which folds on its own and also arranges itself among other chains in the fiber.” One researcher found that repeating units are able to snap together like Lego blocks.
Even more amazing, spiders spin seven kinds of silk from the same machinery. Dragline silk, forming the spokes of the web, absorbs the brunt of the energy. Capture-spiral silk is stretchy and sticky. Other forms are used to wrap the prey, coat the egg sacs and perform other functions. One team found that the prey-wrapping silk is up to three times tougher than dragline silk. This adds drama to that scene of Shelob’s lair in The Return of the King.
At this point, the R&D of spider technology is still in the R (research) stage. Spiderman wannabes will probably not find webshooters under the tree this year. But even though the researchers interviewed for the article stand in awe of spider silk, they did not shy away from speculating about how evolution gave the spider a technology our brightest minds cannot emulate. “Spiders and silkworms evolved the capacity to spin silk independently of each other,” said one:
- The dopes contain different proteins, and the resulting fibers have distinct properties. Yet “what we see is that the flow properties are very similar,” [Chris] Holland [Oxford] says. Despite their differences, the spider and silkworm “use similar tricks,” he continues. “This gives fantastic insight into how silk production has evolved and how the production of an energy-efficient, high-performance fiber is made by nature.”
Not only that, it happened a long, long, time ago: “Spiders have been spinning these silks for almost 400 million years.” No questions asked.
The evolution-talk ruined an otherwise great article. Notice that the Darwin storytelling was absolutely useless. Evolution was assumed without evidence and contributed nothing to helping the scientists on their quest to reverse-engineer the technology. Help children observe the wonderful ways spiders weave their webs. Have you ever witnessed the whole web construction process? The material is amazing enough, but watching how the spider creates the pattern is a lot of fun. Their ability to turn fly guts into techno-silk should not be minimized.
Did Indians see a Jurassic Crocodile? (Science, March 30th, 2007)
How did 19th-century Indians come to be familiar with Jurassic monsters that, according to evolutionary thinking, existed some 140 million years ago? The journal Science has a brief report as follows:
- Some fossils are rare, but this one recently unearthed in eastern Oregon may be positively mythic. In life, the 2-meter-long Jurassic seagoing crocodile (above), discovered by members of the North American Research Group, sported scales, needlelike teeth, and a fishtail. Some paleontologists, including Stanford University researcher Adrienne Mayor, think similar fossils may have inspired Native American representations of water monsters. Mayor notes the croc’s “remarkable” resemblance, for example, to a 19th century Kiowa artist’s drawing (inset) of a legendary water serpent.
- [Constance Holden, Random Samples, “Oregon Sea Monster,” Science, Volume 315, Number 5820, p1773, 30 March 2007]
Were the Indian artists good paleontologists or skilled at reconstructions? No evidence is supplied as to whether Native Americans were even familiar with fossils, let alone whether they ever made reconstructions based on them. The straightforward explanation that these artists or their forebears saw this beast is not even considered. The obvious reason is that there is no way in the evolutionary timetable that humans and Jurassic crocodiles could have co-existed. More information is needed before jumping to strange evolutionary conclusions.
Evolutionary Theory Not Even Skin Deep (Science, March 2007)
Nina Jablonski's recent book on the evolution and physiology of human skin, Skin: A Natural History (UC Berkeley, 2006), is reviewed by Qais Al-Awqati (Columbia U) in Science.
As noted by the reviewer: “In its discussion of the human skin, the book’s principal theme is evolution, and almost every page contains that word.” So, how did Jablonski do? Did she satisfy the reviewer’s hopes that Darwin can explain the evolution of skin?
- Although the author wants to provide an evolutionary perspective on all attributes of skin biology, the accounts she provides seldom rise above the provision of plausible hypotheses. Is it really true that we were selected to be hairless sweaty creatures? That sounds possible, but what is the actual evidence for such an assertion? Is it also true that vitamin D synthesis, a major locus of interaction between sunlight and diet, is the dominant factor in the natural selection of skin color? This idea is simply presented without any of the documentation that would make a convincing story. One would like to see the evidence of how rickets (vitamin D deficiency) might act as an agent of evolutionary selection.[1, emphasis added]
As stated by the reviewer, even in the areas of sociology, “The thorny issue of the social construction of the roles of skin color is reduced again to a brief survey of skin color biology and its evolutionary implications.” At the end of the review, Al-Awqati tried to find a few things to praise, but the shallowness of Jablonski’s evolutionary theorizing extended to her own research. “Although only a few of Jablonski’s research papers address skin evolution,” he said, “the lack of deep expertise need not prevent a nonspecialist from pulling together findings from different fields to generate an exciting, even fresh view of nature.” Apparently this book “fell short” of this mark also.
If a fellow evolutionist comes looking for evidence for evolutionary myths and can’t find it, why should anyone else pay attention? It’s not just the sociology of skin that is Darwin’s thorn in the flesh. Heat regulation in furry apes is much different from the sweating response in human skin. Sweat glands are complex structures under the control of the nervous system. The skin is not just a surface; it has multiple layers with veins, arteries, glands, nerves, hair muscles, sebaceous glands, pores, and specialized receptors for touch, heat and pain.
Werner Gitt in The Wonder of Man says that one square centimeter of skin contains 6 million cells, 100 sweat glands, 15 sebaceous glands, 5,000 sensory corpuscles, 200 pain points, 25 pressure points, 12 cold-sensitive points and 2 heat-sensitive points. Skin sloughs off dead cells while regenerating new ones in a precise balance. It is an important barrier to disease germs, and a protection from injury and dehydration. It performs a respiratory function, absorbing some of the oxygen we use, while letting some carbon dioxide in and out.
Human skin is an incomparable substance. Burn victims are not given artificial plastics; they are given skin transplants from live humans. How does evolution explain the fact that a newborn infant arrives into the world with a vernix coating to protect its skin? What evolutionary process led to the precise timing of a multitude of changes that occur in the right sequence when a baby is born? These are all matters of life and death; without them, there would be no human race.
These observational facts demand causes equal to them. Creationists have no problem with the question. Jablonski wrote a whole book on the theme of skin evolution, mentioning the Evolution word on practically every page.
1 (Qais Al-Awqati, “Anthropology: Showing Some Skin,” Science, 2 March 2007: Vol. 315. no. 5816, p. 1223, DOI: 10.1126/science.1138921).
Thanks to Creation-Evolution Headlines (03/02/2007) for the title, heads-up and sources.
Evolution takes another hit!
According to Scientific American (2000), Ernst Mayr was "one of the towering figures in the history of evolutionary biology". He was on the faculty of Harvard University, where he was the Alexander Agassiz Professor of Zoology until his death in 2005. "The author of some of the 20th century’s most influential volumes on evolution, Mayr is [was] the recipient of numerous awards, including the National Medal of Science." Mayr wrote:
- No educated person any longer questions the validity of the so-called theory of evolution, which we now know to be a simple fact. Likewise, most of Darwin’s particular theses have been fully confirmed, such as that of common descent, the gradualism of evolution, and his explanatory theory of natural selection.
- Ernst Mayr, "Darwin’s Influence on Modern Thought", Scientific American, July 2000, p83, emphasis added.
Jeffrey H. Schwartz is professor of anthropology at the University of Pittsburgh and he is a Fellow of the prestigious American Association for the Advancement of Science and the World Academy of Art and Science. A recent release from the University of Pittsburgh states:
- Jeffrey H. Schwartz's most recent article, “Critique of Molecular Systematics,” is the next step towards a counter evolutionary theory that takes a critical look at the theory of cellular and molecular change. ... Schwartz is working to debunk a major tenet of Darwinian evolution. Schwartz believes that evolutionary changes occur suddenly as opposed to the Darwinian model of evolution, which is characterized by gradual and constant change. Among other scientific observations, gaps in the fossil record could bolster Schwartz's theory because, for Schwartz, there is no “missing link.”
- "Pitt Professor Contends Biological Underpinnings Of Darwinian Evolution Not Valid", NewsFromPitt, University of Pittsburgh, Februray 9, 2007, emphasis added.
That's right, folks. After evolutionists, starting with Darwin, have been telling us for years that the missing transitional fossils will be found, Schwartz was forced to concede that they are just not there and to invent new speculative mechanisms to account for the Darwinian origin myth of naturalistic common descent (see his book “Sudden Origins: Fossils, Genes, and the Emergence of Species”, Wiley, 2000). His "big-step" mechanism is even harder to defend than the gradual Darwinian one, because coordinated changes have to occur together, which likely makes the improbability calculations even worse (Schwartz does not provide any detailed testable mechanisms). The University of Pittsburgh release continues:
- In an examination that further challenges the Darwinian model, Schwartz and cowriter Bruno Maresca, a professor of biochemistry at the University of Salerno, Italy, examine the history and development of what the writers dub the “Molecular Assumption” (MA) in the article “Do Molecular Clocks Run at All? A Critique of Molecular Systematics,” to be published in the Feb. 9 issue of Biological Theory.
- The MA became a veritable scientific theory when, in 1962, biochemists Emil Zuckerkandl and Linus Pauling demonstrated species similarity through utilizing immunological activity between the blood's serum and a constructed antiserum. Upon observing the intensity of the serum and antiserum reactivity between human, gorilla, horse, chicken, and fish blood, Zuckerkandl and Pauling deduced “special relatedness” – the more intense the reaction, the more closely related the species were supposed to be.
- Fish blood was most dissimilar, so it was assumed that the fish line diverged long before the other species. Human and gorilla blood were the most similar, meaning both species had the least amount of time to diverge. Ultimately, the Darwinian model of constant evolutionary change was imposed upon the static observation made by Zuckerkandl and Pauling.
- To date, the scientific community has accepted the MA as a scientific truth. It is this assumption, which Schwartz is contemplating: “That always struck me as being a very odd thing – that this model of constant change was never challenged.” Schwartz has his own theories regarding evolution, which are backed by recent developments in molecular biology.
But as Schwartz points out, the cell actually resists any change (species remain the same?):
- This regular cellular maintenance is what Schwartz points to regarding his refutation of constant cellular change. “The biology of the cell seems to run contrary to the model people have in their heads,” says Schwartz, and he contends that if our molecules were constantly changing, it would threaten proper survival, and strange animals would be rapidly emerging all over the world. Consequentially, Schwartz argues that molecular change is brought about only by significant environmental stressors, such as rapid temperature change, severe dietary change, or even physical crowding. ....
- However, it is not only the current molecular theory that intrigues Schwartz, but the failure of the scientific community to question an idea that is more than 40 years old: “The history of organic life is undemonstrable; we cannot prove a whole lot in evolutionary biology, and our findings will always be hypothesis. There is one true evolutionary history of life, and whether we will actually ever know it is not likely. Most importantly, we have to think about questioning underlying assumptions, whether we are dealing with molecules or anything else,” says Schwartz.
Brain Surgeon finds Darwinian Evolution Unconvincing - Time Online Feb. 9, 2007
In a piece at Time Online, "More Spin from the Anti-Evolutionists," senior Time writer Michael Lemonick attacks ID, the Discovery Institute, the signatories of the Dissent From Darwin list, and Dr. Michael Egnor in particular. The Time writer states:
- "Darwinism is a trivial idea that has been elevated to the status of the scientific theory that governs modern biology," says dissent list signer Dr. Michael Egnor. Egnor is a professor of neurosurgery and pediatrics at the State University of New York, Stony Brook, and an award-winning brain surgeon named one of New York's best doctors by New York Magazine.
The comments section is very illuminating as Dr. Egnor replies to and challenges Lemonick. Egnor comments:
- Can random heritable variation and natural selection generate a code, a language, with letters (nucleotide bases), words (codons), punctuation (stop codons), and syntax? There is even new evidence that DNA can encode parallel information, readable in different reading frames.
- I ask this question as a scientific question, not a theological or philosophical question. The only codes or languages we observe in the natural world, aside from biology, are codes generated by minds. In 150 years, Darwinists have failed to provide even rudimentary evidence that significant new information, such as a code or language, can emerge without intelligent agency.
- I am asking a simple question: show me the evidence (journal, date, page) that new information, measured in bits or any appropriate units, can emerge from random variation and natural selection, without intelligent agency.
Egnor repeats this request for evidence several times in his comments. Incredibly, Lemonick not only never provides an answer, he retorts: “[One possibility is that] your question isn’t a legitimate one in the first place, and thus doesn’t even interest actual scientists.”
Lemonick goes on to comment: “Invoking a mysterious ‘intelligent designer’ is tantamount to saying ‘it’s magic.’”
- Your assertion that ID is “magic,” however, is ironic. You are asserting that life, in its astonishing complexity, arose spontaneously from the mud, by chance. Even the UFO nuts would balk at that.
- It gets worse. Your assertion that the question, “How much biological information can natural selection actually generate?” might not be of interest to Darwinists staggers me. The question is the heart of Darwinism’s central claim: the claim that, to paraphrase Richard Dawkins, “biology is the study of complex things that appear to be designed, but aren’t.” It’s the hinge on which the argument about Darwinism turns. And you tell me that the reason that Darwinists have no answer is that they don’t care about the question (!).
More comments from Egnor:
- There are two reasons that people you trust might not find arguments like mine very persuasive:
- They’re right about the science, and they understand that I’m wrong.
- They’re wrong about the science, and they’re evading questions that would reveal that they’re wrong.
- My “argument” is just a question: How much new information can Darwinian mechanisms generate? It’s a quantitative question, and it needs more than an ad hominem answer. If I ask a physicist, “How much energy can fission of uranium generate?” he can tell me the answer, without much difficulty, in ergs per mass of uranium per unit time. He can provide references in scientific journals (journal, issue, page) detailing the experiments that generated the number. Valid scientific theories are transparent, in this sense.
- So if “people you trust” are right about the science, they should have no difficulty answering my question, with checkable references and reproducible experiments, which would get to the heart of Darwinists’ claims: that the appearance of design in living things is illusory.
- One of the things that has flipped me to the ID side, besides the science, is the incivility of the Darwinists. Their collective behavior is a scandal to science. Look at what happened to Richard Sternberg at the Smithsonian, or at the sneering denunciations of ID folks who ask fairly obvious questions that Darwinists can’t answer.
- The most distressing thing about Darwinists’ behavior has been their almost unanimous support for censorship of criticism of Darwinism in public schools. It’s sobering to reflect on this: this very discussion we’re having now, were it to be presented to school children in a Dover, Pennsylvania public school, would violate a federal court order and thus be a federal crime.
After much back and forth, Egnor writes (emphasis added):
- I'm on my fourth post here, and my fourth request for a number and references on the amount of information that Darwinian mechanisms can generate. The response has been handwaving, algorithms, credentials thumping, political sneers, and insults. No experimental biological data.
- The disparity between the size of the ID community and the size of the Darwinist community makes Darwinism look even worse. How is it that a handful of scientists, many of whom are not even biologists, can ask questions that the hordes of highly credentialed evolutionary biologists can't even begin to answer?
- The obvious explanation is that Darwinism is not an adequate explanation for the information content in living things.
- Darwinism isn't doing well in the bright light. It's sort of like cold fusion, but with a longer shelf life, and a constituency.
In honor of Darwin Day, the Discovery Institute issued its annual update to the Scientific Dissent from Darwin list (see above). As Discovery bloggers noted, it is apparently dishonest to point out that 700 scientists are skeptical of Darwinian evolution. Discovery has never claimed that a majority of scientists are Darwin doubters; the whole point of the list was to refute the claim in PBS's 2001 Evolution series that no scientists doubted Darwin. (Then it was 'no credible scientists'; which became 'well, not very many scientists'; and so on.) Still, Time magazine journalist Michael Lemonick got himself in such a huff over the list that he attacked Dr. Michael Egnor (as above) for not knowing enough about biology, for not having a degree in the field, for being only a brain surgeon. So what are Lemonick's credentials, other than writing for a weekly news tabloid? In his own words: "I've been covering science in major publications for more than two decades. Consider the fact that I may have actually learned a thing or two along the way."
Darwinian "Just So" stories
Darwinians live in a wonderful world in which speculative macro-evolutionary mechanisms such as natural selection can cause organs of substantial complexity to appear, as if by magic.
On Monday 9 August 2004, BBC Radio 4 broadcast the first of a series on “Real Just So Stories”. Stimulated by Rudyard Kipling’s stories for children, the programme presenter, Alistair McGowan, asks: “What really happened?” This episode was entitled “How the Elephant Got Its Trunk”.
Adrian Lister, Professor of Palaeobiology at University College London, explained that the trunk leaves no fossils. However, the skull can be studied for evidence of muscle attachment points. He said that all the potential ancestors for the elephant were small, possibly amphibious, and rather like a hippo in not having a trunk. However, as the animals grew in size, they would have found it difficult to get supplies of water. They could not stoop to drink because of their short necks and stocky legs. A trunk would allow them to get water without stooping. The ancestral elephants were “blessed by evolution with this wonderful structure”.
Storyteller : Adrian Lister, Professor of Palaeobiology at University College London
- "In North America the black bear was seen . . . swimming for hours with widely open mouth, thus catching, like a whale, insects in the water. Even in so extreme a case as this, if the supply of insects were constant, and if better adapted competitors did not already exist in the country, I can see no difficulty in a race of bears being rendered . . . more and more aquatic in their structure and habits, with larger and larger mouths, till a creature was produced as monstrous as a whale."
- Storyteller: Charles Darwin, Origin of Species, First edition (Not in the 6th edition)
Origin of Life not so simple (Scientific American, February 12, 2007)
Casey Luskin (Evolution News, Feb 15, 2007) reports: In an article titled “A Simpler Origin for Life”, a title which hides the implications of the article, Robert Shapiro, writing in Scientific American, highlights many problems with chemical origin-of-life scenarios. Shapiro quotes Richard Dawkins's worshipful account of the first self-replicating molecule: "[a]t some point a particularly remarkable molecule was formed by accident. We will call it the Replicator." But, as Shapiro explains, the real story is not nearly so simple:
- Unfortunately, complications soon set in. DNA replication cannot proceed without the assistance of a number of proteins--members of a family of large molecules that are chemically very different from DNA. Proteins, like DNA, are constructed by linking subunits, amino acids in this case, together to form a long chain. Cells employ twenty of these building blocks in the proteins that they make, affording a variety of products capable of performing many different tasks--proteins are the handymen of the living cell. Their most famous subclass, the enzymes, act as expeditors, speeding up chemical processes that would otherwise take place too slowly to be of use to life. The above account brings to mind the old riddle: Which came first, the chicken or the egg? DNA holds the recipe for protein construction. Yet that information cannot be retrieved or copied without the assistance of proteins. Which large molecule, then, appeared first in getting life started--proteins (the chicken) or DNA (the egg)?
- (Robert Shapiro, "A Simpler Origin for Life," Scientific American, February 12, 2007)
Shapiro also takes aim at the hypothesis that Miller-Urey-type chemistry may have produced life's building blocks, whether on the early earth or in meteorites:
- By extrapolation of these results, some writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life. (When larger carbon-containing molecules are produced, they tend to be insoluble, hydrogen-poor substances that organic chemists call tars.) I have observed a similar pattern in the results of many spark discharge experiments.
- Robert Shapiro, "A Simpler Origin for Life," Scientific American, February 12, 2007
Shapiro then recounts that in the 1980s some suggested that "Life began with the appearance of the first RNA molecule." But Shapiro explains that RNA cannot be the first life, because "the clues I have cited only support the weaker conclusion that RNA preceded DNA and proteins; they provide no information about the origin of life, which may have involved stages prior to the RNA world in which other living entities ruled supreme." He goes on to critique the RNA world hypothesis, lamenting that "despite the difficulties that I will discuss in the next section, perhaps two-thirds of scientists publishing in the origin-of-life field (as judged by a count of papers published in 2006 in the journal Origins of Life and Evolution of the Biosphere) still support the idea that life began with the spontaneous formation of RNA or a related self-copying molecule." In his 1986 book, Shapiro explains the chicken-and-egg problem (which came first?) for materialistic origin-of-life scenarios:
- Genes and enzymes are linked together in a living cell in two interlocked systems, each supporting the other. It is difficult to see how either could manage alone. Yet if we are to avoid invoking either a Creator or a very large improbability, we must accept that one occurred before the other in the origin of life. But which one was it? We are left with the ancient riddle: Which came first, the chicken or the egg? In its biochemical form, protein versus nucleic acid, the question is a new one, dating back no further than Watson and Crick and our knowledge of the structure and function of the gene. In its essence, however, the question is much older, and has provoked passion and acrimony that extend beyond the boundaries of science. In an earlier, broader form, the question asked whether the gene or protoplasm had primacy, not only in the origin but also in the development of life.
- Shapiro, R., "Origins: A Skeptic's Guide to the Creation of Life on Earth," Summit Books: New York NY, 1986, p.135, Emphasis added.
- We are now ready to handle the chances for the spontaneous generation of a bacterium. ... Many scientists have attempted such calculations; we need cite only two of them to make the point. The first was provided by Sir Fred Hoyle, whose ideas we shall discuss in detail later in the book. He and his colleague, N. C. Wickramasinghe, first endorsed spontaneous generation, then abruptly reversed their position. Why did they do this? Quite obviously, they calculated the odds. Rather than estimate the chances for an entire bacterium, they considered only the set of functioning enzymes present in one. Their starting point was not a complex mixture, but rather the set of twenty L-form amino acids that are used to construct biological enzymes. If amino acids were selected at random from this set one at a time and arranged in order, what would be the chances that this process would produce an actual bacterial product? For a typical enzyme of 200 amino acids, the odds would be obtained by multiplying the probability for each amino acid, 1 in 20, together 200 times. The result, 1 in 10^120 [sic], places us on floor 120 of the Tower of Numbers, immensely higher than the level where we find the number of trials. Things need not be that bad, however. What matters is the function of the enzyme, rather than the exact order of amino acids within it. A large number of amino acid sequences might provide enzymes with the proper function. With this in mind, Hoyle and Wickramasinghe estimated that the chances of obtaining an enzyme of the appropriate type at random were `only' 1 in 10^20 . To duplicate a bacterium, however, one would have to assemble 2,000 different functioning enzymes. The odds against this event would be 1 in 10^20 multiplied together 2,000 times, or 1 in 10^40,000 . This particular item would then be available on floor 40,000 of the Tower of Numbers. 
If we consider that the number of trials brought us only to the fifty-first floor, we can understand why Hoyle changed his mind. His estimate of the likelihood of the event was that it was comparable to the chance that `a tornado sweeping through a junk-yard might assemble a Boeing 747 from the materials therein.' ["Hoyle on evolution," Nature, Vol. 294, 12 November 1981, p.105] In fact, things are much worse. A tidy set of twenty amino acids, all in the L-form, was not likely to be available on the early earth. This situation has not even been approached by the very best Miller-Urey experiments. Nor does a set of enzymes constitute a living bacterium." (Shapiro, R., "Origins: A Skeptic's Guide to the Creation of Life on Earth," Summit Books: New York NY, 1986, pp.125,127-128)
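The Hoyle-Wickramasinghe arithmetic quoted above can be checked in a few lines. A minimal Python sketch, using only the figures given in the quote; note that the exact-sequence calculation (1/20 multiplied together 200 times) works out to 1 in 10^260, which may be why the quoted "10^120" carries a "[sic]":

```python
import math

# Odds of one exact 200-residue sequence, each position drawn from 20 amino acids
log10_exact = 200 * math.log10(20)
print(f"1 in 10^{log10_exact:.0f}")     # 1 in 10^260

# Hoyle and Wickramasinghe's relaxed estimate: 1 in 10^20 per functional enzyme.
# For 2,000 different functioning enzymes, the exponents simply add:
log10_bacterium = 20 * 2000
print(f"1 in 10^{log10_bacterium}")     # 1 in 10^40000
```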
Shapiro pointed out that "the chances of obtaining a self-replicating machine depended on the number of parts to it" and even for "a single strand of RNA of ... 20 nucleotides" (of which there is no evidence that any length of RNA alone could replicate itself, let alone one of only 20 nucleotides), such a "replicator would have about 600 atoms" and on the analogy of "Charlie the Chimp" randomly typing even a message of "18 characters" (let alone "a 600-letter message") on a "keyboard with ... 45 keys," "Charlie will still be typing away long after the stars have ceased to shine":
- Now how difficult would it be to put together the replicator at random? The minimal published estimates of its size propose a single strand of RNA of perhaps 20 nucleotides. To build this structure, about 600 atoms would have to be connected in a specific way, much less than the many millions needed for a bacterium. ... But what are the odds? J.B.S. Haldane recognized that the chances of obtaining a self-replicating machine depended on the number of parts to it. If the number was small, there was no problem: "By mere shuffling you will get the letters ACEHIMN to spell 'machine' once in 5040 trials on an average." [Haldane, J.B.S., "The Origins of Life," in Johnson, M.L., Abercrombie, M. & Fogg, G. E., eds, "The Origin of Life," New Biology, No. 16, Penguin Books: London, April 1954, p.14] If you could shuffle at the rate of once per second, it would require only 84 minutes to run that many tries. This analogy suggests that it should not be hard to put together a smallish replicator, so we must look more closely at it. We will stay with the metaphor of language, but set aside the letters on cards in favor of another much-used situation: the monkey at the typewriter. Let's call him Charlie the Chimp. Charlie is special. He never gets tired, and types out one line per second, completely at random. ... Now let us give Charlie a normal keyboard with, say, 45 keys. The odds suddenly escalate to 1 in 45^7, or 1 in 370 billion tries. It would take Charlie (or his descendants) 11,845 years to run that many attempts. The word `machine' does not arise as readily as Haldane's first analogy would suggest. Things get rapidly worse when we use longer messages. We will let Charlie try for a bit of Hamlet. The phrase `to be or not to be' has 18 characters, if we count the spaces as characters. The chances that our chimp will type this out are 1 in 45^18, or 1 in 6 x 10^29. At one try per second, it will take poor Charlie more than 10^22 years to do that number of tries.
Should the open model for the universe be correct, Charlie will still be typing away long after the stars have ceased to shine and all the planets have been dispersed into space through stellar near-collisions. But now we have developed a real thirst for Shakespeare. We want our monkey to type out `to be or not to be: that is the question,' which has 40 characters. The chances then become 45^40, or about 10^66, to 1. This is a number 10 million times greater than the number of trials maximally available for the random generation of a replicator on the early earth. There we have it. If the chances of getting the replicator at random from a prebiotic soup are less than that of striking `to be or not to be: that is the question' by chance on a typewriter, we had best forget it. The replicator would have about 600 atoms. The chances of Charlie typing a 600-letter message (twice the size of this paragraph) correctly are 1 in 10^992. ... There is a further irony. Even should the miracle occur and the replicator find itself awash in the seas of the prebiotic earth, its fate would be unkind. It would perish without further issue. For in this random sea, it would encounter only hosts of unrelated chemicals, and not the subunits it needs to reproduce itself. A second miracle would be needed to surround it with exactly the ingredients it needs for further progress." (Shapiro, 1986, pp.168-170).
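The Charlie-the-Chimp odds in the passage above are simple powers of the keyboard size and can be reproduced directly. A minimal Python sketch using the quote's own figures (45 keys, one random try per second):

```python
KEYS = 45
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def odds(n_chars: int) -> int:
    """1-in-N odds of typing a given message at random on a 45-key keyboard."""
    return KEYS ** n_chars

# "machine": 7 characters
print(f"{odds(7):,}")                               # 373,669,453,125 (~370 billion tries)
print(f"{odds(7) / SECONDS_PER_YEAR:,.0f} years")   # ~11,849 years at one try per second

# "to be or not to be": 18 characters, counting spaces
print(f"{odds(18):.1e}")                            # ~5.7e+29, i.e. roughly 6 x 10^29 tries
print(f"{odds(18) / SECONDS_PER_YEAR:.1e} years")   # ~1.8e+22, i.e. more than 10^22 years

# "to be or not to be: that is the question": 40 characters
print(f"{odds(40):.1e}")                            # ~1.3e+66, i.e. about 10^66 to 1
```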
Haldane, one of the giants of Neo-Darwinism, wrote:
- If the minimal organism involves not only the code for its one or more proteins, but also twenty types of soluble RNA, one for each amino acid, and the equivalent of ribosomal RNA, our descendants may be able to make one, but we must give up the idea that such an organism could have been produced in the past, except by a similar pre-existing organism or by an agent, natural or supernatural, at least as intelligent as ourselves, and with a good deal more knowledge.
- Haldane, J.B.S., "Data Needed for a Blueprint of the First Organism," in Fox, S.W., ed., "The Origins of Prebiological Systems and of Their Molecular Matrices," Proceedings of a Conference Conducted at Wakulla Springs, Florida, October 27-30, 1963, Academic Press: New York NY, 1965, p.12).
In the next quote Haldane confirms that intelligent design is a legitimate scientific explanation:
- The first enzyme very possibly contained the sequence Asp-Ser-Gly, which is part of the active centers of phosphoglucomutase, trypsin, and chymotrypsin. Ribonuclease contains 124 amino acid residues. If all were equally common, this would mean 540 bits. The number is actually a little less than that. This number could be somewhat reduced if some amino acids were rare both in the medium and in the enzyme. I suggest that the primitive enzyme was a much shorter peptide of low activity and specificity, incorporating only 100 bits or so. But even this would mean one out of 1.3 x 10^30 possibilities. This is an unacceptably large number. If a new organism were tried out every minute for 10^8 years, we should need 10^17 simultaneous trials to get the right result by chance. The earth's surface is 5 x 10^18 cm^2. There just isn't, in my opinion, room. Sixty bits, or about 15 amino acids, would be more acceptable probabilistically, but less so biochemically. I suggest that the first synthetic organisms may have been something like a tobacco mosaic virus, but including the enzyme or enzymes needed for its own replication. More verifiably, I suggest that the first synthetic organisms may be so constituted. For natural, but not for laboratory life, a semipermeable membrane is needed. This could be constituted from an inactivated enzyme and lipids. I think, however, that the first synthetic organism may be much larger than the first which occurred. It may contain several different enzymes, with a specification of 5000 bits or so, about the information on a page of Chambers's 7-figure logarithm tables. This should be quite within human possibilities. The question will then arise: How much smaller may the first natural organism have been? If this minimum involves 500 bits, one could conclude either that terrestrial life had had an extraterrestrial origin (with Nagy and Braun) or a supernatural one (with many religions, but by no means all)." (Haldane, Ibid., p.14, emphasis added).
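Haldane's information figures in this quote can be verified with a few lines of arithmetic. A Python sketch, using only the numbers stated in the quote:

```python
import math

# Ribonuclease: 124 residues drawn from 20 equally common amino acids
bits_ribonuclease = 124 * math.log2(20)
print(f"{bits_ribonuclease:.0f} bits")       # 536 bits ("a little less than" 540)

# Haldane's 100-bit primitive enzyme: 2^100 possible sequences
possibilities = 2 ** 100
print(f"{possibilities:.2e}")                # 1.27e+30, Haldane's 1.3 x 10^30

# One new organism tried every minute for 10^8 years:
trials = 60 * 24 * 365 * 10**8               # ~5.3e13 minutes
print(f"{possibilities / trials:.1e} simultaneous trials needed")   # ~2.4e+16; Haldane rounds to 10^17
```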
Humans are now only 94% similar to chimps, not 98.5% (Scientific American News, December 19, 2006)
- A lot more genes may separate humans from their chimp relatives than earlier studies let on. Researchers studying changes in the number of copies of genes in the two species found that their mix of genes is only 94 percent identical. The 6 percent difference is considerably larger than the commonly cited figure of 1.5 percent. The new finding supports the idea that evolution may have given humans new genes with new functions that don't exist in chimps, something researchers had not recognized until recently. The older value of 1.5 percent is a measure of the difference between equivalent genes in humans and chimps, like a difference in the spelling of the same word in two similar languages. Based on that figure, experts proposed that humans and chimps have essentially the same genes, but differed in when and where the genes turn on and off.
- Scientific American Science News, JR Minkel, "Human-Chimp Gene Gap Widens from Tally of Duplicate Genes: There's a bigger genetic jump between humans and chimps than previously believed", December 19, 2006 (emphasis added).
The Scientific American report is based on an article in PLoS One:
- Gene families are groups of homologous genes that are likely to have highly similar functions. ... Along the lineage leading to modern humans we infer the gain of 689 genes and the loss of 86 genes since the split from chimpanzees, including changes likely driven by adaptive natural selection. Our results imply that humans and chimpanzees differ by at least 6% (1,418 of 22,000 genes) in their complement of genes, which stands in stark contrast to the oft-cited 1.5% difference between orthologous nucleotide sequences. This genomic ‘‘revolving door’’ of gene gain and loss represents a large number of genetic differences separating humans from our closest relatives.
- "The Evolution of Mammalian Gene Families," Jeffery P. Demuth et al., PLoS ONE 1(1): e85, December 2006, doi:10.1371/journal.pone.0000085, emphasis added.
Earlier articles (e.g. Nature 437:69-87, 2005) provided a draft of the chimp genome, quite a stunning achievement in itself. But researchers also called it "the most dramatic confirmation yet" of Darwin's theory of common descent, i.e. of the claim that man supposedly shares a common ancestor with the apes. The "proof" of the so-called chimp-human link was that the genetic difference was "only" 4%! This is a strange kind of proof, because that is more than double the 1.5% difference claimed earlier. The PLoS article now puts the difference at 6%, so it seems that in evolution-think the greater the differences, the more proof we have of the putative link!
From the Nature article, we see that about 35 million mutations (single base-pair substitutions) would be needed to bridge the gap between the chimp and the human genome. Another 80 million base pairs differ because of insertions or deletions; the minimum number of separate insertion events needed to account for these is about 5 million (a single insertion can account for a whole sequence of nucleotides). Thus there are at least 40 million separate mutation events separating the chimp genome from the human genome, involving the need to invent hundreds of new genes.
To put this in perspective, it would take 10,000 pages of meaningful text to account for 40 million letters (assuming 4,000 letters per page). You may want to try producing that much meaningful text (text that accounts, in particular, for the human capabilities of bipedalism, language and cognition) via chance natural processes, to see what kind of mechanisms are implicated. Such claims are similar to arguing that the markings on the Rosetta stone were produced purely by wind and erosion.
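The percentages and the pages-of-text comparison above are easy to reproduce. A minimal Python sketch (the 22,000-gene total and the 4,000-letters-per-page figure are the round numbers used in the text):

```python
# PLoS ONE figure: 1,418 of ~22,000 genes differ in complement
print(f"{1418 / 22000:.1%}")                          # 6.4%

# Nature 2005 tally as used above
substitutions = 35_000_000       # single base-pair substitutions
insertion_events = 5_000_000     # minimum number of separate insertion events
total_events = substitutions + insertion_events
print(f"{total_events:,} separate mutation events")   # 40,000,000

# At 4,000 letters per page, 40 million letters fill:
print(total_events // 4000, "pages")                  # 10000 pages
```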
Haldane's dilemma (the trade secret of evolutionary genetics)
The problem: Let's ignore for the moment the need to add new specified information to the chimp genome for uniquely human characteristics such as bipedalism and the larynx and brain power needed for language. Let's ignore the fact that each mutation must preserve the viability of the organism while blindly evolving in a direction that supports decidedly human characteristics. Let's focus for a moment just on the problem of getting the required 35 million mutations (or more).
In 1957, the famous evolutionary geneticist, J.B.S. Haldane, showed that for higher vertebrates (species with low reproduction rates), the long-term rate of beneficial substitution cannot plausibly be faster than one substitution per 300 generations.
Haldane's Dilemma establishes a limit of 1,667 beneficial substitutions over the past ten million years of the lineage leading to humans. The origin of all the uniquely human adaptations would have to be explained within that limit. Thus 6 million years or even 10 million years is not enough time to account for the hundreds of genes that must be innovated. There is barely time to innovate a single gene of 1667 nucleotides. That is a serious problem!
It seems that creationist Walter ReMine is correct to insist that it is time to revisit Haldane's maximum limit of 1,667 substitutions in the given period. Darwinian evolution simply does not have the population resources to fix that many base pairs of difference (not enough individuals, not enough mutations, not enough time).
What is Haldane's "cost of substitution"? When a beneficial mutation occurs in an individual in the population, it has to spread to other individuals for evolution to proceed, but the rate at which this can happen is limited. A major factor limiting the rate of substitution is the reproduction rate of the species. For a human-like creature with a generation time of about 20 years and a low reproduction rate per individual, a mutation can spread through the population only slowly. This required excess reproduction is what Haldane called the "cost of substitution".
Imagine a population of 100,000 ape-like creatures, the putative progenitors of humans and modern apes. Suppose, optimistically, that a male and a female each received a mutation so beneficial that they out-survived everyone else, i.e. only these two survive and the rest (99,998 of them) die out.
Suppose, optimistically, that the surviving pair had enough offspring to replenish the population in one generation. Suppose, optimistically, that this repeats every generation (every 20 years) for 10 million years, more than the supposed time since the last common ancestor of humans and apes. Thus, at most 500,000 beneficial mutations could be added to the population (i.e. 10,000,000/20). This is much too small to accommodate the required 40 million mutations. Even with this completely unrealistic scenario, which maximizes evolutionary progress, only about 0.02% of the human genome could be generated. Considering that the difference between the DNA of a human and a chimp is 6%, evolution has an obvious problem in explaining the origin of the specified information in the human genome.
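The over-optimistic scenario above works out as follows; a minimal Python sketch (the 3-billion-base genome size is an assumed round figure, not stated in the text):

```python
years = 10_000_000
generation_time = 20                  # years per generation
fixed = years // generation_time      # one beneficial mutation fixed per generation
print(f"{fixed:,} mutations fixed")   # 500,000

# Against the ~40 million mutational differences tallied earlier:
print(f"{fixed / 40_000_000:.2%} of the required differences")   # 1.25%

# As a fraction of a ~3-billion-base genome (assumed round figure):
print(f"{fixed / 3_000_000_000:.3%} of the genome")              # 0.017%
```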
With more realistic rates of fitness/selection and population replenishment, Haldane calculated that no more than 1,667 beneficial substitutions could have occurred in the supposed 10 million years since the last common ancestor of apes and humans. This is a mere one substitution per 300 generations, on average, and the origin of all that makes us uniquely human has to be explained within this limit.
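Haldane's limit as stated above follows directly from one substitution per 300 generations; a two-line Python check:

```python
generations = 10_000_000 // 20     # 20-year generations over 10 million years
print(round(generations / 300))    # 1667 beneficial substitutions
```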
Ken Miller, Cold-fusion of Chromosome #2 and Berra's Blunder
A substitution is a single mutational event, such as a gene duplication, a chromosomal inversion, or a single-nucleotide substitution. Biologists have found that the vast majority of substitutions are indeed single nucleotides, so Haldane's limit puts a severe constraint on what is possible with evolution, because 1,667 single-nucleotide substitutions amount to less than one average-sized gene. These figures are likely to become worse for evolutionary tales when we have to include problematic and possibly unlikely events such as the claim that two chimp chromosomes fused into human chromosome #2.
A major distinction between chimpanzees and humans is the fact that chimpanzees have 48 chromosomes (24 pairs) while humans have 46 (23 pairs). At the Dover trial, biologist Ken Miller testified that human chromosome #2 has two centromeres, the central attachment points used for pulling a chromosome to one end of a cell during mitosis. Chromosomes normally have only one centromere, but human chromosome #2 looks like two chromosomes fused together, because it has two centromeres (or at least, one normal centromere and another region that looks a lot like a centromere). Furthermore, Miller noted that chromosome #2 contains two telomeres, structures normally found at the tips of chromosomes, in its middle. Essentially, these two telomeres are oriented in such a way that, genetically speaking, it looks like the ends of two chromosomes were fused together.
But, as Casey Luskin puts it: Evidence for fusion in a human chromosome tells you nothing about the alleged common ancestry with chimps. All Miller has done is provide some evidence of a chromosomal fusion event in humans. But evidence for a chromosomal fusion event is not evidence for when that event took place, nor is it evidence for the ancestry prior to that event. The fusion evidence implies that some of our ancestors may have had 48 chromosomes. But Miller has not provided any evidence that the individual with 48 chromosomes was historically related to modern apes. Chromosome #2 has banding patterns similar to two ape chromosomes, but given that our chromosome structure is generally similar to that of apes anyway, it is not a stretch to assume that any 48-chromosome ancestor of you and me had a chromosome structure similar to apes, regardless of whether or not that individual was related to apes (see Berra's Blunder below). Claiming that banding-pattern similarity is evidence of common ancestry with apes simply invokes the mistaken “similarity = ancestry” argument, and thus begs the question. It is entirely possible that humans underwent a chromosomal fusion event within their own separate history.
The story gets even worse. Chromosome fusions can occur, but they are thought to reduce reproductive success because of the resulting monosomy and trisomy in the zygotes produced by the mating of a normal genotype with an individual carrying the fused chromosomes. Such chromosomal defects are associated with mental retardation, as in Down's syndrome (e.g. an unbalanced translocation such as the fusion of chromosome 21 with chromosome 21). So, what would be needed is the chance event of the same chromosome fusion occurring in two individuals at the same time in the same place, such that they just happened to mate with one another to produce viable male and female offspring. The benefit of such a fusion is also not known. Nor do we know why there are 9 pericentric inversions in chromosome 2 (a pericentric inversion is a stretch of nucleotides in a chromosome that appears to have been spliced out and reinserted in the reverse order), or why there are modifications in the Y chromosome.
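The segregation problem described above can be illustrated with a toy model (purely schematic; the labels "A", "B", and "AB" are illustrative stand-ins, not real chromosome names). A fusion carrier's meiotic cell contains the fused chromosome plus the two unfused homologs, and only a minority of the possible gamete combinations are genetically balanced:

```python
from itertools import chain, combinations

# Toy model (illustrative only): a fusion carrier's meiotic cell holds three
# elements -- the fused chromosome "AB" and the unfused homologs "A" and "B".
# A gamete receives some non-empty subset of these; it is balanced only if it
# carries exactly one dose of A-content and one dose of B-content.
elements = {"AB": {"A": 1, "B": 1}, "A": {"A": 1}, "B": {"B": 1}}

def dose(subset):
    total = {"A": 0, "B": 0}
    for e in subset:
        for chrom, n in elements[e].items():
            total[chrom] += n
    return total

all_subsets = chain.from_iterable(combinations(elements, r) for r in range(1, 4))
balanced = [s for s in all_subsets if dose(s) == {"A": 1, "B": 1}]
print(balanced)  # [('AB',), ('A', 'B')]
```

Only two of the seven possible combinations are balanced; all the others yield monosomic or trisomic zygotes when combined with a normal gamete, which is why fusion carriers are expected to suffer reduced fertility.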
Casey Luskin put it like this: Under Neo-Darwinism, genetic mutation events (including chromosomal aberrations) are generally assumed to be random and unguided. Miller's Cold-Fusion tale becomes more suspicious when one starts to ask harder questions like "how could a natural, unguided chromosomal fusion event get fixed into a population, much less how could it result in viable offspring?" Miller's account must overcome two potential obstacles:
- In most of our experience, individuals with a randomly fused chromosome can be normal, but it is very likely that their offspring will ultimately have a genetic disease. A classic example is one cause of Down's syndrome.
- One way around the problem in (1) is to find a mate that also had an identical chromosomal fusion event. But Valentine and Erwin imply that such events would be highly unlikely:
- "[T]he chance of two identical rare mutant individuals arising in sufficient propinquity to produce offspring seems too small to consider as a significant evolutionary event." (Erwin, D.H., and Valentine, J.W., "'Hopeful monsters,' transposons, and the Metazoan radiation", Proc. Natl. Acad. Sci. USA, 81:5482-5483, Sept 1984)
In other words, Miller has to explain why a random chromosomal fusion event which, in our experience, ultimately results in offspring with genetic diseases, didn't result in a genetic disease and was instead advantageous enough to become fixed in the entire population of our ancestors. Given the lack of empirical evidence that random chromosomal fusion events are ever harmless, perhaps the presence of a chromosomal fusion event is not good evidence for a Neo-Darwinian history for humans.
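The Valentine and Erwin improbability argument quoted above can be put in rough numbers. Here is a minimal back-of-envelope sketch; the per-birth rate is an invented placeholder, not a measured figure:

```python
# Hypothetical figures for illustration only: suppose a specific fusion
# arises once per N births. The chance that two randomly paired individuals
# both independently carry that same new fusion is roughly (1/N) squared.
N = 1_000_000                 # placeholder: one such fusion per million births
p_single = 1 / N              # probability a given individual carries the fusion
p_pair = p_single ** 2        # probability both members of a random pair carry it
print(f"{p_pair:.1e}")        # prints 1.0e-12
```

Non-random mating within a family line would raise this probability considerably, which is why inbreeding scenarios are sometimes invoked; the point of the sketch is only the scale of the improbability under random pairing.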
Miller may possibly have found empirical evidence for a chromosomal fusion event. But, our experience with mammalian genetics tells us that such a chromosomal aberration should have resulted in a non-viable mutant, or non-viable offspring. Thus, Neo-Darwinism has a hard time explaining why such a random fusion event was somehow advantageous.
If it were to somehow turn out that the fusion of two chromosomes can only result in a viable individual if the fusion event takes place in a highly unlikely and highly specified manner, then we may actually be looking at a case for a non-Darwinian intelligent design event in the history of the human genus.
The following quotes will help us understand how Ken Miller has fallen into a variation of Berra's Blunder.
- “Comparing the genetic code of humans and chimps will allow us to comb through each gene or regulatory region to find single changes that might have made a difference in evolution,” they say, but remind us that the oft-quoted 96%-similar-gene figure between chimps and humans must be seen in context: “At a conservative estimate we share about 88% of our genes with rodents and 60% with chickens. Applying a more liberal definition of similarity, up to 80% of the sea-squirt’s genes are found in humans in some form. So it’s no surprise that we are still asking, ‘What makes us human?’”
- “The question of what genetic changes make us human is far more complex. Although the two genomes are very similar, there are about 35 million nucleotide differences, 5 million indels and many chromosomal rearrangements to take into account. Most of these changes will have no significant biological effect, so identification of the genomic differences underlying such characteristics of ‘humanness’ as large cranial capacity, bipedalism and advanced brain development remains a daunting task.” (emphasis added)
- A third of apparent segmental duplications in the human genome (defined by more than 94% sequence identity) are not found in the chimp genome. This team compared the two genomes, and figured that this required a duplication rate of 4 to 5 million bases per million years since humans and chimps parted evolutionary ways. Most of the changes, surprisingly, deal with chromosome structure. No clear picture emerges for how or why these differences arose: “It is unknown whether slow rates of deletion, high rates of duplication or gene conversion are largely responsible for the evolutionary maintenance of these duplicates.” A surprising conclusion is that “when compared to single-base-pair differences, which account for 1.2% genetic difference, base per base, large segmental duplication events have had a greater impact (2.7%) in altering the genomic landscape of these two species.”
Berra's Blunder and the chimp!
Most introductory biology textbooks carry drawings of vertebrate limbs showing similarities in their bone structures. Biologists before Darwin had noticed this sort of similarity and called it "homology," and they attributed it to construction on a common archetype or design. In The Origin of Species, however, Darwin argued that the best explanation for homology is descent with modification, and he considered it evidence for his theory.
In his book Evolution and the Myth of Creationism (Stanford University Press, 1990), biologist Tim Berra compared the fossil record to a series of Corvette models: "If you compare a 1953 and a 1954 Corvette, side by side, then a 1954 and a 1955 model, and so on, the descent with modification is overwhelmingly obvious." (emphasis added)
However, as Berkeley law professor Phillip Johnson pointed out, Berra forgot to consider an obvious and crucial point. A Corvette, so far as anyone has yet been able to determine, does not give birth to little Corvettes. Like all automobiles, Corvettes are intelligently designed by people working for the auto companies. They do not appear on the production line via unguided forces that did not have them in mind. So, although Berra thought he was supporting Darwinian evolution rather than the pre-Darwinian archetype design explanation, he unwittingly showed that the fossil evidence is compatible with either. Johnson dubbed this "Berra's Blunder."
As a friend of mine once pointed out, he doubts that anybody would argue that his father-in-law's Cadillac descended with modification (via unguided chance natural processes) from his Honda Civic, despite the fact that the two cars share an apparent common design (wheels, combustion engine, crankshaft, gears, etc.).
The lesson of Berra's Blunder is that we need to specify a natural mechanism before we can scientifically exclude designed construction as the cause of homology. But biologists have known for a hundred years that homologous structures are often not produced by similar developmental pathways. And they have known for thirty years that they are often not produced by similar genes, either. So there is no empirically demonstrated mechanism to establish that homologies are due to common ancestry rather than common design.
Those who argue that chimps and humans must share a common ancestor because they share the ATGC genetic code or have similar chromosomes are likewise committing a form of Berra's Blunder. Where are the detailed, testable Darwinian pathways that take us from a chimp to the vast capabilities of humans? We share 50% of our genome with the banana, but that does not make us a banana. It is the differences that make the difference.
The significant differences between the chimp and human genome, Haldane's dilemma, and the need to account for the specified complexity that make us uniquely human surely stretch our tolerance of evolutionary thinking to the limits of credulity.
Fine-tuned Squid Eyes are better than state-of-the-art Zeiss lenses (Science, 26 Jan. 2007)
Every day, biology looks more and more like it is designed!
- "Seeing clearly underwater requires a special spherical lens with a high refractive index in the center but a lower index toward the edge. This gradation is achieved with progressively lower concentrations, from the lens's center outward, of proteins called crystallins." In recent research into the complexity of the squid eye, the variety of crystallin variants has been established, evoking the comment: "It's amazing how finely tuned the squid lens is to do its job." The researcher "is deeply impressed by cephalopod vision." Indeed, she noted, the shipboard tests showed that the vampire squid's lens, which appeared early in the evolutionary history of cephalopods, "has a visual acuity better than in a state-of-the-art Zeiss dissecting microscope."
- Elizabeth Pennisi, "Loopy Lens Proteins Provide Squid with Excellent Eyesight", Science 315, 26 January 2007: p456 (emphasis added).
The real issue is whether chance and naturalistic processes can account for squid lenses better than state-of-the-art Zeiss lenses. Researcher Alison Sweeney went to sea to study the origin of eyes. While the ship rolled beneath her, she dissected the eyes of squid freshly retrieved from 1000 meters below and tested how well each lens resolved the details of a panel of ever-narrower black and white stripes. Back at Duke University in Durham, North Carolina, as a graduate student in the lab of Sonke Johnsen, she combined those results with biochemical and modeling data on the optical and chemical properties of lens proteins to reconstruct the history of vision in cephalopods (squid, octopi, and their relatives). From just one ancestral lens protein--vertebrates started with several--these marine invertebrates have "evolved" lens-based eyesight more than once, Sweeney reported at the meeting.
So, where are all those intermediate eyes that "evolved" more than once? Some fossils, perhaps? Some detailed, testable Darwinian pathways, perhaps?
- Abstract: When attempting to understand evolution, we traditionally rely on analysing evolutionary outcomes, despite the fact that unseen intermediates determine its course. A handful of recent studies has begun to explore these intermediate evolutionary forms, which can be reconstructed in the laboratory. With this first view on empirical evolutionary landscapes, we can now finally start asking why particular evolutionary paths are taken.
- "Empirical fitness landscapes reveal accessible evolutionary paths", Frank J. Poelwijk, Daniel J. Kiviet, Daniel M. Weinreich and Sander J. Tans, Nature, 445, 383-386 (25 January 2007) | doi:10.1038/nature05451, emphasis added.
David Tyler at Access Research Network writes as follows: Hailed as a "Progress" article by the editors of Nature, a group of evolutionary biologists has responded to "our current ignorance of intermediates" in the trajectories of evolutionary transformation. The key concept is the adaptive landscape, a device developed by Wright in 1932. The difference is that these authors want to use the concept to "explore the step-by-step evolution of molecular functions." They acknowledge implicitly that the adaptive landscape is theory-laden (rather than a fruit of empirical research), because from Darwin onwards, evidence of intermediates as organisms moving around an adaptive landscape is thin. They write: "Although Darwin developed a convincing rationale for their absence, he did realize that the lack of intermediates as proof leaves room for criticism." (It should be noted that not everyone finds Darwin's rationale for their absence convincing.)
They also comment: "Indeed, in their opposition to evolution, the proponents of 'intelligent design' have seized on our current ignorance of intermediates." It should be stated that ID proponents do not have a blanket opposition to evolution! Adaptive change does occur and is non-controversial. The real issue is whether adaptive change leads to novel specified complexity and, in particular, to irreducibly complex systems. Furthermore, ID proponents are not arguing from "our current ignorance" but from our current knowledge!
What has this research achieved? "The experimental reconstruction of evolutionary intermediates and putative pathways has provided an exciting first look at molecular adaptive landscapes" and "The molecular systems interrogated so far represent only a start, but one with great potential to spark further exploration." There is no hint that molecular systems suggested to be irreducibly complex have been addressed in this research. It really is a "first look" and "only a start". It is perhaps an indictment of evolutionary biologists that it has taken so long to realise that there are real challenges to the adaptive landscape concept and that serious research is only in the initial stages.
There is Poetry in the Genetic Code (Chemical and Eng. News, Jan. 22, 2007)
As William Dembski put it, if there is poetry in the genetic code, does that mean that natural selection is a poet?
Silent No Longer: Researchers unearth another stratum of meaning in the genetic code, Ivan Amato (Chemical & Engineering News, January 22, 2007, Volume 85, Number 04, pp. 38-40).
The more scientists study the genetic code, the more it reads like poetry. In a poem, every word, every line break, even every syllable can carry more than a literal meaning. So too can the molecular letters, syllables, and words of the genetic code carry more biologically relevant meanings than they appear to at first.
Now, a cadre of researchers is discovering intriguing depths of meaning in “synonyms” in the genetic code—very short wordlike sequences, or codons, that translate into exactly the same amino acids during the construction of a protein. Scientists are finding that synonymous codons influence the temporal pattern by which a messenger RNA (mRNA) molecule bearing genetic specifications from a cell’s nucleus is translated by machinelike ribosomes into protein molecules.
These punctuations in the RNA-to-protein translation process have unexpected consequences: They can change the timing by which nascent proteins fold as they elongate and peel away from ribosomes. This means that two stretches of mRNA that differ only in synonymous codons can translate into two proteins that have identical amino acid sequences but different three-dimensional shapes. Such differences can convey important, even grave, biological and medical meanings. It’s akin to the way the same hand can fold into an affirming thumbs-up gesture or into a shape involving the middle finger that conveys another sentiment altogether.
“We know that one individual given drug A will have to sleep for three days, but another taking the same drug will suffer no such effect,” notes Michael M. Gottesman, chief of the Laboratory of Cell Biology at the National Cancer Institute (NCI) in Bethesda, Md. He now thinks that such individual differences in response to drug treatments and in susceptibility to diseases could correspond to different synonymous codons that lead to differently folded protein products. Most researchers have assumed that this type of genetic variation is too subtle to matter much. In fact, an often-used moniker for the variation is “silent polymorphism.” Nonsilent polymorphisms are those variations in a gene’s code that do lead to amino acid changes.
Last month, Gottesman and coworkers reported results of their investigation of a silent polymorphism that isn’t so silent (Science, DOI: 10.1126/science.1135308). They found it in the gene that codes for P-glycoprotein (P-gp), a protein that takes residence in cell membranes, where it pumps drug molecules out of the cell. By purging the cell of drugs, this protein renders about half of human cancers resistant to a diversity of drugs.
Gottesman’s group discovered that a silent polymorphism sometimes found in this gene gives rise to a version of P-gp that is less effective at expelling drugs from cells than the “wild type” of the protein. The researchers conjecture that the altered protein function derives from a synonymous codon’s effects on the timing of translation and folding as the P-gp protein is being made and as it insinuates itself into a cell’s membrane. In their studies, the researchers expressed the gene with and without the silent polymorphism in cultured human carcinoma cells, an AIDS-related human cell line, and two lines of cells derived from monkey kidney.
“The beauty of the paper is that it is based on natural examples,” that is, living cells, comments Anton Komar of Cleveland State University. He was one of the first scientists to suggest, in the late 1980s, that silent polymorphisms in genes might have important biological consequences. Previously, Komar and others had found evidence that synonymous codons might affect protein folding, but those studies were done in cell-free test-tube preparations. “Nobody paid attention,” Komar recalls. The consensus view, he points out, has long been that only those polymorphisms that translate into amino acid substitutions in the associated proteins were biologically or medically significant. To Komar, Gottesman’s findings ought to change that view.
“Looking closely at silent polymorphisms could become a vast project now,” Komar says. “We have the whole genome in hand.”
Gottesman was attracted to research into silent polymorphisms three years ago during a discussion with Randall Kincaid, a former immunology lab head at the National Institutes of Health in Bethesda, who now runs Veritas, a biotech company in nearby Rockville. Kincaid mentioned a malaria vaccine project that required him to produce loads of a human protein in a microbial host. The protein, however, kept folding up and aggregating into unusable clumps. Kincaid told Gottesman that to circumvent this protein-folding headache, his team used a genetic engineering technique that involves exchanging some of the codons in the human gene with synonymous codons that are more prevalent in the microbial host used to manufacture the protein in bulk.
During that 2003 discussion, Gottesman says, “a light bulb went off in my head.” For Gottesman, Kincaid’s protein-folding headache sounded like a potential answer to a mystery he and his colleagues had been encountering in their research on P-gp. Listening to Kincaid, Gottesman wondered if the differences in folding that his team had observed stemmed from the silent polymorphisms found in the gene for P-gp.
Silent polymorphisms are among a more general class known as single-nucleotide polymorphisms, or SNPs (pronounced “snips”). SNPs consist of one nucleotide letter substituting for another. In the mRNA transcribed from a gene, every string of three nucleotides constitutes a codon that corresponds to and is ultimately translated into one of 20 amino acids.
For example, the mRNA codon designated UUU (uracil-uracil-uracil) encodes the amino acid phenylalanine, whereas the codon UUA (uracil-uracil-adenine) encodes leucine. Because a leucine replaces a phenylalanine, the polymorphism is nonsilent in this case, and the codons are nonsynonymous. On the other hand, the mRNA codons GGU, GGC, GGA, and GGG all encode glycine. That makes them synonymous codons, and their protein constructs all have the same amino acid sequence.
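The codon examples above can be made concrete with a few lines of code. This is a deliberately truncated lookup table containing only the codons mentioned in the text, not the full 64-codon genetic code:

```python
# Partial codon table covering only the examples in the text.
CODON_TABLE = {
    "UUU": "Phe", "UUA": "Leu",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}

def classify_snp(codon_before, codon_after):
    """A SNP is 'silent' (synonymous) if both codons encode the same amino acid."""
    before, after = CODON_TABLE[codon_before], CODON_TABLE[codon_after]
    return "silent" if before == after else "nonsilent"

print(classify_snp("UUU", "UUA"))  # nonsilent: Phe -> Leu
print(classify_snp("GGC", "GGU"))  # silent: both codons encode Gly
```

The P-gp polymorphism discussed in the text, GGC changing to GGT at the DNA level, corresponds to GGC changing to GGU in the mRNA, which this scheme classifies as silent.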
Gottesman’s group traced one particular silent SNP in the gene for P-gp—in which a GGC codon changes into GGT—to altered protein activity. Both codons correspond to glycine. Using several analytical methods, the researchers concluded that the folding, final shape, and function of P-gp indeed are influenced by silent SNPs.
“These results may not only change our thinking about mechanisms of drug resistance, but may also cause us to reassess our whole understanding of SNPs in general and what role they play in disease,” states NCI Director John E. Niederhuber in a press release.
Komar conjectures that synonymous codons might affect protein folding by tweaking the timing of that folding. In cells, he notes, the concentrations of amino acid-toting transfer RNA (tRNA) molecules, each of which corresponds to a specific mRNA codon, roughly mirror the overall frequencies at which the codons appear.
During protein translation, the mRNA codons sequentially specify which tRNA must come into the ribosome complex to deliver the next amino acid to be stitched onto the growing protein. A polymorphism that substitutes an infrequent codon for a relatively common but synonymous codon ought to result in a delay in translation because there is less of the corresponding amino acid-bearing tRNA around, Komar says. Because of the momentary pause, the growing protein could fold in a different way than if the pause were absent.
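Komar's pause hypothesis can be sketched as a toy model in which per-codon translation time is inversely proportional to the abundance of the cognate tRNA. The abundance values below are invented for illustration; real tRNA pools vary by organism and tissue:

```python
# Hypothetical relative tRNA abundances (arbitrary units, for illustration only).
tRNA_abundance = {"GGC": 0.9, "GGU": 0.2, "UUU": 0.5}

def translation_time(codons):
    # Rarer codon -> less cognate tRNA available -> longer wait at the ribosome.
    return sum(1.0 / tRNA_abundance[c] for c in codons)

common = ["GGC", "UUU", "GGC"]
rare_variant = ["GGU", "UUU", "GGC"]   # one synonymous swap (GGC -> GGU)
print(translation_time(common) < translation_time(rare_variant))  # True
```

In this caricature the single synonymous GGC-to-GGU swap nearly doubles the elongation time over those three codons; the biological claim is that such momentary pauses can redirect co-translational folding.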
The details of the altered folding kinetics remain largely unknown, but recent work by Luda Diatchenko of the University of North Carolina and her colleagues has opened up one route of investigation into those matters (Science 2006, 314, 1930). Like Gottesman’s group, they found that different synonymous codons in a gene can lead to changes in the production of its protein product. The gene Diatchenko’s team studied encodes a neurotransmitter-degrading enzyme called human catechol-O-methyltransferase, or COMT. This enzyme is central to the regulation of pain perception. The COMT gene exists in three common variants, each one consisting of both silent and nonsilent codon changes.
Depending on which variant a person has, he or she is likely to have low, average, or high pain sensitivity. The researchers found that differences in COMT production derive far more from differences in synonymous codons in the COMT gene than in nonsynonymous ones that lead to amino acid changes.
Moreover, Diatchenko and her colleagues were able to relate those codon and clinical differences to the presence or absence of a specific stabilizing loop structure in the mRNA molecules encoding the enzyme. The mRNAs that were more stable yielded COMT activities up to 25 times higher than that associated with the least stable mRNA. The researchers surmise that these stability differences influence either the rate at which the mRNA molecules are degraded or at which they can be translated into protein. Because the more stable mRNAs produce more of the neurotransmitter-degrading enzyme, they ultimately correspond to less pain sensitivity.
“We need to give much more weight to synonymous changes,” Diatchenko concludes. “Now that we know that the difference in COMT expression depends on the secondary structure of mRNA, we can think of targeting this mechanism” to alleviate such conditions as persistent pain, she says.
Confirming that the genetic code has built into it “colons or commas” that influence the kinetics of protein synthesis and folding, Komar notes, is a reminder that the code has yet to be fully decrypted. It’s a molecular poem whose deconstruction must continue. The question now for Komar and others is whether they’ve identified a previously hidden stratum of meaning in the genetic code that will significantly help account for the differences that make individuals unique, in illness and in health.
Cells Use Zip Codes to Determine Their Body Location (PLos July 8, 2006)
How cells know where they are in the body has always been a puzzle, and now it turns out the cell's address is coded into the DNA; i.e., cells have the equivalent of a ZIP code built into their DNA that encodes their location in the body. Three locations in the DNA itself were found to correspond to the location of the cell in the body, specifying whether it came from the upper or lower torso, near to or far from the center of the body, and near to or far from the surface of the body.
- A major question in developmental biology is, How do cells know where they are in the body? For example, skin cells on the scalp know to produce hair, and the skin cells on the palms of the hand know not to make hair. Overall, there are thousands of different cell types and each has a unique job that is important to overall organ function. It is critical that, as we grow and develop, each of these different cells passes on the proper function from generation to generation to maintain organ function. In this study, the authors present a model that explains how cells know where they are in the body. By comparing cells from 43 unique positions that finely map the entire human body, the authors discovered that cells utilize a ZIP-code system to identify the cell’s position in the human body. The ZIP code for Stanford is 94305, and each digit hones in on the location of a place in the United States; similarly, cells know their location by using a code of genes. For example, a cell on the hand expresses a set of genes that locate the cell on the top half of the body (anterior) and another set of genes that locates the cell as being far away from the body or distal and a third set of genes that identifies the cell on the outside of the body (not internal). Thus, each set of genes narrows in on the cell’s location, just like a ZIP code. These findings have important implications for the etiology of many diseases, wound healing, and tissue engineering. (Rinn JL, Bondre C, Gladstone HB, Brown PO, Chang HY, "Anatomic Demarcation by Positional Variation in Fibroblast Gene Expression Programs", PLoS Genet 2(7): e119 DOI: 10.1371/journal.pgen.0020119)
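The three-axis "ZIP code" described in the study can be caricatured as a lookup from positional axes to gene sets. The axis names follow the paper's description, but the gene names here are hypothetical placeholders, not genes reported in the study:

```python
# A cell's position described along the three axes from the paper.
position_code = {
    "anterior_posterior": "anterior",   # upper vs lower torso
    "proximal_distal":    "distal",     # near vs far from the body's center
    "internal_external":  "external",   # deep vs surface tissue
}

# Each (axis, value) pair corresponds to a set of expressed genes.
# Gene names below are invented placeholders for illustration.
axis_genes = {
    ("anterior_posterior", "anterior"): {"HOXA2_like"},
    ("proximal_distal", "distal"):      {"HOXD13_like"},
    ("internal_external", "external"):  {"SET3_placeholder"},
}

expressed = set().union(*(axis_genes[(axis, val)] for axis, val in position_code.items()))
print(sorted(expressed))  # ['HOXA2_like', 'HOXD13_like', 'SET3_placeholder']
```

Just as each digit of a postal ZIP code narrows down a street address, each axis-specific gene set narrows down the cell's anatomical position.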
As Creation Evolution Headlines puts it (01/13/2007): "This study is obviously just a first step in what will be a long and complex process to determine not only how a cell knows where it is, but how it knows what to do there. At each of the many cell divisions from single egg cell to fully formed person with trillions of cells, each cell must pass on information to the next cell about what kind of cell it is and what happens next. At some point in the path from single cell to human being, there is a single cell that goes on to become an arm, or a leg or a pancreas. Contained in that cell is the information about how to build an arm, leg, or pancreas, but also, it is somehow keeping track of the fact that it is the cell whose job it is to become that arm, leg or pancreas. Building a body requires not only the information in the DNA to be able to manufacture the right parts, it must also contain the information on how to put the parts together. Scientists are just beginning to find that this information is there in the DNA too. Breaking the code so that they can read it, something that cells do all the time, will take much hard work. Evolution is not mentioned at all in this article, which is not surprising in light of the enormous complexity of building a working human body from the plans coded in our DNA. Anyone working day to day with the problem of trying to fathom how our body builds itself probably doesn't want to think about how this could have all come about accidentally."
There are some massive challenges here for evolutionists, who must offer coherent thoughts on the origin of such information-rich systems via unguided chance mechanisms that did not have us in mind. This is yet another case of good research uncovering deep levels of complex specified information that is best understood by reference to plan and purpose via a transcendental intelligence.
Cambrian Explosion becomes mysterious once again (Nature, 11 January 2007)
Even Richard Dawkins was forced to admit that the Cambrian explosion (sudden appearance of virtually all modern phyla without precursor fossils) suggests meta-natural creation rather than chance naturalistic evolution. Researchers previously were hoping that there were cleaved embryos in the strata under the Cambrian explosion layers which might be candidates for the missing precursors. However, now a paper in Nature reclassifies them as giant bacteria, not embryos (Bailey et al, “Evidence of giant sulphur bacteria in Neoproterozoic phosphorites,” Nature 445, 198-201, 11 January 2007, doi:10.1038/nature05457).
Some evolutionists had hoped the discovery of animal embryos would somewhat explain the explosion by pushing the origin of symmetrical body plans further back in time. However, in the News and Views article in the same issue of Nature, Philip C. J. Donoghue (University of Bristol) termed this an “embryonic identity crisis” that defeats all those Darwinian hopes. Donoghue writes that the oldest known animal fossils, identified as eggs and embryos, had been expected to reveal secrets from a period of great evolutionary change, but that "like all other theories about Precambrian animals, the classification of these fossils is far from resolved, even at the kingdom level" (Philip C. J. Donoghue, “Palaeontology: Embryonic identity crisis,” Nature 445, 155-156, 11 January 2007, doi:10.1038/nature05520).
Ancient Precambrian suddenly became young Pleistocene (!) (January 2007)
The bulletin of the Geological Society of America started 2007 with a bang (the exclamation mark "!" is called a "bang" in computerese). Titles of scientific papers rarely contain the exclamation mark. However, the paper by Donald R. Lowe (Stanford) and Gary R. Byerly (Louisiana State) does contain the exclamation mark to convey something of the shock they must have felt when they had to reclassify a rock formation from one end of the geologic column to the other.
Until this year, the Barberton deposits in South Africa were confidently dated to a supposed age of 3.55 billion years (Archean), making them among the most ancient geological formations on Earth, e.g.
- The 3.55-3.22 Ga Barberton Greenstone Belt, South Africa and Swaziland, and surrounding coeval plutons can be divided into four tectono-stratigraphic blocks that become younger toward the northwest. Each block formed through early mafic to ultramafic volcanism (Onverwacht Group), probably in oceanic extensional, island, or plateau settings. ... Evolution of the Barberton Belt may reflect an Early Archean plate tectonic cycle that characterized a world with few or no large, stabilized blocks of sialic crust. [Donald R. Lowe, Stanford University, Department of Geological and Environmental Sciences, "Accretionary history of the Archean Barberton greenstone belt (3.55-3.22 Ga), Southern Africa", Geology, December 1994, v. 22, no. 12, pp. 1099-1102]
However, the latest evidence promotes the supposedly ancient Barberton layers to the much more recent Pleistocene epoch (putatively dated to have begun about 1.8 million years ago).
- Irregular bodies of goethite and hematite, termed ironstone pods, in the Barberton greenstone belt, South Africa, have been previously interpreted as the Earth's most ancient submarine hydrothermal vent deposits and have yielded putative evidence about Archean hydrothermal systems, ocean composition and temperature, and early life. This report summarizes geologic, sedimentological, and petrographic evidence from three widely separated areas showing that the ironstone was deposited on and directly below the modern ground surface by active groundwater and spring systems, probably during periods of higher rainfall in the Pleistocene.... These deposits represent a remarkable iron oxide-depositing Quaternary hydrologic system but provide no information about conditions or life on the early Earth. [Lowe DR, and Byerly GR (2007), “Ironstone bodies of the Barberton greenstone belt, South Africa: Products of a Cenozoic hydrological system, not Archean hydrothermal vents!” GSA Bulletin, January 2007, Vol. 119, No. 1 pp. 65-87, DOI: 10.1130/B25997.1, emphasis added]
Evolutionists regularly pointed to these rocks to make up stories about the origin of primitive life forms. Now we discover that these rocks "provide no information about conditions or life on the early Earth". The rocks could be forming today.
So that's the big bang of 2007. The most ancient layers are suddenly demoted from a supposed age of about 3.5 billion years down to at most 1.8 million years (roughly 0.05% of their previous age). That's right: the age was off by three orders of magnitude.
Only evolutionists working in the Origin Sciences can be more than 99% wrong and still keep their jobs. Professionals such as engineers and doctors would not last long with that kind of error rate. Engineers are also less likely than their evolutionary colleagues to believe that chance and nature alone can explain the marvels of life. Paul Meyer of the Discovery Institute put it like this:
- The origin of a new structure, of a miniature machine, or an information-processing system, or a circuit, is an engineering problem. Oftentimes people have criticized the intelligent design movement because there are so many prominent professors of engineering in our number. But we don’t make any apologies for that, because engineers are precisely the scientists that know what it takes to design things, to build things. And the question of origins is essentially a question of engineering. How did these systems get built? And when you have so many top-level professors of engineering — in mechanical, electrical or software engineering — saying, I think we’re looking at systems that clearly show evidence of design, I think the Darwinists have a serious problem. If they can’t persuade those people, that the 19th-century mechanism of selection and variation is up to this task, I think that the theory is in serious trouble.
Mechanical engineer Stuart Burgess (at Bristol University) put it like this:
- Stuart Burgess is Professor of design and nature in the department of mechanical engineering at Bristol University. He argues that intelligent design is as valid a scientific concept as evolution.
- Current scientific philosophy is to rule out completely the possibility that a creator was involved. But there is no scientific justification for making such a sweeping assumption. Science should always be open-minded.
- Newton, Kelvin, Faraday and Pascal had no problem with a creator and with design. There is no reason why a modern scientist cannot take the same position as these eminent scientists. Three hundred years ago, there was so much support for intelligent design that life could be difficult if you were an atheist. Now the opposite is true; life can be difficult if you show the slightest sympathy for intelligent design.
- Evolution cannot be taken as a fact of science because of the ambiguities in the evidence. The fossil record can be evidence for and against evolution because of the gaps. Similarities in DNA code can be just as much evidence for a common designer as for evolution. Most significantly, scientists have failed to reproduce the spontaneous generation of life for 60 years.
- I've been designing systems like spacecraft for more than 20 years. One of the lessons I've learnt is that complex systems require an immense amount of intelligence to design. I've seen a lot of irreducible complexity in engineering. I have also seen organs in nature that are apparently irreducible. An irreducibly complex organ is one where several parts are required simultaneously for the system to function usefully, so it cannot have evolved, bit by bit, over time.
- The mammalian knee-joint is an organ that appears irreducible. Everyone has a four-bar linkage in their knee. Engineers know that for this to work, you need all four bars to be present. Every time we walk, we're using irreducible mechanisms. Evolutionists have not been able to explain how the knee joint evolved step by step. We cannot prove that an intelligent being designed these, but at present no one can prove that they evolved, either.
- Against The Grain: 'There are strong indications of intelligent design', The Independent, Interview by Nick Jackson, 08 February 2007
(Thanks to Creation-Evolution Headlines, 01/03/2007, for the information on Barberton.)
Genetic information: Codes and enigmas (Nature, Nov. 2006)
The report below is adapted from David Tyler (ARN).
There's more than one way to read a stretch of DNA, finds Helen Pearson — and we need to understand them all. Nature 444, 259-261 (16 November 2006) | doi:10.1038/444259a
Computer buffs interested in cracking codes have developed software routines to prise out hidden information. “We are treating DNA as we used to treat problems in intelligence” [Shepherd] says. “We want to break the code at the most fundamental level.”
This is the first point where an ID perspective will help research. “Breaking the code” cannot be reduced to an exercise in computer science. We need to recognise the biological context for the DNA operation and to treat the whole cell as a complex system. This will lead to a systems engineering methodology for analysis.
A highly significant paragraph is as follows:
Multitasking is something we typically associate with intelligence. Getting a code to convey one message is a challenge in itself, but getting the same code to carry several messages is evidence of higher level intelligent agency. ID helps here, allowing the premise that the ‘degenerate’ aspects of the triplet code are actually designed to permit sophisticated encoding.
The writer, however, goes on to make an extraordinary comment on this. “This elegance is surely the handiwork of evolution – and if the way in which that hand had worked to solve these problems were clearer, the simultaneous decoding of all the messages involved might become easier.”
It is extraordinary because of the word “surely”. Why “surely”? Not because of the sophisticated design features! Not because the whole thing is “elegant”! I can only think that the word “surely” is deductive: because “we know” that Darwinism is true. The same rationale is behind this comment that appears earlier in the article: “Biology has probably figured out a way to squeeze every bit of information from that molecule it can”.
There is no empirical base for suggesting that these design features can emerge from evolutionary processes. Our recognition of them as a phenomenon comes only because we have met them before in intelligently designed digital information.
That sentence should read: “This elegance is surely the handiwork of an Intelligent Designer – and if the way in which His hand had worked to solve these problems were clearer, the simultaneous decoding of all the messages involved might become easier.”
One further point concerns the use of anthropomorphic language. Examples already cited include: “Biology has probably figured out a way to squeeze every bit of information from that molecule it can”; “This elegance is surely the handiwork of evolution”; and “the way in which that hand had worked to solve these problems”.
In each case, we have intelligent agency attributed to the mechanistic processes of evolution. As Dembski’s design filter shows, these mechanisms give us law-like and chance-like characteristics, but these are distinct from design-like characteristics. Although the evolutionary paradigm does not permit intelligent agency, those within its mould have to resort to anthropomorphisms to develop their ideas. Sad. That's perhaps the biggest "enigma" in this essay.
Chance and Necessity Do Not Explain the Origin of Life (May 2006)
In May 2006, Norwegian cellular biologist Øyvind Albert Voie published an article (in Chaos, Solitons & Fractals) arguing that “chance and necessity cannot explain sign systems, meaning, purpose, and goals” in the DNA system. Voie concluded that since “mind possesses other properties that do not have these limitations,” it is “therefore very natural that many scientists believe that life is rather a subsystem of some Mind greater than humans.”
Voie was encouraged to submit his work by Dave Abel who was a reviewer of Hubert Yockey’s recent book on information theory and the origin of life. Abel is with The Gene Emergence Project and The Origin-of-Life Foundation. Abel’s recent paper on the origin of life entitled "Chance and Necessity Do Not Explain the Origin of Life" mentions the problems posed by Turing machines (computers) which appear in all biological systems.
Voie explores these ideas further and outlines important considerations. Computers of necessity must transcend the chemical and physical properties of the materials which make them. Voie illustrates why the origin of biological computers (or any other computer, for that matter) cannot be attributed to chemical and physical laws alone.
New species almost never observed (2006)
A key problem with the argument over evolution (e.g. Darwinian evolution by natural selection acting on random mutations) is that so few actual examples of speciation (i.e. new species forming) have ever been observed. We really have no way of knowing for sure whether Darwin had the right idea. Jonathan Wells notes as follows:
- So except for polyploidy in plants, which is not what Darwin's theory needs, there are no observed instances of the origin of species. As evolutionary biologists Lynn Margulis and Dorion Sagan wrote in 2002: "Speciation, whether in the remote Galapagos, in the laboratory cages of the drosophilosophers, or in the crowded sediments of the paleontologists, still has never been directly traced." Evolution's smoking gun is still missing. (Jonathan Wells, "Politically Incorrect Guide to Darwinism and Intelligent Design", 2006, p. 55, quoting Lynn Margulis and Dorion Sagan, Acquiring Genomes: A Theory of the Origin of Species, New York, Basic Books, p. 32; see also footnote to the Wells text)
(Note: Polyploidy is the doubling of an organism's chromosome set, from which new species of plants can arise. It is no real help for evolution because sexual reproduction is not involved.)
Bacteriologist Alan H. Linton (University of Bristol, England) went looking for direct evidence of speciation and concluded in 2001:
- "None exists in the literature claiming that one species has been shown to evolve into another. Bacteria, the simplest form of independent life, are ideal for this kind of study, with generation times of twenty to thirty minutes, and populations achieved after eighteen hours. But throughout 150 years of the science of bacteriology, there is no evidence that one species of bacteria has changed into another ... Since there is no evidence for species changes between the simplest forms of unicellular life, it is not surprising that there is no evidence for evolution from prokaryotic [i.e., bacterial] to eukaryotic [i.e. plant and animal] cells, let alone throughout the whole array of higher multicellular organisms." (Alan Linton, "Scant Search for the Maker," Times Higher Education Supplement, April 20, 2001, Book section, 29.)
The real question is not whether new species originate (occasionally they might, but the two species are not that different from each other, so what we have is micro-evolution at best, i.e. small differences within kinds, such as finches) or whether chance natural processes such as random mutation and natural selection ever influence the course of events (in small ways they might). The real question is whether evolution really is the engine of the vast complexity of life that we see around us.
An intelligent design blog (http://www.uncommondescent.com/index.php/archives/1437) posted the following interesting information (for which I would like to see proper scientific attribution).
- For every 1000 species that has ever lived during the history of our planet, [supposedly] 999 of them became extinct in an evolutionary dead end street (no species descended from them). Estimates range up to 5 billion species that have walked, crawled, swam, flew, rooted, or slimed our planet in the past. About 10 million are alive today and we have names for about 1 million of those. The average lifespan of a species is about 10 million years. Most species enter the fossil record abruptly and disappear abruptly looking mostly the same at both entrance and exit. The next time you’re thinking of how random mutation and natural selection works keep in mind that in the vast majority of cases it keeps a species looking pretty much the same for about 10 million years then kills it without leaving any descendents.
The comments to the blog are interesting. Here is a summary.
The supposed evolutionary estimate of the age of the Earth is a speculative 4.5 billion years. If the supposed number of species that ever existed is about 5 billion, and the above figures are accurate, then we are talking about a new species roughly every year! Some estimates put the supposed number of species as high as 50 billion [see David Raup's 1991 book "Extinction: Bad Genes or Bad Luck?"], in which case new species should be appearing many times a year (of course, this depends on your definition of a species).
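If the figures above are taken at face value, the implied speciation rate is simple arithmetic (a back-of-the-envelope sketch using only the estimates quoted above):

```python
# Back-of-the-envelope speciation rate implied by the figures cited above.
EARTH_AGE_YEARS = 4.5e9                 # conventional evolutionary age estimate
SPECIES_LOW, SPECIES_HIGH = 5e9, 50e9   # estimated total species ever (Raup's upper figure)

rate_low = SPECIES_LOW / EARTH_AGE_YEARS     # new species per year, low estimate
rate_high = SPECIES_HIGH / EARTH_AGE_YEARS   # new species per year, high estimate

print(f"Implied rate: {rate_low:.1f} to {rate_high:.1f} new species per year")
# With 5 billion species the implied rate is just over one new species per year;
# with 50 billion it is about eleven per year.
```

Either way, the figures imply that speciation should be an essentially continuous, observable process.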
Kurt Wise is a creationist with a Ph.D. in paleontology from Harvard where he studied under Stephen Jay Gould. He has a reputation of being a straight shooter. Even Dawkins calls him an “honest creationist.” The blog reports the following 1990 conversation:
- Dr. David Menton: There's one area where evolutionists and creationists do agree. I think there are several, but there is one important one we ought to bring up now, and that's this matter of extinction. Here we have no argument. Both sides agree that the vast majority of those organisms that have ever existed on earth are no longer with us and have become extinct.
- Wise: No. I don't agree. That's incorrect.
- Menton: Is that right?
- Wise: The fossil record has 250,000 species, more or less, in its record, we...
- Menton: What percentage do you think are extinct?
- Wise: What percentage? I believe a very small percentage. We have over a million and a half species living today, 250,000 fossil species. I mean, if we take things literally, most species exist today and not in the fossil record.
These are interesting figures and, if correct, do not leave much room for all those transitional fossils. It would be interesting to get accurate numbers.
Zebrafish Heart Regeneration Functionality Lost - Noise?
In information systems, a noisy channel destroys information (i.e. noise is unlikely to create new meaningful messages). Likewise, it does not appear that natural selection acting in a noisy environment (e.g. genetic drift and random mutations) would be able to create the specified complexity seen in certain life-forms which have the ability to regenerate new organs while closely related species do not. The following article illustrates this phenomenon.
- Key to zebrafish heart regeneration uncovered (Eureka, Nov 2, 2006)
- “Interestingly, some species have the ability to regenerate appendages, while even fairly closely related species do not,” Poss added. “This leads us to believe that during the course of evolution, regeneration is something that has been lost by some species, rather than an ability that has been gained by other species. The key is to find a way to ‘turn on’ this regenerative ability.”
Here we see a loss of specified complexity over time. If random mutation and natural selection have a hard time preserving functionality, how do they evolve major organs in the first place?
The article continues: "If you look in nature, there are many examples of different types of organisms, such as axolotls, newts and zebrafish, that have an elevated ability to regenerate lost or damaged tissue," said Kenneth Poss, Ph.D., senior researcher for the team, which published the findings on Nov. 3, 2006, in the journal Cell. First authors of the paper were Alexandra Lepilina, M.D., and Ashley Coon.
Hat tip: Uncommon Descent (merrit-2006).
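The noisy-channel point at the start of this section can be made quantitative with Shannon's standard result for a binary symmetric channel, whose capacity is C = 1 - H(p) for bit-flip probability p (a textbook information-theory formula, not something from the article):

```python
import math

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel with flip probability p."""
    if p in (0.0, 1.0):       # noiseless (or perfectly inverted) channel
        return 1.0
    # Binary entropy H(p) of the noise, in bits.
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"flip prob {p:4.2f} -> capacity {bsc_capacity(p):.3f} bits/use")
# Capacity falls as noise rises, reaching zero at p = 0.5: a maximally noisy
# channel transmits no information at all, let alone new messages.
```

This is the formal sense in which noise destroys, rather than creates, information in a channel.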
It’s Hard to Break a Bone (PNAS Nov 9, 2006)
The following report is adapted from creationist crev.info (11/14/2006).
People wearing a cast right now may not feel comfortable, but should be thankful it’s hard to break a bone. Scientists at the Max Planck Institute discovered “a novel construction principle at the nanoscale which prevents bones from breaking at excessive force,” making them “nearly unbreakable.” Because of the way the rigid components of bone tissue are arranged in a hierarchical structure, the ability of bones to deform and absorb strain far exceeds the ability of the components themselves.
There was no mention of evolution in the press release, but there was mention of “natural design principles.” The scientists also thought bone design could be utilized by engineers.
There was no mention of evolution in the paper itself, either.
“Natural design principles.” Interesting phrase. It has the "design" word, but is nondescript enough to avoid tripping red alarms on the NCSE radar (the NCSE being the anti-creationist National Center for Science Education). Maybe they need to tighten their algorithms: if a paper mentions design but not evolution, alert the ACLU.
The theory of intelligent design states that certain features of the natural world are best explained as the result of intelligent causes, rather than as the result of undirected natural causes. How could you get a hierarchically arranged system by undirected natural causes? Suppose there was deformation at the nanoscale, but not at the millimeter or centimeter scale. Bones would not be nearly as resistant to breakage. How many fish and amphibians had to die of broken bones before all the levels of the hierarchy arrived independently at their own optimum design principle? Remember, evolution does not allow for coordinated effects toward a common design goal. Do an experiment: try the Random Mutation Generator simultaneously at the letter, word, sentence, paragraph, page and chapter levels independently, and see if you get a meaningful book.
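A toy version of the letter-level experiment proposed above can be run in a few lines of Python (my own sketch, not an existing tool; the sentence, mutation rate, and generation count are arbitrary illustrative choices, and "meaning" is crudely proxied by agreement with the original sentence):

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def mutate(text, rate, rng):
    """Replace each character with a random one with probability `rate` (no target, no selection)."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in text)

def match_fraction(a, b):
    """Fraction of positions where the two strings agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

rng = random.Random(42)                 # fixed seed so the run is repeatable
original = "methinks it is like a weasel"
text = original
for generation in range(200):           # 200 rounds of undirected mutation
    text = mutate(text, rate=0.05, rng=rng)

print(f"after 200 generations: {match_fraction(original, text):.0%} of letters survive")
# Undirected mutation steadily randomizes the message toward the ~4% agreement
# expected by chance between two random 27-character strings.
```

Without a selective filter pulling toward a target, the mutations only degrade the original message.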
This article illustrates again that science would get on just fine without Darwinism. The authors had no need of that hypothesis. They would have done the same experiments with the same equipment, drawn the same charts and graphs, and reached the same conclusions had they been working explicitly on the basis of intelligent design. Tacking on a Darwinian tale about how bones got this way would have been useless and pointless – a mere wishbone. Thinking “design” at the outset, by contrast, would have motivated them to expect to see design, and to find it. Furthermore, it would have stimulated even more interest in imitating the design.
So we’ve got a bone to pick with the Darwin Party. The strain of accumulating facts has deformed your theory beyond the breaking point, leaving it limp and lame. Intelligent design is the biology and biophysics of the 21st century. No dead-Charlie bones about it.
Why the Bacterial Flagellum is Irreducibly Complex (Kitzmiller, Nov 3, 2005)
Transcript of Testimony of Scott Minnich pgs. 99-108, Kitzmiller, No. 4:04-–CV-2688 (M.D. Pa., Nov. 3, 2005). Minnich is a microbiologist who testified as follows on the next-to-last-day of the trial about his own research:
Q. Do you now employ principles and concepts from intelligent design in your work?
A. I do.
Q. And I'd like for you to explain that further. I know you've prepared several slides to do that.
A. Sure. All right. I work on the bacterial flagellum, understanding the function of the bacterial flagellum for example by exposing cells to mutagenic compounds or agents, and then scoring for cells that have attenuated or lost motility. This is our phenotype. The cells can swim or they can't. We mutagenize the cells, if we hit a gene that's involved in function of the flagellum, they can't swim, which is a scorable phenotype that we use. Reverse engineering is then employed to identify all these genes. We couple this with biochemistry to essentially rebuild the structure and understand what the function of each individual part is. Summary, it is the process more akin to design that propelled biology from a mere descriptive science to an experimental science in terms of employing these techniques.
Q. Do you have some examples employing this particular concept of the flagella?
A. I do, in the next slide. Hopefully this will cut to the chase and show you what we're talking about. This is an organism that my students and I work on. This is a petri dish about 15 millimeters size, filled with this soft agar food source for the organism. It's soft in the sense the organisms can swim in it, but it has some rigidity that they just don't slosh around. Now, each one of these areas showing growth were inoculated with a toothpick of cells, the wild type parent here. So this is Yersinia enterocolitica, a good pathogen, double bucket disease if you ingest it.
Q. That's the center?
A. Yeah, that's the center, okay? So it can swim. So it was inoculated right here, and over about twelve hours it's radiated out from that point of inoculant. Here is this same derived from that same parental clone, but we have a transposon, a jumping gene inserted into a rod protein, part of the drive shaft for the flagellum. It can't swim. It's stuck, all right? This one is a mutation in the U joint. Same phenotype. So we collect cells that have been mutagenized, we stick them in soft agar, we can screen a couple of thousand very easily with a few undergraduates, you know, in a day and look for whether or not they can swim.
Q. I'm sorry, just so we're clear on the record, the two you're talking about on the bottom left, the first one was the bottom left and the second one was the bottom right?
Q. Where you took away a portion of the flagella?
A. We have a mutation in a drive shaft protein or the U joint, and they can't swim. Now, to confirm that that's the only part that we've affected, you know, is that we can identify this mutation, clone the gene from the wild type and reintroduce it by mechanism of genetic complementation. So this is, these cells up here are derived from this mutant where we have complemented with a good copy of the gene. One mutation, one part knock out, it can't swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We've done that with all 35 components of the flagellum, and we get the same effect.
Expert witnesses Scott Minnich and Stephen Meyer defined intelligent design not only as a critique of Darwinism but also as a positive argument, as follows:
- Molecular machines display a key signature or hallmark of design, namely, irreducible complexity. In all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system. Given that neither standard neo-Darwinism, nor co-option has adequately accounted for the origin of these machines, or the appearance of design that they manifest, one might now consider the design hypothesis as the best explanation for the origin of irreducibly complex systems in living organisms. That we have encountered systems that tax our own capacities as design engineers justifiably leads us to question whether these systems are the product of undirected, un-purposed, chance and necessity. Indeed, in any other context we would immediately recognize such systems as the product of very intelligent engineering. Although some may argue this is merely an argument from ignorance, we regard it as an inference to the best explanation [21, 22], given what we know about the powers of intelligent as opposed to strictly natural or material causes. We know that intelligent designers can and do produce irreducibly complex systems. We find such systems within living organisms. (Scott A. Minnich and Stephen C. Meyer, Genetic Analysis of coordinate flagellar and type III regulatory circuits in pathogenic bacteria).
According to Minnich and Meyer, ID is not "an argument from ignorance", but is rather based upon "what we know about the powers of intelligent [causes].” As Minnich and Meyer write, "In all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system." Thus, we have a positive argument for design that is not merely based upon the refutation of evolution, and is not an argument from ignorance. There is no question that Ken Miller is misconstruing the way ID proponents have defined their theory. Unfortunately, Judge Jones bought Ken Miller’s misconstruals of ID.
Does antibiotic resistance provide evidence for macro-evolution?
Bacteria can acquire immunity to antibiotics via mutation, conjugation and transformation. For example, DNA transposition can result in reduced permeability of the cell wall to certain substances, sometimes providing an increased resistance to antibiotics.
The issue is not whether bacteria develop resistance to antibiotics through alterations in their genetic material. They do. The issue is whether or not such changes provide evidence for macro-evolution.
Mutations may confer antibiotic resistance, but such mutations may also decrease an organism’s viability. In sickle-cell anemia (which is caused by a mutation), “carriers” of the disease do not die from it and are resistant to malaria, which at first would seem to be an excellent example of a good mutation. However, that is not the entire story. While resistant to malaria, these people do not possess the stamina of, and do not live as long as, their non-carrier counterparts. Likewise, bacteria may be resistant to a certain antibiotic, but that resistance comes at a price. Acquiring resistance does not necessarily lead to new species or types of organisms. According to microbiologist Scott Minnich at the University of Idaho ("Icons of Evolution", Coldwater Media), when the antibiotic is removed, the mutant is less able to compete with the more virulent un-mutated forms, and the mutants gradually die out.
But the main problem is that regardless of how these bacteria gain their resistance, they are still exactly the same bacteria after receiving the new trait as they were before receiving it. Microbiologists have studied extensively two genera of bacteria in their attempts to understand antibiotic resistance: Escherichia and Salmonella. In speaking about Escherichia in an evolutionary context, France’s renowned zoologist, Pierre-Paul Grassé, observed:
- ...bacteria, despite their great production of intraspecific varieties, exhibit a great fidelity to their species. The bacillus Escherichia coli, whose mutants have been studied very carefully, is the best example. The reader will agree that it is surprising, to say the least, to want to prove evolution and to discover its mechanisms and then to choose as a material for this study a being which practically stabilized a billion years ago (Pierre-Paul Grassé, The Evolution of Living Organisms, Academic Press, 1977, p. 87).
Although E. coli has supposedly undergone a billion years’ worth of mutations, those changes have occurred within narrow limits. No long-term, large-scale macro-evolution has occurred.
Limits to Natural Selection (Kauffman, 1995)
- If selection could, in principle, accomplish “anything,” then all the order in organisms might reflect selection alone. But, in fact, there are limits to selection. Such limits begin to demand a shift in our thinking in the biological sciences and beyond. We have already encountered a first powerful limitation on selection. Darwin’s view of the gradual accumulations of useful variations, we saw, required gradualism. Mutations must cause slight alterations in phenotypes. But we have now seen two alternative model “worlds” in which such gradualism fails. The first concerns maximally compressed programs. Because these are random, almost certainly any change randomizes the performance of the program. Finding one of the few useful minimal programs requires searching the entire space, requiring unthinkably long times compared with the history of the universe even for modestly large programs … But the matter is even worse on such random landscapes. If an adapting population evolves by mutation and selection alone, it will remain frozen in an infinitesimal region of the total space, trapped forever in whatever region it started in. It will be unable to search long distances across space for higher peaks. Yet if the population dares try recombination, it will be harmed on average, not helped. There is a second limitation on selection. It is not only on random landscapes that evolution fails. Even on smooth landscapes, in the heartland of gradualism, just where Darwin’s assumptions hold, selection can again fail and fail utterly. Selection runs headlong into an “error catastrophe” where all accumulated useful traits melt away…. Thus there appears to be a limit on the complexity of a genome that can be assembled by mutation and selection!
Stuart Kauffman, At Home in the Universe: The Search for Laws of Self-Organization and Complexity (New York: Oxford University Press, 1995), 183-184.
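Kauffman's claim of "unthinkably long times" is easy to check with a back-of-the-envelope calculation (the 500-bit program size and 10^15 trials per second are my illustrative assumptions, not figures from the book):

```python
# How long would exhaustive search of a program space take?
PROGRAM_BITS = 500                  # a modest program, by Kauffman's standard
TRIALS_PER_SECOND = 1e15            # a generously fast hypothetical searcher
UNIVERSE_AGE_SECONDS = 4.3e17       # roughly 13.7 billion years

search_space = 2 ** PROGRAM_BITS                             # ~3.3e150 candidate programs
trials_possible = TRIALS_PER_SECOND * UNIVERSE_AGE_SECONDS   # ~4.3e32 total trials
fraction_searched = trials_possible / search_space

print(f"space: 2^{PROGRAM_BITS} candidate programs, about {search_space:.1e}")
print(f"fraction searchable in the age of the universe: {fraction_searched:.1e}")
# Even at 10^15 trials per second for the universe's entire history, only a
# vanishing fraction (~1e-118) of the 500-bit program space can be examined.
```

The search space grows exponentially with program length, so the conclusion is insensitive to the exact assumptions.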
Wikipedia suppresses Haldane's Dilemma
Creationist Walter ReMine explains the problem as follows:
Evolutionists never solved Haldane's Dilemma, instead they garbled it into oblivion back in the 1970s. They were negligent for allowing that to happen, and they were negligent for never telling the public about Haldane's Dilemma -- a limit of 1,667 beneficial mutations to explain the origin of all the uniquely human adaptations.
Notice I said they were "negligent" in the 1970s.
Since 1993, I have been exposing Haldane's Dilemma to public understanding. I believed evolutionary geneticists would respond by earnestly addressing the issue, in their journals, and in public. Unfortunately, they took the opposite tactic. They now move beyond negligence, into active suppression of Haldane's Dilemma.
We now have three well-documented cases of suppression. Two cases involve technical papers to journals (see those details here).
The latest case involves Wikipedia (the online encyclopedia), and is immediately open to your inspection (through a complete log of edits and discussion). Despite creationist efforts to make the article insightful, evolutionists repeatedly garbled the Wikipedia article, as follows:
- The key figure -- a limit of 1,667 beneficial mutations to explain human evolution -- was brushed aside (by falsely blaming it on creationists, instead of acknowledging that it arises solely from evolutionary theory, evolutionary genetics, and J.B.S. Haldane). This key figure was repeatedly expunged from the article, leaving readers with no idea about the severity of Haldane's Dilemma. Evolutionists suppressed this key figure. They also suppressed their history -- the fact that they never revealed any such figure to the general public.
- Evolutionary geneticists James Crow and Warren Ewens peer-reviewed my paper and acknowledged it is correct, yet they rejected it from publication, claiming they and their associates "knew" my material "in the 1970s". Nonetheless, the Wikipedia article omits all the clarifications given in my paper, and explicitly promotes the various confusion factors identified in my paper. Evolutionary geneticists (including Crow and Ewens) negligently -- and knowingly -- allowed confusion to prevail. The Wikipedia article is a vivid demonstration of it.
- Instead of giving real insight, the Wikipedia article wearies the reader into giving up -- through its needlessly tedious mathematical derivations, and its opaque definitions of key terms. Rather than illuminate the real problem of Haldane's Dilemma today, the article wearies the reader with a relatively fruitless and misleading journey into the "origin of the term" in 1963. The article seems intentionally designed to wear out readers, rather than deliver understanding.
- The Wikipedia article falsely pretends Haldane's 1957 description is the clearest available -- as though no clarifications occurred since then.
- The Wikipedia article does not reveal (nor hint at) the great breadth of confusion and contradiction that remains unresolved among evolutionary geneticists on this topic. The evolutionary literature on Haldane's Dilemma is a quagmire of confusion and contradiction among authorities, and the Wikipedia article conceals it.
- Creationists added the quotation from renowned evolutionary theorist, G.C. Williams, that "In my opinion the [Haldane's Dilemma] problem was never solved, by Wallace or anyone else." Evolutionists repeatedly deleted that admission from the Wikipedia article.
- The Wikipedia article does not reveal the problem, much less provide solutions.
- Evolutionists happily inserted misrepresentations of my material into the article. But after creationists corrected the misrepresentations, evolutionists deleted my material entirely. That dynamic occurred more than once in the Wikipedia editing. Recently, evolutionists again deleted the entirety of my material -- thereby leaving readers in the dark about the basis and severity of Haldane's Dilemma.
The Wikipedia article demonstrates how evolutionists actively suppress Haldane's Dilemma from public view. It is a scandal that will not go away.
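For readers wanting the arithmetic behind the 1,667 figure ReMine cites: it follows from Haldane's 1957 estimate of roughly 300 generations per gene substitution, applied to the conventional ~10-million-year human timescale with ~20-year generations (a sketch using those standard assumptions):

```python
# Haldane's Dilemma: how many beneficial substitutions fit in the time available?
TIMESCALE_YEARS = 10_000_000              # conventional time since the human/ape split
GENERATION_YEARS = 20                     # assumed human-lineage generation time
COST_GENERATIONS_PER_SUBSTITUTION = 300   # Haldane's 1957 estimate

generations_available = TIMESCALE_YEARS / GENERATION_YEARS            # 500,000 generations
substitutions = generations_available / COST_GENERATIONS_PER_SUBSTITUTION

print(f"upper limit: about {substitutions:.0f} beneficial substitutions")
# 500,000 / 300 gives roughly 1,667 -- the figure ReMine cites.
```

The limit scales linearly with the assumed timescale, so even doubling it leaves only a few thousand substitutions.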