Life is COMPLEX!

DeepScience Summary of Design vs Darwin v1.2
DeepScience Summary of Design vs Darwin v1.1 (this should be the main location eventually)
DeepScience Summary of Theistic Evolution

Chance and DNA
British cosmologist Sir Fred Hoyle was an atheistic evolutionist when he began his inquiry into the chances of a living thing evolving from chemicals. (He has since greatly changed his view.) Hoyle has said that if you filled the solar system shoulder-to-shoulder with blind men shuffling Rubik's cubes randomly (this would mean 10^50 blind men), the chances of getting one simple long-chain molecule of the type on which life depends are the same as the chances of all of those blind men simultaneously achieving the solution by random shuffling! He further points out that we would then have only one single useless molecule, compared to the intricate and interrelated machinery of a functioning, living cell.

Deep Blue Sea
Apparently IBM has been building a supercomputer that will simulate a protein molecule being created. The new computer, called "Blue Gene", will be 500 times as fast as the fastest supercomputers on the planet (as of the end of 1999). It will still take Blue Gene a year to simulate the protein folding. In humans the actual process takes only a second. Ianman continues:

You know, since there are (from memory) 31.56 megaseconds in a year, Blue Gene would have to be 31.56 million times as fast as it's going to be to actually work at the speed of life. (One year to model something that happens in one second.) That's 15.78 billion times as fast as the fastest supercomputer (as of the end of 1999). The proposed speed of Blue Gene is hardly the speed of life. Go life!
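The arithmetic above can be checked directly. A minimal sketch (the 500x figure is the article's; the year length is the standard 365.25 days):

```python
# Seconds in a year (365.25 days) -- the "31.56 megaseconds" figure.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # 31,557,600 s, i.e. ~31.56 Ms

# Blue Gene: one year of computing to simulate one second of folding,
# so it would need to be ~31.56 million times faster to run in real time.
realtime_factor = SECONDS_PER_YEAR / 1.0

# Blue Gene is planned to be 500x the fastest 1999 supercomputer, so
# relative to that machine the real-time shortfall is 500x larger still.
vs_fastest_1999 = realtime_factor * 500

print(round(realtime_factor / 1e6, 2))  # ~31.56 (million times)
print(round(vs_fastest_1999 / 1e9, 2))  # ~15.78 (billion times)
```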

IBM Research Page
IBM Announcements Page

Unbelievable Super Complexity
Today the scientists working on mapping the human genetic code decided to do another news release. Apparently the whole thing is, get this, incredibly more complex than anyone ever thought. Human DNA consists of about 3 billion base pairs (represented by just 4 different bases). Out of that huge code (about 500,000 pages worth, or 756MB) about 1% codes for proteins. But instead of each bit of code making just one simple protein, many proteins are made. Instead of being simple, they are incredibly more complex than the proteins in any other living thing. Wait, there's more. Each protein can be found in multiple forms, and each form can do multiple different jobs in the same cell (let alone other cells). Then comes the interaction between the different proteins, and finally, this matter of the 99% of our DNA which they don't even have a clue about! This, folks, is complex. If you found a watch in a field you'd know someone made it. If you study the genetic code of life you can't run away from the fact of its Creator.
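The data-size figures above are consistent with a two-bits-per-base encoding. A quick estimate, assuming roughly 3 billion base pairs (it lands near, not exactly on, the 756MB quoted):

```python
# Human DNA as raw data: 4 possible bases (A, C, G, T) means each base
# fits in 2 bits (log2(4) = 2).
BASE_PAIRS = 3_000_000_000   # ~3 billion base pairs
BITS_PER_BASE = 2

megabytes = BASE_PAIRS * BITS_PER_BASE / 8 / 1_000_000
print(megabytes)             # 750.0 MB -- in the ballpark of the ~756MB quoted

# The 500,000-page figure implies about 6,000 letters per printed page:
letters_per_page = BASE_PAIRS / 500_000
print(letters_per_page)      # 6000.0
```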

Steve Lohr Article
December 6, 1999 IBM Plans Computer to Work at Speed of Life By STEVE LOHR Two years ago, when an IBM supercomputer known as Deep Blue beat the world chess champion, Garry Kasparov, it seemed a confirmation of the Computer Age, a triumph of machine over man. On Monday, IBM is announcing a five-year, $100-million program to build a supercomputer whose ambitions dwarf Deep Blue's.

The goal of the new supercomputer, to be called Blue Gene, is to simulate one of the most common routines in natural biology -- the process by which amino acids intricately fold themselves into full-fledged proteins, the body's molecular work force whose chores range from metabolizing food to fighting disease.

Protein folding may be routine, but it is a routine of enormous complexity. To simulate the process is beyond the reach of contemporary computing. To meet the challenge, Blue Gene is being built to run 500 times faster than the world's fastest supercomputer today, by using an innovative design that seeks to sharply accelerate the already torrid rate at which the speed of computers improves.

If successful, the IBM project would not only be a breakthrough in cutting-edge computing, but would also help supply fundamental insights into the basic physics and chemistry of biology, opening the door to a new understanding of diseases and more effective drugs, perhaps ones individually tailored to a person's genetic makeup.

The publicly financed human genome project, an international research initiative begun in 1990 with the goal of deciphering all of the human genetic code by 2005, is already generating a huge quantity of biological data for computation. IBM's Blue Gene program is, in a sense, an effort to take the next step -- feeding the genetic data into a supercomputer to try to understand basic biological processes.

Computer scientists and biologists who have been briefed in advance of Monday's announcement are impressed by the ambition and promise of the IBM effort.

"The combination of all the ideas that IBM is putting together to make a supercomputer on this scale is really exciting," said Ken Kennedy, a computer scientist at Rice University who is co-chairman of the President's Information Technology Advisory Committee.

"And there will be a lot more benefit to society from this project than there was from having Deep Blue beat Kasparov in chess," he added.

Biologists and medical experts say the broadest impact of the IBM research, at least during the next few years, will likely come from its contribution to improving the field of computer simulations of molecular biology in general, rather than the "grand challenge" of protein folding itself. It is expected to take five years before Blue Gene is ready to begin the marathon simulations of protein folding.

For more than a decade, biological researchers have used computer simulations to study the activity of proteins in the body -- how drugs bind to proteins, for example, or how cell membranes absorb some substances while screening out others.

Being able to run faster, longer and better simulations of more modest molecular mysteries than protein folding could have big health-care payoffs in understanding ailments like heart disease and high blood pressure.

"The promise of what IBM is doing is far beyond the one machine," said Bernard Brooks, a principal investigator at the National Institutes of Health and a leading expert in computer simulations of molecular biology. "The really important work that can be done with this technology is in smaller-scale simulations rather than the demonstration project of protein folding."

For IBM, Blue Gene is a research program of its renowned Watson Labs. But it is the expected trickle-down of research knowledge into commercial uses that justifies the company's $100 million investment in Blue Gene.

Since mid-1998, IBM has jumped from third in supercomputer installations worldwide, after the Cray division of Silicon Graphics and Sun Microsystems, to the top spot.

In that time, IBM has nearly doubled its share of the 500 most powerful machines, from 75 to 141 last month, according to "The Top 500 Supercomputing Sites," a list compiled by three academic supercomputer experts. At the same time, the number of installations for Cray fell sharply while Sun Microsystems held steady.

"There is no doubt in our mind that a lot of that improvement is because of what we learned with Deep Blue," said Paul Horn, senior vice president of research. "The payoff can be enormous."

Several IBM supercomputers are already at work on the human genome project worldwide, including one that is host to one of the project's central databases in Toronto.

The announcement Monday, just as the project is getting under way, is also clearly an image-burnishing step by IBM, intended to emphasize its commitment to supercomputing and to research. Blue Gene, experts agree, is a multidisciplinary endeavor requiring not only computer hardware, software and manufacturing expertise but also mathematicians, biologists, chemists and physicists.

In addition, the Blue Gene project should serve as a kind of recruiting tool for IBM research -- and perhaps serve as a venture that could lift the stature of computer-science research in general. Such a lift, according to Kennedy of Rice University, is badly needed. Computer talent, to be sure, has perhaps never been in such great demand as it is today. Yet the excitement of Internet start-ups and the lure of stock options, Kennedy notes, has meant that computer-science students increasingly shun graduate studies and advanced research.

"A few projects like this could re-establish research institutions -- academic or corporate -- as centers of excitement in computing," Kennedy said. "It's going to bring some of those minds back."

The frontier of computational biology is certainly a field that can stir excitement in the research community as well as hold out the promise of being a huge industry someday. In the last few years, IBM has built a 30-person team of researchers in computational biology.

IBM hopes its supercomputer project will stimulate the field. "We want to attract significant interest and involvement from university researchers and from the scientific community in general," Dr. Sharon Nunes, a senior research manager, said. "If we can influence this fundamental research, it will happen faster."

The computing innovation behind Blue Gene, in essence, is to build a computer that works much as nature works -- a triumph, if it succeeds, of marrying simplicity and complexity.

The computer scientists at IBM plan to sharply simplify the RISC (reduced instruction-set computing) architecture used in the chips that run engineering work stations and supercomputers today. The "instruction set" -- the total vocabulary of machine-language instructions a computer understands -- will number 57 for Blue Gene, compared with about 200 for most RISC machines.

Then, instead of putting a single microprocessor on a chip, Blue Gene will have 32 microprocessors -- the calculating engines of computers -- on each chip. Sixty-four such chips will be inserted on each motherboard, with eight motherboards in each of the 64 computing towers of Blue Gene.

When completed, Blue Gene will stand about six feet high, occupying a floor space of 40 feet by 40 feet at the Watson labs in Yorktown Heights, N.Y. It will have a total of about 1 million microprocessors.
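The "about 1 million microprocessors" follows directly from the hardware hierarchy the article describes:

```python
# Blue Gene's planned layout, as described in the article.
PROCESSORS_PER_CHIP = 32
CHIPS_PER_BOARD = 64
BOARDS_PER_TOWER = 8
TOWERS = 64

total = PROCESSORS_PER_CHIP * CHIPS_PER_BOARD * BOARDS_PER_TOWER * TOWERS
print(total)  # 1,048,576 -- "about 1 million microprocessors"
```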

Among the innovations computer scientists find most impressive about Blue Gene is that IBM will place memory for storing data on the same chip as the microprocessor. In conventional computer designs, the memory for storage is separate from the processor.

Shuttling data from the memory to the processor is a major bottleneck in computers, slowing them down. Only within the last year or so, because of advances in chip making and miniaturization, has it become possible to consider putting memory and processing on the same chip in the way that IBM is developing.

To attain the speeds Blue Gene seeks within five years, IBM must try a new architecture of computing. The conventional wisdom holds that microprocessor speeds can theoretically double every 18 months, a phenomenon known as Moore's Law, for Gordon Moore, the chip pioneer who first observed it. With Moore's Law, it would take about 15 years to achieve the speed target for Blue Gene.
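The "about 15 years" claim can be sanity-checked: reaching a 500x speedup by doubling every 18 months takes log2(500), or roughly nine, doublings. A sketch of that calculation:

```python
import math

SPEEDUP_TARGET = 500         # Blue Gene vs. the fastest 1999 machine
DOUBLING_PERIOD_YEARS = 1.5  # Moore's Law: speed doubles every 18 months

doublings = math.log2(SPEEDUP_TARGET)      # ~8.97 doublings needed
years = doublings * DOUBLING_PERIOD_YEARS  # ~13.4 years
print(round(years, 1))  # 13.4 -- which the article rounds to "about 15 years"
```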

"There's no way you get to where IBM is heading unless you change today's computing architecture," said Arvind, a computer scientist at the Massachusetts Institute of Technology, who uses only a single name. "It looks as if they have an outstanding engineering plan. If they can execute it properly, it will be a real breakthrough."

Blue Gene's speed target is a petaflop -- that is, a thousand trillion floating point operations, or calculations, each second. Such a speed would make the machine 500 times faster than the two fastest supercomputers in operation today -- an IBM supercomputer at the Lawrence Livermore national laboratory, and an Intel machine at the Los Alamos lab.

To translate Blue Gene's speed into a personal computer scale: If a fast PC was represented as an inch tall, the IBM machine would be 20 miles high.
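Working the inch-to-20-miles analogy backwards implies a PC speed of just under 800 megaflops. This is a back-of-envelope check; the PC figure is inferred from the analogy, not stated in the article:

```python
PETAFLOP = 1e15               # Blue Gene's target: 10^15 operations per second
INCHES_PER_MILE = 5280 * 12   # 5,280 feet per mile, 12 inches per foot

scale = 20 * INCHES_PER_MILE  # 20 miles expressed in inches
implied_pc_flops = PETAFLOP / scale
print(round(implied_pc_flops / 1e6))  # ~789 megaflops, plausible for a fast 1999 PC
```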

The hardware design of Blue Gene is innovative indeed, but the real challenge, as is so often the case in computing, will be the software. For in simplifying the hardware design for speed, the complexity of protein folding is left to the software. And the software, among other things, has to be "self-healing" so that the simulation does not grind to a halt if a few processors break down. The software must recognize the flawed processors and re-route the data.

"We have some idea how we're going to do this," said Marc Snir, a senior researcher at Watson. "But I would be lying if I said we have solved this. We do have research to do."

If all the computer wizardry works as planned, it will still take Blue Gene about a year to simulate on the computer the folding of a single protein. How long does it take the body to fold one? Less than a second.

"It is absolutely amazing the complexity of the problem and the simplicity with which the body does it every day," Ajay Royyuru, a researcher in IBM's computational biology center, noted.

Intelligent Design vs. Darwinian Evolution Summary of Issues
Version 1.2
2004 Trevor Mander, www.DeepScience.com

General Observations
1. Darwinists use the news media to cast all opponents as religious dogmatists who prevent learning and insert religion into secular schools.

2. These are all attacks on character (ad hominem) which don't deal with the scientific issues.

3. The fact remains that there are scientific problems with Darwinism that are quite independent of what anybody thinks of the Bible.

4. In addition, the doctrine of Darwinism can be shown to be a philosophical assumption not proved by scientific observation.

5. Intelligent design includes a belief in God, Darwinism includes a belief in materialism. Both are "religious" or philosophical worldviews.

6. Materialism and Naturalism did not found modern science, but the belief in God did: For example, Johannes Kepler, Blaise Pascal, Robert Boyle, Nicolaus Steno, Isaac Newton, Michael Faraday, Louis Agassiz, James Young Simpson, Gregor Mendel, Louis Pasteur, William Thomson (Lord Kelvin), Joseph Lister, James Clerk Maxwell, William Ramsay.

7. There is a difference between origin science (a type of forensic science which looks into evidence for past events) and operation science (which is observation of current events).

Intelligent Design
1. Entropy (chaos) is increasing - therefore there was a beginning to the universe

2. Time is Limited - only a finite number of moments before this one

3. Limited Causality - can't have infinite series of causes of "being".

4. The universe had a beginning - three logical possibilities: Uncaused (but nothing cannot cause something); Self-caused (but it would have to exist before it existed in order to cause its own existence - which is silly, like pulling yourself into the air by tugging on your shoe laces); Caused by another - the most logical choice.

5. Anthropic principle - Earth is so finely balanced to support life that it is practically impossible (as opposed to theoretically impossible) that this would have come about by random chance.

6. Intelligent Design arguments do not rule out religious solutions to the problem of pain and suffering in the world like materialistic Darwinian evolution does.

7. Specified complexity exists in all living things. There is no simple life - even a single-celled amoeba has the complexity of the city of London and reproduces that complexity in only 20 minutes.

Primary information is the chemical structure of something. E.g., cover a school whiteboard with marker ink - the ink has a chemical structure. Secondary information is information added on top of, and in addition to, the chemical structural layer. E.g., write "take out the rubbish please" on the school whiteboard. The ink still has exactly the same chemical structure, but that structure now also carries a second level of information. The message has nothing to do with the chemical structure - it has its own meaning.

Just as books are more than just complicated objects (binding, pages, ink etc.), living things are more than just complicated groupings of chemicals. Both have specified complexity, not just complexity. It's the difference between any old mountain and Mt Rushmore. There is no natural process which can blindly construct secondary information structures. Only a guided process of construction can do it, i.e. either copying information from one place to another or the presence of an intelligent designer.

DNA, the building plan for all living things, has a chemical structure and a secondary information structure. If written down, the code for a human being might cover 500,000 pages of text. The presence of a digital watch lying in a field would point to an intelligent designer because of its specified complexity. The simplest form of life is more complicated than a 747 jumbo jet.

Darwinian Evolution
Darwinism Defined:
Many transitional forms will be found (there are fewer today than in Darwin's day; some forms were found to be fakes)
New species will be made (they have not been)
Purely natural processes (natural selection and random mutation) have created the different species observed today

Common Problems:
1. Incorrect Distinction: There is a difference between micro-evolution and macro-evolution. Evidence given for evolution is (almost always) evidence of micro-evolution - small changes within a species. Everybody accepts that micro-evolution (commonly known just as "evolution") occurs. Evidence for (micro) evolution is not evidence for macro-evolution, Darwinian evolution, or big changes from one species to another.

2. Begging the question, avoiding the issue, and materialism "You have a religious bias, why can't you accept the findings of science?" "All events are natural events because supernatural events don't happen."

3. Distancing the problem doesn't make it go away "Ok, so life didn't form on earth, it came from outer space."

4. Not understanding the problem "simple life is easy to make."

5. Strawman arguments that also attack character, not the issue: "All creationists are biblical literalists." "All creationists believe the universe is 10,000 years old." "All creationists are fundamentalist Christians who don't have proper training in science."

6. Absence of Precambrian fossil ancestors. The missing link is still missing. There are a bunch of fossils in the ground. People can see links between them in the same way that people see shapes in the clouds. Recognising similar design is not the same as showing causal order.

7. Can't offer a solution to the existence of pain and suffering

8. Surveys suggest about 90% of people don't believe in Darwinian evolution

9. It is essentially a religious dogma unsupported by the scientific evidence

10. Changes in Finch beak length shows adaptation or micro evolution, it is not proof that humans are the result of a random, purposeless, materialist universe, slowly being accidentally changed from an amoeba.

11. Ultimately Darwinian evolution can't explain: the origin of first life (incredible specified complexity); the origin of species (fundamentally different forms of specified complexity).
