Bio Computing
Abstract
The fields of computing and biology have begun to cross paths in new ways. This paper presents a review of current research in biological computing. Fundamental concepts are introduced, and these foundational elements are explored to discuss the possibilities of a new computing paradigm. We assume the reader to possess a basic knowledge of biology and computer science. Biological computers are special types of microcomputers that are specifically designed for medical applications. The biological computer is an implantable device that is mainly used for tasks like monitoring the body's activities or inducing therapeutic effects, all at the molecular or cellular level. The biological computer is made up of RNA (ribonucleic acid, an important participant in the synthesis of protein from amino acids), DNA (deoxyribonucleic acid, the nucleic acid molecule that carries the genetic information used by the body for the construction of cells; it is the blueprint for all living organisms), and proteins.
Introduction
It is easy to miss nature's influence on, and subsequent impact upon, living forms. This applies to our day-to-day activities as well. Humans use a variety of gadgets and gizmos without realizing that the gadget could be working on a pattern already patented and perfected by Mother Nature. Computers and software are no exception. The last few decades have ushered in the age of computers. Electronics have invaded all walks of life, and we depend on electronics to accomplish most of our day-to-day activities. As predicted by Dr. Gordon E. Moore, modern-day electronics has progressed through the miniaturization of electronic components. According to Dr. Moore, the density of integrated electronics will continue to improve once every 12 to 18 months, with a reduction in cost (Moore, 1965). True to his prediction, modern-day chips have up to 1 million transistors per mm². However, as with other things, miniaturization cannot continue forever; the laws of nature, and in particular physics, will soon catch up to impose a limit on the silicon chip. Such a limitation will not prevent us from progressing. The goal is clear, but the ways to reach it may be unusual. Imagine having billions of deoxyribonucleic acid (DNA) molecules instead of silicon chips powering the computer. The notion that silicon chips could ever be replaced will be anathema to some, but we are well on our way to some surprises. Hence it is imperative that software engineers have an understanding, even if it includes just the basics, of microorganisms and how they will impact computing. Our fascination and its logical conclusion, which is reflected in this paper, is due to the behavioral similarity between microorganisms (DNA) and computers. As soon as you understand what microorganisms can do, relating that to a computer, or to a program that runs on a computer, becomes easy. Much like microorganisms, computers have evolved over a period of time. However, time will tell if DNA will indeed play a prominent role in their march to future glory. It is our endeavor to shed light on biological computing through a layperson's eyes.
Today's silicon-based microprocessors are manufactured under the strictest of conditions. Massive filters clean the air of dust and moisture, workers don spacesuit-like gear, and the resulting systems are micro-tested for the smallest imperfection. But at a handful of labs across the country, researchers are building what they hope will be some of tomorrow's computers in environments that are far from sterile: beakers, test tubes and Petri dishes full of bacteria. Simply put, these scientists seek to create cells that can compute, endowed with intelligent genes that can add numbers, store the results in some kind of memory bank, keep time and perhaps one day even execute simple programs. All of these operations sound like what today's computers do. Yet these biological systems could open up a whole different realm of computing. "It is a mistake to envision the kind of computation that we are envisioning for living cells as being a replacement for the kinds of computers that we have now," says Tom Knight, a researcher at the MIT Artificial Intelligence Laboratory and one of the leaders in the biocomputing movement. Knight says these new computers will be a way of bridging the gap to the chemical world: "Think of it more as a process-control computer. The computer that is running a chemical factory." As a bridge to the chemical world, biocomputing is a natural. First of all, it's extremely cost-effective. Once you've programmed a single cell, you can grow billions more for the cost of
simple nutrient solutions and a lab technician's time. In the second place, biocomputers might ultimately be far more reliable than computers built from wires and silicon, for the same reason that our brains can survive the death of millions of cells and still function, whereas your Pentium-powered PC will seize up if you cut one wire. But the clincher is that every cell has a miniature chemical factory at its command: once the organism was programmed, virtually any biological chemical could be synthesized at will. That's why Knight envisions biocomputers running all kinds of biochemical systems and acting to link information technology and biotechnology. Realizing this vision, though, is going to take a while. Today a typical desktop computer can store 50 billion bits of information. As a point of comparison, Tim Gardner, a graduate student at Boston University, recently made a genetic system that can store a single bit of information: either a 1 or a 0. On an innovation timeline, today's microbial programmers are roughly where the pioneers of computer science were in the 1940s, when they built the first digital computers. Indeed, it's tempting to dismiss this research as an academic curiosity, something like building a computer out of Tinker Toys. But if the project is successful the results could be staggering. Instead of painstakingly isolating proteins, mapping genes and trying to decode the secrets of nature, bioengineers could simply program cells to do whatever was desired (say, injecting insulin as needed into a diabetic's bloodstream), much the way that a programmer can manipulate the functions of a PC. Biological machines could usher in a
whole new world of chemical control. In the long run, Knight and others say, biocomputing could create active Band-Aids capable of analyzing an injury and healing the damage. The technology could be used to program bacterial spores that would remain dormant in the soil until a chemical spill occurred, at which point the bacteria would wake up, multiply, eat the chemicals and return to dormancy. In the near term, perhaps within five years, "a soldier might be carrying a biochip device that could detect when some toxin or agent is released," says Boston University professor of biomedical engineering James Collins, another key player in the biocomputing field.
The New Biology
Biocomputing research is one of those new disciplines that cuts across well-established fields (in this case computer science and biology) but doesn't fit comfortably into either
culture. "Biologists are trained for discoveries," says Collins. "I don't push any of my students towards discovery of a new component in a biological system." Rockefeller University postdoctoral fellow Michael Elowitz explains this difference in engineering terms: "Typically in biology, one tries to reverse-engineer circuits that have already been designed and built by evolution." What Collins, Elowitz and others want to do instead is forward-engineer biological circuits, or build novel ones from scratch. But while biocomputing researchers' goals are quite different from those of cellular and molecular biologists, many of the tools they rely on are the same. And working at a bench in a biologically oriented wet lab doesn't come easy for computer scientists and engineers, many of whom are used to machines that faithfully execute the commands that they type. But in the wet lab, as the saying goes, the organism will do whatever it damn well pleases.
After nearly 30 years as a computer science researcher, MIT's Knight began to set up his biological lab three years ago, and nothing worked properly. Textbook reactions were failing. So after five months of frustratingly slow progress, he hired a biologist from the University of California, Berkeley, to come in and figure out what was wrong. She flew cross-country bearing flasks of reagents, biological samples, even her own water. Indeed, it turned out that the water in Knight's lab was the culprit: it wasn't pure enough for gene splicing. A few days after that diagnosis, the lab was up and running.
Boston University's Gardner, a physicist turned computer scientist, got around some of the challenges of setting up a lab by borrowing space from B.U. biologist Charles Cantor, who has been a leading figure in the Human Genome Project. But before Gardner turned to the flasks, vials and culture dishes, he spent the better part of a year working with Collins to build a mathematical model for their genetic one-bit switch, or flip-flop. Gardner then set about the arduous task of realizing that model in the lab. The flip-flop, explains Collins, is built from two genes that are mutually antagonistic: when one is active, or expressed, it turns the second off, and vice versa. "The idea is that you can flip between these two states with some external influence," says Collins. "It might be a blast of a chemical or a change in temperature." Since one of the two genes produces a protein that fluoresces under laser light, the researchers can use a laser-based detector to see when a cell toggles between states. In January, in the journal Nature, Gardner, Collins and Cantor described five such flip-flops that Gardner had built and inserted into E. coli.
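To make the flip-flop concrete, here is a minimal numerical sketch in Python of the two-repressor rate equations that underlie such a switch: each protein's synthesis is repressed by the other, and a transient chemical pulse flips the stored bit, which then latches. The equations follow the standard published toggle-switch form, but the parameter values, pulse timing and variable names are our own illustrative choices:

# Toy model of a genetic toggle switch: two mutually repressing proteins u and v.
# Parameters are illustrative; the real switch lives in E. coli, not in an ODE.

def simulate_toggle(t_end=60.0, dt=0.01):
    u, v = 3.8, 0.27            # start in the "u high" state (stored bit = 1)
    alpha, n = 4.0, 2.0         # synthesis rate and cooperativity (Hill coefficient)
    trace = []
    for i in range(int(t_end / dt)):
        t = i * dt
        pulse = 20.0 <= t < 30.0          # a transient "blast of a chemical"
        a_u = 0.0 if pulse else alpha     # the pulse blocks synthesis of u
        du = a_u / (1.0 + v ** n) - u     # u is made unless v represses it; u decays
        dv = alpha / (1.0 + u ** n) - v   # v is made unless u represses it; v decays
        u += du * dt
        v += dv * dt
        trace.append((t, u, v))
    return trace

trace = simulate_toggle()
before = trace[1500]   # t = 15, before the pulse
after = trace[-1]      # t = 60, long after the pulse ended
print("bit before pulse:", int(before[1] > before[2]))  # 1: u dominates
print("bit after pulse: ", int(after[1] > after[2]))    # 0: the switch latched to v

The latching after the pulse ends is the bistability that makes the circuit usable as a one-bit memory.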
Gardner says that the flip-flop is the first of a series of so-called genetic applets he hopes to create. The term applet is borrowed from contemporary computer science: it refers to a small program, usually written in the Java programming language, which is put on a Web page and performs a specific function. Just as applets can theoretically be combined into a full-fledged program, Gardner believes he can build an array of combinable genetic parts and use them to program cells to perform new functions. In the insulin-delivery example, a genetic applet that sensed the amount of glucose in a diabetic's bloodstream could be connected to a second applet that controlled the synthesis of insulin. A third applet might enable the system to respond to external events, allowing, for example, a physician to trigger insulin production manually.
GeneTic Tock
As a graduate student at Princeton University, Rockefeller's Michael Elowitz constructed a genetic applet of his own: a clock. In the world of digital computers, the clock is one of the most fundamental components. Clocks don't tell time; instead, they send out a train of pulses that are used to synchronize all the events taking place inside the machine. The first IBM PC had a clock that ticked 4.77 million times each second; today's top-of-the-line Pentium III computers have clocks that tick 800 million times a second. Elowitz's clock, by contrast, cycles once every 150 minutes or so. The biological clock consists of four genes engineered into a bacterium (see A Clock in a Cell, p. 72). Three of them work together to turn the fourth, which encodes for a fluorescent protein, on and off. Elowitz calls this a "genetic circuit." Although Elowitz's clock is a remarkable achievement, it doesn't keep great time: the span between tick and tock ranges anywhere from 120 minutes to 200 minutes. And with each clock running separately in each of many bacteria, coordination is a problem: watch one bacterium under a microscope and you'll see regular intervals of glowing and dimness as the gene for the fluorescent protein is turned on and off, but put a mass of the bacteria together and they will all be out of sync. Elowitz hopes to learn from this tumult. "This was our first attempt," he says. "What we found is that the clock we built is very noisy; there is a lot of variability. A big question is what the origin of that noise is and how one could circumvent it. And how, in fact, real circuits that are produced by evolution are able to circumvent that noise."
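The oscillation itself is easy to caricature in code. The sketch below simulates a protein-only simplification of a three-gene repression ring (gene 0 represses gene 1, which represses gene 2, which represses gene 0), with the first protein standing in for the fluorescent reporter. The real circuit also has mRNA dynamics and, as Elowitz notes, a great deal of molecular noise; our deterministic equations and parameters are illustrative only:

# Protein-only caricature of a three-gene repression ring (a ring oscillator).
# Each protein is synthesized unless the previous one in the ring represses it.

def simulate_ring(alpha=30.0, n=3.0, t_end=60.0, dt=0.01):
    p = [2.0, 1.5, 1.0]   # slightly unequal start, so the ring leaves its balance point
    reporter = []
    for _ in range(int(t_end / dt)):
        dp = [alpha / (1.0 + p[(j - 1) % 3] ** n) - p[j] for j in range(3)]
        p = [p[j] + dp[j] * dt for j in range(3)]
        reporter.append(p[0])  # p[0] plays the role of the fluorescent protein
    return reporter

reporter = simulate_ring()
late = reporter[3000:]  # discard the initial transient
print(f"reporter cycles between {min(late):.2f} and {max(late):.2f}")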
While Elowitz works to improve his timing, B.U.'s Collins and Gardner are aiming to beat the corporate clock. They've filed for patents on the genetic flip-flop, and Collins is speaking with potential investors, working to form what would be the first biocomputing company. He hopes to have funding in place and the venture launched within a few months. The prospective firm's early products might include a device that could detect food contamination or toxins used in chemical or biological warfare. This would be possible, Collins says, "if we could couple cells with chips and use them, external to the body, as sensing elements." By keeping the modified cells outside of the human body, the startup would skirt many Food and Drug Administration regulatory issues and possibly have a product on the market within a few years. But Collins' eventual goal is gene therapy: placing networks of genetic applets into a human host to treat such diseases as hemophilia or anemia.
Another possibility would be to use genetic switches to control biological reactors, which is where Knight's vision of a bridge to the chemical world comes in. "Larger chemical companies like DuPont are moving towards technologies where they can use cells as chemical factories to produce proteins," says Collins. "What you can do with these control circuits is to regulate the expression of different genes to produce your proteins of interest." Bacteria in a large bioreactor could be programmed to make different kinds of drugs, nutrients, vitamins, or even pesticides. Essentially, this would allow an entire factory to be retooled by throwing a single genetic switch.
Amorphous Computing
Two-gene switches aren't exactly new to biology, says Roger Brent, associate director of research at the Molecular Sciences Institute in Berkeley, Calif., a nonprofit research firm. Brent, who evaluated biocomputing research for the Defense Advanced Research Projects Agency, says that genetic engineers have made and used such switches of increasing sophistication since the 1970s. "We biologists have tons and tons of cells that exist in two states and change depending on external inputs."
For Brent, what's most intriguing about the B.U. researchers' genetic switch is that it could be just the beginning. "We have two-state cells. What about four-state cells? Is there some good there?" he asks. "Let's say that you could get a cell that existed in a large number of independent states and there were things happening inside the cell...which caused the cell to go from one state to another in response to different influences," Brent continues. "Can you perform any meaningful computation? If you had 16 states in a cell and the ability to have the cell communicate with its neighbors, could you do anything with that?" By itself, a single cell with 16 states couldn't do much.
But combine a billion of these cells and you suddenly have a system with four billion bits, roughly half a gigabyte, of storage. A teaspoon of programmable bacteria could potentially have a million times more memory than today's largest computers, and potentially billions upon billions of processors.
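That storage estimate is simple arithmetic: sixteen distinguishable states per cell encode log2(16) = 4 bits, and the rest is multiplication, as the short sketch below shows (the cell count is the hypothetical from the discussion above):

import math

states_per_cell = 16
bits_per_cell = math.log2(states_per_cell)   # 16 states encode log2(16) = 4 bits
cells = 1_000_000_000                        # one billion engineered cells

total_bits = cells * bits_per_cell
print(f"{bits_per_cell:.0f} bits per cell")
print(f"{total_bits:.0e} bits total = {total_bits / 8 / 1e9:.1f} gigabytes")
# -> 4 bits per cell, 4e+09 bits total = 0.5 gigabytes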
But how would you possibly program such a machine? Programming is the question that the Amorphous Computing project at MIT is trying to answer. The project's goal is to develop techniques for building self-assembling systems. Such techniques could allow bacteria in a teaspoon to find their neighbors, organize into a massive parallel-processing computer and set about solving a computationally intensive problem, like cracking an encryption key, factoring a large number or perhaps even predicting weather. Researchers at MIT have long been interested in methods of computing that employ many small computers, rather than one super-fast one. Such an approach is appealing because it could give computing a boost over the wall that many believe the silicon microprocessor evolution will soon hit (see The End of Moore's Law? p. 42). When processors can be shrunk no further, these researchers insist, the only way to achieve faster computation will be by using multiple computers in concert. Many artificial intelligence researchers also believe that it will only be possible to achieve true machine intelligence by using millions of small, connected processors, essentially modeling the connections of neurons in the human brain.
On a wall outside of MIT computer science and engineering professor Harold Abelson's fourth-floor office is one of the first tangible results of the Amorphous Computing effort. Called Gunk, it is a tangle of wires, a colony of single-board computers, each one randomly connected with three other machines in the colony. Each computer has a flashing red light; the goal of the colony is to synchronize the lights so that they flash in unison. The colony is robust in a way traditional computers are not: you can turn off any single computer or rewire its connection without changing the behavior of the overall system. But though mesmerizing to watch, the colony doesn't engage in any fundamentally important computations.
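Gunk's task, getting randomly wired nodes to flash in unison using only local interactions, is a classic synchronization problem. The toy model below uses Kuramoto-style phase coupling on a random graph in which each node is wired to roughly three neighbors; it is our analogy for what the colony does, not the actual Gunk hardware or firmware:

import cmath
import math
import random

random.seed(1)
N = 30
# Wire each node to three randomly chosen others, echoing Gunk's random wiring.
neighbors = [set() for _ in range(N)]
for i in range(N):
    while len(neighbors[i]) < 3:
        j = random.randrange(N)
        if j != i:
            neighbors[i].add(j)
            neighbors[j].add(i)   # links are symmetric, so some nodes get extras

def order(phases):
    """1.0 means every light flashes in perfect unison; near 0 means disorder."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / N

phase = [random.uniform(0, 2 * math.pi) for _ in range(N)]
print(f"coherence before: {order(phase):.2f}")

K, dt = 1.0, 0.05
for _ in range(2000):
    # Each node nudges its phase toward its neighbors' phases (a purely local rule).
    phase = [phase[i] + dt * (1.0 + K / len(neighbors[i]) *
             sum(math.sin(phase[j] - phase[i]) for j in neighbors[i]))
             for i in range(N)]
print(f"coherence after:  {order(phase):.2f}")

As in Gunk, no node is special: delete any one of them and the rest still pull each other into step.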
Five floors above Abelson's office, in Knight's biology lab, researchers are launching a more extensive foray into the world of amorphous computation: Knight's students are developing techniques for exchanging data between cells, and between cells and larger-scale computers, since communication between components is a fundamental requirement of an amorphous system. While Collins' group at B.U. is using heat and chemicals to send instructions to their switches, the Knight lab is working on a communications system based on bioluminescence: light produced by living cells. To date, work has been slow. The lab is new and, as the water-purity experience showed, the team is
inexperienced in matters of biology. But some of the slowness is also intentional: the researchers want to become as familiar as possible with the biological tools they're using in order to maximize their command of any system they eventually develop. "If you are actually going to build something that you want to control, if we have this digital circuit that we expect to have somewhat reliable behavior, then you need to understand the components," says graduate student Ron Weiss. And biology is fraught with fluctuation, Weiss points out. The precise amount of a particular protein a bacterial cell produces depends not only on the bacterial strain and the DNA sequence engineered into the cell, but also on environmental conditions such as nutrition and timing. Remarks Weiss: "The number of variables that exist is tremendous." To get a handle on all those variables, the Knight team is starting with in-depth characterizations of a few different genes for luciferase, an enzyme that allows fireflies and other luminescent organisms to produce light. Understanding the light-generation end of things is an obvious first step toward a reliable means of cell-to-cell communication. "There are cells out there that can detect light," says Knight. This might be a way for cells to signal to one another. What's more, he says, if these cells knew where they were, and were running as an organized ensemble, you could use this as a way of displaying a pattern. Ultimately, Knight's team hopes that vast ensembles of communicating cells could both perform meaningful computations and have the resiliency of Abelson's Gunk, or the human brain.
Full Speed Ahead
Even as his lab, and his field, take their first steps, Knight is looking to the future. He says he isn't concerned about the ridiculously slow speed of today's genetic approaches to biocomputing. He and other researchers started with DNA-based systems, Knight says, because genetic engineering is relatively well understood. "You start with the easy systems and move to the hard systems." And there are plenty of biological systems, including systems based on nerve cells, such as our own brains, that operate faster than it is possible to turn genes on and off, Knight says. A neuron can respond to an external stimulus, for example, in a matter of milliseconds. The downside, says Knight, is that some of the faster biological mechanisms aren't currently understood as well as genetic functions are, and so are substantially more difficult to manipulate and mix and match. Still, the Molecular Sciences Institute's Brent believes that today's DNA-based biocomputer prototypes are steppingstones to computers based on
neurochemistry. "Thirty years from now we will be using our knowledge of developmental neurobiology to grow appropriate circuits that will be made out of nerve cells and will process information like crazy," Brent predicts. Meanwhile, pioneers like Knight, Collins, Gardner and Elowitz will continue to produce new devices unlike anything that ever came out of a microprocessor factory, and to lay the foundations for a new era of computing.
Concept
This paper discusses how two diverse systems, biology and computers, are being brought together to take mankind into the future. A basic understanding of the lowest unit of life (deoxyribonucleic acid, DNA) will help. People should not imagine that DNA will replace the CPU in biological computing; in our opinion such a scenario is at least two decades or more away from reality. As with other inventions, one can safely expect baby steps in this direction before conceiving bigger pictures. Although not exceeding a few microns in size, the DNA molecule has a number of tricks that will be useful for biological computing. One of them is the ability to generate proteins: once a cell is reprogrammed, by chemical alteration or a change of environment, it does its job to near perfection under the changed conditions. Another trick that may be useful is the ability of DNA to make exact copies of itself. Imagine the advantage of having such molecules programmed for different purposes, and the impact on applied sciences like medicine, agriculture, and various industries; in effect such molecules act like microcomputers. There is as yet no clear road map for taking advantage of this programmable feature to eventually replace the CPU. In essence, biological computing is about harnessing the enormous potential of DNA, by manipulating it, to the benefit of mankind. Having laid down the concept, and to provide clarity to better understand and appreciate biological computing, we provide a brief introduction to DNA below. We also describe the similarities between DNA and the computer, briefly survey current research, and finally touch upon trends, impact and future prospects.
DNA was discovered as early as 1868. According to the Watson-Crick model, the DNA molecule consists of two polymer chains. Each chain comprises four types of residues (bases), namely A (adenyl), G (guanyl), T (thymidyl), and C (cytidyl). The sequence of bases in one chain may be entirely arbitrary, but the sequences in both chains are strongly interconnected because of the complementary principle, so that A is always opposite T, T is always opposite A, G is always opposite C, and C is always opposite G. DNA came to be recognized as the most important molecule of living nature. In living organisms, DNA does not usually exist as a single molecule, but instead as a tightly associated pair of molecules. These two long strands entwine like vines, in the shape of a double helix. The nucleotide repeats (the structural units of DNA, Figure 1) contain both a segment of the backbone of the molecule, which holds the chain together, and a base, which interacts with the other DNA strand in the helix. In general, a base linked to a sugar is called a nucleoside, and a base linked to a sugar and one or more phosphate groups is called a nucleotide. If multiple nucleotides are linked together, as in DNA, the polymer is called a polynucleotide (Frank-Kamenetskii, 1997). At the time Watson and Crick discovered the structure, it was a great step for mankind in the field of biology, but it would have been very difficult to dream that half a century later it would also help mankind in another field: computing.
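Because of the complementary principle, either strand fully determines the other, which is easy to express in code; the following small Python function (ours, for illustration) reconstructs the partner strand:

# Watson-Crick complementarity as a lookup table: A<->T, G<->C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand: str) -> str:
    """Return the complementary strand, read in the opposite direction,
    since the two chains of the double helix run antiparallel."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(partner_strand("ATTGCC"))                  # -> GGCAAT
print(partner_strand(partner_strand("ATTGCC")))  # -> ATTGCC: complementing twice restores the strand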
The similarities between the DNA molecule and a computer are more than one can imagine. A double-stranded DNA within a single cell is fully self-contained. It works with clockwork precision, and has the ability to repair itself, provide backup, create new patterns, and select the best for its survival. Most complex computers exhibit the above in one way or the other. As DNA has to survive in nature, where only the fittest survive, it has the ability to adapt to a changing environment. However, the same environment (extreme heat, chemicals, ultraviolet rays, etc.) can sometimes cause changes to DNA that may make it lose some of its magic, and in some cases these changes can be catastrophic. In real life, DNA is intelligent enough to recover from catastrophic failures. There are many tools that it carries to replicate successfully, thereby passing on the important traits to its progeny. A few of them include redundancy, self-recovery by protein synthesis/translation, and the ability to shut down malfunctioning parts of the DNA. Compare this with a computer and the software that runs the computer: even a pure software engineer will now be able to link the computer to the DNA. In fact, we would go as far as to say that what we know in computer jargon as primers, reusability, etc., has been in existence since time immemorial in the DNA molecule. Microsoft in fact coined the term DNA (for its Distributed interNet Applications architecture) in the late 1990s to market its distributed networking solutions (Microsoft has since dropped it, for whatever reasons), and one can safely assume that the name was borrowed from biology. Table 1 below compares DNA with a modern-day computer.
Figure 2: Simplified diagram of protein synthesis
We would like to touch upon a few of the points mentioned in the comparison table (Table 1) to highlight the benefits of taking biological computing to its next step, which is to make it a reality. The ability to store billions of pieces of data is an important feature of DNA, and hence of biological computing. While DNA can be measured in nanograms, the silicon chip is far behind when it comes to storage capacity. A single gram of DNA can store as much information as 1 trillion audio CDs (Fulk, 2003). This offers storage possibilities previously unheard of, and at the same time businesses could reduce the cost of storage and plough the savings into other areas. While we are all familiar with Von Neumann's sequential architecture, which has stood the test of time, the fact that we could have millions of DNA molecules in a small vial allows us to think of massive parallel computing when using microorganisms. Parallel processing using DNA can achieve speeds that man could not have imagined. For comparison, the fastest supercomputers can perform around 10^12 operations per second, but even current results with DNA computing have produced levels of 10^14 operations per second, or one hundred times faster. Experts believe that it should be possible to produce massively parallel processing in biological computers at a level of 10^17 operations per second or more, a level that silicon-based computers will never be able to match (Fulk, 2003).
Table 1: Comparison of DNA with a modern-day computer

DNA | Computer / Programs
Fully autonomous | Self-contained to a great degree; dependent on peripherals, power, etc.
Has inbuilt redundancy; ability to recover from failure (self-recovery, shutting down malfunctioning parts) is remarkable | Depending on need, has redundancy built in; redundancy, backups, disks, additional power sources, etc. help systems to recover
Can adapt to the environment | -
Stores billions of pieces of information due to its size | -
Can reproduce information with precision | -
Can be manipulated by external stimulus (chemicals, heat, etc.) | Can be manipulated by external stimulus (mouse, external commands, etc.)
Impacted by environment changes | Less impacted by environment changes
No toxic byproduct is generated | Composed of toxic materials and generates a lot of heat
Energy efficient | Far less energy efficient
Second, in the case of DNA computing, the biological reactions involved produce very little heat, wasting far less energy in the process. This allows these computing processes to be up to one billion times as energy efficient as their electronic counterparts. Third, the components of a computer with DNA as its primary unit are non-toxic, in contrast to current systems, which are highly toxic due to the use of chemicals and other materials that are not easily degradable. Not only is the material itself toxic, but in some cases its production also results in toxic byproducts. The damage such toxic materials do to the environment is unimaginable, and the cost of cleanup is also high.
Lastly, DNA has the inbuilt ability to repair itself if its functioning is impacted. This type of self-healing is not possible in a hardware-based computer. It may sound a bit like an H.G. Wells story, but imagine a computer that does not break down after a few years in operation and does not require hardware upgrades. The benefits of moving towards biological computing appear immense.
Current Research
Before embarking on this paper we did some research to find out where the world stands in terms of biological computing. As one would expect, we see a lot of baby steps being taken in this field. Part of the reason is that software engineers need to first understand the biological sciences. It is a radically different field where there is no easy way to debug, fix and re-run a program. Take for example the genetic circuit worked on by Michael Elowitz and his team (Garfinkel, 2000). The circuit consists of four genes engineered into a bacterium. Three of them work together to turn the fourth, which encodes for a fluorescent protein, on and off. Although this circuit is a remarkable achievement, it doesn't keep great time: the span between tick and tock ranges anywhere from 120 minutes to 200 minutes. And with each clock running separately in each of many bacteria, coordination is a problem: watch one bacterium under a microscope and you'll see regular intervals of glowing and dimness as the gene for the fluorescent protein is turned on and off, but put a mass of the bacteria together and they will all be out of sync. This is a big first attempt, and we have many miles to go (Garfinkel, 2000). Another interesting line of work, with a name that sounds almost like a software object, is being carried out by James J. Collins at Boston University. The main focus is on genetic applets. Similar to what a Java applet is and does, the genetic applet is modeled on the same lines, i.e. programmatically altered to perform one or more functions repeatedly with perfection (Garfinkel, 2000). One might wonder how such DNA molecules, programmed for one or a few functions, can one day replace the CPU. To answer this, one must look into the work carried out by Dr. Thomas F. Knight. His team has forayed into what is known as amorphous computing. Knight's lab is working on techniques to exchange data between cells, and between cells and large-scale computers, as communication between components is a fundamental functionality of computers. The concept of bioluminescence is used for this purpose. Needless to say, all of these techniques involve the splicing and dicing of genetic material, which is nothing but DNA.
In the preceding sections we have seen the similarities between DNA molecules and a computer. Knowingly or unknowingly, biology has been the inspiration for computers to a great extent; the similarities are too many to think otherwise. So it is time to harness the power of DNA, using computers as the inspiration. While we live in the age of computers, biological computing is slowly gaining prominence, though without much fanfare. True, biological computing has played a big role in modern medicine and will continue to do so, but a computer solely powered by microorganisms/DNA is far away. We feel that we are not even close enough to say that the next years will see the dawn of biological computing in which the CPU is replaced by DNA. Some of the challenges that stare us in the face before silicon chips can eventually be replaced with DNA include: a) the ability to control the DNA; b) how to make the various altered DNAs communicate with each other; c) can the programmed DNA or microorganism go wrong? d) can it impact health? Maybe the above will not be issues at all, but they still need to be answered. For all those hard-core computer professionals who are wedded to silicon chips, it is time to look at the future and prepare for the next big thing in computers. The future of biological computing is bright. Already some medical/industrial products, like vaccines and insulin (for diabetes treatment), are benefiting from this research. Most of the designs/patterns coming out of various software companies have already been in existence in nature (DNA), and all we need to do to use DNA effectively is to reverse-engineer it, understand its inner workings and make it fit our requirements. The advent and gaining popularity of nanotechnology offer more avenues to use DNA. Under laboratory conditions, DNA self-assembly has been demonstrated successfully: simple patterns (e.g., alternating bands, or the encoding of a binary string) that are visible through microscopy have been used for simple computations such as counting, XOR, and addition (Wooley and Lin, 2005).
Advantages
The main advantage of this technology over other similar technologies is that through it a doctor can find and treat only damaged or diseased cells: selective cell treatment is made possible. The biological computer can also perform simple mathematical calculations. This could enable the researcher to build an array or a system of biosensors able to detect or target specific types of cells in the patient's body, and to carry out target-specific medicinal operations that deliver medical procedures or remedies according to the doctor's instructions. This not only makes the healing process easier; it also allows the doctors to focus only on the damaged, diseased or cancerous cells found in the patient's body without causing stress to other healthy and normal cells.
How It Works
Biological computers are made inside a patient's body. The researchers or doctors merely provide the patient's body with all of the necessary information, a "blueprint" along whose lines the biological computer will be "manufactured." Once the "computer's" genetic blueprint has been provided, the human body will start to build it on its own, using the body's natural biological processes and the cells found in the body. As of today, reading the signals produced by cell activity directly is not possible due to technological limitations. However, through the use of a tiny implantable biological computer, these cellular signals could be detected, translated and understood using existing medical and laboratory equipment. Through Boolean logic equations, a doctor or researcher can use the biological computer to identify many types of cellular activity and determine whether a particular activity is harmful or not. The cellular activities that the biological computer could detect can even include those of mutated genes and all other activities of the genes found in cells. As with conventional computers, the biological computer works with input and output signals. The main inputs of the biological computer are the body's proteins, RNA and other specific chemicals found in the cytoplasm; the output, on the other hand, can be measured using laboratory equipment.
Applications
The implantable biological computer is a device that could be used in various medical applications where intracellular evaluation and treatment are needed. It is especially useful for monitoring intracellular activity, including the mutation of genes.
We have many interesting and ingenious ways of looking at biological processes. The biotech revolution has allowed us to develop methods for detecting and quantifying molecules produced by living cells; we can detect gene expression and activity, and we can pinpoint within a cell the precise location of proteins. However, while these tasks are relatively easy to perform in vitro on a lab bench, imagine the benefits to medicine if we could apply them in vivo (in a whole, living animal). Nanotech machines could be injected into a patient and would then monitor for certain conditions and respond accordingly. A paper published in Nature Biotechnology brings this dream a little closer to reality. Scientists at Harvard and Princeton have detailed the construction of a biological circuit that uses siRNA to implement Boolean logic statements. The circuit works by having two different mRNA strands that code for the same protein but contain untranslated regions that correspond to different siRNA sequences. Different endogenous inputs control the expression of the various siRNAs, thereby determining which of the two mRNA strands gets expressed. An example would be inputs A and B gating one mRNA, and inputs X and Y gating the other, giving the logic expression (A AND B) OR (X AND Y). Other mRNA strands can be designed to implement (A AND NOT B), and so on. The output of the mRNA strand that isn't silenced can be a reporter protein: luciferase or GFP, for example. Although this research describes relatively simple artificial molecular machinery, it doesn't take much imagination to see the potential. Biological machines could be implanted, or even built within a patient's own cells, to act as biosensors, watching out for disease markers. Should they find such markers, molecular logic circuits like this could choose the most appropriate action. That could involve inducing programmed cell death in the case of cancerous cells, or synthesis of a drug in specific tissues. Such therapies remain vaporware for now, but that may not stay true for much longer.
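The logic of that circuit can be captured in a few lines. In the sketch below, which is our illustration of the scheme just described (the input names are invented), an mRNA strand survives only if every input wired to it is present, since any absent input drives expression of an siRNA that silences that strand, and the reporter appears if either strand survives:

def mrna_survives(required_inputs, present_inputs):
    """A strand is silenced unless all of its wired inputs are present:
    each absent input switches on an siRNA that targets this strand's UTR."""
    return required_inputs <= present_inputs   # subset test

def reporter_on(present_inputs):
    # Two mRNA strands code for the same reporter protein (e.g. GFP),
    # together implementing (A AND B) OR (X AND Y).
    return (mrna_survives({"A", "B"}, present_inputs) or
            mrna_survives({"X", "Y"}, present_inputs))

for present in [{"A", "B"}, {"A", "Y"}, {"X", "Y"}, set()]:
    print(sorted(present), "->", reporter_on(present))
# ['A', 'B'] -> True; ['A', 'Y'] -> False; ['X', 'Y'] -> True; [] -> False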
Biocomputers constructed entirely of DNA, RNA and proteins can function inside the body as "molecular doctors," according to Harvard's Yaakov (Kobi) Benenson, a Bauer Fellow in the Faculty of Arts and Sciences' Center for Systems Biology. "Each human cell already has all of the tools required to build these biocomputers on its own," says Benenson. "All that must be provided is a genetic blueprint of the machine and our own biology will do the rest. Your cells will literally build these biocomputers for you."
Benenson and colleagues have demonstrated that biocomputers can work in human kidney cells in culture.
Also, they have developed a conceptual framework by which various phenotypes can be represented logically. Phenotypes are measurable characteristics that are expressed in only a subset of the individuals within a population (like blond hair or brown eyes).
In theory, using a biocomputer as the calculation mechanism, researchers could build biosensors or medicine-delivery systems that single out specific cell types in the body. These molecular doctors could target only cancerous cells, for example, ignoring healthy ones.
Biomolecular computers have been proved in concept by researchers at the Weizmann Institute of Science; see the article "Biomolecular Computer: The Tiniest Doc?". Dr. Leonard Adleman, a computer scientist at USC, discussed the possibility of biocomputers as early as 1994. Science fiction fans didn't have to wait so long; they could read about the intellectual cells in Greg Bear's 1984 novel Blood Music: "His first E. coli mutations had had the learning capacity of planarian worms; he had run them through simple T-mazes, giving sugar rewards. They had soon outperformed planaria... Removing the finest biologic sequences from the altered E. coli, he had incorporated them into B-lymphocytes, white cells from his own blood... Using artificial proteins and hormones as a means of communication, Vergil had 'trained' the lymphocytes in the past six months to interact as much as possible with each other and with their environment, a much more complex miniature glass maze."
http://www.technovelgy.com/ct/Science-Fi...wsNum=1051
For a scientist who has just staked a claim to the first programmable and autonomous biological nanocomputer, Professor Ehud Shapiro is remarkably low-key when asked to predict how such research may eventually change the world.
He refuses to be drawn into detailed discussions of futuristic applications for the technology, and prefers to leave prophesying to others. At the same time, his incremental approach to the embryonic science of turning DNA into trillions of tiny computers swimming inside a test tube has given Shapiro a keen sense of direction as he embarks upon a long-term mission.
Shapiro does not see his computer as a potential competitor to silicon-based electronic computing, as some have suggested. Instead, he envisions DNA computers as a "molecular computing device that can operate initially in a test tube and eventually inside an organism and interact with its biochemical environment."
DNA computing could possibly be used to streamline laboratory analysis of DNA, by eliminating the need for sequencing. This, he said, could happen within a decade.
"In the longer term, you may have medical applications in which this device can operate in vivo, inside a living organism," he says. "Based on the information it receives from the environment and medical knowledge encoded in the software it may diagnose the problem and prescribe a solution, and then it could synthesis that molecule and output it."
Shapiro's modest initial goal was to find a way to turn DNA into the most elementary mathematical computing device known, a finite automaton, capable of answering "yes" or "no" to very basic questions about a string of zeroes and ones. He and his colleagues constructed a molecular realization of this mathematical device: it has input, it has software and it has hardware components, and when it computes it produces output, which is another molecule. To do this, Shapiro and his colleagues used the four components of a DNA strand, known as A, C, G and T, to encode the zeroes and ones and create an input molecule with an exposed "sticky" end. Then another DNA strand, the software, swoops in to try to hook up with the exposed edge, like a Lego piece attempting to lock into a complementary block. Each exposed edge has a specific complementary DNA strand. After the hookup, the hardware gets to work. An enzyme called ligase seals the
link, and another, called FokI, moves in to snip the strand, leaving the next section exposed. The process continues several times until the computer delivers an answer to the question. There are 765 different possible software programs that can be used for simple calculations, such as determining whether there is an even or odd number of zeroes or ones.
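In software terms, the device is an ordinary two-state finite automaton: the sticky end encodes the current state together with the unread input, the software strands encode the transition rules, and ligase and FokI execute one transition per symbol. Below is the equivalent abstract computation in Python, using the even-number-of-ones question as the example program (a sketch of the abstract machine, not of the wet protocol):

def run_automaton(rules, start, accepting, tape):
    """Apply one transition per input symbol; the molecular version performs this
    with one cycle of hybridization, ligation and FokI cleavage per symbol."""
    state = start
    for symbol in tape:
        state = rules[(state, symbol)]
    return state in accepting   # a final "output molecule" reports yes or no

# One of the possible programs: answer "yes" iff the input holds an even number of 1s.
even_ones = {("S0", "0"): "S0", ("S0", "1"): "S1",
             ("S1", "0"): "S1", ("S1", "1"): "S0"}

print(run_automaton(even_ones, "S0", {"S0"}, "0110"))  # True: two 1s
print(run_automaton(even_ones, "S0", {"S0"}, "1000"))  # False: one 1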
Shapiro's research is the latest step forward in a field founded by Leonard Adleman of the University of Southern California, Los Angeles. In 1994, Adleman proved that DNA could compute when he used it to solve an instance of the "traveling salesman" problem, in which the shortest route between several cities must be mapped without going through the same city twice. Conventional computers have extreme difficulty solving the problem, especially when dealing with many points on a map. This is because electronic computers are based on sequential logic, which makes them good at solving problems requiring lots of computations in a row. But posed with the puzzle of figuring out the shortest route between 100 cities, a problem best cracked by simultaneously performing an enormous number of short operations, conventional computers do not make the grade. Adleman demonstrated that DNA could be an efficient way to solve such problems.
Shapiro says his DNA computer is fundamentally different from Adleman's breakthrough. Although Adleman's computer was composed of many trillions of tiny DNA molecules swimming around in a test tube, Shapiro says it was essentially a large operation that required the active involvement of scientists. "The calculation needed to be carried out by humans. In our case, the computer is just the molecules," says Shapiro, who can put a trillion of his own biological computers into a drop of solution. "His computer is measured in meters, ours is measured in nanometers."
Experts point out that Shapiro faces stiff competition and will be challenged to scale up the work to perform more complex computations. John Reif, professor of computer science at Duke University, described Shapiro's work as "ingeniously constructed experiments" that clearly demonstrated the ability to perform simple computations via solid experimental protocols. "But there is a lot of competition out there in the DNA computing world," he added, singling out DNA computing research at Princeton University and the University of Wisconsin that has gone beyond the finite automaton. "People are really aggressively pushing the limits, so the challenge for the Israelis is to go in and push those limits as defined by some of those strong competitors," Reif said.
Shapiro has no illusions. The biggest stumbling block now is the dependency on natural enzymes, meaning scientists must search for the right enzymes that could help perform computations on DNA. Science still has no clue how to create designer enzymes that could pave the way to dramatic progress. For his part, alongside the finite automaton, Shapiro has taken an important theoretical step forward by building a model of a molecular Turing machine, a representation of a computing device capable of an unbounded number of computations. It is in this green, squarish model, sitting in a cardboard box in his office, that Shapiro sees the real potential for molecular computing. The ability to create a molecular Turing machine would allow scientists to use DNA to generate massive computing power. In the meantime, he is keeping focused on the scientific challenges ahead, and plans to be tied up in his DNA strands for a while.
Conclusion
Biological computing is a young field which attempts to extract computing power from the collective action of large numbers of biological molecules. In our opinion, the replacement of the CPU by biological molecules remains in the far future. However, if one imagines such a scenario, then it is natural to think of a biological computer as a massively parallel machine in which each processor consists of a single biological macromolecule. By employing extremely large numbers of such macromolecules in parallel, one can hope to solve computational problems more quickly than the fastest conventional supercomputers. To many pure software professionals this may be far-fetched. A good compromise could be a hybrid system: part of the system made of biological components, and the rest of current or new hardware that may become available. This would give us the combined benefits of both systems. Companies and scientists involved in biological computing work will also need to take care of legal and moral regulations. Maybe it is time to move beyond Moore's law, as the rate of doubling has slowed down (Mathews, 2006). We need to think of computing in a radically different way, and who knows: in the near future we may be tackling real viruses instead of electronic ones.