The Poverty of the Instructed Organism
Are You and Your Cells Programmed?
By Stephen L.
This article builds on several previous essays. You will find the latest versions of all the currently available articles in this series at the website, “From Mechanism to a Science of Qualities”. By clicking on the shaded rectangles at the end of many scientific terms, you can immediately read a definition of the terms in a separate glossary window (or tab, if your browser is set that way).
For more than two centuries biologists have struggled to find an elusive pathway between the lifelessness of inanimate nature and what they feared as the supernatural, mystical, or vitalistic excess easily induced in weak, sentimental observers by an overly close, personal attention to actual living creatures.
As a convinced materialist, the biologist prefers to model the organism as nothing more than an improbably differentiated and well-crafted mass of law-abiding physical substance. (The materials in the human body, as the old saying went, have a market value of about 97 cents — before adjusting for inflation.) Such a mass, of course, is receptive to all those “mechanistic explanations” we ceaselessly encounter in the technical literature. The only problem is that, however intricately assembled, a lump of earth is not very convincing at sustaining a pretense of life.
But on the other hand, we hear the actual organism faithfully described as a striving, intending, cognitively shrewd creature through and through. It senses its environment, responds intelligently according to its own needs, maintains homeostasis, has become a crafty expert at surviving and reproducing, and knows exactly when to feign altruism in its own interest (or the interest of its genes). The fly in the ointment here is that such a self-possessed being of subtle insight and well-directed initiative seems almost preternatural — too alive for a materialistic biology still obsessed with the real or imagined sins of the vitalists.
The solution to both the lack of vitality and the excess of it has proven effective beyond all hopes. First we educate the earthen mass by investing a subset of its molecules — DNA and RNA above all — with an impressive technical content called “information”. And then we tame the too-exuberant and cunningly intelligent life of the organism by reconceiving it as the mechanistic operation of an information-processing computer. Our lump of earth, now pulsing with computational know-how, comes safely and mechanistically “alive”.
The great advantage of programmatic information is that it can seem to rival Solomon’s sagacity while at the same time lying low and disguising itself as “dumb matter”. And so, in this age of information and computers, the materialist has learned to breathe freely at last. “There is no spirit-driven life force”, proclaimed Richard Dawkins with an almost audible sigh of relief, “no throbbing, heaving, pullulating, protoplasmic, mystic jelly. Life is just bytes and bytes and bytes of digital information” (1995, pp. 18-19).
But perhaps we should take a look for ourselves.
Suppose a mutation changes a single “letter”, or nucleotide base, of the DNA double helix in one of our chromosomes. According to a common informational viewpoint, this now-altered letter, like every other letter, encodes (roughly) two bits of information, representing a choice among the four possible bases at that position.[1]
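The “two bits per letter” figure comes straight from Shannon’s measure: a choice among four equally likely alternatives carries log2(4) = 2 bits. A minimal sketch of that arithmetic (assuming, as the informational viewpoint does, that each base is an independent, uniform choice):

```python
import math

def bits_per_symbol(alphabet_size: int) -> float:
    """Shannon information of one symbol drawn uniformly from an alphabet."""
    return math.log2(alphabet_size)

def entropy(probs) -> float:
    """Shannon entropy (in bits) of an explicit probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One DNA "letter": a uniform choice among A, C, G, T.
print(bits_per_symbol(4))                  # 2.0 bits
print(entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0 bits, same figure
```

Note that this calculation is exactly as far as the informational picture goes: it counts the alternatives at a position, and says nothing about what the passage below describes — the many physical and regulatory consequences a particular choice can have.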
How, in reality, might we assess the significance of the change for the organism? (You will recall from biology class that DNA can serve as a kind of template for transcription, whereby a complementary RNA molecule is produced. This RNA, if derived from a protein-coding gene, may in turn be transported from the nucleus into the cytoplasm and there translated into protein. The production of RNA and protein from genes is referred to as gene expression.)
Our single mutated letter, as it happens, may exert its influences (and in turn be influenced) in remarkably diverse ways. It might, for example, raise or lower the melting temperature of its local stretch of DNA — the temperature at which the two strands of the DNA double helix disengage from each other — thereby making transcription easier or more difficult at that location. Or it might contribute toward a more or less bendable DNA sequence, crucially affecting the DNA’s ability to interact with various regulatory proteins. Or it might promote a local narrowing of one of the two spiraling grooves running the length of the double helix, again modulating accessibility for certain regulatory factors. Or it might alter the electrical characteristics of the DNA. Or, in various other ways, it might change the local binding potentials of any of the numerous molecules involved in structuring chromatin — the elaborate DNA-RNA-protein complex that constitutes our chromosomes and is so central to today’s vast literature on gene regulation. And, yet again, it might contribute to the formation of those non-canonical DNA structures (alternatives to the straightforward double helix) now gaining increasing attention — structures such as G-quadruplexes, Hoogsteen base pairs, and DNA-RNA triplexes. Or it might do several of these things at once — and many others, still unmentioned.
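The melting-temperature point above can be made quantitative even with a crude textbook approximation. The Wallace rule (valid only for short oligonucleotides, and ignoring stacking, salt concentration, and sequence context) estimates Tm as 2 °C per A/T pair plus 4 °C per G/C pair, so a single A-to-G substitution shifts the estimate by about 2 °C. The sequences below are invented for illustration:

```python
def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature estimate (short oligos only):
    Tm ~= 2 * (#A + #T) + 4 * (#G + #C), in degrees Celsius."""
    seq = seq.upper()
    at = sum(seq.count(b) for b in "AT")
    gc = sum(seq.count(b) for b in "GC")
    return 2 * at + 4 * gc

original = "ATGCATGCATGC"
mutated  = "GTGCATGCATGC"   # single A -> G substitution at the first position

print(wallace_tm(original))  # 36
print(wallace_tm(mutated))   # 38: one "letter" shifts the estimate by 2 degrees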
All such effects of our mutation can have dramatic implications for gene expression. And they can also influence yet higher-order structural features that bear on expression. For example, by contributing to an anchoring sequence for a particular protein known as “CTCF”, a single base change can modulate the way a chromosome segment loops dynamically through the cell nucleus, bringing nearby or distant segments of the chromosome together according to their need for interaction. Or it might add its own influence to the various forces and factors determining how entire chromosomes are disposed spatially in the nucleus — a spatial distribution that relates to chromosome territories, “transcription factories”, and the distribution of chromosome regions between the periphery of the nucleus and its interior. All this dynamic activity is the means by which genes come to expression in the right intensity, in the right combination, at the right time, in the right place, and in the company of the right molecular partners.
In other words (as so many technical papers emphasize today): genes must find their places within the ever-changing, three-dimensional, spatiotemporal order of the nucleus in order to contribute their part to the current needs of the organism. And any base change can modulate this well-choreographed performance of the larger nucleus.
There’s also the whole matter of DNA repair and rearrangement. Depending on circumstances, the mutation might trigger one of the DNA damage response pathways, and this could lead to a “correction” of the mutation. But it could also lead to a more radical remodeling of the DNA around the mutation. Moreover, if the mutation occurs in a germline cell, it could transform one of many cryptic meiotic recombination hotspots into an active hotspot, leading to a kind of larger-scale DNA reorganization involving chromosomal crossover, which occurs during meiotic cell division in sexual reproduction (Wahls and Davidson 2011).
Classically, the geneticist would have asked first of all about the effects of our mutation either upon a protein-coding gene, or upon the activity of the gene’s associated transcription factors — proteins that bind to DNA and help to bring about gene expression or repress it. Transcription factors were thought to “read” DNA as a straightforward linear code, bind to appropriately encoded sequences, and more or less directly induce (or repress) the transcription of nearby genes. But now researchers cite a mind-numbing variety of methods by which plastic transcription factors interact with plastic stretches of DNA, producing endless nuances of expression (Rohs et al. 2009). Many transcription factors can bind to thousands of different genes, but regularly do bind only to a few needful ones. They often find their relevant targets by entering into a sculptural embrace whereby they re-shape the local DNA and in turn are re-shaped by it and by many cooperating proteins in a performance of mutual gesture and response that defines the functional outcome. It is not so much a matter of code as of the compatibility of “athletically” interacting forms.
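The classical “linear code” picture described above has a standard computational form: a position weight matrix (PWM), in which each motif position independently scores each base, and a transcription factor is said to “read” the sequence wherever the summed score clears a threshold. A minimal sketch, with an invented motif (not a real factor), of the very model the newer shape-based findings complicate:

```python
# Toy log-odds-style scores per motif position: {base: score}.
# These numbers are illustrative, not derived from real binding data.
PWM = [
    {"A": 2.0, "C": -1.0, "G": -1.0, "T": 0.5},
    {"A": -1.0, "C": 2.0, "G": 0.5, "T": -1.0},
    {"A": 0.5, "C": -1.0, "G": 2.0, "T": -1.0},
]

def scan(seq: str, pwm, threshold: float):
    """Return (offset, score) for every window scoring at or above threshold."""
    hits = []
    width = len(pwm)
    for i in range(len(seq) - width + 1):
        score = sum(pwm[j][seq[i + j]] for j in range(width))
        if score >= threshold:
            hits.append((i, score))
    return hits

# Two occurrences of the "ACG" motif clear the threshold; everything else fails.
print(scan("TTACGACGTT", PWM, threshold=5.0))  # [(2, 6.0), (5, 6.0)]
```

The simplicity is the point: a PWM treats binding as additive, position-independent letter-matching. It is precisely this assumption that the work of Rohs, Honig, and others on DNA shape and conformational flexibility calls into question.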
None of this vast wealth of expressive potential looks very much like the kind of thing that might be specified by the approximately two “yes-or-no” bits of information supposedly contained in one letter of the DNA sequence — or by a thousand bits, for that matter. Despite the earlier views, write molecular biologists Barry Honig and Remo Rohs, “it soon became clear that there was . . . no simple code to be read”. Citing the “plasticity of the canonical double helix”, they go on to explain:

Part of the problem is that DNA can undergo conformational changes that distort the classical double helix. The resulting variations in the way that DNA bases are presented to proteins can thus affect the recognition mechanism . . . Protein–DNA binding is still commonly thought of purely in terms of codes and sequence motifs, rather than as the binding of two large macromolecules that have complex shapes and considerable conformational flexibility. (Honig and Rohs 2011)
In reality, everything is a matter of shape and transformation. This is equally true of that other premier “informational molecule”, RNA. A research team from the department of chemistry and biophysics at the University of Michigan write in Nature that “Changes to the conformation of coding and non-coding RNAs form the basis of elements of genetic regulation and provide an important source of complexity, which drives many of the fundamental processes of life”. The authors go on to write: “The conventional view that one sequence codes for one structure and one function is being replaced by a dynamic view of RNA”, whose repertoire of shifting conformational states can be employed by the cell as needed to produce “a broad range of functional outcomes” (Dethoff et al. 2012).
A one-dimensional code is giving way to the infinite expressive potentials of dynamic spatial form changing with time. The molecules of our cells dance their way through life rather than yielding to the militaristic rigidity of fixed codes.
Two Kinds of Description
It is well to picture as concretely as possible the intentional molecular communities engaged in gene expression, as we saw in more detail, for example, in Getting Over the Code Delusion and The Unbearable Wholeness of Beings. The activity of these molecular communities tends to be highly specific to their immediate environment. An improbable combination of molecules performs in concert to carry out an almost unthinkably intricate series of tasks in just the right, highly directed, yet non-“hardwired” sequence — all this as opposed to wandering off in other directions and engaging in perfectly respectable molecular behavior unrelated to the current needs of the organism or this particular tissue or this particular cell or this particular stage of the cell cycle. A dividing cell “asks” very different things of its molecules than, say, a cell in resting stage.
And why should the molecules not wander “off-topic”? Such undirected behavior is, in fact, the rule for those same molecules when they move about in a decaying corpse rather than a living organism. It is not, after all, the business of molecules, understood solely in terms of physical law, to “know” what their task should be here and now, as opposed to there and later. It is not their business to pay attention to the needs of anyone or anything. Yet such needs and such attention are exactly what the biologist is continually tracing.
All this should remind us of something almost never alluded to in the workaday literature: there are two very different sorts of description or explanation, only one of which is distinctively biological. Nearly everything about gene expression described above implies a kind of implicit cognitive functioning of the cell. Call it “biological wisdom” if you wish. In any case, what we see is intentional aim and well-timed coordination; it’s a matter of getting a job done correctly in light of the particular challenges and needs of the moment. Biological description is always of this sort. It’s full of the kind of meaning we find in storytelling — something I pointed out at length in “How Biologists Lost Sight of the Meaning of Life”.
But we also necessarily look at these same processes through the eyes of the physicist and chemist. Their eyes dare not see molecules pursuing aims or molecular interactions shaped by challenges and needs. At the inorganic level we observe processes lacking what is usually referred to as teleology (goal-directedness). We have learned not to look here for intentions, plans, or purposes, whether explicit (conscious) or otherwise. Rocks do not intend to roll downhill. This is why a science of the inorganic never gives us biological explanation as such.
The biological reason for characterizing the strictly physical and chemical processes of the organism is so that, taking up a higher perspective from which we can survey those processes together, we can begin to recognize in them patterns of coordination and intention — patterns the physicist and chemist could never see while remaining within the cause-and-effect terms of their own disciplines. Only so do we begin to apprehend an organism’s life, rather than the more universal lawfulness we might call its “corpse-potential”. (For more on these two kinds of description, see the earlier article in this series, “What Do Organisms Mean?”)
Whether or not biologists have adequately reckoned with — or even recognized — the problems posed by the two different sorts of description, they have instinctively responded by embracing information-processing machines as models for understanding organisms. For, in its own way, the humanly programmed computer also submits to two levels of description. It combines unimpeded physical lawfulness with cognitively informed patterns of activity.
That is exactly the combination the biologist needs. The problem is that, as models of organisms, computers display almost nothing beyond a spectacular potential for misrepresentation.
Seeing Computers in Organisms
It would be a mistake to think that what I have said about the prevalence of teleological, or goal-directed, language in biology is controversial. Such language is everywhere acknowledged by those eminent biologists who have taken any time to reflect upon the matter. The widely accepted or implied view today is that the presence of information and a computer-like program in DNA explains the intentional workings of the organism. This is because a programmed computer illustrates how a strictly physical device can be aimed at the achievement of particular ends. We can design into it a directed, teleological character. As an explanation of organisms, the computer model is not meant to get rid of intentional or teleological language, but rather to legitimize and “naturalize” it, freeing it from non-material or vitalistic overtones.
After a long history of agonizing over the problem of teleological language in the life sciences, materialistically inclined biologists embraced this apparent resolution of the problem with great eagerness. One of the twentieth century’s preeminent evolutionary theorists, Ernst Mayr, commented that during an earlier historical period you had to choose between being “a woolly vitalist (finalist or teleologist) or a naïve physicochemical reductionist”. But now, he could gladly report, the responsible biologist need be neither a vitalist nor a reductionist:

An individual who — to use the language of the computer — has been “programmed” can act purposefully . . . The purposive action of an individual, insofar as it is based on the properties of its genetic program . . . is no more or less purposive than the actions of a computer that has been programmed to respond appropriately to various inputs. It is, if I may say so, a purely mechanistic purposiveness (1976, p. 365).
This is the solution I mentioned at the outset: dumb mechanism somehow manifesting wise purpose, courtesy of information and computation. The biologist could finally accept the purposeful organism without shame. François Jacob, who won his Nobel Prize for discoveries in gene regulation, summarized the matter with French flair: “For a long time, the biologist treated teleology as he would a woman he could not do without, but did not care to be seen with in public. The concept of [a computer] programme has made an honest woman of teleology” (1973, pp. 8-9).
The fact is both notorious and undisputed: biologists employ a language heavily laced with references to computer programs, codes, instructions, and information. While philosophers and historians of biology have noticed the many problems posed by the diverse and inconsistent usages, biologists themselves have shown few scruples, and continue to speak “computerese” with abandon. Jacob in his day was hardly shy about it: “The programme is a model borrowed from electronic computers. It equates the genetic material of an egg with the magnetic tape of a computer”. And “everything urges one to compare the logic of heredity to that of a computer. Rarely has a model suggested by a particular epoch proved to be more faithful” (1973, pp. 9, 265).
Mayr was no less unequivocal. “All organisms possess a historically evolved genetic program, coded in the DNA of the nucleus of the zygote. . . Nothing comparable to it exists in the inanimate world, except for manmade computers” (1982, p. 55).
As usual, if you want an extreme rhetorical flourish that, in its quest for dramatic impact, tends toward the reductio ad absurdum (thereby performing a useful service), Richard Dawkins is most obliging. In the organism, he tells us, “every cell contains a miniature analogue of a computer’s instruction-obeying machinery” (1996, p. 271). But Dawkins’ computational vision is not restricted to such mundane material phenomena as individual organisms. There is, he believes, a river of DNA that “flows through time, not space. It is a river of information, not a river of bones and tissues: a river of abstract instructions for building bodies, not a river of solid bodies themselves. The information passes through bodies and affects them, but it is not affected by them on its way through” (1995, p. 4).
Or, again, “You can, if you wish, think of the genes in all the populations of the world as constituting a giant computer, calculating costs and benefits and currency conversions, with the shifting patterns of gene frequencies doing duty for the shuttling 1s and 0s of an electronic data processor” (1996, p. 72).
And to complete the dizzying flight into highest informational abstraction, capping it off with a remarkable physical image, Dawkins offers this: “It is raining instructions out there; it’s raining programs; it’s raining tree-growing, fluff-spreading, algorithms. That is not a metaphor, it is the plain truth. It couldn’t be any plainer if it were raining floppy disks” (2006, p. 158).
The thoroughly entrenched conviction that in the organism we’re dealing with something like an information-processing machine with an instructional program seems immune to the withering assaults offered over several decades by the likes of Harvard geneticist Richard Lewontin, developmental systems theorist Susan Oyama,[2] MIT science historian Evelyn Fox Keller, and many others. But I am not sure the critics have brought forward the most obvious and compelling facts of the case.
Organisms Who Refuse to Compute
How does one produce a programmable medium, such as an integrated circuit?
Today’s computer chips containing integrated circuits require the most extreme precision of manufacturing, and are produced in facilities costing over a billion dollars. High-tech “clean rooms” prevent disastrous contamination by a single speck of dust, while specialized equipment creates discrete, unthinkably minute, and precisely machined pathways in silicon. These pathways are designed for maximum permanence, and any breakdown or leakage from one to another in an operating computer can spell disaster. Busses — communication channels linking different computer components — are also manufactured for physical permanence and consistent, unvarying performance. Moreover, everything is designed to operate in lock-step, marching in strictest obedience to a clock that “ticks” up to billions of times per second. If one or another communication channel were to get its timing wrong by some tiny fraction of a second, chaos would ensue.
There’s much more, but that’s already enough to make a point no one should ever have had to argue. You simply cannot take a single step in realistically relating what goes on in an organism to what transpires during the execution of a computer program. Nothing even approaching the surgically precise, rigidly excavated, and unvarying silicon grooves of a computer chip exists anywhere in a living creature. It is laughable even to imagine, say, a signaling pathway in an organism as if it were uncontaminated by goings-on in its surroundings. Such “contamination” — directly or indirectly by almost anything in the larger surround — is, in fact, very much of the essence. In one guise it is referred to today as “crosstalk” — the never exactly predictable influence that one sort of process exerts upon another.
And it is more than laughable to picture a pathway that can be traversed, exactly and without “outside” disturbance, billions of times in succession — let alone traversed as mere electrical pulses lacking the complexly sculpted, ever diverse and changing molecular structure that lends specific function to the signaling molecules, transcription factors, enzymes, and all the other elements of the cell, right down to the all-important water molecules. It is safe to say that no cell ever again regains precisely the structure it had one moment before.
All this metamorphosing substance of the organism, this openness of every process to its never fully predictable surroundings, makes the idea of computer-like rules inconceivable. Those who speak of programmed instructions in the organism are obligated to point at least vaguely to the sort of reality they have in mind. But there is nothing whatever to point to.
A living vortex. The “pathways” in the organism, such as they are, must function within what, at first sight, might easily appear to be a chaotic whirl of activity and exchange. Chromosomes, under the influence of all the cellular forces brought to bear on them, are animated — more like a writhing nest of vipers than a set of programmed computer chips. And the forms of their writhing turn out to be vital to their expression — a response to the music of the larger cell and organism. The chromosome, we heard Christophe Lavelle of France’s Curie Institute remark in an earlier article, “is a plastic polymorphic dynamic elastic resilient flexible nucleoprotein complex” (Lavelle 2009).
We are not dealing here with precise bits of information traveling the right-angled pathways of an unvarying grid, but with the subtleties of significant form and gesture expressing the unique meaning of the moment. In conducting its internal performances, including its genetic performances, the cell looks more like an improvisational acrobatic troupe than a computer executing a program.
It has proven easy for many to forget that information by itself, as conventionally conceived, does not do anything. Genes as mere containers for sequential information are catatonic — hopelessly inadequate to explain life processes. The cell’s doings are what we need to understand, and they couldn’t be less machine-like or program-driven. The noted twentieth-century cell biologist, Paul Weiss, described how

in contrast to a machine, the cell interior is heaving and churning all the time . . . The only thing that remains predictable amidst the erratic stirring of the molecular population of the cytoplasm and its substructures is the overall pattern of dynamics which keeps the component activities in definable bounds of orderly restraints (1973, p. 40).
Here we see the undeniable truth that, at just about every level of observation in the organism, we witness higher-level order in the face of a certain fluidity and seeming disorder at the next lower level. The organelles of a cell retain their functional identity despite the continual churning of their contents and reshaping of their structure, just as the cell retains its identity in the presence of ceaseless remodeling of its organelles, and just as entire tissues and organs flourish despite the ongoing birth and death of individual cells.
Weiss, with his exceptional rigor of observation, deserves to be read closely. The “organism as computer” analogy does not fare well, for example, in light of this remark:

While definite regions of the early embryo are predictably the rudiments for the formation of heart, liver, kidney, brain, etc., the individual fates of the cells derived from those areas are still unsettled. In other words, overall regularity in the gross is attained and maintained not as a mechanical result and a reflection of a corresponding underlying regularity of rigidly stereotyped behavior of the component elements down to the smallest detail, but on the contrary, in spite of a high degree of vagrancy among the latter. (1973, pp. 24-5)
This could hardly be more unlike the computer. Whatever the programmer’s overall goals may be, they can be achieved only by enlisting low-level, hardware processes whose staccato military discipline must be maintained with unvarying, microscopic precision, lest the entire programmatic operation come to a crashing halt.
Organisms have bodies. Computers, of course, needn’t have minuscule components and needn’t operate at incomprehensible speeds. They can be built with parts operating at many different scales, both spatially and temporally. But this has no bearing on the point I am making about the necessity for well-defined sequentiality and undisturbed, repeatable precision. It does, however, point to another problem for the organism-computer analogy: computers can be built from virtually any material — just as their essential logic has been contrived from radio tubes, magnetic cores, silicon wafers, optical disks, and so on. You can read in the artificial intelligence literature how to build a computer with a roll of toilet paper as a central functioning part.
It would be nonsensical to think of translating a yak or orangutan into an alternative set of materials in this way. And the reason is instructive: our goal in building computers was to abstract as far as possible away from qualitative, material reality. The primary aim was calculation and manipulation of abstract information rather than appreciation of the qualities and forms of things. Unfortunately — Star Trek fantasies aside — the yak and orangutan, with their highly distinctive ways of being, are not so easily abstracted from their material existence.
The essence of computation lies in a disembodied logic quite indifferent to its physical implementation — a rather odd fact when you set it beside the biologist’s yearning for a materialist science. And certainly it’s a fact impossible to reconcile with the organism’s insistence upon manifesting itself, not as a computable abstraction, but in the living powers of its own flesh and blood.
A Metaphor That Only Misleads
Decisive as all this may seem, it is only the beginning of troubles. If the organism were modeled after a computer, it would have to be a computer that builds itself — or, rather, grows itself. But no computer functions by growing the organs of its own functioning — that is, by overseeing the growth of its own hardware. It takes but a moment’s reflection to realize that, whatever such growth might entail, it represents fundamental kinds of processes not even imaginable in anything remotely resembling a computer. And this fact of growth is no mere sideshow in the life of organisms.
The issue here has indeed been routinely acknowledged — where “routinely” means “without much thought”. One experiences a kind of mental vertigo upon reading the casual and unexplained remark by Jacob that the organism “determines the production of its own components, that is, the organs which are to execute the programme” (1973, p. 9). Or Mayr’s equally bald assertion that “one of the properties of the genetic program is that it can supervise its own precise replication and that of other living systems such as organelles, cells, and whole organisms . . . it required the rise of computer science before the concept of such a program became reputable” (1982, p. 56).
Embraced by unreflective biologists, perhaps — but reputable? How does an all-but-unthinkable concept — that of a computer that grows itself and all its parts even as it operates — become reputable?
Different organisms making common cause. Then, too, there is the fact of hybridization among organisms — not only among closely related species, but sometimes among distantly separated ones, even, in some cases, representatives of different phyla with their radically different body plans (Ryan 2011). This hybridization illustrates a life-potential in the organism, a power of adaptation, mutualism, and self-restructuring, that is again impossible to conceive in computational terms.
Try this by contrast: take two hugely complex computer programs performing different tasks on different hardware, and then, after arbitrarily deleting significant parts of each program, throw the remaining pieces together in a small bit of novel hardware. Is there any hope that the program fragments will now mutually adapt to each other and operate in concert — will function at all — let alone grow their own hybrid “body”, or hardware?
Actually, it is not only the hybrid organism that exhibits this striking power of transformation. The offspring of normal sexual reproduction achieve much the same thing on their own scale. The differing biological potentials passed on from the two parents are blended in the zygote, which is an organism unlike either parent.
If, as usually supposed, the organism’s program is a genetic one, borne by DNA, we might also ask whether there are trillions of programs and computers, located in almost every cell of the human body, with the many stably distinct cell types running stably distinct subprograms executed by stably distinct cellular hardware. If so, what program controls this differentiation of cell types, and how are they all coordinated so as to function as one organism? Or, if there is a single, organism-wide computer running trillions of programs simultaneously, where do we find that program?
Before we hear someone reply with a vague reference to “parallel distributed computing”, we need to cease the nonsense and ask for specifics. Exactly what understanding is the computer metaphor supposed to convey? There has been precious little effort to sketch out or justify even the most salient elements of the comparison.
Where, for example, do we find the cell’s program? Most would answer, “In the DNA”. But, as MIT science historian Evelyn Fox Keller has pointed out, we might more appropriately think of the “information” in DNA as data, with the cell’s cytoplasm executing the program that processes the data (2001). But how do you get precise data out of the many superimposed dimensions of sculpted and re-sculpted form we observed in the chromosome when discussing gene expression above, and where do you find anything like a computer program in cytoplasm — that “heaving and churning” sea we heard Paul Weiss talking about?
Vacuity of the computational metaphor. John Mattick, the Australian biologist at the forefront of much current research in epigenetics, argues that non-protein-coding RNAs rather than DNA constitute the “computational engine of the cell” (Mattick 2009; Mattick et al. 2009). But he usefully warns us that “one has to be careful in making these analogies, because some people take them too literally. It’s very hard to map biological space into computational space and vice versa” (Mattick 2010).
Hard indeed. Perhaps far harder than Mattick himself has realized, with his frequent resort to computational language. What, then, is the point? How could someone with the penetrating insight and profound influence of Ernst Mayr assure us that “There is a strict equivalence of the ‘program’ of the information theorists, and [genetic program] of the biologist” (1992, p. 129)? Where has this equivalence ever been spelled out?
It is common to cite the benefits of a metaphor despite its seriously inappropriate aspects. But in the case of the computer program, I see nothing beyond a combination of vacuity on one hand, and an invitation to misunderstanding on the other. I’ve cited a few of the misunderstandings. As for the vacuity, one might ascribe some sort of value to a general notion of “program”, as in “programs for educational reform”. But “program” in any such broad sense is about as helpful to the biologist as saying, “the organism goes about a certain well-ordered work here”. We know that already. The question is how it does so.
Certainly the organism does accomplish extraordinarily complex tasks. And certainly there are many approximate regularities as it pursues its meaningful and highly directed life. We could hardly expect otherwise. But the leap from this truth to the idea of DNA or the cell or the organism as a programmable medium (analog of Dawkins’ “floppy disk”) is so wildly contrary to everything we know about the organism that it raises questions about the degree to which the biologists making this comparison have even bothered to look at the organism, as opposed to reflecting upon an independent web of long-accumulated and now archaic theoretical constructs plastered over it.
Then, of course, there is the problem of the programmer. In the case of the only programs we know, he or she must stand apart from the programmable medium in order to design its program. When this problem is raised, one routinely hears an appeal to natural selection as the “real” programmer. Yet, far from explaining the presence of programs, natural selection, as conceived by evolutionary theorists, can only be another name for the outcome of competition among many already existing programs. Natural selection assumes the goal-directed activities of organisms, including their drives to sustain their own lives and reproduce. The theory does not explain these activities; it is founded upon them.
The Organism Abandoned
I imagine many readers, upon reading Paul Weiss’ reference above to the “heaving and churning” sea of cytoplasm, may have been slightly startled by its antithetical resonance with the earlier remark we heard from Dawkins: “There is no spirit-driven life force, no throbbing, heaving, pullulating, protoplasmic, mystic jelly. Life is just bytes and bytes and bytes of digital information”.
The fact of the matter is that if you leave aside the ridiculing intention of “mystic jelly” and “spirit-driven life force”, what remains of that first sentence gives us a fair, if limited, description of the living cell — pretty much what Weiss, as a meticulous and life-long observer of cells, was routinely describing. But, of course, all Dawkins’ terms were intended to ridicule — which leads one to wonder what sort of problem he has with actual living substance. Perhaps Susan Oyama was on to something when she found herself “smiling at the gross bodily dampness of his imagery” (2009).
One way or another, it’s a dampness Dawkins apparently would prefer to do without. He yearns, not for the cell’s seminal plasmic seas, teeming with the unruly forms and movements of molecular life, but for the pure, thin, clean air of abstraction — not actual chromosomes with their snaking, coiling, looping dynamism amid all the purposeful turbulence of the viscous nucleoplasm, but toy computer programs that manipulate precise little strings of numbers as substitutes for organisms. His is a clean and safe playland where there is nothing but all those well-behaved, uncontaminated, all-but-dematerialized “bytes and bytes and bytes of digital information”. (See especially the discussion of his computer programs in Dawkins 1996; 2006. Also Talbott 2007.)
When, over three years ago, I turned to the technical literature of molecular biology for the first time, I was surprised to find how immediately understandable much of it was. Certainly there were challenges — and a great deal of specialized terminology to master — but I felt strangely at home in the atmosphere and logic of these texts. It took me a while to realize why this was. My background in computer engineering was the key. The papers I was reading were very like the books about computers and software I was so familiar with.
I mentioned this in a brief article a year or two ago. But now I find that my realization had already been given voice fifteen years earlier — by Richard Dawkins. “What is truly revolutionary about molecular biology”, he wrote in River Out of Eden, “is that it has become digital”. We now know that genes “are long strings of pure digital information . . . The machine code of the genes is uncannily computerlike. Apart from differences in jargon, the pages of a molecular-biology journal might be interchanged with those of a computer-engineering journal” (1995, pp. 17-19).
Yes, the computer engineers have conquered molecular biology. I suppose that science historians of the future will puzzle for many years over the question, “How could Dawkins and so many of his eminent peers have gotten it so crazily and unthinkably wrong?” A puzzle indeed. In exchanging the heaving seas of protoplasm for digital information and programs, biologists have wandered very far from home.
So: What About the “Informational” DNA Sequence?
1. In information theory, a single bit represents the amount of information we gain when we learn the actual state of a device that can be in either of two equally likely states. For example, when we flip a coin and discover whether the result is “heads” or “tails”, we have gained one bit of information. Putting it differently: our uncertainty about two equally possible outcomes is resolved. Since (simplifying a little) DNA can have any one of four possible nucleotide bases at a given position, we gain two bits of information by learning the actual base (or “letter”, as many put it) at that position. It is also often said that the letter contains two bits of information. In any case, “two bits” is only roughly correct because, in the case of nucleotide bases, the choices are not equally likely to any high precision.
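The arithmetic in this note can be made concrete with Shannon’s entropy formula, which gives the average information (in bits) gained per symbol. The sketch below is merely illustrative: the skewed base frequencies are hypothetical numbers chosen to show that unequal probabilities yield slightly less than two bits, not measured values for any real genome.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: -sum(p * log2(p)) over the outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: two equally likely outcomes -> exactly 1 bit.
coin = entropy_bits([0.5, 0.5])

# Four equally likely nucleotide bases (A, C, G, T) -> exactly 2 bits.
uniform_bases = entropy_bits([0.25] * 4)

# Hypothetical skewed base frequencies (illustrative only): once the
# four choices are not equally likely, the information per base drops
# below 2 bits -- which is why "two bits per letter" is only roughly right.
skewed_bases = entropy_bits([0.30, 0.20, 0.20, 0.30])

print(coin)           # 1.0
print(uniform_bases)  # 2.0
print(skewed_bases)   # slightly less than 2.0
```

The uniform cases reproduce the one-bit coin flip and two-bit base of the note; the skewed case quantifies the closing caveat.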
2. One of the most prescient, multifaceted, and incisive treatments of informational notions in the life sciences was (and still is) Susan Oyama’s 1985 book, revised in 2000: The Ontogeny of Information. While she makes no explicit attempt to lead over from information to meaning (as I do here and in previous articles), much of her work contributes to such a broadening of perspective.
Barfield, Owen (1967). Speaker’s Meaning. Middletown CT: Wesleyan University Press.
Dawkins, Richard (1995). River Out of Eden: A Darwinian View of Life. New York: BasicBooks.
Dawkins, Richard (1996). Climbing Mount Improbable. New York: W. W. Norton.
Dawkins, Richard (2006). The Blind Watchmaker, second edition. New York: W. W. Norton. First edition published in 1986.
Dethoff, Elizabeth A., Jeetender Chugh, Anthony M. Mustoe and Hashim M. Al-Hashimi (2012a). “Functional Complexity and Regulation through RNA Dynamics”, Nature vol. 482 (Feb. 16), pp. 322-30. doi:10.1038/nature10885
Holdrege, Craig (1996). Genetics and the Manipulation of Life: The Forgotten Factor of Context. Hudson NY: Lindisfarne.
Honig, Barry and Remo Rohs (2011). “Flipping Watson and Crick”, Nature vol. 470 (Feb. 24), pp. 472-3.
Jacob, François (1973). The Logic of Life: A History of Heredity, translated by Betty E. Spillmann. New York: Random House. Originally published in French in 1970.
Keller, Evelyn Fox (2001). “Beyond the Gene But Beneath the Skin”, in Oyama 2001, pp. 299-312.
Lavelle, Christophe (2009). “Forces and Torques in the Nucleus: Chromatin under Mechanical Constraints”, Biochemistry and Cell Biology vol. 87, pp. 307-22. doi:10.1139/O08-123
Mattick, John S. (2009). “Has Evolution Learnt How to Learn?” EMBO Reports vol. 10, no. 7, p. 665. doi:10.1038/embor.2009.135
Mattick, John (2010). “John Mattick: From a Hobby to a Genomic Revolution” (interview), Quest vol. 5, no. 1 (Quarter 3), pp. 38-43.
Mattick, John S., Ryan J. Taft and Geoffrey J. Faulkner (2009). “A Global View of Genomic Information: Moving Beyond the Gene and the Master Regulator”, Trends in Genetics vol. 26, no. 1. doi:10.1016/j.tig.2009.11.002
Maynard Smith, John (2000). “The Concept of Information in Biology”, Philosophy of Science vol. 67 (June), pp. 177-94.
Mayr, Ernst (1976). Evolution and the Diversity of Life. Cambridge MA: Belknap/Harvard University Press.
Mayr, Ernst (1982). The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Cambridge MA: Belknap/Harvard University Press.
Mayr, Ernst (1992). “The Idea of Teleology”, Journal of the History of Ideas vol. 53, no. 1 (Jan. - Mar.), pp. 117-35.
Oyama, Susan (2000). The Ontogeny of Information, 2nd edition, foreword by Richard C. Lewontin. Durham NC: Duke University Press. First edition published in 1985 by Cambridge University Press.
Oyama, Susan, Paul E. Griffiths and Russell D. Gray, editors (2001). Cycles of Contingency: Developmental Systems and Evolution. Cambridge MA: MIT Press.
Oyama, Susan (2009). “Compromising Positions: The Minding of Matter”, in Mapping the Future of Biology: Evolving Concepts and Theories, edited by A. Barberousse, M. Morange, and T. Pradeu, pp. 27-45. Boston Studies in the Philosophy of Science, vol. 266. Springer Science+Business Media B.V.
Rohs, Remo, Sean M. West, Alona Sosinsky et al. (2009). “The Role of DNA Shape in Protein-DNA Recognition”, Nature vol. 461 (Oct. 29), pp. 1248-53. doi:10.1038/nature08473
Ryan, Frank (2011a). The Mystery of Metamorphosis: A Scientific Detective Story. Foreword by Dorion Sagan and Lynn Margulis. White River Junction VT: Chelsea Green Publishing.
Shilatifard, Ali and Peter Verrijzer (2011). “Chromosomes and Gene Expression Mechanisms: Peeling Away the Many Layers of Transcriptional Control”, Current Opinion in Genetics and Development vol. 21 (April), pp. 121-3. doi:10.1016/j.gde.2011.02.004
Talbott, Stephen L. (2007). “Ghosts in the Evolutionary Machinery: The Strange, Disembodied Life of Digital Organisms”, The New Atlantis no. 18 (fall), pp. 26-40. Originally published in NetFuture #170 (July 19, 2007). Latest version available at http://thenewatlantis.com/publications/ghosts-in-the-evolutionary-machinery.
Talbott, Stephen L. (2010). “Getting Over the Code Delusion”, The New Atlantis no. 28 (summer), pp. 3-27. Original version published in NetFuture #179 (Feb. 18, 2010). Available at http://natureinstitute.org/txt/st/mqual/genome_4.htm.
Wahls, Wayne P. and Mari K. Davidson (2011). “DNA Sequence-Mediated, Evolutionarily Rapid Redistribution of Meiotic Recombination Hotspots”, Genetics vol. 189 (Nov.), pp. 685-94. doi:10.1534/genetics.111.134130
Weiss, Paul (1973). The Science of Life: The Living System — A System for Living. Mount Kisco NY: Futura Publishing.
This article originally appeared in NetFuture #185 without the image.