(Note: This is Part 12 of Dr. Wadhawan’s series on Complexity. All previous parts of the series can be accessed through the Related Posts list at the bottom of this article.)
According to one model of the origins of life, it is likely that life originated twice, with two separate kinds of organisms, one capable of metabolism without exact replication, and the other capable of replication without metabolism; at some stage the two features came together. Another model is that life originated with the emergence of RNA molecules which could act as both enzymes and self-replicators. In either case, the emergence of self-replicators also marked the first step towards the evolution of consciousness.
12.1 Freeman Dyson’s Dual-Origin Model for Life
Freeman John Dyson is a theoretical physicist and mathematician, well known for his work in quantum field theory, solid-state physics, and nuclear engineering. In 1949 he demonstrated the equivalence of the two formulations of quantum electrodynamics, one by Richard Feynman and the other by Julian Schwinger and Sin-Itiro Tomonaga. In 1985 he wrote a short book, Origins of Life, in which he argued that metabolic reproduction and exact replication are logically separable processes, and that natural selection does not require replication, at least for simple creatures. In higher-level life as seen today, reproduction of cells and replication of molecules occur together. But there is no reason to presume that this was always the case. According to Dyson, it is more likely that life originated twice, with two separate kinds of organisms, one capable of metabolism without exact replication, and the other capable of replication without metabolism. At some stage the two features came together. When replication and metabolism occurred in the same creature, natural selection as an agent for novelty became more vigorous.
Dyson acknowledged the influence of Erwin Schrödinger and John von Neumann on his work. Two other scientists whose work he used for proposing his dual-origin hypothesis for life were the chemists Manfred Eigen and Leslie Orgel. They had demonstrated that a solution of nucleotide monomers will, under suitable conditions in the laboratory, give rise to a nucleic-acid polymer molecule (RNA) which replicates and mutates and competes with its progeny for survival. For achieving this, Eigen used a polymerase enzyme, which was a protein catalyst extracted from a bacteriophage (the synthesis and replication of the RNA depends on the structural guidance provided by the enzyme). Orgel did something complementary to the experiment of Eigen. He made RNA grow out of nucleotide monomers by adding a template for the monomers to copy, but did not add a polymerase enzyme. Thus Eigen made RNA using an enzyme but no template, and Orgel made RNA using a template but no enzyme. Living cells use both templates and enzymes for making RNA. This pointed to a possible parasitic development of RNA-based life in an environment created by a pre-existing protein-based life.
Dyson also drew support and inspiration from the work of Lynn Margulis, who has been a major proponent of the idea that parasitism and symbiosis were the driving forces in the evolution of cellular complexity. [Symbiosis means a prolonged living arrangement or physical association among members of two or more different species. Levels of partner integration in symbiosis may vary in intimacy; and integration may be behavioural, metabolic, of gene products, or ‘genic.’]
Margulis has been hammering home the point that the main components of eukaryotic cells have descended from independent living creatures which ‘attacked’ the cells from outside. In due course, the attackers and the host evolved a relationship of mutual dependence and benefit. In stages, the erstwhile invading organisms became first chronic parasites, then symbiotic partners, and finally an indispensable part of the host. The evidence for this is that the molecular structures of mitochondria and chloroplasts are indeed very close to those of certain bacteria.
Margulis has marshalled evidence to argue that most of the big steps in cellular evolution were caused by parasites. And the nucleic acids were the oldest and the most successful cellular parasites. According to Dyson’s model, the original living creatures were cells with a metabolic apparatus directed by certain proteins (enzymes), which had no genetic appurtenances to start with. Such cells lacked the ability for exact replication, but could still grow and divide and reproduce themselves in an average statistical manner.
12.2 ATP and RNA
During millions of years of chemical (and now also biological) evolution, the initial primitive but living cells diversified and refined their metabolic reaction pathways. In particular, they evolved the synthesis of ATP (adenosine triphosphate) through some autocatalytic reaction mechanisms (cf. Part 9). ATP is the main energy-carrying molecule in all present-day cells. ATP-carrying primitive cells had an evolutionary advantage over other, less efficient, cells. In time, other molecules like AMP (adenosine monophosphate) emerged; or perhaps AMP came first, and then ATP.
Now, although ATP and AMP have similar chemical structures (see figure), they play totally different roles in present-day cells. ATP is the universal biological currency for energy. AMP, on the other hand, is one of the nucleotides in the structure of the RNA molecule. RNA functions as the carrier of information, and it can replicate exactly. [RNA is like DNA, except that thymine (T) is replaced by uracil (U). In RNA, A bonds to U only, and G bonds to C only.] AMP provides the A (i.e. the adenine-bearing nucleotide) in the RNA structure.
If ATP loses two of its three phosphate groups, it becomes AMP. Dyson argued that, although the primitive cells had no genetic apparatus to begin with, they were loaded with ATP molecules which could easily convert to AMP molecules. Accidentally, in one such cell which happened to be carrying AMP and other nucleotides (the ‘chemical cousins’ of AMP), the Eigen experiment for synthesizing RNA happened spontaneously. With some help from pre-existing enzymes, an RNA molecule got produced. Once created, it went on replicating itself because of the proclivity of base A to hydrogen-bond with base U, and of G to hydrogen-bond with C.
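The base-pairing rule that drives this replication can be sketched in a few lines of code. This is a highly idealized illustration: real template copying is error-prone, needs catalytic help as described above, and ignores strand directionality; the sequence used here is arbitrary.

```python
# Idealized sketch of template copying by base pairing (A-U, G-C).
# No copying errors, no chemistry; the sequence is made up.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the base-paired complement of an RNA sequence."""
    return "".join(PAIR[base] for base in strand)

template = "AUGGC"
copy = complement(template)            # "UACCG"
original_again = complement(copy)      # two rounds of pairing regenerate "AUGGC"
```

Two successive rounds of pairing reproduce the original sequence, which is the essential trick behind self-replication.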
Thus, RNA first appeared as a parasitic disease in the cell. Although most such cells died of disease, some evolved to survive the infection, à la Lynn Margulis. In such cells, the parasite gradually became a symbiont. Further evolution resulted in a situation in which the protein-based life learnt to make use of the ability for exact replication provided by the chemical structure of RNA. This is how the modern genetic mechanism came into being. Hardware came before software, and that makes sense.
Is it really true that proteins emerged before RNA? The early evidence came from laboratory experiments done during the 1950s. The well-known experiments by Miller and others (done from 1953 onwards) demonstrated that amino acids form easily in a reducing atmosphere from still simpler molecules, in the presence of ultraviolet radiation. What about nucleotides?
They are more difficult to synthesize from their constituents in a Miller-style experiment. A nucleotide has three parts: an organic base, a sugar, and a phosphate ion. The phosphate ion occurs naturally as a constituent of rocks and sea water. The sugar part can be synthesized with substantial efficiency from formaldehyde. And the synthesis of an organic base was demonstrated by Oró in 1960. He prepared a concentrated solution of ammonium cyanide in water, and just let it stand. Adenine was self-created, with a 0.5% yield. Guanine also got synthesized in a similar way. But the catch here is that it is difficult to imagine how such high concentrations of ammonium cyanide could occur in Nature, although some possible scenarios have been suggested. In any case, the nucleotide molecules, even if formed, are unstable in solution, and tend to get hydrolysed back into their components. Another major difficulty is to get the three components of a nucleotide into a correct configuration for bonding.
All told, whereas it is easy to simulate a pre-biotic synthesis of amino acids in the laboratory, the same is not the case for nucleotides (but see below). Dyson argued that this lends credence to his model that proteins appeared on the scene before RNA etc. Of course, he was also quick to point out that perhaps we have not been clever enough to create proper simulation conditions in the laboratory. I shall return to this point in Section 12.6.
12.3 How the Mystery of Cell Differentiation was Solved
For introducing certain concepts and terminology, I make a small digression here and discuss cell differentiation. Each cell of our body carries the same genome. What tells some cells to become kidney cells, and others to become liver cells, and still others to become neurons? The term ‘cell differentiation’ is used for this phenomenon. How does cell differentiation occur, and with such high precision?
French scientists François Jacob and Jacques Monod were awarded (along with André Lwoff) the Nobel Prize for physiology or medicine for 1965 for their work on ‘genetic circuits.’ There are thousands of genes arrayed along a DNA molecule. Jacob and Monod discovered that a small fraction of these are ‘regulatory’ genes which can function as switches. Such activity is triggered by, say, the availability of a particular hormone in the surroundings of a cell. This hormone may switch on a particular gene. The newly activated gene sends out chemical signals to fellow genes that can switch them on or off, depending on the states they are already in. The altered state of each of these genes then releases, or stops releasing, other chemical signals, which are received by the genetic switches in the network, altering their states in turn, in a cascading manner. This continues till the network of genetic switches settles down to a stable, self-consistent pattern.
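This settling-down of a switch network can be illustrated with a toy model. The three ‘genes’ and their wiring below are entirely hypothetical, chosen only to show how repeated updates of interlinked switches cascade and then freeze into a stable, self-consistent pattern.

```python
# Toy genetic switch network (hypothetical genes and wiring).
# Gene 0 is held on or off by an external hormone; gene 1 is switched on by
# gene 0 and then sustains itself; gene 2 simply follows gene 1.
def step(state):
    g0, g1, g2 = state
    return (g0,           # gene 0: fixed by the hormone signal
            g0 or g1,     # gene 1: activated by gene 0, then self-sustaining
            g1)           # gene 2: activated by gene 1

def settle(state, max_steps=20):
    """Apply updates until the pattern stops changing (a stable cell state)."""
    for _ in range(max_steps):
        nxt = step(state)
        if nxt == state:
            return state
        state = nxt
    return state

with_hormone = settle((True, False, False))      # cascades to: all genes on
without_hormone = settle((False, False, False))  # stays: all genes off
```

The same wiring yields two different stable patterns depending on the hormonal input, which is the essence of how one genome can support several distinct cell states.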
This work had several implications. For example, it established DNA as not just a repository of the blueprint for the cell, telling it how to manufacture the various proteins, but also as an engineer in charge of construction. The DNA was established to be a molecular-scale computer that computed how the cell was to build and repair itself, and how it was to interact with the surrounding world.
The work of Jacob and Monod also solved the mystery of cell differentiation. It was concluded from this work that each type of cell corresponds to a different pattern of the genetic network, influenced by the presence of specific hormones etc. Although there is only a single genome involved, the genome can have many stable patterns of activation, each corresponding to a different cell type (liver, kidney, brain, etc.). Thus the genome was viewed as a complex network of interacting components, which control homeostasis and differentiation through very specific control circuits among the genes. [Homeostasis is the ability of higher animals to maintain a constant internal environment.]
Back to Dyson. Further support for his dual-origin model for life has come from the work of Stuart Kauffman who carried forward the regulatory-genetic-networks idea. Before describing this, I must introduce the important idea of attractors in phase space. Sorry about the digression; I had vowed not to use any unexplained jargon in this series of articles.
12.4 Attractors in Phase Space
The concept of phase space or state space was introduced in Section 6.2 (Part 6) of this series. Imagine a loosely wound spring, oriented vertically (i.e. along the z-axis), and fixed securely at its top end to some heavy object. At its bottom end I attach a small particle. I am interested in the dynamics of this particle after I pull the lower end of the spring by a small distance, and then release the spring. The spring will be set into vibration, and the attached particle will execute an oscillatory, up-and-down motion. At any instant of time, the particle has position coordinates (x, y, z), and momentum coordinates (px, py, pz). What is the phase-space trajectory for this system? The answer is that it is a closed loop in a plane defined by the z-axis and the pz-axis. Let us see how.
To start with (i.e. at time t = 0), the particle attached to the spring is at rest, and its representative point in phase space has the ‘coordinates’ (0, 0, 0, 0, 0, 0). At the moment I release the spring after pulling it by a small distance z, the phase-space coordinates are (0, 0, -z, 0, 0, 0).
When I was pulling the spring, I was doing work against its restorative force, and this work got stored as the potential energy of the spring. When I release the spring, this stored potential energy is available for doing work, making the spring (and the particle attached to it) move towards the initial position (0, 0, 0) of the particle in real space. By the time the particle reaches this point, all the potential energy has got converted to kinetic energy, and the representative point in phase space now has the coordinates (0, 0, 0, 0, 0, pz). Nothing much is happening along the x-axis and the y-axis, as also along the px-axis and the py-axis. All the action is along the z-axis and the pz-axis, so we can use a more compact notation, and say that at the moment when all the potential energy has got converted to kinetic energy, the representative point in phase space has the coordinates (0, pz).
The kinetic energy of the spring will make the particle overshoot the origin point of the z-axis till the particle reaches the representative point (z, 0); this is when the particle will be at rest again, as all the kinetic energy has been converted back to potential energy. This potential energy will again make the particle move in the opposite direction. And so on. Thus the particle will successively and repeatedly pass through a whole continuum of points in phase space, including the following points: (-z, 0), (0, pz), (z, 0), (0, -pz).
If there is no dissipation of energy, the phase-space trajectory in this experiment is a closed loop, as the particle repeatedly passes through all the allowed (i.e. energy-conserving) position-momentum combinations again and again. Since the trajectory is fixed or constant, the area enclosed by it is also constant.
But in reality, dissipative forces like friction are always present, and in due course all the energy I expended in stretching the spring will be dissipated as heat. What happens to the phase-space trajectory of the particle as the total energy (potential energy plus kinetic energy) is lost gradually? As the total energy decreases, the maximum value of the z-coordinate during each cycle of the trajectory, and likewise the maximum value of pz, will decrease, implying that the area enclosed by the trajectory in phase space will shrink, till the particle finally comes to a state of rest or zero momentum.
This final configuration corresponds to an attractor in phase space: It is as if the dissipative dynamics of the system is ‘attracted’ by the point (0, 0, 0, 0, 0, 0) as its energy gets dissipated. Thus, because of the gradual dissipation of energy, the closed-loop phase-space trajectory spirals towards a state of zero area. This is like a particle set rolling in a bowl, spiralling towards the bottom of the bowl; the bowl thus acts as a basin of attraction. Similarly, the phase-space region around the attractor (0, 0, 0, 0, 0, 0) is the basin of attraction for the oscillator problem we have considered here. I had pulled the spring by an arbitrary small amount. The exact magnitude of this small amount of pulling is not important. In each such experiment (with different starting values of z), the dissipative system always gets attracted towards the same attractor. We say that there is a unique basin of attraction around the unique attractor.
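This spiralling-in behaviour is easy to check numerically. The sketch below integrates the damped spring-mass equation of motion with a simple semi-implicit Euler step; the spring constant k, mass m, and friction coefficient c are arbitrary illustrative values, not taken from any real system.

```python
# Numerical sketch of the damped spring-mass oscillator.
# Equation of motion: m z'' = -k z - c z'; illustrative parameter values.
def run(z0, k=1.0, m=1.0, c=0.2, dt=0.01, steps=10000, sample_every=2000):
    z, v = z0, 0.0
    energies = []
    for i in range(steps):
        if i % sample_every == 0:
            # total energy = kinetic + potential
            energies.append(0.5 * m * v * v + 0.5 * k * z * z)
        v += (-(k / m) * z - (c / m) * v) * dt   # spring force + friction
        z += v * dt
    return (z, v), energies

(z_final, v_final), energies = run(1.0)
# The sampled total energy shrinks steadily towards zero: the closed loop in
# the (z, pz) plane spirals inwards to the attractor (0, 0).
```

Whatever the initial pull z0, the final state is the same point (0, 0): a unique attractor with a unique basin of attraction, exactly as described above.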
Nonlinear Dynamical Systems
In the above experiment, if we pull the spring only by a small amount, its restorative force is linearly proportional to the displacement of the tip of the spring to which we have attached the small particle: If we plot this force fz as a function of z, we get a straight line (hence the term ‘linear’). Incidentally, this is the defining feature of what is called a simple-harmonic oscillator.
But if the displacement is too large, the restorative force is no longer linearly proportional to the displacement z, and we are then dealing with a nonlinear dynamical system: The plot of fz against z is no longer a straight line. Most real-life phenomena involve nonlinear dynamics. In particular, all evolution of complexity in Nature concerns systems which receive a persistent and therefore cumulatively large amount of energy from the surroundings, and are thus pushed into the nonlinear regimes of dynamic behaviour. All complex systems are nonlinear, although not all nonlinear systems may exhibit complex behaviour.
12.5 Kauffman’s Work on the Origins of Life
By 1993 Kauffman had established, using his cellular-automata approach, that regulatory genetic networks can indeed arise spontaneously in complex systems by self-organization. But he still had to tackle the question of how extremely large molecules like RNA and DNA came into existence in the first place. In any case, as stated earlier in this series of articles, even DNA requires the availability of certain protein molecules for its genetic role. Therefore, there must have been a mechanism which resulted in the spontaneous creation of protein molecules without the intervention of DNA.
In other words, there must have been a non-random origin of life. There must have been another way, independent of the need to involve DNA molecules, for self-reproducing molecular systems to have got started. Kauffman carried Melvin Calvin’s (1969) idea of autocatalytic reactions (cf. Part 9) much further to explain how this could happen: In Kauffman’s model, like in Dyson’s, life originated before the advent of RNA or DNA. And Kauffman’s network model could incorporate features like reproduction, as also competition and cooperation for survival and evolution (including coevolution). Kauffman had introduced in 1969 his ‘random Boolean networks’ (RBNs) as a part of his pioneering work on the functioning of genetic regulatory networks. He went a step further than Jacob and Monod and demonstrated that even randomly constructed networks of high molecular specificity can undergo homeostasis and differentiation:
In the absence of knowledge regarding the parameters describing real cells, Kauffman investigated (on a computer) a variety of genetic control networks to see if any of them simulated biological activity reasonably well. In his binary network model (or the RBN model), a gene (represented as a node of the network) was modelled as a binary device, the whole network having N such nodes. Thus, each node or gene had two possible states: ‘on’ or 1, and ‘off’ or 0. The ‘on’ state meant that the gene was being transcribed, and the ‘off’ state meant that it was not being transcribed. Each gene or node was modelled as receiving exactly K (K ≤ N) inputs from randomly chosen ‘controlling’ genes or nodes, and also receiving one random ‘update’ function for its K inputs. The update function prescribes the state of the gene or the automaton in the next time step, given its state in the current time step, and is chosen according to some probability-distribution function. By varying N and K for these RBNs, the behaviour of a variety of such finite sequential switching automata could be investigated. At any time step, each gene or node had a value 1 or 0, and the network was a collection of these 1s and 0s, representing the ‘state’ of the network or the biological cell. This pattern of 1s and 0s served as the input, determining the pattern for the next time step of the automaton. Shown in the adjoining figure is the activity pattern for an RBN with 16 nodes for 50 time steps. The initial state is the column furthest to the left with nodes represented vertically and time moving to the right.
The RBN has 2^N possible states; i.e. it has a finite number of states. This finiteness, coupled with the fact that the dynamics is deterministic, implies that, as the RBN proceeds through a sequence of states, it must eventually return to a pattern it had at some earlier time step, and from then on it must repeat the same pattern-sequence periodically. That is, it must be trapped in a re-entrant cycle of states, or an attractor in phase space. Each such state cycle or attractor represents a distinct temporal mode of behaviour of the net, and was equated by Kauffman with a distinct cell type (kidney, liver, etc.). Cell types differ only in the pattern of gene activity; they all carry the same genome. Shown in the adjoining figure is a periodic attractor (yellow) and its basin of attraction (cyan). Each point in the state space represents a network state.
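A minimal version of such a network can be simulated directly. The wiring and update tables below are random, in the spirit of Kauffman's model, and the cycle-finding loop exploits the finiteness argument above: a deterministic walk through a finite state space must eventually revisit a state. The network size, K value, and random seed are arbitrary illustrative choices.

```python
import random

# Minimal random Boolean network (RBN) sketch: N genes, each wired to K random
# inputs and updated through a random Boolean truth table. Since the state
# space is finite (2**N states) and the update rule deterministic, every
# trajectory must eventually enter a cycle (an attractor).
def make_rbn(n, k, rng):
    inputs = [rng.sample(range(n), k) for _ in range(n)]           # K inputs per gene
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[j] << pos for pos, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def attractor_length(step, state):
    """Run until a state repeats; return the length of the cycle entered."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

rng = random.Random(42)
step = make_rbn(16, 2, rng)
start = tuple(rng.randint(0, 1) for _ in range(16))
cycle = attractor_length(step, start)   # guaranteed finite, by the argument above
```

Starting the same network from different initial states and counting the distinct cycles reached would give the number of attractors, i.e. the number of ‘cell types’ in Kauffman's interpretation.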
Kauffman focussed his attention on ‘critical’ RBNs. These lie at the ‘edge of chaos,’ i.e. at the boundary between frozen networks and chaotic networks. Frozen networks have very short attractors or cycle lengths. And chaotic networks have large-sized attractors that may include a substantial portion of the phase space. To quote Kauffman:
Let’s talk about networks as a model of the genetic regulatory system. My claim is that sparsely connected networks in the ordered regime, but not too far from the edge (of chaos) do a pretty good job of fitting lots of features about real embryonic development, and real cell types, and real cell differentiation. And insofar as that’s true, then it is a good guess that a billion years of evolution has in fact tuned real cell types to be near the edge of chaos. So that’s very powerful evidence that there must be something good about the edge of chaos. So let’s say the phase transition is the place to be for complex computation. Then the second assertion is something like ‘Mutation and selection will get you there.’
[The edge-of-chaos idea is very important for understanding complexity and the origin and sustenance of life, and I shall discuss it in some detail in a separate article.]
Jacob and Monod’s cell types, distinguished from one another by the distinct and stable network patterns of gene activity, were interpreted by Kauffman as represented by different attractors in phase space. For K = 1 and for K = N the length of the attractor cycles is very large. But for K = 2, i.e. when there are two inputs per gene, the lengths of the cycles are very small, roughly scaling as ~√N for critical networks. For example, for N = 1000, i.e. for 2^1000 possible states of the network, the modelled genome was found to cycle typically through only about 30 states (√1000 ≈ 32), a remarkable result indeed. Kauffman also found that the number of cell types scales as √N, in line with the biological information available at that time.
Thus Kauffman demonstrated that highly ordered dynamical behaviour is typical even for randomly constructed genetic networks getting just a few inputs per component. This implied that homeostasis in living complex systems is a direct consequence of the high molecular specificity among the macromolecules involved. Similarly, cell differentiation reflects the capacity of complex adaptive systems to behave in several distinct, highly localized ways. Kauffman’s work established that complex genetic networks could come into being by spontaneous self-organization, without the need for slow evolution by trial and error. After all, the whole thing had to be there together, and not partially, to function at all. He also established that genetic regulatory networks are no different from neural networks.
Kauffman’s work, though extremely important and path-breaking, was handicapped by the limited computational power available at that time, as also the limited nature of biological data. We now know that the number of genes (N) is not proportional to the mass of DNA, contrary to what was assumed by biologists at that time; it is much smaller for higher organisms. And, for larger N, the number of attractors increases with N much faster than √N. In fact, both the attractor number and the attractor length of K = 2 networks increase with the size of the network faster than any power law.
12.6 Freeman Dyson Revisited
I summarize here an updated version of Dyson’s ideas, as given in the recent (2008) book Life: What a Concept! In his model, there are six stages in the evolution of chemical complexity, leading to the emergence of life as we see it today.
Stage 1. The early cells were just little bags of some kind of cell membrane; this is the ‘garbage bag model’ for Stage 1. And inside the bag there was a more or less random collection of organic molecules, with the characteristic that small molecules could diffuse in through the membrane, but big molecules could not diffuse out. The ‘garbage bag’ situation was conducive to the conversion of small molecules into large molecules. And the higher concentration of organic material in the bag led to a higher efficiency of the chemical processes involved. This was conducive to fairly rapid evolution of chemical complexity.
And this evolution did not involve any replication processes. ‘When a cell became so big that it got cut in half, or shaken in half, by some rainstorm or environmental disturbance, it would then produce two cells which would be its daughters, which would inherit, more or less, but only statistically, the chemical machinery inside. Evolution could work under those conditions. In Stage 1, evolution was happening, but only on a statistical basis. This was pre-Darwinian evolution.’
Stage 2. Parasitic RNA appeared in some of the cells in Stage 2. ATP had appeared in one of the garbage bags by a random process in Stage 1, and the cell hosting it had a metabolic advantage over other cells. Therefore many cells with large amounts of ATP got created. Then, again by chance, ATP changed to AMP in one of the cells, and AMP is nothing but the adenine nucleotide. In due course, AMP and its chemical cousins polymerized into a primitive form of RNA. Thus there was parasitic RNA inside these cells, forming a separate form of life, which was pure replication without metabolism. To quote Dyson: ‘Then the RNA invented viruses. RNA found a way to package itself in a little piece of cell membrane, and travel around freely and independently. Stage two of life has the garbage bags still unorganized and chemically random, but with RNA zooming around in little packages we call viruses carrying genetic information from one cell to another. That is my version of the RNA world. It corresponds to what Manfred Eigen considered to be the beginning of life, which I regard as stage two. You have RNA living independently, replicating, travelling around, sharing genetic information between all kinds of cells.’
Stage 3. This stage started when the protein and the RNA systems started to collaborate. This happened after the emergence of the ribosome. Although this arrangement had the rudiments of the modern cell, the genetic information was shared mostly via viruses travelling from cell to cell. This was some kind of open-source heredity. The chemical inventions made by one cell could be shared with others. Evolution went on in parallel in many different cells. The best chemical devices could be shared between different cells and combined, so the chemical evolution was very rapid, as it occurred in parallel by many pathways. This is when most of the basic biochemical inventions must have been made.
The emergence of the ribosome is still a scientific mystery. This is one reason why I did not dwell on it when I discussed in Part 9 the role of autocatalytic sets of molecules for explaining the emergence of complex molecules. The ribosome plays a crucial role in the production of proteins in the cell. This production involves the transcription of a stretch of DNA into a portable form, namely the mRNA. The mRNA travels to the cytoplasm of the cell, where the information is conveyed to the ribosome. This is where the code is read, and the corresponding amino acid is brought into the ribosome. Each amino acid comes connected to a specific tRNA molecule. There is a three-letter recognition site on the tRNA that is complementary to, and pairs with, the three-letter code sequence for that amino acid on the mRNA.
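The reading step described above can be sketched as a simple lookup. The snippet below is a toy illustration only: it includes just a few entries of the standard genetic code (not the full 64-codon table), it ignores the tRNA machinery entirely, and the mRNA sequence is made up.

```python
# Toy sketch of the ribosome's reading step: walk the mRNA three bases at a
# time and look up each codon. Only four entries of the standard genetic code
# are included here, purely for illustration.
CODONS = {"AUG": "Met",   # methionine (also the start codon)
          "UUU": "Phe",   # phenylalanine
          "GGC": "Gly",   # glycine
          "UAA": "STOP"}  # stop codon

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODONS.get(mrna[i:i + 3], "?")   # unknown codons flagged as '?'
        if amino == "STOP":                      # a stop codon ends the chain
            break
        peptide.append(amino)
    return peptide

translate("AUGUUUGGCUAA")   # -> ['Met', 'Phe', 'Gly']
```

In the real cell, each lookup is performed physically: a tRNA whose three-letter anticodon pairs with the codon delivers the corresponding amino acid into the ribosome.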
Stage 4. Speciation and sex appeared in Stage 4, and that marked the beginning of the Darwinian era, when species appeared. ‘Some cells decided it was advantageous to keep their intellectual property private, to have sex only with themselves or with the members of their own species, thereby defining species. That was then the state of life for the next two billion years, the Archeozoic and Proterozoic eras. It was a rather stagnant phase of life, continued for two billion years without evolving fast.’
Stage 5. Multicellular organisms appeared in Stage 5, which also involved death.
Stage 6. This is the stage when we humans appeared.
12.7 The RNA-World Hypothesis
At present there is a strong section of opinion, embodied in the so-called RNA-World hypothesis, according to which RNA acted both as an information-storage molecule and as an enzyme at an early stage in the appearance of life. In other words, life started as nude replicating RNA molecules. This view of the origin of life had its genesis in the discovery, made in the mid-1980s by Thomas Cech and coworkers, that certain RNA sequences called ribozymes can themselves act as enzymes and catalyze reactions. The dual functionality of RNA might have allowed for the existence of an RNA species that could replicate itself and thus seed the beginning of molecular evolution. RNA is indeed known to be involved in a number of fundamental cell biological processes. Moreover, the ribosome is made up largely of RNA sequences, along with some proteins, and the ribosome machinery is almost identical throughout the living world; perhaps it existed almost from the beginning of life on Earth.
Several scientists have expressed reservations about this model. I shall quote the objections of one of them, namely Stuart Kauffman, author of the 1995 book At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, and the 2000 book Investigations.
- It is difficult to get RNA strands to reproduce in a test tube. ‘No one has succeeded in achieving experimental conditions in which a single-stranded DNA or RNA could line up free nucleotides, one by one, as complements to a single strand, catalyze the ligation of the free nucleotides into a second strand, melt the two strands apart, then enter another replication cycle. It just has not worked’ (Kauffman 2000).
- Even if life did tend to originate and evolve by the RNA route, naked RNA molecules must have suffered an ‘error catastrophe’ during the replication processes, thus corrupting the genetic message from generation to generation. In present-day cells, such errors (mutations) are kept to a minimum by ‘proofreading’ and ‘editing’ enzymes.
- RNA-based life, even if it did emerge, was not complex enough to sustain itself. In other words, it was too far from the edge of chaos where complexity thrives best. Why are viruses not alive? Why is it that the simplest free-living cells are the so-called pleuromona, and nothing less complex than them? Pleuromona are the simplest known bacteria, and they are complete with cell membrane, genes, RNA, protein-synthesizing machinery, and proteins. All free-living cells have at least the minimal molecular diversity of pleuromona. Why does nothing simpler exist that is alive on its own? The nude RNA (or nude ribozyme polymerase) idea for the origin of life offers no decent explanation for the observed minimum necessary complexity of any life form.
- The explanation, discussed in detail by Kauffman in his trilogy of books (culminating in the 2000 book Investigations), has to do with the all-important self-organization feature of complex adaptive systems which makes them gradually but inexorably climb the complexity ladder till they reach the ‘phase transition’ region (or the edge of chaos) in state space. Once there, they tend to stay there. Nude RNA was probably not complex enough to have self-propagated and survived as a life form.
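The ‘error catastrophe’ mentioned above can be made quantitative with a back-of-the-envelope version of Eigen’s error-threshold argument (the numbers below are illustrative assumptions, not measured values): if each nucleotide is copied with error rate u, and the fittest ‘master’ sequence replicates s times faster than its mutants, the master sequence can be maintained against mutation only while the chance of an error-free copy, (1 − u)^L, exceeds 1/s. Solving for the genome length L gives the maximum genome a given copying fidelity can sustain:

```python
import math

def max_genome_length(error_rate: float, advantage: float) -> float:
    """Eigen error threshold: the master sequence persists only while
    the probability of an error-free copy, (1 - u)^L, exceeds 1/s.
    Solving (1 - u)^L = 1/s for L gives the maximum genome length."""
    return math.log(advantage) / -math.log(1.0 - error_rate)

# Order-of-magnitude error rates (assumptions for illustration):
# ~1e-2 for unassisted RNA copying, ~1e-4 for viral RNA polymerases,
# ~1e-9 for proofread DNA replication.  Selective advantage s = 20
# is likewise an arbitrary illustrative figure.
for u in (1e-2, 1e-4, 1e-9):
    print(f"error rate {u:g}: max length ~ {max_genome_length(u, 20):,.0f} nt")
```

With an error rate of about 1 in 100, only genomes of a few hundred nucleotides can be maintained; proofreading enzymes push the limit into the billions. This is why naked RNA replication, lacking any ‘proofreading’ and ‘editing’ machinery, would corrupt its genetic message within a modest number of generations.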
‘I wish to say that life is an expected, emergent property of complex chemical reaction networks. Under rather general conditions, as the diversity of molecular species in a reaction system increases, a phase transition is crossed beyond which the formation of collectively autocatalytic sets of molecules suddenly becomes almost inevitable. If so, we are birthed by molecular diversity, children of second-generation stars.’
Stuart Kauffman, Investigations (2000)
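Kauffman’s phase transition can be illustrated with a toy calculation (a drastically simplified version of his argument, not his full model): for binary polymers up to length L, the number of possible ligation reactions grows faster than the number of molecules. So if each molecule catalyzes any given reaction with some small fixed probability p (the value below is an arbitrary assumption), the expected number of catalyzed reactions per molecule eventually exceeds one, and a connected, collectively autocatalytic web becomes hard to avoid:

```python
# Toy version of Kauffman's diversity argument: count binary polymers
# of length 1..L and the ligation reactions that form them, then ask
# how many reactions a random molecule is expected to catalyze.

def molecules(L):
    # binary polymers of length 1..L: 2 + 4 + ... + 2^L
    return sum(2**k for k in range(1, L + 1))

def ligations(L):
    # a polymer of length n can be formed by joining at any of its
    # n - 1 internal bonds, so reactions = sum over n of (n-1) * 2^n
    return sum((n - 1) * 2**n for n in range(2, L + 1))

p = 1e-6   # assumed probability that a given polymer catalyzes a given reaction
for L in (5, 10, 15, 20):
    m, r = molecules(L), ligations(L)
    print(f"L={L:2d}: molecules={m:,} reactions={r:,} "
          f"catalyzed per molecule={p * r:.3f}")
```

With these (arbitrary) numbers the crossover happens between L = 15 and L = 20: the catalyzed reactions per molecule climb from below one to well above it. Diversity itself drives the system across the threshold, which is the crux of ‘we are birthed by molecular diversity’.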
12.8 The First Step towards the Evolution of Consciousness
The emergence of self-replicators like RNA and DNA marked the first step towards the evolution of consciousness. Of course, nobody equates a self-replicator with a conscious entity. But a self-replicator that has repeatedly survived the depredations of the second law of thermodynamics, namely its own decay into a state of disorder and destruction, must have had something that functioned as a reason for surviving.
In the beginning, there were no reasons, only causes and effects. No self-interests, no purpose, no function, no teleology. The emergence of replicators changed all that. The fact that some of them have survived means (in anthropomorphic terms) that they had a kind of ‘interest’ in self-replication.
The blind forces of Nature did not distinguish between a piece of rock and a replicator. Nobody cared (there was nobody to care) whether or not a rock or a replicator survived for any length of time. Yet a certain kind of replicator has indeed survived by repeated self-replication, even though nobody did anything deliberately to ensure its survival. Survival by self-replication requires a suitably conducive environment. There were all kinds of replicators (chemical entities) to start with, but those which could avoid the ‘bad’ conditions and seek the ‘good’ ones had a better chance of survival. This was natural selection at the molecular level.
Such a successfully self-replicating entity thus ‘creates’ for itself a ‘point of view’, according to which it partitions the environment into ‘favourable’, ‘unfavourable’, and ‘neutral’. If the chemical entity is more likely to ‘seek’ favourable environments and ‘avoid’ unfavourable ones, it has the equivalent of what we humans recognize as ‘self-interest’. The chemical entity is not doing anything ‘consciously’, but the end result is the same. As Daniel Dennett (1984) pointed out, once an entity comes to have ‘interests’ and is a ‘problem-solver’, the world and its events begin creating reasons for it. The first problem faced by such primitive problem-solvers was to ‘learn’ how to recognize and act on the reasons that their very existence brought into existence.
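This molecular-level selection can be mimicked in a few lines. The sketch below is a hypothetical toy model with made-up rates: two replicator variants suffer the same decay each generation, but one (the ‘seeker’) ends up in favourable, replication-permitting conditions more often than the other (the ‘drifter’). Nothing in the code ‘intends’ anything, yet one variant comes to dominate:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Two replicator variants (names and rates are invented for illustration):
# the "seeker" finds favourable conditions 90% of the time, the "drifter"
# only 50%.  Replication succeeds only in favourable conditions; decay
# (the second law's toll) removes a fixed fraction each generation.
pop = {"seeker": 10, "drifter": 10}
P_GOOD = {"seeker": 0.9, "drifter": 0.5}
DECAY = 0.4
CAP = 1000   # finite resources in the environment

for generation in range(40):
    for kind in pop:
        survivors = sum(random.random() > DECAY for _ in range(pop[kind]))
        offspring = sum(random.random() < P_GOOD[kind] for _ in range(survivors))
        pop[kind] = survivors + offspring
    total = sum(pop.values())
    if total > CAP:  # cull proportionally to the resource limit
        for kind in pop:
            pop[kind] = int(pop[kind] * CAP / total)

print(pop)  # expect the seeker to dominate, the drifter to dwindle
```

The seeker’s expected multiplier per generation is 0.6 × 1.9 ≈ 1.14 (growth), the drifter’s 0.6 × 1.5 = 0.9 (decline), so differential survival alone, with no foresight anywhere, partitions the outcome exactly as described above.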
What is more, boundaries become important for any self-preserving entity. The entity must ‘know’ what to preserve; the boundaries limit and determine what needs to be preserved by self-replication. This primordial form of ‘selfishness’ is a characteristic of life. The distinction between everything on the inside of a closed boundary and everything outside is a central feature of all biological processes.
Thus the emergence of self-replicating entities in Nature led to:
- reasons to recognize;
- points of view from which to recognize or evaluate; and
- the need to distinguish between ‘here inside’ and ‘the external world.’
The point of view of a modern-day conscious observer is, of course, not identical to, but is a sophisticated descendant of, the primordial points of view of the first self-replicators which divided their worlds into good and bad.
12.9 Concluding Remarks
First, an updated (2008) summary of Dyson’s model, in his own words: ‘The essential idea (regarding the origin of life) is that you separate metabolism from replication. We know modern life has both metabolism and replication, but they’re carried out by separate groups of molecules. Metabolism is carried out by proteins and all kinds of other molecules, and replication is carried out by DNA and RNA. That maybe is a clue to the fact that they started out separate rather than together. So my version of the origin of life is that it started with metabolism only.’
I mentioned the RNA-world hypothesis in Section 12.6, which is at variance with what Dyson and Kauffman have been emphasizing. I am inclined to agree with Kauffman that the RNA-world hypothesis is probably not a good one because it ignores the minimum-necessary-complexity requirement for a live system to sustain and propagate itself. The tendency of complex adaptive systems to exist at the edge of chaos (which I shall discuss in Part 14 of this series) is another argument in favour of Dyson’s model, which involves the existence of proteins before the emergence of RNA.
Following Dennett (1984), I have made an important point in this article that the emergence of self-replicators like RNA and DNA provided the first reason for the evolution of consciousness.
In 1944, Oswald Avery successfully converted one strain (the so-called R-strain) of the pneumococcus bacterium into another (the S-strain) by exposing the R-strain to an extract of the heat-killed S-strain (this extract was shown to consist of pure DNA). In June 2007, Craig Venter announced the results of the work done in his laboratory on genome transplantation. He reported the successful transformation of one type of bacteria into another; the new bacterium was dictated entirely by the transplanted chromosome. In other words, one species became another. We can say that he created life in the form of a new species (without any ‘divine’ intervention). This was an event of enormous significance. Life had been created in the laboratory, even though there was a concomitant annihilation of a different form of life. The next target is to create life starting from ‘scratch’, i.e. by not using any precursors derived from living organisms. I have no doubt that this will happen in the near future. Such is the power of the scientific method we humans have invented and nurtured.