COMPLEXITY EXPLAINED: 11. Cellular Automata

December 10, 2009

(Note: All previous parts of Dr. Wadhawan’s series ‘Complexity Explained’ can be accessed through the Related Posts list at the bottom of this article.)


There is a distinction between replication and reproduction. Probably the earliest living entities were able to reproduce but not to replicate. Cells can reproduce, but only molecules can replicate. For such primitive cells, reproduction means dividing into two cells, with the daughter cells inheriting approximately equal shares of the constituents of the parent cell. By contrast, replication for a molecule means the creation of an exact copy of itself by suitable chemical processes. Here I describe John von Neumann's computer-simulation studies on self-reproduction, after introducing the notion of cellular automata. Studies on cellular automata help us understand life processes.

11.1 Introduction

Present-day life processes involve both metabolic reproduction and replication. Freeman Dyson (1985) argued that these are logically separable propositions, and he pointed out that Darwinian natural selection does not require replication, at least for simple creatures. According to Dyson, it is likely that life originated twice, with two separate kinds of organisms, one capable of metabolism without exact replication, and the other capable of replication without metabolism. At some stage the two features came together. He suggested that the earliest living creatures were probably able to reproduce but not to replicate. I shall discuss Dyson's dual-origin hypothesis of life in the next article in this series. Here I focus on some computer-simulation aspects of replication and reproduction.

An automaton has two components, which are now known by the names hardware and software. Roughly speaking, software embodies information, and hardware processes information. And the rough analogy to biology is: nucleic acid is software, and protein is hardware. Usually, protein is the essential component for metabolism, and nucleic acid is the essential component for replication. An automaton that has only hardware but no software can exist independently and maintain its metabolism so long as it finds food to eat or numbers to crunch. By contrast, an automaton that has only software but no hardware can lead only a parasitic existence (e.g. viruses).

11.2 Cellular Automata

[Image: John von Neumann]

The notion of cellular automata was put forward in the 1940s by Stanislaw Ulam, a colleague of John von Neumann. What Ulam suggested to von Neumann was to consider a digital programmable universe in which time is imagined as defined by the ticking of a cosmic clock, and space is a discrete lattice of cells, each cell occupied by an abstractly defined, very simple computer called a finite automaton. Very simple local rules determine the state of any cell at any discrete point of time. There are only a finite number of states available to a cell or automaton. These states could be, say, a few colours, or a few integers, or just 'dead' or 'alive'. At each tick of the digital clock, every automaton changes over to a new state determined by its present state and the present states of the neighbouring cellular automata (CA). The rules by which the state of each automaton changes at a given instant of digital time are the equivalent of the physical laws of the universe. There is thus a state-transition table, which describes how each automaton changes for each of the possible configurations of the states of the neighbouring cells.
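To make the idea of a state-transition table concrete, here is a minimal sketch in Python (an illustration added for this discussion, not code from the article). It implements a one-dimensional, two-state automaton whose table maps every configuration of a cell and its two neighbours to the cell's next state; the particular 'majority vote' rule used here is chosen arbitrarily, just to show the machinery.

# A one-dimensional cellular automaton with two states per cell (0 or 1).
# The state-transition table maps each neighbourhood (left, centre, right)
# to the cell's state at the next tick of the clock. The rule encoded here
# is a simple 'majority vote' among the three cells, purely for illustration.

TRANSITION_TABLE = {
    (0, 0, 0): 0, (0, 0, 1): 0, (0, 1, 0): 0, (1, 0, 0): 0,
    (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 1,
}

def tick(cells):
    """One tick of the cosmic clock: every cell is updated simultaneously."""
    n = len(cells)
    return [TRANSITION_TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1]
for _ in range(4):
    print(row)
    row = tick(row)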

11.3 John Conway’s ‘Game of Life’

A particularly popular example of CA is the Game of Life, invented around 1970 by John Conway. It provides a graphic demonstration of 'artificial evolution', because the fascinating evolving structures can be seen on a computer screen. One starts with a 2-dimensional lattice of square cells, each cell randomly taken to be either black or white. Let black mean that the 'creature' denoted by a cell is 'alive', and let white mean that the corresponding creature is 'dead'. Very simple local rules are introduced for how the cells will change from one time step to the next. A live cell with two or three live (black) neighbours stays alive, and a dead cell with exactly three live neighbours becomes alive. If the number of live neighbours is less than two, a live cell dies of 'loneliness', and if the number is more than three, it dies of 'overcrowding'. Remarkable patterns emerge on the computer screen when this program is run. Every run is different, and it is not possible to predict or exhaust all possibilities, in keeping with what one would expect from a complex system. The live cells organize themselves into coherent and ever-changing patterns, like real creatures in Nature.
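Here is a minimal sketch of these rules in Python (added as an illustration; it is not code from the article). It uses a small grid with dead cells beyond the boundary and seeds it with a 'glider', one of the simple patterns that travels coherently across the grid.

# Conway's Game of Life on a small rectangular grid: 1 = alive, 0 = dead.
# A live cell survives with two or three live neighbours; a dead cell
# becomes alive with exactly three; every other cell is dead at the next step.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(grid[rr][cc]
                       for rr in range(max(0, r - 1), min(rows, r + 2))
                       for cc in range(max(0, c - 1), min(cols, c + 2))
                       if (rr, cc) != (r, c))
            if grid[r][c] == 1:
                new[r][c] = 1 if live in (2, 3) else 0
            else:
                new[r][c] = 1 if live == 3 else 0
    return new

# Seed a 10 x 10 grid with a 'glider' and watch it crawl across the screen.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r][c] = 1

for _ in range(4):
    print('\n'.join(''.join('#' if cell else '.' for cell in row) for row in grid))
    print()
    grid = life_step(grid)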

11.4 Wolfram’s ‘New Kind of Science’

What secret is it that allows nature, seemingly so effortlessly, to produce so much that appears to us so complex?

Stephen Wolfram

Studies on cellular automata got a big boost through the work of Stephen Wolfram. He introduced a whole new dimension to the study of complex systems. His monumental book on complexity, A New Kind of Science (NKS), published in 2002, has been widely read and continues to generate debate. I mentioned in Part 4 the notion of algorithmic irreducibility analyzed by Chaitin. That was information irreducibility. An information-irreducible digit stream is one for which there is no theory or model more compact than simply writing the string of bits directly: there is no program for calculating the string of bits that is substantially smaller than the string of bits itself. A somewhat different but related kind of irreducibility, namely computational irreducibility, was analyzed on the computer by Wolfram. (It is also called time-irreducibility, because the longer a computation is, the more time it takes to perform.) It pertains to physical and other systems for which there are no computational shortcuts, and for which the quickest way to see what the system will be like at a distant time is just to run the computer program (or the automaton) that is modelling the system. By contrast, a computationally reducible system is one which can be described by exact mathematical formulas that give the outcome at any chosen instant of time without working through all the time steps.
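The contrast can be made concrete with a toy example (an illustration added here, not Wolfram's own). For a reducible system such as a simple linear recurrence, a closed-form formula jumps straight to any future step; for an irreducible one, such as Wolfram's rule 30 automaton, no such shortcut is known and one must run through every step.

# A computationally reducible system: x_{t+1} = x_t + 3, with x_0 = 5.
# Its state at any time t is given directly by x_t = 5 + 3*t, so no
# step-by-step simulation is needed.

def x_by_formula(t):
    return 5 + 3 * t

def x_by_simulation(t):
    x = 5
    for _ in range(t):      # work through every time step
        x += 3
    return x

t = 1_000_000
assert x_by_formula(t) == x_by_simulation(t)   # same answer, vastly different effort

# For a computationally irreducible system (rule 30 is Wolfram's standard
# example), no formula like x_by_formula is known: to learn the state after
# t steps, one simply runs the automaton for t steps.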

[Image: Stephen Wolfram]

Wolfram is of the view that the complex phenomena we see in the world around us can usually be thought of as the running of simple computer programs, and that the best, and often the only, way to understand these phenomena is by modelling them on a computer, rather than by working out the consequences of idealized and approximate mathematical models based on a set of equations. Wolfram's idea of a 'simple program' typically has the following ingredients: a set of transformation or operation rules (usually local rules); data to operate on; and an engine that applies the rules to the data. In a cellular automaton, the data enter only at the beginning of the computation (the 'initial conditions'), and the engine keeps applying the same deterministic rules to the outputs of its previous applications of the rules. Extremely complex-looking patterns can be generated by any of a large number of simple programs investigated by Wolfram.

Shown here is a very simple 1-dimensional cellular automaton which generates a complex-looking (nested) pattern. It consists of a row of squares, each of which can be either black or white. Starting from just one such row of squares (the top row in the figure), each time the system is updated a new row of squares is created just below the previous row, following a simple rule. The rule operative in this figure says that a square in the new row should be black only if one or the other, but not both, of the two neighbours of the square directly above it is black. Shown in a separate figure at the bottom is a graphical depiction of this rule, in which 8 blocks are drawn, corresponding to the 8 conceivable configurations of three neighbouring cells, each configuration determining the colour of the cell in the next row. Starting with a single black square in the top row, this rule produces a complex pattern of nested triangles.

[Images: the rule 90 cellular automaton and the 8-block graphical depiction of its rule; originals at http://www.wolframscience.com/media/images/rules/rule90sequence.gif and http://www.wolframscience.com/media/images/rules/ElementaryRule90.gif]
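This update rule is Wolfram's 'rule 90': the new cell is black exactly when one, but not both, of the two cells diagonally above it is black, i.e. the XOR of those two cells. A minimal sketch in Python (added for illustration, with a periodic boundary for simplicity) reproduces the nested-triangle pattern on a text screen:

# Rule 90: a cell in the new row is black (1) exactly when one, but not both,
# of the two cells diagonally above it is black -- the XOR of the left and
# right neighbours of its predecessor. Boundaries are treated as periodic.

WIDTH, STEPS = 63, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1            # a single black square in the top row

for _ in range(STEPS):
    print(''.join('#' if c else ' ' for c in row))
    row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]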

Here is another beautiful example of what a simple cellular automaton can generate:

[Image: a colour pattern generated by a simple cellular automaton, from http://www.wolframscience.com/media/images/color_images/from_book/page181.gif]

Any process that follows definite rules can be regarded as a computation. Thus CA can carry out computations, as can Turing machines and many other systems in Nature. In computations carried out by humans on computers, the computer programs define the rules of computation. In Nature, the rules of computation are nothing but the laws of Nature.

The notion of a universal computer emerged from the work of Alan Turing in the 1930s, and it eventually launched the computer revolution. Turing demonstrated that it is possible to build universal machines with a fixed underlying construction which can nevertheless be made to perform different computations by being programmed in different ways. With suitable programming, any computer system and computer language can ultimately be made to perform exactly the same set of tasks. Can at least some of the CA be universal computers? 'Yes', according to Wolfram. He describes the construction of a CA that can act as a universal cellular automaton (UCA). The rule for this UCA is extremely simple; in fact it is a somewhat more complex version of the so-called 'rule 90', which we have illustrated in the black-and-white figure above. My recent article should be consulted for more details on this.

[Image: Alan Turing]

The immense number of CA examined by Wolfram has led to the formulation of his Principle of Computational Equivalence (PCE). According to this principle, almost all processes that are not ‘obviously simple’ correspond to computations that are of equivalent complexity. In other words, irrespective of the simple or complicated nature of the rules or the initial conditions of a process, any such process will always correspond to a computation of equivalent difficulty or sophistication.

The genesis of the PCE lies in the idea of computational universality: It is possible to construct universal systems that can perform essentially any computation, and which must therefore all be capable of exhibiting the highest level of computational sophistication. It does not matter how simple or complicated either the rules or the initial conditions for a process are; so long as the process itself does not look obviously simple (e.g. purely repetitive or purely ‘nested’), it will almost always correspond to a computation of equivalent sophistication. The PCE, though still under some debate, may have far-reaching consequences for science, and for much else.

Let us now see how the PCE rationalizes the rampant occurrence of computational irreducibility (or complexity) in Nature. For this we have to address the question of comparing the computational sophistication of the systems that we study with the computational sophistication of the systems that we use for studying them. The PCE implies that, once a threshold has been crossed, any real system must exhibit essentially the same level of computational sophistication. And this applies to our own perception and analysis capabilities also; according to the PCE they are not computationally superior to the complex systems we seek to observe and understand. Beyond a certain threshold, all systems are computationally equivalent in terms of complexity.

If predictions about the behaviour of a system are to be possible, it must be the case that the system making the predictions is able to outrun the system it is trying to make predictions about. But this is possible only if the predicting system is able to perform more sophisticated computations than the system under investigation. And the PCE does not allow that. Therefore, except for simple systems, no systematic predictions can be made about their behaviour at a chosen time in the future. Thus there is no general way to shortcut their process of evolution. In other words, most such systems are computationally irreducible.

As Wolfram emphasizes, the whole idea of doing science with mathematical formulas makes sense only for computationally reducible systems. For the others there are no computational shortcuts; practically the only way of knowing a future configuration is to actually run through all the evolutionary time steps. And Wolfram's NKS is ideally suited for that purpose. Exploiting the immense power of modern computers, one can generate a huge repertoire of the consequences of all sorts of simple programs, as embodied in the corresponding CA. For understanding the basics of a given complex system observed in Nature, one can try to see whether the observed behaviour pattern can be matched with any of the archived CA. If yes, then the simple program used for generating that particular CA pattern is the 'explanation' of the time or space evolution of the complex behaviour observed in the actual physical system under study (entailing a collapse in the degree of complexity of the system; cf. Part 5). Thus, rather than merely using CA to mimic the observed behaviour of complex systems, Wolfram advocates using them to reveal otherwise unknown aspects of the systems they model. It remains to be seen how far this will turn out to be a useful way of doing science.

11.5 Does Randomness Rule Our Universe?

Is there a fundamental deterministic rule from which all else follows? Conventional wisdom says that randomness is at the heart of quantum mechanics, and because of this randomness the universe has infinite complexity. Wolfram suggests that this may or may not be so. According to him, there may be no real randomness; only pseudo-randomness, like the randomness produced by random-number generators in a computer. The computer generates these numbers by using mathematical equations, and what we get are actually deterministic sequences of numbers.
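The kind of pseudo-randomness Wolfram has in mind can be illustrated with a classic linear congruential generator, the sort of deterministic rule behind many library random-number routines (the sketch is an illustration added here, not from the article). Given the seed, every 'random' number in the stream is completely determined.

# A linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
# Entirely deterministic -- the same seed always reproduces the same
# sequence -- yet the output looks random to casual inspection.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m            # scale to the interval [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(8)])   # looks random, but is not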

Wolfram gives the analogy of π = 3.1415926… Suppose you are given, not the whole equation, but only a string of digits coming from far inside the decimal expansion. It would look random, in the absence of complete knowledge. In reality it is only pseudo-random. Wolfram puts forward the viewpoint that, similarly, the randomness we see in the physical world may really be pseudo-randomness, and the physical world may actually be deterministic. It is simply that we do not know the underlying law, which may well be a simple CA for all we know. But there is also an important computational-irreducibility aspect to this scenario, as described above.

According to Wolfram, complexity in Nature follows from the existence of computational equivalence: ‘We tend to consider (a certain) behaviour complex when we cannot readily reduce it to a simple summary. If all processes are viewed as computations, then doing such reduction in effect requires us as observers to be capable of computations that are more sophisticated than the ones going on in the systems we are observing. But the PCE implies that usually the computations will be of nearly the same sophistication – providing a fundamental explanation of why the behaviour we observe must seem to us complex’ (cf. the website www.wolframscience.com).

There is a human or anthropic angle to the meaning of complexity. Let us go back to the expansion π = 3.1415926… It has only a small information content or degree of complexity: a small algorithm, using the fact that π is given by the ratio of the circumference of a circle to its diameter, can generate the entire information contained in it. But if we humans do not have knowledge of the underlying formula, and are given only a string of digits coming from far inside the decimal expansion, then the degree of complexity is just about as large as the length of the string and can, in principle, be arbitrarily large. For us humans, the degree of complexity of a system depends on our knowledge about the system. As more knowledge is acquired by us, the degree of complexity may keep collapsing. Of course, this happens only for systems which are not irreducibly complex. If the complexity of a system is irreducibly or intrinsically large, our increasing knowledge about the system can have little effect on its degree of complexity.
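The point that a very short program pins down the entire, unending digit string of π can be made concrete. The sketch below uses Gibbons' well-known 'spigot' algorithm (an illustration added here, not something discussed in the article): a few lines of integer arithmetic stream out as many decimal digits of π as we care to ask for.

# Gibbons' unbounded spigot algorithm for pi: a tiny, fully deterministic
# program whose output is the arbitrarily long digit string 3, 1, 4, 1, 5, ...
# The compressed (algorithmic) description of the digits is thus very small,
# even though the string itself looks random and never ends.

def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
print(''.join(str(next(digits)) for _ in range(20)))   # 31415926535897932384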

11.6 Self-Reproducing Automata

In the late 1940s, von Neumann became interested in the question: can a machine be programmed to make a copy of itself? Can there be self-reproducing machines? To bring out the essence of self-reproduction, von Neumann imagined a thought experiment. Consider a machine moving around on the surface of a pond which contains all sorts of machine parts. Our machine is a 'universal constructor' (rather like a universal computer); i.e., given a recipe for constructing any machine, it can search the pond for the right parts and construct the desired machine. In particular, it can construct a copy of itself, provided the requisite description is available to it.

But it is still not a self-reproducing machine, because the copy it has constructed of itself carries no description of its own with which to construct another copy. Von Neumann argued that for this to be possible, the original machine must have a 'description copier', i.e. a mechanism for duplicating the original description and attaching this duplicate description to the new copy it is constructing of itself. The offspring will then have the wherewithal for sustainable self-reproduction in this so-called von Neumann universe.

For testing his thought experiment, von Neumann used the cellular automata suggested by Ulam. He proved that there exists at least one cellular-automaton pattern which can reproduce itself. The pattern he found involved a large lattice of cells, with 29 possible states for each cell. This was an important result, because it meant that self-reproduction is possible in machines and is not confined to living beings.

Thus the description carried by any self-reproducing system (living or nonliving) must play two roles: it should serve as an algorithm that can be executed during the copying and constructing process, and it should serve as a data bank that can be duplicated and attached to the offspring. These two predictions were confirmed for real-life systems in 1953, when Watson and Crick determined the structure of the DNA molecule. The DNA molecule not only serves as a database for synthesizing various proteins etc., but it also unwinds and makes a copy of itself when the biological cell divides into two. Studies on CA thus help us understand life processes.
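The same dual role (instructions that are executed, and data that are copied verbatim into the offspring) shows up in a classic programming curiosity, the 'quine': a program that prints its own source code. The two-line Python example below is only a loose analogy added here, not something from the article; the string s is used once as the template that drives the print statement and once as the data that is copied into the output.

# The two lines below print an exact copy of themselves when run.
s = 's = %r\nprint(s %% s)'
print(s % s)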

11.7 Concluding Remarks

As demonstrated by the work on CA, very simple local rules can lead to enormous amounts of complexity. This is why, although the laws of Nature are simple, we see so much complexity around us, including biological complexity.

The emergence of RNA and DNA molecules resulted in the replication aspect of self-reproduction in Nature. The next article in this series will elaborate on this when I discuss the origin(s) of life on Earth.

Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai and an Associate Editor of the journal PHASE TRANSITIONS.


Dr. Vinod Wadhawan is a scientist, rationalist, author, and blogger. He has written books on ferroic materials, smart structures, complexity science, and symmetry. More information about him is available at his website. Since October 2011 he has been writing at The Vinod Wadhawan Blog, which celebrates the spirit of science and the scientific method.
