(For previous articles in the series check the links to ‘Related Posts’ that follow the article.)
Equilibrium is death, because equilibrium means a state of maximum entropy or disorder. Living beings are complex systems that need energy to fight entropy and stay away from a state of thermodynamic equilibrium. The immediate effect of intake of energy by a system at equilibrium is that it is driven away from equilibrium. In this article I discuss how complexity emerges in systems driven far away from equilibrium.
6.1 The Space We Live in
Complex systems usually involve a large number of interacting subunits. Some powerful concepts have been introduced in science for dealing with large assemblies of subunits. The notion of ‘phase space’ is one such basic concept. But let us first consider the ‘real’ or actual 3-dimensional space in which we live (actually the relevant number of dimensions here is four, with time as the special fourth dimension, very important for the evolution of complexity). Suppose we want to specify the location of a particle in this space. We introduce three reference axes or coordinate axes, one for each dimension of the 3-dimensional space. These three axes meet at a point which we call the origin of the coordinate axes. If we move a distance x along the first axis, followed by a distance y along the second axis, followed by a distance z along the third axis, we reach a point to which we assign the coordinates (x, y, z). If the point moves to a different location, it would have a different set of coordinates, say (x′, y′, z′).
Imagine a space which is 2-dimensional, rather than 3-dimensional. This means that now only two coordinates are needed to specify the location of any point; e.g. (x, y). In our real 3-dimensional space, we take certain things for granted. Have you ever wondered whether your life would have been possible in a 2-dimensional world? The answer is No. The picture shown here (courtesy Stephen Hawking, The Universe in a Nutshell) makes the point. The camel has an alimentary canal, just like you and I have. So there is a provision for input of matter and energy (in the form of food etc.), and there is a provision for excretion of waste products, thus ensuring that there is a flow of energy and matter through the complex system. Imagine the plight of the animal in 2-dimensional space (see picture). The alimentary canal will make the fellow fall apart! The relevant number of dimensions is a very important aspect of the physics of a problem.
We are alive in a 3-dimensional world, and we would not exist in a 2-dimensional world. Thus one can say that we exist because the world is 3-dimensional. One can also say that the world is 3-dimensional because we exist. This last statement is a trivial (almost tautological) version of the so-called anthropic principle. I have discussed the principle in a recent article, written for science students. In a 2-dimensional world, we would not be there to ask whether the world exists or not.
6.2 Phase Space or State Space
Imagine a system of N particles. At any instant of time, any particle in this assembly is at a particular point in space, so we can specify its location in terms of three coordinates, say (x, y, z). At that instant of time, the particle also has some velocity and momentum (momentum is mass multiplied by velocity). The momentum (being a vector) can be specified in terms of its three components, say (px, py, pz). Thus six parameters (x, y, z, px, py, pz) are needed to specify the position and momentum of a particle at any instant of time. Therefore, for N particles, we need to specify 6N parameters for a complete description of the system at any instant of time. For real systems like the molecules in a gas, the number N can be very large, being typically of the order of the Avogadro number (~10²³). Physicists cannot be happy with such a messy way of depicting such a system of N particles graphically, and they have solved the problem of graphical representation by using some imagination.
What they do is to imagine a hypothetical 6N-dimensional space, as follows. We have just now referred to the actual 3-dimensional space in which we live, and we have agreed to specify the coordinates (x, y, z) of a particle with reference to three coordinate axes, one for each coordinate. We have done something similar for the three momentum components (px, py, pz). Suppose we imagine a 6-dimensional space in which three of the axes are for specifying the position coordinates of a particle, and the other three axes are for specifying the momentum components of the same particle. In this imaginary ‘hyperspace’, the position and momentum of a particle at any instant of time can be represented by a single point. Similarly, for representing simultaneously the configuration of N particles, we imagine a 6N-dimensional space, and an appropriate point in this space (called phase space or state space) represents the state of the entire system of N particles at a given instant of time. As time progresses, this representative point in phase space traces a trajectory, called the phase-space trajectory. Such a trajectory records the history or time-evolution of the dynamical system.
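The idea of a phase-space trajectory can be sketched numerically for the simplest possible case: a single 1-dimensional harmonic oscillator, whose state at any instant is one point (x, p). The mass, spring constant, and time step below are illustrative choices of mine, not values from the article.

```python
# Sketch: the phase-space trajectory of a single 1-D harmonic oscillator.
# The state at any instant is one point (x, p); as time advances the
# point traces a closed curve in the (x, p) plane.

m, k = 1.0, 1.0          # mass and spring constant (assumed units)
x, p = 1.0, 0.0          # initial position and momentum
dt = 0.01                # integration time step

trajectory = []
for _ in range(1000):
    # Semi-implicit (symplectic) Euler step: update p first, then x.
    p -= k * x * dt
    x += (p / m) * dt
    trajectory.append((x, p))

# A conservative system stays on a curve of constant energy
# E = p**2/(2m) + k*x**2/2, so the orbit is a closed loop.
energies = [pj**2 / (2 * m) + k * xj**2 / 2 for (xj, pj) in trajectory]
print(min(energies), max(energies))   # nearly equal
```

For N interacting particles the same bookkeeping applies, except that the single point lives in a 6N-dimensional space instead of a 2-dimensional one.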
Some variations of the concept of an imaginary phase space or state space are: representation space; search space; configuration space; and so on. ‘Hyperspace’ is a general word for all these imaginary spaces.
6.3 Phase Transitions
Water exists as ice at low temperatures, as steam at high temperatures, and as liquid water at intermediate temperatures. The basic chemical species is H2O. Its boiling point at atmospheric pressure is 100°C, and its freezing point is 0°C. We say that H2O can exist in three phases: vapour, liquid, and solid (please note that the word ‘phase’ is being used here in a very different sense from what was meant above in our description of phase space). And we can go from one phase of water to another by changing some control parameter; typically temperature. Suppose the temperature is above 100°C. As we cool the system, the steam condenses to liquid water at 100°C. That is, it makes a transition from the vapour phase to the liquid phase, so we speak of a phase transition. If we go on cooling the system, another phase transition occurs at 0°C, when the liquid phase changes to ice, which is a crystalline solid phase.
Why do phase transitions occur? Very simple. Just invoke the second law of thermodynamics (cf. Part 3). The law says that all processes occur so as to minimize the overall free energy. At any particular temperature, that phase of H2O (or of any other system) will be favoured for which the free energy is the least. Suppose the temperature is, say, 30°C. At this temperature, the amount of thermal agitation is such that the liquid phase of water is the most stable (i.e. has the lowest free energy), compared to the other two competing phases, namely the vapour phase and the solid phase. As we cool the system, the degree of thermal agitation goes on decreasing, and at 0°C a different phase of water (namely ice) becomes a stronger contender for existence: The system can lower its free energy by a substantial amount by making a transition to the ice phase. Any phase transition occurs because the system can lower its free energy by undergoing the phase transition.
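This free-energy competition can be sketched numerically. In the toy Python sketch below the numbers for the enthalpy H and entropy S of each phase are made up purely for illustration (they are not real data for water); the only point is that the ordered phase, with low enthalpy but also low entropy, has the lowest free energy G = H − TS at low temperature, while the disordered phase wins at high temperature.

```python
# Toy free-energy competition (made-up numbers, not real data for water):
# at each temperature the stable phase is the one with the lowest
# free energy G = H - T*S.

phases = {
    # name: (enthalpy H, entropy S) in arbitrary units
    "solid":  (0.0, 1.0),
    "liquid": (6.0, 22.0),
    "vapour": (46.0, 119.0),
}

def stable_phase(T):
    """Return the phase with the lowest free energy G = H - T*S."""
    return min(phases, key=lambda name: phases[name][0] - T * phases[name][1])

for T in (0.2, 0.35, 0.5):
    print(T, stable_phase(T))   # solid, then liquid, then vapour
```

Each crossover, where the minimum of G jumps from one phase to another, is a phase transition.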
6.4 Symmetry Breaking
We are surrounded by symmetry and broken symmetry. Take the socks and shoes example. The two socks you wear are identical. We say that the pair possesses permutation symmetry. We can interchange (or permute) the two socks and the end result will be as if we did not perform any permutation operation. The permutation is a symmetry operation here.
What about the two shoes? They are not identical. But you intuitively feel that there is something symmetrical about the pair. They are mirror images of each other. The act of reflecting across a mirror is a symmetry operation in this case. The reflection of the left shoe across a mirror looks identical to the right shoe, and vice versa. We say that the pair of shoes possesses mirror symmetry.
The water example can be further used to make an important point regarding the evolution of complexity. Consider again the liquid-to-solid phase transition, occurring at 0°C. Which phase has more order, ice or liquid water? Ice has a nice and regular crystalline arrangement of molecules on a lattice, so its degree of order is high (or entropy is low) compared to the liquid phase. For liquid water the temperature is higher than for ice, so the molecules are not located at fixed positions; instead they are moving around in a chaotic fashion. Therefore liquid water has a higher degree of disorder (or entropy) compared to ice. What about the symmetry of the two phases?
A variety of symmetries are encountered in Nature. Here we are talking about the symmetry of the atomic or molecular structure of liquid water and of ice. Ice is a crystal. This means that we can identify a certain small building block in it (called the unit cell) in which the molecules of H2O are arranged in a certain fixed manner, and we can generate the entire crystal of ice by repeatedly stacking this building block in a space-filling fashion along three specific directions. Thus, even though the total number of molecules in a typical crystal of ice is very large, only a small amount of information is needed for specifying the positions of all the atoms: All we need to know is the coordinates of the very few atoms in the unit cell, and the magnitudes and directions of the three repeat vectors (called the lattice vectors). The ice crystal has more order compared to liquid water, because the latter has near-random and changing positions and orientations of the molecules of H2O. The liquid phase is also more symmetric compared to the crystalline phase.
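The economy of information in a crystal can be made concrete with a toy sketch. The unit cell and lattice vectors below describe a made-up cubic crystal (not the real structure of ice); the point is that a handful of numbers generates every atom position in an arbitrarily large block.

```python
# Sketch: a crystal needs very little information -- the positions of
# the few atoms in one unit cell plus three lattice (repeat) vectors.
# Stacking the cell along those vectors generates every atom position.
# (The basis and vectors below are a made-up cubic example, not real ice.)

basis = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]     # atoms inside one unit cell
a1, a2, a3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)   # lattice (repeat) vectors

def crystal(n):
    """All atom positions in an n x n x n block of unit cells."""
    atoms = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for (bx, by, bz) in basis:
                    atoms.append((i*a1[0] + j*a2[0] + k*a3[0] + bx,
                                  i*a1[1] + j*a2[1] + k*a3[1] + by,
                                  i*a1[2] + j*a2[2] + k*a3[2] + bz))
    return atoms

print(len(crystal(10)))   # 2 atoms per cell x 1000 cells = 2000 atoms
```

A liquid, by contrast, admits no such compression: every molecule's (changing) position would have to be listed separately.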
This may sound counter-intuitive, as we tend to associate symmetry with order, rather than with disorder. But we are talking here about the symmetry of the atomic or molecular structure of the two phases of water. To be concrete, let us talk about the directional symmetry of the atomic structure: What are the directions along which the atomic structure looks the same? The atomic structure of liquid water looks the same along all directions (we say it is isotropic), so it has much higher symmetry compared to that of a crystal of ice. The regular arrangement of atoms in ice on a lattice gives it the property of anisotropy: the atomic structure does not look the same when viewed from different directions. It may look the same only along some specific directions. Thus a disordered state is more symmetric than an ordered state.
This means that a symmetry-lowering or symmetry-breaking phase transition occurs when liquid water changes to ice. And it is ‘spontaneous’, in the sense that all we did was to lower the temperature across the phase transition. Such symmetry-breaking is ubiquitous in Nature, and is at the heart of how complexity builds up ‘spontaneously’. Strictly speaking, it is not spontaneous because we are dealing with non-isolated systems, so it is energy which drives this change. Nevertheless, what is spontaneous is that a certain lower symmetry emerges which is an inherent property of the system at that temperature, and was not imposed from the outside: It is self-organized.
There is a relationship between order and the degree of complexity of a system (cf. Part 4). After the Big Bang, the universe has been cooling and expanding. Spontaneous breaking of symmetry has occurred again and again, and this breaking of symmetry of various types is responsible for the myriad evolutions of complexity. Every lowering of symmetry leads to the emergence of new order, and order and complexity have a deep connection. Jean-Marie Lehn (2002) described it remarkably well:
As the wind of time blows into the sails of space,
the unfolding of the universe nurtures the evolution of matter
under the pressure of information.
From divided to condensed
and on to organized, living, and thinking matter,
the path is toward an increase in complexity through self-organization.
When a system is driven sufficiently away from equilibrium, it undergoes a ‘bifurcation’ in phase space. The idea of bifurcations was developed by Ilya Prigogine and coworkers. In a way, a bifurcation is a more general or ‘liberal’ example of a phase transition, and is a very common phenomenon in nonlinear dynamical systems pushed far away from equilibrium. At a bifurcation point in phase space, the system has two choices (in the so-called ‘pitchfork bifurcation’), and the choice actually made is purely random, i.e. a matter of chance. It is this chance aspect which is responsible for the unpredictable nature of evolution of complexity.
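The pitchfork bifurcation can be sketched with the standard textbook equation dx/dt = rx − x³ (my choice of illustration; the article itself names no equation). For r ≤ 0 the only stable state is x = 0; for r > 0 that state becomes unstable and two new stable states appear at x = ±√r. A vanishingly small random fluctuation decides which branch the system settles on.

```python
import random

# Sketch of a pitchfork bifurcation: dx/dt = r*x - x**3.
# Past the bifurcation (r > 0) the state x = 0 is unstable, and the
# system falls onto one of the two stable branches x = +sqrt(r) or
# x = -sqrt(r).  A tiny random kick decides which.

def settle(r, noise, steps=20000, dt=0.01):
    x = noise                      # infinitesimal random kick off x = 0
    for _ in range(steps):
        x += (r * x - x**3) * dt
    return x

random.seed(1)
r = 1.0                            # control parameter past the bifurcation
final_states = [settle(r, random.uniform(-1e-6, 1e-6)) for _ in range(5)]
print([round(x, 3) for x in final_states])   # each is close to +1 or -1
```

Which sign each run ends up with tracks the sign of the unpredictable initial fluctuation, which is exactly the chance element described above.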
Let us hark back to the phase-transition analogy to illustrate the statement regarding the random nature of the choice made by a system at a bifurcation point in phase space. Iron is a crystalline material, and we are all familiar with the fact that a piece of iron at room temperature can be processed to act like a magnet. The processing essentially amounts to taking the piece of iron to a high enough temperature, and then cooling it slowly under the influence of a magnetic field. What happens in this ‘poling’ process is something like this:
There is a (solid-to-solid) phase transition. At high-enough temperatures, iron exists in a so-called paramagnetic phase. And at a certain phase-transition temperature it changes to a ferromagnetic phase. So we speak of a ferromagnetic phase transition. In the ferromagnetic phase, each unit cell of the crystal is a tiny magnet. As is familiar, a magnet has a ‘north pole’ and a ‘south pole’, and we can draw a line connecting the two, which gives the direction of the magnetization or the ‘magnetic dipole’. We use a more technical term for it, particularly at the unit-cell level: We say that the south-north direction corresponds to a ‘spin up’ configuration, and a north-south direction corresponds to a ‘spin down’ configuration. Any particular portion of a crystal of iron in the ferromagnetic phase can opt for a spin-up configuration or a spin-down configuration. Even the most minor of thermal or other fluctuations can push a particular portion of the system to one bifurcation branch (‘ferromagnetic domain’) or the other. The nature of the fluctuation that happens to occur at the moment the bifurcation point is crossed cannot be predicted.
Thus there are spin-up and spin-down domains in the ferromagnetic phase of iron. The poling process mentioned above amounts to coaxing the domains to align preferentially along the direction of the external magnetic field. In the absence of poling, a piece of iron in the ferromagnetic phase cannot act as a good magnet, because there are just about as many domains pointing one way as the other, thus cancelling out the net magnetization. After the poling, the piece of iron as a whole has a net magnetization, making it act like a magnet.
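The cancellation and its removal by poling can be sketched with a toy model (my simplification, not a real micromagnetic calculation): treat each domain as independently choosing spin up (+1) or spin down (−1), with the external field biasing the choice.

```python
import random

# Toy sketch of poling: each ferromagnetic domain independently picks
# spin up (+1) or spin down (-1).  With no external field the choice
# is 50/50, so the domain magnetizations nearly cancel; a field biases
# the choice and a net magnetization survives.
# (Illustrative probabilities, not a real micromagnetic model.)

def magnetization(n_domains, p_up):
    """Mean magnetization of n_domains, each +1 with probability p_up."""
    spins = [1 if random.random() < p_up else -1 for _ in range(n_domains)]
    return sum(spins) / n_domains

random.seed(0)
unpoled = magnetization(100_000, p_up=0.5)   # no field: choices cancel
poled   = magnetization(100_000, p_up=0.9)   # field biases domains upward
print(unpoled, poled)    # unpoled near 0, poled near 0.8
```

The unpoled piece has essentially zero net magnetization even though every individual domain is fully magnetized; the field merely tips the otherwise random branch choices in one direction.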
Countless examples of bifurcations and ‘generalized phase transitions’ in complex systems can be given. A laser is a case in point. After a laser system has been assembled, a certain fine-tuning is needed to effect the lasing action. Thus there is a control parameter, and at some critical value of this parameter a bifurcation occurs in state space, resulting in a kind of ‘phase transition’ to an ordered state characteristic of any laser, namely a state in which coherent emission of radiation occurs. Lasing action is an emergent phenomenon, arising in a complex system. There is an emergence of order out of disorder at the bifurcation point in phase space.
To summarize, I have explained the important notion of bifurcations for understanding the evolution of complexity. A bifurcation may occur when a system is pushed sufficiently far away from equilibrium. Commonly, there is also a concomitant breaking of symmetry of some kind, which results in a more ‘ordered’ state (like ice in the example of H2O). This ordering is spontaneous or self-organized, in the sense that it is a property of the system, and not something imposed from the outside. Each such ordered or self-organized state can set the stage for the emergence of the next level of order or complexity, because bifurcations can occur repeatedly if the system is pushed more and more away from equilibrium. The choice that a system makes between the alternative branches of the bifurcation is a purely random event, and thus it cannot be predicted. The history of our universe is one grand saga of the successive bifurcations that just happened to have occurred that way. A different set of bifurcations would have led to a totally different universe, perhaps with no emergence of conscious beings, and therefore no discussion of free will and consciousness etc. It is difficult to understand why many people still want to think in terms of a prayer-answering God who created everything, down to the last detail.
As discussed in previous articles in this series, there is a thermodynamic approach to understanding complexity, and there is an information-theoretic approach. We have taken the thermodynamic route in this article for understanding the evolution of complexity. The information-theoretic approach, though equivalent, provides some new insights into how complexity evolves. I refer the reader to the great book by Seth Lloyd (2006): Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Lloyd is a pioneer, who is actually involved in the design and construction of quantum computers. This book is a fascinating introduction to the world of what I may call ‘applied quantum mechanics.’
For the time being, let Doyne Farmer have the last word:
It will be interesting to see if we can articulate a notion of ‘progress’ that would involve emergent structures having certain feedback loops (for stability) that weren’t present in what went before. The key is that there would be a sequence of evolutionary events structuring the matter in the universe in the Spencerian sense, in which each emergence sets the stage and makes it easier for the emergence of the next level.
My plan for the next few articles in this series is as follows. I shall discuss the evolution of complexity in various categories of complex systems, in the following order: cosmic, chemical, biological, artificial, terrestrial, and cultural evolution of complexity. The article on chemical evolution will explain how life arose out of no-life, marking a major milestone in the evolution of complexity on Earth. These articles will be interspersed with some introductory material from chaos theory, network theory, cellular-automata theory, and game theory. Finally I shall touch on ‘complexity and consciousness.’ Feedback from readers about the content and style of my articles will help me make mid-course corrections, if needed. Please do let me know what you think. Should I be doing it differently?