Stephen Hawking’s Grand Design for Us
There has been a raging debate about how the universe began. This article is a summary of where Stephen Hawking’s quest to answer this fundamental question has culminated. His latest book (co-authored with Leonard Mlodinow) provides a fairly self-consistent picture of what reality is, why there is something rather than nothing, why things are the way they are, and a host of other fundamental issues.
Stephen Hawking (SH) is one of the greatest scientists ever. He is a cosmologist, particularly well-known for his work on black holes. The book THE GRAND DESIGN, published in 2010 by Hawking and Mlodinow (H&M), has a touch of finality, as if an unusually sharp scientific brain has finally succeeded in finding rational answers to the basic questions about ourselves and our universe. Here is a sampling of the answers this book provides.
What Is Reality?
There are several umbrella words like ‘consciousness’, ‘reality’, etc., which have never been defined rigorously and unambiguously. H&M argue that we can only have ‘model-dependent reality‘, and that any other notion of reality is meaningless.
Does an object exist when we are not viewing it? Suppose there are two opposite models or theories for answering this question (and indeed there are!). Which model of ‘reality’ is better? Naturally the one which is simpler and more successful in terms of its predicted consequences. If a model makes my head spin and entangles me in a web of crazy complications and contradictory conclusions, I would rather stay away from it. This is where materialism wins hands down. The materialistic model is that the object exists even when nobody is observing it. This model is far more successful in explaining ‘reality’ than the opposite model. And we can do no better than build models of whatever there is to understand and explain.
In fact, we adopt this approach in science all the time. There is no point in going into the question of what is absolute and unique ‘reality’. There can only be a model-dependent reality. We can only build models and theories, and we accept those which are most successful in explaining what we humans observe collectively. I said ‘most successful’. Quantum mechanics is an example of what that means. In spite of being so crazily counter-intuitive, it is the most successful and the most repeatedly tested theory ever propounded. I challenge the creationists and their ilk to come up with an alternative and more successful model of ‘reality’ than that provided by quantum mechanics. (I mention quantum mechanics here because the origin of the universe, like every other natural phenomenon, was/is governed by the laws of quantum mechanics. The origin of the universe was a quantum event.)
A model is a good model if: it is elegant; it contains few arbitrary or adjustable parameters; it agrees with and explains all the existing observations; and it makes detailed and falsifiable predictions.
Feynman’s Sum-Over-Histories Model Of Quantum Reality
One of the main ingredients of Hawking’s theory of our universe is Richard Feynman’s sum-over-histories version of quantum mechanics.
To get a feel for this, let us hark back to the experiments that established the wave–particle duality of elementary particles. In 1927 Davisson and Germer scattered a beam of electrons off a nickel crystal and found that the electrons behaved, not as particles, but as waves, producing a diffraction pattern like the one we would expect from a beam of light. The same wave behaviour shows up in the famous double-slit experiment: a beam of electrons is shot through two parallel slits, and the positions of the arriving electrons are recorded on a flat screen on the other side, where an interference pattern gradually builds up.
Richard Feynman was intrigued by the further variations carried out on this experiment. Suppose you close one slit and carry out the experiment. You record a single-slit diffraction pattern of the kind familiar from optics: a central maximum and a number of weaker secondary maxima. Now you close this slit and open the other one, and repeat the experiment. You again get a similar diffraction pattern, a little displaced from the first one. But if you superimpose these two patterns, what you get is not the same as what is recorded when the two slits are open at the same time. Even if you reduce the intensity of the electron beam so much that the electrons come one at a time, you still get the same result. This is intriguing. How does a single electron ‘know’ which slit is open, or whether one or two slits are open?
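To see concretely why the two situations differ, here is a minimal numerical sketch (my own illustration, not from the book; all parameter values are arbitrary, and each slit is idealized as a point source, so the single-slit envelope is ignored). In quantum mechanics the complex amplitudes from the two slits are added first and then squared; adding the two single-slit intensities instead gives no interference:

```python
import numpy as np

# Illustrative parameters only (arbitrary units).
wavelength = 1.0          # de Broglie wavelength
d = 50.0                  # slit separation
L = 1000.0                # distance from slits to screen
k = 2 * np.pi / wavelength

x = np.linspace(-200, 200, 4001)          # positions on the screen
r1 = np.sqrt(L**2 + (x - d / 2)**2)       # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)       # path length from slit 2

# Quantum rule: add complex amplitudes first, then take the squared modulus.
both_open = np.abs(np.exp(1j * k * r1) + np.exp(1j * k * r2))**2

# "One slit at a time": add the two single-slit intensities.
one_at_a_time = np.abs(np.exp(1j * k * r1))**2 + np.abs(np.exp(1j * k * r2))**2

print(both_open.max(), both_open.min())        # ~4 and ~0: interference fringes
print(one_at_a_time.max(), one_at_a_time.min())  # always 2: no fringes
```

In this idealized model each slit alone gives a featureless distribution, while both slits open together give fringes oscillating between zero and four times that value: it is the amplitudes, not the intensities, that add.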
During the 1940s Feynman formulated a new version of quantum mechanics, in which an elementary particle going from point A to point B has available to it all possible trajectories, and not just a single classical trajectory. So the electron in the above experiment actually samples all paths (simultaneously), including the one in which it finds that one of the slits is closed. This approach is in line with the Heisenberg uncertainty principle, according to which an electron cannot be pinned down to any particular trajectory; in principle it can be anywhere in the universe. Of course, the probability is far higher that it is in the vicinity of the slits in the above experiment, but the probability cloud characterising its position extends over all space. There is one probability distribution at one instant of time, a slightly different one at the next instant, and so on. Therefore all possible histories or trajectories are equally real, each with its own probability amplitude. Moreover, since the probability cloud extends over all space, all the alternative histories get enacted simultaneously.
Feynman developed the necessary mathematics, including his famous ‘path integrals‘, for summing over all possible trajectories, or all alternative histories (each contributing an amplitude with a phase), to calculate the net effect. As can be seen from the corresponding ‘Argand diagrams’, trajectories which deviate strongly from the classical trajectory have rapidly varying phases and largely cancel one another, so they contribute little to the overall sum (or integral); but they do contribute. Feynman’s formulation is able to reproduce all the laws of quantum physics.
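The following toy calculation (my own sketch, in arbitrary units, using a deliberately crude family of one-kink paths) illustrates the mechanism: each path contributes exp(iS/ħ), and paths far from the classical one have rapidly varying phases that wash out in the sum:

```python
import numpy as np

# Toy sum over histories for a free particle going from A to B in time T.
# Each "history" is two straight segments A -> midpoint -> B; the midpoint is random.
# All numbers are illustrative, not physical.
hbar = 1.0
m = 1.0
T = 1.0
x_A, x_B = 0.0, 1.0

rng = np.random.default_rng(0)

def action(x_mid):
    # Classical action of two straight segments, each of duration T/2.
    v1 = (x_mid - x_A) / (T / 2)
    v2 = (x_B - x_mid) / (T / 2)
    return 0.5 * m * (v1**2 + v2**2) * (T / 2)

midpoints = rng.uniform(-3, 4, size=20000)           # kinked paths, near and far
amplitudes = np.exp(1j * action(midpoints) / hbar)   # exp(iS/hbar) for each path

# Full sum: wildly kinked paths have rapidly varying phases and largely cancel.
print(abs(amplitudes.mean()))
# Paths near the classical midpoint (0.5) are nearly in phase and add coherently.
near = np.abs(midpoints - 0.5) < 0.2
print(abs(amplitudes[near].mean()))
```

Restricting the sum to paths near the classical one gives a much larger, nearly in-phase resultant, which is the essence of why classical trajectories dominate while all paths still contribute.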
It may appear as if nothing much has been gained, because we can obtain the same results by treating the electrons as waves and carrying out a Huygens construction, as we do for explaining an optical diffraction pattern (or by solving the Schrödinger equation). But there was a conceptual breakthrough here, because we must carry out this sum over alternative histories for any quantum system, even for the evolving universe. This is what SH’s model of the universe does.
In Newtonian theory, the past is visualized as a definite series of events. Not so in quantum theory. No matter how thoroughly we observe the present, the unobserved past, as also the future, is indeterminate, and exists only as a ‘spectrum of possibilities’. This means that the universe does not have a single past or history.
The Standard Model Of Particle Physics
There are four known forces or interactions of Nature: (i) the electromagnetic interaction; (ii) the gravitational interaction; (iii) the weak nuclear interaction; and (iv) the strong nuclear interaction. Formulating a quantum-mechanical version of these has been a highly nontrivial task.
The first one to be quantized was the electromagnetic interaction, and for this we are once again indebted to the genius of Feynman (who shared the credit, and the 1965 Nobel Prize, with Schwinger and Tomonaga). The resulting field of research is called quantum electrodynamics (QED).
In physics there are particles and there are fields. Two particles interact because each creates a field around itself, which is felt by the other particle. In the quantum version of the fundamental interactions, even the fields are quantized and associated with corresponding elementary particles. The photon is an example of the quantization of the electromagnetic field. Two charged particles, say electrons, interact by the exchange of photons. One particle emits a photon which is absorbed by the other particle, resulting in an interaction between them.
In quantum field theory, the matter particles are called fermions, and the field quanta are called bosons. The electron is a fermion, and the photon is a boson.
There is not just one way in which a photon may be emitted by one electron and absorbed by another. All possible modes or histories of emission and absorption must be considered and summed up vectorially. When this was done by Feynman, the problem of ‘infinities‘ was encountered. There are infinitely many histories to sum, so the QED theory ended up calculating an infinite mass and an infinite charge for the electron, which was an absurdity.
The pioneering genius Feynman not only established QED, he also introduced the vitally important and much-used ‘Feynman diagrams‘ (the now-familiar pictures of straight and wiggly lines representing particles and their interactions), and the ‘renormalization‘ procedures to get over the problem of infinities.
The renormalization involved subtracting infinite but negative terms from the sum over histories in such a way that what was left was finite. This was not a very elegant procedure, because by itself the theory could predict just about any value for the electron charge and mass. But the saving grace was that if you inserted the experimentally known values for the charge and the mass (as adjustable parameters) into the theory, then all further predictions of the theory were borne out by experiment to a very high degree of precision.
Encouraged by the success of QED, physicists attempted to formulate quantum field theories for the other three fundamental interactions. Alongside, work was also going on for ‘unifying’ all the interactions into a single interaction. It had been realized that as we go up the energy scale, the interactions would merge one by one, so that at high-enough energies only one interaction would prevail. The highest-energy scenario, of course, was what transpired at and soon after the moment of the Big Bang, when there must have been only one fundamental interaction in operation. As the cosmos expanded and cooled, ‘symmetry breaking’ transitions occurred, resulting in the emergence of the four interactions we know at present.
That unification is the correct approach became clear when attempts were made to formulate a quantum field theory of the weak nuclear force. It was found that the nuisance of infinities could not be gotten over by a renormalization procedure, except when the electromagnetic interaction and the weak nuclear interaction were treated as one single interaction, since called the electroweak interaction. Glashow, Salam and Weinberg were awarded the Nobel Prize for this work in 1979.
Renormalization of the quantum field theory for the strong nuclear force can be carried out on its own, and the theory is known as quantum chromodynamics (QCD). According to this theory, protons, neutrons and some other fundamental particles are composed of still more fundamental particles called quarks. Murray Gell-Mann was awarded the Nobel Prize (in 1969) for his work on quarks. Three quarks, one of each of the three ‘colours’ assigned to them, form baryons, of which protons and neutrons are examples. Practically all the normal-matter mass in our universe comes from these baryons.
Various attempts have been made to unite the electroweak interaction with the strong nuclear interaction. These ‘grand unification theories’ (GUTs) have not been particularly successful.
So an interim status report is the so-called standard model, in which we have the unified electroweak interaction and a separate strong nuclear interaction, with the gravitational interaction standing apart as the most problematic of all. Most problematic because there is still no quantum version of the gravitational interaction, i.e. no theory of quantum gravity.
‘There are grounds for cautious optimism that we may now be near the end of the search for the ultimate laws of nature.‘ (SH)
The uncertainty principle is one reason why it is so hard to formulate a quantum theory of gravity (Einstein’s general theory of relativity is a wholly classical theory). The uncertainty principle applies to pairs of ‘conjugate parameters’. For example, the position of a particle along the x-axis and its momentum component along the same direction are one such pair of conjugate parameters. Energy and time are another example. A third such pair is the value of a field and its rate of change. The more accurately one is determined, the more uncertain the value of the other is. This means that there is no such thing as empty space. An empty space would mean that both the value of a field and its rate of change are exactly zero; this is not allowed by the uncertainty principle.
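In standard textbook notation (my addition, not quoted from the book), the first two conjugate pairs obey

\[
\Delta x \,\Delta p_x \;\ge\; \frac{\hbar}{2}, \qquad \Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2},
\]

and an analogous bound links the uncertainty in a field’s value to the uncertainty in its rate of change, which is why the two cannot both be pinned to exactly zero at the same time.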
Thus when we speak of vacuum in quantum physics, we really mean a space which has a certain minimum-energy state. This state is subject to ‘quantum fluctuations’, which means that pairs of (virtual) particles can make momentary appearances (within the limits prescribed by the uncertainty principle), and then disappear by merging into each other. There are infinitely many such virtual pairs possible, each having energy. But if the vacuum state has infinite energy, it would curve the universe to an infinitely small size, according to the general theory of relativity. This is not what actually happens, so we are plagued by another infinity problem.
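As a rough back-of-the-envelope illustration (my own, not a calculation from the book), the energy–time uncertainty relation already tells you how fleeting such virtual pairs are. Taking a virtual electron–positron pair as an example:

```python
# How long can a virtual electron-positron pair "borrow" its rest energy
# before the uncertainty principle calls in the loan?  Delta_t ~ hbar / Delta_E.
hbar = 1.054e-34        # J*s
m_e = 9.109e-31         # kg, electron mass
c = 3.0e8               # m/s

borrowed_energy = 2 * m_e * c**2        # rest energy of the pair, ~1.6e-13 J
lifetime = hbar / borrowed_energy       # ~6e-22 seconds
print(lifetime)
```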
This time the problem is much more vicious than the one described above because, unlike in QED, where we could use the mass and the charge of the electron as adjustable parameters, the general theory of relativity does not offer enough adjustable parameters to absorb all the infinities; it is not renormalizable. This means that perhaps the only way out is for all the infinities to somehow cancel, without our having to resort to renormalization.
In 1976 the idea of supersymmetry was pressed into service in this context. According to supersymmetry, force particles (bosons) and matter particles (fermions) are symmetry-related, or rather supersymmetry-related: they are two facets of the same thing. This means that for every matter particle (e.g. a quark) there must be a partner (‘super’) force particle, and for every force particle (e.g. a photon) there must be a partner (‘super’) matter particle.
This scenario has the potential to solve the infinities problem. It turns out that the infinities from matter-related virtual particles are all negative, while they are positive for force-related virtual particles, so they can cancel each other out. The necessary calculations are very difficult to carry out, but many believe that the notion of supergravity which emerges when we invoke supersymmetry has the potential to unify gravity with the other three interactions.
The idea of supersymmetry had actually originated earlier, when ‘string theory‘ was being formulated. In string theory the elementary particles are envisaged, not as points, but rather as patterns of vibration that have length but no width or height (‘strings’). There are several string theories, and they are consistent only if spacetime has 10 dimensions, rather than 4. We see only four because the other six are curled up into a space of very small size. An analogy will help us understand this. Consider a straw you use for drinking lemonade. Its surface is 2-dimensional: we need two numbers or coordinates for specifying the location of any point on it. But if the straw is extremely thin (say a million-million-million-million-millionth of an inch), it is practically 1-dimensional; the other dimension has just curled up into near-nothingness in terms of visibility. The extra dimensions in string theory are said to have curled up into ‘internal space‘.
An awkward problem faced in the early days was that there appeared to be at least five different string theories, and millions of ways in which the extra dimensions could be curled up. Then, in the early 1990s, ‘dualities‘ were discovered: it was realized that the different string theories, as also the myriad ways of curling up the extra dimensions, are simply different ways of describing the same phenomena in four dimensions. It was also found that supergravity is related to the other theories in this manner.
Many experts are now convinced that the five string theories, as also supergravity, are merely different approximations to a more fundamental theory (called the ‘M-theory’), each valid in different situations.
M-theory involves 11 dimensions instead of 10. It is this extra dimension which unifies the five string theories. Moreover, M-theory allows for not just 1-dimensional strings, but also point particles, 2-dimensional membranes, etc., all the way up to 9-dimensional entities (p-branes, with p running from 0 to 9). M-theory is the unique supersymmetric theory in 11 dimensions.
A crucial feature of M-theory is that its mathematics restricts the ways in which the dimensions of the internal space can be curled. Thus the theory comes up with unique (rather than arbitrary) values for the fundamental constants and the ‘apparent’ laws of physics corresponding to any particular mode of curling (see below).
How Did Our Universe Arise?
‘My goal is simple. It is a complete understanding of the universe, why it is as it is and why it exists at all.‘ (SH)
The laws of M-theory allow for different universes, each with its own set of apparent laws, depending upon how the internal spaces are curled up. We say ‘apparent laws’, because the more fundamental laws are those of the M-theory. There are ~10^500 different modes of curling up, meaning that that many different universes are possible. Only one of them is the universe we inhabit.
But how did our universe arise anyway, along with others? Our universe is known to be expanding. This means that if we extrapolate backwards in time, there must have been a moment when our universe was extremely small, almost a point. It has been estimated that that happened ~13.7 billion years ago. There was a Big Bang, and our universe has been expanding ever since then.
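A quick sanity check (my own, assuming a present-day Hubble constant of about 70 km/s/Mpc) shows that simply running today’s expansion rate backwards lands in the right ballpark; the quoted 13.7-billion-year figure comes from much more detailed cosmological fits:

```python
# Order-of-magnitude estimate of the age of the universe as the Hubble time 1/H0.
H0_km_s_Mpc = 70.0                      # assumed value of the Hubble constant
Mpc_in_km = 3.086e19
H0 = H0_km_s_Mpc / Mpc_in_km            # Hubble constant in 1/s
hubble_time_s = 1.0 / H0
seconds_per_year = 3.156e7
print(hubble_time_s / seconds_per_year / 1e9)   # ~14 billion years
```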
The Big Bang point is taken as the zero of spacetime. It is a ‘singularity’ because certain quantities become infinite in Einstein’s equations of general relativity. Therefore Einstein’s theory is applicable only a little after the Big Bang, and not at the singularity.
Evidence in support of the Big Bang model comes from many sources. One is the observation of the cosmic microwave background radiation (CMBR). The observed distribution of this radiation is highly uniform, but not perfectly so. In fact, the minute structure it does have is what seeded the formation of galaxies and other large-scale structure.
How could our universe get created spontaneously out of nothing? Is there a violation of the principle of conservation of energy? No. We can explain the emergence of positive energy (radiation or/and matter) out of nothing if there is a simultaneous emergence of a balancing amount of negative energy. This negative energy arose because the Big Bang was accompanied by the emergence of the gravitational interaction, which is an attractive interaction. Any attractive interaction engenders a negative contribution to the total energy because it takes positive energy to break free from the binding force of the attractive interaction. By contrast, a repulsive interaction (like the one between two positive charges or two negative charges) makes a positive contribution to the overall energy.
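A crude Newtonian sketch (my own illustration; H&M argue the point in general-relativistic terms) shows how a negative gravitational contribution can be of the same order as the positive mass-energy. The mass value below is only a ballpark figure for the observable universe:

```python
# Newtonian gravitational self-energy of a uniform sphere is negative,
# U = -(3/5) G M^2 / R, while its rest-mass energy is positive, E = M c^2.
# For a compact enough sphere the two magnitudes are comparable.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 3.0e8           # m/s
M = 1.0e53          # kg, rough ballpark for the observable universe (assumption)

E_positive = M * c**2                       # ~9e69 J
R_balance = (3.0 / 5.0) * G * M / c**2      # radius at which |U| equals E_positive
print(E_positive)
print(R_balance)                            # ~4e25 m, i.e. a few billion light-years
```

The point of the sketch is only the signs and orders of magnitude: gravitational binding energy is negative and can, in principle, balance the positive energy of matter and radiation.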
There is no reason why only one universe, namely ours, should emerge out of nothing. The M-theory tells us that a very large number of universes can emerge, and go their separate ways.
Cosmological observations make it necessary for us to postulate a brief period of very rapid ‘inflation‘ soon after time-zero, during which distant regions of space receded from one another faster than light (this does not violate relativity, because it is space itself that was expanding). It is this inflation which explains the ‘bang’ in the Big Bang.
But there is a problem. To explain inflation and its aftermath, some very special conditions must have existed at time-zero. The model proposed by H&M for the creation of the universe eliminates this problem.
Time-zero was the moment when spacetime came into existence. We know from Einstein’s general theory of relativity that the gravitational interaction can be viewed as a warping of spacetime, so when gravitation came into existence, so did spacetime. And one reason the time dimension gets mixed with the space dimensions is that matter and energy warp time. This mixing is a key element for understanding the beginning of time.
Some earlier research work of SH had established that when we add the effects of quantum theory to the general theory of relativity, in extreme cases warpage can occur to such an extent that time behaves like another dimension of space. Thus in the early universe, when it was small enough to be governed by both general relativity and quantum mechanics, there were effectively four dimensions of space and none for time. Time as we know it did not exist when we extrapolate backwards in time towards the very early universe. So how did time begin? I quote from H&M:
‘Suppose the beginning of the universe was like the South Pole of the Earth, with degrees of latitude playing the role of time. As one moves north, the circles of constant latitude, representing the size of the universe, would expand. The universe would start at a point at the South Pole, but the South Pole is much like any other point. To ask what happened before the beginning of the universe would become a meaningless question, because there is nothing south of the South Pole. In this picture spacetime has no boundary – the same laws of nature hold at the South Pole as in other places. In an analogous manner, when one combines the general theory of relativity with quantum theory, the question of what happened before the beginning of the universe is rendered meaningless.’
The term ‘no-boundary condition‘ is used for the idea that the histories of the universe are closed surfaces without a boundary (in an appropriate hyperspace).
Since the origin of the universe was a quantum event, Feynman’s sum-over-histories formulation for going from spacetime point A to spacetime point B occupies centre stage. But we have knowledge only about the present state of the universe (point B), and we know nothing about the initial state A. Therefore we can only adopt a ‘top down‘ approach to cosmology, wherein every alternative history of the universe exists simultaneously, and the histories relevant to us are only those which satisfy the no-boundary condition and which, when summed up via Feynman’s path integrals, give us our present universe (point B).
The picture that emerges is that the universe, or rather a whole lot of them, appeared spontaneously (and the M-theory allows for ~10^500 of them). Most of these multiple universes were not relevant to us because their apparent laws were not conducive to our emergence and survival.
What enters the sum over histories relevant to us is not just one universe. Although one particular universe with a completely uniform and regular history has the highest probability amplitude and therefore contributes the most to the Feynman sum, several others, with slightly irregular or deviant histories but still significant probability amplitudes, also contribute to the sum over histories. This should account for the slight nonuniformities during the inflation era, as evidenced by the CMBR map. These irregularities were important for the emergence of galaxies. ‘We are the product of quantum fluctuations in the very early universe.’
Is this theory testable? Yes. The no-boundary condition implies that the probability amplitude is the highest for histories in which the universe starts out completely smooth. And it is somewhat smaller for universes which are slightly irregular by comparison. Starting from the M-theory one can work out the details of how the CMBR pattern should look, and then compare with detailed and accurate experimental observations.
The M-theory offers ~10^500 possibilities of start-up universes. We have to single out those in which the extra dimensions are curled up in exactly the way we find to be the case for the universe we inhabit. The narrowed-down choice should also predict conditions which make it possible for inflation to start and proceed exactly the way it actually did for our universe. Of course, we select those histories which reproduce the observed mass and charge of the electron, and other such observed fundamental parameters.
The Weak And The Strong Anthropic Principle
‘We are just an advanced breed of monkeys on a minor planet of a very average star. But we can understand the Universe. That makes us something very special.’ (SH)
The weak version of the anthropic principle says that our very existence imposes rules that determine from where, and at what time in the cosmic chain of events, it is possible for us to observe the universe. In other words, we can draw conclusions about the apparent laws of physics based on the fact that we exist. H&M suggest that it should really be called a ‘selection principle‘, because it is about how our own existence imposes rules that select, from among all possible environments, only those environments that have characteristics allowing life.
Scientists had no trouble accepting the weak version. After all, the terrestrial and other environmental conditions have to be consistent with our existence; otherwise we would not be here. And the term ‘environmental conditions’ can include even parameters like the present age of our universe. We could not possibly exist at a time when the universe was too young and therefore too hot. Similarly, we will not be around when the universe becomes too cold in the distant future.
But there was a ‘strong’ version of the anthropic principle which was frowned upon by scientists. It said that even the numerical values of the fundamental constants, as also the laws of Nature, were fine-tuned to be such that human existence became possible. This is as if human beings and other forms of life on Earth are so important that some designer designed the laws of physics and the fundamental constants to be such that our life became possible. This is clearly nonsense.
A fallout of Hawking’s model for our universe is that even the strong anthropic principle acquires validity, provided it is stated properly, in the scientific context supplied by SH’s worldview. The new statement of the strong version can go something like this: Out of the various possible universes, our universe just happens to have the fundamental constants and physical laws it has; other universes (which we cannot observe) have different laws of physics and different values for the fundamental constants. Our existence in our universe has been possible because our apparent laws of physics and our set of fundamental constants are compatible with it; other universes may or may not be conducive to life of any kind. Thus the term ‘environment’ now includes not only our specific solar system etc., but also the apparent laws of physics of our universe.
There is a bunch of people who have been misinterpreting the anthropic principle to suit their irrational worldview. They argue that it is wildly improbable for such a large number of parameters to get fine-tuned by themselves, and that therefore there must be a Creator who did all this. This extremely-low-probability argument reminds me of how Richard Feynman used to handle such reasoning. I quote Bill Bryson (2003):
“The physicist Richard Feynman used to make a joke about a posteriori conclusions – reasoning from known facts back to possible causes. ‘You know, the most amazing thing happened to me tonight,’ he would say. ‘I saw a car with the licence plate ARW 357. Can you imagine? Of all the millions of licence plates in the state, what was the chance that I would see that particular one tonight? Amazing!’ His point, of course, is that it is easy to make any banal situation seem extraordinary if you treat it as fateful.”
The important thing is to make a distinction between something being possible and its being impossible. If it is possible, then no matter how improbable it may seem after the fact, Feynman’s licence-plate example shows why its occurrence needs no special explanation; that is the point to keep in mind when assessing the anthropic principle. In any case, H&M have taken care of even the low-probability argument by talking in terms of apparent laws relevant to our universe.
Poor Deepak Chopra
I conclude this article on a lighter note. The mystic Deepak Chopra has been brandishing his own queer interpretation of quantum theory, sprinkled with quotations from the work of SH and perhaps some other scientists as well. I wonder how many PowerPoint slides on his laptop he has had to delete, and how many alterations he has had to make to his website, after reading this passage from H&M:
‘According to M-theory, ours is not the only universe. Instead, M-theory predicts that a great many universes were created out of nothing. Their creation does not require the intervention of some supernatural being or god. Rather, these multiple universes arise naturally from physical law. They are a prediction of science. Each universe has many possible histories and many possible states at later times, that is, at times like the present, long after their creation. Most of these states will be quite unlike the universe we observe and quite unsuitable for the existence of any form of life. Only a very few allow creatures like us to exist. Thus our presence selects out from this vast array only those universes that are compatible with our existence. Although we are puny and insignificant on the scale of the cosmos, this makes us in a sense the lords of creation.’
Stephen Hawking’s worldview is at once grand and rational. His assertion that model-dependent reality is the only kind there is makes tremendous sense. One can take pride in the fact that the human mind, guided by the scientific method, can come up with such brilliant model-building and reasoning and give credible answers to the most fundamental questions ever.
I do hope the M-theory gets further validated.
Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai and an Associate Editor of the journal PHASE TRANSITIONS.