COMPLEXITY EXPLAINED: 16. Evolution of Intelligence and Consciousness

(Note: All previous parts in the Complexity Explained series by Dr. Vinod Wadhawan can be accessed through the ‘Related Posts’ listed below the article.)

The human brain is a physical organ, governed by the laws of physics. The mind is ‘brain power,’ or the capacity of the brain to feel, think, and reason. The brain carries the mind, as well as what we often call consciousness (although we cannot tell where exactly in the brain the so-called consciousness is located). Our intelligence may be no different from ‘swarm intelligence,’ the swarm here being that of neurons. There is a belief that the transition from intelligence to consciousness requires the acquisition of a human language. The ‘society of mind’ (comprising ‘communities’ of large numbers of interacting neurons) emerged as a hierarchical structure, so typical of any complex adaptive system. Consciousness is an emergent phenomenon.

16.1 Evolution of the Mammalian Brain

Any living entity exploits the existing structure and order of its surroundings to ensure its survival and reproduction. Consider a single-celled organism in a pond. On its surface are molecules which can ‘detect’ (are influenced by) the presence of nutrients. There is usually a gradient of the nutrient concentration, so that it is higher on one side of the organism than on the other. The single-celled organism has chemical sensors which can detect this gradient. Biological evolution has programmed it to propel itself in the direction of increasing concentration of nutrient. An attribute of intelligence is the problem-solving capacity of the system; other important attributes are prediction and memory capabilities. As Hawkins (2004) points out, both prediction and memory are involved here. The prediction is that, by moving in the direction of increasing concentration of nutrient, more nutrient will be found. This is not something the organism has ‘learnt’ and ‘remembered’ in its lifetime. The memory, evolved over many generations of evolution, is in its DNA.
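The gradient-following behaviour described above can be sketched as a toy simulation. Everything here (the nutrient field, the step size, the peak at x = 10) is invented for illustration; the point is only that a single hard-wired rule, with no learning during the organism's lifetime, suffices to climb a gradient:

```python
def nutrient(x):
    """Illustrative nutrient field: concentration peaks at x = 10
    and falls off on either side (purely made-up numbers)."""
    return -abs(x - 10.0)

def chemotaxis(x, steps=60, dx=0.5):
    """One hard-wired rule, no learning: sample the concentration on
    either side and move toward the higher reading.  The 'memory' is
    in the rule itself (put there by evolution), and the implicit
    'prediction' is that the improving direction keeps improving."""
    for _ in range(steps):
        left, right = nutrient(x - dx), nutrient(x + dx)
        x += dx if right > left else -dx
    return x
```

Starting from any position, the simulated organism drifts toward the nutrient peak and then hovers near it, without ever representing the gradient explicitly.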

To cut a long evolutionary story short, let us jump from bacteria to plants. Plants also exploit the existing order and structure (constancy or sameness over reasonably long time scales) by employing memory and prediction. The memory in the genes of a tree tells it that it will find greater sunshine by sending its branches and leaves towards the sky. And that it will find water and minerals by sending its roots down into the soil. These actions are automatic, and there is no ‘thinking’ involved, just as there is no thinking involved in the actions of a bacterium.

At a certain stage in the evolutionary history of plants, more complex behaviour emerged in the form of communication systems among the various parts of a plant, based mainly on chemical signals. Suppose an insect damages some part of a tree, and this leads to the slow transmission of a chemical through the vascular system to its other parts. This triggers a defence mechanism; e.g. the production of a toxin against the insect.

It is conceivable that neurons evolved in due course, as a faster way of communicating information to different parts of an organism. The electrochemical spikes in a neuron travel much faster than the diffusion of chemicals. In due course, the ‘synaptic’ connections between neurons became modifiable. A neuron may or may not send a signal, depending on what happened in the past. This rudimentary nervous system had elements of both memory and learning.

The evolutionary advantage of this to the animal was qualitatively different. Instead of depending on just ‘genetic memory’ and instinct coded in DNA, the animal could now learn from experience during its own lifetime, and modify its behaviour for achieving better survival and propagation rates. In particular, if the environmental structure and order changed rather suddenly, the animal could still make a generally adequate response, instead of having to depend only on the somewhat outdated (and therefore inadequate) genetic memory and instinct. Such plastic nervous systems entailed a huge evolutionary advantage, and there was a burst of new species from fish to snails to mammals, including humans.

Why is it that intelligence evolved mainly in the animal kingdom, and not in the plant kingdom? As explained by the noted roboticist Hans Moravec, the difference has arisen because animals are mobile and plants are generally not. The mobility of animals presents them with an ever-changing environment, and therefore intelligence is an important prerequisite for survival and propagation: An animal can survive only if it has a large repertoire of solutions to the continuous stream of problems it faces in a changing environment.

The human brain, like the brain of any other mammal, has something distinctly additional compared to the brain of the reptiles from which it evolved, namely the neocortex. Thus the human brain has two main parts: the ‘old brain’ (also called the reptilian brain, the R-brain, or the ‘primitive’ brain) and the neocortex.

Practically everything we associate with conscious memory and intelligence occurs in the neocortex, although the thalamus and the hippocampus also play important roles. In the evolutionary history of life on Earth, sophisticated sensory and actuation organs had evolved in reptiles, and their behaviour was controlled by the old brain, with no cortex. The evolution of the cortex in one of the offshoots of the reptiles, along with the availability of a stream of sensory inputs into it which it could remember and analyse much better than reptiles could, gave the mammals an evolutionary advantage: When they found themselves in situations they remembered to have faced earlier, their much-improved memory and analysis power told them what to expect next, and how to respond effectively.

16.2 The Human Brain

The human brain, along with the spinal cord, comprises the central nervous system. The top outer portion of the brain, just under the scalp, is the neocortex (or cortex for short). It covers most of the R-brain, and has a crumpled appearance, with many ridges and valleys. The R-brain is rather similar in reptiles and mammals, and has a number of parts, including the thalamus and the hippocampus.

Humans are special compared to other mammals because of their very prominent prefrontal cortex (the front part of the frontal lobe). The prefrontal cortex (particularly the upper two-thirds of it, including the dorsolateral prefrontal cortex) can be regarded as the rational centre of the brain, or the rational brain. The rest of the human brain is the emotional brain.

The human cortex, if stretched flat, is the size of a large napkin, and about 2 mm thick. It has six layers, each roughly the thickness of a playing card. There is a branching hierarchy among the layers. Layer 6 is at the bottom of the hierarchy, and Layer 1 is at the top. The inputs from the various sensory organs are received in Layer 6, and then interpreted and correlated. Then more and more abstract and generalized versions of the information are sent up the hierarchical layers. There is a very high degree of feedback and feedforward among the layers, as well as cross-correlations.


There are ~10^11 nerve cells or neurons in the human cortex. Most of them have a pyramid-shaped cell body, as well as an axon, and a number of branching structures called dendrites. We can think of the axon as a signal emitter, and the dendrites as signal receivers. When a strand of an axon of one neuron (the presynaptic neuron) ‘touches’ a dendrite of another neuron (the postsynaptic neuron), a connection called a synapse is established. A typical axon is involved in several thousand synapses.

Portions of the cortex can be identified as different functional areas or regions. For example, a portion of the frontal lobe (see illustration) is the motor cortex. It controls movement and other actuator functions of the body.

The cortical tissue can be functionally divided into vertical units or columns. Neurons within a column respond in a similar manner to external signals with a particular attribute.

When a sensory or other pulse (‘spike’) involving a particular synapse arrives at the axon, it causes the synaptic vesicles in the presynaptic neuron to release chemicals called neurotransmitters into the gap or synaptic cleft between the axon of the first neuron and the dendrite of the second. These chemicals bind to the receptors on the dendrite, triggering a brief local depolarization of the membrane of the postsynaptic cell. This is described as a firing of the synapse by the presynaptic neuron.

If a synapse is made to fire repeatedly at high frequency, it becomes more sensitive; i.e. subsequent signals make it undergo greater voltage swings or spikes. Building up of memories amounts to formation and strengthening of synapses.

The firing of neurons follows two general rules: (1) Neurons which fire together wire together. Connections between neurons firing together in response to the same signal get strengthened. (2) Winner-takes-all inhibition. When several neighbouring neurons respond to the same input signal, the strongest or the ‘winner’ neuron will inhibit the neighbours from responding to the same signal in future. This makes these neighbouring neurons free to respond to other types of input signals.
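The two rules just described can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not a model of real neurons; the unit activities and learning rate are arbitrary:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Rule (1), 'fire together, wire together': the connection
    strength w[i, j] grows only when presynaptic unit i and
    postsynaptic unit j are active at the same time."""
    return w + lr * np.outer(pre, post)

def winner_takes_all(responses):
    """Rule (2): the strongest responder suppresses its neighbours'
    output, leaving them free to specialize on other inputs later."""
    out = np.zeros_like(responses)
    winner = int(np.argmax(responses))
    out[winner] = responses[winner]
    return out
```

For example, if presynaptic units 0 and 2 fire together with postsynaptic unit 1, only the connections between those co-active pairs are strengthened; all other weights stay untouched.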

The functionality of the cortex is arranged in a branching hierarchy. The primary sensory regions constitute the lowest rung of the hierarchy (Layer 6). The sensory region for, say, vision (called V1) is different from that for hearing etc. V1 feeds information to higher layers called V2, V4 and IT, and to some other regions. The higher they are in the hierarchy, the more abstract they become. V2, V4 etc. are concerned with more specialized or abstract aspects of vision. The higher echelons of the functional region responsible for vision have the visual memories of all sorts of objects. Similarly for other sensory perceptions.

In the higher echelons are areas called association areas. They receive inputs from several functional regions. For example, signals from both vision and audition reach one such association area.

Although the primary sensor mechanism for, say, vision is not the same as for hearing, what reaches the brain at higher levels of the hierarchy is qualitatively the same. The axons carry neural signals or spikes which are partly chemical and partly electrical, but their nature is independent of whether the primary input signal was visual or auditory or tactile. Finally, they are just patterns.

16.3 Creation of Short-Term and Long-Term Memories

Creation of short-term memory in the brain amounts to a stimulation of the relevant synapses, which is enough to temporarily strengthen or sensitize them to subsequent signals.

This strengthening of the synapses becomes permanent in the case of long-term memory. This involves the activation of genes in the nuclei of postsynaptic neurons, initiating the production of proteins in them. Thus learning requires the synthesis of proteins in the brain within minutes of the training. Otherwise the memory is lost.

Information meant to become the higher-level or generalized memory, called declarative memory, passes through the hippocampus, before reaching the cortex. The hippocampus is like the principal server on a computer network. It plays a crucial role in consolidating long-term memories and emotions by integrating information coming from sensory inputs with information already stored in the brain.

16.4 The Prefrontal Cortex and its ‘Working Memory’

What sorts of ‘rules’ could possibly capture all of what we think of as intelligent behaviour however? Certainly there must be rules on all sorts of different levels. There must be many ‘just plain’ rules. There must be ‘metarules’ to modify the ‘just plain’ rules; then ‘metametarules’ to modify the metarules, and so on. The flexibility of intelligence comes from the enormous number of different rules, and levels of rules. The reason that so many rules on so many different levels must exist is that in life, a creature is faced with millions of situations of completely different types. In some situations, there are stereotyped responses which require ‘just plain’ rules. Some situations are mixtures of stereotyped situations – thus they require rules for deciding which of the ‘just plain’ rules to apply. Some situations cannot be classified – thus there must exist rules for inventing new rules … and on and on. Without doubt, Strange Loops involving rules that change themselves, directly or indirectly, are at the core of intelligence. Sometimes the complexity of our minds seems so overwhelming that one feels that there can be no solution to the problem of understanding intelligence – that it is wrong to think that rules of any sort govern a creature’s behaviour, even if one takes ‘rule’ in the multilevel sense described above.

Douglas Hofstadter, Gödel, Escher, Bach

We cannot make decisions without involving emotions. This conclusion of modern psychology goes against the grain of what was believed to be the case about the nature of rational behaviour for most of the 20th century. The conventional picture has been that at the bottom of the hierarchical complexity of the human brain is the brain stem, which controls bodily functions like heartbeat, breathing, and body temperature. At the next higher level is the diencephalon, which regulates hunger pangs and sleep cycles etc. Then comes the limbic region, which generates and controls emotions (violence, lust, impulsive behaviour, etc.). These three levels of brain complexity are common to all mammals, including humans. Lastly there is the prefrontal cortex, predominantly responsible for our reasoning power and intelligence etc. Although it enables us to suppress emotions to a small or large extent, it is wrong to think that this ‘rationality’ portion of our brain can completely overpower or overrule what the three hierarchically lower parts of the brain tend to do. In other words, it is impossible for us to make decisions which are completely dispassionate or ‘reasoned.’

It is also true that a substantial portion of the prefrontal cortex is involved in our emotional behaviour. How do we ‘manage’ our emotions? We do so by thinking about them, and the thinking is done mainly by the prefrontal cortex. The term metacognition is used for the capacity of our prefrontal cortex to contemplate about our own mind. The frontal cortex knows when we are, say, angry. In fact, every emotional state comes with self-awareness attached to it. This enables us to figure out or ‘think’ why we are feeling the way we are feeling. Thus we humans are able to exercise a certain degree of control over our emotions by what is commonly called ‘rational thinking.’ This is also how we make decisions. The emotional brain is constantly sending out signals about its likes and dislikes. The prefrontal cortex monitors these emotional outputs and tries to decide which signals to take seriously and which ones to overrule. Although the rational brain cannot silence emotions, it can help figure out which ones should be followed. A highly readable account of the role of intuition and emotions in our decision-making process has been given in a recent (2009) book How We Decide by Jonah Lehrer.

Unlike other regions (columns) of the cortex, which specialize in processing specific types of stimuli, the cells of the prefrontal cortex can process whatever kind of data they need to process. This enables our brain to look at a given problem from a variety of vantage points, and even come out with creative solutions. How does the prefrontal cortex accomplish this? The answer has to do with its special kind of memory called the working memory. It is a short-term memory, but it has a persistence feature. It is a meeting ground, and also a melting pot, of information from various sources. Neurons in this part of the brain fire in response to a stimulus, and then keep on firing for several seconds after the stimulus has disappeared. This allows the brain to make creative associations. This is the so-called restructuring phase of problem-solving: Here information is mixed together in new ways and overlapping of ideas occurs, leading to new insights. The resultant novel neural wiring enables you to identify the answers you were looking for. This is an important feature of human intelligence.

The emotional brain is very important too

Excessively rational thinking can backfire, because it often amounts to suppressing what the primitive brain is trying to tell us. This problem arises because the rational brain is not an infinitely powerful supercomputer, meaning that rational analysis cannot always provide the best solution to a complicated problem. The cumulative wisdom buried in the (much larger) primitive brain must also be used.

The psychologist George Miller demonstrated in his essay ‘The Magical Number Seven, Plus or Minus Two’ that the conscious brain can only handle about seven pieces of data at any one moment. The computational circuitry of the rational part of our brain is only a tiny fraction of the total capacity of the brain, ‘just a few microchips within the vast mainframe of the mind.’ As a result, too many choices, or too much data, can overwhelm the prefrontal cortex, leading to bad decisions. The trick lies in learning when to trust your intuitions more than your reasoning power. ‘Because working memory and rationality share a common cortical source — the prefrontal cortex — a mind trying to remember lots of information is less able to exert control over its impulses. The substrate of reason is so limited that a few extra digits can become an extreme handicap’ (Lehrer 2009). The fact of life is that the rational part of our brain (which is really a very recent novelty on the evolutionary time scale) has a rather slow and small, even erratic, CPU. Too much information can interfere with understanding. When the prefrontal cortex is overwhelmed, correlation is confused with causation, and people tend to make theories out of coincidences.

Excessive dependence on the emotional brain can be risky too. The ideal situation is that exemplified by, say, a champion chess player. Through an unhurried analysis of the games he won or lost, he builds up experience (turning mistakes into educational events) which gets ‘internalised’ into his emotional brain. In due course, it becomes ‘second nature’ for him to make the right moves, not having to consciously analyse the consequences of too large a number of prospective moves. The emotional brain is a huge supercomputer, with massive parallel-processing capabilities.

16.5 Marvin Minsky’s ‘Society of Mind’

Our minds did not evolve to serve as instruments for observing themselves, but for solving such practical problems as nutrition, defence, and reproduction

Marvin Minsky (2006)

Marvin Minsky is a pioneer of the field of machine intelligence. Efforts at developing machine intelligence have resulted in deep insights into how the human brain functions.

In 1986 Minsky published his book The Society of Mind, in which he formulated his ideas about human cognition. His next book, The Emotion Machine, published in 2006, reflects the progress made in gaining insights into the workings of the human mind via the machine-intelligence approach.

Minsky’s ‘society’ of mind comprises ‘agents’ or ‘resources,’ which are the simplest individuals that populate the brain. Each agent or resource can be visualized as a typical component of a computer program, like a simple subroutine or data structure. The agents can get connected and composed into larger systems called agencies or societies of agents. The agencies self-organize into still larger conglomerates that can perform still more complex functions, and so on into higher and higher levels of self-organization and complexity, ultimately leading to the emergence of abilities we attribute to minds. There is a hierarchical structure and organization, as in any complex adaptive system.
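The agents-composing-into-agencies idea can be sketched directly in code. The class names and the trivial string-processing task below are invented purely for illustration; the point is that an agency is itself an agent, so the hierarchy can nest indefinitely:

```python
class Agent:
    """A Minsky-style 'agent': a named unit that does one simple thing."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def act(self, x):
        return self.fn(x)

class Agency(Agent):
    """An agency is itself an agent whose behaviour is nothing but the
    composition of its member agents, so agencies nest into the kind of
    hierarchy described in the text; the 'mind-like' behaviour at the
    top does not reside in any single agent."""
    def __init__(self, name, members):
        super().__init__(name, self._run)
        self.members = members

    def _run(self, x):
        for member in self.members:
            x = member.act(x)
        return x
```

For instance, a 'clean' agency built from two tiny agents can itself become a member of a larger 'greet' agency, which knows nothing about how its members work internally:

```python
clean = Agency('clean', [Agent('strip', str.strip), Agent('lower', str.lower)])
greet = Agency('greet', [clean, Agent('hello', lambda s: 'hello, ' + s)])
greet.act('  WORLD ')
```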

The idea of hierarchical levels of organization was well documented in an earlier publication of Minsky (1980): ‘One could say but little about “mental states” if one imagined the Mind to be a single, unitary thing. But if we envision a mind (or brain) as composed of many partially autonomous “agents” (a “Society” of smaller minds), then we can interpret “mental state” and “partial mental state” in terms of subsets of the states of the parts of the mind. To develop this idea, we will imagine first that this Mental Society works much like any human administrative organization. On the largest scale are gross “Divisions” that specialize in such areas as sensory processing, language, long-range planning, and so forth. Within each Division are multitudes of subspecialists (call them “agents”) that embody smaller elements of an individual’s knowledge, skills, and methods. No single one of these little agents knows very much by itself, but each recognizes certain configurations of a few associates and responds by altering its state.’

As is the case with any complex adaptive system, we cannot predict with certainty the properties of the mind-system in terms of the laws of physics applied to the constituent agents, nor can we start from the observed complexity of the brain and work our way downwards all the way to understand why the increasing complexity took a particular route in phase space. To quote Minsky (1990): ‘The functions performed by the brain are the products of the work of thousands of different, specialized sub-systems, the intricate product of hundreds of millions of years of biological evolution. We cannot hope to understand such an organization by emulating the techniques of those particle physicists who search for the simplest possible unifying conceptions. Constructing a mind is simply a different kind of problem: how to synthesize organizational systems that can support a large enough diversity of different schemes, yet enable them to work together to exploit one another’s abilities.’

Here is Minsky’s (1986) take on consciousness: ‘In this book, the word (consciousness) is used mainly for the myth that human minds are “self aware” in the sense of perceiving what happens inside themselves. I maintain that human consciousness can never represent what is occurring at the present moment, but only a little of the recent past – partly because each agency has a limited capacity to represent what happened recently and partly because it takes time for agencies to communicate with one another. Consciousness is peculiarly hard to describe because each attempt to examine temporary memories distorts the very records it is trying to inspect.’

Minsky describes ‘free will’ as a myth, the myth that human volition is based upon some third alternative to either causality or chance.

The ‘Single-Self’ concept

Some of us subscribe to the concept that there is a creature (or a set of creatures) inside us that does all the feeling and thinking for us, and makes all the important decisions for us. It is our ‘identity’ or ‘self.’ Even our legal system distinguishes between deliberate, wilful murder and murder that was not pre-planned. This Single-Self concept may be useful, but it has no scientific basis.

Why do humans entertain such fiction? It may be partly because it makes life look pleasant, ‘by hiding from us how much we’re controlled by all sorts of conflicting, unconscious goals.’ According to Minsky, ‘That image makes us efficient, whereas better ideas might slow us down. It would take too long for our hardworking minds to understand everything all the time. However, although the Single-Self concept has practical uses, it does not help us to understand ourselves, because it does not provide us with smaller parts we could use to build theories of what we are. When you think of yourself as a single thing, this gives you no clues about issues like these: What determines the subjects I think about? How do I choose what next to do? How can I solve this difficult problem? Instead, the Single-Self concept offers only useless answers like these: My Self selects what to think about. My Self decides what I should do next. I should try to make my Self get to work.’ He goes on to say: ‘Whenever you think about your “Self” you are switching among a huge network of models, each of which tries to represent some particular aspects of your mind, to answer some questions about yourself.’

16.6 Daniel Dennett’s Model of Consciousness

Dennett’s 1991 book Consciousness Explained has been hailed as a major milestone in understanding the nature of consciousness. Both he and Minsky give due respect to what people say about their feelings and emotions and other internal subjective experiences, but only as evidence of how things appear to them to be, rather than as direct evidence of ‘things as they actually are.’ Dennett calls this the heterophenomenological approach.

Dennett has formulated his so-called multiple drafts model of consciousness. A point emphasized by both Dennett and Minsky is that mental processes are spread over both space and time. Consider the analogy of the preparation and publication of a book. The manuscript undergoes a number of draftings and distributions among the author, the referees, and the editor, and is thus spread over both space and time before it is ultimately finalized. The multiple drafts of the book are also a reality. Ditto with what we perceive as consciousness: There are multiple drafts, and only one may get chosen in a given situation.

Dennett emphasizes that it is only an illusion that a person is conscious of what is perceived as ‘now.’ Processes in the brain occur at millisecond (and not infinite) speeds, and many of them occur simultaneously. Therefore it is impossible to carry out a sequential timing or ordering of events in the brain at and below the millisecond time scale. There is no objective ‘now’ for a person’s brain; there can be only a subjective ‘now’ which depends on the choice made by the brain from among the recent events and processes occurring in it.

In other words, there is no central or single place in the brain (the so-called Cartesian Theatre) where everything is presented together (to Minsky’s ‘single agent’), and decisions are made. Dennett presents evidence for this model from a vast range of experiments in cognitive psychology and neuroscience, as well as from ideas from evolutionary biology.

He not only rejects the notion of a Cartesian Theatre in the brain, but also those of qualia and homunculus. The term ‘qualia’ refers to the mistaken notion that feelings associated with sensation are somehow independent of sensory input. And homunculus is the name used for the now-discredited unproductive and paradoxical idea of a small agent or intelligent thing or experiencing subject, located deep inside a person’s head, determining or controlling his behaviour.

Dennett also rejects the philosophy of Cartesian Dualism, according to which consciousness (a subjective experience) belongs to a different plane of reality than the one on which the material universe is constructed. Consciousness arises from the processes of information exchange in the brain. Multiple sets of sensory information, memories and emotional cues are competing with each other at all times in the brain, but at any particular instant only one set of these factors dominates the brain. At the next instant, a slightly different set of factors is dominant. At all instants, multiple sets of information are competing with each other for dominance. This creates the illusion of a continuous stream of thoughts, leading to the impression that consciousness is the entirety of the mental functions of the individual.

Dennett (2006) believes that acquisition of a human language is a necessary prerequisite for consciousness to emerge:

I believe, but cannot yet prove, that acquiring human language (an oral or sign language) is a necessary precondition for consciousness – in the strong sense of there being a subject, an I, a ‘something it is like something to be.’ It would follow that nonhuman animals and prelinguistic children – although they can be sensitive, alert, responsive to pain and suffering, and cognitively competent in many remarkable ways (including ways that exceed normal adult human competence) – are not really conscious, in a strong sense: There is no organized subject (yet) to be the enjoyer or sufferer, no owner of the experience as contrasted with a mere cerebral locus of effects.

16.7 Hawkins’ Model for Intelligence and Consciousness

Jeff Hawkins, in his 2004 book On Intelligence, proposed the so-called memory and prediction theory of how human intelligence arises. The basic idea of Hawkins’ theory of intelligence, in his own words, is as follows: The brain uses vast amounts of memory to create a model of the world. Everything we know and have learnt is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence.

Hawkins points out that the neocortical memory differs from that of a conventional computer in four ways:

  1. The cortex stores sequences of patterns. For example, our memory of the alphabet is a sequence of patterns. It is not something stored or recalled in an instant, or all together. That is why we have difficulty saying it backwards. Similarly our memory of songs is an example of temporal sequences in memory.
  2. The cortex recalls patterns auto-associatively. The patterns are associated with themselves. One can recall complete patterns when given only partial or distorted inputs. During each waking moment, each functional region is essentially waiting for familiar patterns or pattern-fragments to come in. Inputs to the brain link to themselves auto-associatively, filling in the present, and auto-associatively linking to what normally flows next. We call this chain of memories, thought.
  3. The cortex stores patterns in an invariant form. Our brain does not remember exactly what it sees, hears, or feels; the brain remembers the important relationships in the world, independent of details.
  4. The cortex stores patterns in a hierarchy.
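Auto-associative recall (point 2 above) can be illustrated with a tiny Hopfield-style network. This is a standard textbook model used here only as an analogy, not as Hawkins’ own mechanism; the pattern and network size are invented for the example:

```python
import numpy as np

def store(patterns):
    """Hebbian storage: units that are active together in a stored
    pattern (entries +1/-1) get mutually strengthened connections."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=5):
    """Pull each unit toward the sign of its total input; a partial
    or corrupted cue settles into the nearest stored pattern."""
    x = np.array(cue, dtype=float)
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0  # break ties deterministically
    return x
```

Presenting a cue with a couple of bits flipped, the network 'fills in' the rest and returns the complete stored pattern, which is the essence of recalling a whole memory from a fragment.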

Storing sequences, auto-associative recall, and invariant representation are the necessary ingredients for predicting the future based on memories of the past. How this happens is the subject matter of Hawkins’ book. According to him, making such predictions is the essence of intelligence.

Hawkins takes the view that perhaps consciousness is simply what it feels like to have a neocortex. He suggests that the self-awareness aspect of consciousness is synonymous with the formation of declarative memories. These are memories we can recall and talk about.

Hawkins, while formulating his theory of intelligence, was enamoured of the so-called Mountcastle hypothesis. Since the same types of layers, cell types and connections exist in the entire cortex, Mountcastle (1978) put forward the following hypothesis: There is a common function, a common algorithm, that is performed by all the cortical regions. What makes the various functional areas different is the way they are connected. He went further to suggest that the different functional regions look different when imaged only because of these different connections. Hawkins suggests that, although hearing, touch, vision etc. are processed by the same algorithm in the neocortex, they are handled differently in the R-brain: ‘Hearing relies on a set of audition-specific subcortical structures that process auditory patterns before they reach the cortex. Somatosensory patterns also travel through a set of subcortical areas that are unique to somatic senses. Perhaps qualia, like emotions, are not mediated purely by the neocortex. If they are somehow bound up with subcortical parts of the brain that have unique wiring, perhaps tied to emotion centres, this might explain why we perceive them differently, even if it doesn’t explain why there is any sort of qualia sensation in the first place.’

The structure of the inputs (i.e. the spatio-temporal information pattern) is qualitatively different for, say, the auditory nerve and the optic nerve. The optic nerve has a million fibres, whereas the auditory nerve has only thirty thousand. The optic nerve carries information that is more spatial than temporal, and the auditory nerve carries information that is more temporal than spatial. This may have a bearing on why red is red and green is green. No matter how consciousness is defined, memory and prediction play crucial roles in creating it.

Here is how Hawkins answers why our thoughts appear to be independent of our bodies: ‘To the cortex our bodies are just part of the external world. Remember, the brain is in a quiet and dark box. It knows about the world only via the patterns on the sensory nerve fibres. From the brain’s perspective as a pattern device, it doesn’t know about your body any differently than it knows about the rest of the world. There isn’t a special distinction between where the body ends and the rest of the world begins. But the cortex has no ability to model the brain itself because there are no senses in the brain. Thus we can see why our thoughts appear independent of our bodies . . .’

16.8 Concluding Remarks

Consciousness is subjective and internal; perhaps a ‘virtual reality.’ In this article I have briefly discussed a few models of consciousness. The clear message is that there is nothing mystical or supernatural about consciousness. In fact, conscious superintelligent machines (robots) are likely to be a reality in the present century itself.

How did consciousness arise out of no-consciousness? It did so via the complexity-evolution route as an emergent property. Through self-organization and through cumulative natural selection, neurons emerged as a means of more efficient communication among the various parts of the brain. Interactions among neurons led to a further increase in complexity in the form of memory and prediction, and thence intelligence. From intelligence to consciousness is a difficult conceptual step because in science we have a place only for testable or falsifiable statements, made in terms of symbols or words with a preassigned unambiguous meaning. But there is no agreement on what exactly we mean by the word ‘consciousness.’ There is a whole spectrum of definitions of this word.

Richard Dawkins takes the stand that, if you take a set of statements made about consciousness, and replace this word by some meaningless word like hkzisrkjd everywhere, you would have lost or gained nothing in understanding the meaning of that set of statements!

The philosopher Daniel Dennett takes consciousness very seriously. And he ends up saying that nonhuman animals and prelinguistic children are not really conscious (in the ‘strong’ sense of the word). He admits that this assertion will shock many people, but also says that ‘. . . of course, the truth of the empirical hypothesis is in any case strictly independent of its ethical implications, whatever they are.’

Marvin Minsky uses the word ‘myth’ for describing consciousness. Like any complex adaptive system, the human brain functions in a way that cannot always be understood in terms of a few simple fundamental rules or laws. To quote Marvin Minsky (2006): ‘… every brain has hundreds of parts, each of which evolved to do certain particular kinds of jobs; some of them recognize situations, others tell muscles to execute actions, others formulate goals and plans, and yet others accumulate and use enormous bodies of knowledge. And though we don’t yet know enough about how each of those brain-centres works, we do know their construction is based on information that is contained in tens of thousands of inherited genes, so that each brain-part works in a way that depends on a somewhat different set of laws.’ According to him, none of the popular psychology words like ‘feelings,’ ‘emotions,’ and ‘consciousness’ is about any single and definite process. Each such ‘suitcase word’ vaguely refers to the effects of a large network of processes in the brain. Minsky argues that feelings are not basic at all, but are processes made of many parts. Similarly he demonstrates that ‘consciousness’ refers to more than 20 different processes (e.g. the process of reasoning and making decisions; the process of how the brain represents ‘our’ intentions; the process of how the brain knows what it has done recently; and so on).

Jeff Hawkins takes the view that ‘reality’ is largely a matter of how accurately our cortical model of the world reflects the true nature of the world.

As Douglas Hofstadter has explained in detail, consciousness emerges in a system that is powerful enough to have a sort of self-referential, self-modelling capability (‘strange loops’ is the term he uses in this context). The stage for this conclusion of his was set by Kurt Gödel’s discovery in 1931 that even things as simple as integers are powerful enough to be used for representing (at a different level) statements about themselves. Hofstadter builds on this fact to argue how conscious beings can think about and represent themselves.
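
Gödel's device of making arithmetic refer to itself has a compact analogue in programming: a quine, a program whose output is its own source text. The Python sketch below is an illustration added here, not something from Hofstadter; it shows a minimal self-representing loop of the kind his argument turns on.

```python
# A quine: the string s is both data and a template for the whole program,
# so printing s % s reproduces the program's own source text exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines of the program itself; the %r conversion quotes the string, which is what lets the code contain a representation of itself.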

. . . our intelligence is not disembodied, but is instantiated in physical objects: our brains. Their structure is due to the long process of evolution, and their operations are governed by the laws of physics. Since they are physical entities, our brains run without being told how to run.

Douglas Hofstadter, Gödel, Escher, Bach

Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai, and an Associate Editor of the journal PHASE TRANSITIONS.

About the author

Vinod Wadhawan

Dr. Vinod Wadhawan is a scientist, rationalist, author, and blogger. He has written books on ferroic materials, smart structures, complexity science, and symmetry. More information about him is available at his website. Since October 2011 he has been writing at The Vinod Wadhawan Blog, which celebrates the spirit of science and the scientific method.


  • Excellent summary of an otherwise very, very complicated subject. As a practicing psychiatrist who has to deal with complex interaction between emotions and rational decisions and actions, I find these observations extremely astute.

    I do believe that animals, such as dogs, share with humans the ability to experience complex emotions and make rational decisions based on their assessment of the situation. An example: I had a golden retriever, which I took for walks. Sometimes I let him walk before me without restraints. He would run ahead for a hundred feet or so, then suddenly return to me to be petted. When I petted him on his head he would immediately jump across the road and go to the neighboring yard to explore. This was unacceptable behavior to me. When I shouted for him to come back, he returned sheepishly, full of humiliation. It took me a while to realize that when he returned to me to be petted, he was in effect asking me, “Can I go over there and explore a little?” To him, my petting his head was the ‘permission’ he needed to go. When I scolded him for doing so, he was baffled and humiliated.

    Dogs attempt to communicate their emotions and needs by hundreds of verbal and nonverbal cues. It must be extremely frustrating to them that we humans are so dumb that we cannot understand them. Dogs experience at least 20 of 36 common human emotions, such as fear, hurt, anger, sadness, guilt, shame, disappointment, frustration, helplessness, hopelessness, humiliation, hate, bitterness, resentment, envy, jealousy, terror, horror, disgust, remorse, rage, and the like. They are not just conditioned creatures incapable of consciousness or self-awareness. The problem is that we self-absorbed humans are not capable of understanding what they are trying to communicate to us. It takes a person with an extremely high level of awareness, able to tune in to the animals and plants around him, to understand what they are trying to tell us. I have reservations about this statement by D. Dennett: “nonhuman animals and prelinguistic children are not really conscious (in the ‘strong’ sense of the word).” Our radio is simply not capable of picking up their signals.

    Great article.

  • I am pleased to have happened upon this article, Vinod.

    Your own considerations and those of Hawkins strike a strong note of resonance with the (deliberately) unconventional interpretations expressed in my recent book “Unusual Perspectives”.

    However, in my view, the section on Minsky, whose ideas are quite dated, serves no useful purpose, and the vague wafflings of Dennett only cloud what is in other respects a perceptive and cogent document.

    Some particularly important points raised by yourself and Hawkins, which are underlined in my book (albeit in a very different manner), are:

    1. That part of the brain which we consider responsible for “intelligence” can most properly be regarded as merely an evolutionary extension of the navigation facility required by animals, one which can be traced back to the basic chemoreceptors and photoreceptors of primitive organisms.

    2. As indicated by Hawkins, the unique property which is so characteristic of our species is the prodigious enhancement of the ability to generate and manipulate models of the external environment within the mind.
    Of its very nature it requires a very large amount of memory, as well as appropriate memory processing systems.

    What has driven this development is a matter of conjecture, although I do suggest an unorthodox but plausible source of evolutionary pressure for this in “Unusual Perspectives”.

    The greatest obstacle to clarity of thought in interpreting such phenomena is anthropocentrism.

    We must remember that, while we have naturally evolved to consider ourselves as individuals, we are more properly, and more usefully, treated as communities of cells. I am sure that, with a little thought, you will agree with this.

    In this light, the much-debated problem of the nature of consciousness is resolved in a very straight-forward way. The mysterious anthropocentric “I”, so hard to interpret in self-referential arguments, is simply the navigational facility (with its associated “map room” of models) that has of necessity evolved to serve the community of cells with which it is associated.

    To properly serve that function of interacting with its environment, it has necessarily evolved with some limited degree of autonomy and agency, wherein lies “self”.

    I have serious issues with the word “intelligence”!

    Use of this very vague and ill-defined term clouds many issues.

    I strongly prefer the use of the term “Imagination” (The capitalisation is intentional) to describe the special capability in which our species is observed to excel.

    My definition of Imagination, within this technical sense, is simply “The ability of a mind to generate and process models of its environment.”

    This definition would appear to be quite comprehensive and clear. It is also quite close to, though not always identical with, the everyday use of the word.

    When quantified in some way it can also be used to describe the similar (but much lesser) capabilities of other species.

    Chapters 5 and 6 of “Unusual Perspectives” contain a fuller discussion of these topics, although they are by no means independent of the wider context of the book, the latest edition of which is available in electronic format for free download from the eponymous website.

    • Thanks, Peter, for the comments and the information. Can you please give more information about how to download your book?

      I got interested in machine intelligence when I was writing my book on SMART STRUCTURES (published by the OUP in 2007). Marvin Minsky’s 2006 book should not be called ‘dated’. Some of his proposals may be dated, but the basic approach is sound. He rightly calls consciousness a ‘myth’ and a ‘suitcase word’.

      What I particularly liked about Jeff Hawkins’ work was that it made the idea of creating ‘truly’ intelligent machines look so plausible. At the end of his 2004 book he had made a number of ‘testable predictions’ from his model of intelligence. I wonder how many of them have been checked.

  • Thanks lijey. On a related matter, I believe that there is a complex interaction between plants and birds/animals, which can be discerned if we observe it closely. Here is an example: I planted a stick of ivy in a barren corner outside my house. There was no soil there at all, except for concrete debris. The plant had little chance of surviving without enough nutrients. I watered the plant now and then. Very soon after the plant started to grow up the wall, it shot a few branches away from the wall in such a way that birds could build nests. Sparrows built their nests on the clusters of these little branches. Soon their droppings enriched the concrete debris and the ivy plant began to show very fast growth. Within no time at all the ivy grew all over the high wall. Once the ivy had taken deep root under the debris, the outcrops of ivy stopped and birds could no longer build their nests. I have observed many such ‘almost deliberate’ interactions between plants and birds.

  • The article presented is one-sided as far as I can see. The author has only considered the reductionist approach (extreme materialism) to understanding the most difficult questions of brain, mind, intelligence, and consciousness. Also, I found that the author has not cited and discussed the viewpoints of many great thinkers in this field.

    Because there are also alternative understandings of these questions, the author should examine them with equal emphasis. For example, I would like to quote this statement by Prof. David Chalmers: “Consciousness, the subjective experience of an inner self, could be a phenomenon forever beyond the reach of neuroscience. Even a detailed knowledge of the brain’s workings and the neural correlates of consciousness may fail to explain how or why human beings have self-aware minds.”

    With Regards,

    • The author has only considered the reductionist approach (extreme materialism) to understanding the most difficult questions of brain, mind, intelligence, and consciousness.

      Quite the contrary, the physics of Complexity is the opposite of the reductionist approach. Dr. Wadhawan has been writing a long series on Complexity, which is why this is part 16.

    • I think I did cover many of the relevant issues. When you mention ‘great thinkers’, you may have in mind people who are intelligent, learned, and experienced. There are two types: those who make testable statements, and those who do not. Being a physicist, I am comfortable with the first type: sooner or later what they say gets tested. The latter type make statements which are, at best, just opinions. It is your choice what you do with such statements, but there is no place for them in science.

      Consciousness (whatever it means) is an EMERGENT phenomenon. The word ‘emergent’ has a technical meaning, which I explained in this series on complexity. As Ajita has already pointed out, emergence, holism, and complexity are the very antitheses of reductionism.

  • As a computer scientist who has dabbled enough in problems within the domain of the so-called “artificial intelligence”, I have a few things to say regarding this question of consciousness.

    1) The first thing we need in order to understand the nature of consciousness / creativity in the human mind (or in anything in the universe) is scepticism. No belief of any kind (belief in souls, belief in extreme reductionism) can be taken for granted.

    2) These questions of consciousness are within the domain of science and can be tested using properly designed scientific experiments.

    3) Simple extrapolation of our limited knowledge of how neurons work to the problem of human intelligence as a whole is not warranted.

    4) The domain that can provide definitive answers to these questions of consciousness is computer science, neither neuroscience nor philosophy. Everything that a neuroscientist understands can be replicated (simulated) in a computer and validated through experiment. On the other hand, AI can happen in forms not at all inspired by neuroscience. As we have realized in aviation, one need not design an aeroplane in the shape of a bird.

    5) There are some critical bottlenecks in computer science today in the realization of AI tasks, related to the so-called NP-complete problems, which take exponential time to solve even for moderately sized inputs. All current formulations of “AI” tasks such as visual object recognition, speech understanding, theorem proving, etc. degenerate into NP-complete problems in the generic case. But humans (and even animals/birds with much smaller brains) excel at solving these problems. So either we computer scientists have been getting it wrong and there exist better formulations of these problems that are not NP-complete, or there is something in our brains that is beyond the nature of Turing computability.

    6) So we have to be sceptical about the nature of our minds, and about the place of “consciousness” in the complete realization of human intelligence. Neuroscientists (or sometimes cognitive scientists) find some minor connection in how our brains work and, from their armchairs, extrapolate it to the total problem of intelligence. But there are exponential barriers (related to NP-complete problems) to such extrapolation. They will hit these barriers only when they try to solve practical problems using computers, like visual understanding, language understanding, etc.
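
    The exponential barrier described in point (5) can be made concrete with subset-sum, a standard NP-complete problem. The brute-force sketch below (illustrative Python added editorially, not part of the comment) simply tries all 2^n subsets, so each extra element doubles the worst-case work.

    ```python
    from itertools import combinations

    def subset_sum(nums, target):
        """Brute force: examine all 2**len(nums) subsets (exponential time)."""
        for r in range(len(nums) + 1):          # subset sizes 0..n
            for combo in combinations(nums, r):  # every subset of that size
                if sum(combo) == target:
                    return combo
        return None

    print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # → (8, 7)
    print(subset_sum([3, 9, 8, 4, 5, 7], 100))  # → None, after checking all 64 subsets
    ```

    Dynamic programming gives a pseudo-polynomial algorithm for this particular problem, but in the generic case no polynomial-time algorithm is known, which is the commenter's point.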

    • Thanks. I agree with most of your statements.

      ‘5) There are some critical bottlenecks in computer science today in the realization of AI tasks, related to the so-called NP-complete problems, which take exponential time to solve even for moderately sized inputs. All current formulations of “AI” tasks such as visual object recognition, speech understanding, theorem proving, etc. degenerate into NP-complete problems in the generic case. But humans (and even animals/birds with much smaller brains) excel at solving these problems. So either we computer scientists have been getting it wrong and there exist better formulations of these problems that are not NP-complete, or there is something in our brains that is beyond the nature of Turing computability.’

      You have hit the nail on the head. The NP-complete problem is our stumbling block. I shall be interested in knowing about the latest on this topic. I am aware that much progress has been made in overcoming it (or partially solving it) or bypassing it in all sorts of ingenious ways. But there is no general solution yet. A conceptual breakthrough is needed. Or perhaps we may never find a general solution.

  • Readers of Nirmukta may be interested in reading the three articles mentioned by John Stewart in his email to me:

    Hi Vinod,

    I came across your great articles on complexity. They are extremely clear and well written, and I see that you are working towards writing a book on complexity for a popular audience.

    Given your interest in consciousness in the context of complexity, I thought I would draw your attention to a paper of mine titled ‘The Future Evolution of Consciousness’. It was published in the Journal of Consciousness Studies (2007, Vol 14, No 8, pp 58-92). The central relevance of this paper to your work is that it outlines a ‘materialist’, information-processing model of the development of consciousness. The paper applies this model to understand what are currently referred to as ‘spiritual experiences’ and ‘spiritual development’. It is a step towards the integration of the discoveries of the spiritual traditions into a scientific and ‘materialist’ worldview.

    A full copy of the published paper is at: .

    I would also draw your attention to an article that the Guardian newspaper commissioned me to write recently. It outlines how a ‘materialist’ evolutionary worldview can answer the big existential questions about the sense and significance of life on this planet, and human existence in particular. If you are interested, the article titled ‘Is this the meaning of life?’ is at: The article is a condensation of my paper ‘The Meaning of Life in a Developing Universe’ which is being published in the journal Foundations of Science and is online in full at

    Given that it is an attempt to develop a ‘materialist’ worldview that can replace religious worldviews, the Guardian article and the paper may be of interest to some other authors who write for the Nirmukta website. If so, feel free to pass this email or a relevant extract of it on to them.

    Good luck with your writing!

    Kind regards,


  • The text in “Evolution of Intelligence and Consciousness” discusses consciousness clearly from various points of view. However, as Vinod K. Wadhawan admits, the term “consciousness” remains quite obscure: “…there is no agreement on what exactly we mean by the word consciousness”. Another obscure term is the mind. Let me offer my proposed definitions.
    The material world consists of entities like particles, waves, forces, and fields. The “mental world” consists of conscious experiences: sensations, feelings, thoughts, will, and self-consciousness. These entities do not belong to the material world. On the basis of physics or physiology we cannot describe pain or color. Somehow the brain is able to call up such conscious experiences.
    I suggest that consciousness is the combination of current conscious experiences. Such consciousness is not located anywhere in the world, especially not in the brain!
    The mind is not an entity of the material world, but neither does it belong to conscious experiences. I define it as the subset of brain information which can be brought to consciousness.
    I analyze these terms in detail in my blog.

    • ‘The “mental world” consists of conscious experiences: sensations, feelings, thoughts, will, and self-consciousness. These entities do not belong to the material world. On the basis of physics or physiology we cannot describe pain or color. Somehow the brain is able to call up such conscious experiences.’

      In this computer age, why not use the term ‘virtual reality’ to describe consciousness etc.?

  • Hello, Vinod K. Wadhawan. I was curious which university you teach at. I’d love to check out some of your lectures, as I am touring the world entirely for this reason this fall through spring. I am creating a list of professors to audit and perhaps meet and befriend. Email me if you are still active on this forum. I am a professor in the physics department at the University of Calgary, Alberta, Canada. Thanks!

Leave a Comment