The hardest problem: Decoding the puzzle of human consciousness
Scientific American, September 2018
Special Issue: The Science of Being Human, 48-52
Note: This is an early draft of my article. It was edited by Scientific American before publication. Do not quote.
Might we humans be the only species on this planet to be truly conscious? Might lobsters and lions, beetles and bats, be unconscious automata, responding to their worlds with no hint of conscious experience? Aristotle thought so, claiming that while humans have rational souls, other animals have only the instincts needed to survive. In medieval Christianity the ‘Great Chain of Being’ placed humans on a level above soulless animals and below only God and the angels. And in the seventeenth century, French philosopher René Descartes argued that other animals have only reflex behaviours. Yet the more biology we learn, the more obvious it is that we share not only anatomy, physiology and genetics with other animals but also systems of vision, hearing, memory and emotional expression. Could it really be that we alone have an extra special something – this marvellous inner world of subjective experience?
The question is hard because, although your own consciousness may seem the most obvious thing in the world, it is perhaps the hardest thing to study. We do not even have a clear definition beyond appealing to a famous question asked by philosopher Thomas Nagel back in 1974: ‘What is it like to be a bat?’ Nagel chose bats because their lives are so very different from our own. We may try to imagine what it’s like to sleep upside down or to navigate the world using sonar, but is it like anything at all? The crux is this: if there is nothing it’s like to be a bat, then the bat is not conscious; if there is something (anything) it is like for the bat, then it is conscious. So is there?
We share a lot with bats; we too have ears and can imagine our arms as wings. But try imagining being an octopus. You have eight curly, grippy, sensitive arms for walking, swimming and catching prey, and no skeleton, so you can squeeze yourself through tiny spaces. Only a third of your neurons are in a central brain; the rest make up eight other brains, one in each arm. So is it like something to be a whole octopus, to be its central brain, or to be a single octopus arm? The science of consciousness provides no easy way of finding out.
Even worse is the ‘hard problem’ of consciousness, or how subjective experience arises from objective brain activity. How can physical neurons with all their chemical and electrical communications create the feeling of pain, the glorious red of the sunset or the taste of fine claret? This is a problem of dualism – how can mind arise from matter? Indeed, does it?
The answer to this question divides consciousness researchers down the middle. On one side are those who agonise about the hard problem and believe in the possibility of the ‘philosopher’s zombie’: an imagined creature that is indistinguishable from you or me but has no consciousness at all. Believing in zombies means that other animals might conceivably be seeing, hearing, eating and mating ‘all in the dark’, with no subjective experience at all. On this view, consciousness must be a special additional capacity that we might have evolved either with or without and, many would say, are lucky to have.
On the other side are those who reject the possibility of zombies and think the hard problem is, to quote philosopher Patricia Churchland, a ‘Hornswoggle problem’ because consciousness just is the activity of bodies and brains, or inevitably comes along with everything we so obviously share with other animals. There is no point in asking when or why ‘consciousness itself’ evolved or what its function is because ‘consciousness itself’ does not exist.
Suffering
Why does it matter? One reason is suffering. When I accidentally stamped on my cat’s tail and she screeched and shot from the room, I was sure I’d hurt her. Yet behaviour can be misleading. We could easily give a simple robot cat pressure sensors in its tail that trigger a screech, and we wouldn’t think it suffered pain. Many people become vegetarians because of the way farm animals are treated, but are those poor cows and pigs really pining for the great outdoors? Are battery hens suffering horribly in their tiny cages? Behavioural experiments show that although hens enjoy scratching about in litter and will choose a cage with litter if access is easy, they won’t bother to push aside a heavy curtain to get to it. So perhaps they do not care that much. Lobsters make a horrible screaming noise when boiled alive, but could this just be air being forced out of their shells?
When lobsters or crabs are injured, taken out of water, or have a claw torn off, they release stress hormones similar to cortisone and corticosterone, which gives a physiological reason to believe they suffer. Even more telling, injured prawns limp and rub their wounds, and this behaviour is reduced when they are given painkillers.
The same is true of fish. When experimenters injected the lips of rainbow trout with acetic acid, the fish rubbed their lips on the sides of their tank and stopped feeding, but giving them morphine reduced these reactions. When zebrafish were given a choice between an interesting tank and a barren one, they chose the one with gravel and plants; but if they were injected with acid and the barren tank was flooded with a painkiller, they swam to the barren tank instead. Fish pain may be simpler than ours, or different in other ways, but these experiments suggest that fish do feel pain.
Despite this evidence, some people remain unconvinced. Australian biologist Brian Key argues that fish may respond as though they are in pain but this does not prove they are consciously feeling anything. Noxious stimuli, he says, ‘don’t feel like anything to a fish’. Human consciousness, he adds, relies on signal amplification and global integration, and fish lack the neural architecture that makes these possible. So if behavioural and physiological studies cannot resolve the issue, perhaps comparing brains might help.
A world of different brains
Could humans be uniquely conscious because of their large brains? British pharmacologist Susan Greenfield proposes that ‘consciousness increases with brain size across the animal kingdom’. But if she is right, then African elephants and dusky dolphins are more conscious than you are, and Great Danes and Dalmatians are more conscious than Pekinese and Pomeranians, which makes no sense.
More relevant than size may be aspects of brain organisation and function that scientists think are indicators of consciousness. All mammals, and most other animals, including many fish and reptiles and some insects, alternate between waking and sleeping, or at least have strong circadian rhythms of activity and responsiveness. Specific brain areas, such as the lower brainstem in mammals, control these states. So, in the sense of being awake, most animals are conscious, but this is not the same as asking whether they have conscious contents: is there something it’s like to be an awake slug or a lively lizard?
Many scientists, including Francis Crick and, more recently, British neuroscientist Anil Seth, argue that human consciousness depends on widespread, relatively fast, low-amplitude interactions between the thalamus, at the core of the brain, and the cortex, and that these thalamo-cortical loops underlie consciousness. If this is correct, finding the same features in other species should indicate consciousness. Seth concludes that most mammals share these structures and are therefore conscious. Yet many other animals do not: lobsters and prawns, for instance, have no cortex or thalamo-cortical loops. Perhaps we need more specific theories of consciousness to identify the critical features.
Among the most popular is Global Workspace theory (GWT), originally proposed by American psychologist Bernard Baars. The idea is that human brains are structured around a workspace, something like working memory. Any mental contents that make it into the workspace, or onto the brightly lit ‘stage’ in the theatre of the mind, are then broadcast to the rest of the unconscious brain. This global broadcast is what makes them conscious.
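To see the architecture more concretely, here is a minimal sketch in code, assuming a toy, blackboard-style model rather than Baars’s actual implementation: specialist processes compete, the most salient content wins access to the workspace, and that content alone is broadcast to all the other processes.

```python
# A toy, blackboard-style global workspace (an illustrative assumption,
# not Baars's actual model). Specialist processes bid for access; the
# winning content enters the workspace and is broadcast to all of them.

from dataclasses import dataclass


@dataclass
class Specialist:
    name: str

    def propose(self, stimulus: str) -> tuple[float, str]:
        # Bid with a salience score and a candidate content.
        salience = 1.0 if self.name in stimulus else 0.1
        return salience, f"{self.name}: {stimulus}"

    def receive(self, broadcast: str) -> None:
        # Every specialist hears whatever wins the workspace.
        print(f"[{self.name}] received broadcast -> {broadcast}")


def workspace_cycle(specialists: list[Specialist], stimulus: str) -> str:
    bids = [s.propose(stimulus) for s in specialists]   # competition
    _, winner = max(bids, key=lambda bid: bid[0])       # access to the workspace
    for s in specialists:                               # global broadcast
        s.receive(winner)
    return winner


if __name__ == "__main__":
    modules = [Specialist("vision"), Specialist("hearing"), Specialist("memory")]
    workspace_cycle(modules, "vision detects a red sunset")
```

In this toy, only broadcast contents count as ‘conscious’; everything else stays local and unconscious, which is the distinction the theory turns on.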
This implies that animals with no brain or only the simplest nervous systems, such as earthworms, spiders and jellyfish, could not be conscious at all; nor could those whose brains lack the right global workspace architecture, including fish, octopuses and many other animals. Yet we have already seen behavioural evidence suggesting that some of these animals are conscious.
Integrated Information Theory (IIT), originally proposed by neuroscientist Giulio Tononi, is a mathematically based theory that depends on measuring a quantity called Φ or ‘phi’: the extent to which information in a system is both differentiated and unified. Various ways of measuring phi lead to the conclusion that large, complex brains like ours, with their signal amplification, global integration and strongly interconnected feedforward and feedback circuits, have high phi, while simpler systems have lower phi, with further differences reflecting the specific organisation of each species’ nervous system. Unlike global workspace theory, this implies that consciousness might exist in simple form in the lowliest creatures, and also in appropriately organised machines with high phi.
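Tononi’s full phi calculus is far more elaborate than anything that fits here, but a crude toy proxy conveys the intuition of ‘differentiated yet unified’: ask how much information is lost when the system is cut into independent parts, and score the system by its weakest cut. The sketch below is a simplified stand-in for illustration, not IIT’s official measure.

```python
# A crude integration measure loosely inspired by IIT's phi (an illustrative
# simplification, not Tononi's actual calculus). We score a small binary
# system by the mutual information across its weakest bipartition: if any
# cut leaves the parts statistically independent, "integration" is zero.

import math


def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)


def mutual_information(joint, n_left):
    # joint: dict mapping whole-system states (tuples of bits) to probabilities.
    # n_left: how many units fall on the left side of the cut.
    p_left, p_right = {}, {}
    for state, p in joint.items():
        left, right = state[:n_left], state[n_left:]
        p_left[left] = p_left.get(left, 0.0) + p
        p_right[right] = p_right.get(right, 0.0) + p
    return (entropy(p_left.values()) + entropy(p_right.values())
            - entropy(joint.values()))


def toy_phi(joint, n_units):
    # Take the weakest contiguous bipartition, to keep the toy simple.
    return min(mutual_information(joint, k) for k in range(1, n_units))


if __name__ == "__main__":
    # Two perfectly correlated units: maximally 'integrated' (1 bit).
    correlated = {(0, 0): 0.5, (1, 1): 0.5}
    # Two independent fair coins: no integration at all.
    independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
    print(toy_phi(correlated, 2))   # 1.0
    print(toy_phi(independent, 2))  # 0.0
```

In IIT proper, richly interconnected, re-entrant networks score high, while fragmented or purely feedforward systems score at or near zero; the toy above captures only the ‘weakest cut’ idea.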
Both these theories are currently considered contenders for a true theory of consciousness and ought to help us with our question – but their answers conflict.
The evolution of human consciousness
Our behavioural, physiological and anatomical studies all give conflicting answers, as do the two most popular theories of consciousness. Might it help to explore how, why and when consciousness evolved?
Here again we meet that gulf between two groups of researchers. Those in the first group assume that because we are obviously conscious, consciousness must have a function, such as directing behaviour or saving us from predators. Yet the answers they give for when it arose range from more than half a billion years ago to historical times. For example, psychiatrist Todd Feinberg and biologist Jon Mallatt describe the defining features of consciousness as nested and non-nested hierarchies and isomorphic mental images, which, they claim, are found in animals dating back 560 to 520 million years. Baars, the author of global workspace theory, ties the emergence of consciousness to that of the mammalian brain around 200 million years ago. British archaeologist Steven Mithen points to the cultural explosion that started 60,000 years ago when, he argues, separate skills came together in a previously divided brain. And psychologist Julian Jaynes claims there is no evidence of consciousness in the Greek epic the Iliad, so that until about 3,000 years ago people had no subjective experiences.
Is any of these right? They are all mistaken, claim those in the second group, because consciousness has no independent function or origin: it’s not that kind of thing. This group includes ‘eliminative materialists’ such as philosophers Patricia and Paul Churchland, who argue that consciousness just is the firing of neurons, and that we will come to accept this just as we have come to accept that light just is electromagnetic radiation. IIT also denies a separate function for consciousness, because any system with sufficiently high phi must inevitably be conscious. Neither of these theories makes human consciousness unique, but one final idea might.
This is the well-known, but much misunderstood, claim that consciousness is an illusion. This does not deny the existence of subjective experience but rejects most of our intuitions about what consciousness must be like. Illusionist theories include psychologist Nicholas Humphrey’s idea of the magical mystery show staged inside our heads and neuroscientist Michael Graziano’s ‘attention schema theory’. But by far the best known is American philosopher Daniel Dennett’s ‘multiple drafts’ model: brains are massively parallel systems with no central ‘theatre’ in which ‘I’ sit viewing and controlling the world. Instead, multiple drafts of perceptions and thoughts are continually processed, and none is either conscious or unconscious until the system is probed and elicits a response. Only then do we say the thought or action was conscious. Dennett relates this to the theory of memes. Because humans are capable of widespread generalised imitation, we alone can copy, vary and choose between memes, giving rise to language and culture. ‘Human consciousness is itself a huge complex of memes’, says Dennett, and the self is a ‘benign user illusion’.
But perhaps this illusory self, this complex of memes I call a ‘selfplex’, is not so benign, benefitting the memes that make it up more than the person who carries them. Perhaps paradoxically, our unique capacity for language, autobiographical memory and a sense of being a continuing self may increase our suffering. While other species may feel pain, they cannot make it worse by crying, ‘How long will this pain last? Will it get worse? Why me? Why now?’ In this sense our suffering may be unique. For illusionists the answer to our question is simple and obvious. We humans are unique because we alone are clever enough to be deluded.