The Real Problem
What is consciousness?
For a conscious creature, there is something that it is like to be that creature. There is something it is like to be me, something it is like to be you, and probably something it is like to be a sheep, or a dolphin. For each of these creatures, subjective experiences are happening. It feels like something to be me. But there is almost certainly nothing it is like to be a bacterium, a blade of grass, or a toy robot. For these things, there is (presumably) never any subjective experience going on: no inner universe, no awareness, no consciousness.
This way of putting things is most closely associated with the philosopher Thomas Nagel, who in 1974 published a now legendary article called "What is it like to be a bat?" in which he argued that while we humans could never experience the experiences of a bat, there would nonetheless be something it is like for the bat to be a bat. I've always favored Nagel's approach because it emphasizes phenomenology: the subjective properties of conscious experience, such as why a visual experience has the form, structure, and qualities that it does, as compared to the subjective properties of an emotional experience, or of an olfactory experience. In philosophy, these properties are sometimes also called qualia: the redness of red, the pang of jealousy, the sharp pain or dull throb of a toothache.
For an organism to be conscious, it has to have some kind of phenomenology for itself. Any kind of experience-any phenomenological property-counts as much as any other. Wherever there is experience, there is phenomenology; and wherever there is phenomenology, there is consciousness. A creature that comes into being only for a moment will be conscious just as long as there is something it is like to be it, even if all that's happening is a fleeting feeling of pain or pleasure.
We can usefully distinguish the phenomenological properties of consciousness from its functional and behavioral properties. These refer to the roles that consciousness may play in the operations of our minds and brains, and to the behaviors an organism is capable of, by virtue of having conscious experiences. Although the functions and behaviors associated with consciousness are important topics, they are not the best places to look for definitions. Consciousness is first and foremost about subjective experience-it is about phenomenology.
This may seem obvious, but it wasn't always so. At various times in the past, being conscious has been confused with having language, being intelligent, or exhibiting behavior of a particular kind. But consciousness does not depend on outward behavior, as is clear during dreaming and for people suffering states of total bodily paralysis. To hold that language is needed for consciousness would be to say that babies, adults who have lost language abilities, and most if not all nonhuman animals lack consciousness. And complex abstract thinking is just one small part-though possibly a distinctively human part-of being conscious.
Some prominent theories in the science of consciousness continue to emphasize function and behavior over phenomenology. Foremost among these is the "global workspace" theory, which has been developed over many years by the psychologist Bernard Baars and the neuroscientist Stanislas Dehaene, among others. According to this theory, mental content (perceptions, thoughts, emotions, and so on) becomes conscious when it gains access to a "workspace," which-anatomically speaking-is distributed across frontal and parietal regions of the cortex. (The cerebral cortex is the massively folded outer surface of the brain, made up of tightly packed neurons.) When mental content is broadcast within this cortical workspace, we are conscious of it, and it can be used to guide behavior in much more flexible ways than is the case for unconscious perception. For example, I am consciously aware of a glass of water on the table in front of me. I could pick it up and drink it, throw it over my computer (tempting), write a poem about it, or take it back into the kitchen now that I realize it's been there for days. Unconscious perception does not allow this degree of behavioral flexibility.
Another prominent theory, called "higher-order thought" theory, proposes that mental content becomes conscious when there is a "higher-level" cognitive process that is somehow oriented toward it, rendering it conscious. In this theory, consciousness is closely tied to processes like metacognition-meaning "cognition about cognition"-which again emphasizes functional properties over phenomenology (though less so than global workspace theory). Like global workspace theory, higher-order thought theories also emphasize frontal brain regions as key for consciousness.
Although these theories are interesting and influential, I won't have much more to say about either in this book. This is because they both foreground the functional and behavioral aspects of consciousness, whereas the approach I will take starts from phenomenology-from experience itself-and only then turns to function and behavior.
The definition of consciousness as "any kind of subjective experience whatsoever" is admittedly simple and may even sound trivial, but this is a good thing. When a complex phenomenon is incompletely understood, prematurely precise definitions can be constraining and even misleading. The history of science has demonstrated many times over that useful definitions evolve in tandem with scientific understanding, serving as scaffolds for scientific progress, rather than as starting points, or ends in themselves. In genetics, for example, the definition of a "gene" has changed considerably as molecular biology has advanced. In the same way, as our understanding of consciousness develops, its definition-or definitions-will evolve too. If, for now, we accept that consciousness is first and foremost about phenomenology, then we can move on to the next question.
How does consciousness happen? How do conscious experiences relate to the biophysical machinery inside our brains and our bodies? How indeed do they relate to the swirl of atoms or quarks or superstrings, or to whatever it is that the entirety of our universe ultimately consists in?
The classic formulation of this question is known as the "hard problem" of consciousness. This expression was coined by the Australian philosopher David Chalmers in the early 1990s and it has set the agenda for much of consciousness science ever since. Here is how he describes it:
It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.
Chalmers contrasts this hard problem of consciousness with the so-called easy problem-or easy problems-which have to do with explaining how physical systems, like brains, can give rise to any number of functional and behavioral properties. These functional properties include things like processing sensory signals, selecting actions, controlling behavior, paying attention, generating language, and so on. The easy problems cover all the things that beings like us can do and that can be specified in terms of a function-how an input is transformed into an output-or in terms of a behavior.
Of course, the easy problems are not easy at all. Solving them will occupy neuroscientists for decades or centuries to come. Chalmers's point is that the easy problems are easy to solve in principle, while the same cannot be said for the hard problem. More precisely, for Chalmers, there is no conceptual obstacle to easy problems eventually yielding to explanations in terms of physical mechanisms. By contrast, for the hard problem it seems as though no such explanation could ever be up to the job. (A "mechanism"-to be clear-can be defined as a system of causally interacting parts that produce effects.) Even after all the easy problems have been ticked off, one by one, the hard problem will remain untouched. "[E]ven when we have explained the performance of all the functions in the vicinity of experience-perceptual discrimination, categorization, internal access, verbal report-there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?"
The roots of the hard problem extend back to ancient Greece, perhaps even earlier, but they are particularly visible in René Descartes's seventeenth-century sundering of the universe into mind stuff, res cogitans, and matter stuff, res extensa. This distinction inaugurated the philosophy of dualism, and has made all discussions of consciousness complicated and confusing ever since. This confusion is most evident in the proliferation of different philosophical frameworks for thinking about consciousness.
Take a deep breath, here come the "isms."
My preferred philosophical position, and the default assumption of many neuroscientists, is physicalism. This is the idea that the universe is made of physical stuff, and that conscious states are either identical to, or somehow emerge from, particular arrangements of this physical stuff. Some philosophers use the term materialism instead of physicalism, but for our purposes they can be treated synonymously.
At the other extreme to physicalism is idealism. This is the idea-often associated with the eighteenth-century bishop George Berkeley-that consciousness or mind is the ultimate source of reality, not physical stuff or matter. The problem isn't how mind emerges from matter, but how matter emerges from mind.
Sitting awkwardly in the middle, dualists like Descartes believe that consciousness (mind) and physical matter are separate substances or modes of existence, raising the tricky problem of how they ever interact. Nowadays, few philosophers or scientists would explicitly sign up for this view. But for many people, at least in the West, dualism remains beguiling. The seductive intuition that conscious experiences seem nonphysical encourages a "naïve dualism" where this "seeming" drives beliefs about how things actually are. As we'll see throughout this book, the way things seem is often a poor guide to how they actually are.
One particularly influential flavor of physicalism is functionalism. Like physicalism, functionalism is a common and often unstated assumption of many neuroscientists. Many who take physicalism for granted also take functionalism for granted. My own view, however, is to be agnostic and slightly suspicious.
Functionalism is the idea that consciousness does not depend on what a system is made of (its physical constitution), but only on what the system does, on the functions it performs, on how it transforms inputs into outputs. The intuition driving functionalism is that mind and consciousness are forms of information processing which can be implemented by brains, but for which biological brains are not strictly necessary.
Notice how the term "information processing" sneaked in here unannounced (as it also did in the quote from Chalmers a few pages back). This term is so prevalent in discussions of mind, brain, and consciousness that it's easy to let it slide by. This would be a mistake, because the suggestion that the brain "processes information" conceals some strong assumptions. Depending on who's doing the assuming, these range from the idea that the brain is some kind of computer, with mind (and consciousness) being the software (or "mindware"), to assumptions about what information itself actually is. All of these assumptions are dangerous. Brains are very different from computers, at least from the sorts of computers that we are familiar with. And the question of what information "is" is almost as vexing as the question of what consciousness is, as we'll see later on in this book. These worries are why I'm suspicious of functionalism.
Taking functionalism at face value, as many do, carries the striking implication that consciousness is something that can be simulated on a computer. Remember that for functionalists, consciousness depends only on what a system does, not on what it is made of. This means that if you get the functional relations right-if you ensure that a system has the right kind of "input-output mappings"-then this will be enough to give rise to consciousness. In other words, for functionalists, simulation means instantiation-it means coming into being, in reality.
How reasonable is this? For some things, simulation certainly counts as instantiation. A computer that plays Go, such as the world-beating AlphaGo Zero from the British artificial intelligence company DeepMind, is actually playing Go. But there are many situations where this is not the case. Think about weather forecasting. Computer simulations of weather systems, however detailed they may be, do not get wet or windy. Is consciousness more like Go or more like the weather? Don't expect an answer-there isn't one, at least not yet. It's enough to appreciate that there's a valid question here. This is why I'm agnostic about functionalism.
There are two more "isms," then we're done.
The first is panpsychism. Panpsychism is the idea that consciousness is a fundamental property of the universe, alongside other fundamental properties such as mass/energy and charge; that it is present to some degree everywhere and in everything. People sometimes make fun of panpsychism for claiming things like stones and spoons are conscious in the same sort of way that you or I are, but these are usually deliberate misconstruals designed to make it look silly. There are more sophisticated versions of the idea, some of which we will meet in later chapters, but the main problems with panpsychism don't lie with its apparent craziness-after all, some crazy ideas turn out to be true, or at least useful. The main problems are that it doesn't really explain anything and that it doesn't lead to testable hypotheses. It's an easy get-out to the apparent mystery posed by the hard problem, and taking it on ushers the science of consciousness down an empirical dead end.
Finally, there's mysterianism, which is associated with the philosopher Colin McGinn. Mysterianism is the idea that there may exist a complete physical explanation of consciousness-a full solution to Chalmers's hard problem-but that we humans just aren't clever enough, and never will be clever enough, to discover this solution, or even to recognize a solution if it were presented to us by super-smart aliens. A physical understanding of consciousness exists, but it lies as far beyond us as an understanding of cryptocurrency lies beyond frogs. It is cognitively closed to us by our species-specific mental limitations.
What can be said about mysterianism? There may well be things we will never understand, thanks to the limitations of our brains and minds. Already, no single person is able to fully comprehend how an Airbus A380 works. (And yet I'm happy to sit in one, as I did one time on the way home from Dubai.) There are certainly things which remain cognitively inaccessible to most of us, even if they are understandable by humans in principle, like the finer points of string theory in physics. Since brains are physical systems with finite resources, and since some brains seem incapable of understanding some things, it seems inescapable that there must be some things which are the case, but which no human could ever understand. However, it is unjustifiably pessimistic to preemptively include consciousness within this uncharted domain of species-specific ignorance.
Copyright © 2021 by Anil Seth. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.