I can’t point to an exact moment in my life when I realized that there’s this little voice in my head.
Maybe you can pin down the exact day you discovered there’s an echoey voice in your head that is “you.” This is the voice you hear as you read these words. Some days, I’ve idly wondered, as some of you might have, what this voice is.
For a while, younger me used to think this was what being “conscious” meant.
Some call this the narrative self, an inner monologue. Reportedly there are humans who don’t have it, and they are still conscious. They can still reason. (Some researchers even propose a term for the near absence of inner speech: “anendophasia”.)
So the voice you hear as you read these words is not what makes you conscious. “But wait a minute!” you might say. “I know what it is to be conscious. Of course I know; I am conscious the same way you are.” Well, yes, but now we get into murky terrain, because before we go forward we need to pin down a few things about what it means to be conscious and what it means to be intelligent, or else we will get lost in nuances.
What is consciousness? Unfortunately, we don’t have a single clear definition for it, and not for lack of trying by decades of philosophers and scientists. Instead, we have several competing theories about what exactly consciousness is. What we can say is that you know what it means to be conscious.
Let’s settle on a working definition: when I say an entity is “conscious,” I mean two things: (1) there is something it feels like, from the inside, to be that entity, and (2) some information is available to the entity for use in deliberate thought and action.
As Thomas Nagel wrote:
“The fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism.”
There is something it feels like to be you. Also, information isn’t just processed somewhere in the background; it’s usable for thinking, deciding, speaking, and guiding deliberate action right now. This is what we will constrain ourselves to.
How about the question: what is intelligence? Again, there are many formalized definitions, but let’s start with this one from Shane Legg and Marcus Hutter:
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
We will augment this definition with the requirement that the agent should be able to efficiently learn, plan, and self-correct its course.
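For the mathematically inclined, Legg and Hutter give this definition formal shape in their work on universal intelligence. A rough sketch of their measure, simplified from the paper:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}$$

Here $\pi$ is the agent, $E$ is a set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ is the expected total reward the agent accumulates in $\mu$. In words: intelligence is performance summed over all environments, weighted toward the simpler ones.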
Now, let’s get back to that voice in your head. What is the “you” that is awake? I used to think there was a nebulous space inside my head where “I” am, a sort of chamber where my mind, and thereby the conscious “me,” sits. This conjures up an image of a little man sitting inside your head, a homunculus, watching everything, listening to everything, pressing buttons to move the body: what Daniel Dennett called the “Cartesian Theater.”
But a simple, naive question breaks this down: how does the little man see? Well, you’d need another little man inside the little man’s head, and another inside his. It quickly becomes clear that there is nobody sitting inside our head driving our body, because the explanation is an infinite regress. You cannot explain something by saying it’s turtles all the way down.
So another hypothesis we can take up is that we are the result of processes: neurons, hormones, and a multitude of chemical and physical mechanisms. This is much better than a man in the head. But if we are processes tied together, how do we understand things when none of the individual processes do?
John Searle proposed what is now a famous thought experiment: the Chinese Room. Imagine you are locked in a room with no windows, just a slit to receive slips of paper and a slit on the other side to send them out. You have no knowledge of the Chinese language. But inside the room you have a massive rulebook that tells you exactly how to respond to strings of Chinese symbols: “If you receive these symbols, copy out those symbols.”
Outside the room, people ask you questions by slipping in pieces of paper covered in Chinese characters. You follow the manual and send back another set of Chinese characters. To an outsider, it’s quite evident that you “know Chinese.” Look at you responding fluently. But as you (inside the room) know, you have no clue what you are doing. You are just following a manual.
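To make the mechanism concrete, here is a toy sketch in Python. The rulebook entries are invented for illustration; the point is that the program produces fluent-looking replies by pure lookup, with meaning represented nowhere:

```python
# A toy "Chinese Room": replies come from rule lookup, not understanding.
# The rulebook entries below are invented placeholders for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
}

def respond(symbols: str) -> str:
    """Return the rulebook's output for the given input symbols.

    Nothing here represents what the symbols mean; the function only
    matches shapes, exactly like the person in Searle's room.
    """
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(respond("你好吗？"))  # looks fluent from outside the room
```

Swap the strings for any other symbols and the program works identically, which is Searle’s point: syntax alone, however competent, is not semantics.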
This experiment argues that you can have ability, competence, and performance with absolutely no genuine understanding. Just like you sitting in that room manipulating symbols, there could be entities that showcase abilities but in no way showcase understanding, and thus have no inner awareness of what they are doing.
Let’s explore another strange condition called blindsight, in which people with damage to the part of the brain that processes visual information lose conscious vision. But surprisingly, if you test them, they can still guess things about what’s there better than random chance: the direction a line is tilted, where a light flashed, whether something moved left or right. For them, it feels like they are guessing, taking a shot in the proverbial dark, yet they seem to be right.
This phenomenon showcases yet another point: ability and competence can happen with no awareness of competence at all. Extending this idea, an entity could do something, reason, or think without having any conscious knowledge that it is doing it.
Put these two together and a pattern appears: perhaps intelligent ability can exist without understanding, and ability can exist without conscious awareness. The assumption that consciousness and intelligence are coupled may not be as strong as we thought. That leaves us with the real question: what, if anything, does consciousness add that intelligence alone does not provide?
Now, welcome to the age of AI. Let’s constrain ourselves to artificial intelligence in the form of large language models, and LLMs with harnesses built around them, also known as “agents.”
We now have models that produce words (more precisely, they output tokens) in ways that look like thinking, reasoning, creativity, tool use, planning, and action. You can talk to these models, ask them questions, have them solve your problems, and get advice from them. You can ask these models if they are self-aware, and depending on the personality they adopt, they may say yes. But now we know that entities can have ability without genuine understanding, and that competence requires no conscious awareness.
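To be concrete about what “outputting tokens” means, here is a minimal sketch of the generation loop. `model` is a hypothetical stand-in for a trained network returning a probability for each candidate next token; no particular library’s API is implied:

```python
import random

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by the model's probabilities."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(model, prompt_tokens: list[str], max_new: int = 50) -> list[str]:
    """Autoregressive generation: each step predicts one more token,
    conditioned on everything produced so far. `model` is a hypothetical
    callable mapping a token sequence to {token: probability}."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = model(tokens)                  # score every candidate next token
        tokens.append(sample_next_token(probs))
    return tokens
```

Nothing in this loop asserts understanding or awareness; it is next-token prediction, repeated. Whether something more is going on inside the network is precisely the open question.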
You can ask a model to debug your code, and it will identify the error, explain why it is happening, suggest multiple fixes, and even anticipate edge cases you hadn’t considered, appearing to understand programming in a way that feels remarkably human. These LLMs sit at an uncomfortable crossroads, and as they hill-climb toward general intelligence, we arrive at a question that may well prove to be very important:
Are these models conscious?
We know now that we cannot confuse ability with awareness. We cannot assume that plausible answers, or the chain-of-thought these models exhibit (the “thinking” you can read in reasoning models), reflect any genuine introspection or self-awareness. As we build scaffolding, systems, and agents around these models to pursue general intelligence, we reach another question:
Does general intelligence, then, need consciousness?
Well, if LLMs achieve general intelligence, itself a hotly contested definition, while we still struggle to test for and confirm consciousness and self-awareness, then a reasonable hypothesis is that they are not conscious. Thus, in the future, we may have entities that exhibit the same level of intelligence as you or me but are not conscious.
Then, what does this say about human consciousness?
There are a few possibilities. I’ll leave you with one that seems attractive to me: if general intelligence doesn’t require consciousness, then human consciousness needs explanation as something other than a prerequisite for intelligence. Maybe consciousness as we know it is just a weird quirk of our evolution, a spandrel: a byproduct of other evolutionary changes, like the space under an arch that exists not by design but as a consequence of the arch’s structure. Maybe instead it’s something humans ended up with that became useful for coordination across societies. Maybe consciousness is not required for some kinds of intelligence, like the one LLMs exhibit.
Asking “what am I?” and “am I this voice in my head?” is perhaps a uniquely human thought. Meta-thinking and self-awareness may be uncommon across the cosmos, and intelligent entities, whether built by our hands or evolved separately, may have no concept of consciousness at all. Unsettlingly, the inner world we carry around may be an optional element. Maybe some of these answers will arrive with the advent of an intelligence we treat as an equal, when we must judge whether it is, in fact, conscious.
Notes:
- Blindsight by Peter Watts is a recommended read that inspired a few of these thoughts, but be forewarned about vampires in space.
- The term “spandrel” originates in architecture, where it refers to the roughly triangular space between the top of an arch and the ceiling. It is also used in evolutionary biology to describe a byproduct of the evolution of some other characteristic, rather than a direct product of selection. The human navel is a spandrel.
Originally published on Haecceity