intelligence is not consciousness: Nonlinear Function
Created: January 13, 2023
Modified: January 18, 2023

intelligence is not consciousness

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

A lot of discussion around artificial intelligence implicitly conflates intelligence with consciousness. It assumes that as we advance AI, as we build systems capable of increasingly flexible, insightful, and human-like cognition, this will somehow lead to them being conscious or sentient. This doesn't seem right to me.

Most people who reflect on this will notice that many of their most vivid conscious experiences actually involve very little of what we'd consider higher-order thought. Whether pleasurable, like the ecstasy of an orgasm, the taste of a delicious meal, or the sight of a beautiful sunset, or painful, like a broken bone or deep grief from a death or breakup, these 'peak' moments often involve absorption in the flow of sensory experience (which for this purpose includes psychosomatic sensations and physical manifestations of strong emotion, like the whole-body tingling of an orgasm or the pit-in-stomach clenching of deep fear or sadness). They exist independent of our capacity for high-level thought, for reasoning, planning, and learning; they would exist even if we had no language to describe them. Presumably they do exist for many non-human animals that lack all of these things. So it's hard to view conscious experience as directly requiring or benefitting from the sorts of capabilities we are trying to build into AIs.

Strongly distilled forms of this distinction come up in meditative practices that focus explicitly on developing and recognizing conscious awareness without explicit thought. This is visible in the common and simple instruction to attend to the present-moment sensation of the breath, and to return to this sensation whenever you notice you've become distracted by thought. Nick Cammarata on Twitter points out that at one endpoint of this sort of practice, hard jhana states, there is no decision-making, no problem-solving, no thought, no concept of the self, and yet these states are clearly conscious; in some ways, peak consciousness.

Conversely, we know that our brains do quite a lot of sophisticated processing, modeling, and decision-making that we are not explicitly conscious of. Some of this processing occurs even during deep sleep when we are presumably not conscious of anything. So we can have consciousness without intelligence, and intelligence without consciousness (this seems to point towards the possibility of p-zombies, although I'm not sure if it fully gets us there).

This is, in my opinion, a huge blind spot of current AI research. We are perhaps getting close to building systems with human-level intelligence, but nowhere near an understanding of human consciousness. And yet our consciousness has a huge impact on our cognition, and is generally considered to be the ultimate source of moral worth, so understanding it is relevant both to building human-like AIs (AI that can meditate) and, of course, to the ultimate task of AI safety.