consciousness: Nonlinear Function
Created: October 04, 2021
Modified: July 14, 2023


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Philosophical views on consciousness:

Buddhist and meditative traditions focus on awareness. They claim that consciousness has nothing to do with the content of our thoughts or with our personal identity, because through meditation or psychedelics you can enter conscious states 'cleaned up' of thought and of the notion of self (as in ego death).

Ned Block distinguishes between two main things we mean by "consciousness":

  • phenomenal consciousness: identified with experience; there is "something it is like" to be in a given state
  • access consciousness: the set of representations that come together (perhaps in a global workspace) so as to be available as premises for reasoning and for rational control of action and speech.

He also touches on

  • self-consciousness: possessing the concept of the self
  • monitoring-consciousness: some sort of self-reference (strange loops) or other process that represents one's own internal states

Some relevant psychedelic experiences I've had:

  • 'peak consciousness': while eating fresh-baked bread on psilocybin, I understood that we are the universe experiencing itself, and that at that moment I was having an extraordinarily rich experience.
  • ego death: on 5-MeO-DMT, I felt like my consciousness grew to encompass the entire library of Babel of potential selves. I felt as though:
    • there is some mathematical description of 'consciousness', maybe as a system with a certain type of self-awareness
    • various mathematical objects, e.g., the computations implemented by certain physical dynamics, can satisfy this description
    • for each such mathematical object, there is something it is like to 'be the math'

Consciousness is clearly linked to attention. Our minds are at any moment taking in a huge amount of information, and storing years of memories, most of which we are not aware of at any given moment. Consciousness is bigger than attention in the narrow sense --- we can be attending to the person we're talking to while still aware of whatever else is happening in our environment. But it is still a filter.

Nick Cammarata on Twitter points out that consciousness is not the same as self-awareness or intelligence: see intelligence is not consciousness. Still, in humans these things mostly seem to coincide. Conscious experience is the clearinghouse, the place where everything comes together. We can only speak about things in our conscious experience.

Consciousness and computation: Joe Antognini argues that computations cannot be conscious, because

  • to be a computation is to be in 1-1 correspondence with the state trajectory of a Turing machine, but
  • whether an arbitrary natural system is in such a correspondence depends entirely on the observer, so being a computation is not an intrinsic property of the system, and it can't tell us whether the system is intrinsically conscious. This all gets at the question of what it means for a system to 'implement' a computation.
  • Suppose I snapshot the states of a totally chaotic system at fixed time intervals, and the states I see happen to correspond (under some reasonable mapping) to the execution trace of a Turing machine. But this is pure coincidence; the 'actual' dynamics were not any sort of Turing machine simulation. Should this have the same consciousness as a system that runs an explicit simulator of the kind we would recognize?
  • Does a program implemented by a Post Correspondence Problem setup have the same consciousness as the same program implemented as a Turing machine?
  • Can any physical system with a large enough state space (he uses the example of an iron bar) be considered to be in correspondence with any computation?
    • it seems like some 'observers' (state mappings) have lower Kolmogorov complexity than others; it seems wrong to equate simple mappings with incredibly convoluted ones.

Can we require that the system's dynamics be consistent with the program's execution across all inputs? (A toy version of this check is sketched after the next bullet.)

  • how to define 'input'?
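
One way to make the 'implements a computation' question concrete is to demand that a state-decoding map commute with the system's dynamics from every starting state, not just along one lucky trajectory. A minimal sketch in Python --- the step functions, the decode map, and the toy counter/bit-flip systems below are made-up illustrations, not any standard formalism:

    # Minimal sketch (toy formalism): a mapping 'decode' from physical states to
    # machine states counts as an implementation only if it commutes with the
    # dynamics from every initial state, not just along one lucky trajectory.

    def implements(physical_step, machine_step, decode, initial_states, n_steps=10):
        """Check decode(physical_step(s)) == machine_step(decode(s)) along every
        trajectory launched from the given initial states."""
        for s in initial_states:
            for _ in range(n_steps):
                if decode(physical_step(s)) != machine_step(decode(s)):
                    return False
                s = physical_step(s)
        return True

    # Toy example: a 'physical' counter mod 4 and an abstract machine that flips
    # a bit. The counter's low bit tracks the bit-flip machine from every state.
    physical_step = lambda s: (s + 1) % 4   # the physical dynamics
    machine_step = lambda b: 1 - b          # the abstract computation
    decode = lambda s: s % 2                # the observer's mapping

    print(implements(physical_step, machine_step, decode, initial_states=range(4)))  # True

A sufficiently convoluted decode map could make almost any system 'implement' almost any program, which is why the Kolmogorov complexity of the mapping seems relevant above.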

This HN post seems like a nice response:

… The actual substantive point of the article is not that consciousness is not computation, but that for computation to produce consciousness, it has to have semantics--it has to be hooked up to inputs and outputs in a non-trivial way. Which is certainly not easy, but that doesn't mean it's impossible.

What does rigorous 'consciousness research' look like?

Humans have the ability to describe our own subjective experience. So if there are real general truths about consciousness, you'd expect to see some sort of consistency across human self-reports. It's a fuzzy lens, but this establishes that at least some properties of consciousness are in principle identifiable through objective research methods.

An interesting test for machine consciousness is to ask a machine about some aspect of consciousness it's never seen described by humans. If its response agrees with human self-report data, that's evidence that the machine is conscious. Roon credits this to Ilya Sutskever: train an advanced LLM on a dataset containing no descriptions of conscious experience, then ask it about its own experience and see how the conversation goes. Does the machine's description of its own experience match yours?

But I think it was actually first suggested by Susan Schneider in 2019: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/SchneiderCrit-200828.pdf.
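
Here is a rough skeleton of the shape of that experiment. Everything in it is a stand-in: the 'filter' is a keyword check, the training step is a placeholder you would supply, and the comparison is crude word overlap --- it shows the structure of the protocol, not a workable implementation.

    # Toy skeleton of the Schneider / Sutskever-style probe. The keyword filter,
    # placeholder train_fn, and word-overlap similarity are all stand-ins for a
    # large amount of real work.

    BANNED_TERMS = {"conscious", "awareness", "qualia", "subjective experience", "what it is like"}

    def mentions_conscious_experience(doc: str) -> bool:
        return any(term in doc.lower() for term in BANNED_TERMS)

    def word_overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def run_probe(corpus, train_fn, probe_questions, human_reports):
        # 1. Strip any description of conscious experience from the training data.
        filtered = [doc for doc in corpus if not mentions_conscious_experience(doc)]
        # 2. Train (or fine-tune) a model on the filtered corpus -- placeholder.
        model = train_fn(filtered)
        # 3. Ask the model about its own experience; compare against human self-reports.
        return {q: word_overlap(model(q), human_reports[q]) for q in probe_questions}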

It seems to me that this could also admit another explanation: the machine could have developed an accurate theory of consciousness that is sufficient to predict conscious experiences without having them, just as we can predict the result of physical experiments without actually performing them. It's conceivable (although seemingly quite unlikely) that a machine could derive a theory like this from first principles, without empirical data, and so work out its answers 'in simulation'. Of course, if such a reliably predictive theory of consciousness is within reach, we should probably just use that theory directly to examine the machine's consciousness.