thoughts are actions
Created: June 27, 2021
Modified: June 27, 2021

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
  • The agent model of intelligence imposes a sharp distinction between the agent and its environment: the agent 'chooses' actions, which then affect the state of the world.
  • But how can an agent choose actions? This requires computation, and which computations to do is itself a choice. Choosing what to think about may not directly change the state of the 'outside world', but it does change the state of the computational system. Should we then back up and consider the 'computational system' to be part of the environment, and the 'agent' itself to be a much-reduced, computationally limited core? This kind of agent is like the finite state machine at the heart of a Turing machine: it drives the system and determines its behavior, but it can't 'deliberate' on its own; it's just a set of reflexes for doing computations (see the sketch after this list).
  • Do we need multiple levels of meta-reasoning? Can't we have thoughts about thoughts about thoughts? The Turing machine analogy implies that one level is 'enough', in some sense. We can model any computational system as an inner reflex agent that exerts direct control over computational actions. Of course, a practical architecture might still involve multiple layers.
  • Dan Dennett points out that taking the intentional stance is a choice. We can choose to model a system with an agent-environment boundary at the walls of its body, or inside its thoughts, or not at all. As psychedelics teach us, our egos and identities are only models, and not always helpful ones.
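A minimal sketch of the reflex-agent framing (my own illustrative Python, not taken from any source above): the entire 'agent' is a fixed transition table of reflexes, and its only actions are computational ones, reading and writing a tape. The names RULES and run are made up for this toy example, which increments a binary number.

```python
from collections import defaultdict

# Reflex table for binary increment: (state, symbol) -> (write, move, next_state).
# This table is the whole "inner agent"; it cannot deliberate, only react.
RULES = {
    ("right", "0"): ("0", +1, "right"),  # scan right to the end of the input
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),  # hit the blank, turn around, add 1
    ("carry", "1"): ("0", -1, "carry"),  # 1 + 1 = 0, carry keeps moving left
    ("carry", "0"): ("1", -1, "done"),   # absorb the carry and halt
    ("carry", "_"): ("1", -1, "done"),   # carried past the leftmost bit
}

def run(bits: str) -> str:
    """Drive the tape with the reflex table until no rule applies."""
    tape = defaultdict(lambda: "_", enumerate(bits))  # sparse tape, blank = "_"
    head, state = 0, "right"
    while (state, tape[head]) in RULES:
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write  # the only way the "agent" affects its world
        head += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

print(run("1011"))  # 1011 (11) + 1 -> 1100 (12)
```

The point of the toy: nothing in RULES deliberates; 'choosing what to think about' is just the next table lookup, yet the loop as a whole carries out a computation.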