dual-process cognition: Nonlinear Function
Created: July 08, 2020
Modified: July 08, 2020


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Elephant and rider

Asking "what should I value?" is asking the rider. it demands solving impossible moral questions.

Asking "what do I value" is asking the elephant. or at least it is asking about* the elephant. it demands only observation, and self-knowledge.

Of course the 'should' is important: the rider should give some direction. But it's equally important that the rider understands what the elephant wants, since most of the time the elephant will govern.

As aligned AI

This post seems insightful as hell: http://squirrelinhell.blogspot.com/2017/04/the-ai-alignment-problem-has-already.html

Imagine we really have two agents inside our own heads: a reflex agent (system 1) and a conscious/planning agent (system 2). The reflex agent came first and originally developed system 2 as a 'tool AI'. System 2 is in some sense 'aligned' with system 1: if it weren't, system 1 would never allow it to be called upon to act. So system 2 is incentivized to understand and further the desires of system 1. However, at some point we start to identify with system 2, and system 1 is trained to align itself with system 2 (this is the AlphaGo paradigm, in which we train neural nets to mimic MCTS). It's not clear that either is really 'primary'. In a well-adjusted individual there are multiple internal 'agents', and we hope they are all constantly aligning themselves with each other.
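
The AlphaGo analogy is concrete enough to sketch. Below is a minimal toy of that training loop: a slow exhaustive-lookahead planner stands in for system 2 (the post's MCTS), and a fast tabular softmax policy stands in for system 1, trained by cross-entropy to imitate the planner's action distribution. The chain environment, lookahead depth, temperature, and learning rate are all my own illustrative choices, not anything from the linked post.

```python
import numpy as np

N_STATES = 6           # states 0..5 on a chain; sitting at state 5 pays off
ACTIONS = [-1, +1]     # move left or right
GAMMA = 0.9

def step(state, action):
    """Deterministic chain dynamics; being at the right end yields reward 1."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def plan(state, depth=N_STATES):
    """System 2: slow, explicit lookahead that returns a value per action."""
    def value(s, d):
        if d == 0:
            return 0.0
        return max(r + GAMMA * value(s2, d - 1)
                   for s2, r in (step(s, a) for a in ACTIONS))
    return np.array([r + GAMMA * value(s2, depth - 1)
                     for s2, r in (step(state, a) for a in ACTIONS)])

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# System 1: a fast tabular policy, one logit vector per state.
logits = np.zeros((N_STATES, len(ACTIONS)))

rng = np.random.default_rng(0)
for _ in range(500):
    s = rng.integers(N_STATES - 1)           # any non-terminal state
    target = softmax(plan(s) / 0.1)          # sharpened planner distribution
    probs = softmax(logits[s])
    logits[s] -= 0.5 * (probs - target)      # cross-entropy gradient step

# The trained reflex policy now picks the planner's action with no lookahead.
for s in range(N_STATES - 1):
    assert np.argmax(logits[s]) == np.argmax(plan(s))
print("reflex policy matches the planner on every state")
```

The point of the sketch is the direction of the alignment arrow: the planner never acts at test time. Its deliberation exists only to generate training targets, after which the cheap reflex policy carries the behavior, which is exactly the "system 1 is trained to align itself with system 2" step above.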