simulator AI: Nonlinear Function
Created: February 16, 2023
Modified: February 16, 2023


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.


It seems pretty clear that the intelligence emerging from language models is not explicitly agentic, with well-defined reward functions and optimization procedures. Instead, it is the result of behavioral cloning: the models are trained to simulate human intelligence. But, like a decision transformer, this simulation can be prompted to recover a wide variety of behaviors seen during training, and to exhibit behaviors that were not directly present in the training set.
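
To make the conditioning analogy concrete, here is a minimal sketch (assuming the Hugging Face transformers library and GPT-2 as a stand-in model; the specific prompts are just illustrative) of how one set of frozen weights, conditioned on different prompts, yields qualitatively different behaviors, much as a decision transformer conditioned on different returns yields different policies.

```python
# Minimal sketch: one set of frozen weights, many behaviors via conditioning.
# Assumes the Hugging Face `transformers` library and GPT-2 as a stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    # Each prompt conditions the same model toward a different kind of behavior.
    "Q: What is the capital of France?\nA:",
    "Once upon a time, in a kingdom by the sea,",
    "def fibonacci(n):",
]

for prompt in prompts:
    # The prompt plays a role analogous to the return-to-go token
    # in a decision transformer: it selects which behavior to simulate.
    out = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.95)
    print(f"--- prompt: {prompt!r}")
    print(out[0]["generated_text"])
```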

GPT instantiates simulacra of characters with beliefs and goals, but none of these simulacra are the algorithm itself. They form a virtual procession of different instantiations as the algorithm is fed different prompts, supplanting one surface personage with another. Ultimately, the computation itself is more like a disembodied dynamical law that moves in a pattern broadly encompassing the kinds of processes found in its training data than like a cogito meditating from within a single mind that aims for a particular outcome.