Created: September 03, 2022
Modified: September 03, 2022

transformers with memory

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Incorporating explicit memory and retrieval seems pretty clearly like the next frontier in language modeling and AI more broadly. We have systems like GPT-3 that read essentially the entire internet and store whatever they learn in one giant associative memory trained via SGD. But these systems are imperfect in several ways:

  1. Short context windows: can't generate long structured text (novels, etc.) or hold long coherent conversations. (See long-term context in Transformers.)
  2. Hard to update: the only way to teach the model a new fact is to include it in the prompt (reducing to the previous problem of context length) or to do lots of expensive fine-tuning.
  3. Uninterpretable: we have no way to explicitly query the model's "beliefs" or to attribute an answer to a particular belief. (Indeed, with appropriate prompting these models can espouse conflicting beliefs; this is 'humanlike' and not necessarily a bug as such, but problematic in some cases.)
  4. Inefficient: we're storing and using all 175B parameters even for tasks/topics that implicate only a small subset of the training set.
  5. Unsatisfying distinction between training-time updates (SGD) and test-time 'updates' in the context window (not the ideal meta-level shape of machine learning).
  6. One-pass SGD training can fail to memorize the training set (a feature, but also a bug: if I show something to a computer, it should remember what it saw!).

One way to view this is as a code vs data distinction: current models are 'all code' (in the sense that a transformer is a giant differentiable program), but many information-processing systems have relatively small code 'cores' that interact with large databases.

Off-the-top-of-my-head impressions of what improved systems might look like:

  1. A system that can 'take notes' on a conversation and refer to these in future conversations. (Could potentially do this with current models and appropriate prompt engineering; a rough sketch follows this list.)
  2. A system with 'long-term context' that in fact extends to the entire training set (!). Maybe something hierarchical, similar to a Merkle tree but differentiable. This would require radically rethinking the training procedure. (A non-differentiable caricature is sketched below.)
  3. Systems with latent 'lookup' actions allowing them to augment their context with the result of some database (or similar) query. (See the third sketch below.)
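
A rough sketch of idea 1: a wrapper that accumulates model-written notes across conversations and prepends them to every prompt. The `complete` function is a hypothetical placeholder for any text-completion API; this is illustrative, not a working system.

```python
def complete(prompt: str) -> str:
    """Hypothetical placeholder for a language-model completion call."""
    raise NotImplementedError("plug in a real model here")

class NoteTakingChat:
    def __init__(self):
        self.notes = []    # distilled facts carried across conversations
        self.history = []  # turns of the current conversation

    def reply(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        prompt = (
            "Notes from prior conversations:\n" + "\n".join(self.notes)
            + "\n\nCurrent conversation:\n" + "\n".join(self.history)
            + "\nAssistant:"
        )
        answer = complete(prompt)
        self.history.append(f"Assistant: {answer}")
        return answer

    def end_conversation(self) -> None:
        # Ask the model itself what is worth remembering, then persist it.
        summary = complete(
            "Summarize the facts worth remembering from this conversation:\n"
            + "\n".join(self.history)
        )
        self.notes.append(summary)
        self.history = []
```

The interesting design question is what to keep: raw transcripts would blow up the prompt, so the summarization step is doing the real work.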
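
Idea 2 is the most speculative. As a non-differentiable caricature: build a tree of recursive summaries over the corpus, then answer a query by greedy descent on embedding similarity, touching O(log n) nodes rather than the whole corpus. `embed` and `summarize` are hypothetical placeholders; a differentiable version would presumably replace the greedy argmax with soft attention over children and train end to end.

```python
import math

def embed(text: str) -> list:
    """Hypothetical placeholder embedding function."""
    raise NotImplementedError

def summarize(texts: list) -> str:
    """Hypothetical placeholder summarizer (e.g. an LM call)."""
    raise NotImplementedError

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class Node:
    def __init__(self, text, children=()):
        self.text = text
        self.vec = embed(text)
        self.children = list(children)

def build_tree(docs: list, branching: int = 4) -> Node:
    # Leaves are documents; each internal node summarizes its subtree.
    nodes = [Node(d) for d in docs]
    while len(nodes) > 1:
        parents = []
        for i in range(0, len(nodes), branching):
            group = nodes[i:i + branching]
            parents.append(Node(summarize([n.text for n in group]), group))
        nodes = parents
    return nodes[0]

def lookup(root: Node, query: str) -> str:
    q = embed(query)
    node = root
    while node.children:  # greedy descent: follow the most similar child
        node = max(node.children, key=lambda c: cosine(q, c.vec))
    return node.text
```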
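
And a sketch of idea 3: the model emits an explicit `LOOKUP(...)` string as its latent action, and a harness intercepts it, runs the query, and splices the result back into the context before generation continues. Again, `complete` and `search` are hypothetical placeholders.

```python
import re

def complete(prompt: str) -> str:
    """Hypothetical placeholder for a language-model completion call."""
    raise NotImplementedError

def search(query: str) -> str:
    """Hypothetical placeholder retrieval call (database, search index, etc.)."""
    raise NotImplementedError

LOOKUP = re.compile(r"LOOKUP\((.*?)\)")

def answer(question: str, max_hops: int = 3) -> str:
    context = f"Question: {question}\n"
    for _ in range(max_hops):
        out = complete(context)
        m = LOOKUP.search(out)
        if m is None:
            return out  # model answered directly, no lookup needed
        # Execute the model's query and splice the result into the context.
        context += out[:m.end()] + f"\nResult: {search(m.group(1))}\n"
    return complete(context + "Final answer:")
```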

Papers of note (papers to read):