Modified: September 06, 2021
compression
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Links to this note
AI reflections master
This page is a general jumping-off point for organizing my thoughts about the [ AI research landscape ], where the field is, where it is…
predictive agent
Consider an agent that is purely concerned with [ predictive processing ]: finding the optimal [ compression ], or equivalently the optimal…
worldly objective
This may be a central point of confusion: how do we define AI systems that have preferences about the real world , so that their goals and…
value learning
Notes on the Alignment Forum's Value Learning sequence curated by Rohin Shah. ambitious value learning: the idea of learning 'the human…
minimum description length
Short descriptions of things, when they exist, must capture some kind of structure. The principle of [ Occam's razor ] posits that we should…
no free lunch theorem
'[ no free lunch theorem ]' arguments are misleading because they consider the space of all possible functions. In fact, we usually care…
Occam generalization bound
Here's a formal argument for [ Occam's razor ], adapted from Theorem 2.2 of Kearns and Vazirani's "An Introduction to Computational Learning…
A Universal Law of Robustness via Isoperimetry
Link: A Universal Law of Robustness via Isoperimetry | OpenReview This paper purports to explain (and quantify) the observed fact that…
representation
In modern ML, representation learning is the art of trying to find useful abstractions, embodied as encoding networks. We can learn…
abstraction
Abstraction is lossy [ compression ]. A good abstraction throws away everything not relevant to a particular problem, while preserving a…
default mode network
A set of connected brain regions that are active when you're 'at rest', not focused on the external world. This includes mental states such…
large models
If you believe that neural nets basically just memorize the training data, then training larger and larger models is hopeless. The…
it seemed profound at the time
Notes copied from Google Docs https://docs.google.com/document/d/1G7Gxo-A3gQrlUx3G4BYmH-GjUWbB7eTOJZcuAs2ii0A/edit most of these things…
growing up means becoming wrong
(related: [ communication is processing ]) A big part of growing up is communicating to your future self. Your future self isn't going to…