Created: May 22, 2021
Modified: May 22, 2021
glimpses of AI
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

- An intelligent agent should work to understand the world. This understanding takes the form of a set of relevant abstractions, a dynamics or predictive model of the abstract representations, and a causal model of the effects of abstract actions (a toy sketch of this decomposition appears after this list).
- These abstractions may be seen as many models, and together they at least implicitly define a product-of-experts model of reality (see the product-of-experts sketch after this list).
- The abstractions are not fixed; they constantly evolve. And they are not disjoint; there are webs of connections.
- The abstractions are not purely compressive but can be driven by reward maximization (for some reward that may be externally or internally specified).
- The agent should, in particular, be intelligent about learning. It should be curious, understand what it's uncertain about, and be able to ask questions, do research, test hypotheses, and record what it learns for its future self in a way that doesn't suffer from catastrophic forgetting (one proxy for this kind of uncertainty is sketched after this list).
- It should have an objective function that leads it towards trying to optimize global utility, while also being very uncertain about what this means (at least, a civilizational-level AI needs this; a household assistant might have less grandiose goals, but it's still trying to learn about and implement human preferences). A toy version of acting under utility uncertainty is sketched below.
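
As a toy reading of the first note above, here is a minimal sketch of the three-part decomposition: an encoder that produces abstractions, and an action-conditioned latent dynamics model covering both prediction and the causal effects of abstract actions. The linear maps, dimensions, and names (`encode`, `predict_next`) are hypothetical choices for illustration, not a reference to any real system.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, ACTION_DIM = 16, 4, 2  # hypothetical sizes

# Abstraction: compress raw observations into a compact latent state.
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

def encode(obs):
    """A (here linear, untrained) abstraction of the observation."""
    return W_enc @ obs

# Dynamics / causal model: predict the next latent state from the
# current latent state and an abstract action.
W_dyn = rng.normal(size=(LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
W_act = rng.normal(size=(LATENT_DIM, ACTION_DIM)) / np.sqrt(ACTION_DIM)

def predict_next(latent, action):
    """Action-conditioned prediction: the effect of an abstract action."""
    return W_dyn @ latent + W_act @ action

obs = rng.normal(size=OBS_DIM)
action = rng.normal(size=ACTION_DIM)
z = encode(obs)
z_next = predict_next(z, action)
print("latent:", z)
print("predicted next latent:", z_next)
```

In a real agent the maps would be learned and nonlinear, but the interface is the point: prediction and hypothesis-testing happen in the latent space, not in raw observations.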
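
On the product-of-experts point: combining many models means multiplying their probabilities pointwise and renormalizing, p(x) ∝ Π_i p_i(x), so a state of the world is plausible only if every expert finds it plausible. A toy numeric sketch (the expert distributions are made up):

```python
import numpy as np

# Two "expert" distributions over a small discrete world of 5 states.
expert_a = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
expert_b = np.array([0.1, 0.5, 0.2, 0.1, 0.1])

# Product of experts: multiply pointwise, then renormalize.
# A state survives only if no expert rules it out.
unnormalized = expert_a * expert_b
poe = unnormalized / unnormalized.sum()
print(poe)  # sharper than either expert alone
```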
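
One common proxy for "understanding what it's uncertain about" is ensemble disagreement, as in disagreement-based curiosity: train several predictive models and treat the variance of their predictions as an epistemic-uncertainty signal. A minimal sketch, with untrained linear models standing in for a trained ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM, N_MODELS = 4, 5  # hypothetical sizes

# An ensemble of (here untrained, linear) dynamics models.
ensemble = [rng.normal(size=(LATENT_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
            for _ in range(N_MODELS)]

def epistemic_uncertainty(latent):
    """Disagreement across the ensemble: high variance means the models
    have not converged here, i.e. the agent doesn't know this region."""
    preds = np.stack([W @ latent for W in ensemble])
    return preds.var(axis=0).mean()

z = rng.normal(size=LATENT_DIM)
curiosity_bonus = epistemic_uncertainty(z)  # could be added to the reward
print(curiosity_bonus)
```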
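
Finally, a toy version of being "very uncertain about what global utility means": keep a distribution over candidate utility functions rather than committing to one, and rank actions by expected utility under that belief. All numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N_HYPOTHESES, N_ACTIONS = 3, 4  # hypothetical sizes

# Candidate utility functions, each scoring every action, plus the
# agent's current belief over which one is right. The belief stays a
# distribution; it never collapses to certainty.
candidate_utilities = rng.normal(size=(N_HYPOTHESES, N_ACTIONS))
belief = np.array([0.5, 0.3, 0.2])

# Choose the action with the highest expected utility under the belief.
expected_utility = belief @ candidate_utilities
best_action = int(np.argmax(expected_utility))

# Observing human feedback would update `belief` (e.g. via Bayes rule),
# which is one reading of "learn about and implement human preferences".
print(expected_utility, best_action)
```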