Modified: December 01, 2023
probabilistic programming is not AI research
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Many probabilistic programming researchers frame their work as part of the broader problem of artificial intelligence. Artificial agents will need to learn models of the world, maintain uncertainty over their beliefs, and use both to make decisions.
There are two reasons why probabilistic programming will never be part of the ultimate story we work out for general AI:
- probabilistic: general AI will not use explicit models of uncertainty.
- programming: general AI will not rely on explicit programs in any sense we would recognize.
Why won't AI use explicit models of uncertainty? Because the abstraction boundary of approximate Bayesian inference doesn't respect the fact that computation matters: the model describes an ideal posterior, and how much work to spend approximating it is treated as an implementation detail on the other side of the boundary. For an agent, computation is a scarce resource that belongs inside the decision problem, not hidden behind an abstraction.
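To see the boundary concretely, here is a minimal hand-rolled sketch, not the API of any real probabilistic programming system; the coin-flip model and the `metropolis_hastings` routine are illustrative. The model is a pure description of a distribution; every computational decision, like `n_steps`, lives on the inference side, invisible to the model:

```python
import math
import random

# Illustrative model: a coin with unknown bias theta.
# The model is just two log-density functions; it has no
# vocabulary for how much computation inference may spend.

def log_prior(theta):
    return 0.0 if 0.0 < theta < 1.0 else -math.inf  # Uniform(0, 1)

def log_likelihood(theta, data):
    heads = sum(data)
    tails = len(data) - heads
    return heads * math.log(theta) + tails * math.log(1.0 - theta)

def metropolis_hastings(data, n_steps=10_000, step=0.1):
    """Approximate posterior sampling for theta.

    The computation/accuracy trade-off (n_steps, step size)
    is decided here, on the inference side of the boundary.
    """
    theta, samples = 0.5, []
    lp = log_prior(theta) + log_likelihood(theta, data)
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        lp_new = log_prior(proposal)
        if lp_new > -math.inf:
            lp_new += log_likelihood(proposal, data)
        if random.random() < math.exp(min(0.0, lp_new - lp)):
            theta, lp = proposal, lp_new
        samples.append(theta)
    return samples

data = [1, 1, 0, 1, 1, 0, 1, 1]    # 6 heads, 2 tails
samples = metropolis_hastings(data)
print(sum(samples) / len(samples))  # approx. posterior mean, (6+1)/(8+2) = 0.7
```

Nothing in `log_prior` or `log_likelihood` can express "spend less effort when the stakes are low", which is exactly the kind of trade-off an agent has to make constantly.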
Why won't AI use explicit programs? Most people instinctively 'get it' that human behavior doesn't follow an explicit script. Nothing in the brain is that clean. Our concepts have fuzzy boundaries. We're quite bad at logical reasoning compared to computers. And if AI could be captured in simple programs that people can understand, we would have written those programs already.
Probabilistic programming is about encoding domain-specific inductive bias (the first sketch below makes this concrete). But general AI is, by definition, domain-general. Whatever inductive biases it uses will be extremely generic.
- this could just be the assumption that the world has tractably computable structure, as in the free lunch theorem. could we then frame learning as probabilistic program induction? this fails because AI won't be about programs (as argued elsewhere in this note), and also because probabilistic program induction is intractable (the second sketch below shows how fast the search space blows up).
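As an illustration of what "encoding domain-specific inductive bias" means in practice, here is a minimal generative program. The scenario and names (`sample_bug_count`, the exponential prior, the Poisson likelihood) are all made up for the example; the point is that every structural commitment is supplied by the programmer, not learned:

```python
import math
import random

def sample_bug_count():
    """Generative model of daily bug reports for one project.

    The inductive bias is everything we hard-code: bug counts
    are Poisson, the rate is constant within a day, and our
    prior over rates is exponential. None of this is learned.
    """
    rate = random.expovariate(0.2)  # prior: mean rate of 5 bugs/day
    # Sample from Poisson(rate) via Knuth's inversion method.
    threshold = math.exp(-rate)
    count, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return count
        count += 1
```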
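And here is a hypothetical back-of-the-envelope count suggesting why brute-force program induction is intractable, under made-up assumptions about the language (one variable, two constants, three binary operators). The space of candidate programs grows roughly doubly exponentially with depth, before inference over that space even begins:

```python
LEAVES = 3  # x, 0, 1
OPS = 3     # three binary operators, e.g. +, *, max

def count_programs(depth):
    """Number of expression trees of at most `depth` levels."""
    if depth == 0:
        return LEAVES
    smaller = count_programs(depth - 1)
    # A tree is either a leaf, or an operator with two subtrees.
    return LEAVES + OPS * smaller * smaller

for d in range(6):
    print(d, count_programs(d))
# depth 0: 3, depth 1: 30, depth 2: 2703, depth 3: ~2.2e7, ...
```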
Having said all of this, there is a sense in which probabilistic programming is AI research. Generally intelligent systems will be tool users, just like we are. If they need to sort a billion integers, they'll call a sorting algorithm. And if they need to reason rigorously and interpretably about a system for which they can write a reasonable model, they'll use probabilistic programming tools, just as humans would. An expanding library of computational tools gives new capabilities to humans and AIs alike. So essentially all computational progress --- in languages, libraries, algorithms, etc. --- can be seen as enabling future AI systems.