probabilistic program induction
Created: March 14, 2022
Modified: March 14, 2022


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Can we think of generative flow networks as a potentially tractable formulation of probabilistic program induction?!

Executing a line of a program is like an RL action: at every step, there are many possible 'next lines'. Generative flow networks would let us learn a policy that writes and executes the next line of a program on the fly, in such a way that the distribution of sampled traces matches a target distribution proportional to a reward R(x).
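For concreteness, here is roughly what that training loop could look like. This is a minimal sketch, assuming PyTorch and the trajectory-balance objective (Malkin et al., 2022); the three-instruction toy 'language', the reward, and the architecture are all invented for illustration, not a worked-out method.

```python
# Sketch: GFlowNet over programs, trained with trajectory balance.
# Because appending a line gives every program prefix a unique parent,
# the backward policy is trivial and trajectory balance reduces to
# (log Z + sum log P_F - log R(x))^2.
import torch
import torch.nn as nn

LINES = ["x += 1", "x += 2", "x *= 2"]  # hypothetical instruction set
EOS = len(LINES)                        # "end of program" action index
BOS = len(LINES)                        # start token for the policy input
MAX_LEN = 6

def execute(program):
    """Run a program (a list of LINES indices) starting from x = 0."""
    x = 0
    for op in program:
        if op == 0:
            x += 1
        elif op == 1:
            x += 2
        else:
            x *= 2
    return x

def log_reward(program, target=10):
    """Toy log R(x): peaked where the executed program's output hits target."""
    return -float(abs(execute(program) - target))

class Policy(nn.Module):
    """Autoregressive policy over the next program line (or EOS)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(len(LINES) + 1, hidden)  # lines + BOS
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(LINES) + 1)      # lines + EOS

    def forward(self, prefix):
        tokens = torch.tensor([[BOS] + prefix])
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h[0, -1])  # logits for the next action

policy = Policy()
log_z = nn.Parameter(torch.zeros(()))  # learnable log partition function
opt = torch.optim.Adam(list(policy.parameters()) + [log_z], lr=1e-3)

for step in range(2000):
    program, log_pf = [], torch.zeros(())
    while len(program) < MAX_LEN:
        dist = torch.distributions.Categorical(logits=policy(program))
        action = dist.sample()
        log_pf = log_pf + dist.log_prob(action)
        if action.item() == EOS:
            break
        program.append(action.item())
    # (Hitting MAX_LEN forces termination with probability 1, so nothing
    # more is added to log_pf in that case.)
    loss = (log_z + log_pf - log_reward(program)) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```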

But only to the same extent that regular RL lets us learn a policy to induce optimal programs for an arbitrary reward R(x), which is to say, there's a lot else that has to happen here. We do already induce programs pretty well with deep networks and supervised learning.

What about continuous programs? I'm thinking of a literal fluid flow in a high-dimensional space, where the coordinates represent a continuous 'computational state' and the actions are local changes to that state; the likelihood of those changes, of course, depends on the current state. Any given flow represents a 'continuous' probabilistic program, and a flow that is consistent with the rewards induces a sampler that matches the target distribution.
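To make "consistency with the rewards" slightly more concrete: in discrete GFlowNets, flow matching requires that the total flow into each interior state equals the total flow out, and that the flow into a terminal state equals its reward. A sketch of a continuous analogue, in notation invented here (not a worked-out theory), just replaces the sums over parents and children with integrals:

```latex
% F(s -> s') is an edge-flow density over pairs of continuous states;
% S is the continuous state space.
\int_{S} F(s \to s')\,\mathrm{d}s
  = \int_{S} F(s' \to s'')\,\mathrm{d}s''
  \quad \text{for every interior state } s',
\qquad
\int_{S} F(s \to x)\,\mathrm{d}s = R(x)
  \quad \text{for every terminal state } x.
```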

These are like normalizing flows, except that they don't necessarily normalize?
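To spell out the contrast (standard facts about both model families, nothing specific to this note): a normalizing flow is a proper density by construction via the change-of-variables formula, whereas a GFlowNet-style flow only has to be proportional to the reward, with the normalizer learned or left implicit.

```latex
% Normalizing flow: a bijection f pushes a base density p_0 forward,
% so p(x) integrates to 1 by construction:
p(x) = p_0\!\left(f^{-1}(x)\right)\left|\det J_{f^{-1}}(x)\right|
% GFlowNet: the sampler targets p(x) \propto R(x); the constant
% Z = \int R(x)\,\mathrm{d}x is learned (e.g., as log Z above) or
% implicit, and R itself need not integrate to 1:
p(x) = \frac{R(x)}{Z}
```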