Modified: February 06, 2023
all models are wrong
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

The fundamental observation is that the map is not the territory.
BUT---there is a real and important distinction between models that are just incomplete, and those that are fundamentally wrong. A map that replaces an actual mountain with a tiny picture of a mountain is incomplete, but it's not wrong. It's an abstraction, but a faithful abstraction. But a map that puts a valley in place of an actual mountain is actually wrong in a more fundamental way.
How can we quantify the wrongness or rightness of a model?
- calibration: does the territory actually have features corresponding to the model, with whatever level of precision the model claims? We might interpret a map's contour lines as defining a probability density for the heights of individual points: most naively, any point between the $x$ and $x + 100$ contour lines has height distributed according to $\text{Uniform}[x, x + 100]$, though real human interpretation is more nuanced than this. (A sketch of this check follows this list.)
- A model's state abstraction also induces an abstract dynamics and causality. For a given state abstraction, there is a correct dynamics; any other dynamics are incorrect. This is related to calibration, since the abstract dynamics are probabilistic in general, and getting the dynamics right means knowing the correct probability distributions. (The second sketch below computes these induced dynamics for a toy chain.)
- Ultimately we want to use the model for something. "All models are wrong, but some are useful". A model's value is determined by its usefulness; other notions of correctness are just proxies.
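To make the calibration point concrete, here's a minimal sketch (Python, with hypothetical terrain data and the naive uniform reading above) of checking a map's claimed coverage against the territory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical territory: true heights (in meters) at 10,000 survey points.
true_heights = rng.uniform(0, 1000, size=10_000)

# A perfectly drawn map reports only the 100m contour band of each point:
# a point in band k is claimed to lie in [100*k, 100*(k+1)).
lo = 100 * np.floor(true_heights / 100)

# Naive calibrated reading: height ~ Uniform[lo, lo + 100]. If that claim
# is calibrated, the claimed CDF evaluated at the true height (a PIT value)
# is uniform on [0, 1], so empirical coverage matches claimed coverage.
pit = (true_heights - lo) / 100
for q in (0.1, 0.5, 0.9):
    print(f"claimed coverage {q:.0%}, empirical {np.mean(pit <= q):.3f}")

# A wrong map: the same contours, drawn 50m too low everywhere.
pit_wrong = np.clip((true_heights - (lo - 50)) / 100, 0, 1)
print(f"wrong map, claimed 50%, empirical {np.mean(pit_wrong <= 0.5):.3f}")
```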
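And a sketch of the induced abstract dynamics, using a made-up four-state chain and a two-state abstraction: the "correct" abstract transition matrix is the pushforward of the ground dynamics, with ground states weighted here by the chain's stationary distribution (one reasonable choice of occupancy weights):

```python
import numpy as np

# Ground dynamics: a Markov chain over 4 states, row-stochastic matrix T.
T = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.2, 0.5, 0.3, 0.0],
    [0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.4, 0.6],
])

# Abstraction phi: ground states {0,1} -> abstract 0, {2,3} -> abstract 1.
phi = np.array([0, 0, 1, 1])
n_abstract = 2

# Occupancy weights over ground states: the stationary distribution,
# i.e. the leading left eigenvector of T, normalized to sum to 1.
evals, evecs = np.linalg.eig(T.T)
w = np.real(evecs[:, np.argmax(np.real(evals))])
w = w / w.sum()

# Pushforward: T_abs[a, b] = P(next abstract state = b | abstract state = a)
#            = sum_{s in a} P(s | a) * sum_{s' in b} T[s, s'].
T_abs = np.zeros((n_abstract, n_abstract))
for a in range(n_abstract):
    in_a = phi == a
    p_s_given_a = w[in_a] / w[in_a].sum()  # conditional occupancy within a
    for b in range(n_abstract):
        T_abs[a, b] = p_s_given_a @ T[np.ix_(in_a, phi == b)].sum(axis=1)

print(np.round(T_abs, 3))  # rows sum to 1: the induced abstract dynamics
```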
We might say that a predictive model $p(y \mid x)$ can be wrong in four ways: we can have the wrong $y$ (because predicting it is either difficult or useless), the wrong $x$, or the wrong $p(y \mid x)$. (A sketch of the first two of these follows the list below.)
- 'the wrong $p(y \mid x)$': the model can be uncalibrated. That is, the probabilities may not match those of the true data-generating distribution.
- For example: your map shows mountains, but they are in the wrong places.
- 'the wrong $x$': the model can condition on the wrong thing. $x$ may be just an incomplete and noisy representation of some other quantities $z$, which really determine $y$. The extreme case of this is modeling the marginal distribution $p(y)$ (equivalently, taking $x = \emptyset$).
- For example: your map was made by someone who has never actually explored the territory, and it just labels the whole region with 'there be dragons'.
- 'the wrong $y$': it can model the wrong thing. $y$ may not actually be useful for making decisions. That is, the optimal policy in the POMDP induced over abstraction states by pushing forward the ground dynamics may accrue less value over time than the optimal policy in the POMDP induced over some other quantities $y'$.
- For example: if you are trying to plan a military campaign, but your map contains nothing but the locations of every Chuck E. Cheese in the area, then this map is not going to be much use.
- Another way to model the wrong thing is if $y$ is difficult to predict. Perhaps our $x$ is more predictive of some other quantity $y'$, and this quantity is just as useful as $y$ for making decisions. Then we should prefer to model $y'$ instead.
- For example: if you know that hidden treasure exists, but your surveys didn't find any signs of it, then a map showing the treasure probability at each location given your survey information is not going to be very useful, because those probabilities are not based on meaningful evidence.
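Here's a minimal sketch of the first two failure modes, under made-up Gaussian assumptions ($z$ determines $y$; $x$ is only a noisy view of $z$): we compare the average log-loss of the correct conditional $p(y \mid x)$, an overconfident (miscalibrated) version of it, and the marginal $p(y)$. The wrong-$y$ failure is about downstream decision value, which a pure prediction score like this can't capture.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Made-up generative process: z determines y; x is only a noisy view of z.
s_x, s_y = 1.0, 0.5
z = rng.normal(0.0, 1.0, n)
x = z + rng.normal(0.0, s_x, n)
y = z + rng.normal(0.0, s_y, n)

# The right conditional p(y|x): conjugate-Gaussian posterior over z
# given x, pushed through to a predictive distribution over y.
mu = x / (1 + s_x**2)                         # E[z | x]
sd = np.sqrt(s_x**2 / (1 + s_x**2) + s_y**2)  # sd of y | x

models = {
    "calibrated p(y|x)":    (mu, sd),                    # right everything
    "overconfident p(y|x)": (mu, 0.3 * sd),              # wrong p(y|x)
    "marginal p(y)":        (0.0, np.sqrt(1 + s_y**2)),  # wrong (empty) x
}
for name, (m, s) in models.items():
    print(f"{name:>22}: avg log-loss = {-norm.logpdf(y, m, s).mean():.3f}")
# Expect calibrated < marginal < overconfident: both miscalibration and
# conditioning on less information cost you predictive value.
```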
Each of these is a matter of degree. Models can be closer to or further from being calibrated, and they can condition on or be predictive of concepts (viewed as thought vectors) that are angled wildly or just slightly away from the optimum.
Since there are costs to improving a model, it can also be right to use a wrong model (and indeed it typically is). Renting fancy sensors to predict a die roll isn't beneficial if the sensors cost more than you'd win on the roll itself. And a really high-resolution model---a 'map as big as the territory'---might be too unwieldy to use even if you could get it for free.
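The die-roll point is just a toy value-of-information calculation, with made-up numbers:

```python
# Made-up numbers: a bet pays $10 if you call a fair die roll correctly.
# A hypothetical perfect sensor reveals the outcome in advance for $12.
payout, sensor_cost = 10.0, 12.0

ev_blind = (1 / 6) * payout       # ~$1.67: guess with the uniform model
ev_sensor = payout - sensor_cost  # -$2.00: always right, but overpriced
print(f"blind: ${ev_blind:.2f}, with sensor: ${ev_sensor:.2f}")
# The better model isn't worth its cost; the "wrong" maximum-entropy
# model of the die is the right one to act on.
```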