probabilities hide detail: Nonlinear Function
Created: May 27, 2021
Modified: February 22, 2022

probabilities hide detail

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.
  • Matt Levine explains how a financier might react to losing a billion dollars:
    1. Sure, sure, the risks didn’t work out, but you probably have a good story about why it seemed like a good idea ex ante.
    2. You have probably learned from your mistakes and won’t do it again. (You probably also, now, have a good story about why it wasn’t a good idea ex ante.)
    3. Or not, maybe you are like “that was the correct call, the odds were in my favor, the dice rolled the wrong way but I’d take that bet 100 more times any day.” I feel like that attitude — which in the abstract might be correct — would make it harder for you to get a new job, but what do I know, somebody will appreciate it.
  • What does it mean for the odds to be in your favor? Who decides what the 'odds' in a situation are?
    • In some stylized circumstances, like rolling a die, people generally agree on 'objective' odds.
    • But in general the odds we give are subjective; they are properties of our models, not of the world itself. (There may be 'true' odds, defined by quantum physics, but we are not calculating those.) If you 'get unlucky', that doesn't prove your model is significantly wrong, but it is evidence against it.
    • Even a calibrated model only defines correct odds conditioned on the details you've chosen to model. If all you know is that a fair die is being rolled, then a uniform distribution is a calibrated model. But if billions of dollars were riding on correctly predicting the outcome, you might be willing to invest in some pretty sophisticated sensors and physics modeling to incorporate more detail of the initial conditions.
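The "evidence against it" point above can be made quantitative with a one-step Bayesian update. This is my own toy sketch, not anything from Levine; the two models and all the numbers are hypothetical:

```python
# A loss doesn't prove your model wrong, but it is Bayesian evidence
# against it. Compare two hypothetical models of the financier's bet.
def posterior(prior_a, p_loss_a, p_loss_b):
    """Posterior probability of model A after observing one loss."""
    prior_b = 1 - prior_a
    joint_a = prior_a * p_loss_a
    joint_b = prior_b * p_loss_b
    return joint_a / (joint_a + joint_b)

# Model A: "the odds were in my favor" (10% chance of losing).
# Model B: "the bet was bad" (60% chance of losing).
# Start 80% confident in model A, then observe the billion-dollar loss.
print(posterior(0.8, 0.1, 0.6))  # posterior on A is about 0.4
```

One unlucky outcome cuts confidence in the favorable model in half here, but doesn't eliminate it; "I'd take that bet 100 more times" could still be right.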
  • How can a model be wrong?
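One answer, following the die example above: a model can be calibrated and still coarse. A toy simulation (my own construction, with the 'physics' made trivially deterministic for illustration):

```python
import random

# A 'die' whose outcome is fully determined by an initial condition
# that the coarse model ignores. Marginally each face is equally
# likely, so the uniform model is calibrated -- but a model that
# sees the initial condition predicts perfectly.
random.seed(0)
rolls = []
for _ in range(60000):
    initial_condition = random.randrange(6)  # hidden physical detail
    outcome = initial_condition              # deterministic given the detail
    rolls.append((initial_condition, outcome))

# Coarse model: uniform over faces. Its stated probability (1/6)
# matches the observed frequency of any given face.
freq = sum(1 for _, o in rolls if o == 3) / len(rolls)
print(abs(freq - 1/6) < 0.01)  # True: the uniform model is calibrated

# Fine model: condition on the initial condition.
accuracy = sum(1 for ic, o in rolls if ic == o) / len(rolls)
print(accuracy)  # 1.0: the detailed model is strictly better
```

Both models are 'correct' by their own lights; they just condition on different things.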
  • Assuming we restrict ourselves to optimizing money, the financier's attitude could be correct, if:
    • Their model is conditioned on relevant information, and
    • Their model properly incorporates most of that information (calibration), and
    • The cost of improving the model along either of the previous dimensions exceeds the expected value from the improvement. Our expectations regarding the costs and the prospects for improvement are themselves subjective, model-based quantities, so this is a bit tricky.
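The third condition can be sketched as a decision rule. All the numbers here are hypothetical, and, per the bullet above, the inputs are themselves model-based estimates:

```python
# Improve the model only if the expected gain exceeds the cost.
def worth_improving(stakes, p_win_now, p_win_improved, cost):
    """Is a model improvement worth buying, in expectation?"""
    expected_gain = (p_win_improved - p_win_now) * stakes
    return expected_gain > cost

# Fair-die bet with $1B riding on it: sensors and physics modeling
# that lift your hit rate from 1/6 to 1/2 beat a $10M price tag.
print(worth_improving(1e9, 1/6, 1/2, 10e6))  # True
# The same lab is not worth it for a $1,000 bet.
print(worth_improving(1e3, 1/6, 1/2, 10e6))  # False
```

The tricky part the bullet flags is that `p_win_improved` and `cost` come from yet another model, whose own improvement has the same cost/benefit question behind it.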