neural nets don't just interpolate
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Sometimes you'll see people say that neural nets 'just' memorize and interpolate their training data.
No one denies that neural nets with enough capacity can memorize training data.
But if that were all, you'd be able to get state-of-the-art results from k-nearest-neighbor lookup on large perceptual datasets without using deep networks. Yet you can't. There's evidence that nearest-neighbor lookup can work well when using learned representations, but that just begs the question, since in that setup the representation learning is clearly doing important work.
As Yann LeCun and colleagues point out: Learning in High Dimension Always Amounts to Extrapolation. There is no 'interpolation' in high-dimensional spaces because test points almost never fall within the convex hull of the training set, so generalization at test time is, geometrically speaking, extrapolation.
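A quick way to get intuition for this is to check an even weaker condition than convex-hull membership: lying inside the axis-aligned bounding box of the training set (necessary for being in the hull). A minimal numpy sketch, using uniform random data and made-up sample sizes purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_inside_bounding_box(n_train=1000, dim=2, n_test=1000):
    """Fraction of test points inside the training set's bounding box.

    Being inside the axis-aligned bounding box is a *necessary*
    condition for being inside the convex hull, so this fraction
    upper-bounds the fraction of test points for which prediction
    could count as interpolation in the convex-hull sense.
    """
    X = rng.uniform(size=(n_train, dim))      # "training" points
    lo, hi = X.min(axis=0), X.max(axis=0)     # per-coordinate range
    T = rng.uniform(size=(n_test, dim))       # "test" points
    inside = np.all((T >= lo) & (T <= hi), axis=1)
    return inside.mean()

for d in (2, 10, 100, 1000):
    print(f"dim={d:5d}  fraction inside bounding box: "
          f"{frac_inside_bounding_box(dim=d):.3f}")
```

Each coordinate of a test point falls inside the training range with probability just under 1, but the probability that all d coordinates do decays exponentially in d, so by dim=1000 most test points are outside even the bounding box, let alone the convex hull.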