wrong models in AI
Created: February 13, 2022
Modified: February 13, 2022

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

The models we use in AI are wrong (though maybe still useful). How?

Agency

The agent model assumes a separation of the agent from the world. Where do we put the boundary? We can't just put it at the borders of the body, since humans (and presumably human-like AIs) also make choices about what to think about, meaning that we also have some agency over our thoughts.

We can try to let agency 'recede' to deeper levels within the system. Under a dual-process model of cognition, we could let system 1 be the 'agent' that directs both bodily motions and system-2 'computational actions', like a Turing machine writing to its tape. This is closer to reality. But it's still 'wrong', since the model still posits a part of the system with non-physical free will, which doesn't exist in the real world. Even a low-level agent can't understand that taking drugs will change its decision-making, because the agent model places the decision procedure outside the world the agent acts on: the agent assumes control over its actions by definition, so it can't represent a physical intervention on its own choosing. At the same time, by letting agency recede to the very lowest levels, we've given up the ability to reason about high-level actions.
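
As a toy illustration of the 'computational actions' idea (a minimal sketch; nothing here is from the literature, and all names are hypothetical), here is an agent whose action space includes internal think-actions alongside external bodily ones:

```python
from dataclasses import dataclass, field

@dataclass
class DualProcessAgent:
    # System-2 working memory: the 'tape' that computational actions write to.
    scratchpad: list = field(default_factory=list)

    def act(self, observation):
        # System 1 is the only 'agent' here: it chooses between an external
        # (bodily) action and an internal (computational) action.
        if observation not in self.scratchpad:
            return self.think(observation)   # internal action
        return ("MOVE", observation)         # external action

    def think(self, observation):
        # The computational action: writing to the tape is itself a choice,
        # just as a Turing machine chooses what symbol to write.
        self.scratchpad.append(observation)
        return ("THINK", observation)
```

The point of the sketch is where the agency sits: only system 1 chooses, and 'thinking' is just another action it can take, which is exactly what makes interventions on the chooser itself (like drugs) inexpressible in the model.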

Worse, the traditional agent model assumes a fixed utility function, but human goals change over time; our utilities are learned, to some extent. However our utility learning works, you could model it as instrumental to the final evolutionary goal of reproduction, driven by gene-level 'agents', but now we're pretty far afield from the standard agency model.
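
To make the contrast concrete, here's a minimal sketch (hypothetical names and update rule, not a claim about how human utility learning actually works) of a utility function that drifts with experience, which the fixed-utility model can't express:

```python
class LearnedUtility:
    """A utility function that is itself updated by experience."""

    def __init__(self):
        # The textbook agent would freeze these weights for its whole lifetime.
        self.weights = {"food": 1.0, "status": 0.5}

    def __call__(self, outcome):
        return self.weights.get(outcome, 0.0)

    def update(self, outcome, reinforcement, lr=0.1):
        # Goals drift with experience, so a plan that maximizes utility today
        # may not maximize it tomorrow; a fixed-U agent model can't say this.
        w = self.weights.get(outcome, 0.0)
        self.weights[outcome] = w + lr * (reinforcement - w)
```

Any planner built on top of such a utility faces a question the standard model never asks: should it optimize for its current weights, or for the weights it predicts it will have?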

When we as humans reason about the world, we sometimes choose to model other humans as rational agents, sometimes as rational thinkers, sometimes as part of a larger rational corporation, and sometimes as not rational at all, each with different goals in different contexts. We use many models, or in this case the same model at many scales, because no single instantiation of the agency model explains human behavior in a tractable way.

Proposals: