advice is hard to take: Nonlinear Function
Created: January 27, 2022
Modified: February 25, 2022

advice is hard to take

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

It's natural to fault someone who fails to follow advice, especially when the results are predictably bad. It can be helpful to notice and update on their misfortune. But we should also extend grace to people who screw up, because advice is fundamentally difficult to take. To attempt to follow any particular piece of advice is to add another constraint to what is usually an already over-constrained system.

Imagine that you're shaping a hunk of marble into a sculpture, and your patron asks you to make that sculpture look more masculine (for example). There's no 'masculinity' knob you can just turn, orthogonal to all other properties, because the actual sculpture has to exist as a three-dimensional object. You have to figure out which concrete changes in sculpture-space will produce the abstract impression of increased masculinity. The natural changes to increase masculinity may screw up the proportions, requiring you to make another compensating change which then screws up the facial expression, and so on. Actually finding the 3D object that optimally satisfies all of your constraints is an NP-hard task. Just because you have the advice doesn't mean it's easy to follow, any more than a critic necessarily knows how to do the job themselves.
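A toy sketch of the point (mine, not from the original note): treat each piece of advice as one more predicate over a design space. Every constraint can look reasonable on its own, yet adding a single extra one can leave nothing that satisfies them all. The design space, constraints, and numbers here are invented for illustration.

```python
from itertools import product

def feasible(designs, constraints):
    """Return the designs satisfying every constraint (brute force)."""
    return [d for d in designs if all(c(d) for c in constraints)]

# A small discrete design space: 3 dimensions, 5 settings each.
designs = list(product(range(5), repeat=3))

constraints = [
    lambda d: d[0] + d[1] >= 4,   # e.g. "keep the proportions balanced"
    lambda d: d[1] <= d[2],       # e.g. "don't ruin the facial expression"
]
print(len(feasible(designs, constraints)))  # -> 35: plenty of designs still work

# One more piece of "advice", added as an extra constraint:
constraints.append(lambda d: d[0] + d[1] + d[2] <= 3)
print(len(feasible(designs, constraints)))  # -> 0: no design satisfies all three
```

Brute force works at this scale, but the feasible set shrinks multiplicatively as constraints accumulate, and in general deciding whether any design survives is exactly the NP-hard search the paragraph above describes.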

I've seen this in my own work building software. It's one thing to come up with a property that a software system should have. It's another thing to actually construct a software system that has that property, or to modify an existing software system to have it.

Our personalities are a bit like the sculpture. We all have obligations, goals, relationships, and beliefs, all of which create constraints. If I want to change myself in some way---perhaps to be more extroverted, or more open to alternative perspectives, or more curious, or any advice that I could imagine anybody giving me---then it's not like I can just turn a knob to become 'myself but more extroverted'. Instead, that advice adds a new constraint (or at least a preference, a weighted constraint) to my decision-making. Actually satisfying the constraint is still hard. This is why most learning is by demonstration.

This also applies at the level of organizations and societies, which balance an enormous number of constraints both explicit and implicit. Large companies, for example, end up being predictably irrational, because they inherently have constraints related to how they process and transmit information, how they reward initiative, and how they evaluate work. Financial markets have different constraints and will also be predictably irrational in other ways. When somebody like Eliezer Yudkowsky points out some way in which society is irrational---like, it's crazy that we're not doing vaccine challenge trials, or it's crazy that we don't think more about existential risk, or so on---he's describing a property that society should have. But he's not showing us how to build a society with that property. The criticism might be right as far as it goes, but the implication that everyone else is stupid for not conforming to it is not.

People overestimate the value of criticism in general, but there is an art to diagnosis. What separates diagnosis from criticism? In medicine, to diagnose someone as having a disease is not just to criticize them; you have to describe some deficiency along with a theory of the case for what is causing that deficiency, and what sort of counterfactual situation or conceivable intervention might eliminate it. If I diagnose you as having type-1 diabetes, I'm not just criticizing you for having various symptoms; I'm describing a specific condition of insulin deficit, which suggests specific interventions. The best criticism is actionable. It's not just an abstract property that you'd like the system to have but it doesn't, but a concrete property that the system plausibly could have, for which you've identified a mechanism by which to help it have that property.