impact: Nonlinear Function
Created: February 01, 2022
Modified: February 10, 2022

This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

One aspect of depression recently has been feeling like things are pointless, that there's nothing valuable for me to do. My entire PhD has been pointless because even if it worked technically, it would have a negligible chance of solving any political problem. Most research in computer vision is pointless because it involves no conceptual/theoretical advances, and the practical applications are stupid stuff like better organizing photos. All research in computational photography is pointless for the same reason: at best it makes some photos look better, but in the landscape of human utility this is such a drop in the bucket. Even inventions that save many lives might be pointless or even bad, if those lives were ultimately unpleasant or negative-utility lives. There is a social presupposition that life is valuable and always worth saving, but this is a presupposition not because it's true, just because the alternative leads to depression and various dystopian outcomes.

The things that affect human utility are:

  • depression
  • diseases that lead to incredible suffering and painful 'bad deaths' (certain forms of cancer apply here)
  • genuine love and friendship and connection
  • creating a sense of community. religion, but also gay community, or research group/academic community, family community, etc.
  • art and music, especially communal musical experiences (concerts, dancing, performance experiences) that bring people together.
  • possibly sports (thinking of the joy I've found playing soccer -- partly because it builds a tribe)
  • possibly AI safety / value alignment research. (though humans don't have utility functions! and aren't rational agents! so this needs to be better thought through in terms of true thriving)
  • possibly dog ownership. or the AI equivalent of dog ownership: an entity that provides unconditional love (but also cuddly physical intimacy: hard to replicate through AI alone!).
  • obviously lots of other things lol, it's pretty ridiculous to think this is the complete or even representative list.
  • Things that don't necessarily affect human utility in a way that I care about:
    • efforts in these directions with second-order effects, eg Facebook as a way to foster human community. It does connect people, but turns relationships into something artificial, curated, no longer a source of joy. Twitter even more so: it creates broader intellectual 'communities' but mostly creates a sense of inferiority/missing out (everyone talking about the research they're doing and events they're attending), constant alarm (at politics, even things we have no ability to change, or things that are represented as BREAKING or MUST READ but actually are just small increments in belief distributions over outcomes that may or may not eventually happen, like Trump's removal from office), smugness (making mocking comments about other people being wrong, in a way that is self-validating but ultimately advances no useful ends), and general time wasting.
    • anything that 'saves lives' in a neutral way without necessarily increasing the value of those lives. Death is not necessarily bad, so anything that prevents a better-than-average cause of death will tend to decrease the average quality of deaths overall.
    • AI capabilities research for the sake of AI research. It is technically cool (which can be its own reward) and potentially can have great outcomes, but it's not clear that it will have great outcomes. And anything that improves capabilities without improving understanding is useful iff those capabilities are themselves good -- which is usually unclear.
  • How do my research interests/aspirations fit into this? I'd say I'm generally interested in Bayesian modeling and inference, PPLs, ML for science, and applying CS concepts (composability/modularity, type safety, abstraction, Turing completeness) to ML/AI. And in some looser sense I'm interested in mathematical structure, understanding consciousness, meta-reasoning, and what AI can teach us about how to think and live (everything from 'rationalist' notions of how to think effectively, to utilitarian ethics, to metareasoning, value alignment, fairness and social justice, etc.). None of these things has an inherently positive valence. But many of them could be executed in ways that do have a positive impact. For example, interspersing abstract research work with modeling specific problems in biology, mental health, governance, music, etc. And if I think I generally have a more finely calibrated sense of what is good than the average person (which is in some sense arrogant, in some other sense obviously true if the average person voted for Trump, or even is someone like NameRedacted who is smart and well-meaning but satisfied doing arbitrary technical work), then it is on average good for the world for me to improve my technical and research skills, learn more about other fields of science, and hold impactful positions in the research community. This still doesn't give me a specific research program to get excited about. But it does argue that generally getting back into research, or at least technical ambition, is a valuable thing.
  • At a higher level, I eventually need to address the tension between the idea that everything is pointless and the seemingly obvious fact that some things are better than others. The first comes from:
    • we will all die eventually
    • nothing we do is of cosmic or lasting significance (excluding the Bostromian light-cone perspective, and even then the significance is limited, and it's hard to know whether individual work will be useful)
    • many things we think might be good will turn out to be bad. improving a local maximum could prevent us from reaching a better maximum. pushing in a good direction can instigate countervailing second-order effects (Facebook, or Hillary Clinton running for president). people who convince themselves to act need to ignore these phenomena or convince themselves they are not important (targeting first-order effects helps on average even if many of them are counterproductive, so the expected value of your efforts is positive).
    • doing things is hard and most projects fail even at their stated goals. this is especially true if you're in my position of having never succeeded at anything and having no self-confidence.
    • research pushes us to look for deeper impacts. the more abstract you get, the more generally applicable your work is, but the less likely it is to actually matter. Has most work in theory of computation actually affected practical computation? Would a solution to P vs NP actually matter? I kind of lean towards no. On the other hand, understanding the halting problem is a useful intuition for thinking about building PL systems. The product of theory is intuition, at least to a large extent. But this means much of the actual theory is useless, or only very tangentially helpful.
  • On the other hand, almost no one's work is truly world-changing. Even Barack Obama had only a limited impact. PhdAdvisor's AI safety advocacy might matter but will itself have only a limited impact. The world is a big place; there are many other smart people and forces beyond our control. Meanwhile many people seem to be satisfied with jobs that researchers would view as pointless or non-impactful. For example, research grant administrators view themselves as contributing to research (without realizing how pointless most research is). Salespeople view themselves as helping meet people's desires, and advancing the impacts of their corporate tribe (without realizing the world would probably be fine without their entire profession). From a high enough perspective, almost no work is valuable. Even Elon Musk may have no long-term impact.
  • So it's important to do work you think is cool for its own sake (if you can find such work, to make you 'come alive'), ideally aligned with outcomes you believe in — but not to take it so seriously that you believe you are essential, or let the work prevent you from achieving joy in other aspects of life, or become cruel or inconsiderate to people.