Created: June 07, 2021
Modified: June 07, 2021
rationality is moral
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

- From a utilitarian perspective, all of morality reduces to improving global utility, and it is clearly better to do this effectively than ineffectively.
- Corollary: since rationality is just what makes action effective, morality requires rationality.
- If someone sets up a bomb that will maim and torture hundreds of people, and that can only be disarmed by entering a solution to a particular SAT instance, then the 'moral' thing to do in that situation is, of course, to solve the SAT instance and disarm the bomb (see the sketch after this list). The scenario is artificial as stated, but many real-world challenges require solving hard problems: running an economy, allocating resources, planning strategically, figuring out how to help the most people with the fewest resources. Moral behavior isn't just about good intentions; it's about trying hard to learn about the world, understand the roots of problems, and work effectively to fix them.
- Of course, perfect rationality is impossible, and so perfect morality is also impossible. For a given level of intelligence, we could say that 'morality' consists of doing the best you can under bounded cognition. But your level of intelligence is never fixed: you can actively work to improve it, learn about the world, and build your abilities, or, at a more global level, raise the world's intelligence through education, institution-building, or AI research.
- What if you're not a utilitarian? Well, I hold that Kantian and Aristotelian ethics are tractable approximations to utilitarianism, so arguments about utilitarianism should also apply to its approximations. But even from first principles, any ethics that fits our moral intuitions should include the idea that you have a duty to be as smart as you can in figuring out how to help people. Giving a man a fish is admirable, but if you have the ability to teach him to fish, you should do that.
- This gets at the ideas of effective altruism. But it doesn't follow that that particular community has a monopoly on the right ideas: we live in a wildly uncertain world, and it's legitimate to draw your own conclusions about which causes are the most effective. Optimizing QALYs (sketched below) is not necessarily the right goal under negative utilitarianism, and on its own it doesn't necessarily lead to a brighter future.
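
To make the bomb scenario concrete: 'solving the SAT instance' is an ordinary computational act. Here's a minimal sketch, purely illustrative (the function `brute_force_sat` and the example formula are my own inventions, not from any particular source; exponential brute force like this only works on tiny instances, and a real disarmer would want a proper CDCL solver):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try every assignment of n_vars booleans against a CNF formula.

    Each clause is a list of nonzero integer literals: literal i > 0
    means "variable i is true", literal -i means "variable i is false"
    (1-indexed, as in the DIMACS convention). Returns a satisfying
    assignment as a dict, or None if the formula is unsatisfiable.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if any literal in it is satisfied;
        # the formula is satisfied if every clause is.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# Example formula: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(brute_force_sat(clauses, n_vars=3))
# -> {1: False, 2: False, 3: True}
```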
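
And to make 'optimizing QALYs' concrete: the naive version is just ranking interventions by QALYs gained per dollar. A toy sketch, where the intervention names and all numbers are hypothetical, not real cost-effectiveness data:

```python
# Hypothetical interventions as (name, cost in dollars, QALYs gained).
# All figures are invented for illustration only.
interventions = [
    ("bednet_distribution", 1_000_000, 900),
    ("deworming_program",   1_000_000, 400),
    ("surgical_outreach",   1_000_000, 150),
]

# Naive QALY optimization: rank interventions by QALYs per dollar.
ranked = sorted(interventions, key=lambda i: i[2] / i[1], reverse=True)
for name, cost, qalys in ranked:
    print(f"{name}: {qalys / cost * 1_000_000:.0f} QALYs per $1M")
```

The point of the note above is that this ranking is only as good as the objective it optimizes: under negative utilitarianism, or with concern for the long-term future, QALYs-per-dollar is not obviously the right thing to maximize.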