Modified: July 14, 2023
goals are arbitrary
This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

If you fail to achieve your goals, you'll be sad---almost self-evidently. Nonetheless: there is no coherent notion of the 'right' goals to have.
In what sense is this true? At the philosophical core is the is vs. ought dichotomy: as Hume pointed out, you can't go from 'is' (descriptive) statements to 'ought' (normative) statements. (This is controversial: Searle, for example, claims that 'X promised to do Y' implies 'X ought to do Y'.) That means any 'ought' statement can only be justified by another 'ought'. There is no way to get there from an objective description of the world. So it's hard to see how one could say that any set of goals is better than another.
This is related to, though not quite the same as, the fact that nothing matters.
Beyond the philosophical technicalities, some other ways this can be true:
- It is extremely difficult to know if or how our immediate goals will actually contribute to the fulfillment of our actual, final goals. It might turn out that the goal of 'become a lawyer' makes a strongly negative contribution to the larger goal of 'live an enjoyable life', so that failing to achieve it was in fact a good thing. (The optimistic take on this is that every branch has high-value leaves.)
- Our utility normalizes over time. We are on the hedonic treadmill: a poor person might be used to getting happiness from a plate of rice and beans and a good book. As they get richer, they get used to fine food, fancy wines and beers, luxury clothes, houses, cars, and so on. But achieving these rewards doesn't necessarily lead to happiness, and in fact it can prevent us from enjoying what we used to enjoy. The goals of 'make a hundred dollars' and 'make a billion dollars' are arbitrary in the sense that they are just signposts to be reached, not things that will satisfy us in and of themselves. Eudaemonic happiness is in the journey, not the destination; dukkha will always be with us.
I'm not sure I believe in the is-ought dichotomy. It's a distinction we can choose to make or not to make. It's true that we don't currently have any logical rules taking 'is' statements to 'ought' statements, but we could choose to adopt some. A set of rules from 'is' to 'ought' statements is, essentially, a morality (if you are in situation A, you ought to do B). Adopting such rules is a kind of cognitive technology we can build, and the logical system is perfectly valid with or without a morality (unless your morality is inconsistent).
- Is this point useful at all? By what criterion would we choose which type of technology to build? There would have to be some higher-level 'ought'.
It seems really tempting to me to say that there are some universal goals. But it might be hard to sustain that.
- Saying that some goals are universal in the world for empirical reasons (everyone seems to want status) doesn't mean that we ought to have those goals. (But see the logical-implication pattern below.)
- Certainly it would be problematic to treat evolutionary goals, like survival and reproduction, as normative goals.
- Maybe some goals are held by the shared consciousness we experience during ego death.
If a goal were instrumental to all possible other goals, then perhaps it would be reasonable to say we 'ought' to do it. Improvements that make us generally more capable of fulfilling our goals, such as increasing our power and staying alive, fall into this category.
We can always take refuge in logical implications. "If your goal is to stay alive, you ought to eat." is a perfectly fine descriptive statement in the abstract. But once you pattern-match against the first clause, it 'catches' you---conditioned on wanting to stay alive, you ought to eat. Now, such conditional statements correspond to 'ought' statements only for the people they catch---but when that class is 'everyone', or even 'nearly everyone', we have found a universal or nearly-universal 'ought'.
- This pattern lets us move from empirical universals ('everyone wants to stay alive') to moral universals ('everyone ought to help bail out this sinking ship we're on'). It presupposes the empirical universals---but that's a lower bar to clear.
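As a rough formalization (the predicates Goal and Ought here are my own ad-hoc labels, not a standard deontic logic), the catching pattern is just a universally quantified conditional plus modus ponens:

```latex
% Descriptive premise: the conditional holds of everyone.
\forall x\,\bigl(\mathrm{Goal}(x,\text{stay alive}) \rightarrow \mathrm{Ought}(x,\text{eat})\bigr)

% Empirical premise: person p is 'caught' by the antecedent.
\mathrm{Goal}(p,\text{stay alive})

% Detached normative conclusion, by modus ponens.
\therefore\ \mathrm{Ought}(p,\text{eat})
```

The 'ought' only detaches for those who satisfy the antecedent; when the empirical premise holds of (nearly) everyone, the conclusion is (nearly) universal.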