Saturday, July 3, 2010

Thoughts on Ethics

Ethics is one of my favorite topics. If philosophy departments weren't so full of quacks, the grading weren't so subjective, and the requirements didn't include so much tripe, I'd probably be a philosophy major. (More thoughts on philosophy in one of my two Amazon.com book reviews.)

Warning: This is the customary warning that someone has probably made these points before.

Consequentialists think that the right thing to do is the action with the best consequences. Utilitarians are the most common type of consequentialist because they specify an intuitive criterion for "best": whatever generates the most happiness is best. Of course, what exactly that means is a subject of much debate.

One "refutation" of utilitarianism is this reductio ad adsurdem argument. Consider a scenario where a million people will be happier by blaming and punishing a scapegoat for their problems. A slight increase in happiness multiplied a million times over could easily cancel out a ton of misery for one scapegoat. So the right thing to do is punish the scapegoat. But we know that isn't right. You can't punish someone for crimes they didn't commit.

But who says the utility function should just be the sum of happiness in the universe minus the pain in the universe? The point is that the distribution of happiness could matter. The utility function could take the happiness of each individual into account separately. One idea for how to do that would be to think of all possible worlds and rank them according to which one you'd want to live in if you didn't know who you'd be. This utility function captures the value of the original position and veil of ignorance from John Rawls without the weaker parts of his theory of justice. It also gives the utility function a much more natural, and less grand, interpretation.
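Here's a minimal sketch of one way to formalize that ranking, assuming a world can be summarized as a list of per-person happiness levels (the worlds and numbers are invented for illustration):

```python
def veil_score(world):
    """Expected happiness of a randomly assigned position in a world,
    i.e. how attractive it looks if you don't know who you'd be."""
    return sum(world) / len(world)

# Toy worlds: each is just a list of per-person happiness levels.
worlds = {
    "world_1": [6, 8, 10],
    "world_2": [3, 12, 14],
}

# Rank worlds from behind the veil of ignorance (risk-neutral version).
for name in sorted(worlds, key=lambda n: veil_score(worlds[n]), reverse=True):
    print(name, veil_score(worlds[name]))
```

This risk-neutral version collapses back to average happiness; how the distribution itself matters depends on your attitude toward risk, which comes up again below.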

If we apply this utility function to the scapegoat case, we find it gives the "right" prediction. If the risk of being that scapegoat is large enough, no one would choose to live in that world. On the other hand, if everyone gains a lot and the risk of being the scapegoat is small, then the theory says to punish the scapegoat. That makes sense. We don't want to lower the speed limit even though we know it would save a few lives, because we all want the little bit of extra happiness from a shorter commute. Those who die in accidents are de facto scapegoats.
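A quick sketch of that tradeoff under the ranking above (population and happiness numbers are invented; the `veil_score` helper from the previous sketch is repeated so the snippet stands alone):

```python
def veil_score(world):
    # Expected happiness if you're equally likely to be anyone in the world.
    return sum(world) / len(world)

population = 1_000_000

# World A: no scapegoat, everyone at a baseline level of happiness.
world_a = [10.0] * population

# World B: everyone else gains a little, one person is made miserable.
world_b = [10.01] * (population - 1) + [-5_000.0]

print(veil_score(world_a))  # 10.0
print(veil_score(world_b))  # ~10.005: the tiny risk of being the scapegoat is
                            # outweighed by the widely shared gain
```

Make the scapegoat's misery catastrophic enough, or the shared gain small enough, and world B drops below world A, which is the "no one would choose to live there" case.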

The interesting thing about this theory is that it changes what the utility function is about: it's no longer concerned with the nature of "the good," i.e. with what the meaning of a good life is. It punts on those questions and simply assumes that whatever we prefer is a good life. There's a large literature on this "good life" question, with the debate largely revolving around whether a good life is one with satisfied preferences, happiness, or "virtue" (whatever that is).

An interesting property of this utility function, though, is that everyone might end up with a different ranking of worlds. I might be risk-averse and hate worlds with lots of inequality, while you might be risk-neutral and just prefer the happiest world on average. How should we decide what the "actual" best world is? We'd have to know that to know what the true "best action" is. (Ignore the fact that it's already impossible to do this thought experiment and rank all possible worlds on our own, much less for everyone!)
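One way to see how the rankings can come apart, using the standard trick of modeling risk aversion as a concave transform of each position's happiness (the worlds and the square-root transform are just illustrative assumptions):

```python
import math

# Two toy worlds with per-person happiness levels (numbers invented).
equal_world   = [9.0, 9.0, 9.0, 9.0]
unequal_world = [37.0, 1.0, 1.0, 1.0]

def risk_neutral_score(world):
    # Expected happiness of a randomly assigned position.
    return sum(world) / len(world)

def risk_averse_score(world):
    # Evaluate each position through a concave function (sqrt) before
    # averaging -- a standard way of modeling aversion to bad outcomes.
    return sum(math.sqrt(h) for h in world) / len(world)

print(risk_neutral_score(equal_world), risk_neutral_score(unequal_world))
# 9.0 vs 10.0 -> the risk-neutral evaluator prefers the unequal world
print(risk_averse_score(equal_world), risk_averse_score(unequal_world))
# 3.0 vs ~2.27 -> the risk-averse evaluator prefers the equal world
```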

One idea is that you could just take everyone's ranking of all possible worlds (suppose there is a finite number) and use the rankings as a kind of vote. Whichever world wins the vote is the best world, and the corresponding action is the best action. The problem is that Arrow's Impossibility Theorem applies, which means there is no voting system that satisfies even a few basic fairness criteria for deciding which world is best.
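Arrow's theorem itself says no aggregation rule can satisfy a short list of fairness conditions, but the basic difficulty already shows up in the classic Condorcet paradox. Here's a small sketch with three invented voter rankings over three worlds:

```python
from itertools import combinations

# Three voters' rankings of three worlds, best first (the classic
# Condorcet-paradox profile, with worlds standing in for candidates).
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank world x above world y."""
    votes_for_x = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes_for_x > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"a majority prefers {winner} to {loser}")
# Prints that A beats B, B beats C, yet C beats A: majority preference cycles,
# so "the best world" isn't well defined by pairwise votes.
```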

The lesson I take from this is that ethics is really a game where we make the rules, and it's important not to overthink the significance of those rules. When you try to make them consistent and sensible, things start to fall apart, even on a purely theoretical level. When the utility function represents something objective (e.g. happiness, brain states, activity in pleasure-sensing areas of the brain), you're left asking why ethics should be concerned with that quantity. Is the purpose of the universe really to maximize the activity in pleasure-sensing areas of the brain? But if you go with a less metaphysically grand strategy based on preferences, things fall apart too. I ignored two even more basic problems with preferences. First, they may not even be transitive (e.g. A > B, B > C, yet C > A). Second, people are bad predictors of what they like. Rankings over these lotteries will likely be heavily influenced by cognitive biases, like focusing too much on small chances of a miserable life.

Theoretical ethics is a mess, yet the alternative of having ad hoc rules for living seems equally unsatisfying.

Update: Related thoughts from Bryan Caplan.
