"no harm, no foul"
Sunday, July 20, 2003
 
QUESTIONS ABOUT UTILITARIANISM: Consider a bargain-basement version of utilitarianism, one which says, “maximize the happiness of the greatest number of people” or “maximize the greatest good.” Now, in order to make this theory work, we’ve got to make it more sophisticated, much more sophisticated. Specifically, we’ve got to answer the following three concerns. The trouble is, I’m not sure how to go about answering them (the concerns are typical, but I thought it’d be useful to list them).

First, you’ve got to have some story about what makes people happy, or what happiness consists in. This seems an almost impossible task. Fulfilling the preferences of individuals? Even supposing that you could identify what those preferences were (certainly we don’t want to identify what people truly prefer with their revealed preferences), there are still the problems that (a) sometimes making people happy means not fulfilling their preferences and (b) sometimes satisfying the preferences of one person is exactly what makes another person’s preferences go unfulfilled. A related difficulty is that what we can identify as good depends on the conceptions of the good we hold. For example, what is pleasurable to me is constrained in all sorts of ways by what I consider it permissible to take pleasure in – even if only for the simple reason that I’d feel guilty doing something wrong to satisfy a base pleasure, and the guilt would outweigh the pleasure I got (a stronger version of this claim is that a pleasure achieved by doing something wrong would just not count as a pleasure for me). Finally, there are some pleasures we want to discount altogether, such as the pleasure the criminal gets from committing his crime (Bentham, in his rigorous consistency, says we have to consider this sort of happiness too – and weigh it against the pain punishment would cause the criminal). In short: is there some good or condition (or goods or conditions) that we can specify as either intrinsically leading to, or constituting, happiness?

Second, even supposing that we could identify one good or some set of goods that, when maximized, leads to the happiness of the greatest number of people, there’s still the problem of calculating how to bring this result about. Take the case of animal welfare. Intuitively, this seems a no-brainer (or at least it seems so to many): the pain of the animals easily outweighs the pleasure people get from eating them. So, we shouldn’t eat animals. But there are a whole host of factors left out of this calculation: for one, there’s the fact that if there weren’t a market for eating animals, many, many animals wouldn’t exist and therefore there would be no pleasure for them, existing being a precondition for experiencing pleasure (this is related to what I think Parfit terms the “repugnant conclusion,” viz., that utilitarianism is committed in principle to the possibility that we should just harvest a vast number of humans whose lives are barely worth living, because the sheer number of them would mean greater happiness overall than if we just had a smaller number of genuinely content humans). In addition, we’ve got to factor in that there’s a worldwide economy that depends on a lot of people eating meat – the welfare of the people working in the meat industry would decrease precipitously if we all of a sudden stopped eating meat. The fact that all of these considerations would have to be weighed gives us a nearly impossible task: an ethical task for a God, maybe, but not for finite human beings. This isn’t an argument for having absolute rules, of course, but it is a serious knock against the practicability of utilitarianism (usually thought to be the most hard-nosed, practical of all philosophies!).
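To make the arithmetic behind that parenthetical explicit – a rough sketch with made-up numbers, not anything Parfit himself gives – the repugnant conclusion trades on the fact that, on a simple total view, a population’s overall happiness is just the number of people multiplied by their average welfare, so a vast population at a very low (but still positive) level of welfare can outscore a small population at a very high one:

$$
\underbrace{10{,}000{,}000 \times 1}_{\text{vast population, lives barely worth living}}
\;>\;
\underbrace{1{,}000 \times 100}_{\text{small population, very happy lives}}
$$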

Third, there’s the problem that some goods we do enjoy are goods that aren’t subject to maximizing criteria. Take friendship, an example Thomas Scanlon uses to great effect in his What We Owe to Each Other: you don’t pay proper heed to the value of friendship if you break up with two friends in order to win the friendship of four other people. Something about being friends with another person precludes this type of maximizing logic. For another example, and one more directly relevant to the acceptability of utilitarianism, there’s the value of being inviolable (highlighted in the recent work of F.M. Kamm) – we enjoy being the types of beings that can’t be sacrificed for the greater happiness of other beings. As Kamm phrases it, not being capable of being sacrificed for the greater good gives human beings a certain kind of status, the status of being inviolable. And this is a good, Kamm argues, that we can’t get if we’re just out-and-out utilitarians and say that any given person can be sacrificed if overall utility would be increased: in that utilitarian world, people aren’t inviolable (at the very least, they would lose the sense of security such a status bestows – they wouldn’t know whether at any given time they might be sacrificed for the greater good. But I think Kamm is making a stronger point).

Of course, the utilitarian can reply to this last worry along these lines: “Well, in the case of goods like these, maximizing will just mean something different. What it is to maximize the value of friendship is just to be a good friend to the friends you have, and not abandon them for the sake of being friends with more people.” In this case, utilitarianism can accept any type of good – but at the cost, I think, of sacrificing whatever bite a utilitarian theory might have. After all, a utilitarian on this account could be a deontologist, simply by saying, “The way you maximize respect for persons is just not to treat them as means to a greater good.” If the content of what a utilitarian can count as contributing to happiness or the greatest good can be blown wide open like this, then utilitarianism just amounts to a very, very modest type of theory – a theory that says, whatever good you have, maximize it, within the constraints that good allows, where what “good” consists of is left worryingly vague – and could, in principle, include constraints on maximization: a point which, at the very least, has the whiff of paradox about it.
