There is a view of morality, most notably propagated by misogynist and bigot Sam Harris, which claims that morality is reducible to general human well-being. He claims, on that basis, to have identified the scientific underpinnings of morality. These are laughable claims to anyone who knows about moral views and their defeaters, but it seems that a lot of atheists don’t know enough about the subject to really address them.
Equating morality with something like “human well-being” is called reductionism: a meta-moral position which reduces evaluative properties (like “good” and “evil”) to some factual property or properties. Sam Harris states this position in his introduction:
I will argue, however, that questions about values—about meaning, morality, and life’s larger purpose—are really questions about the well-being of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc.
This is the reductionist trick that Harris is using: that evaluative properties can be reduced to questions about emotions, impulses, laws, institutions, and neurophysiology. According to this view, knowing enough facts related to these things means that we can make moral judgments, because questions of value are really just questions about these scientific issues. A moral statement can be reduced to some set of factual statements. So when I say, for instance, “torturing babies is wrong,” I am actually making a statement about scientific issues, such as the pain caused by torture, the nervous system of babies, the emotions that a torturer goes through, and so on, which I express through evaluative terms like “wrong.”
The first thing to point out is that there is no evidence to demonstrate that morality is reducible in this manner: no evidence that “general human well-being” is actually what morality reduces to. The common response given by Harris and others to this objection is: “if you don’t think morality is simply about well-being, then why should we care about what you think morality is?” But this is a poor response. A negative utilitarian could say the same thing: “if morality is not about minimizing suffering, then why should we care about it?” Likewise for someone who believes morality is reducible to, say, happiness, self-accomplishment, the accumulation of knowledge, or whatever.
This brings me to my next point, which is that this “general human well-being” standard is a utilitarian standard, and therefore can be no more valid than any other utilitarian standard. For instance, Sam Harris cannot validate inter-subjective calculations any more than any other utilitarian can (although they claim far and wide that they can, until you ask them how). Furthermore, like most (but not all) utilitarian positions and like adaptationist positions, it cannot explain acts of self-sacrifice and justifies acts of sacrifice which are clearly immoral.
For instance, we widely believe that the people who helped hide Jews during the Holocaust were acting morally. If morality is reducible to general well-being, then this position is incomprehensible. After all, the act of hiding Jews was a sacrifice of well-being (depending on the country, you could be executed if you were found hiding Jews), at little to no gain in general well-being. Anyone who seriously believes that morality can only mean maximizing general well-being should boo Schindler’s List. Likewise, utilitarian advocates must hold that sacrificing the lives of innocent, non-consenting people in the name of a greater good (like, say, Hiroshima and Nagasaki) is a good thing.
I am not saying that any moral position which entails these views is necessarily wrong, but these are at least pretty strong counter-arguments. They also give a lot more weight to our own “why should we care” question. If your moral position entails that self-sacrifice of well-being to help others is bad, and that sacrificing innocent lives for the general well-being is good, then why exactly should we care about it? This seems to be no less coherent a question than the one they ask us.
Another problem, which afflicts all reductionist positions, is that the well-being standard is an attempt to get evaluative properties from factual statements, which we know is logically impossible. You cannot get moral statements from non-moral statements, any more than you can get esthetic statements from non-esthetic statements, logical statements from non-logical statements, objective obligations from inter-subjective orders, and so on (in that regard, the is/ought dichotomy is really not special at all, but a rather commonplace principle). The formal argument demonstrating the is/ought gap was given by Toomas Karmo in 1988 (I am going here from Michael Huemer’s description in Ethical Intuitionism).
I will spare you the details, but the gist of the argument is this. There are statements that can break the is/ought barrier, but these statements are necessarily trivial (for instance: “it is good to do good things,” or “murder is bad”). For a moral statement to be non-trivial, it must be the case that under some possible sets of values the statement is false, and that under some other possible sets of values the statement is true. For instance, “torturing babies is wrong” is true under most possible sets of values, but it can be false under a possible set of values where torture is an absolute good, therefore the statement “torturing babies is wrong” is non-trivial. To derive a non-trivial moral statement, we must have some way of rejecting some sets of values and not others, but that itself would require a moral judgment. Therefore, no collection of factual statements alone can derive a moral statement.
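The gist of the argument above can be sketched semi-formally. The notation here is my own gloss on Huemer’s summary, not a reproduction of either Karmo’s or Huemer’s formalism:

```latex
% Notation is mine, not Karmo's or Huemer's.
\text{Let } \mathcal{V} \text{ be the set of all possible value systems.}

S \text{ is trivial} \iff S \text{ has the same truth value under every } v \in \mathcal{V}.

S \text{ is non-trivial} \iff \exists\, v_1, v_2 \in \mathcal{V} \text{ such that } S
\text{ is true under } v_1 \text{ and false under } v_2.

\text{Purely factual premises } P_1, \dots, P_n \text{ have the same truth value under every } v \in \mathcal{V}.

\text{If } P_1 \wedge \dots \wedge P_n \vDash C, \text{ then } C \text{ is true under every } v
\text{ under which the } P_i \text{ are true;}

\text{so if the } P_i \text{ are true at all, } C \text{ is true under every } v \in \mathcal{V},
\text{ i.e. } C \text{ is trivial.}
```

In other words: since purely factual premises do not vary in truth value across value systems, nothing they entail can vary across value systems either, and a moral statement that holds under every possible value system is, by the definition above, trivial.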
Suppose a person knew everything there is to know about “positive and negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc,” but held to no value system. This person would still not be able to make non-trivial moral statements. They certainly would not be able to derive “what is good (/desirable/ought to be pursued/whatever) is what furthers general human well-being,” unless they were first able to logically eliminate all possible value systems which do not further well-being, which they cannot.
Sam Harris also addresses intuitionism, but his reasoning is, well, bizarre:
I am arguing that everyone also has an intuitive “morality,” but much of our intuitive morality is clearly wrong (with respect to the goal of maximizing personal and collective well-being).
Of course much of our intuitive morality is “wrong” by that measure. Why should we expect intuitions to jibe with someone’s manufactured moral system? To take an example I’ve already used, intuitively we do not boo Schindler’s List, but Harris’ standard means we should. Does that mean the intuitions are clearly wrong? The only way this argument makes any sense is if we assume Harris’ morality is true by definition and the gold standard by which we should evaluate all other moral claims, but as I think I’ve demonstrated here, this is very silly.
This also brings up another big problem with Harris’ moral stance. How do we know he’s right? Since no amount of factual information can validate it, we must validate it through our own morality. But this, in turn, means that Harris’ position is itself secondary to some other principle: the principle that was used to evaluate whether the position is valid or not. If someone thinks that well-being is a good standard, the moral principle by which they have judged it good must therefore be superior to, and replace, well-being. And if we cannot judge Harris’ stance as being morally good, then why follow it?