Also see previous entries I have written on intuitionism:
What is Ethical Intuitionism?
Why Atheists Should Be Intuitionists.
Why intuitionism supports antinatalist conclusions.
Ideological bias is the main ethical error.
An intuitionist answers Matt Slick re: atheist morality.
Some misunderstandings about intuitions.
Equating intuitionism with “naive intuitionism.”
Intuitionism as used by reactionaries.
***
Many times, people who talk about morality from an authoritarian standpoint, or from a nihilistic standpoint (which usually end up being pretty similar), will ask the question: “why should I care about what’s right and what’s wrong?” One of the assumptions behind this question is that humans are in a default state of “not caring about morality” and that we need to reason our way into it. If there is no good reason to do so, then the individual should remain in a state of “not caring.”
From an intuitionist perspective, this assumption is invalid. The vast majority of human beings are born with the mental structures which enable morality, and cannot “opt out.” As sentient and social beings, we have no choice but to have some idea of what’s right and what’s wrong, because we must act intelligently in order to fulfill our needs. We can only distort our moral balance, generally as a result of some profound moral bias (such as one might acquire by joining a fundamentalist religion or a cult, or by being subjected to some other form of indoctrination). In time, if we leave the source of bias, our moral balance will begin to re-establish itself.
But there is another way we can interpret the question, and that’s to ask: “why should I care about YOUR definition of right and wrong?” That question is more important, insofar as any positive claim or belief must be shown to be based on reality in order to be rational or credible. Many people are unable to answer this question and fall back on the inter-subjective idea of “if you agree with me, then you should care, otherwise you shouldn’t.” This is not a good sign. A position based on reality should be able to point to some fact (however abstract), not just agreement that the position is true.
There is a consideration that complicates things here, and that’s the is/ought problem, which entails that one cannot prove the validity of a moral system based on statements of fact alone. Intuitionism cuts through the Gordian knot of the is/ought problem, in that moral intuitions (and all other intuitions) are part of our biological makeup and partially constitute the kind of organism that we are. We would not ask why humans walk the way they do: the way we walk is a result of our skeletal system. Likewise, our morality is based on our moral intuitions.
To continue the analogy, people may wear all sorts of shoes, may walk or run in various stances, may become handicapped, and all of these behaviors and abilities/disabilities have as their foundation the fact that we are bipedal animals with a certain gait. People may act “selfishly” or “altruistically,” their moral systems may get unbalanced, they may label themselves all sorts of ways. All of those behaviors likewise have moral intuitions as their foundation: in the ways some intuitions become more important than others, in the expression of those intuitions within our specific societies or subcultures, and so on.
So the question “why should I care about right or wrong?” fails to make an impact on intuitionists, because it has the same general nonsensicality as “why should I walk on two legs?” or “why should I act like a social animal?” In all cases, the answer is the same: “you’re a human being, so you already do.” Moral statements are derived from moral intuitions, which are pre-existing and therefore do not require justification or reasoning for their existence (or at least no more than being bipedal does).
The question, however, does apply to moral positions which assume some moral system requiring justification or reasoning. So take utilitarianism, for example (or any of its variants). Asking “why should I be a utilitarian?” is a valid question, since utilitarianism is an abstract construct which must be accepted by the individual. If we accept the existence of moral intuitions (which I think we should, as they do exist), then either utilitarianism is based on moral intuitions or it is not. If it is, then why not go straight to the source and skip utilitarianism? If it is not, then why should we care? And equally importantly, how could it possibly be justified without appealing to some form of pre-existing evaluation, whether it’s intuitions or something else?
I have used the same argument structure before about following the Bible as a moral guide. Before one can do so, one must accept the Bible as moral in the first place. This evaluation was made on some basis. So why not just skip the Bible and go straight to that basis, elaborating on it? If the individual already possesses a way to evaluate morality, then the Bible was not needed in the first place. If the individual does not already possess a way to evaluate morality, then they cannot validly evaluate the Bible as moral in the first place.
Of course utilitarians (or other moral realists who believe they have a solution to the is/ought problem) have their own justifications for why they believe what they believe, and I am not dissing that. This is not an analysis of those justifications. I would be willing to examine them on this blog, although I do not know of any right now (feel free to submit any you know).
One further issue is that many defenses of moral realist positions smuggle in supposed intuitionist evaluations as a “check” against invalid moral premises. For instance, some people will claim that negative utilitarianism cannot possibly be true, because it leads to the conclusion that no one should exist. But why could this not possibly be true? Well, because it’s “absurd.” That sounds like an arbitrary appeal to intuition (I have analyzed why I believe antinatalism, and by extension human extinction, is harmonious with the intuitionist position here). In practice, it means something like “well, I don’t like this position because it ruins my big complicated arguments about morality, and I can’t obfuscate it, so I’ll just ignore it.” That’s just intellectual dishonesty.