You may remember an entry where I analyzed the common definitions of faith, found them lacking, and decided to strike out on my own and define faith as the result of our need to belong, with a counter-impulse based on cognitive dissonance. I didn’t define faith itself, but I can roughly define its relationship to evidence: faith is the acceptance of evidence as long as it accords with group beliefs, unless the evidence against those group beliefs is overpowering.
What about rationality? One day, I read about something called the Argumentative Theory, a well-supported position that purports to explain the role of rationality. According to Argumentative Theory, rationality did not evolve in humans to help us find the truth, but rather to help us support our beliefs and argue effectively, thereby influencing others and helping us maneuver within the social hierarchy.
Now expand this a little by bringing group allegiance and hierarchies into it. If rationality helps us support our beliefs and convert others, and many of our beliefs come from the groups we belong to, then it must be the case that rationality serves our need to belong to those groups.
Therefore rationality and faith are basically the same thing, and the distinction is really pointless. In retrospect this all seems pretty obvious, but it took me a long time to get to this realization. We seem to be lulled into believing in these stock definitions of rationality and faith that really make no sense. I’ve already done this analysis in the case of faith (again, see this entry).
What about rationality? Well, the usual sort of definition is that rationality means relying on the evidence of the senses instead of on one’s feelings, values or prejudices (be careful to distinguish this from instrumental rationality, which means making decisions that best fulfill one’s stated goal). If you look for definitions of reason in the epistemic sense, that’s the general idea of what you’ll find. However, there are a few problems with this definition:
* People who lack higher emotions, or lack the mental connection between emotions and decisions, such as sociopaths or people with brain damage to the ventromedial prefrontal cortex, are notoriously incapable of making what we consider “rational” decisions (interestingly, people with vmPFC brain damage also measure as much more credulous, as well as scoring higher on religious fundamentalism). It is now recognized that emotions are an integral part of reasoning.
* The “evidence of the senses” is not accumulated from all the sensory data we receive. We actively decide what to focus on and what not to focus on. This can be as simple as looking at something or turning away, and as complicated as deciding whether a given news item is important. Our emotions, values and prejudices are part and parcel of gathering evidence because they determine what we pay attention to and how we interpret it.
In general, definitions of rationality implicitly assume that we are equanimous and unbiased in our analysis of the evidence, which is simply delusional. The only “value-neutral,” “unbiased” human is a dead human.
I expect people will raise the obvious counter that some people believe in nonsense and others don’t, and that there must be some epistemic reason for the difference. I agree that there is an epistemic reason, but I think it’s a matter of degree, not of kind.
The way I think about it is, all reasoning is circular. We start from our own values and beliefs, and we come back to them at the end. In between, hopefully, there’s evidence, unless your reasoning is really just repetition of a fixed idea, in which case it’s just a point. So the quality of one’s reasoning varies with the scope and breadth of the evidence considered along the way.
And there are cases when a large enough circle can, with enough repetition, end up at a slightly different point than where you started. That’s what makes the difference between someone who stays stuck in the ideologies they grew up with and someone who keeps learning.
If there’s no such thing as rational thought, then how does science work? After all, it’s been extremely successful at explaining reality. The scientists who investigated Argumentative Theory state that science’s success comes from the arguments between factions:
> Science works very well as a social process, when we can come together and find flaws in each other’s reasoning. We can’t find the problems in our own reasoning very well. But, that’s what other people are for, is to criticize us. And together, we hope the truth comes out.
We hope so; however, this is not likely to happen when strongly-held values and beliefs are involved, including religious values and beliefs. Fortunately, most scientists are secular, which generally bypasses that particular problem, although scientists still have values and biases like everyone else.
On the whole, it seems that reason and faith are ways to label people who agree and disagree with us. And because we don’t “choose” to believe what we believe, all arguments are ultimately ad hominems (“you are not like me or didn’t go through the same experiences I did, therefore you’re wrong”). So… what’s the point of arguing?
I think the success of science gives us at least one good reason. It is in a variety of positions, and their conflict, that we can eventually hope to arrive at some truth. The arguments themselves provide more evidence for ourselves and for others.