James Derulo's Portfolio
Every so often, while in a discussion about the correct interpretation of this or that passage in the Bible, a certain claim will be articulated.  The claim goes something like this:

If a Christian decides to treat a particular passage of Scripture as something other than a straightforward historical and/or scientific account, then they no longer have good reason to treat the central claims of Christianity as straightforwardly historical either.

For example, let's say that I express a belief that the story of Jonah being swallowed by a fish is an allegory (this is just an illustration; I am not presently intending to express such a belief).  I no longer have good reason, my detractors might say, to hold the narrative of the resurrection of Jesus Christ to be a historical account rather than an allegory or some other non-historical literary device.  Such a claim comes from Christians and non-Christians alike.  The Christian is concerned that treating some portion of the Bible as other than history or science will effectively undermine our trust in the rest of Scripture.  The atheist, on the other hand, might see this as a Christian's way of dodging some deadly argumentative projectile which has been launched at him (it seems, rather tragically, that dodging bullets is only cool in the movies): if the external evidence corroborates some biblical detail, then it is historical; if it contradicts it, then it is a metaphor.  (This is almost verbatim what an atheist said to me recently, tongue-in-cheek.)  Both of these views are misguided.  The Christian is under no obligation to treat every passage of Scripture the same way.  In fact, to do so is irresponsible and wrongheaded.
I don't really have any specific way of guarding against cognitive bias myself, other than trying to think carefully and spend time in honest introspection. But it may at least be useful to know what some common cognitive biases are. That may make it a bit easier to guard against them. Here are just a few that are relevant to discussions of philosophy, apologetics, and argumentation.

The one bandied about the most these days is confirmation bias. One falls prey to confirmation bias by selecting evidence that confirms one's preconceived notions and rejecting all other evidence. For example, have you ever heard someone complain that it is always they who are stopped at the red light while everyone else makes it through the intersection in the nick of time? Obviously, over the course of one's driving career that is almost certainly not true. That person is just ignoring all the times they made it through the light (probably because one's mind isn't on the light when one is able to simply cruise through the intersection). In the case of a belief system, one may look for evidence for one's own beliefs while subconsciously rejecting any opposing evidence.

Here's a biggie: the belief bias. With belief bias, one is predisposed against an argument not on the basis of its logic but on the basis of the believability of the conclusion. Think of the Pevensie siblings in The Lion, the Witch and the Wardrobe - especially Peter and Susan. Lucy is incredibly excited to tell them that she has found another world in the wardrobe. This announcement is met with utter skepticism. However, the professor calls them on their skepticism: Lucy's excitement seems genuine, and she is the most honest of the siblings. So, logically, she is probably telling the truth. But what she is claiming is plainly ridiculous! And yet it was true.[2]

Another is the focusing effect. Ever seen some sports pundit chalk up the success or failure of a recently completed season to just one factor? Happens all the time. And they are wrong all the time. The success of a team over the course of a season - or even one game - depends on a myriad of factors. But these pundits are so certain it was all due to the one thing they identified.

One more. In general, we have a tendency to reject information or an argument from an adversary simply because it is coming from our adversary. It's sort of a psychological ad hominem: that argument is bad because it came from my adversary. Someone who falls for this is suffering from the scourge of reactive devaluation.

In closing, it might go without saying, but I am no psychologist. Nor am I a philosopher (many biases were actually first identified by philosophers). Yet I think it is important that we think carefully and honestly about important issues, so having at least some idea of how we may be biased may help guard against these tendencies. That and wisdom.


Notes:
1. This of course can be taken too far. There are some beliefs that are not arrived at by argument but nonetheless seem completely reasonable. If you see a tree, you believe a tree is there not by some deductive argument but because your perception of the tree's existence is basic. The belief that the tree is there is what Plantinga would call a properly basic belief.
2. Granted, this is an example from a fictional work. A more pressing example might be the arguments for the resurrection. Many are rejected outright, simply on the grounds that the idea of a man rising from the dead is ridiculous.
In a previous post, I introduced the so-called "Is/Ought Problem," made famous by David Hume.  Hume argued that you cannot get from an "is" to an "ought"-- that is, you cannot argue from the way that things currently are to the way that things ought to be.  Or more generally, you cannot get from factual statements to evaluative statements.  But this is exactly how many people argue in a moral context-- "punching someone causes them pain, therefore you should not punch someone," for example.  Or "James is fast and strong, so he should play football."  But is this true?  Is it always illegitimate to argue from premises involving the way things are to conclusions involving the way things ought to be?


In After Virtue, Alasdair MacIntyre gives several counter-examples showing why this is not the case.  Consider first:
There are several types of valid argument in which some element may appear in a conclusion which is not present in the premises. A.N. Prior's counter-example to this alleged principle illustrates its breakdown adequately; from the premise 'He is a sea-captain', the conclusion may be validly inferred that 'He ought to do whatever a sea-captain ought to do'. This counter-example not only shows that there is no general principle of the type alleged; but it itself shows what is at least a grammatical truth - an 'is' premise can on occasion entail an 'ought' conclusion. [1]
You can see here that this example succeeds because there is a concept of "sea captain" which entails certain responsibilities.  Those responsibilities are built into what it means to be a sea captain.  Thus the counter-example adequately shows why Hume's contention is wrong, at least in some cases.  Consider further:
From such factual premises as 'This watch is grossly inaccurate and irregular in time-keeping' and 'This watch is too heavy to carry about comfortably', the evaluative conclusion validly follows that 'This is a bad watch'. From such factual premises as 'He gets a better yield for this crop per acre than any farmer in the district', 'He has the most effective programme of soil renewal yet known' and 'His dairy herd wins all the first prizes at the agricultural shows', the evaluative conclusion validly follows that 'He is a good farmer'.  
Both of these arguments are valid because of the special character of the concepts of a watch and of a farmer. Such concepts are functional concepts; that is to say, we define both 'watch' and 'farmer' in terms of the purpose or function which a watch or a farmer are characteristically expected to serve. It follows that the concept of a watch cannot be defined independently of the concept of a good watch nor the concept of a farmer independently of that of a good farmer; and that the criterion of something's being a watch and the criterion of something's being a good watch-- and so also for 'farmer' and for all other functional concepts-- are not independent of each other. [...]  
Now clearly both sets of criteria-- as is evidenced by the examples given in the last paragraph-- are factual. Hence any argument which moves from premises which assert that the appropriate criteria are satisfied to a conclusion which asserts that 'That is a good such-and-such', where 'such-and-such' picks out an item specified by a functional concept, will be a valid argument which moves from factual premises to an evaluative conclusion. Thus we may safely assert that, if some amended version of the 'No "ought" conclusion from "is" premises' principle is to hold good, it must exclude arguments involving functional concepts from its scope. But this suggests strongly that those who have insisted that all moral arguments fall within the scope of such a principle may have been doing so, because they took it for granted that no moral arguments involve functional concepts. [2]
So we have two examples of arguments which move from premises stating only the way things are to a conclusion that entails some evaluative judgement.  MacIntyre rightly points out that at this point Hume's defenders can only proceed by modifying their principle.  Perhaps evaluative conclusions can, in some cases, follow from factual premises; perhaps the principle holds only for arguments which do not involve functional concepts.  But this shift exposes and magnifies the breakdown of the Enlightenment attempt to justify morality rationally, a major thesis of MacIntyre's book:

Yet moral arguments within the classical, Aristotelian tradition-- whether in its Greek or its medieval versions-- involve at least one central functional concept, the concept of man understood as having an essential nature and an essential purpose or function; and it is when and only when the classical tradition in its integrity has been substantially rejected that moral arguments change their character so that they fall within the scope of some version of the 'No "ought" conclusion from "is" premises' principle. That is to say, 'man' stands to 'good man' as 'watch' stands to 'good watch' or 'farmer' to 'good farmer' within the classical tradition. Aristotle takes it as a starting-point for ethical enquiry that the relationship of 'man' to 'living well' is analogous to that of 'harpist' to 'playing the harp well' (Nicomachean Ethics, 1095a 16). But the use of 'man' as a functional concept is far older than Aristotle and it does not initially derive from Aristotle's metaphysical biology. It is rooted in the forms of social life to which the theorists of the classical tradition give expression. For according to that tradition to be a man is to fill a set of roles each of which has its own point and purpose: member of a family, citizen, soldier, philosopher, servant of God. It is only when man is thought of as an individual prior to and apart from all roles that 'man' ceases to be a functional concept. [3]
Thus the failure involves not a simple disagreement with classical thinkers (the predecessor culture, as MacIntyre calls it), but a wholesale rejection of the classical worldview, whereby "man" was understood to have an essential nature.  And this essential nature involves functional concepts.  On the classical view, to say a man is a "good man" is no different from saying a watch is a "good watch," a farmer is a "good farmer," or a sea-captain is a "good sea-captain."  Each of these is true in virtue of the role it plays, and the function it essentially exhibits.

So given Hume's presuppositions (a complete rejection of the classical worldview, especially the idea of man's essential nature and telos), he may have been right that morally evaluative conclusions could not be derived from factual premises.  However, these presuppositions (shared by many Enlightenment and post-Enlightenment thinkers) are not obviously true.


Notes:
1. MacIntyre, 1981, p. 57.
2. Ibid., pp. 57-58.
3. Ibid., pp. 58-59.