Pragmatic Evidentialism
Posted on: April 7, 2008 - 7:40pm
EVIDENTIALISM VS PRAGMATISM
In this piece I will examine the philosophy of Evidentialism as opposed to Pragmatism. In particular I will look at the claim of W.K. Clifford on the side of Evidentialism: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence" (Clifford 1879: 186) and contrast it with Pragmatism, the position adopted by William James, whereby when we are faced with a genuine choice about what to believe, and where evidence does not decide the matter, we are free to decide however we want (Jordan 2004). Hopefully this will allow me to construct a framework for determining if and when it is acceptable to base beliefs on a lack of evidence.
PRAGMATISM'S FAILED ATTACK ON EVIDENTIALISM
William James says: "[A] rule of thinking which would absolutely prevent me from acknowledging certain kinds of truth if those kinds of truth were really there, would be an irrational rule." (James 1896: 28) On the face of it, this seems quite reasonable. James seems to argue that Clifford's main goal is to avoid error, even though that will almost certainly come at the expense of some truths, because some truths will either have no evidence or have no evidence that we have detected yet. From James' viewpoint, the potential loss of truth is the potential loss of a vital good; therefore it is preferable to risk error for a chance at truth and a chance at that vital good. It really does make the pragmatist sound heroic: boldly adventuring forward, responsible for all the seeking of truth and vital good in the world; true pioneers leaving the evidentialist behind, meekly and cowardly sitting on the fence, waiting for evidence to fall into their lap so they can take one step to the next fence. This is simply not true, for a few reasons.
First, the pragmatist is unable to demonstrate that a belief unsupported by evidence is always more beneficial than a suspension of judgment, or that a belief unsupported by evidence is not, in fact, sometimes much more detrimental. Why should it be accepted as a general rule that suspension of judgment is inherently inferior? Secondly, it is a bit of a straw man argument. That is to say, the viewpoint has been presented in such a way as to make it easy to cut down. Clifford did not advise sitting around and hoping for evidence. Evidentialism would hardly deserve the name if by its very nature it hindered or prevented the gathering of evidence. It does not stop the postulation of hypotheses, nor the testing of them against evidence. The philosophy W.K. Clifford was advocating had nothing to say about the methods for gathering evidence; it merely talked about what you would be right to believe before and after you had the evidence. Thirdly, James seems to presuppose that any single example of truth is more (vitally) good than any single example of error is (vitally?) bad. I don't think it would be a very contentious statement to say that this is blatantly not the case. Suppose you are locked in a room with a bomb, with say 3 hours left on the timer and many potential wires to cut. Two other people share your prison. You have a pair of wirecutters, but for the purpose of this thought experiment you can't reach the bomb. Do you give the wirecutters to person one, knowing he or she will heroically (?) start cutting wires in pursuit of the vital good of disarming the bomb, or do you give them to person two, who will not start cutting wires until he has first looked around for some kind of evidence as to how the bomb works? I would hazard a guess that any bomb disposal technician will tell you that avoiding error is tremendously important. Or suppose you live in the carjacking capital of the world and you drive every day along a heavily policed route.
Avoiding errors is what will allow you to keep your car and your life. Knowing truth gets you to the corner store to buy a spare light bulb. All other things being equal, which is more important? There are clearly cases where the avoidance of error is more vitally good than the knowing of truth. Lastly, some pragmatic arguments are truth-independent. Instead of arguing for the truth of a proposition, they argue that, regardless of the truth, it is better to believe than not to believe. Pascal's wager is the most famous example of a theistic pragmatic argument. It is a truth-dependent argument because the benefits of the belief only come to fruition if the belief is true. Arguments from comfort are examples of truth-independent arguments: "Believing in x makes me feel good". The benefits are granted to the believer regardless of the truth. For pragmatic truth-independent arguments, it seems obvious to point out how they fly in the face of the pursuit of that "vital good" that James mentions. I would hope that any defense of a truth-independent argument is able to explain logically why equal or equivalent benefits of the belief are unavailable to those who lack the belief, and to demonstrate that the benefits ARE available to those that DO have the belief. It is also very important to be clear about what such an argument is really about. Always remember that it is not an argument about the truth of the belief, only about the benefits of the belief. "Believing in God makes me feel good, therefore God exists" is logically invalid. "Believing in God makes me feel good, and I like feeling good" is valid and obvious, but such arguments still have nothing to say about the truth of the belief.
PRAGMATISM'S SELF-IMPOSED LIMITATIONS
James makes the point that limiting belief may prevent him from acknowledging certain kinds of truths.
However, as Alan Wood points out, any way of thinking that restricts belief might conceivably shut us off from certain truths (Wood 2002: 24). Therefore, apart from attempting to believe everything, all epistemic rules fall afoul of this same "flaw". However, I don't see this as a flaw. Can a flaw that is present in all possibilities be considered as such? These limitations are an important part of the imperative to "avoid error", the other imperative being to "know truth" (James 1896: 105). I present this as "the argument from infinite absurdity": the number of things we could take an action in regard to (forming a belief is an action) is effectively infinite. We are finite in time, intelligence, and the observations available to us. Therefore it would be an exercise in infinite absurdity to attempt to form beliefs about absolutely everything. We require rational mechanisms for narrowing our beliefs to the finite. So let's look at the limitations that Pragmatism imposes on itself. Consider again the main premise: when we are faced with a genuine choice about what to believe, and where evidence does not decide the matter, we are free to decide it however we want. First and most clearly, Pragmatism gives way where the evidence is compelling enough. But what is meant by a genuine choice? A genuine choice is one that is living, forced and momentous. A live option is one that is a real candidate for belief. An option is live for a person if that person lacks compelling evidence disconfirming that option, and the option has an intuitive appeal for that person. A forced option is one where the decision cannot be avoided, or where the consequences of refusing to decide are the same as or worse than those of actually choosing one of the alternative options. A momentous option is one that may never again present itself, or on which something of importance hangs; it is not a trivial matter.
(Jordan 2004) Each of these three conditions for a "genuine option", if not met, is grounds for not committing oneself to a belief, course of action, decision, etc., while staying within James' view of Pragmatism. Thomas Edison is alleged to have said, regarding the 1,000 times he attempted to make a light bulb but failed, "I have not failed 1,000 times. I have successfully discovered 1,000 ways to NOT make a light bulb." I'm not sure if this is just an urban legend, but it serves as a nice illustration of what is meant by a live option. Each of those 1,000 times the experiment did not work is compelling evidence that that method is not the way to build a light bulb, and thus Pragmatism would restrict forming the belief that any one of those 1,000 ways was in fact the correct method for building a light bulb. All other potential ways of making a light bulb that have an 'intuitive appeal' would still be live options. Let's go back to the bomb disposal situation mentioned earlier. Instead of 3 hours on the timer, we now have 10 seconds. This is a forced option; there is no time to gather evidence. If you cut the wrong wire the bomb goes off; if you kick your feet up and relax, the bomb goes off. Assuming that you want to keep on living beyond the next 10 seconds, you are forced to make a decision. The consequences of inaction are the same as or worse than those of taking action. Now consider the situation where, after sending your resume (C.V.) out to several different recruitment agencies, you finally get called in for an interview. They interview you and run some standard tests. At the end of the interview they say the job will go to either you or one other candidate, but they are giving you the right of first refusal. You ask for some details of the job, but all they say is that the salary is one million dollars a year. This is a momentous option. Something like this is unlikely to present itself again, and a job with that kind of salary is unlikely to be trivial.
If, on the other hand, somebody tells you that in one hand they have one cent and in the other they have two cents, and asks you to pick one, it would generally be considered a trivial matter.
FURTHER PRUDENT LIMITATIONS OF PRAGMATISM
Live option
Note that I have already said that the Evidentialism presented by Clifford does not prevent hypotheses from being formed and tested. The formation of a hypothesis is not the formation of a belief. That would be starting with the conclusion, and any hypothesis that was a conclusion would be a bad hypothesis, not least by mere definition! Thus at the start of an experiment, Thomas Edison wouldn't have said "Hypothesis: this is how you make a light bulb" but would have been more likely to say "Hypothesis: I am going to see if a light bulb can be made by...". However, some may argue that just by forming a hypothesis a person is forming a belief, because if you don't believe a test will yield results then what is the point? Yet the answer is already present in the (potentially fictional) quotation from Thomas Edison: "I have successfully discovered 1,000 ways to NOT make a light bulb." Even if I were to concede that the person forming a hypothesis is on some level forming a belief (perhaps a better word would be "hope"?), the justification for forming that belief is that it is made to be falsifiable, is formed with the express intention of gathering evidence, and cannot help but increase knowledge. Imagination, of course, would not, could not and should not be caged by such constraints and prerequisites. The suspension of disbelief when you watch a movie is not the same as believing in it. Just because I don't believe that Bilbo Baggins really found the One Ring doesn't mean I cannot be entertained by the story. In our example, Thomas Edison would have wasted his entire life had he treated each of his experiments as separate instead of building his knowledge on each one.
The live option still needs further limitations of the kind imposed by my argument from infinite absurdity. Let's assume that there are an infinite number of ways not to make a light bulb. Thomas Edison was unlikely to have considered that, after all his experiments, there was still an infinity of live options for making a light bulb. Nor was it likely that Thomas Edison would believe in every way to make a light bulb the instant he thought of it, or the instant a concept was presented to him, without testing it. Inductive reasoning is important: "every time I take a teapot and smash it with rocks, I get a poor light bulb"; "every time I use glass and tungsten and pass an electric charge through it, my results are better." This leads to getting closer and closer to the truth until, hopefully, the truth is discovered. Of course, Thomas Edison is unlikely to have started with rocks, a teapot, glass and tungsten; we have the ability to build on the evidence gathered by our predecessors, and thus he would already have known that electric energy can be changed into light energy and that metal can conduct electricity. This is similar to what is described as an "intuitive appeal" for a live option in Pragmatism, but it is a more rational basis for that appeal. "Intuitive" seems to leave things too open to the pitfall of deciding a live option by what "feels good." Thus the live option condition is itself refined into a three-part condition: a live option is one that is a real candidate for belief; an option is live for a person if that person lacks compelling evidence disconfirming that option; and the option is live for a person if it has been discovered through building on previously confirmed reason/logic/evidence and the option is falsifiable. Imagination is free from these constraints, and that is a wonderful thing, but it is important to recognise that if an idea has come completely out of the blue and the idea isn't falsifiable, then it is not something that should be believed.
Recognise it as a product of the imagination. I can see people rising up in protest over these extra constraints, who will probably say that by following these guidelines, many of the amazing discoveries of our history never would have been made. They will probably cite many examples where the people who made groundbreaking discoveries had to make a 'leap of faith'. I would urge these people to seriously consider when the discoverers actually made the leap from the formation and testing of hypotheses to belief in their conclusions, and what caused that change. I submit to them that that particular transition happened at the point where the evidence became compelling enough. I don't see how such restraints as I have suggested would have 'slowed us down' in the slightest. We are almost non-stop observation machines. We saw birds fly and thought... why not? We noticed by accident that bacteria in that Petri dish out of the rubbish didn't grow too near that mould... why is that? We saw some of those dots in the night sky move differently than all the others... can I get a closer look? We can build mathematical models to represent reality, from the simplest axioms to mathematically sound, potentially accurate representations of the entire universe. All discovered without the need for prior unfounded belief.
Forced option
By and large I don't have any problem with this condition. However, clarification is needed regarding whether an option really is forced or not, and this doesn't appear to be an easy thing to determine. The argument that an option is forced seems to be a truth-dependent one. Consider again the situation where you are asked to choose between one hand with one cent in it and the other hand with two cents in it. Add to the situation somebody holding a gun to your head who says they will shoot you if you don't choose one of them. The option is truly forced only if it is a real gun and the person has a real intent to shoot you.
On the face of it, it seems that the argument for the forced aspect of an option is entirely lame in the absence of evidence, and that it is always invalid to define an option as forced without it. However, it is easy to come up with many situations where people do indeed act under the assumption that an option is forced, and most people would be hesitant to call them irrational or accuse them of acting illogically. The first set of examples of when a rational person would consider an option to be forced without evidence is when the purported consequence of inaction will come so soon that it prevents any weighing up of the evidence, and/or so soon that you cannot verify whether there are more options than the one(s) presented. When someone yells "DUCK!" most people tend to duck, not ask for evidence of incoming projectiles. If you are in a car crash and you wake up on an operating table, and the doctor tells you the name of the procedure that is needed to save your life in the next few hours, nobody says "Whoa back there, Chief", enrols in, and passes medical school before consenting. It is clear that here we are using inductive reasoning: past experience tells us that people usually warn us for valid reasons, and past experience tells us that the doctor is there to save your life and had to study long and hard to become a doctor.
Naturally, by using inductive reasoning we are leaving ourselves open to the problem of induction: we are basing our actions on the assumption that the causes and effects of the past will be similar to the causes and effects of the future. As an experiment, walk up to somebody and yell "DUCK!" with feeling, like you mean it. I think the odds are good that they will duck. Repeat several times. Assuming that a projectile of some kind doesn't narrowly miss their head each time you do this, the odds are they will stop ducking pretty quickly. More inductive reasoning on their part! No, they were not being irrational to duck, though I'll leave it up to you to decide how many times in a row they need to be fooled in order to be fairly considered irrational. This is an important example to keep in mind: in the absence of evidence, we cannot make the positive claim that an option is forced, though it is not always irrational to consider it so. It is sometimes possible to infer that the consequences of inaction are not so immediate that there is literally no time for any rational thought. Those who would insist that the option is forced, in the face of doubt, have a burden of proof regarding the imminence of the consequences of inaction, or of incorrect action. Evidence that somebody should duck would be, for example, pointing to a swinging wrecking ball heading in their direction. Evidence that somebody should cut a wire on a bomb in a particular timeframe would be the digits showing on the bomb's timer. It would seem that if the consequences are sufficiently distant in the future to allow for the gathering and consideration of evidence, as well as the consideration of other options, then that is preferable. However, my argument from infinite absurdity can be applied here too. Clifford, in The Ethics of Belief, starts out with a situation involving a ship-owner sending out his ship on the unsupported belief that it was in good condition.
Of course, in the story, the ship sinks and everybody agrees that the ship-owner was in the wrong, but Clifford points out that even if the ship hadn't sunk, the ship-owner's actions would have been the same, so was he not still in the wrong regardless? The wrongdoing came not from the results, nor specifically from the belief itself, but from the reasons for the formation of that belief. In this case Clifford deemed that there was simply not enough evidence to form that belief. So what is the ship-owner to do? Carefully inspect the ship before each voyage? That doesn't sound like a bad idea, but what happens if, after the first inspection, there is enough time for another inspection? Is it rational to inspect the ship again? If yes, then what happens if they finish the second inspection and decide to delay the ship while they conduct a third? Still rational? I think you can see where this is going: to infinity. Again we must use our inductive reasoning, assuming that past flaws with a ship will be similar to present and future flaws, that they take a certain amount of time to develop, and that they are more likely under certain circumstances. They don't just appear out of nowhere and for no reason. Efficient aircraft maintenance schedules are based on these principles today, and I have no reason to think that ship maintenance wasn't similar in Clifford's day. The second set of examples shows further why inductive reasoning is important: a rational person would consider an option to be forced without evidence when not making that decision makes it impossible to seek or learn truth and avoid error. I have already mentioned the problem of induction, one of the larger marks that David Hume left on the world. According to this, strictly speaking there is no solid logical basis for the belief that the past will resemble the future, or for the uniformity of nature.
I am currently undecided on the validity of this when applied to the uniformity of nature (on a macroscopic scale, anyway), as it doesn't seem to address the question of why the future WOULDN'T resemble the past, and I suspect that this delves more into the realm of physics. However, it does seem that the problem of induction fits very well when applied to our personal observations. Again this seems to come down more to our own limited observations than to actual non-uniformity in nature. The classic example given for this is "All swans we have seen are white, therefore all swans are white." If it was 1753 in England and somebody had told you they'd just seen a swan, you could probably safely bet a decent sum of money that the swan was white. Of course, the Australian black swan would never have been observed by anybody you were likely to come into contact with. This doesn't mean that we were being irrational in believing that the future would resemble the past; it is merely cause to keep in mind that there is no absolute certainty with induction. So, my reservations on its validity aside, consider for a moment the other extreme: the belief that the present and future will not resemble the past. This would render the possibility of learning anything impossible; every experiment, every experience would be meaningless and obsolete immediately. You would have no reason to think that just because you are a human lying in bed right now, you might not be falling through the bed and floor the next second, or flying through the ceiling, or a baby flying spaghetti monster in the red spot on Jupiter. No reason to think that just because you mixed an acid and a base and got salt and water one day, you might not get an explosion or a black hole the next time you try it. To sum up, it would be impossible to learn anything. Why does the world have to be one in which we can make any connections between different things and different times?
It doesn't, of course; nature is under no conscious requirement to conform itself to what I think is reasonable. But regardless of the truth of the matter, it seems best to hold the belief that will allow us to progress, just in case it turns out that the future does resemble the past, which, with more and more collective human observations, it does seem to. That belief does not need to be the absolute one, "The future will always resemble the past" (indeed, until we have observed close to everything, that would probably be a pretty arrogant thing to claim), but "It is most likely that the future will resemble the past" is essential for us, from our everyday lives to our most groundbreaking research, and essential for us to have any basis at all for seeking truth and avoiding error. Another question once posed to me was "How do we even know that we are real?" I'm sure many people have considered this with other people whilst under the influence of alcohol or perhaps some other substance. What if we are all just living in the dream of some sleeping entity? What if each one of us is just a brain in a jar and we are living in a world created by our own minds? What if the entire universe is a computer simulation that we're living in? If any of these is the case, everything I observe and experience happens to me within the confines of a fictional environment, and as such either didn't really happen or, worse, there's not even a 'me'! This leads us to exactly the same problem as just discussed in regard to the problem of induction. If I can't even be sure that I exist, and that things actually do happen, then all my observations and all our experiments are again rendered meaningless. There would be no point in attempting to progress; we would have no way to seek truth and avoid error. Again, it seems best to hold the belief that we do indeed exist in reality and that the things we observe and measure actually are happening.
Keep in mind, of course, that these alternative truths postulated by the inebriated amateur philosophers of the world are all ones that by their very nature would be undetectable and have no evidence. I wonder, though: if something like one of these were indeed the truth, what are the odds that we would be able to guess its true nature without evidence? Somewhere between slim and none, I would suspect; the possibilities would be infinite, and so the argument from infinite absurdity would be applicable here too. The only thing we have that even resembles evidence is the evidence for this reality. I guess you could call it truth-dependent evidence! It is only evidence if this is the true reality, but in the absence of all other evidence, it is this reality we should believe in. The third set of examples of when a rational person would consider an option to be forced without evidence is what Clifford discussed in the section of The Ethics of Belief called "The Weight of Authority". At some stage, long ago, it is conceivable that one person could have learned all of non-personal human knowledge; that is, could have been up to date with the latest evidence in all the general areas of human knowledge. This is likely to have been early in our species' history, but after that point it is just something that we have to concede: other people are going to know truths, and be able to avoid errors, that we do not and cannot. It is certainly rational that we require specialisation in order to increase knowledge in each particular field, due to our limited time and capacity, but it does leave us having to accept the testimony of certain people. So how do we decide what is acceptable testimony from these so-called experts? The two important questions that Clifford suggested we ask were "Is he dishonest?" and "May he be mistaken?" The first question is a tough one to answer with any degree of certainty.
If this person, or set of people, is giving us an answer to a question that we truly can't answer ourselves, then it seems we must rely somewhat on our inductive reasoning. How much do we know about this person or these people? Do they have a history of dishonesty? Do they have motives for supplying misinformation? I'm afraid I have no better general framework for ascertaining the trustworthiness of an individual or collective than previous experience with that person or people, or with similar circumstances. If you go to a doctor and they give you a particular diagnosis, odds are you would accept or deny the diagnosis given your previous experiences and knowledge of doctors. If you are wandering down the street and a bricklayer happens to give you the same diagnosis, again you'd probably accept or deny the diagnosis based on your previous experiences regarding bricklayers' expertise on medical matters. For the second question, I will assume an honest person. The second question is something we can answer with a much better degree of confidence. "May he be mistaken?" can possibly be answered almost as well by the question "How did he acquire this information?" Again the requirement for evidence seems to come into play. We generally believe doctors because their knowledge is based on a long human history of ever-increasing knowledge due to evidence. We trust chemists, geologists, and physicists for the same reason. By their very nature, these are fields of knowledge where it is possible to go right back to their beginnings and build your own knowledge up to the level of the expert, if you have the time and inclination ("without ceasing to be man", as Clifford says). Knowledge like this, which has been built up from simple beginnings, tested, and has a wealth of evidence for it, which could be falsified but hasn't been, is trustworthy to a high level of certainty. Not 100% of course, but still high.
What methods of the 'expert' acquiring information might be deemed unacceptable, then? Methods which require no evidence seem to be the obvious answer. A vague feeling, or even a strong feeling, that somebody has about the truth is not a good reason for considering them to be a trustworthy source of authority. If they are unable to point to any evidence, then their claim to authority is dismissible. Private revelation is another poor basis for authority. To an independent observer it is indistinguishable from imagination, hallucination or complete fabrication. It is based on nothing: unverifiable information coming to the "expert" from an unverifiable source. You wouldn't be rational to believe somebody who said "I can't tell you who told me, but Pluto is made of cheese!" It doesn't matter how long the chain of people is that the information has taken to reach you (though the longer the chain, the more you need to be wary of the "Chinese whispers" or "telephone" effect), but at some point there has to be evidence. A chain of people passing information, where the end opposite from yourself disintegrates into a vague "somebody said" or any unverifiable source with no backing of evidence, absolutely needs to be treated with skepticism, as does a similar chain ending in a source that just says "because I said so". I heartily recommend reading some of the articles on www.snopes.com for extensive examples of the kind of beliefs commonly known as urban legends. These are exactly the kind of beliefs that come from such untrustworthy, unverifiable sources as have been discussed here. So we can see that evidence needs to come into the picture at some stage for a person to have any weight of authority, but how can we tell where that evidence came from? Again, it seems we have to use our inductive reasoning.
If we know about the scientific method, we know the requirements for evidence that those fields have, and we know that any scientific theory is only as strong as its evidence. We know that theories must be falsifiable, and the fact that they haven't been falsified, despite all attempts, strengthens their trustworthiness. The way Clifford put it was: "I can be made to understand so much of the methods and processes of the science as makes it conceivable to me that, without ceasing to be man, I might verify the statement." (Clifford 1877) Let's look at some situations where we weigh the authority of various people in various circumstances. If someone is lost in an unfamiliar city, asks for directions from a passing stranger, and is given directions to the desired location, they will generally follow the advice. You could consider this a forced option in the sense that you can either believe this person or remain lost; not believing them leaves you the same as or worse off than believing them. You are also using your inductive reasoning as to where they got their evidence: most of the people in the area you are in will have been in the area before, and will therefore have a better idea of how to get around. This is knowledge you can set about testing, to see if it matches up with reality. You're unlikely to seek the advice of that same random stranger for his or her expertise on deep-sea-vent extremophiles; you have no reason to guess that they have knowledge of any truths of the matter, because you currently have no way of reasoning about the method by which they acquired that information. Although it is possible that this person legitimately has some knowledge on the topic, until you know more about how they acquired the information, it is just interesting speculation. You're unlikely to seek the knowledge of a five-year-old on matters of quantum physics, because you probably know that acquiring knowledge of quantum physics takes a long time.
You can't really grasp such things when you're distracted by potty training and learning basic communication and motor skills. When you already have a good idea about the kind of evidence required for a particular truth, and you know the person is unlikely to have acquired such information, it is rational to file their claims under the category of imagination, hallucination or dishonesty. This would be a rejection of their ‘expert’ status. There are certainly times where people who may reasonably be deemed experts (such as in regards to things they themselves witnessed) honestly believe the information they are trying to impart to you, but they may be mistaken. The best way that I can think of to apply the argument from infinite absurdity would be to consider whether the option presented is a live one. If an option presented by an expert is not live for a person, and the option is otherwise unforced, then that person is not forced into believing the testimony. Human perception is a funny thing. The way I remember an event might be entirely different from the way you remember it, even if we were both present in approximately the same geographic location relative to the event, and we can’t both be right. Say you are a police officer who has just arrived on the scene of a car crash. The first eyewitness you speak to says the blue car pulled out as the lights turned green, and the yellow car sped out of nowhere, crashing into them. Do you believe them? Questions of honesty aside, you probably do. The second eyewitness you speak to says the blue car pulled out while the light was still red, and the yellow car was well within the speed limit. Now what? The third eyewitness you speak to says the blue car did indeed pull out while the light was still red, but there was no yellow car; it was a speeding two-wheeled ice cream van, with the ice cream jingle music playing too quickly, being chased by what looked suspiciously like a velociraptor.
So now you can see the problem: all questions of honesty aside, these people have perceived the same event differently. What if you interview a fourth person, whose story agrees with the first person's? Problem solved? No, that would simply be an argument from popularity, similar to the ancient belief that the world was flat. In light of the disagreement on what really happened among the people who were there, and who should have been the most authoritative experts, it is irrational of you to believe any of these witnesses until you have looked at the evidence of the crash and seen which version, if any, best resembles it. Of course, in a real-life car crash, I see no reason why most of the time the vast majority of eyewitnesses wouldn't agree on what happened, and it would probably match the reality that the evidence points to. I have just used this as an illustration of the huge number of things that experts don't agree on, and as a warning to be wary of how a person's perception of an event, a fact, or evidence can alter their recollection of it. Say you were the first person interviewed. You would be rational to believe that what you saw did indeed happen. You might be surprised when you hear the second person's testimony, because they have just as much "expertise". At this point it would be rational to reconsider what you think you saw: might you be mistaken? That all-important question of Clifford's can just as easily be applied to the self as to another person. If you were the third person interviewed, you might feel pretty disgruntled about how different the first two testimonies were from your own, but reconsider: your experience has left no evidence for it, and it was certainly pretty extraordinary! Might you be mistaken? Is there a better explanation for what happened? Did the event go against your previous experience, or do you commonly see ice-cream-crazed velociraptors?
In other words, before your experience, was the option live for you? While the police officer has no rational grounds for believing any of the eyewitnesses in light of such dissent, he would be entirely rational to dismiss the testimony of the third interviewee. That option presented by that expert is not live for the police officer. This car crash example does once again almost sound like we are heading towards an "argument from popularity": should we just believe when everybody agrees? A well-known logical fallacy doesn't seem to be the best reason to believe in anything. However, if we can infer that the knowledge was gained in an acceptable manner, then we may consider it rational to believe a consensus of "experts". Of course, as already mentioned, such an absolute consensus is rare; there will always be some equivalent of my "third interviewee" whose perception of an event or evidence is incorrect, so we may ignore an insignificant amount of disagreement. Just what an 'insignificant' amount of disagreement is depends on the field of knowledge we are talking about, but in terms of History, for example, a consensus of experts means 95% agreement (Rook Hawkins, some RRS radio show). We may also ignore expert testimony that presents a non-live option, where the expert is unable to provide evidence to make the option live for us. So the forced option has been a bit more clearly defined. With a lack of evidence, it isn't possible to make a logically absolute claim that an option is forced, though it is not always irrational to consider it so, and the exceptions are as follows. If the consequences of the option are so imminent as to preclude the possibility of looking for or at evidence, or of rational thought, then that option is forced.
If we can infer that the consequences are not so immediate, then those who would have us believe that a particular option is forced bear a burden of proof regarding the imminence of the requirement for a decision; otherwise the option is not forced for us. Also, if we can infer that the consequences are not so immediate, then there is a further burden of proof on those who would have us believe: to demonstrate that the consequences of not choosing are the same as or worse than those of choosing. An option is forced if not taking action in regards to it makes it impossible to know truth or avoid error. Due to the requirement for humans to specialise, there are circumstances where we are forced to believe the testimony of "experts". Where we know, or can logically reason, that they acquired the knowledge to provide us with that testimony in an acceptable way, where evidence was the beginning of that knowledge, it is rational to consider the option forced for you in favour of the consensus of the experts, provided the option is also live. If you don't know what the consensus is, and the option is otherwise unforced, then the option is not forced. As we can see, it is not all that common for an option to truly be forced. I think this goes along with what James says: "In scientific questions, this is almost always the case; and even in human affairs in general, the need of acting is seldom so urgent that a false belief to act on is better than no belief at all." (James, 1896) The traditional view of pragmatism also defines an option as 'forced' "where not choosing would have the same or worse consequences than choosing any of the other potential alternatives"; this is essentially identical to the aspect of the momentous option discussed below in regards to deciding whether an option is 'important' or not. It fits in much better with the set of momentous conditions, and I consider it to be part of that section rather than part of the forced option section.
MOMENTOUS OPTION

Clifford seems to be of the opinion that there is no such thing as a trivial belief: "No real belief, however trifling and fragmentary it may seem, is ever truly insignificant; it prepares us to receive more of its like, confirms those which resembled it before, and weakens others; and so gradually it lays a stealthy train in our inmost thoughts, which may someday explode into overt action, and leave its stamp upon our character for ever." (Clifford, 1877). That is to say, no matter how trivial a belief may seem, if there is no evidence for it, it is potentially the first step on a slippery slope down to where you have absolutely no rational standards for belief whatsoever. One day the unfounded believer may 'explode' into overt action, basing his or her actions on a whole slew of ridiculous beliefs that were justified to them by the original belief that was formed with no evidence. When first thinking about this I considered it a bit extreme, and thought about the most trifling, vague and fragmentary examples of belief that I could. The first example I came up with was that on a particular planet (not within our solar system) there exists a small rock that resembles a potato, sitting on another rock that resembles a dinner plate. Surely, I thought to myself, this has to be a harmless little belief. How could something like that explode into overt action and leave its stamp upon my character forever? Then I considered what thought processes, if any, were required for me to come to that belief. Yes, it is certainly possible that somewhere out there such a pair of rocks exists, but realistically, I couldn't possibly have any knowledge of what particular rocks look like on such a faraway planet. That means that either my standards of evidence are so low as to be essentially non-existent, my inductive reasoning is laughable, or options for belief are conjured up by my mind out of nowhere and I just believe them.
In all cases of unfounded belief, at least one of these three things must have played a part in the formation of that belief, and therein lies the inherently momentous nature of belief. None of these three things are useful if I really want to minimise error and maximise knowledge of truth; they are in fact harmful to such endeavours. So I would have to agree with Clifford: as far as beliefs are concerned, I do not think that there are any that are trivial. Of course, there is nothing to say that unfounded beliefs formed and held in the absence of evidence are guaranteed to ‘explode’ into overt action, but it cannot be denied that they are more likely to than if one didn’t form or hold them at all! However, there are a whole range of actions that are not formations of belief. Consider the argument from infinite absurdity. If there are no options that are trivial, does that mean that all options are momentous? Do we go through life deliberating over every option presented to us as if our lives depended on it? For most people, no, we do not; therefore we must have rational mechanisms for deciding which options to consider momentous and which to consider trivial. As summed up by Jeff Jordan: "Momentous option: [1] the option may never again present itself, or [2] the decision cannot be easily reversed, or [3] something of importance hangs on the choice. It is not a trivial matter." (Jordan 2004) The first condition is easy to explain. If you are presented with an option, and you will have more opportunities to decide on that option in the future, then it is not momentous, not yet at least. If the option will never again present itself, then that is basically just another way of saying it is a time-forced option, as just discussed. The second condition is also easy to explain. If you make a decision or take an action that you can't undecide or undo easily, then it is a momentous option.
Examples of this could be getting married, signing up for a mortgage, or deciding to have children. I would also add to this second condition that the consequences of the different actions must be different in order for the option to be considered momentous. For example, if I am playing a game of rugby and, on my way to the try line, I can choose to run around a (VERY slow) player on the right-hand side or on the left-hand side, it isn't something I could easily undo once I'd done it, but the consequences are the same either way: I get past him. So the option is not momentous. How do we decide if something is important, though? The easiest way seems to be to assess whether the balance of outcomes of the option, for myself and those I care about, is positive, negative, or something I am indifferent about. If the balance of the outcomes is positive or negative, then the importance of the option increases proportionally to its 'distance' from the 'indifferent point', where I truly don't care one way or the other. Of course, outcomes aren't always quantifiable in such terms as, for example, "the outcomes of decision 'A' in regards to option 'A' are exactly twice as positive as the outcomes of decision 'B' in regards to option 'A'", but in most cases we would be able to get a good idea of which balances of outcomes are better or worse relative to others reasonably easily. What doesn't seem at all easy is quantifying exactly when an outcome stops being trivial, becomes something you care 'some' about, and then becomes something that is actually significantly important to you. When thinking about this I considered some of my examples from earlier. I would care very much about the decision to accept or decline a job with a $1 million a year salary, and be completely indifferent about whether I choose the person's hand with 1 cent in it or the hand with 2 cents in it. What about all those potential consequences between 2 cents and $1 million, though?
I gradually increased the value of money from 2 cents and thought about how important it was for me. I could easily recognise that more money would allow me to buy more things that I and the people I care about would want. Thus, relative to each other, each value was easily quantifiable as more or less important. I also had no problem classifying each example reasonably safely into the category of trivial or important. This led me to the idea that, as far as the "trivial-to-important scale" is concerned, it is most rational to describe a rule that is able to divide options clearly into either the trivial or the important. The categories that best seem to fit this are "clearly trivial" and "some importance but essentially indistinguishable from trivial in practice" on the side of trivial, and everything else on the side of important. Every decision that is made has what the economists of the world would call an 'opportunity cost', which can basically be summed up as "all the other things you could have done instead". Most decisions that people make in order to gain benefits also have some costs intrinsic to themselves. Weighing up these benefits and costs is what I mean by "the balance of outcomes", and this final perception of balance is what we are putting on the "trivial-to-important scale". Consider the job paying $1 million again, but this time the job is "Test Subject for Bio-Engineered Viruses and Receiver of Beatings", and, as you'll be spending all your time in strict quarantine, you won't be able to see your family while you are employed. It's clear to see that the job would have the same positive aspect as before, but (for most people, I hope) the negative aspects would outweigh the positive ones and bring the balance of outcomes for this option quite safely out of the positive range, right through the indifferent range and far into the negative range.
This balance would then go on the "trivial-to-important" scale quite strongly in the important category, thus making the decision a momentous one. Now to test this out on something a little less clear. Say a few people are presented with the option to do a 5-week regimen of basic exercise, doing such things as running on a treadmill, using rowing machines and general gym activities, or not.

Person 1: First weighs up the balance of the outcomes. The benefits would be a mild improvement to health, physique and self-esteem. The costs would be less time for tv, which he would be spending at the gym working out. He decides that he would like to lose a few pounds, but there is also a tv show on in the evenings that he would hate to miss. He has no particular aversion to working out. Adding up these positives and negatives for himself, he decides that the balance of outcomes is mildly positive for him. Putting this balance on the "trivial-to-important" scale results in it being in the "some importance but essentially indistinguishable from trivial in practice" category, and thus makes the decision a trivial one.

Person 2: First weighs up the balance of the outcomes. The benefits would be a mild improvement to health, physique and self-esteem. The costs would be less time for tv, which he would be spending at the gym working out. He would very much like to gain those benefits; at the moment he does watch tv and enjoys it, but wouldn't care much if he missed the tv shows, and he has also always loved working out in the past. Adding up the positives and the mild negative, he decides that the balance of outcomes is clearly positive. Putting this balance on the "trivial-to-important" scale, while it is clear that this isn't something that is really significantly important, it is still distinguishable from trivial, and thus makes the decision an important one.

Person 3: First weighs up the balance of the outcomes.
The benefits would be a mild improvement to health, physique and self-esteem. The costs would be less time for tv, which he would be spending at the gym working out. He thinks he already looks and feels fine, a new season of his favourite tv show has just started, and he's always harboured a secret contempt for those 'posers' who go to the gym. Adding up the negatives and the essentially neutral, he decides that the balance of outcomes is definitely negative for him, and reasonably strongly so. Putting this balance on the "trivial-to-important" scale, it is definitely distinguishable from trivial, and thus makes the decision an important one. In those cases where we really aren't sure which outcomes are better or worse for us than others, the importance of the option is effectively the same as if we didn't care one way or the other. Even if, after making the decision, we found that there were indeed differences in the outcomes, the importance of the option relates to our feelings before we make a decision or take an action, not after. An important option is a momentous one. What are the rational courses of action in light of an option being trivial or important? Trivial options are obviously going to place very little burden upon us. Assuming the option is not forced, you are free to decide however you like, guess, or not decide at all. Again using the same example of somebody who tells me to choose one hand with 1 cent in it or the other hand, which will have 2 cents in it, I am free to choose his left hand, his right hand, or to tell him this is not worth my time. If the option is a forced trivial one, then it removes the option of not deciding at all, and leaves you with deciding however you like: guesswork. Same example, but with a person with a gun telling you to decide which hand's prize you will get: the option of which hand to choose is still trivial but forced. The option of choosing a hand or not is, of course, NOT trivial!
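The "balance of outcomes" procedure described above can be sketched as a tiny decision rule. This is only an illustrative sketch under my own assumptions: the numeric weights, the width of the "indifference band", and all function names are inventions for the example, not anything specified in the text.

```python
def balance_of_outcomes(benefits, costs):
    """Sum the signed weights the decider assigns to each benefit and cost."""
    return sum(benefits) - sum(costs)

def classify(balance, indifference_band=0.5):
    """Place a balance on the 'trivial-to-important' scale.

    Anything whose distance from the indifferent point falls inside the
    band counts as 'essentially indistinguishable from trivial in practice';
    everything else counts as important (and therefore momentous).
    """
    return "trivial" if abs(balance) <= indifference_band else "important"

# Person 1: mildly positive balance -> trivial in practice
p1 = balance_of_outcomes(benefits=[0.6], costs=[0.4])
# Person 2: clearly positive balance -> important (and so momentous)
p2 = balance_of_outcomes(benefits=[0.6, 0.5, 0.4], costs=[0.1])
# Person 3: strongly negative balance -> important (and so momentous)
p3 = balance_of_outcomes(benefits=[0.0], costs=[0.6, 0.5, 0.4])

for label, b in (("Person 1", p1), ("Person 2", p2), ("Person 3", p3)):
    print(label, round(b, 2), classify(b))
```

Note that, as the text says, the importance relates to the balance as perceived before the decision; the sketch only captures the thresholding step, not how the weights themselves are arrived at.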
Formations of belief are all momentous options, and they require somewhat more of us if we are to remain rational. Forming beliefs in situations that aren't forced demands more than just "deciding however we like", or just guessing. If they are not forced, then we must gather as much evidence as is necessary in order to make the option 'live'. To form an unforced belief about something that is not also 'live' would demonstrate great irrationality and a blatant indifference to the goals of avoiding error and maximising the knowledge of truth. It is necessary to differentiate between the formation of beliefs and those momentous actions that are not the formation of beliefs. This is because actions that are not formations of belief already have inherent limitations on absurdity. By this I mean, you literally and obviously cannot take an action that you cannot take. For example, you cannot walk into your back yard and then fly off into the air like Superman, you cannot become Napoleon Bonaparte, you cannot lift a Boeing 747, and so on. It is, however, technically possible that you could (irrationally) believe that you could do these things. The condition of being 'live' or not is not something that can be applied to non-belief-forming actions, as a live option is something that is "a real candidate for belief", so all the requirements under that condition for gathering evidence are directed at the formation of beliefs. Once a non-belief-forming option is considered momentous, the rational way to decide is the way that will result in the most positive value for the balance of outcomes. The rational courses of action (as far as the non-belief-forming aspects of this scenario are concerned) for the three potential gym-goers mentioned above are as follows: Person 1: Trivial. Assuming the option is not forced, this person is free to decide however he likes, guess, or not decide at all for the time being.
If the option is forced, then he can still choose to do the 5-week regimen or decide not to do it, but suspending the decision is no longer possible. Person 2: Momentous. All other things being equal, it would be rational for this person to decide to take part in the 5-week exercise regimen. Person 3: Momentous. All other things being equal, it would be rational for this person to decide not to take part in the 5-week exercise regimen. The momentous option has now been more clearly defined, then. Due to the inextricable implications involved in the formation of belief, all formations of belief must be held to be momentous. Unforced formations of belief have the highest requirements for evidence for those who want to maximise knowledge of truth and avoid error. Unforced formations of belief must be made 'live' by the gathering of evidence if a rational belief is to be formed. Non-belief-forming options can be momentous or not. If taking a non-belief-forming option is not easily reversed and the consequences of that option are noticeably different, then the option is momentous. If a non-belief-forming option is important, then it is momentous. If a non-belief-forming option is momentous, then it is rational to take the action that results in the most positive value for the balance of outcomes. If a non-belief-forming option is trivial and unforced, then one is free to decide however one likes, guess, or not decide at all.

THE FRAMEWORK IN A NUTSHELL

Thus ends my analysis of evidentialism vs. pragmatism. I have found that neither in its purest form strikes me as entirely rational. I did originally think that what I ended up with would be closer to Evidentialism than to Pragmatism, but what it really seems like is that on some different levels, each must bow to the wisdom of the other.
The self-imposed limitations of Pragmatism go a long way towards making it a functional framework for belief, but I have found that in the vast majority of cases, inserting more requirements for evidence into the various limitations of Pragmatism not only assists in the endeavour of avoiding error, but also doesn't hinder our efforts to acquire true knowledge, nor is our capacity for imagination held back at all. What follows is a concise summary of the framework for belief that I have detailed above, focussed more on the formation of beliefs than on all actions.

1. In all cases, where the balance of evidence is compelling enough, believe the option that the evidence indicates.
2. There is no belief that is not momentous, because even the most seemingly harmless belief is the product of a belief-forming method, and an unsupported belief is symptomatic of an inadequate belief-forming method. With the exception of:
3. It is rational to believe that the universe in which we find ourselves is reality. It is the only way we are able to learn or know anything to any degree. To assert the opposite is usually to self-refute.
4. To take an action in regards to an unforced momentous option has the highest requirement of evidence for anybody that cares about the goals of avoiding error and maximising the knowledge of truth.
5. Argument from infinite absurdity: The amount of things we could take an action in regards to are effectively infinite. We are finite in time, intelligence, and observations available to us. Therefore it would be an exercise in infinite absurdity to attempt to form beliefs about absolutely everything. We require rational mechanisms of narrowing our beliefs to the finite.
6. The process of determining that an option is live for a person tells us that the person doesn't have evidence disconfirming it, that it is a real candidate for belief, that it is built on a solid base of confirmed reason/logic/evidence, and that the option is falsifiable and is thus formed to increase knowledge. In this case one would be justified in forming the low level of belief necessary for forming a hypothesis that is more likely to yield the desired results, gathering evidence, and attempting falsification in regards to the option. I regard it as debatable that such a low level of belief is a belief at all. An option that is not live is a product of the imagination. While an option that is a figment of the imagination may be a starting point for the formation of a hypothesis, and could potentially be true, it is an irrational thing to believe and is less likely to yield the desired results, as its methodology for arriving at truth and avoiding error can be summed up as an unfathomably lucky guess, with statistically unfavourable implications for future belief-forming processes. All options that a person has not heard of are not live options. The most rational thing to do in regards to an unforced 'dead' option is to lack belief.
7. Under circumstances when there is no time to consider evidence, the option is forced and a rational person is free to form a belief immediately in order to avoid consequences or attain benefits. Possibly the most common set of examples are those of self-preservation in the face of immediate danger. Such situations are rare, and those that exploit this in order to force a belief upon a person or persons may be accurately regarded as using scare tactics or be labelled as con men.
8. If we can infer, by whatever means, that the consequences of suspending belief are not so immediate as to make it impossible to gather and examine evidence then the option is unforced, and the rational thing to do is to lack belief, until you do gather and examine the evidence, should you decide to do so. Those that would have you believe under these circumstances have a burden of proof.
9. In regards to personal experience, while we are rational to believe that what we observe is usually a good representation of reality, we are capable of mistakes. In the face of disbelief, it is important to ask oneself: "Might you be mistaken?" Was the option live for you before you had the experience?
10. Questions of honesty aside, we are sometimes forced to believe the testimony of others when we can infer that they have acquired their knowledge in an acceptable way and that their expertise relative to ours in regards to an option or a particular aspect of an option is likely to be higher. Note that in regards to many options, there will be dissent amongst the 'experts'. Obviously, according to this rule, disregard the experts that acquired their knowledge in an unacceptable fashion, and of the experts that are left that did acquire their knowledge in an acceptable fashion believe in line with the consensus, with a strength of conviction relative to the strength of the consensus of experts. If you are unaware of what the consensus is, and the option is otherwise unforced, the rational thing to do is to suspend belief until you gather the evidence regarding the level of that consensus, should you decide to do so. If there is no real consensus of the experts, the rational thing to do is to suspend belief. If the option presented is not live to you and the option is otherwise unforced then the option presented by the expert is not forced for you.
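Rule 10 amounts to a small filtering-and-thresholding procedure: discard experts whose knowledge was acquired unacceptably, then believe in line with the consensus among those who remain. The sketch below is a hedged illustration, not a definitive formalisation; the function names and the idea of a single numeric threshold are my assumptions, with the 95% figure borrowed from the History example mentioned earlier.

```python
def rational_stance(expert_opinions, consensus_threshold=0.95):
    """Decide what to believe given expert testimony, per rule 10.

    expert_opinions is a list of (acquired_acceptably, agrees_with_claim)
    pairs. Experts whose knowledge was acquired in an unacceptable way are
    discarded first; belief then tracks the consensus among the remainder.
    """
    admissible = [agrees for acceptable, agrees in expert_opinions if acceptable]
    if not admissible:
        return "suspend belief"       # no admissible expertise at all
    consensus = sum(admissible) / len(admissible)
    if consensus >= consensus_threshold:
        return "believe"              # strong consensus for the claim
    if consensus <= 1 - consensus_threshold:
        return "disbelieve"           # strong consensus against the claim
    return "suspend belief"           # no real consensus among the experts

# 19 of 20 admissible experts agree (95%); the revelation-based 'experts'
# are simply filtered out before the consensus is computed.
opinions = [(True, True)] * 19 + [(True, False)] + [(False, False)] * 5
print(rational_stance(opinions))
```

A fuller version would also scale the strength of conviction with the strength of the consensus, as the rule describes, rather than returning a flat verdict.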
- Wood, Alan. 2002. “W.K. Clifford and the Ethics of Belief” in Unsettling Obligations: Essays on Reason, Reality and the Ethics of Belief. Stanford, CA: CSLI Publications.
- Jordan, Jeff. 2004. "Pragmatic Arguments for Belief in God" (Stanford Encyclopedia of Philosophy) http://plato.stanford.edu/entries/pragmatic-belief-god/
- Clifford, W.K. 1877. "The Ethics of Belief" http://www.infidels.org/library/historical/w_k_clifford/ethics_of_belief.html
- James, William. 1896. "The Will to Believe" http://falcon.jmu.edu/~omearawm/ph101willtobelieve.html
- Conee, Earl and Feldman, Richard. 2004. Evidentialism: Essays in Epistemology. Oxford: Oxford University Press.
I know it is a long shot due to the length of this work, but if anybody actually reads it, then please let me know what you think. Go easy on me, though; I didn't study this at university, I just tried to work my way through Clifford and James and fix any flaws as logically as I could.
I'll be honest. I didn't actually read the whole thing, primarily because I think you got off to a false start. I'll try to go easy on you, but I'm also going to be blunt. The whole thing seems like one giant straw man of pragmatism. At least it starts out that way.
I'm a hard-core pragmatist. But I'll confess, I've never read any James or Peirce. That's because in my view pragmatism is dead simple. It's also an unbeatable philosophy. Here it is in a nutshell: Use the best ideas. That's it. Use the best ideas. Everything else derives from that pragmatic principle. It is dead simple. And it's also unbeatable.
Why is it unbeatable? Simple. Whenever you have two ideas in conflict, pragmatism always supports the better one. So, if you have a bone to pick with pragmatism, and you propose a better idea, then pragmatism automatically supports that idea, and you're not actually arguing against pragmatism, but FOR it. Pragmatism's simplicity makes it flexible and adaptable. Any effort you spend debunking some derived triviality in favour of a better idea, automatically 'upgrades' pragmatism to include that better idea.
Pragmatism, in my mind, is more like a meta-philosophy. It's a way of choosing between ideas, rather than a fixed idea in itself. Thus, the whole framing of the article 'Evidentialism vs. Pragmatism' is mistaken. If evidentialism is any good, it will be part of pragmatism, but pragmatism will not be limited by evidentialism's limitations if there are better ways of overcoming those limitations. Those better ways will also be part of pragmatism.
Here's a way you can sum it all up in a simple question. Whenever someone points out some supposed 'weakness' in pragmatism, just ask them: "You got a better idea?" If they do, great! It's supported by pragmatism. If they don't, then their critique is empty, and pragmatism stands undefeated. Either way, pragmatism comes out on top.
The only real alternatives to pragmatism are forms of epistemological nihilism, the idea that we can't really know anything at all. I'm not talking about new-age emo 'nihilism', which is little more than dressed up and confused anarchism. I'm talking the real deal: Knowledge is not really possible, literally nothing can be known. Pragmatism is essentially a rejection of epistemological nihilism. It says, Yes in fact we can know some things, and furthermore, we can attain better and better knowledge.
Again, I'm speaking out of my ass. I've never really read the so-called pragmatist philosophers. Frankly, you don't need to, and many of their ideas may be out of date. That does nothing to diminish the core principle of pragmatism, which is to use the best ideas. The old pragmatist philosophers were simply presenting the best ideas they had at the time. Pragmatism per se is not limited by any weaknesses in their conceptualizations or arguments.
Now, with that preamble out of the way, I'll directly address your post:
This quote by you is a mischaracterization (a straw man) of this quote by James:
James did not say that we are free to choose in whatever way, he said we shouldn't limit ourselves with strict rules which prevent us from acknowledging real truths. The key concept here is that there are truths, which are real, which are excluded by the rule. The rule is therefore too strict.
The rest of your post appears to flow from this core misunderstanding (straw man).
The following quote is more or less accurate:
Whereas this following quote is a strawman misinterpretation:
The pragmatist does not make such a claim in the first place. They do not claim that 'a belief unsupported by evidence is always more beneficial than a suspension of judgment'. Nor do they claim that 'a belief unsupported by evidence is not, in fact, sometimes much more detrimental'. You are putting words in the pragmatist's mouth and then saying that they fail to prove the straw man case.
I will give you one case to counter your straw man: the case of expert intuition without supporting evidence. A seasoned fire chief is in a burning building with his team. He suddenly gets a feeling of danger. He doesn't know exactly why, but he decides to go with his intuition and says, "Hey guys, let's get out of here right away!" The team exits the room just in time; the floor collapses behind them.
It was only later, after thinking it over, that the chief realized a subconscious cue had triggered his intuition. Although he was not consciously aware of it at the time, he had felt his feet get warmer. That meant there was likely fire directly underneath them, which apparently is a very dangerous situation for firefighters. There was no clear evidence known at the time to support his decision. Needless to say, if he hadn't trusted his unsupported intuition, he would be dead.
Expert intuition can be very useful (key word in pragmatism), even when there is, by evidentialism's lights, insufficient evidence to support a concrete action. Therefore, I hold the position that the following quote by Clifford is shown to be wrong:
Pragmatism supports expert intuition under certain circumstances where evidentialism would not. It is clear to me that pragmatism wins in this case. If you disagree, I simply ask you, "Got a better idea?"
Lastly, in regards to so-called 'pragmatic' arguments for belief in god, such as Pascal's wager: poppycock. These are not pragmatic arguments, for they do not use the best ideas. One idea in particular, which is extremely useful, and therefore strongly supported by pragmatism, is the principle of Occam's Razor. It slices, it dices. Even after cutting through Zeus and Thor and Yahweh and piles of hardened and caked-on bullshit, it can zip through a deist god as though it had never been used. As sharp as the day it was made. Only 6 easy payments of $19.99.
The key to understanding pragmatism is understanding a pragmatic conception of truth. Truth is like an arrow. If it hits a target reliably, it's true. If it doesn't, it's not true. Truths are about predictions. A pragmatic truth can be used to make good predictions. If it can't make good predictions, it's not true, pragmatically speaking. Belief in god makes no useful predictions. In contrast, trust in an expert intuition (under certain circumstances) can make useful predictions. There are no 'pragmatic' arguments for god-belief. All arguments for god ignore dozens and dozens of pragmatic ideas such as Occam's Razor. Succinctly, they do not use the best ideas.
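The arrow analogy above can be sketched in a few lines of code. This is a toy illustration of my own making, not anything from the pragmatist literature: a "belief" is scored purely by how well it predicts observed outcomes, and the higher-scoring belief is the "truer" one, pragmatically speaking.

```python
# Toy sketch of the pragmatic conception of truth described above:
# a belief is judged solely by its predictive success.
# All names here are illustrative, not from any real library.

def prediction_score(belief, observations):
    """Fraction of observations the belief correctly predicts."""
    hits = sum(1 for situation, outcome in observations
               if belief(situation) == outcome)
    return hits / len(observations)

# Two rival "beliefs" about whether a dropped object falls.
falls_when_dropped = lambda obj: True   # makes a definite prediction
no_prediction      = lambda obj: None   # predicts nothing useful

observations = [("stone", True), ("apple", True), ("book", True)]

print(prediction_score(falls_when_dropped, observations))  # 1.0
print(prediction_score(no_prediction, observations))       # 0.0
```

On this toy scoring, a belief that never issues predictions (like belief in a deist god) can never outscore one that does, which is the point being made in the paragraph above.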
Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!
Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!
Hi Natural, thanks for taking the time to at least start reading it. I will admit that I think I went into my reading on these topics with a bias towards evidentialism, though I never entirely agreed with that statement by Clifford. I think perhaps this has slipped into the early part of the piece, and maybe this is what you are referring to, but I'm not sure.
Your view of pragmatism is indeed dead simple and unbeatable, and I agree with what I think you are also saying: it doesn't really strike me as a philosophy in itself so much as a way of drawing on the strengths of others. However, this seems very similar to being presented with many choices and taking "choose the best option" as your guiding principle. Sure, whatever the best option is, that is the one to choose, but the principle doesn't actually help us find the best option. Regardless, I wasn't attempting to attack, criticise or even contrast your view of pragmatism; I was doing so towards the pragmatism as I saw it from William James.
That quote is not the start I got off to, however. The overall starting point that you have called "false", namely my perception of the position of William James, was laid out in the paragraph above that quote:
"when we are faced with a genuine choice about what to believe, and where evidence does not decide the matter, we are free to decide however we want(Jordan 2004)."
As per the citation (I'm really sorry for anybody reading the above piece who is a much more meticulous citer than myself and winces; I promise to edit and fix things like that up), I got this from the Jeff Jordan article, where he says:
As far as I can see, this is derived from:
He goes on to describe suspension of judgement as a passional decision, which I disagree with on pretty much the same grounds that I think "lacking a belief in god" is different from "believing there is no god".
I think, and hope, this is where you have misunderstood me. I was not trying to straw man the quote "[A] rule of thinking which would absolutely prevent me from acknowledging certain kinds of truth if those kinds of truth were really there, would be an irrational rule" (James 1896: 28); I merely thought that, after my general introduction, this quote from James was a good place to start.
He said our passional nature MUST decide a genuine option that is intellectually open and said leaving the question open is passional. I disagree with this version of pragmatism, and I also agree that the rule of evidentialism, as presented by Clifford, is too strict.
You said
I take this to mean you more or less agree with my judgement on what James was indicating. It is preferable to risk error for the chance at truth. Preferable to what? In my understanding, he meant preferable to believing something on insufficient evidence. This is the basis for my 4 points at that stage of the writing, namely:
1."Why should it be accepted as a general rule that suspension of judgment is inherently inferior?"
2."Strawman argument. Clifford did not advise sitting around and hoping for evidence. (i.e. Evidentialism is obviously concerned with the avoidance of error, but is also concerned with that same vital good that James says pragmatism is)"
3."James seems to presuppose that any single example of truth is more (vitally) good than any single example of error is (vitally?) bad." and
4."some pragmatic arguments are truth-independent."
But then you seemed to do an about-face and said:
So I'm not entirely sure if you agree with me or not! However, I think I can see that I need to be clearer in that particular sentence. I should have said "First, the pragmatist is unable to demonstrate that a belief unsupported by evidence IN REGARDS TO A GENUINE OPTION is always more beneficial than a suspension of judgment IN REGARDS TO THAT GENUINE OPTION and/or that a belief unsupported by evidence is not, in fact, sometimes much more detrimental, EVEN IN REGARDS TO GENUINE OPTIONS."
I go on to explain what "genuine" options are soon afterwards.
I hope with the above clarifications you can follow the point I was intending to make now.
An excellent example, thanks. I think it is an interesting question whether evidence that we are only subconsciously aware of still counts as evidence. Off the top of my head, I would tend to think that yes, it does. I'm not sure though; I reckon this point is potentially up for debate. Even if subconscious perceptions of evidence don't count, then I think the point I go on to make in regards to "forced options" would cover the scenario you describe as well.
Again, this was never my position.
However, I obviously agree that Pascal's wager is ridiculous, and it wasn't your version of Pragmatism that I was working with. To tell you the truth, I didn't actually read a more modern piece on pragmatism; is there any book you could recommend I pick up on the subject? For me, searches tended to bring up the older pieces. I charged ahead anyway, as it was the James version I was planning on contrasting with evidentialism, because it was a piece that was specifically addressing W.K. Clifford's "The Ethics of Belief", which outlined a "strong evidentialism". I initially thought "what could be better than evidence as a rational basis for the formation of beliefs!" so I thought these would be the best pieces to at least start with.
I think that sounds like a BARGAIN for such a useful tool.
Agreed
You seem to have ignored what I said about the pragmatic conception of truth, based on predictions. Of course we can easily choose which are the better ideas by comparing how well they make predictions.
All this mumbo jumbo terminology just seems to be obscuring the central issue. I have no idea what 'passional' means and I'm not really interested in finding out, unless it is a useful concept. I'll wait for you (or someone else) to illuminate its definition and relevance. I honestly think it would serve you better to try to give the important concepts straightforward labels, rather than getting lost in trivialities of obscure definitions.
The central issue seems to be getting lost in your semantic tweaks. The central issue is whether evidence is necessary to believe something, or if it is sometimes useful to believe something without sufficient evidence.
If evidentialism rejects the latter case, then it is wrong. End of story. Let's not make this so complicated.
Now I'd like to make an important distinction. You'll notice that the quotes in the original post talk about beliefs, not knowledge. I would say that to claim that you *know* something, you should stick to evidence. But they were talking about believing things, not knowing things. It is certainly okay to believe some things without evidence, if you have pragmatic reasons to go with the belief. A perfect example is expert intuition. I'd even go with personal (non-expert) intuition, but that's more debatable, so I'm going to go with the more solid case of expert intuition. I do not believe you could make a case that evidentialism supports making decisions based on expert intuition. The key is that, while expert intuition may be shown to be reliable over time, at the time when the decision to act is actually made, there is no evidence to support the decision, only the intuition itself, which may or may not be based on solid sensory evidence, regardless of whether it is subconscious or not. The evidence may never be identified in the end, but the decision is still pragmatically justified.
You never commented on the pragmatic conception of truth, i.e. the idea that makes the best predictions is most 'true', like an arrow is true.
Hi Natural,
I'm at work at the moment, so can't respond properly. However, I want to point out that by the end of the writing I didn't agree entirely with evidentialism or the William James version of pragmatism. I wasn't trying to shoehorn everything into an evidential philosophy, and overall I think what I ended up with is actually closer to pragmatism.
also...
But what do you do before the predictions are made and tested? I would say that once predictions have been verified, i.e. once the idea or hypothesis has made the most and best predictions, then this constitutes evidence.
This is where personal intuition comes into the equation, in my book. I define intuition as the brain's natural ability to make pretty good guesses. First the brain observes the environment, then it starts to see patterns in the environment. Even the process of detecting patterns requires making more-or-less random predictions, some of which pay off and others don't. It's a very similar process to evolution.
So, to directly answer your question: You observe. You experience.
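The evolution-like process described above (make more-or-less random predictions, keep the ones that pay off) can be sketched in code. This is a rough illustration of my own, with all names invented for the example; it is not a claim about how brains actually work.

```python
# Rough sketch of the evolution-like process described above:
# generate more-or-less random variations on a predictor, keep the
# ones whose predictions pay off, and discard the rest.
import random

def evolve_predictor(data, generations=200, seed=0):
    """data: list of (x, y) pairs; we guess y = a*x + b by trial and error."""
    rng = random.Random(seed)
    best = (0.0, 0.0)  # current guess for (a, b)

    def error(params):
        a, b = params
        return sum((a * x + b - y) ** 2 for x, y in data)

    for _ in range(generations):
        # a more-or-less random variation on the current best guess
        candidate = (best[0] + rng.uniform(-1, 1),
                     best[1] + rng.uniform(-1, 1))
        if error(candidate) < error(best):  # did the prediction pay off?
            best = candidate
    return best

# Pattern hidden in the observations: y = 2*x + 1
data = [(x, 2 * x + 1) for x in range(10)]
a, b = evolve_predictor(data)
print(round(a, 1), round(b, 1))
```

The evolved guess ends up predicting the observations far better than the starting guess, even though every individual variation was random: variation plus selection by predictive payoff, which is the parallel to evolution being drawn above.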
That's what I thought I did when I expressed it as "free to decide however we want"
I agree, but of course that doesn't mean that James' view of Pragmatism is correct.
Perhaps, but I didn't make such a distinction between beliefs and knowledge. I think the difference between justified knowledge and justified beliefs is pretty negligible. However, I did consider many situations where an unjustified belief may still be rational and pragmatically useful.
Perhaps, but I have attempted to define such situations.
I don't see how the fact that the expert has such a high level of expertise in a given circumstance that the assessment of evidence happens at a subconscious level negates the fact that the evidence is there. Maybe we can eventually identify what that evidence was, and maybe we can't. I agree that opinions of experts are important and useful. I do talk about experts as well, and why I would believe them or not. I even talk about personal experts as well.
I wasn't trying to. Not Clifford's version of evidentialism anyway.
And this is where I talked about live options, forced options, momentous options etc. I think it gives a bit more in depth reasoning of the justification behind suspension of belief and also situations where belief without evidence might be rational.
Could you explain this a bit more? I don't see how the process of detecting patterns necessarily must come before the prediction of patterns, let alone random predictions. In fact, I would say that almost all prediction of patterns comes after the initial observation and detection of said patterns: "This happened before, I predict it will happen again." That is what I considered to be inductive reasoning.
Well, that is a process I am a fan of. What I think you have described seems reasonably similar to what I discussed with the Thomas Edison example in the part about 'live options'.
Yes indeed. And most of the time, you collect evidence from those observations and experiences to help form your beliefs. However, there are reasons to stop collecting evidence and form beliefs.
As an example, if Jesus Christ appeared before me and threw a spear in my direction, I would form whatever beliefs were forced upon me to avoid the momentous immediate danger and get the hell out of there. Then, when the immediate danger had passed and I had time to assess the evidence, I would get myself to a doctor for testing, and I would not believe that it was Jesus risen from the dead to attempt my impalement. That is because a living Jesus is not a live option for me, and, given the lack of evidence for a living Jesus, there are other possible live options that are inherently more rational courses of investigation.
This is just an off the cuff example, sorry if it doesn't illustrate my point as well as I think it does. I do intend to create a flow chart of the framework for belief that I came up with, but I need time (and some nice free flow chart software!)
Regardless of how blunt you are, I appreciate your feedback, it's difficult to get people to read something of that length and talk to me about it.
Not a bad paper, but entirely too lengthy, especially since Clifford's paper and James' paper are a heck of a lot shorter than this one. But I will not leave that as my criticism.

I'd have to say Clifford combines ethical responsibility with rational epistemic belief in a way that is not entirely warranted, and a person believing something might not necessarily be as bad as Clifford would like. Given that, there is reason to doubt whether it is wrong all the time, at all places, to believe something on insufficient evidence. Suppose I have cancer, and my belief that I will get better actually helps me get through treatment, even though the likelihood of me getting better is small. I have insufficient evidence, yet it seems okay for me to have that belief. Hence Clifford's thesis seems a bit strong.

Most contemporary evidentialists only use his thesis as an epistemic thesis. That is to say: it is epistemically wrong for me to believe something on insufficient evidence; or, knowing something requires me to have the appropriate amount of truth-conducive evidence; or, if two individuals have the same truth-conducive evidence, they either both know or both do not. Pragmatic interests and stakes don't matter when it comes to knowing. So in the cancer case above, it might be morally okay for me to believe what I do, but it's epistemically wrong for me to say that I know that I will get better. This seems right, but is it really the case that pragmatic interests and stakes don't matter when it comes to an epistemic belief?
James challenges Clifford on the idea that it is morally wrong for one to believe something without sufficient evidence. He also implicitly challenges the contemporary evidentialist's ideas that knowing something requires the appropriate amount of truth-conducive evidence, and that if two epistemic agents have the same truth-conducive evidence, they either both know or both do not. By suggesting that epistemic beliefs are not entirely a matter of having truth-conducive evidence, he implies that the pragmatic interests and stakes of the epistemic agent do matter when we attribute knowledge.
I personally side with James: Clifford's thesis is too strong, and I think our doxastic lives are more complicated than Clifford would like. Pragmatic interests and stakes do matter when we attribute knowledge to an epistemic agent. Hence Clifford's thesis is false. Why? Because the following principle seems true:
IF S knows that A is the best action, S should do the best action.
That principle is enough to undermine epistemic purism and contemporary evidentialism.
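For what it's worth, the principle can be written semi-formally. The notation below is my own rendering, not anything from the posts above: read $K_S(p)$ as "S knows that p" and $O_S(q)$ as "S ought to bring it about that q".

```latex
% K_S(p): S knows that p;  O_S(q): S ought to bring it about that q
\forall S \,\forall A :\quad
  K_S\big(\mathrm{Best}(A)\big) \;\rightarrow\; O_S\big(\mathrm{Do}(A)\big)
```

The anti-purist reading would then be that whether $K_S(\mathrm{Best}(A))$ holds can vary with S's practical stakes, which is exactly what the purist forms of contemporary evidentialism deny.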
Thanks very much for the feedback. I can appreciate that it is a bit on the lengthy side of course, but at the same time I can't think what I should remove while still being thorough.
I agree that Clifford's rule was too strict and doesn't take into account many different factors and situations, but that didn't lead me to agree entirely with James. While the framework for belief I ended up with does seem to be closer to James than Clifford, I think it is helped by a more detailed explanation of where evidence is required, and gives more justification for the suspension of judgement where evidence isn't conclusive.
Indeed, this was something I mentioned in the example that I gave regarding the multiple witnesses to a car accident, and what Clifford talked about a little bit in his section regarding the weight of authority.
Seems simple enough, but if S doesn't know for sure what the best action is, should they perform an action anyway? James seems to advocate that any rule that would prevent him from performing an action, when that action was the best action, isn't a very good rule, because he might miss out on a vital good. I argued that there is no guarantee that action is intrinsically better than no action, and tried to go for something that more often encompasses the general goals of 'know truth' (or do the best action) and 'avoid error'.
Again, thanks for the feedback.