Acceptance, Belief, and the Question of Informal Fallacies

By Daniel Tippens

Introduction

Informal fallacies have become a hot topic in some circles. An informal fallacy is an argumentative move that may be psychologically persuasive, but is logically incorrect. [1] For example, an argument from authority is an informal fallacy, because it involves basing one’s conclusion on the premise that one is an expert with regard to something related to the conclusion. This can be quite persuasive. Indeed, many university classes only work because students buy into an implicit appeal to the expertise of the professor. That one is an authority on the matter, however, provides no guarantee of the truth of what one has said, and this is why such arguments are generally taken to be fallacious.

It is quite common for the charge of an informal fallacy to be taken as a decisive response to certain arguments. It is considered a hands-down, “gotcha!” move. Recently, however, researchers have come to think that the kind of reasoning involved in informal fallacies may actually be sufficient justification for belief, or at least that informal fallacies may boost the justification for a belief. Let’s consider a well-known argument in the philosophy of mind known as the argument from hallucination. The argument goes like this:

Suppose that John is hallucinating a black whale.

  1. John experiences blackness.
  2. There is nothing that is black in front of John – the experience is a hallucination.
  3. A hallucination is a mental representation of something – a sense datum.
  4. Hallucinations and veridical (accurate) experiences are subjectively indistinguishable.
  5. Since the two experiences seem indistinguishable, they are identical.
  6. John’s veridical experience of blackness must also be a mental representation – a sense datum (from 5).
  7. All experiences are subjectively indistinguishable from their hallucination counterparts.
  8. All we are aware of are sense data.

This famous argument is essentially an argument from ignorance: a lack of evidence for not-P is taken as evidence for P.

Let’s look at the move from premise 4 to premise 5. Since a hallucination of blackness is subjectively indistinguishable from a veridical experience of blackness, the two experiences are identical. In other words, since we have no evidence (introspectively) that a hallucination is not identical to a veridical experience, the two experiences are identical. A lack of evidence for non-identity is taken as evidence for identity. So, despite the appeal to ignorance, the argument seems to offer sufficient justification for the belief that we are only aware of sense data in our experience. What’s going on?

On the Bayesian account of informal fallacies, depending on the background conditions in which an appeal to an informal fallacy is made, the justificatory status of a belief that rests on the informal fallacy may vary. If I am talking with a known professional physicist, I’m justified in believing what he tells me about his field of study because he is an authority on the matter. But, if a biochemist tells me which car I ought to buy, on the grounds that he has a PhD in biochemistry, his appeal to authority does not justify the belief that I ought to buy this car, rather than that one.  In the first case, the fact that a physicist is telling me about physics increases the probability that what he is telling me about physics is true, but in the second case, the fact that a biochemist is telling me about cars doesn’t increase the probability that his choice of car is the right one.
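The contrast between the physicist and the biochemist can be put in explicitly Bayesian terms. The sketch below is my own illustration with made-up numbers (none of the figures come from Hahn and Oaksford): testimony raises the probability of a claim only insofar as the expert is more likely to assert it when it is true than when it is false.

```python
# Illustrative Bayesian update for an appeal to authority.
# All probabilities here are assumptions chosen for the sketch.

def posterior(prior, p_say_if_true, p_say_if_false):
    """P(claim is true | expert asserts it), by Bayes' theorem."""
    numerator = p_say_if_true * prior
    return numerator / (numerator + p_say_if_false * (1 - prior))

# Physicist on physics: his assertion tracks the truth well.
physics = posterior(prior=0.5, p_say_if_true=0.9, p_say_if_false=0.1)

# Biochemist on cars: his assertion is independent of the truth.
cars = posterior(prior=0.5, p_say_if_true=0.5, p_say_if_false=0.5)

print(round(physics, 2))  # 0.9 -- the testimony raises the probability
print(round(cars, 2))     # 0.5 -- the testimony leaves the prior unchanged
```

In the first case the expert’s say-so carries information, so the posterior rises well above the prior; in the second it carries none, so the “authority” leaves the probability exactly where it was, which is why the appeal is fallacious there.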

It is important to note that thus far, my discussion about informal fallacies has been restricted to how they relate to the justificatory status of a belief. But I also will want to consider whether an appeal to an informal fallacy, under conducive background conditions, warrants not only belief, but also acceptance, by which I mean that we treat whatever is under consideration as true for practical purposes, even though we don’t really believe it. I will conclude that there are occasions on which informal fallacies may confer justification for belief, but not acceptance. That is, they may justify my belief in something, but will not justify my acting as though the belief is true.

There are two reasons for pointing this out. First, many of the examples employed in the Bayesian approach to informal fallacies have practical undertones to them, so it is worth demonstrating that acceptance and belief can come apart. Second, a significant part of the interest in informal fallacies comes from their practical consequences. Since my friend is an expert on wine, for example, I can trust what she says when deciding which wine to purchase for dinner. But, if informal fallacies may fail to confer justification for acceptance, even while providing justification for belief, then we will need to be more careful when thinking about how an informal fallacy informs our decision-making process.

Acceptance and belief

Here, I take a belief in a proposition to have three features:

  1. A belief is a mental state in which somebody takes the world to be a certain way.
  2. A belief is persistent.
  3. A belief disposes us to report that we take the world to be a certain way.

What it means for a person to take the world to be a certain way is just to say that he endorses a certain proposition as accurately reflecting a certain state of affairs. If I believe the world is round, for instance, then I take the world to be round. A belief is persistent insofar as I continue to hold it, so long as I don’t encounter any defeating evidence. Of course, when I am asked if I think the world is such-and-such, I will answer, “yes,” if I believe the proposition. Belief disposes us to report how we take the world to be.

Turning, now, to acceptance, it has the following features:

  1. Acceptance is a mental state in which somebody treats a proposition as true for practical purposes.
  2. Acceptance may or may not be persistent.
  3. Acceptance may leave us hesitant to report that the world is a certain way, when asked.

To treat a proposition as true for practical purposes is just to assume the proposition to be true, while not necessarily believing it. For example, a physicist might believe that quantum mechanics is true, but not accept it. If he wants to figure out how to get a satellite into orbit, he may treat Newtonian Mechanics as true for practical purposes. [2]

Acceptance may or may not be persistent. Once a physicist has figured out how to send a satellite into orbit using Newtonian mechanics, he may never accept Newtonian mechanics again. However, sometimes acceptance is persistent. A person might not believe there is an external world, but he may accept that there is an external world for his entire life. David Hume and Thomas Reid are good examples of philosophers who accepted that there was an external world, even though they didn’t believe it.

Belief and acceptance can overlap. I both believe and accept that my houseplant needs to be watered, in order to survive. But as indicated above, they can also come apart and so can their epistemic statuses. As already seen, I might be justified in accepting a proposition, but not justified in believing it. But there are also cases where I might be justified in believing a proposition, but not accepting it. This kind of case is what we will investigate in the next section.

It would be difficult to classify which kinds of factors uniquely justify acceptance over belief and vice versa. I cannot do so here. But what is important, as mentioned before, is that belief and acceptance come apart. I can believe something but not accept it for pragmatic reasons. Let’s now take a look at how this happens in the context of informal fallacies.

Acceptance, belief, and informal fallacies

Let’s examine a case in which both a belief in and acceptance of some proposition is justified by appeal to an informal fallacy. [3]

  1. There is no evidence to suggest that drug x is not safe, so I have reason to believe that drug x is safe.

(1) seems to confer justification for the belief that drug x is safe, when certain background conditions are met: past clinical trials have shown no detrimental effects or increased mortality rates, etc. When past studies have given us no reason to believe that a drug is unsafe, it seems reasonable to believe that the drug is safe, since the lack of evidence in those past studies increases the likelihood of the drug’s safety. Additionally, should we fall ill, most of us would use the drug, once we have learned the facts just mentioned.
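The thought that a lack of evidence of harm can increase the probability of safety is itself expressible in Bayesian terms. Here is a minimal sketch with assumed numbers (the prior and the per-trial detection rate are illustrative inventions, not clinical data): if the drug were unsafe, each trial would have some chance of revealing that, so repeated clean trials make the “unsafe” hypothesis progressively less probable.

```python
# Sketch of why "no evidence of harm" can raise P(drug is safe).
# The prior and detection rate are assumptions for illustration only.

def p_safe_given_clean_trials(prior_safe, detect_rate, n_trials):
    """P(safe | n trials, none showing harm).

    If the drug were unsafe, each trial would reveal harm with
    probability detect_rate; a safe drug never shows harm here."""
    p_clean_if_unsafe = (1 - detect_rate) ** n_trials
    return prior_safe / (prior_safe + (1 - prior_safe) * p_clean_if_unsafe)

# With a 50/50 prior and a 40% chance per trial of detecting real harm,
# one clean trial already lifts the posterior to 0.625.
for n in (0, 1, 5, 10):
    print(n, round(p_safe_given_clean_trials(0.5, 0.4, n), 3))
```

With no trials at all the “argument from ignorance” confers no support, which matches the Bayesian verdict that the fallacy is only non-fallacious under the right background conditions, namely, trials that would have detected harm had there been any.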

Now consider the following two scenarios. These are adapted from a paper by Jacob Ross and Mark Schroeder [4]:

Case One:

Sam is a well-known competitor in memory events. He is a professional when it comes to remembering things: people, events, whatever. Additionally, he remembers things in great detail. However, you don’t personally know Sam very well. You were introduced to him just today, when a mutual friend asked you if Sam could crash on your couch. But your friend did tell you that Sam is a well-known professional “rememberer.”

You are preparing your lunch for the next day with Sam. You make your favorite sandwich, a classic peanut-butter and jelly sandwich. Sam makes his standard lunch: ham and cheese. You both put your sandwiches in plastic bags and place them into the refrigerator. Sam places his on the right-hand side, and you place yours on the left-hand side. In the morning, you both reconvene in the kitchen to grab your sandwiches. However, you have forgotten if you put your sandwich on the left or on the right. Sam says, “you put your sandwich on the left. Trust me, I am a rememberer.” So, you grab the sandwich on the left and eat it. Sam was right.

In this case, you have appealed to the informal fallacy of appeal to authority. Sam is an expert rememberer, so when he tells you that your sandwich is on the left, you seem justified in believing him. Additionally, you seem justified in accepting the proposition that your sandwich is on the left. Indeed, you ate it. But now take a look at case two.

Case two:

All the conditions from case one are the same, except that after you both reconvene in the kitchen to grab your sandwiches, you see a sign on the refrigerator that says, “The ham and cheese sandwich has been poisoned, and if you eat it you will have a bad headache for 2 days. The placement of the sandwiches has not been altered. Cheers.” Sam says, “don’t worry, I remember that you put your sandwich on the left.” You don’t eat the sandwich. [5]

In this case, you still seem justified in believing Sam.  What you don’t seem to be justified in doing, however, is accepting what he says. Indeed, you are likely to respond to Sam, “I believe you, man. I really do. But I just can’t risk eating the sandwich on my left.” The risk associated with him being wrong is just too high for you to accept what he says, in your decision-making.

In the first case, an appeal to authority justified your belief that Sam was right, as well as your acceptance of what he has said. In the second case, though your belief was justified, your acceptance was not. This kind of situation happens often, when high-risk practical problems undermine justification for acceptance, but not belief.

The situation just described is one in which there is a substantially high risk in taking the wrong advice. But many cases in which we appeal to informal fallacies involve more difficult tradeoffs between potential risk and benefit. Indeed, I think this sort of problem arises whenever we appeal to informal fallacies for practical purposes, and thus, we must be careful not to conflate justification of belief with justification of acceptance.

Conclusion

A significant reason why we are concerned with the justificatory benefits of informal fallacies is because of the role they may play in practical decision-making. Informal fallacies may justify beliefs, when taken against certain background conditions, but they may not always justify acceptance of those same beliefs, under those same conditions. It is worth remembering, then, that just because an informal fallacy confers justification for a belief in a proposition, it may not also confer justification for that proposition’s acceptance.

Daniel Tippens is co-founder of The Electric Agora. He is also a research technician in the S. Arthur Localio Laboratory at New York University School of Medicine.

Endnotes

  1. Hahn, Ulrike and Oaksford, Mike, “The Rationality of Informal Argumentation: A Bayesian Approach to Reasoning Fallacies,” Psychological Review, 2007.
  2. Much of this section comes from Ross, Jacob, Acceptance and Practical Reason, Dissertation (Rutgers), 2006.
  3. See endnote [1].
  4. Ross, Jacob and Schroeder, Mark, “Belief, Credence, and Pragmatic Encroachment,” Philosophy and Phenomenological Research, 2011.
  5. It is worth noting that Ross and Schroeder use cases like this in the context of showing how pragmatic factors can affect one’s credence; I instead use these cases to show something about acceptance.

13 Comments

  1. Great essay! Some questions/observations:

    1. It’s always struck me as a bit odd that we call informal fallacies fallacies even when they are often necessary for daily life. For example, you write:

    “An informal fallacy is an argumentative move that may be psychologically persuasive, but is logically incorrect. For example, an argument from authority is an informal fallacy, because it involves basing one’s conclusion on the premise that one is an expert with regard to something related to the conclusion.”

    To describe such things as fallacies strikes me as presupposing an idealized reasoner, rather than looking and seeing how people often use them in daily life. For example, the argumentum ad baculum (appeal to force) is a fallacy, but only if we presuppose that both parties are rational. If one is speaking to a crazy person or giving a mild threat to one’s child to get them to act reasonably for their own safety, it is unclear how that is fallacious reasoning, although it is still an appeal to force.

    2. If we say that I believe you, but because of risk cannot accept what you say, then it seems to me that we’d need to add something to the description of beliefs, viz: that I believe (persistently) that the world is a certain way, but not so strongly that that belief cannot be doubted. Per the last example, it seems to me that behind the decision to believe but not accept lies an ontological (?) presupposition as to what is possible in terms of what is able to undermine the belief, namely that some very strange person left the message and that the message is true. But what if the message said that you would be ill because the sandwich had been sprinkled with pixie dust? Thus, the belief can be undermined if there is a perceived chance that what threatens the belief could be true. But then, why isn’t a belief simply very strong acceptance that can be undermined given a relevant threat?


  2. @mpboyle56,

    Regarding (1): I agree with you that if we investigate informal fallacies under the context of practical reasoning, it is really weird to call them fallacies at all, given that the term has a negative connotation to it.

    Regarding (2): Very interesting concern. I take it you are suggesting that perhaps there isn’t really a belief/acceptance distinction; rather, we should define belief in terms of acceptance. Say, there is really only belief, which is just very strong acceptance. This came out in what you said here: “Thus, the belief can be undermined if there is a perceived chance that what threatens the belief could be true. But then, why isn’t a belief simply very strong acceptance that can be undermined given a relevant threat?”

    I think my only concern with saying that belief is simply very strong acceptance is cases like David Hume and the physicist.
    David Hume, it seems, did not believe that there is an external world, yet he seemed to persistently act *as though* there were an external world. In this case, either you would have to say that he was wrong about his own beliefs and actually did believe there is an external world (since he seemed to very strongly accept that there is an external world), or you have to give up the idea that belief just is strong acceptance.

    The case of the physicist is also very similar: it seems pretty clear that he believes quantum mechanics, but he accepts Newtonian mechanics.

    I hope I haven’t misrepresented your concern. Feel free to clarify things to me if I have.


  3. “D is a memory expert. D says that sandwich on the left is mine. Therefore the sandwich on the left is mine”

    The logical fallacy there is non-sequitur. We have “A,B therefore C”. That should warrant neither acceptance, nor belief.

    There is a missing implied premise – “if a memory expert says the sandwich on the left is mine then the sandwich on the left is mine”.

    Therein is the room for doubt, even if we are confident of the first two premises. Do off-duty memory experts have more reliable memories than the average person, on average?

    After all a memory expert uses extreme concentration and specific strategies when professionally remembering. How is their memory when they are not doing that? Is it not possible that off-duty memory experts have worse memories, on average, the way that brilliant mathematicians sometimes suck at everyday arithmetic?

    For this reason I would not even take a fallacious argument as even increasing the probability that something was true.

    Rather I would look for what is the implied valid argument, because that usually supplies the missing premise that is the crux of the argument.

    “Drug X has been tested many times and there has never been a link shown between it and condition Y, therefore there is no link between it and condition Y”.

    The missing key premise is: “if drug X has been tested many times then a link would have shown between it and condition Y”. Again there is room for doubt there. It would only be true if the tests in question would show up the link if it was there.

    It would be misleading to take the fallacious form of the argument as increasing the probability that a belief is true. Completing the inference allows us to ask the right question – “has it been subjected to tests which would show the link, if a link existed?”.

    If we are talking about MMR and autism then we can get a confident “yes” and we have a real basis for confidence that the link does not exist.


  4. Gee, this covers a lot of ground. You don’t mention “knowledge” once, though “justification” does make an appearance, which is I guess where the informal fallacies fit in. Reading the SEP article (and the recent Boudry et al paper), I get the impression that the modern fallacies are taxonomic rather than diagnostic – this person’s belief is likely to be wrong for a host of reasons, but the form of the justification they give when asked is ad such-and-such. This does slide into the psychology of motivated reasoning, bias, cognitive dissonance, as well as into sophism/rhetoric where there is no real argument at all eg ad baculum etc.


    This is interesting but, perhaps somewhat similar to what mpboyle56 is saying, I think that it mixes up formal deductive reasoning, pragmatic inductive reasoning, and risk management, perhaps to make something appear surprising that really isn’t. Of course fallacies are good heuristics – that is precisely why we are so prone to invoking them. I would assume that this is widely accepted as their, let us say, evolutionary origin. And the situations where we tend to believe in something but aren’t sufficiently sure to bet a lot on it boil down to risk management, again an endeavour very different from formal logic.

    What would perhaps be helpful is if more armchair philosophy minded people would realise that there is indeed a distinction between formal logic and, say, inductive reasoning, and that the latter works (as a heuristic!), has its uses, and does not need to be justified through deduction. In fact the utility of formal deductive reasoning is rather limited in everyday decision making or even just in figuring out how the world works.

    On the other hand, I still believe that ‘gotcha’ moments of pointing out fallacious reasoning have their legitimate use – specifically as replies to people who try to use a fallacy as the decisive argument in a controversy.


  6. @dantip,

    Thanks very much for the reply. Quick clarification: my mention of the ad baculum is specifically in reference to a secondary use, assuming prior cogent and justified but nonetheless ineffective reasons. A primary use would of course be a fallacy of might makes right.

    I would push a bit further still and argue that informal fallacies, when not fallacious, can be cases of inductive reasoning, as when I take my car to the mechanic and rely on an appeal to authority. So I think that not only is there a problem in terms of connotation, but the denotation may also be off. “Informal fallacies” of this sort are simply instances of inductive reasoning necessary to live our daily lives.

    Rather than posit a division between Hume’s beliefs and actions, perhaps its more accurate to describe Hume as limning the limits of reason itself. For example, in the Enquiry Concerning Human Understanding (Part XII) he says this:

    “The Cartesian doubt, therefore, were it ever possible to be attained by any human creature (as it plainly is not) would be entirely incurable; and no reasoning could ever bring us to a state of assurance and conviction upon any subject.”

    And then, a bit further on:

    “And though a Pyrrhonian may throw himself or others into a momentary amazement and confusion by his profound reasonings; the first and most trivial event in life will put to flight all his doubts and scruples, and leave him the same, in every point of action and speculation, with the philosophers of every other sect, or with those who never concerned themselves in any philosophical researches. When he awakes from his dream, he will be the first to join in the laugh against himself, and to confess, that all his objections are mere amusement, and can have no other tendency than to show the whimsical condition of mankind, who must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them.”


  7. As a Pragmatist, the only interesting distinction between belief and acceptance, is that we often talk about belief in concepts rattling around our brains with no immediate use or behavioral response – I believe the earth orbits the sun, but so what? For me, the belief only triggers behaviorally in debates about beliefs, like debate with a flat-earther, or how much money we should spend on science education, etc.

    Otherwise, the border between belief and acceptance in practice seems to me to be a bit fuzzy. It is true that we often act on what we can easily recognize are really probabilities, and I suppose this is what is meant by acceptance. The problem is that what we hold as ‘beliefs’ (taking the world to be a certain way), that we act upon, are also really only probabilities. Pragmatically, the only difference here is that ‘beliefs’ are assumptions we act upon that tend to be recurrently re-enforced through experience. I assume I still have a job, because every day I show up to work and nobody tells me to go home; at the end of the week I get paid; so I ‘believe’ myself to be employed.

    Let’s raise an issue the article doesn’t mention. Say the police have me under suspicion for some reason, and bring me in for questioning. ‘Are you employed?’ they demand. ‘I believe so,’ I answer. ‘Never mind your beliefs! Are you or are you not employed?!’

    I remark this, because social situations often require us to state strong probabilities to be fact.

    In the sandwich situation, do my choices change if my guest simply says, ‘yours is on the left’? In the first case perhaps; in the second perhaps not.

    The rhetorical function of the enthymeme (without an enthymemic structure you can’t have an *informal* fallacy) is to abbreviate demonstrations for those with lesser knowledge bases or time constraints. Consequently, the audience is asked to suspend their critical faculties. Obviously this involves persuading others to behave in certain ways (‘eat that sandwich, not this one’).

    One should always check the sandwich before eating it.


  8. Hi Robin,

    —-“The logical fallacy there is non-sequitur. We have “A,B therefore C”. That should warrant neither acceptance, nor belief.

    There is a missing implied premise – “if a memory expert says the sandwich on the left is mine then the sandwich on the left is mine”.—-
    ____________________________________________

    Assuming the bayesian approach, the implicit premise that you get from bayesian reasoning would be something like “if a memory expert says that the sandwich on the left is mine, then the sandwich on the left is more likely to be mine.”

    So when you conclude “the sandwich on the left is mine” you have increased justification for that belief, since it was more likely that it would be true.

    Presumably, there is an increased likelihood that your conclusion is true, and so supposedly you have increased justification for your beliefs.

    I am inclined to think this reply could extend to your concern about the drug case as well? If I missed something please try to get through to me again. I might have tunnel vision!

    —-“Therein is the room for doubt, even if we are confident of the first two premises. Do off-duty memory experts have more reliable memories than the average person, on average?”—-
    ____________________________________________

    I was assuming this, by hypothesis.

    Hi DavidDuffy,

    Yeah, I didn’t mention knowledge primarily because this paper was on practical decision making, and typically questions such as “am I justified in believing x so I can do x?” or “am I justified in accepting x so I can do x?” are the questions we care about in our decision-making process. Knowledge rarely seems to enter into our deliberations.

    ” I get the impression that the modern fallacies are taxonomic rather than diagnostic – this person’s belief is likely to be wrong for a host of reasons, but the form of the justification they give when asked is ad such-and-such. ”
    ____________________________________________

    Love this way of articulating how to view informal fallacies!

    Hi mpboyle,

    Thanks for clarifying and saying more, I was curious what you would have to say about this topic considering your interest in critical thinking, so I am really enjoying this exchange 🙂

    “I would push a bit further still and argue that informal fallacies, when not fallacious, can be cases of inductive reasoning, as when I take my car to the mechanic and rely on an appeal to authority. So I think that not only is there a problem in terms of connotation, but the denotation may also be off. “Informal fallacies” of this sort are simply instances of inductive reasoning necessary to live our daily lives.”
    ____________________________________________

    I suspect I would agree with you, actually (at least if I endorse the Bayesian account). Isn’t the Bayesian approach to informal fallacies saying that permissible cases of informal fallacy appeal are actually sophisticated cases of inductive reasoning (specifically, Bayesian reasoning)? For them, informal fallacies are fallacious when appealed to under background conditions which don’t increase inductive support, but are not fallacious when appealed to under background conditions which do. So in the latter cases, using an informal fallacy is basically a sophisticated case of inductive reasoning.

    —-“Rather than posit a division between Hume’s beliefs and actions, perhaps its more accurate to describe Hume as limning the limits of reason itself.”—-
    ____________________________________________

    Interesting view. I’m not completely sure how it should compel me to give up the distinction between belief and acceptance. Even if we can explain cases like Hume’s, saying he reached the limits of reason and that this suggests there was no divide between belief and acceptance in his mental life, I am still puzzled about how to explain the case of the physicist, who clearly believes Quantum Mechanics but accepts Newtonian Mechanics on occasion for practical purposes. He clearly doesn’t seem to believe Newtonian Mechanics. This case seems to resist the idea that he hit the “limits of reason.”


  9. @dantip:
    I’m also really liking this exchange. In terms of Bayesian probability, from what you’ve described, that was basically my understanding, too. In terms of the physicist, it strikes me that to say he doesn’t believe Newtonian mechanics misses something, namely an implied ontology of eliminativist reductionism. Yes, he believes/accepts very strongly that Newton describes the motion of planets accurately. If we are asking at what level of description we get down to the most elementary level of what exists, then it’s quantum mechanics. But to say he doesn’t believe in Newton’s laws in the sense that they describe real things seems to imply that he wouldn’t believe in planets either, since he would be part of the “it’s all an illusion” crowd, to use Massimo’s phrase. I don’t see how that is required of the physicist.


  10. Dan Tippens,

    I didn’t note in my earlier comment how much I enjoyed your article.

    That said, I can’t help wondering if the effort to make a clear distinction between ‘belief’ and ‘acceptance’ doesn’t actually reveal that such a clear distinction is only possible in given contexts, and that these are thus only contingent categories with limited use.

    Belief: “A belief is a mental state in which somebody takes the world to be a certain way.”

    Acceptance: “Acceptance is a mental state in which somebody treats a proposition as true for practical purposes.”

    In terms of our behavior, this distinction seems only contingently valid. If I act on a ‘belief,’ or I act on a (probabilistic) ‘acceptance,’ the behavior will be the same; the difference may be felt internally – or not; remarked upon – or not.

    McDonald, my car’s mechanic, has been at work in his field for some time. Do I accept his expertise, or believe in it? I keep going back to him, because he keeps my car running; whereas I won’t return to Paulson, who ‘fixed’ a fuel line leak by perforating the gas tank.

    Most of our ‘belief’/acceptance’ declarations seem to me to be ‘post-hoc’ summations. We go to any mechanic at all because they have assured credentials. Cars being what they are, we must go to some mechanic eventually.

    I also agree with mpboyle56 concerning the physicist; saying he doesn’t ‘believe’ in Newtonian physics as a whole, says nothing about what he believes/accepts concerning the parts of it that seem to work.

    Finally, on rhetoric: What rhetorical use can we make of the distinction (and I admit there is one)?

    It seems to me that beliefs are more difficult to disconfirm with evidence. ‘Biologically, there is no difference between the races’ is true, but simply mounting evidence in support of it will not convince trenchant racists. Instead, their acceptance has to be worked – ‘if you insult these (ethnicity of choice), they will be aggrieved and take legal action’ is not *convincing* destruction of their beliefs, but is frequently the best persuasion we can accomplish.


  11. Re the physicist, he *knows* that the predictions of QM and Newtonian mechanics are the same for objects of the appropriate size and velocity – and also where and how they diverge. Similarly with orbital mechanics and general relativity, he knows when the additional precision of a full treatment will be needed eg famously for GPS.

    Returning to fallacies – we have all experienced them doing arithmetic, and that extends to the mental calculations with probabilities needed for eg Linda the feminist bank teller or the Monty Hall problem. The fallacies in the former can be avoided by presenting the same problem in a more concrete fashion, so our heuristics can get a grip on them. Lots of learning involves tuning our informal methods of reasoning so they match correct known formal or empirical results. Unfortunately, in philosophy there are usually no right answers to compare them to 😉


    I may think about this stuff just a bit more simply. Yes, arguments from authority and such can fail us for a variety of reasons, so in practice we must always keep such concerns in mind. Nevertheless, why not take Rene Descartes’ famous observation to heart, and so acknowledge that EVERYTHING that we believe does happen to rest upon fallacy (other than the “I think” itself)?

    In practice what we have are more and less strongly held beliefs, given the evidence that we perceive. As far as “acceptance” goes, I reserve this for our most strongly held beliefs. Notice that over the past few centuries, science has not just built up various beliefs about reality given its evidence, but has done so through a community with generally accepted understandings. Of course, the power that these understandings have given us has brought dramatic change. Philosophy does not yet have such a community, though some say that this is because the field is not actually in the “reality studying business,” but rather in the “criticism of reality studying business.” This is fine with me, as long as it doesn’t also hinder reality study itself. Observe that philosophy claims lordship today over the very important and real subject of ethics. I mean to either help scientists become philosophers in this regard, or philosophers become scientists, so that the field can indeed advance.


  13. Handling fallacies in the form of inconsistencies is a “feature” of some kinds of software:

    Inconsistency robustness is information system performance in the face of continually pervasive inconsistencies—a shift from the previously dominant paradigms of inconsistency denial and inconsistency elimination, which attempt to sweep them under the rug. Inconsistency robustness is both an observed phenomenon and a desired feature: it is an observed phenomenon because large information systems are required to operate in an environment of pervasive inconsistency.
    http://www.amazon.com/Inconsistency-Robustness-Carl-Hewitt/dp/1848901593
