Reason and the Post-Human

by Daniel A. Kaufman

In “Excessive Reason,” an essay I published in these pages last year, I argued that mainline philosophy is characterized by a pervasive and systematic rationalism, the main characteristics of which I summarized as follows:

  1. The acceptability of a belief, activity, practice, institution, etc., rests entirely on whether or not it can be rationally justified.
  2. The rational justification of beliefs is comprised either of empirical evidence or of inductive or deductive proof. The rational justification of activities, practices, and institutions may include appeals to utility, where this notion is grounded in a scientific conception of human nature (like Bentham’s), or to duty, as long as it is grounded in some clearly definable, logical conception of reason (such as Kant’s).
  3. Rational beliefs and actions are the logical and causal products of rational intellection.
  4. Rejected categorically are those beliefs, activities, practices, and institutions grounded in the authority of individuals, classes, customs, or traditions—the collective sources of what Burke called “prejudice”—adherence to which is broadly identified with pre-modern civilization and is considered intellectually and behaviorally atavistic.
  5. Also rejected are those beliefs, activities, etc., which are grounded in common sense, intuition, or sensibility, obedience to which, inasmuch as they do not constitute rational grounds for obtaining knowledge or motivating action, is also treated as regressive; the province of children or of incurious or otherwise unreflective adults.
  6. Truth is the end of all inquiry and belief and trumps all other intellectual ends. The fulfilment of one’s duty (service to the Good, the Right, and the Just) is the end of all activity and consequently, supersedes all other practical ends.

I also suggested that these ideas support an ethos or conception of the Ideal that is also characteristic of mainline philosophy and which is defined as a conjunction of the following:

A.  Disinterestedness (impartiality) in belief and conduct: one must eschew bias, prejudice, and any other form of pre-judgment, in everything that one believes and does, and go wherever the evidence, logic, cost-benefit analysis, or other rational calculus leads.

B.  Dispassion in belief and conduct: one must believe and act solely on the rational merits of the case at hand. One should never believe because of appealing rhetoric or wish-fulfilment or act on the basis of critically unexamined sentiment.

C.  Autonomy: The ideal person is a free agent, both in belief and in action, but this freedom must be rigorously defended: from the forces of nature, by having one’s reason sit in constant judgment over one’s inclinations and sensibility; and from the forces of social conformity, by maintaining one’s independence from the influences of others and especially from the often unconscious influence of habit, custom, and tradition.

D.  Consistency and Fairness: As inconsistency is the most obvious manifestation of irrationality, consistency is a bedrock rationalist virtue.  Fairness is a manifestation of both consistency and dispassion, so it too is a rationalist virtue.

E.  Purity of Purpose and Perfectionism: Absolute fidelity to the supremacy of truth, goodness, rightness, and justice in everything that one believes and does, over the course of one’s life. (1)

In that essay, I attributed both this rationalism and the ethos that follows from it to what I called a “tacit dualism” that pervades mainline philosophy and for which the chief intellectual engineers were Plato and Descartes.  Of this dualism, I said that it,

combines a quasi-Cartesian estrangement of mental from bodily and social life – in which consciousness and reasoning comprise the mind’s lone necessary, “indigenous” activities, while perception, sensibility, and the full range of conative states are relegated to the contingent bodily and social dimensions of life – with a Platonic devaluation of bodily and socially-influenced belief and activity and corresponding inflation of the value of consciousness and ratiocination.

It is no secret that I strongly disagree with mainline philosophy’s rationalism.  Indeed, much of the professional work that I’ve done, especially in epistemology and metaphysics, has been devoted to dismantling it.  (2)  In “Excessive Reason,” I outlined just what a comprehensive critique of rationalist philosophy might look like, from the perspective of a philosophy that accepts the idea of intellectual and existential givens and boundedness, in the manner described by Hume, Reid, Wittgenstein, and others.  But what I have not said much about is how harmful rationalism is – although I did point out some of the ways in which it can have perverse results, in my recent essay, “Self-Made” (3) – and this is what I want to address here; the moral, social, political, and ultimately the civilizational effects of embracing rationalism.

___

If there is a general characterization of what is wrong with rationalistic philosophy it is that it represents a rebellion against our humanity; one that has played and in my view will continue to play a negative role in the lives of individuals and in the civilization of the modern West.  This rebellion ranges from rationalism’s hostility to our natural and customary beliefs, sensibilities, and inclinations, to its outright rejection of our humanity, if conceived of as an integrated, organic unity of mental and bodily capacities that is only fully realized in a social and cultural context and which possesses a complex and heterogeneous good.  Rationalist philosophy rejects this on behalf of a wholly abstract personhood which, in “Excessive Reason,” I identified most strongly with Descartes, Locke, and Kant, but which I traced back to Plato and even earlier, to the Pythagoreans and the Orphic mystery cults.  Two significant areas in which modern mainline philosophy has persistently urged resistance to our natural and customary beliefs and inclinations, in the service of rationalist perfectionism, are ethics and epistemology.

With respect to ethics, modern mainline moral philosophy’s perfectionism — particularly, its requirement that one adopt a disinterested, dispassionate, and impartial stance, in identifying and carrying out one’s moral duty, and its rejection of sentiments such as love, hatred, sympathy, attraction and aversion as morally legitimate motives — far from advancing the cause of goodness and justice, in fact constitutes an obstacle to it.  On the epistemological front, the rationalist’s rejection of nature and custom as legitimate sources of belief and insistence that every belief be rationally justified do not serve the cause of liberal, democratic politics, as mainline political philosophers have liked to claim, but instead create intellectual conditions conducive to totalitarianism, by leaving a vacuum at the foundations of social, civic, and political belief and thereby exposing the public to the manipulations of propagandists and demagogues.

___

I trust that it is uncontroversial to observe that our natural inclination is to be interested rather than disinterested, partial rather than impartial, sympathetic and unsympathetic rather than detached, and that we needn’t go so far as to invoke animal behavior or the genetic imperatives that operate across species to make the point; we need only consider that disinterestedness, dispassion, and detachment are not manifest in childhood, but must be cultivated over the course of one’s youth, or reflect on the fact that even once acquired, the exercise of disinterest, dispassion, and detachment requires effort and is never entirely successful, in order to see that these traits are the products of acculturation.  (4)  Our natural inclination is to be partial to our own good and to the goods of those close to us in affection, who are those with whom we typically also enjoy physical, social, and other forms of proximity.  This point was made most strongly by Hume, whose observation that “a man naturally loves his children better than his nephews, his nephews better than his cousins, his cousins better than strangers, where everything else is equal” was intended to show that justice and other disinterested virtues are the products of artifice, rather than nature; that we are primarily affective and only secondarily reflective beings; that we are defined by sensibility more than by ratiocination; and that “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” (5)

Doubtless, there are contexts where the artifice of impartiality is not only beneficial but required, a prime example of which is the law, but modern mainline moral philosophers like Kant and Mill, as well as contemporary rationalistic moralists like Peters Singer and Unger, have deemed impartiality an inherent feature of morality and consequently, have construed the moral stance as essentially indistinguishable from that of the jurist.  The problem with this, aside from its dubious validity — one wouldn’t have thought that a stance appropriate to an institution, whose scope is the entire population of a city, state, or nation, would be appropriate to one’s personal conduct and family life — is that it represents an unattainable ideal.  It’s just not reasonable to expect a person not to privilege himself or those whom he cares about or loves over those whom he doesn’t know, to whom he feels indifferent, or whom he loathes.  Singer and Unger may want to claim that the fact that a person is a close relative as opposed to a total stranger or that a suffering person is right in front of you, as opposed to being a statistic, in a newspaper story, has no relevance to your moral obligations towards one person or the other, but this will never change the fact that people overwhelmingly will privilege their intimates over strangers and will be more moved to help those people whose need is imminent and manifest than those whose suffering is distant and unperceived. (6)

Many will want to point out that being moral can be difficult and may require a person to turn against his natural inclinations, which, depending on the circumstances, may be true (though it is worth noting that Aristotle described the morally virtuous person as one who is inclined to do the right thing and feels pleasure in doing so).  But I would like to suggest that maintaining an entire system of unattainable moral ideals is dangerous; that despairing of one’s ability to live up to the moral ideal, a person may give up on morality altogether and become amoral, adopting a “plague on both your houses” attitude towards the moral and the immoral alike, or may even embrace immorality, out of the resentment that commonly follows from unremitting failure.  These are possibilities that Singer simply dismisses as “remote” and that Unger acknowledges, but treats as minor difficulties to be surmounted with a bit of clever rhetoric.  (7)  In contrast, Bernard Williams believed these to be real and serious consequences of the kind of perfectionist moralizing upon which Singer and Unger have built their careers.  As Williams put it:

Some writers aim to increase a sense of guilt in their readers.  Peter Singer is an example, and in his book Practical Ethics he is evidently more interested in producing that effect than he is in the theoretical basis for it, which gets very cursory treatment.  As moral persuasion, this kind of tactic is likely to be counterproductive and to lead to a defensive and resentful contraction of concern.  This can be seen in research and, at the present time, all around us. (8)

What rationalist moral philosophers have failed to understand is that our capacity to care about and feel obligated towards strangers is derivative of the natural affection that we feel for those who are close to us.  “We pity even strangers,” Hume explains, “but if we examine these affections of pity, we shall find them to be secondary ones, arising from original affections, which are varied by some particular turn of thought and imagination.” (9)  So, when Unger urges us to neglect the interests of our children and elderly parents, in order to send as much money as possible to poor strangers living in the far corners of the earth (in one particularly surreal passage, he tells us that global poverty obligates us to send our children exclusively to public primary and secondary schools, “even if it means …moving to a different neighborhood,” and to spend as little as possible on our elderly parents, even if they are infirm), he is demanding a state of affairs in which no one would be inclined to be charitable at all. (10)

As Hume explained, the natural affection that we feel for others is proportionate to their proximity to us and the extent to which they “resemble” us, the latter of which includes a sense of social, economic, religious, and other non-physical varieties of closeness.  Hence the fan-like structure of Hume’s account — in which our affection and care are initially directed towards ourselves, after which they reach out to others — that we’ve already encountered in his remark concerning a person’s relative affections for his children, nephews, cousins, etc., and which is simply a specific instance of his more general idea that the “force and vivacity” of our ideas is proportionate to the immediateness of their objects. (11)  One consequence of this is that our sense of concern and obligation decreases as the potential objects of our good will are perceived as being at a greater distance from us, whether physically or in the ways that they resemble us, to the point at which the only source of charitable feeling is the sense in which every human being resembles every other, which Hume says is not sufficient to constitute any sort of “love for mankind” and which, given competing demands for care from family and friends, amounts, effectively, to none.  (12)  Nonetheless, we are capable of caring about and feeling obligated to others, because we are able to imagine that they might have been close to us, whether physically, socially, economically, etc.  “I [cannot] feel the same lively pleasure from the virtues of a person who liv’d in Greece two thousand years ago, that I feel from the virtues of a familiar friend,” Hume explains, “yet I do not say that I esteem the one more than the other.  Our situation, with regard both to persons and things, is in continual fluctuation; and a man that lies at a distance from us, may become a familiar acquaintance.” “We blame equally a bad action, which we read of in history, with one perform’d in our neighbourhood,” Hume continues, “the meaning of which is that we know from reflexion, that the former action wou’d excite as strong sentiments of disapprobation as the latter, were it plac’d in the same position.” (13)

The disinterested ideal that pervades mainline moral philosophy and requires that we disregard the natural asymmetry of our affections and care and deny our intimates pride of place in our generosity thus constitutes an assault on the very ground from which the charitable instinct springs and is a force for its diminishment, a fact that can only be exacerbated by the scolding, hectoring approach to the subject favored by the Singers and Ungers of the world.  None of this should surprise us, for it was long understood by Western thinkers of antiquity and the Middle Ages that the civic and political orders are derivative of the smaller-scale forms of social life and ultimately, of the family, and that the very concepts and habits that make good civil and political society possible — obligation, prerogative, fealty, obedience, authority, and the like — have their origins and are first acquired and exercised in the context of one’s relationships with family and friends. (14)  Indeed, it is precisely because they play this foundational role in civil and political society that the institutions of the family and of friendship are among the first targets of any aspiring totalitarian regime, whose aim is not merely to achieve effective absolute rule, but to reconfigure the moral template of the citizen; something that can only be accomplished, when the emotional connections between parents and children and between friends and neighbors have been severed and the grounds of natural affection thereby destroyed. (The people living in the totalitarian society described in Aldous Huxley’s Brave New World are raised from infancy in state-run institutions and are conditioned to conceive of the very words associated with family life — ‘family’, ‘father’, ‘mother’, ‘sister’, etc. — as obscenities. (15))

____

If a collapse in moral sympathy and concern is one of the hazards posed by the mainline tradition’s moral perfectionism, what of its epistemological ideals, which are equally extreme?  Skepticism, when understood as a position rather than as a method, is the product of a frustrated rationalism — the result of the rationalist being convinced that the high epistemological standards to which he adheres cannot be met — and would seem, therefore, to entail that we should suspend all of our beliefs and activities and adopt the posture of the Pyrrhonist or at least, the Pyrrhonist of popular legend.  Now, at first glance, the risk here would appear to be purely theoretical: even though Hume warned that such a person would suffer “pensive melancholy” and would receive a “cold reception” from others and Reid thought that “a man who did not believe his senses, could not keep out of harm’s way an hour of his life” (16), both were convinced that the rationalist-skeptical stance was unsustainable; that the force of natural belief and inclination and inherited habits and customs would always overcome even the most hardnosed rationalist philosophy and that consequently, neither rationalism nor skepticism posed any real danger to individuals or to mankind as a whole.  “The great subverter of the excessive principles of skepticism is action and the occupations of common life,” Hume wrote.  “These principles may flourish and triumph in the schools; where it is difficult, if not impossible to refute them.  But as soon as they leave the shade and are put in opposition to the more powerful principles of our nature, they vanish like smoke.” (17)  Reid agreed, observing that “in all the history of philosophy, we never read of any sceptic that stepped into fire or water because he did not believe his senses, or that showed, in the conduct of life, less trust in his senses than other men have.” (18)

But if we skip forward several centuries and consider G.K. Chesterton, whose Orthodoxy is partly devoted to the same question — in two chapters entitled “The Maniac” and “The Suicide of Thought” — we find that he is profoundly worried that widespread skepticism, sustained by the belief that human thought and activity can never live up to rational standards, would engender an equally wide-ranging irrationalism.  “Just as one generation could prevent the very existence of the next, by all entering a monastery or jumping into the sea,” he wrote, “so one set of thinkers can prevent further thinking by teaching the next generation that there is no validity in any human thought.” (19)  Indeed, Chesterton believed that this process of collective mental derangement had already begun, writing that “The whole modern world is at war with reason, and the tower already reels.” (20)  In 1908 (the year of Orthodoxy’s publication), he was in a position to see the effects of earlier such losses of faith, whether the irrationalist moral and political philosophies that followed the French Revolution, Terror, and Napoleonic Wars, which were widely interpreted as representing the failure of rationalist thought, or the sublime madness of Romantic literature, painting, and music, much of which was directed against industrialization and urbanization, children of the Scientific Revolution and of the rationalist outlook both.  But Chesterton also despaired of developments in his own day and expressed alarm at hearing the kinds of skeptical arguments, which previously had been the unique province of philosophers, increasingly coming out of the mouths of prominent figures in the popular culture, most notably H.G. Wells, whose Oxford Philosophical Society presentation, “Scepticism of the Instrument,” suggested to Chesterton that what had once been a purely academic idea had now penetrated the public square (21), and which, given the literary and rhetorical gifts of Wells and others like him, would quickly make its way into the popular consciousness.

In the first decade of the twentieth century, with Europe’s experience of the French Revolution and its aftermath and of Romanticism behind him, Chesterton could understand both that the kind of credulity that makes it possible for a person to trust his senses, his reason, his instincts, and his acquired customs and habits is crucial, if any intellectual or practical regime grounded in rational procedures is to be sustained and that the kind of systemic incredulity that accompanies a perennially frustrated rationalism represents not just the end of any such regime, but an invitation to intellectual and social anarchy.  But, with the mechanized mass murder of the first and second World Wars and the sweep of totalitarian ideologies and governments across Europe and Asia still ahead of him, the Chesterton of Orthodoxy could not see beyond the prevailing incredulity of his day to the point at which it would become credulity once again, but a credulity of a most terrible kind; one that would clear the way for human monstrousness on a scale and of a scope hitherto unknown.  For beyond that self-absorbed mental condition, in which a person refuses to believe or act, there comes an even more wretched point at which he will believe and do anything.

Hannah Arendt has argued that as much as the catastrophic economic depression or the collective humiliation experienced at Versailles, this degenerate, post-skeptical credulity is crucial to understanding the rise of Nazism; specifically, it explains how the people of advanced societies like Germany and the Austro-Hungarian Empire could place themselves, their legacies, and their futures in the hands of a ranting lunatic like Hitler and his rabble of costumed perverts, petty criminals, and thugs.  “The problem of Hitler’s charisma is relatively easy to solve,” Arendt wrote.  “It rested on the well-known experiential fact that Hitler must have realized early in his life, namely, that modern society in its desperate inability to form judgments will take every individual for what he considers himself and professes himself to be and will judge him on that basis.  Extraordinary self-confidence and displays of self-confidence therefore inspire confidence in others.” (22)

The Nazi problem, of course, is only an instance of a more general problem that we continue to live with today.  If Hume and Reid are correct that it is in a human being’s nature to believe and to act and if belief and action require an initial credulity in order to get off the ground and to be sustained thereafter, then the sole question that remains is what that credulity will consist of.  In a human being’s natural condition, it is a credulity born of the world and of human nature, experience, and history, but in the post-skeptical condition that comprises rationalism’s legacy, this reality-based credulity is lost, to be replaced by one that derives from whatever ersatz-reality can be designed and promoted with the greatest combination of cleverness and assertiveness.  “In such an atmosphere any kind of fraud becomes possible,” Arendt explained, “because there appears to be no one left for whom the difference between fraud and authenticity matters.  People therefore fall prey to judgments apodictically expressed because the apodictic tone frees them from the chaos of an infinite number of arbitrary judgments.” (23)

This remains true, whether the counterfeit reality is of the brutal, Orwellian kind effected by the Hitlers, Stalins, and Maos of history or the softer, Huxleyan variety that we see every day in the efforts exerted by corporations in the selling of their respective products, the chief force for which is the implantation of fabricated desires into the minds of the public, a job that falls to their hired guns from the social sciences.  Like the totalitarian dictator, the contemporary marketer does not seek to persuade his target audience by appeal to reasons that ultimately are accepted or rejected according to a person’s natural and customary instincts and beliefs, which belong, as we’ve just said, to the world and to human nature and history and are not easily controlled, but relies instead on our trust in these natural and inherited instincts and beliefs having been so thoroughly undermined by skepticism and its innumerable popular articulations that he can replace them with artificially created instincts and beliefs that can be counted on to produce favorable reactions to the reasons that he offers on behalf of whatever it is he is selling.

Intellectuals have a distinctive susceptibility to this sort of deceit, because they are the people most likely to buy into the rationalist conceit, and once they have abandoned that backbone of given basic beliefs, inclinations, and habits that connects everything else that they believe and do to the realities of the world, their own nature and history, they are left exposed and vulnerable to the manipulator’s fictions, especially when those fictions are placed within the reassuring settings of science, philosophy, and other respected institutions and frameworks.  As Stanley Rosen warned:

If philosophy is understood as a thoroughly extraordinary event or activity having nothing to do with ordinary experience or sound judgment, then there is no basis on which to distinguish between genuine and specious philosophical speeches.  If philosophy claims that ordinary life is irrelevant to philosophy, then philosophy is indistinguishable from arbitrary rhetorical assertions. (24)

This may help explain why so many of the last century’s most distinguished thinkers sympathized with and to some extent were complicit in its worst totalitarian causes, Martin Heidegger and Jean-Paul Sartre being the most famous examples.  In her review of Max Weinreich’s 1946 book, Hitler’s Professors, Arendt describes the Nazis as having exploited precisely this sort of gullibility in the way that they used Heidegger to gain respect in Germany’s elite universities, after which he was replaced with Alfred Bäumler, a known charlatan. (25)  And it is why C.S. Lewis, in That Hideous Strength, his fictional tale of an aspiring totalitarian movement’s efforts to take over Britain, described the coup as beginning in a college and explained, through the mouth of one of the movement’s leaders, that,

It’s the educated who can be gulled.  All our difficulty comes with the others.  When did you meet a workman who believes the papers?  He takes it for granted that they’re all propaganda and buys it for the football results.  We have to recondition him.  But the educated public, the people who read the highbrow weeklies, don’t need reconditioning.  They’ll believe anything. (26)

____

If some of rationalistic philosophy’s rebellions against human nature arise from the conviction that our intellectual and practical instincts and habits are atavistic and ought to be entirely disregarded in favor of pure, rational reflection, another derives from its obsession with radical autonomy; with the idea that not only should our beliefs and actions be self-originating, in the sense of deriving exclusively from our own contemplative and ratiocinative activity, but that our very being should be self-made; that nothing about who or what we are should prevail independently of our own rational will; that nothing should be given and by implication, nothing should bind or otherwise constrain us.  This, of course, was the primary subject of “Self-Made,” but I would like to examine it in greater detail here.

If rationalistic philosophy’s dualistic heritage created a pecking order, in which our conscious lives are given pride of place over our embodied and social existence, and contemplation and ratiocination are privileged over sensation, conation, and habituation, then one result of its contemporary misapplication of modern science and scientific method to the understanding of human nature has been to harden that dualism to the point where one’s entire self-identification is with one’s rational consciousness and all the things that are connected with one’s embodiment are treated as mere tools.  (There also has been another, even more extreme result, namely the complete abandonment of the conscious, rational dimension altogether, in favor of an entirely scientistic, thoroughly evolutionary and mechanistic picture of the human being and human life, about which I’ll say a bit more at the end of these remarks.)  Lewis described this situation as one in which we have “abolished” man and replaced him with an “artefact” (27), an idea that has been the subject of some of the most powerful humanist and dystopian literature of the last century. (28)

The modern scientific stance is essentially analytical, quantitative, reductive, and impersonal, by which I mean that understanding is accomplished by: (i) breaking objects and processes down to their most basic parts; (ii) defining objects and processes in terms of entirely quantifiable characteristics; (iii) explaining objects and processes in ways commensurate with the explanations provided by the most fundamental physical sciences; (iv) engaging objects and processes from an objective, depersonalized point of view (rather than as they are experienced by people).  This stance, it should be emphasized, is perfectly suited to the ends that modern science serves.  Since the industrial revolution, scientific understanding has been pursued almost exclusively for its engineering potential — it enables us to make and use things — and in this regard, the type of understanding that it provides is the right kind, not only because it provides the sort of information that one needs, if one is to manipulate matter and energy, but because it enables us to justify treating the natural world as something to be used; as something with no ends of its own that consequently may be employed entirely in the service of our own ends.  The modern scientific vision is a devaluated one, and devaluation is the crucial first step to revaluation, which is required if one is to justify the subordination of any thing or process to one’s own ends.  “We reduce things to mere Nature in order that we may ‘conquer’ them,” Lewis wrote, and so “the price of conquest is to treat a thing as mere Nature.” (29)

When this perspective is turned from the world to us, what was a largely eschatological dualism in antiquity is brought down to earth and rendered in high relief.  The conscious, rational mind, in the position of the user, analyses, reduces, neutralizes, devalues, and finally instrumentalizes its own embodiment, something that is manifested not only in many of our theories of human nature, morality, and politics, but in practice as well.  As we’ve already seen, it is pervasive in the philosophy of the mainline tradition, whether in its disregard for human emotional imperatives or in the assignment of moral, legal, and political rights and duties exclusively on the basis of a rationalistically conceived personhood, one result of which has been the weird treatment of the human body as private property (30), to do with as one sees fit and whose distinct imperatives and constraints must never be permitted to interfere with the ends of conscious selves.  The observation that such attitudes are hubristic seems quaint today, but it is worth noting that Chesterton, Lewis, and Iris Murdoch all described them as essentially Luciferean, in the (metaphorical) sense that they represent a combination of resentment at having been made by forces other than one’s own will and a hunger for absolute – in the sense of counter-causal – freedom. (31)

More serious still is the alienation effected by the withdrawal of the conscious self from its body, its world, and its history, on which Carl Jung hung the ongoing and currently metastasizing pandemic of neuroses, afflictions that arise not simply from a sense of distance from the world, from others, and from one’s past, but from the horrible over-awareness of self that results.  “Whenever there is established an external form, be it ritual or spiritual, by which all the yearnings and hopes of the soul are adequately expressed, then no spiritual problem, strictly speaking, exists,” Jung wrote. (32)  The modern man, however, “has become ‘unhistorical’ and has estranged himself from the mass of men who live within the bounds of tradition,” the consequence of which, he explained, is that we have “suffered an almost fatal shock and fallen into profound uncertainty.”  Indeed, the very need for clinical psychology, Jung believed, “is symptomatic of a profound convulsion of spiritual life.” (33)

The most potent criticism of the rationalistic dream of being entirely self-made and radically free, however, is also an ironic one, for the arc that begins with the rationalist’s core premises and ends with our “self-made” people, far from describing the rise of freedom, is rather a tale of its erosion and, ultimately, its loss.  For the very rationalistic framework that renders an individual’s body a tool of his conscious mind also justifies a political picture, in which the individual is conceived of as an object to be managed by the state.  With human nature reduced to a mechanical, value-neutral point, politics, which in antiquity was conceived of as the arena in which richly, “thickly” rendered human beings realized their full potential, becomes indistinguishable from social engineering and maintenance.  The result, today, is a still somewhat light, but undeniable technocracy, in which public policy and governance are the province of experts — natural and social scientists, engineers, mental and other healthcare professionals, economists, and a professional class of bureaucrats — whose transformation of civic life and government into a form of “human resource management” is familiar enough to the average person, but whose methods have become ever more inscrutable, as the scientific basis for policy has become more and more advanced and the logic of politics and administration has become increasingly complex.

The result, for those who have come to view themselves and their lives through the reductive lens of resources-to-be-maintained and problems-to-be-solved, is an ever-increasing dependency on the experts, which is hardly the radical autonomy dreamt of by the architects of modernity.  Indeed, precisely this complexity has been used by some to argue for the subversion and manipulation of the public, as in the case of Walter Lippmann’s notorious arguments for the “manufacture of consent” in modern democracies. (34)  As Lewis described this paradoxical outcome of our attempts to attain individuality, strength, and freedom by using the theoretical and practical instruments of modern science to conquer nature: “If I pay you to carry me, I am not therefore myself a strong man,” to which I would add, “or a free one.” (35)

A few pages back I spoke of the other effect of our misapplication of the modern natural sciences to the study of human nature and human affairs, and that is the collapse of what is a clearly untenable, unstable dualism in favor of a thoroughly physicalistic monism; one in which the Scientific Image, as described by Wilfrid Sellars, has entirely supplanted the Manifest one, and in which our account of ourselves and our activity no longer has any room for selves or persons or reason or autonomy, even of the humble, non-rationalistic variety.  Once such a view has fully taken hold and pervades not only our civic and political institutions, but our common understanding and practices, we will essentially be living in the world described and hoped for by B.F. Skinner, in his technocratic manifesto Beyond Freedom and Dignity, and therein will lie not only the greatest tyranny, but the greatest irony of all.

If the rationalistically-inspired scientific takeover of our conception of human nature points towards technocracy — if as Lewis wrote, “Man’s conquest of nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men” (36) — then the question of the principles on which those few hundreds of men rule becomes essential.  The promise, of course, is that they will be rational principles; that rule by experts will mean rule by reason, by way of rationalistic philosophy and science, as opposed to rule by unreason, by way of prejudice, superstition, or archaic myths about man.  “We have made immense strides in controlling the physical and biological worlds, but our practices in government, education, and economics have not greatly improved,” Skinner lamented, after which he went on to say:

We need to make vast changes in human behavior, and we cannot make them with nothing more than physics or biology. What we need is a technology of behavior [that will allow us to] adjust the growth of the world’s population as precisely as we adjust the course of a spaceship or move toward a peaceful world with something like the steady progress with which physics has approached absolute zero. (37)

Unfortunately, Skinner seemed to have missed the wild inconsistency involved in making the case for a plan on the grounds of its being more rational than the rest, while simultaneously claiming that the people who design and implement the plan are not rational agents, but act solely on the basis of their physical nature and conditioning. (38)

There is no one more enslaved to his nature than one who is unaware of it or who was once aware of it, but has denied it to the point where he believes his own lies and has forgotten it.  For the person who accepts the fact that his every act of reasoning is ultimately grounded in the uncritical acceptance of his world, his faculties, and his inclinations and habits, reason remains a real, active force for sound behavior and thought.  But for the person who insists on the rationalist’s rarefied conception of reasoning, reason is no longer a real, active force for soundness in his life, but at best an empty proceduralism, taking place in a vacuum; a void that will be filled either by his own unreasoned, unrecognized nature or that of others.  In either event, he is controlled.  The greatest irony of all, then, is that a philosophical movement that for two and a half thousand years preached the rational ascendance of man over nature and the wills of others may very well be responsible for effecting his utter subordination to both.

Notes

  1. Daniel A. Kaufman, “Excessive Reason,” The Electric Agora, February 28, 2016. https://theelectricagora.com/2016/02/28/excessive-reason/
  2. See, for example, my “Between Reason and Common Sense,” Philosophical Investigations, Vol. 28, No. 2 (April 2005) and “Reality in Common Sense: Realism and Anti-Realism from a ‘Common Sense Naturalist’ Perspective,” Philosophical Investigations, Vol. 25, No. 4 (October 2002).
  3. https://theelectricagora.com/2017/05/25/self-made/
  4. “Reason requires such an impartial conduct, but ’tis seldom we can bring ourselves to it, and our passions do not readily follow the determination of our judgment.” David Hume, A Treatise of Human Nature (1739-40), 2nd Edition, eds. L.A. Selby-Bigge and P.H. Nidditch (Oxford: Oxford University Press, 1978), p. 583.
  5. Hume, A Treatise of Human Nature, p. 415.
  6. Peter Unger, Living High and Letting Die: Our Illusion of Innocence (New York: Oxford University Press, 1996), pp. 33-36; 149-150; Peter Singer, “Famine, Affluence, and Morality,” Philosophy and Public Affairs, Vol. 1, No. 3 (Spring 1972), p. 232.
  7. Singer, “Famine, Affluence, and Morality,” pp. 237-238; Unger, Living High and Letting Die, pp. 156-157.
  8. Bernard Williams, Ethics and the Limits of Philosophy (Cambridge, MA: Harvard University Press, 1985), p. 212, fn. 7.
  9. Hume, A Treatise of Human Nature, p. 369.
  10. Unger, Living High and Letting Die, p. 150.
  11. Hume, A Treatise of Human Nature, pp. 427 & 581.
  12. Hume, A Treatise of Human Nature, p. 481.
  13. Hume, A Treatise of Human Nature, pp. 581 & 584.
  14. See, especially, Aristotle, Politics, 1252a25-1252b30; 1276b20-1277b30.
  15. Aldous Huxley, Brave New World (1932) (New York: HarperCollins, 1998), pp. 37-41.
  16. David Hume, Enquiries Concerning Human Understanding and Concerning the Principles of Morals (1777), 3rd Edition, eds. L.A. Selby-Bigge and P.H. Nidditch (Oxford: Oxford University Press, 1975), p. 9; Thomas Reid, Essays on the Intellectual Powers of Man (1785), ed. Baruch Brody (Cambridge, MA: The MIT Press, 1969), p. 115.
  17. Hume, Enquiries Concerning Human Understanding and Concerning the Principles of Morals, pp. 158-159.
  18. Reid, Essays on the Intellectual Powers of Man, pp. 115-116.
  19. G.K. Chesterton, Orthodoxy (1908) (San Francisco, CA: Ignatius Press, 1995), p. 38.
  20. Chesterton, Orthodoxy, p. 37.
  21. Chesterton, Orthodoxy, p. 38. Tellingly, Wells’ paper was subsequently reprinted, with some revisions, in Mind. H.G. Wells, “Scepticism of the Instrument,” Mind, Vol. 13, No. 51 (July 1904), pp. 379-393.
  22. Hannah Arendt, “At Table with Hitler” (1951), tr. Robert and Rita Kimber, reprinted in Essays in Understanding: 1930-1954 (New York: Schocken Books, 1994), p. 291.
  23. Arendt, “At Table with Hitler,” p. 292.
  24. Stanley Rosen, “Philosophy and Ordinary Experience,” the 1996 Bradley Lecture, Boston College, reprinted in Metaphysics in Ordinary Language (New Haven: Yale University Press, 1999), p. 228.
  25. Hannah Arendt, “The Image of Hell” (1946), in Essays in Understanding: 1930-1954, p. 202.
  26. C.S. Lewis, That Hideous Strength (1945) (New York: Simon & Schuster, 1996), pp. 99-100.
  27. C.S. Lewis, The Abolition of Man (1944) (New York: HarperCollins, 2001), p. 64.
  28. The aforementioned That Hideous Strength was intended by Lewis to be a novelization of the central ideas of The Abolition of Man. See Lewis, That Hideous Strength, p. 7.
  29. Lewis, The Abolition of Man, p. 71.
  30. John Locke, Second Treatise of Government, ed. C.B. Macpherson (Indianapolis: Hackett Publishing, 1980), p. 19.
  31. G.K. Chesterton, The Ball and the Cross (1909) (New York: Dover Publications, Inc., 1995), p. 1; C.S. Lewis, That Hideous Strength, esp. pp. 177-179; Iris Murdoch, The Sovereignty of Good (New York: Routledge, 1971), pp. 77-78.
  32. Carl Jung, Modern Man in Search of a Soul, tr. W.S. Dell and Cary F. Baynes (New York: Harcourt Brace Jovanovich, 1933), p. 201.
  33. Jung, Modern Man in Search of a Soul, pp. 197, 200, & 202.
  34. Walter Lippmann, Public Opinion (1922), esp. Ch. XV. http://wps.pearsoncustom.com/wps/media/objects/2429/2487430/pdfs/lippmann.pdf
  35. Lewis, The Abolition of Man, p. 54.
  36. Lewis, The Abolition of Man, p. 58.
  37. B.F. Skinner, Beyond Freedom and Dignity (1971) (Indianapolis: Hackett Publishing Co., 2002), pp. 6 & 4-5.
  38. Skinner, Beyond Freedom and Dignity, pp. 182-183; 215.

88 Comments

  1. I agree with a lot of this. That’s why I find myself really disliking many if not most of the conversations I have with philosophers outside of philosophy. They almost come across as idiot savants.

  2. I should add that this is also why I dislike talking to so many scientists about anything outside of science, with our friend Coel from Massimo’s backyard being exhibit A.

  3. Just a few things in response, none of which are intended to start any kind of an argument, which I really don’t want to do. Just take them for what they are.

    1. “I would like to see some of these points addressed before I could agree with the thesis.” My aim is not to sell you something. The essay is simply my own view on the subject.

    2. “It is unclear why it is harmless to see faeries, to promote such thought, but harmful to see different genders (and that the latter comes from excessive reason)?”

    For one thing, I doubt Chesterton really thought there were faeries in some literal sense. I think that misunderstands his point. With respect to the latter, perhaps it’s just because I find the idea of dozens and dozens of genders both stupid and irritating, and I find the trans-trender types pushing them to be exceedingly unsympathetic. YMMV.

    3. “It seemed that criticism of religion or religious influences in relation to rationality was largely absent.” That’s because I find the objectionable forms of religion all to be on the fundamentalist, irrationalist side. The balanced Fides et Ratio outlook of the orthodox religions doesn’t strike me as problematic, with respect to the concept of excessive reason.

  4. Hi Carter, yes that made sense and was a very nice reply. It gets to much of where I agree with Dan’s essay (and what he’d written in the past).

    I was trying to suggest that some of the examples he mentioned (certainly not all) were not clearcut as belonging to what you laid out, and perhaps involved or required additional elements.

  5. Hi Dan, for clarification…

    2) Regardless of what Chesterton meant regarding fairies in specific, the nature of the projects does appear to remain the same. I understand you don’t like the other project, and maybe it is harmful (we can set that aside), but that does not get to whether it is based on excessive reasoning any more than Chesterton’s. My only interest was in whether it involved excessive reasoning.

  6. Self-madeness is more an effect of an overloaded notion of autonomy, which is itself only indirectly related to excessive reason, as I explained in this and the previous essay. Faeries have nothing to do with either.

  7. DB,
    Reacting to your concern about faeries, here is a thought for you, straight from the mouth of Chesterton. He pointed out that free thinkers were not free because so many forms of thought were forbidden to them!

    Or, according to labnut, once we become concerned about forbidden thought, the act of thinking becomes the act of forbidding thought.

  8. I am puzzled to see that the discussion of Dan’s essay has turned into an attack on critical thinking! The essay ends with a contrast between what I will call “good reason” versus “bad reason”:

    “For the person who accepts the fact that his every act of reasoning is ultimately grounded in the uncritical acceptance of his world, his faculties, and his inclinations and habits, reason remains a real, active force for sound behavior and thought. But for the person who insists on the rationalist’s rarefied conception of reasoning, reason is no longer a real, active force for soundness in his life, but at best an empty proceduralism, taking place in a vacuum; a void that will be filled either by his own unreasoned, unrecognized nature or that of others.”

    Labnut seems to have read the essay as an attack on “so called critical thinking”. “So called critical thinking” is a bad thing, he thinks, because it is arrogant, it is faultfinding and it kills curiosity. It is “like an overdose of weedkiller”. As a teacher of critical thinking, I see it very differently — I see it as teaching the skills of good argument.

    Labnut’s comments are a nice example of the fallacy of defining something pejoratively in order to attack it. He doesn’t discuss what critical thinking really is before he sets about telling us what is wrong with it. There are a hundred textbooks on the topic from which he could get a definition and see how it is done in practice. Or there’s Wikipedia: https://en.wikipedia.org/wiki/Critical_thinking.

    Here’s one definition: CT is “the process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”. This sounds pretty much like de Bono’s Blue Hat.

    CT is also a subject taught in schools and universities, in which students study and practice arguments in everyday language.

    Is CT arrogant? No. It can be used arrogantly or it can be used with humility. Generally, I think, one learns humility from it, since we see how easy it is to be wrong or biased.

    Is CT faultfinding? Yes, of course it is. Faults are faults. Faultfinding is part of getting better at what we do.

    Does CT kill curiosity? Why should it? Studying good and bad arguments is an exploratory and mind-opening exercise.

    The bigger issue is whether CT (as properly understood) is a kind of “excessive reason”, the “modern scientific stance [which] is essentially analytical, quantitative, reductive, and impersonal”, and “the rationalistically-inspired scientific takeover of our conception of human nature”. Myself, I don’t think it is any of these things. It is a kind of good reason, “a real, active force for sound behavior and thought”.

    Here’s a well-known and useful piece on what CT is:

    https://web.archive.org/web/20070210101711/http://www.insightassessment.com:80/pdf_files/what&why2007.pdf

  9. Alan,
    Labnut seems to have read the essay as an attack on “so called critical thinking”

    No, I did not. I correctly read Dan-K’s essay. This is what I said earlier of Dan-K’s essay:

    You did not directly address the current fad for critical thinking.

    You carry on:
    Labnut’s comments are a nice example of the fallacy of defining something pejoratively in order to attack it

    Well spoken as a true critical thinker (let’s see: you accuse me of improper reading of the essay, using pejorative tactics, failing to define my terms and using fallacies). You seem determined to prove my words correct. Might I return the compliment and note that your comment is a nice example of misrepresenting something in order to attack it. See my comment above and the remainder of my comment below.

    He doesn’t discuss what critical thinking really is

    This is a comment and not an essay! It is quite reasonable to assume a certain commonality of knowledge. But a critical thinker should never let an opportunity like that go unused.

    the process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion

    That sounds rather like good, proper, right, reasonable and rational thinking. With that definition you don’t need the word “critical“, now do you? Remove that one pesky little adjective and you defang it entirely. It loses its force, doesn’t it? But where’s the fun in that.

    You see, that one, tiny little adjective, “critical“, transforms the whole meaning. If it didn’t, why bother to use it? Why not just call it good thinking? And this is where the whole problem lies. That one little adjective, despite your defanged, sanitised version above, transforms the practice of thinking. It shapes the conduct of the thinkers who profess “critical thinking“. That adjective qualifies the word so that it becomes a certain kind of thinking and it is this kind of thinking which is evidenced in real world conduct, despite your claimed sanitised, defanged version of it.

    Terminology matters because it shapes the conduct of the people who use the terminology. And if you don’t mean ‘critical’, then why say it? And if you do mean ‘critical’ then don’t be surprised when people act accordingly.

    A more careful reading of my comment might have revealed to you that I said that ‘critical thinking’ has its rightful place in the armoury of thought. Quite clearly then I am not opposed to critical thinking as such. With that in mind it should be clear that I believe critical thinking, while a useful tool, with its rightful place in our armoury of thought, is a tool that is prone to misuse. I am attacking its misuse, as well I should.

    But I went further. I proposed a model for thinking where critical thinking had its natural place as part of the whole process, but being part of a larger process produced a more balanced outcome. You fail to credit any of this, or even to consider the nature of the model I proposed.

    But a critical thinker is less liable to read another’s comments generously, because, well because that would not be critical.

    As a teacher of critical thinking

    I noticed.

  10. Alan,
    Does CT kill curiosity? Why should it? Studying good and bad arguments is an exploratory and mind-opening exercise.

    You are failing to consider the holistic process outlined by De Bono.

    Critical thinking is a tool that should be brought to bear only after the earlier processes, that I described, are completed. Bring it to bear too early and you stifle useful thought.

    The exploration and idea generation that is typical of curiosity must be given the freedom to roam widely and consider openly. Once done, you can subject its result to the careful, clinical examination of critical thinking.

    It has been shown again and again that bringing critical thinking to bear too early in the process harms the process.

  11. Hi labnut:

    OK, you accuse me of doing what I accuse you of doing! It looks like this hinges on the different ways you and I understand the word “critical”. You use the term — that “pesky little adjective” — as a pejorative: “Expressing adverse or disapproving comments or judgements” (Oxford). It denotes a Black Hat process. I use it as a term of praise: “Involving the objective analysis and evaluation of an issue in order to form a judgement” (Oxford). It denotes a Blue Hat process.

    As far as the dictionary definition goes, then, we are both right.

    The background for your take on the matter is (I think) business decision-making, about which I know nothing. You may well be right that de Bono is the way to go in that context.

    My background is pedagogical (you noticed). I’ve read thousands of student essays where the student exhibits almost no capacity to construct and evaluate arguments. This is no great surprise, since they have never been taught anything about argumentation skills (or “critical thinking”, as I would call it). The only academic discipline that directly addresses this deficiency has been called “critical thinking” since about 1970. Today only a very small fraction of students (school, college or university) make its acquaintance. I see the teaching of this subject as one of the useful contributions that philosophy has to offer the wider world.

    I get a bit irate — for which I apologise — when people seem to be attacking the subject in ignorance of what it is or does.

  12. The only thing I know of de Bono is that in an essay he attacks Aristotle based on a complete misrepresentation of what Aristotle said. That doesn’t recommend him much to me as a critical thinker.

  13. de Bono seems like a very interesting figure with quite a number of impressive accomplishments across disciplines. Reminds me a lot of Raymond Tallis of whom I am an admirer.

    Maybe you should know more than just one thing about him before deciding whether or not he is “recommended.” Certainly, critical thinking would suggest as much.

  14. There are many sources of information about critical thinking and we can’t follow them all up. Perhaps this was the only stupid thing he said, I don’t know – but I don’t really have time to follow that up, or the motivation given the extravagant stupidity of what he said.

    If I want to learn about building and they tell me of someone who can teach me about it and I come across him tightening up a screw with a chisel, then I am going to have to come to certain conclusions. If there is no shortage of people from whom to learn then I might pass on.

    He may have quite a number of impressive accomplishments across disciplines, but if I am looking for advice on critical thinking then he does not seem to practice the kind I am interested in.

  15. Robin,
    The only thing I know of de Bono is that in an essay he attacks Aristotle based on a complete misrepresentation of what Aristotle said.

    I’ve said many foolish things. I hope I will be judged on the totality of who I am and what I have done, not by the foolish things I have said. If the Twitterverse is anything to go by, it is a vain hope.

    but if I am looking for advice on critical thinking then he does not seem to practice the kind I am interested in.

    That depends on what you are interested in. His interests were in encouraging the development of creative problem solving, something that mattered a great deal to me during my corporate incarnation, because that lay at the heart of my work. The starting point for me was always an unbridled curiosity where I explored the entire landscape of the problem domain, and usually beyond.

    This seeded my thinking process and created the space for new ideas to take hold. Critical, or Black Hat thinking, could not be applied at this stage because the new ideas would die stillborn. They needed to be tended, developed and multiplied without negative or critical thoughts. Finally, there would come a culling process where the most viable idea was selected. This was Black Hat thinking. Then would come the dreaded adverse consequences analysis (what could go wrong, wrong, wrong…), which was really the end stage of Black Hat thinking. I call it the dreaded adverse consequences analysis because by that stage one has become so emotionally wedded to one’s chosen concept that one could not admit the possibility of failure. But it had to be faced.

    Then came the realisation process. I had a simple rule of thumb – the more people that opposed it the better the concept must be. That was because a good idea always radically disturbed the network of alliances and power balances. My rule of thumb never failed me 🙂

  16. Hi Alan,
    thanks, that was a good analysis.

    I’ve read thousands of student essays where the student exhibits almost no capacity to construct and evaluate arguments.

    I feel for you. Those same students enter the corporate world and write equally dreadful memos, reports, analyses and proposals. In desperation I would require them to never give me anything longer than one page. This requirement forced them to think very carefully about what it was they were really trying to say and then to say it clearly and simply. But it was an uphill struggle. What has happened to the student world? Why are they so bad? When reading these documents I was forced to wear my Black Hat, just as you are.

    The important thing to remember is that you and I were then dealing with the end product, where the Black Hat was appropriate. The other Hats apply to the earlier stages where the document is conceived, developed and prepared.

    After reading your comment I had a wry moment of self-realisation. I had failed my own process! I had failed to use the other Hats while considering your comment. If I had I would have realised that your approach was entirely reasonable when seen from the perspective of pedagogy.

    I see the teaching of this subject as one of the useful contributions that philosophy has to offer the wider world.

    It is certainly a valuable contribution, if tempered with the broader perspective of the six Thinking Hats.

    I think that philosophy can learn a great deal from jurisprudence. The finest practical arguments today are found in court judgements. I can do no better than recommend making a habit of reading court judgements. Moreover, they are fascinating because in them you see the human drama writ large. The legal process represents the best means we have today of arriving at the truth in human affairs, and its judgements are the finest form of argumentation.

    My favourite author in this regard is Douglas Walton and a book I can recommend (among others) is Legal Argumentation and Evidence. You should teach a course in this. I suspect it would be an eye-opener for your students.

  17. Alan,
    Here is a delightful example of Six Hat Thinking, reported in the South China Morning Post
    http://www.scmp.com/news/hong-kong/article/1065279/first-national-education-classes-taught-tai-kok-tsui-primary-school

    The problem.
    How to teach attitudes to the flag to children, a hugely contentious subject in Hong Kong.

    Sze, who usually teaches Chinese and is co-ordinating the programme, used four hats to symbolise objective, critical, emotional and optimistic ways of thinking about any given topic – a technique derived from Edward de Bono’s book Six Thinking Hats.

    He first played a video of activists waving the national flag on one of the Diaoyu islands and another of a man burning the flag.
    This was followed by video of a flag-raising ceremony in Golden Bauhinia Square and the same ceremony after the National Day ferry disaster in which the flag flew at half mast.

    He then asked the pupils to put on one of the coloured hats and try to use the mode of thinking it represented to compare the different contexts in which the flag appeared.

    “I wanted to show the pupils that the flag itself is neutral, but it is the context that it is in that gives it meaning,” he said.
    “Everything has hard facts, but why do we add meanings to it? I want them to have more than one way of looking at an issue, which includes being critical and emotional.”

  18. Let me flog the dead horse one more time 🙂

    A perennial subject, which philosophers debate, is the choice of a valid ethical framework. Thus we have deontologists, virtue ethicists, consequentialists and relativists. Each system has its advocates, but need they be exclusive?

    We can apply Six Hat thinking to this problem and call it the Six Moral Hats. This gives us the following framework:

    1. white hat – moral rules and duties, deontological thinking;
    2. red hat – virtues, agent based thinking, attitudes, dispositions;
    3. black hat – outcomes, examining the consequences;
    4. green hat – principles, focussing on the context, autonomy, justice, beneficence, non-maleficence;
    5. yellow hat – care, focus on relationships and power structures;
    6. blue hat – evaluative, assessing, all things considered point of view.

    When considering a complex moral issue we should don each of the six moral hats in turn.

    In this way we examine the full moral problem domain allowing us to arrive at a considered decision that properly balances the tensions between:
    – desire and virtue,
    – feeling and thinking,
    – self and other;

    Surprisingly, it was a Jesuit philosopher who first prompted this approach during a series of lectures he gave to our parish congregation.

  19. At the risk of sounding like the Mad Hatter, let me add one more observation.

    It has been argued that the Six Thinking Hats are incomplete. There should be a seventh hat, a Purple Hat, which stands for the valuing or ethical perspective, since almost all issues have an ethical dimension. When considering the Purple Hat one could then expand on this, if necessary, by using the Six Moral Hats.

    I will let the Mad Hatter have the last word

    If I had a world of my own, everything would be nonsense. Nothing would be what it is, because everything would be what it isn’t. And contrary wise, what is, it wouldn’t be. And what it wouldn’t be, it would. You see?

    He was a philosopher.

    `I don’t think–’ said Alice, very much confused.
    `Then you shouldn’t talk,’ said the Hatter.

    Taking that advice, I retire.

  20. Refreshing to read.

    And I agree in all major parts with your analysis.

    = = =

    I also agree it’s inaccurate to frame reason or cognition as arising from the brain and conation or affection from the body; and I have the same kind of trouble with body-soul dualism.

  21. Hi labnut:

    We agree on so much! Including, obviously, a love of “Alice”.

    Doug Walton is the leading guru in the critical thinking world. I once invited him to my university to give a lunchtime talk on CT and its values. Derridean postmodernism was then still in fashion, and its main advocate came along. Afterwards he said: “But that’s exactly what we are against!”

    I’m sure you are right about legal judgements being a great model of good argument. I don’t know Walton’s legal reasoning book. Thanks for mentioning it. Incredibly (to me), not even law students get taught good argument skills. Philosophy could contribute to law courses (as it once did) but in my country jurisprudence and legal reasoning have been squeezed out of the law programs. I used to teach critical thinking to nursing students but that got cut out of the program.

    (I no longer work in philosophy. I work in a university that has no philosophy.)

    Your advice on one-page summaries is spot on too. Learning to condense ideas down to their minimum is such a key skill.

    Seeing things not in black and white but in various colours also defuses conflict and tensions, as in your Chinese flag story.

    This is turning into a love-fest. My Black Hat is not working. Let me say something negative about de Bono. He’s a hopeless historian. Robin is right about that. He created a cardboard caricature he called the Gang of Three (Socrates-Plato-Aristotle). Quote:

    “What happened was, 2,400 years ago, the Greek Gang of Three, by whom I mean Aristotle, Plato and Socrates, started to think based on analysis, judgment and knowledge. At the same time, church people, who ran the schools and universities, wanted logic to prove the heretics wrong. As a result, design and perceptual thinking was never developed. People assumed philosophers were doing it and so they blocked anyone else from doing it. But philosophers were not. Philosophers may look out at the world from a stained-glass window, but after a while they stop looking at the world and start looking at the stained glass.”

    The mind boggles.

  22. Alan,
    that Gang of Three quote has left me gobsmacked. It is real Mad Hatter stuff. I smiled at the thought of philosophers only looking at the stained glass. His chain of reasoning was all wrong but somehow he came to the same conclusion as many present day scientists. Philosophy has an image problem and Dan-K will tell you that it is partly of its own making.

  23. Apologies for a long delay in replying, again real life intervened.

    The essay sets out symptoms and gives a kind of prognosis for civilization if a certain *form* of rationalism were allowed to fester and spread. It is the rationalism of Plato and Descartes, which, in promoting our ability to reason, has fetishized it to the point of not only personifying it (making it an entity) but identifying the self with that ability. While damaging on its own, this is compounded by problems stemming from skeptics operating as frustrated rationalists.

    My argument is that while I agree with much in the essay, a few of the examples seemed not to fit into the picture. Running with the medical theme, just because these examples might show a few of the symptoms, that does not make “excessive reason” the correct diagnosis.

    One example I gave was from the Self-Made essay. While transgender activists might display an interest in autonomy, is there really no other path to such strong autonomy than rationalism? Dan said the link was indirect, but that only supports the question I was raising. It seems there may be greater, more pertinent contributions to the issue than excessive reason (if there is any connection at all) on their part. One could say that we would still have the issue, regardless of reason, though it might look slightly different.

    Labnut’s reply supported my point further still…

    “Reacting to your concern about faeries, here is a thought for you, straight from the mouth of Chesterton. He pointed out that free thinkers were not free because so many forms of thought were forbidden to them!”

    I wasn’t concerned with faeries, and was using Chesterton to question Dan’s concern with transgender activists. It seemed they had the same project as Chesterton, and it is what you just stated. For them, the belief in some Platonic ideal of Man and Woman (a rationalist account), has strangled freedom of thought about humanity, sexuality, and gender (all of which are much mushier) for so long. The idea one could identify sex by what is between a person’s legs, and go on to ascribe further features of the person based on that, is thoroughly rationalist in aspect. That pure binary mode ignores all the variations from the level of genes, through physiology, through feelings, through behavior. Mandating that a person manifest some extreme clinical symptoms, identifiable by doctors, before taking that person’s claims about their own feelings seriously is (I thought) the same kind of reliance on technocracy being railed against in the essay here.

    I would say that for some advocates it is clearly wish fulfilment (and we could look to trans-racial as another pointed example of that), but wish fulfilment is described in the essay as anathema to rationalism!

    Whether or not one sympathizes with them or their project, it just does not seem to have come from a place of rationalism, or skepticism as frustrated rationalism. It simply treats sex and gender in a way where it is *not* forbidden to think beyond Platonic boxes, and entertains both the sensual and social aspects of both.

    Another issue has also come up in discussion, about critical thinking, which matches the other concern I had: the avoidance of discussing religious aspects of rationalism. If “religious” is too loaded, perhaps we can switch it to “spiritual” or “supernatural”. I certainly was not talking about organized religions alone. The point is that this rationalism involves assumptions or inclinations that are religious in nature. Talking about excessive *reason*, and connecting Plato and Descartes with modern scientists and atheist philosophers alone, without any discussion of the religious elements within *rationalism* or of the religious theologies and philosophies that rely on it (yes, even fundamentalists), indeed while using religious writers as counterpoints, opens up an interpretation that reason or rationality itself leads to the problems under discussion: that the more rational one is, the more likely one is to end in the gutter.

    I’m not saying this was Dan’s intention, but it’s a vibe that could be taken from it because of the lack of discussion on that aspect of rationalism.

    When I was taught about rationalism, there was a point made to distinguish between the use or promotion of reason and the position of *rationalism*. Science requires the former, but not the latter. One area where Dan and I have enjoyed great agreement is on Hacker’s critique of neuroscientists who have inadvertently perpetuated Cartesian (and so rationalist) narratives. But Hacker’s work makes my point. It is not that the scientists were engaged in too much reasoning, only that they started with some false assumptions which reasoning from those points compounded. Indeed, if any of these scientists had applied reason more meticulously, as Hacker does, rationalist errors may have been avoided.

    Here is an interesting, if quirky speech by Hacker examining the “mind-body” problem. The most important part for this discussion probably starts about 44 min into the video… when Aristotle enters the picture. He walks people (slowly) through the error made, and as a side discussion gives an intriguing account of reason and rationality (which is where I am coming from) and how reasoning along a certain line can solve the task of removing the Cartesian error. From this position, it is hard to fault (or forewarn against) embracing rationality, rather than reasoning (lazily or excessively) from flawed, unexamined assumptions… which is to say *rationalism*.

    I hope this makes things more clear.

  24. It seems like there may be greater contributions, more pertinent contributions to the issue than excessive reason (if there is any connection at all) on their part. One could say that we would still have the issue, regardless of reason, though it might look slightly different.

    = = =

    My argument was — and is — that radical autonomy arises, intellectually, out of the rationalist rejection of tradition, custom, and the idea of givens more generally.

    On the trans front, which I really don’t want to talk about anymore and which isn’t what this essay is about, dimorphism is hardly some Platonic ideal.

  25. Hi Dan,

    “My argument was — and is — that radical autonomy arises, intellectually, out of the rationalist rejection of tradition, custom, and the idea of givens more generally.”

    I’m not arguing that that isn’t the case, or has not been the case for many things. I would agree in general with your description. I am just raising the question of whether that is the only way to get there. It would seem that one could end up with the same or a comparable interest in autonomy, or rejection of tradition, custom, or givens, without rationalism.

    Just because we see XYZ, we can’t necessarily say it is due to rationalism, or emerges from rationalism, just as XYZ does not always allow us to discriminate between one medical problem and another. More refinement and evidence (for specific cases) is necessary.

    “On the trans front, which really don’t want to talk about anymore and which isn’t what this essay is about, dimorphism is hardly some Platonic ideal.”

    I was not talking about simple, physiological dimorphism. That project is not about simple, physiological dimorphism. Although I would point out from a medical standpoint, even physiological dimorphism in humans is not always so simple. It certainly is not for the patients I’ve been working to help, and there are many more.

    But we can drop that subject. The trans-racial case is equally problematic, and if anything an obvious case of wish fulfilment, if not delusion. It is hard to see that as coming from a hard rational intellectual position.

  26. I am just raising the question if that is the only way to get there.
    = = =
    Certainly not. I was speaking of what I take to be the actual etiology of the idea in the Western tradition.

    Hard cases make bad law and worse medicine. Dimorphism is certainly the overwhelming norm in humans, and that goes for both sex and gender. There isn’t much more norm-like than 99%.

  27. Hi Dan,

    “Certainly not. I was speaking of what I take to be the actual etiology of the idea in the Western tradition.”

    Right, and why we agree on the main. My concerns were *only* about that covering some of the given examples.

    “Hard cases make bad law and worse medicine.”

    I’m not sure what you meant by that.

    “Dimorphism is certainly the overwhelming norm in humans and that goes for both sex and gender. There isn’t much more norm-like than 99%,”

    Humans as a species are sexually dimorphic. But medicine does not deal with the species as a whole, it deals with human individuals, which means having to understand and deal with statistically rare cases because they exist. That is why I said: “Although I would point out from a medical standpoint, even physiological dimorphism in humans is not always so simple.”

    Physiologically, sexual dimorphism is actually higher than what you stated. I found a page on intersex and it may be as low (totalling different conditions) as 1 in 2000, or 0.05%, of births where sex specialists are required for determination. Taken individually, some cases are as rare as 1 in 130,000 births. But as the site notes: “…a lot more people than that are born with subtler forms of sex anatomy variations, some of which won’t show up until later in life.” The syndrome I study includes that later group.

    For those interested here is the link: http://www.isna.org/faq/frequency

    From this, and an understanding of our population levels, it should be clear that “good medicine” means having to deal with this reality. In the US alone, intersex people (or people facing such issues) constitute the population of a small city. As Labnut cited in an earlier thread, it even became a *medical* issue at the sports level.

    Because of this, one issue faced by hospitals (1 in 2000 births) is the problem of having to tell parents that they have had a child that is a healthy *human*, but whose sex is indeterminate, or determinably neither male nor female. But because of an overriding Platonic ideal in western society of Man and Woman, based in part on common physiological dimorphism, healthy *human* is not enough for most parents. And so children are *assigned* a physiological sex and *forced* through surgical and chemical means to fit the rational conception of what it is to be a Man or a Woman, rather than being allowed to be one or the other (or neither) without the physical perfection demanded by the *ideal*. This choice comes at no small cost for those children, including actual physical suffering they would not otherwise need to experience. And all that suffering may be for nothing, as the work may not be permanent. Thus, this reality brings with it very real ethical considerations.

    If we get to gender, by which a whole slew of things can be meant, I am uncertain of the numbers. I’m not sure they can be known. However, I do not think it is wise to limit it to those suffering from clinical gender euphoria. That is to engage in the technocratic issue related to rationalism that I also have a problem with. Regardless of numbers, and limiting it to clinical cases, this adds to intersex cases.

    We can assume for sake of argument that matching sexual dimorphism of both sex and gender is 99%.

    The claim is not that not-easily-classified people are more than 1%. The claim is that they exist. The question is how this is dealt with. Up until now it has been dealt with by killing, medically altering, hiding away, or simply ignoring their existence, in order to perpetuate a Platonic ideal of Man and Woman, starting with and linking expected phenotype to expected perception to expected roles.

    I’m not that interested in the extremes of the transgender movement, but I think there is merit in abandoning how things have been going up until now, and dealing with the reality of how people actually live and experience life. Not the statistical majority, which is simply an intellectual exercise in bean counting, but reality which means everyone along the bell curve.

    As someone associated with medicine, I would say that doing otherwise is to engage in willful ignorance, which in modern times is no longer tenable.

  28. Hi Dan, after discussing the medical I remembered something of interest (and humorous) regarding the scientific.

    In the past, some ethologists (who study animal behavior) were so caught up with rational concepts of purity, a disdain for biology and biological urges, that they failed to report homosexual or polygamous sexual activity of animals they were studying. They would write it off as belonging to some rationally explainable activity, or not report it at all, because they didn’t even want to admit their animals had… well… animal urges.

  29. Dwayne, the dysphoria cases are the only real ones, as far as I am concerned. The rest are examples of precisely the phenomenon I am talking about — autonomy run amok — and I think it’s not a good thing.

    Again, I reject the claim regarding “Platonic ideals.”

    As for what is “tenable,” a far larger number of people agree with me on this; indeed, an overwhelming number do; and that includes plenty of people in the psychological and psychiatric professions.

    But we clearly are never going to convince one another so let’s drop it.

    Heheheh. Obviously when I wrote “gender euphoria” I meant “dysphoria”. That’s a pretty embarrassing mistake for me, though I suppose that is what happens when I am writing while experiencing my own general euphoria (drinking some good whisky to celebrate submitting my next journal article).

    ………..

    Labnut,

    “It is interesting that Dan-K made the connection with medicine where it is evidently even more true.”

    I understood the phrase. It is well known. What is not clear is the idea that it has relevance to medicine, or what would be meant by that, much less how it could be “even more true.”

    That X might not be generally toxic to humans, and perhaps generally beneficial, would make for bad laws banning X because of the fact that some people can be poisoned by it (perhaps allergy).

    However, in a medical setting it would be very much of interest to have that knowledge about X and keep it in mind before administering any medicine or conducting scientific experiments. This is why they usually try to get as much particular information about you as possible before doing anything, rather than just winging it based on statistical averages.

    I mean, this is hilarious. I’ll leave off medicine’s early failure to adequately understand women’s health needs because people thought humans could be understood by studying men, which is parallel to what we had been doing within each sex by assuming nothing really differs from the averages.

    What I will talk about is that medicine and medical research have an interest in “hard cases” because they can help us understand why and how things work in normal cases. I would expect Dan T would back me up on that. And it doesn’t even have to be the extremes of things going wrong. A person or population that does very well where most fail is also of great interest.

    And obviously you don’t have to treat people who don’t need medical intervention, if they will be fine because they are a rare case.

    The worst *medicine* is assuming every individual fits some statistical average… pretending hard cases don’t count… unless it is an emergency and one can’t get better info.

    …………

    Dan,

    “the dysphoria cases are the only real ones, as far as I am concerned. The rest are examples of precisely the phenomenon I am talking about — autonomy run amok — and I think it’s not a good thing.”

    Bare assertion noted, and rejected. I would think I have the advantage in this case, having known and dealt with actual people you have passed judgement on. I encourage you to talk with actual transgenders (or activists) and see what it is based on. For some you might be right, but certainly not all. My guess would be less than 1%.

    “Again, I reject the claim regarding “Platonic ideals.””

    Ok, and again I will reassert… and add some interesting info along these lines.

    Remember this ideal (and the project being discussed that rejects such binary restriction), involves more than just physiology and gender (self-sensation) and extends to sexual orientation and behavior (which can include sex or general activities). This is why you were likely being unfair regarding the 20% statistic about Millennials, when it appeared to include activists moving beyond sex and gender. If we go just to orientation, then the dimorphism certainly drops below 99%, and further still if we include behavior.

    Where this intersects with culture and medicine in an interesting way is Iran. While they are against homosexuality (where physiology and gender do not match expected orientation and behavior), they are okay with “treating” such cases as transgender and publicly funding sexual reassignment. While great for actual transgenders, it is having the effect of pressuring gays into aligning physiology and general expected behavior to match orientation, even if it stands against their gender (self-sensation).

    Here is one documentary on this, though for those interested I recommend docs following people still in Iran: https://www.youtube.com/watch?v=Wg51RnpGn9k

    “As for what is “tenable,” a far larger number of people agree with me on this; indeed, an overwhelming number do; and that includes plenty of people in the psychological and psychiatric professions.”

    Let me remind you that I work in a hospital, with a department for transgenders as part of my overall department. The patient group I work for has to deal with physiological impairment of dimorphism (on top of other things). I have an interest in it and so am generally familiar with it. That is on top of having known intersex, transgender, and so-called gender-fluid individuals. Blanket appeals to some overwhelming number of people who allegedly agree with you do not constitute evidence, much less an argument.

    That is especially true when appealing to a field that has been notoriously wrong, and behind the times, on sex and gender issues, working largely as a societal check. It is only recently that homo- (and bi-) sexuality was pulled out of the DSM as a disorder. That is to say, within my lifetime I have seen my “clinical condition” go from a disorder to normal. At one time psychologists and psychiatric professionals believed masturbation was harmful and orgasms in women a sign of trouble (or useful as a “treatment”).

    If you watch the video I linked to you can hear someone discuss how they were convinced by psychologists that they were trans, not gay, and pressured into sexual reassignment.

    So I’m taking your claim with a large grain of salt. We’ll see where this overwhelming number stands about 50 years from now.

    “But we clearly are never going to convince one another so let’s drop it.”

    I’m willing to drop it if you are. Heck, I wouldn’t even be talking about it, except you have used it as an example, and it is inaccurate.

    One thing I will note, earlier in the thread you mentioned having problems with people outside of philosophy discussing philosophy. And yet you are discussing medicine, and transgenders, with someone who is connected/familiar with both, though you are not. Don’t you think my viewpoint might have some greater weight on this specific issue?

  31. Hi Dan, I forgot to mention something important. This will be my last comment in the thread anyway, as I’m going to take a short vacation (before the next article starts)…

    “the dysphoria cases are the only real ones, as far as I am concerned.”

    You can of course hold this position. The reason that I argue it is a bit mistaken is that it would be equivalent to not believing people claiming to be depressed unless they are diagnosed with severe clinical depression.

    Clinical gender dysphoria is a strong, consistent, diagnosable disconnection which is normally problematic for the person. There is no reason to believe that there are not also people who have such disconnects, just not at so great a level or persistence.

    There are people, sadly even within LGBT, who might go further than you and claim a person is not really trans unless they go through with the complete surgery (top and bottom) and sustained chemical treatment. Again, this can be held, but seems mistaken.

    Especially as there are intersex people who are non-dysphoric but whose physiology simply is not distinct enough, or switches on its own. The first intersex person I knew (decades ago) had been reassigned, and then, because of age and tolerance, the chemicals no longer worked and the person began transitioning back. That had to be very awkward for the person, and it did not start with any psychological mismatch.

    “autonomy run amok”

    But this is not based on reason, some intellectual construct disconnected from conation or societal concerns, and in some cases it is clear wish fulfilment. I have not seen you deal with that issue. If meeting partial criteria is enough, and holding some things you consider anathema is not exonerating, I do not see how you are applying your standards consistently. How does Buddhism or magical thinking get a pass?

    ………..

    As I said, by default this will be my last comment in this thread, so you can have the last word.

    If you seriously don’t want to have such subjects discussed in the future, I might suggest not using them as examples? We’ve both written enough that I think people can (or at least will) have decided which arguments held more weight. I’m comfortable with that. But if I see it being referred to again, as if no challenge has been presented, I will challenge it again.

    You could at least qualify commentary by saying “it is possible” or “one interpretation may be” or even “in some (extreme) instances of”. That would certainly reduce my concern about commentary that seems overly broad and inaccurate.

  32. Dwayne, my appeal to the opinion of others was in response to your claim regarding what is “tenable,” so you know very well that I wasn’t making any argument from authority.

  33. it is a bit mistaken is that it would be equivalent to not believing people claiming to be depressed unless they are diagnosed with severe clinical depression.

    = = =

    Great analogy and exactly right. Though it makes my point, not yours.

    Have a good vacation.

  34. Dan-K,
    My argument was — and is — that radical autonomy arises, intellectually, out of the rationalist rejection of tradition, custom, and the idea of givens more generally.

    That kind of rationalist rejection is like cutting oneself off at the ankles. To reject ‘givens’ is to reject 99% of all the knowledge one possesses, because 99% of any individual’s knowledge is vicarious. The overwhelming majority of our knowledge comes from other people; only 1% comes from direct experience. As social animals we gain enormously by accepting knowledge from other people, since that multiplies our effectiveness. Thus we are finely tuned to assess this vicarious knowledge and have a variety of strategies to assist us. These strategies are:

    1) Authority. We recognise the source as a trusted authority.
    2) Tradition. A body of knowledge has been accepted by enough people over a long enough period that we trust it.
    3) Received opinion. Our social environment agrees on the truth of certain matters.
    4) Trust. We trust the person imparting the information.
    5) Agreement from experience. The information seems credible because we can relate to it from our experience.

    We assemble our body of internalised knowledge from vicarious sources by using one or more of these strategies. There is no-one, except perhaps a lonely mountain hermit in his cave, who does not assemble his knowledge in these ways. Consequently, the radical intellectual autonomy that Dan-K mentions is not practised by anyone. All that happens is that different groups apply different filters to the vicarious knowledge available to them. Some people claim to have more stringent, more discriminating filters, but they are still crucially dependent on authority, tradition, received opinion, and trust. To claim otherwise is simply disingenuous.

    What weakens their claim to radical intellectual autonomy even further is that they mostly accept their filters from their environment. More often than not, the very claim to be different is itself copied from others, thus defeating it.
