by Daniel A. Kaufman
In the philosophy of mind, apart from sensations, with their perplexing “qualia,” intentional states, the so-called “propositional attitudes,” have proven materialism’s biggest headache. Materialism’s greatest hope, functionalism – and particularly its computational variety – ran into trouble with the propositional attitudes by way of the Chinese (or at least, one of their Rooms), and disputes over the status of the folk psychological explanations into which they figure – Science! Bad science! Not science! – threaten to see them “eliminated” altogether. Hence the star of this edition of Course Notes, Paul Churchland, who has made it his mission to get us to believe that there are no beliefs.
It’s silliness of the highest order, of course, but there is a quite serious, important point lying underneath regarding the (genuinely puzzling) question of how we should think about intentionality and intentional explanations, which is why Churchland’s landmark paper, “Eliminative Materialism and the Propositional Attitudes,” appears in the place that it does in my Philosophy of Psychology course, after “Fodor’s Guide to Mental Representation” (1) and before Wilfrid Sellars’ “Philosophy and the Scientific Image of Man.” (2)
First of all, let’s be clear about what we mean by “folk psychological explanations.” When inquiring as to the reasons for someone’s behavior, say getting on a certain bus, one may find oneself being told something like the following:
- I wanted to go to the mall.
- I believed this bus was going there.
We often account for our actions by citing various beliefs or desires that we have. These are “intentional” in that they have semantic content (desires have satisfaction conditions – my desire to go to the mall can be satisfied or remain unsatisfied – and beliefs have truth-conditions – my belief that the bus goes to the mall is either true or false), and this content is determinative with respect to which actions a person performs.
But what sort of account is one giving, when one offers a folk psychological explanation for something someone has done? Specifically, should we think of these as causal explanations and of folk psychology, consequently, as a scientific theory?
Fodor, for one, thinks we should. Think of intentional explanations as causal (and of folk psychology as a scientific theory), that is. From “Fodor’s Guide to Mental Representation”:
If you ask the Man on the Clapham Omnibus what precisely he is doing there, he will tell you a story along the following lines: I wanted to get home (to work, to Auntie’s) and I have reason to believe that there – or somewhere near it – is where this omnibus is going.
When the ordinary chap says that he’s doing what he is because he has the beliefs and desires that he does, it is reasonable to read the ‘because’ as a causal ‘because’ – whatever, exactly, a causal ‘because’ may be. At a minimum, common sense seems to require belief/desire explanations to support counterfactuals in ways that are familiar in causal explanation at large: If, for example, it is true that Psmith did A because he believed B and desired C, then it must be that Psmith would not have done A if either he had not believed B or he had not desired C. (Ceteris paribus, it goes without saying.)
Anyhow, to a first approximation the commonsense view is that there is mental causation, and that mental causes are subsumed by counterfactual supporting generalizations of which the practical syllogism is perhaps the paradigm. (3)
Now, my regular readers will know that I disagree with this; that I do not think that the reasons we give for our actions are causes, in the scientific sense; that I do not think folk psychology is a scientific theory; and that I think these sorts of explanations serve an entirely different function. (4) In a sense, then, I find Fodor’s work in this area a bit puzzling, insofar as he otherwise has demonstrated that he understands – and accepts – the idea that inquiry (scientific and otherwise) may be fundamentally dis-unified; that different types of accounts of different types of phenomena may be essentially autonomous, insofar as they are un-assimilable to one another. (5) Moreover, it’s precisely the claim that intentional explanations are causal and folk psychology a scientific theory that has opened it up to the critique of eliminative materialists like Churchland, as without this starting point, their critique simply does not apply. Of course, there are any number of reasons why Fodor (and to be fair, most philosophers of mind) moved in this direction – science envy, stubborn, semi-conscious unity of the sciences intuitions, etc. – but they really aren’t germane to these Course Notes, which are focused on the lectures I did on Churchland’s paper.
Churchland begins by making the case that intentional explanations are causal and folk psychology is scientific, but as just indicated, he needn’t have, as his opponents already have made the case for him. The question then becomes how good a scientific theory it is, and it is here that Churchland pounces. He maintains that there are several criteria by which one judges the merits of a scientific theory:
A. The ratio of its explanatory successes to its explanatory failures.
B. Its historical “arc.”
C. How well it coheres with what other sciences are saying about the same subject matter. (6)
In every one of these respects, Churchland maintains that folk psychology fares poorly. While it might do a good job at explaining our mundane, daily goings-on, it tells us nothing about a huge range of mental phenomena:
As examples of central and important mental phenomena that remain largely or wholly mysterious within the framework of FP, consider the nature and dynamics of mental illness, the faculty of creative imagination, or the ground of intelligence differences between individuals. Consider our utter ignorance of the nature and psychological functions of sleep, that curious state in which a third of one’s life is spent. Reflect on the common ability to catch an outfield fly ball on the run, or hit a moving car with a snowball. Consider the internal construction of a 3-D visual image from subtle differences in the 2-D array of stimulations in our respective retinas. Consider the rich variety of perceptual illusions, visual and otherwise. Or consider the miracle of memory, with its lightning capacity for relevant retrieval. On these and many other mental phenomena, FP sheds negligible light. (7)
Meanwhile, folk psychology’s historical arc is a story of contraction and stagnation. Human beings used to offer intentional explanations for everything in nature (here, Churchland is referencing primitive, animistic accounts of weather, the movement of water, etc.), but now such explanations are only applied to human activity, and they really are no different from or better than they were in Aristotle’s day. (8) Finally, all the other sciences in which human beings are an object of study are converging on a common picture of human nature, in non-intentional terms, a picture in which folk psychology has no place:
If we approach homo sapiens from the perspective of natural history and the physical sciences, we can tell a coherent story of his constitution, development, and behavioral capacities which encompasses particle physics, atomic and molecular theory, organic chemistry, evolutionary theory, biology, physiology, and materialistic neuroscience. That story, though still radically incomplete, is already extremely powerful, outperforming FP at many points even in its own domain. And it is deliberately and self-consciously coherent with the rest of our developing world picture. In short, the greatest theoretical synthesis in the history of the human race is currently in our hands, and parts of it already provide searching descriptions and explanations of human sensory input, neural activity, and motor control.
But FP is no part of this growing synthesis. Its intentional categories stand magnificently alone, without visible prospect of reduction to that larger corpus. (9)
Churchland’s view isn’t simply that intentional explanations are of limited use or that folk psychology will eventually be eclipsed by other sciences. It’s that there are no intentional states at all. No beliefs. No desires. The situation with folk psychology, he thinks, is much like it was with the caloric theory of heat, according to which it was the presence of a fluid inside bodies (caloric fluid) that determined their temperature. Of course, we now explain temperature in terms of mean molecular energy, and not only was the caloric theory discarded upon this realization, but the caloric fluid as well. We think there is no such thing.
This inference from the falsity of a theory to the non-existence of its ontology presumes a Quinean account of ontological commitment, according to which what exists is a matter of what our best scientific theories quantify over. (10) This account, while influential, is clearly too narrow to serve in a general way, and seems most suited to the postulated entities one finds in Physics. As John Greenwood points out in his excellent paper, “Reasons to Believe,” not only are there any number of cases where the falsity of a theory has not led to the elimination of its ontology (the rejection of epicycle theory in astronomy did not lead anyone to conclude that there are no stars or planets), but there also are plenty of cases where the relevant objects or processes are identified first, with the theorizing about them – often hesitant and tentative – only coming after (“Hey! Here’s a weird something … wonder what it does?”):
“Marsh vapors” exist but do not cause malaria or other tropical fevers [as was once thought]…We recognize the existence of Golgi bodies and myelin sheaths, although we presently do not have much understanding of their biological and neurophysiological function. [T]he most obvious examples come from biology and neurophysiology, where we recognize many structures and processes whose causal … role we are trying desperately hard to understand. When we abandon tentative hypotheses about the causal-explanatory role of such phenomena, we never for a moment consider their ontological elimination, as we are supposed to do in the case of folk psychology. This is just as well for biology. One of the earliest documented forms of folk psychology is biological in nature. According to the folk psychology of Plato’s Timaeus, our emotional life is to be explained in terms of the swelling of the heart, and the function of the liver is to reflect the contents of our thoughts. This has long been demonstrated to be false, yet few have rushed to conclude that there are no such things as hearts or livers. (11)
The point about reduction that Churchland makes at the end of the earlier quotation from his paper indicates the way in which he wants to distinguish the caloric fluid case from that of hearts and livers — for him, reducibility is an indicator as to whether the ontology of a bogus science might play a role in a viable one – but it doesn’t really matter. The bigger problem with ontological elimination in the case of folk psychology is that one cannot even describe the subject matter of much of psychology without intentionality, i.e., representation. As Greenwood observes:
Many practicing psychologists are not particularly concerned with the explanation of human behaviors or physical movements per se. They are instead concerned to provide empirically supported explanations of socially meaningful human actions such as aggression, dishonesty, helping, child abuse, and suicide. They are concerned with the explanation of those behaviors that are constituted as human actions by their intentional direction and social location. [My emphasis] To abandon the ontology of folk psychology would be to abandon the very subject matter of much psychological science. (12)
As I indicated in my essay, “Representations, Reasons, and Actions,” actions are not identical with motor movements. (See note 4) Moving my arm in a certain way does not, in itself, constitute an assault. The latter is an action, not merely a set of motor movements, and depends on my representing my victim as in some way deserving of my attack (i.e. as a villain or foe or what have you). Moving my arm in other ways does not, in itself, constitute giving charity. The latter, again, is an action, not merely a set of motor movements, and depends on my representing the receiver of my largesse as in some way deserving of help. In both cases, the relevant action is only characterizable in intentional terms, and it is actions like these that psychologists are interested in explaining, not mere motor movements, for which entirely neurophysiological causes suffice. (13)
Churchland’s ontological eliminativism, then, is not only dependent upon a conception of ontological commitment that is, at best, of limited applicability, but is grounded in basic confusions regarding the difference between actions and events and the subject matter of much if not most psychology.
Notes
(2) http://selfpace.uconn.edu/class/percep/SellarsPhilSciImage.pdf
(3) Jerry Fodor, “Fodor’s Guide to Mental Representation,” p. 272.
(4) https://theelectricagora.com/2017/07/16/representations-reasons-and-actions/
(6) Paul Churchland, “Eliminative Materialism and the Propositional Attitudes,” p. 73.
(7) Loc. cit.
(8) Ibid., p. 73.
(9) Ibid., p. 75.
(10) W.V.O. Quine, “On What There Is” (1948)
http://www.uvm.edu/~lderosse/courses/metaph/OnWhatThereIs.pdf
(11) John Greenwood, “Reasons to Believe,” in John Greenwood, ed., The Future of Folk Psychology: Intentionality and Cognitive Science (New York: Cambridge University Press, 1991), p. 77.
(12) Ibid., p. 71.
Comments
30 responses to “Course Notes – Paul Churchland, “Eliminative Materialism and the Propositional Attitudes””
I sometimes get into discussions with creationists, who want to eliminate evolution. So they bring their objections to evolution. But they never come up with a good alternative that could serve as a replacement for the use of evolutionary theory to systematize biology.
I see a similar problem with Churchland’s eliminativism. He wants to get rid of folk psychology and its propositional attitudes. But he fails to provide a useful alternative. And I see that as a fatal flaw in his argument.
Personally, I’m somewhat skeptical of FP. But it cannot be thrown out until there is a workable replacement. And if I’m reading it properly, that’s also part of your objection.
My reading of this is that Churchland doesn’t say there are no intentional states – “we can of course insist on no stronger conclusion at this stage” – but that FP or functionalism shouldn’t constrain hypotheses about mind.
As to FP, it is folk(s’), that is, culture-specific. For most people I know, FP includes a big role for unconscious motivation – not a big thing prior to Freud – and for “chemical imbalances” and neurology, and it doesn’t include demons, daimons, or rational immaterial or material parts that can be transferred to and from other animals.
“actions are not identical with motor movements”: “[in rodents,] neurons in motor cortices play a role in sensory integration, behavioral strategizing, working memory, and decision-making. We suggest that these seemingly disparate functions may subserve an evolutionarily conserved role in sensorimotor cognition [Ebbeson et al 2018]”. That is, motor movements are (nearly) always actions, under continuous real-time control to maximize their goal.
“aggression, dishonesty, helping, child abuse, and suicide”: all easily definable in other species. Consider Wang and Hayden’s (2018) definition of “human-like curiosity”: (1) willingness to pay to obtain information, (2) information provides no instrumental or strategic benefit (and the subject understands this), and (3) the amount the subject is willing to pay scales with the amount of information available. “Willingness to pay” sounds like FP, but an operationalised concept like that is relying on basic biology – is an animal obtaining enough calories to maintain weight, will it have an average lifespan, will it successfully reproduce, will its interactions with conspecifics support these goals? This is the empirical knowledge that underpins FP, but it is not undermined by the correctness or not of theories of mechanism.
I’m afraid I can’t agree with any of this. One can anthropomorphize animals of course, but that’s all one has done. As for Churchland, he is explicitly an eliminativist. It’s not an interpretation.
From SEP:
Eliminative materialism (or eliminativism) is the radical claim that our ordinary, common-sense understanding of the mind is deeply wrong and that some or all of the mental states posited by common-sense do not actually exist.
= = =
And from the very first sentence of the piece:
ELIMINATIVE materialism is the thesis that our commonsense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience.
= = =
So, as you can see, his ontological eliminativism is demonstrable and not really disputable.
Dan,
I agree with your interpretation of eliminative materialism (EM). The problem that I see is that this is the logical endpoint of a certain, er, ideological predisposition, pursued to its final destination. If we are nothing more than biological computers, it is difficult to reach any other conclusion. Once you choose that premise you are forced into increasingly contorted reasoning in order to reconcile the premise with the facts.
I see this from the point of view of a computer person (inevitably, since that is how I spent most of my career). The most complex programs ever built can be reduced to fundamental atomic statements, something like A = B + C. This obviously lacks consciousness, qualia and intentionality. We may start scaling up this statement to increasingly complex levels, but at no point along that complex progression is there something that suddenly and magically transforms mechanical functions into consciousness of the free willing kind. We have not the slightest or even vaguest idea of how that could possibly happen. But we desperately want it to happen so that we do not have to abandon ideological commitments.
We preserve our prior commitments by inventing a special label for this magical step, and that is ’emergence’. But applying labels is not a satisfactory causal explanation. It is both bad philosophy and bad science. And with that the whole edifice of eliminative materialism collapses.
Can we ever explain it? I really doubt it. Once again I am going to appeal to computer reasoning. Imagine for a moment that I have created a really large and complex computer program, as I have done many times. Now you come along, as a maintenance programmer and you wish to understand what I have done, so that you can fix my program or enhance it. How will you go about understanding my program?
You could follow the example of neuroscience and scan my computer system with advanced equipment that will detect the flow of electrons and the emission of heat. Do this and you will not have the slightest idea of the motivations, intentions, structure, organisation and sheer beauty of my programs (that is just me boasting). To get that you will have to read my source code, noting the variable names, function names, classes and objects. You will need to read and understand the content of each function, class and object. You will need to read my comments explaining my intentions. Do all of this, and with much effort the jigsaw will fall into place and you will understand my program.
But if you are denied access to my source code you cannot do any of this. You simply cannot understand my program because a fundamental piece of knowledge is missing, and it is not contained in my working program. And this is a good analogue for the problem of understanding the mind. You may gain good understanding of the brain but the knowledge necessary for understanding the mind is not contained in the brain. But science can only observe the brain. The knowledge for understanding the mind, since it is not in the brain, is not available for inspection by science. Science simply cannot do the job.
Let me give an example of what I mean. Imagine that I am writing a programme to filter high frequency radio signals. There are several filtering algorithms available, with speed, efficiency and functional tradeoffs. I choose one I think makes the best tradeoff. How could you possibly know which algorithms I considered and what tradeoffs were acceptable, unless you had access to my source code? Without my source code you have no chance of understanding my program. The end result does not contain the knowledge necessary for understanding the end result. It is as simple as that.
Hi Labnut. There are several issues here. One is that the hardware of the nervous system is organised differently, so the physical relations of neurons reflect function. Analysis is done at different levels simultaneously, i.e., combining functional and structural – more like clean-room reverse engineering. And consider the analysis of other complex dynamical systems, say, weather. It’s not like we can read the source code.
Hi Dan,
“_some_ or all”, “eventually be displaced” “Why folk psychology _might_ (really) be false”
And writing in the 2013 edition of _Matter and Consciousness_ “…the evidence is still ambiguous, however, and a choice among the relevant theories has not yet been made…”
Here Churchland is toeing the scientific model line, I don’t have a problem with the hypothesis that the noumenal world of mind may be quite different from what we introspect or infer. And it seems likely to me that we will design experiments to test this. Just as Metzinger makes a point of putting up testable hypotheses in his recent mental action paper.
In passing, the idea that sleep and its relationship to consciousness and cognition needs a biological rather than a philosophical or psychological explanation is trivial, it seems to me, and it seems funny that it comes up repeatedly in discussion of the 1981 paper.
David,
“One is that the hardware of the nervous system is organised differently – so physical relations of neurons reflect function.”
Examine the CPU chip through a scanning electron microscope and you will see that it is made up of interconnected transistor gates and that the physical relationships of electrons reflect function. But you will never infer from that the understanding of the program required by the programmer. No amount of reverse engineering can ever make that jump. And this seems analogous to the brain and mind.
David,
“I don’t have a problem with the hypothesis that the noumenal world of mind may be quite different from what we introspect or infer.”
The only thing we know about the noumenal world of the mind is the information we gain by introspection. That is the only reality we know. And here’s the thing, this strange thing called the mind has proven to be a spectacularly successful tool. It has freed us from the bondage of the determinism of the laws of nature, which is really quite a remarkable thing. There is not the slightest chance in eternity that my programmes can do that.
David,
“And consider the analysis of other complex dynamical systems say weather. It’s not like we can read the source code.”
That is not a valid comparison. Their outcomes are not controlled by program code.
David,
“the idea that sleep and its relationship to consciousness and cognition needs a biological rather than a philosophical or psychological explanation is trivial, it seems to me, ”
It is only trivial to you because you have made a prior commitment to biological explanations. And our inability to provide even such an explanation means that the problem is far from trivial.
Let me give you another analogy.
Last night I hit the ‘suspend’ button on my computer. It dutifully closed down in what seemed like an instant. I closed my eyes, once in bed, and the same thing happened. This morning I opened my eyes and the programme of my mind resumed, though not as cleanly as my computer’s will. Then I hit the resume button on my computer and the operation of my computer resumed cleanly from where it left off. Almost perfect, except for three programmes I am writing and testing. They did not resume cleanly.
I won’t find the explanation for this in the, er, biology of my computer. The explanation can only be found in the programme code, and right now I am puzzling over my source code, trying to work out what I have done incorrectly. Without my source code this would be an impossible job. And that is exactly the problem with understanding the mind, even where sleep is concerned.
Dear labnut.
“Examine the CPU chip … via SEM”: how about examining the running CPU via Van Eck phreaking while controlling the inputs and reading the outputs?

“information we gain by introspection”: not so – we gain much more information by (a) watching how other minds do the same things we do, and (b) experimental science. Quick: if two visual stimuli are presented to you, how far apart in time do they have to be for you to notice both, and how long does it take to make decisions arising from either? If you have a thrombotic event in your R posterior cerebral artery distribution, what might you expect to experience?

“weather”: I presume you believe brains don’t work by program code, and it would be nice to imagine that solving weather over the entire globe is easier than doing such for one brain – it may well be.

“sleep”: please – ants sleep, dogs and cats dream; the phenomenology of sleep must be combined with other lines of evidence to make any advances on understanding. In deep anesthesia, the brain uses ~50% O2; in NREM sleep, ~80%; when carrying out cognitive tasks, there is a 5-10% increase in the brain region at work (this is how functional brain imaging works).
Dear Dave,
“while controlling the inputs and reading the outputs”
You might reveal a correlation between inputs and outputs, but it says nothing about the motivation, will and intent that underlie these correlations. Understanding and correlation are most decidedly not the same thing. This is a mechanistic, functional view of our species that drains out of it all that is most vivid and most real about us.
“introspection”: not so – we gain much more information by a) watching how other minds do the same things we do”
You have just conceded my entire argument. When you compare what other people do with your own experience you are appealing via introspection to your own experience.
“b) from experimental science”
There is very little to show from this. It can describe us as functional beings (inputs vs. outputs) but simply cannot examine our internal experiences. And yet it is our internal experiences which are the most vivid and real part of us. Failure to explain this is terminal failure.
“if you have a thrombotic event in your R posterior cerebral artery distribution, what might you expect to experience”
A fault in the substrate will influence the supervening layer to a greater or lesser extent. But what does this prove? Unsurprisingly it shows they are connected but it most definitely does not show that the substrate is the supervening layer. This is the underlying assumption of all your reasoning. But it has never been shown to be true. It is clear that you wish it to be true but wishes are not good science.
“please,”
What might you mean by that? Please explain. Is this a verbal ejaculation or a verbal exclamation? Why should you resort to this?
“ants sleep, dogs and cats dream,”
Yes, of course they do. They have minds. They are not nearly as capable as our own, and in some cases, very rudimentary, but they still have minds.
“the phenomenology of sleep must be combined with other lines of evidence to make any advances on understanding.”
It is true that the mind runs on top of the brain, just as my programmes run on top of the CPU. Hence the substrate has a powerful influence, just as in my computer. So of course we should consider that influence. But to think the influence is the entire explanation is to go up a dead end street.
You seem to think that explaining the substrate explains the layer above it. This is the whole basis of your line of reasoning and it is plainly false. I used the analogy of the CPU and programme to show how false it is.
“In deep anesthesia, brain use ~50% O2; in NREM sleep ~80%, when carrying out cognitive tasks, 5-10% increase in the brain region at work”
That is very interesting. But how does oxygen usage explain my creative thinking processes? I know for a fact that the energy usage in different parts of my computer does nothing whatsoever to explain the operation of my programmes.
“(this is how functional brain imaging works)”
Which is why it is pretty near damn useless for understanding the mind. Oxygen usage of the brain might show which parts of the brain are activated, but it says precisely nothing about the content of the operations that it reveals. Try scanning my CPU. You will see some hot spots in two of the eight CPU cores and intense heat in the graphics chip’s GPU. Aha, you say, it is doing graphics work. But your scanner will never detect the shining love in a mother’s eyes as she cuddles her newborn baby. That knowledge is simply not contained in the layer, the substrate, that you are examining.
We are rehashing an old argument about the inability of science to cross the brain-mind divide. It hasn’t so far, and all that science can do is issue a promissory note. You seem to believe that a functional examination of the brain and people’s activities will do the job, by correlating inputs and outputs. But I don’t see a coherent argument to support your proposition. I have shown that the same approach wholly fails to explain my computer programmes. It fails because that understanding requires knowledge not contained in the object being examined: understanding the operation of my computer requires understanding my program source code, and that information is not contained in the computer itself.
We have reached a remarkable milepost in the history of science. Until now the knowledge necessary for understanding an object or process was always contained in the object or process. But now we are encountering a situation where the object/process under examination, does not contain the knowledge necessary for understanding it. This would seem to be a pretty fundamental limit to the progress of science.
After reading all the comments, I still have a question. Suppose you meet Churchland on the Clapham Omnibus and you ask him: “Hey! You’re Paul Churchland, aren’t you? Why are you taking this bus?”
What is he going to answer?
– “Elementary particles moving, old boy!”
– “I don’t know.”
– “Perhaps neuroscience and evolution theory will find out one day.”
– “I want to visit my aunt. I haven’t seen her since 2002. But don’t take this answer too seriously, please! I don’t believe it myself because I don’t have the necessary ontological commitment.”
Churchland must be great fun at parties.
Churchland, in another book, wrote that rather than saying “I love you,” one should say, “My neurons are firing at a frequency of xyz.”
And no, he’s not joking.
OK, everybody starts to say: “My neurons are firing.”
The thing is: after a while “my neurons are firing” would be just another way to say “I love you”. The expression would acquire meanings Churchland would rather eliminate, if he could.
There’s something unpleasantly totalitarian in his position, in my opinion.
I think it’s too batshit crazy to be scary. It’s the sort of thing that only a very ungrounded person could actually think.
“My neurons are firing.” – an allusion to Rorty, perhaps (the Antipodeans in Philosophy and the Mirror of Nature).
Leiter links to a recent article by Alex Rosenberg which reiterates some of this –
https://www.3ammagazine.com/3am/is-neuroscience-a-bigger-threat-than-artificial-intelligence/
Hi labnut, My own sympathies are more functionalist, as it happens. The general computing power, and the resulting relatively homogeneous architecture, of a modern computer is not a good model for the physical organisation of a brain. The very specific results of particular brain lesions are unlike the results of lesioning a particular DIMM precisely because they are high-level psychological deficits: some occipital cortical strokes, for example, cause a relatively pure inability to recognize familiar streets or buildings. As to your claims regarding the opacity of programs, even, say, a binary where the source is unavailable: it is a fact that all such entities can only be successfully interpreted as dynamical systems embedded in an environment. This includes the physical world and the users of the program. Analogously to the private language argument, the information content depends on interaction with other organised structures. Rosenberg alludes to his interest in teleosemantics in a comment in the Leiter discussion, which fits in with this.
The Rosenberg article is not very well written, in strongly confirmation-biased prose, and is riddled with typographical errors. Frankly, it’s just hilariously misguided. “Because rats can’t talk they had to identify statements unambiguously attributable to rats as their beliefs.” – LMAO!
I’ve come to think of him as a hack, honestly. I used to think he was very smart, but some of the positions he takes make it very hard to continue doing so.
Dan-K,
“I’ve come to think of him as a hack, honestly. I used to think he was very smart,”
Yes, and no. I actually enjoy the way he writes, with clarity and persuasiveness. I always eagerly read his opinion pieces, which you will think strange for a devout Catholic, considering the avowedly atheistic stance that colours all his writing! I think he is undoubtedly smart, but very wrong. But showing that he is wrong is difficult indeed.
We need to take him very seriously because he is persuasively expressing a line of thought that is becoming increasingly common, and that will shape our society. Science, by its very nature, is committed to ‘mechanistic’ explanations and so will endeavour to find them everywhere. Science has also been very successful, which makes this ethos credible.
Some of his opinions may seem way out, but I have always thought he was being ruthlessly consistent, taking his ideological priors to their inevitable conclusion. And I think his critics, by and large, have made a poor job of answering him.
David,
“My own sympathies are more functionalist, as it happens”
Yes, that comes through very clearly in your comments and I could imagine you nodding approvingly as you read Rosenberg’s article. Thanks for that reference, by the way. He clearly expresses a point of view that we must grapple with.
“It is a fact that all such entities can only be successfully interpreted as dynamical systems embedded in an environment.”
Your functionalist viewpoint! Of course they need to be interpreted in their environment as well, but you smuggle in two tendentious words, ‘fact’ and ‘only’. You have not demonstrated that it is a ‘fact’ that such entities can ‘only’ be successfully interpreted…etc.
You say that
“a modern computer is not a good model for the physical organisation of a brain”
but that was not the claim. What the computer demonstrates is that it is possible to have a system that cannot be understood, in itself, without another source of knowledge, the source code. You simply have no chance whatsoever of understanding my programmes by scanning the computer system, no matter how good your scanner is.
To continue my answer to David.
In reply to my argument that scanning my computer system would give you no understanding of my computer programme, you asserted that supplementing this with a functional examination of the computer system in its environment was sufficient to provide the understanding.
It is trivially true that it will give you a functional understanding, but we need much more than that. What if one of the functions of my computer system is incorrect? Why is it incorrect? How can you correct it? If you cannot answer these questions you have not understood my computer system.
I can assure you that the only way you can possibly answer these questions is by examining my source code. But you don’t have access to my source code. Now what do you do?
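labnut’s point about the opacity of programs can be sketched in miniature. The example below is a hypothetical illustration (the function names and numbers are invented, not anything from the thread): two implementations that agree on every input a functional examination happens to probe, where only the source, with its statement of intent, reveals which one is correct.

```python
# Hypothetical sketch: two implementations that behave identically on a
# finite black-box test, so functional examination cannot tell them apart.
# Only the source (the docstring stating the intent) reveals which one
# actually matches the programmer's intention.

def is_even_spec(n):
    """Intended behaviour: True iff n is even, for ANY integer n."""
    return n % 2 == 0

def is_even_table(n):
    # A lookup table that happens to cover the probed range, but silently
    # answers False for everything outside it.
    table = {i: (i % 2 == 0) for i in range(-100, 101)}
    return table.get(n, False)

# A functional examination over a finite probe set cannot distinguish them:
probes = range(-100, 101)
assert all(is_even_spec(n) == is_even_table(n) for n in probes)

# Yet they diverge on inputs the examination never tried:
assert is_even_spec(1000) != is_even_table(1000)
```

The sketch doesn’t settle the philosophical question, of course; it only shows that input/output correlation over any finite set of observations underdetermines the intended function.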
At its heart this debate is all about determinism. Science inexorably reveals a world that is strictly deterministic. And if that is correct then David and Rosenberg are right. The laws of nature simply allow no exemptions from strict determinism and every discovery that science makes further confirms this fact.
Yet, on the other hand, we behave as if our minds have been partially decoupled from the strict determinism of the laws of nature. How could that be? Rosenberg and others say that it simply can’t be. The laws of nature don’t allow for it, and we must logically, rigorously and ruthlessly accept this fact with all its implications. Therefore the higher world of experience is an illusion: sometimes comforting, sometimes useful and often harmful.
This is the dilemma we face. Have our minds been partially decoupled from the strict determinism of the laws of nature? Science does not allow for this. But it cannot account, in any way, for the manner in which our minds can seemingly defy determinism. Science responds by calling it an illusion. We stubbornly cling to this illusion, calling it reality.
Who is right? If we are right then there must be some kind of escape clause in the laws of nature that allows for exemptions from strict determinism. But there is no clue as to what that escape clause might be and that is the greatest weakness in our argument. All we can say in reply is that science is a work in progress, with a huge amount that is still unknown. Given sufficient progress we will uncover the escape clause. Now if science possessed sufficient epistemic humility it would be compelled to accept that this is a possibility. However epistemic humility is a quality in short supply in the world of science!
My position is that our experience of the world is sufficiently compelling to believe that an escape clause from strict determinism exists, even if science, a work in progress, has not yet uncovered it. But then I am a Catholic, so I would say that, wouldn’t I, just as Rosenberg reaches the opposite conclusion from his own priors. At the end of the day, ideological priors matter.
Sometimes philosophy seems like two distinct disciplines. There are those (like me) who see it as the study of conceptual interconnections within the domain of folk psychology. That domain includes standard everyday concepts such as belief, desire, emotion, feeling, reasons, actions, intentions, responsibility and all the ethical and sociopolitical concepts. Others see philosophy as the handmaiden of the empirical sciences. On their view, the task of philosophy is to pave the way for the advance of science. I think of the former view as humanistic and Socratic, the second as Cartesian (even when it is very anti-Cartesian in content).
The scientific approach casts itself as merely seeking explanations and eschewing normative commitments. The first approach — humanistic philosophy — operates within a normative outlook and sees no basic problem with the idea of rational normativity. I guess there will always be something like a standoff between these two deeply different approaches.
Incidentally, Churchland is surely right when he lists his “examples of central and important mental phenomena that remain largely or wholly mysterious within the framework of FP”. But, really, no-one from the humanistic school ever thought these examples were within the scope of intentional explanations. If the neurosciences can make them less mysterious then this is all to the good.
Alan
It’s just Sellars again. You cannot reduce the Manifest Image to the Scientific, and you cannot eliminate it. The world, after all, includes people and the institutions and forms of life they have made.
Dan: The Greenwood essay is not readily accessible. If you could say more about it, I for one would be interested.
Dan-K,
“You cannot reduce the Manifest Image to the Scientific, and you cannot eliminate it.”
I really like Sellars’ contribution in coining those pithy terms, the ‘manifest’ and ‘scientific’ images. It helps clarify the discussion enormously. But it still comes down to the question of strict determinism. If strict determinism is true, then the manifest and scientific images must necessarily be the same thing, with the manifest image derivable from the scientific image. On this view, the manifest and scientific images are simply different, though useful, perspectives on the same thing.
On the other hand, if there is indeed an escape clause in the laws of nature that partially liberates us from the confines of strict determinism, then the manifest image cannot be reduced to the scientific image and therefore cannot be eliminated.
Hi labnut.
“What the computer demonstrates is that it is possible to have a system that cannot be understood, in itself, without another source of knowledge, the source code.”
If you can point to a system that can be understood in itself, this might strengthen the point. You would gain just as little insight from an electron microscope image of a cell nucleus as you would from one of a computer. You have to see it running, and you have to be able to manipulate it. As to understanding code, I have enough trouble debugging my own 🙂
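David’s “see it running and manipulate it” point can also be given a toy form. This is a hypothetical sketch (the opaque function and its coefficients are invented for illustration): a functional examination treats an unknown system as a black box and characterises it purely by probing inputs and observing outputs, never by reading its source.

```python
# Hypothetical sketch: functional examination as input/output probing.
# We are handed an opaque callable and characterise its behaviour by
# manipulating it, without ever inspecting its definition.

def opaque(x):          # pretend this definition is unreadable to us
    return 3 * x + 1

def probe_linear(f, samples):
    """Fit a line a*x + b to two observations, then check the fit
    against every sampled input."""
    x0, x1 = samples[0], samples[1]
    a = (f(x1) - f(x0)) / (x1 - x0)   # estimated slope
    b = f(x0) - a * x0                # estimated intercept
    consistent = all(f(x) == a * x + b for x in samples)
    return a, b, consistent

a, b, ok = probe_linear(opaque, list(range(10)))
assert (a, b, ok) == (3.0, 1.0, True)
```

Whether such a behavioural description amounts to “understanding” the system, or merely summarises it, is of course exactly what the two sides of this exchange dispute.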
Alan: Send an email to my Missouri State address, and I’ll send you a PDF of the Greenwood essay.