by Mark English
The topic of eliminative materialism (or ‘eliminativism’ as its current manifestation is usually called) has been the focus of some recent debate. What prompted this piece, however, was a brief discussion in a comment thread about a very specific kind of eliminativism which applies to language and which is known as semantic (or meaning) eliminativism.
My original intention was to write a short essay describing what semantic eliminativism is and giving a personal view on how plausible or useful (or implausible) it might be as a way of conceptualizing how the semantic side of natural language actually works. But I realized not only that the term has been used in different ways by different theorists, but also that the discussion surrounding it is embedded in some very intractable philosophical problems.
Consequently I decided to slant my discussion in a more linguistic direction, though without attempting to explicate basic concepts such as polysemy, homonymy, semantic fields, prototype theory, etc. (Prototype theory, which was developed by Eleanor Rosch amongst others, has influenced the way I see concepts and categories: they are less arbitrary and language-driven than I had previously believed them to be.)
Nor will I be dealing with language acquisition or evolutionary questions. Regarding the former, there is a solid body of data which any semantic theory needs at least to be compatible with; with respect to the latter, perhaps the most directly relevant work is ethological. Attempts to teach simplified languages to chimpanzees are also interesting in this context, as is the growing body of genetic and archaeological data which is slowly providing a foundation for understanding how and when human language developed.
Frame semantics, an approach pioneered by Charles J. Fillmore, has deep roots in linguistic theory. Fillmore worked within the framework of, and made major contributions to, Chomsky’s early syntactic theory before developing his own system (case grammar) which he saw as being complementary to Chomsky’s. Case grammar eventually evolved into frame semantics which has played an important role in some major research projects in the general areas of computational linguistics and artificial intelligence.
People come to these sorts of topics for very different kinds of reason. Some are driven simply by intellectual curiosity concerning how our brains process language; for others such knowledge is a means to a particular end – the development of natural language processing systems, for example. Much of the philosophical literature is concerned with defending or attacking various metaphysical perspectives on human culture and agency. My own interest in language is in part philosophical and in part scientific. With respect to the scientific side, my formal training has been predominantly in syntax and phonology – not semantics – and I have an interest in evolutionary perspectives on language.
Words can be seen as tokens (actual or concrete instances of a word being used) and as (abstract) types. The latter are commonly seen to be represented in our brains as lexical entries (the terminology varies) incorporating phonological, morphosyntactic and semantic elements. The semantic eliminativist suggests that these stored representations (if representations they are) lack an intrinsic semantic component.
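To make the point at issue concrete, here is a minimal sketch (in Python, with invented field names that are not drawn from any particular psycholinguistic model) of a lexical entry with and without an intrinsic semantic component.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a stored lexical entry; field names are
# illustrative, not taken from any particular psycholinguistic model.
@dataclass
class LexicalEntry:
    phonology: str                    # e.g. a phonemic transcription of the word form
    morphosyntax: dict                # e.g. {"category": "noun", "countable": True}
    semantics: Optional[dict] = None  # the component the eliminativist says is absent

# A conventional entry bundles all three kinds of information:
table_conventional = LexicalEntry(
    phonology="/ˈteɪbəl/",
    morphosyntax={"category": "noun", "countable": True},
    semantics={"sense": "piece of furniture with a flat top"},
)

# On the eliminativist picture, the stored item carries no intrinsic
# semantic component; whatever does the work of "meaning" is supplied
# elsewhere (by context, frames, or past uses), so the slot stays empty:
table_eliminativist = LexicalEntry(
    phonology="/ˈteɪbəl/",
    morphosyntax={"category": "noun", "countable": True},
)
```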
The claim is not that we can eliminate meaning from an analysis of language and communication (that would be absurd); rather, it concerns the way words are or are not stored in the brain, and ways of conceptualizing meaning which avoid postulating unnecessary processes and unnecessary metaphysical entities.
On the face of it, however, the claims of the semantic eliminativist seem very implausible, if only because of the intimate connections between grammatical (e.g. syntactic) processes and meaning.
The development of Fillmore’s thinking may point to a solution here. His grammatical case-based approach gradually led him to see situations rather than words as the drivers of meaning. In his later work, situational frames (of which there are many, linked into networks within individual brains) are cognitive schemata which underlie the meanings of the words which evoke them.
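Fillmore’s best-known example is the commercial transaction frame, evoked by words like “buy”, “sell”, “pay” and “cost”, each of which presents the same background situation from a different angle. The following toy sketch, with invented class and attribute names, is only meant to make the idea of a word-evoked frame concrete.

```python
from dataclasses import dataclass

# Toy illustration of a Fillmore-style situational frame. The commercial
# transaction is Fillmore's classic example; the class and attribute
# names here are my own invention.
@dataclass
class Frame:
    name: str
    roles: list       # the participant slots the situation type involves
    evoked_by: list   # words whose use activates the frame

commercial_transaction = Frame(
    name="commercial_transaction",
    roles=["buyer", "seller", "goods", "money"],
    evoked_by=["buy", "sell", "pay", "cost", "spend", "charge"],
)

# Each evoking word foregrounds different roles against the same background
# situation: "buy" profiles the buyer and the goods, "sell" the seller and
# the goods, "pay" the buyer and the money, "cost" the goods and the money.
perspective = {
    "buy": ("buyer", "goods"),
    "sell": ("seller", "goods"),
    "pay": ("buyer", "money"),
    "cost": ("goods", "money"),
}
```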
My preferred approach is to see linguistic hypotheses and theories as aids to research and to stay as close as possible to the empirical evidence. To use Karl Popper’s terminology, they are conjectures which are tested against (and often “refuted” by) experimental or other empirical results. And if a theory of brain function leads to fruitful applications, it may not be a vindication of the theory but it is a mark in its favor.
Does metaphysics come into this process of theory creation and testing? Yes. Natural languages arguably carry their own implicit ontologies, and the same applies to theories. As I see it, individuals draw on various cultural sources – including science – both to elaborate and to constrain their metaphysical commitments.
Semantic eliminativists reject the notion that words (as stored lexical entries) have meanings. This is, however, not a well-supported position, and semantic eliminativism is only briefly alluded to by Luca Gasparri and Diego Marconi in their excellent review of lexical semantics (published as the entry for Word Meaning in the Stanford Encyclopedia of Philosophy).
Semantic eliminativism can be seen as an extreme form of contextualism and in their account of contextualism Gasparri and Marconi discuss the views of a philosopher, François Recanati, who is sympathetic to meaning eliminativism. Seemingly echoing Fillmore, Recanati sees words as being associated with “abstract schemata corresponding to types of situations.”
“Words could be said to have, rather than “meaning”, a semantic potential, defined as the collection of past uses of a [particular] word on the basis of which similarities can be established between source situations (i.e., the circumstances in which a speaker has used [the word]) and target situations (i.e., candidate occasions of application of [the word in question]).”
Such a view (unlike Fillmore’s) obviously poses problems for neural processing. “It is natural to object,” write Gasparri and Marconi, “that even admitting that long-term memory could encompass such an immense amount of information (think of the number of times ‘table’ or ‘woman’ are used by an average speaker in the course of her life), surely working memory could not review such information to make sense of new uses.”
They also question whether Recanati’s approach is an improvement on traditional linguistic accounts.
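To see why the scale worry bites, consider a deliberately naive sketch of what a semantic potential might look like computationally: a store of past situations of use that grows with every use, consulted by similarity matching whenever the word is a candidate for application to a new (target) situation. The situation format and similarity measure here are invented purely for illustration.

```python
# A deliberately naive sketch of a Recanati-style "semantic potential":
# meaning as a growing store of past situations of use, with new (target)
# situations judged by similarity to stored (source) situations.

def situation_similarity(source: set, target: set) -> float:
    """Jaccard overlap between two situations described as feature sets."""
    if not source and not target:
        return 0.0
    return len(source & target) / len(source | target)

class SemanticPotential:
    def __init__(self, word: str):
        self.word = word
        self.past_uses = []                # every recorded source situation

    def record_use(self, situation: set):
        self.past_uses.append(situation)   # grows with every single use

    def applies_to(self, target: set, threshold: float = 0.3) -> bool:
        # The worry Gasparri and Marconi raise: this scan ranges over *all*
        # past uses, which for a common word is an enormous collection.
        return any(situation_similarity(s, target) >= threshold
                   for s in self.past_uses)

table = SemanticPotential("table")
table.record_use({"flat_top", "four_legs", "dining_room", "wooden"})
table.record_use({"flat_top", "office", "metal", "work_surface"})
print(table.applies_to({"flat_top", "kitchen", "wooden"}))   # True
```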
Of course, lexical semantics does not encompass all, or even most of, semantics. In the real world of linguistic communication we are not dealing with words in isolation or as such, but with orders, statements, questions and so on which occur in specific communicational contexts. Sometimes a single word constitutes an order (as in “Stop!”); or a warning (as in “Fire!”); or an answer to a question. In other contexts, a single word can stand for an implied statement (“Idiot!” for “He is/you are an idiot”).
Obviously, most speech (and linguistic communication generally) involves strings of words, sometimes very long strings indeed. But the sentence (or something like it) is generally seen as the basic message-unit, that is, as the shortest string which can be said to encapsulate a message.
The meaning of these strings is generally seen to depend on the meanings of the various lexical components which are strung together to make the sentence. The principle behind this process is known as “compositionality”: basically that the meaning of the whole derives in an ordered way from the meaning of the parts.
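A toy formal language shows how cleanly the principle works once context is set aside: the “meaning” of an arithmetic expression (here, simply its value) is fixed by the meanings of its parts and the way they are combined.

```python
# A minimal illustration of compositionality, using a tiny formal language
# of arithmetic expressions: the "meaning" of the whole is computed entirely
# from the meanings of its parts and how they are combined, with no appeal
# to context or occasion of use.

def meaning(expr):
    if isinstance(expr, (int, float)):   # atomic part: its meaning is its value
        return expr
    op, left, right = expr               # complex part: (operator, part, part)
    if op == "+":
        return meaning(left) + meaning(right)
    if op == "*":
        return meaning(left) * meaning(right)
    raise ValueError(f"unknown operator: {op}")

# (2 + 3) * 4
print(meaning(("*", ("+", 2, 3), 4)))    # 25
```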
This sort of approach works well with formal languages but, mainly because of contextual issues, it does not work so well with natural languages and ordinary human communication. Semantic eliminativism, as a form of externalism, sees compositionality as being – at best – of only marginal significance.
Though I have made it clear that I have serious reservations about semantic eliminativism, I share some of the convictions which drive it, including the view that meanings and messages are not “things” which are sent from one person to another.
There is a tendency (I have it myself) to see speaking as “conveying” something from one brain or mind to another. According to this view, a string of words encodes something (a message?) which is sent via some channel (sound waves, squiggles on a screen or paper…) from the speaker or writer to others whose senses and cognitive processes take in and decode this thing-that-was-sent, this meaning-thing, this message. You could see the initial coding as like the wrapping up of a parcel. The parcel is then sent and received and unwrapped (decoded), so that the very thing that was sent is received.
But, of course, this is not how linguistic communication actually works. For one thing, this model fails to account for the misunderstandings which are endemic but all too often unnoticed in ordinary social communication. Increasingly I have come to see linguistic communication as something of a “surface” phenomenon, often papering over (as it were) huge differences in the way individuals see the world; or sometimes (as in many comment threads) creating divisions and dichotomies which are largely linguistic and which fail to reflect in an accurate or perspicuous way the actual differences and divisions which lie behind the disagreements in question.
You could see language – certainly in its mundane social uses – as a set of games we play which facilitate and enhance social interaction (and manipulation). The same could be said of more formal contexts: rituals and so on. Another context where this game metaphor seems to apply is the academic one. Academic seminars (in my experience) certainly have this quality. There are many unwritten rules and conventions which all the regular participants know and follow.
The game metaphor is a metaphor and not a theory, but it seems very appropriate. Due to the popularity of Wittgenstein’s work, it has been much discussed. And, crucially, it avoids postulating the existence of mysterious, metaphysically problematic entities which are actually sent or conveyed from one person to another.
Without question, the speaker of a language has developed an intuitive sense of the sound system (phonology) of that language, involving a practical knowledge of a set of phonemes and rules for combining them, coupled with a kind of mental dictionary (or lexicon) which lists lexical items (words) as specific sounds/strings of sounds with certain morphological and syntactic features and functions. If we didn’t have an intuitive sense of these things, not only would we be unable to speak, we would not even be able to distinguish the sounds of spoken language from random vocalizing; in other words, to recognize certain specific vocal sounds and sound sequences as words and sentences (orders, statements, questions, etc.) rather than as mere noises.
The main question at issue with respect to semantic eliminativism is not whether language has a semantic dimension, but rather to what extent (if at all) a lexical item as stored in the brain incorporates semantic elements. This question may not be answerable in a definitive way, but I think the view of most linguists is that semantic elements and semantically-relevant links to various parts of the brain constitute an important part of a lexical entry.
The morphosyntactic and phonological aspects of a language can be more easily modelled than the semantic side of things and, admittedly, the difficulty of systematizing semantics leaves a lot of space for speculation and radical theories like meaning eliminativism and other forms of externalism. There is no denying that context is a crucial element in most forms of linguistic communication, and any satisfactory theory must come to terms with this fact, as well as with the dynamic and pragmatic aspects of meaning.
In the final analysis, however, linguistic theories are scientific in the sense that they stand or fall according to their usefulness in advancing understanding and in facilitating the development of various technologies. They make implicit claims about how the world is and are subject to a slow winnowing process, on the one hand, as our knowledge of the neurophysiology of language develops and, on the other, as progress is made in computational linguistics, and artificial intelligence generally.
In fact, the latter is particularly important. Whether or not we manage to create truly intelligent – and articulate – machines, we will, in the course of pursuing this program, have learned some genuinely new and profound things about the nature of our own minds.