
INTENTIONALITY



What is intentionality? What philosophical problems does it present?


An essay written as an undergraduate in the Department of Philosophy, King's College London

Alexander Rikowski

London, June 2010


Searle says that: “Intentionality is that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world” (1). I take the intentionality of a mental state to be its property of being about something. I shall largely be focusing on two philosophical problems presented by intentionality. The first problem is: are objects of intentional states all objects of a certain kind? And the second is: how can a mental state be about something? I shall argue, alongside Crane, that since an intentional object is whatever a mental state is about, objects of intentional states are not objects of a certain kind. In response to the second question, I will argue that we cannot say exactly how a mental state can be about something. I will discuss Fodor’s causal theory, which says that a mental state can have intentionality by being a language-like symbol caused by environmental surroundings. Yet, even if our mental states are symbols, these symbols only have meaning because the understanding makes sense of them. Searle shows us that the understanding is responsible for there being intentionality, but that the understanding is itself unanalyzable.

Searle (1983, p. 11) explains that intentional states are representational, as they are about something. He also tells us throughout his works that these states include fear, love, admiration, hatred, belief, imagination, regret, guilt, pride, and so on. For example, I can be in a state of admiration for the way Wayne Rooney plays football. But I can also be in a state of hoping he will be fit for the next World Cup. These two intentional states are both directed at a particular thing that exists. But a problem of intentionality presents itself as soon as one realises that a mental state can be about something that does not exist. As Jacob says: “But one may also love Anna Karenina (not a concrete particular in space and time, but a fictional character)” (2). Should we say, then, that there are non-existent intentional objects?

Initially, it seems implausible to say that there are objects which do not exist. We normally think of objects as things that exist, and to say that some objects are non-existent appears contradictory (Crane, 2001, p. 17). Crane explains that we could agree with Searle and maintain that each and every intentional object exists, and that some intentional states are simply not directed at intentional objects. However, a mental state would not be intentional without being about something (Crane, 2001, p. 22). One way around the dilemma is to hold that non-existent intentional objects are not real objects at all. As Crane says:

“…there are intentional states which can be truly described as being about ‘Pegasus’, ‘about unicorns’, etc.—and it is not the case that there is anything corresponding to these quoted words” (3).

Well, why bother calling ‘Pegasus’ and ‘unicorns’ intentional objects then? One good reason, as Crane rightly points out, is that thoughts about unicorns are of a different kind from thoughts about Pegasus. To draw the distinction between Pegasus and unicorns, we need to say that they are two different intentional objects. Crane also notes that we can think about events, and we do not normally call an event an object. Sometimes we think of things that exist, and sometimes we think of non-existent things. Mental states can be about many different sorts of things. We must then say that intentional objects are not objects of a certain kind (Crane, 2001, p. 26). An intentional object is just whatever an intentional state is about.

Another benefit of maintaining that there are intentional objects is that it solves the problem of how a person can have quite different beliefs about one existing object without realising that both beliefs are about the same existing thing. Davies (1995, p. 276), for example, explains that a person could be thinking about the morning star and the evening star without realising that, from an external position, both thoughts are about the same existing star. According to Crane, this person would simply be thinking about two different intentional objects. However, if you and I are both thinking about a fictional character that we both call “Superman”, for example, then how do we know whether we are thinking about the same intentional object? To this Crane says: “…there is no reason to think that there will always be a fact of the matter about whether two thinkers are focusing on the same intentional object when that object does not exist” (4).

Yet, all of what I have said gives rise to further philosophical problems. It may be said that an account is needed to explain how exactly a mental state can be about something, and how it is possible for two mental states to be about different intentional objects. Fodor provides us with an attractive theory which at first seems to help us solve these problems. Cummins says that Fodor’s hypothesis was that: “…mental representations are language-like symbols” (5). For Fodor, a particular physical state in my brain meets a certain functional criterion for its being a certain mental symbol (O’Connor & Robb, 2003, p. 261). Fodor first of all looks at what he calls “the crude causal theory”. This theory says that a symbol token represents whatever caused it, and that a type of symbol expresses certain properties that “…reliably cause their tokenings” (6), as Cummins puts it. For example, a type of mental symbol like ‘cat’ expresses a cat-like property that reliably causes tokenings of ‘cat’. On this theory, I can represent a cat because observations of cats reliably cause the tokening of ‘cat’ to arise in me. In my discussion of Fodor, I will write ‘cat’ rather than cat to refer to such a mental symbol.

Fodor explains that one problem with the crude causal theory is that it cannot give an account of misrepresentation (Cummins, 1989, p. 57). Cummins gives the example of a Graycat (a type of cat) which causes a token of ‘dog’ to arise in a person. A misrepresentation would then occur, as the tokening of ‘dog’ would occur in the person because they saw a Graycat. This person is already aware that ‘dog’ expresses some dog-like property which reliably causes tokenings of ‘dog’. However, the crude causal theory fails to give an account of how such misrepresentations are possible, since it says that a token represents whatever caused it. You could say that ‘dog’ expresses some property which is either dog-like or cat-like, which means that either a cat or a dog would reliably cause the tokening of ‘dog’ to occur in the person. Yet, if we say this, then a token of ‘dog’ arising in the person because they saw a Graycat would not be a misrepresentation (Cummins, 1989, p. 57).

Fodor’s solution is first of all to accept that observing a dog causes the token of ‘dog’ to arise in a person’s “belief box”. However, in a close possible world in which dogs never cause the tokening of ‘dog’ to arise in me, it would also be impossible for a Graycat to cause the tokening of ‘dog’ to occur in me; the cat-to-‘dog’ connection depends on the dog-to-‘dog’ connection, and not the other way around (Fodor, 1987, p. 109). Since seeing a dog does cause the tokening of ‘dog’ to occur in me, an intentional state can still falsely represent a Graycat even though its function is to be about what caused it (perhaps because the cat is too far away). Fodor maintains that we should think of intentional states as symbols that have functions, and that these symbols can sometimes be dysfunctional. Unlike the crude causal theory, Fodor’s theory does not say that an intentional state represents only what caused it. I can mistake a Graycat for a dog if the Graycat looks as though it has the features required for something to be a dog, Fodor would say. He seems, then, to have solved the misrepresentation problem.

Fodor hypothesised that mental states are language-like symbols, and that a mental state can have intentionality by having the function of being about what caused it. It may now be said that two mental states can be about different objects because they were caused by different objects. Yet we do not just represent particular observable objects like cats and dogs; we have theories too. Scientists, for example, have theories about protons, even though protons are not observable. Fodor acknowledges that: “If the causal theory is going to work at all, it’s got to work for the so-called ‘theoretical vocabulary’ too” (7). Fodor’s explanation is that a scientist could associate a proton with the look of a photographic plate. A photographic plate looking a certain way could then cause the tokening of ‘proton’ to occur in that scientist.

But how does the scientist manage to associate the proton with the look of a photographic plate? Things get trickier still when we ask how we can think about abstract mathematical functions (Adams, 2010, p. 10). Adams says that causal theories usually claim only to give sufficient conditions for mental representation. They say things like: a mental state can be about something by being caused by certain environmental surroundings, but this leaves open that some mental states get their meaning via different routes. Alternatively, Adams explains, causal theorists could say that a symbol which represents an abstract object is “…defined in terms of the meanings of other terms…” (8). It could be said that complex mental states are constructed by linking up certain types of mental symbols, which are then abbreviated into theoretical mental symbols. Environmental surroundings may then cause tokenings of such theories.

What I have just said may help us to understand how I can think of a non-existent intentional object, like a unicorn. Adams (2010, p. 10) explains how some intentional states are complex symbols that can be broken down into simple constituents. He says that “…‘X’ is a kind of abbreviation for, or logical construction of, or defined in terms of ‘Y1,’ ‘Y2,’ and ‘Y3,’ and that a causal theory applies to ‘Y1,’ ‘Y2,’ and ‘Y3’” (9). Adams points out that it may be said that a mental symbol representing a unicorn is an abbreviation of a complex representation whose constituents are a mental representation of a horn, a mental representation of a horse, and a mental representation of the relation between the horn and the horse. So I can think of a unicorn because I link up various simple intentional states that were caused by actual existing objects, and I can then have a mental state of a unicorn which abbreviates all this. The more mental links I have, the richer and more varied my mental language becomes. However, Searle brings out the problem that we cannot give a full account of the thing responsible for making all these mental links.

Searle (2003, p. 333) asks us to suppose that he has been given three batches of Chinese writing in a room in which he is locked. We must also suppose that he understands nothing of the Chinese language. Searle receives rules written in the natural language he understands, English. The rules tell him how the symbols within each batch of Chinese writing are to be arranged. In following the rules, Searle constructs actual Chinese sentences understood by Chinese speakers, and he continues to follow similar instructions, producing sentences which form one side of a Chinese dialogue. Yet Searle does not himself understand what any of these Chinese writings say. He explains that: “As far as the Chinese is concerned I simply behave like a computer; I perform computational operations on formally specified elements” (10). Searle is running a program just as a computer does, and, just like a computer, he does not actually understand what he is doing. This example shows us that merely having functions and running a program is insufficient for a thing’s having genuine understanding.

Searle also shows us that symbols only have meaning because the understanding makes sense of them. Chinese writings are meaningful only because they are understood by Chinese speakers. It is the understanding that enables mental states to be symbols that have meaning in the first place. However, we would be led into an infinite regress if we said that the understanding I have is itself composed of functions which interpret other functions, for we would then need to say what it is that is both interpreting and giving meaning to the functions of that understanding. If we say that the understanding is an inter-connected system of functions, then what is it that is interpreting that system? The understanding is responsible for mental states having meaning in the first place, but we cannot say exactly what thing the understanding is or how it gives mental states their meaning. O’Connor and Robb say that Searle himself maintained that: “…while intentionality is a physical phenomenon caused by processes in the brain, it is not reducible to any such processes, but instead is a basic, unanalyzable feature of the world” (11).

I said that intentionality is the representational property of mental states. These mental states are directed at things, and in the introduction I presented two philosophical problems that intentionality raises, expressed as two questions. In response to the first question, I say that since some intentional objects exist and some do not, intentional objects are not objects of a certain kind; whatever I am thinking about is an intentional object. As regards how an intentional state can be about something, I say that it is the understanding that enables mental states to have the property of intentionality, but that we cannot say exactly what thing the understanding is. Although a person’s understanding enables them to associate a mental state of theirs with an intentional object, we cannot give a full explanation of how it does this.

Bibliography

Adams, F. “Causal Theories of Mental Content”, published in the Stanford Encyclopedia of Philosophy, (2010).

Crane, T. Elements of Mind: An Introduction to the Philosophy of Mind, Oxford: Oxford University Press, (2001).

Cummins, R. Meaning and Mental Representation, Cambridge, MA: MIT Press, (1989).

Davies, M. “The Philosophy of Mind”, published in Philosophy 1: A Guide through the Subject, edited by A. Grayling, Oxford: Oxford University Press, (1995).

Fodor, J. Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, MA: MIT Press, (1987).

Jacob, P. “Intentionality”, published in the Stanford Encyclopedia of Philosophy, website: http://plato.stanford.edu/entries/intentionality/, (2003).

O’Connor, T. and Robb, D. “Mind and Representation: Introduction”, published in Philosophy of Mind: Contemporary Readings, edited by O’Connor, T. and Robb, D., London: Routledge, (2003).

Searle, J. Intentionality: An Essay in the Philosophy of Mind, Cambridge: Cambridge University Press, (1983).

Searle, J. “Minds, Brains, and Programs”, (1980), published in Philosophy of Mind: Contemporary Readings, edited by O’Connor, T. and Robb, D., London: Routledge, (2003).



Notes

(1) Searle, J. (1983), Intentionality: An Essay in the Philosophy of Mind, p. 1.

(2) Jacob, P. (2003), “Intentionality”, p. 4.

(3) Crane, T. (2001), Elements of Mind, p. 25.

(4) Crane, T. ibid, p. 30.

(5) Cummins, R. (1989), Meaning and Mental Representation, p. 56.

(6) Cummins, R. ibid, p. 56.

(7) Fodor, J. (1987), Psychosemantics: The Problem of Meaning in the Philosophy of Mind, p. 112.

(8) Adams, F. (2010), “Causal Theories of Mental Content”, p. 10.

(9) Adams, F. ibid, p. 11.

(10) Searle, J. (1980), “Minds, Brains, and Programs”, published in Philosophy of Mind: Contemporary Readings, (2003), p. 334.

(11) O’Connor, T. and Robb, D. “Mind and Representation: Introduction”, published in Philosophy of Mind: Contemporary Readings, (2003), p. 263.


Originally a pre-submission essay for the 2009/10 BA ‘Philosophy of Mind’ paper, Philosophy Degree, King’s College London, 2010


Copyright © Alexander Rikowski, November 2010

