Subjectivity is, primarily, an aspect of consciousness. In a sense, conscious experience may be described as the way the world appears from a particular mental subject’s point of view. The idea that there is a distinction between appearance and reality seems to presuppose the distinction between subjective and objective points of view.
The two controversies
There are two principal controversies surrounding subjectivity: first, whether subjectivity, as it is manifested in consciousness, is an essential component of mentality; and second, whether subjectivity presents an obstacle to naturalistic theories of the mind.
THE FIRST CONTROVERSY. Most philosophers agree that intentionality—the ability to represent—is characteristic of mentality. However, there is strong disagreement over whether subjectivity is also necessary.
Those philosophers who think it is (e.g., Searle 1992) argue that true, or what they call "original," intentionality can only be attributed to a conscious subject. In this view, representational properties can be ascribed to unconscious states and to nonconscious machines, such as computers and robots, only in a derivative sense.
With respect to computers, the claim is that their internal states only have meaning to the extent that people (conscious subjects) interpret them to mean something. On their own, these states are merely meaningless formal symbols.
When it comes to unconscious states—such as the unconscious beliefs and desires hypothesized in Freudian psychology—the claim is that only by virtue of their effects on one’s conscious beliefs and intentions do they have content. The source of all genuine meaning resides in conscious, subjective mental activity.
The basic argument for this position is that for something to count as a representation—as meaning something—there must be a subject for whom its meaning is significant; a subject who is aware of and appreciates what it means. Otherwise, the argument goes, without a subject who understands, interprets, and makes use of the meaning, there is no basis for saying it means anything at all.
In particular, given that most conditions stateable in objective terms for what a brain state or a computer state represents leave room for alternative interpretations, it is only by reference to the awareness of a conscious subject that a representation acquires determinate content.
Other philosophers reject this assimilation of intentionality and subjectivity, arguing that a theory of intentionality—one that applies equally to conscious and unconscious states—can be developed independently of a theory of subjectivity (e.g., Dretske 1981 and Fodor 1987).
Some theorists see no need at all to appeal to the interpretive activity of a conscious subject to fix the content of a representational state. In this view, meaning ultimately comes down to information, a notion that may be treated in objective terms.
Others agree that some appeal to the purposes of the agent is necessary in order to ground an assignment of meaning to brain states. However, they claim that it is not necessary to invoke the subjective character of a subject's conscious states for this purpose.
Rather, it suffices to show that by assigning the relevant interpretation to the subject’s internal states one may provide appropriate psychological explanations of the subject’s behavior and explain that subject’s success in his or her interactions with the environment.
What a subject’s beliefs and desires are about is determined, in this view, by the nature of the subject’s interactions with the environment and the role these states play in her or his internal psychological economy. These are facts clearly stateable from an objective point of view; no special appeal to the subjective experience of the agent is required.
Just how serious the first controversy is depends considerably on one’s stand with respect to the second one. Suppose that one adopts the position that only creatures possessing subjective, conscious states are capable of any mentality at all.
Still, if one also thinks that possession of subjective consciousness is a perfectly natural phenomenon—itself explicable in physical, or objective terms— the sting is largely removed from this position.
There is now no reason to think properly programmed computers or robots couldn’t possess the full range of mental states, so long as they satisfied the naturalistic conditions for conscious subjectivity.
THE SECOND CONTROVERSY. With respect to this second question—whether or not subjectivity presents a problem for a naturalistic framework—one may reason as follows.
A complete inventory of the world should, if it is truly complete, capture everything there is and everything going on. It seems natural to suppose that such a complete description is in principle possible, and is in fact the ideal aim of natural science.
But some argue that facts that are essentially accessible only from a particular subject’s point of view cannot be included in this allegedly complete objective description (Nagel 1974, 1986). If they cannot, this would seem to undermine the idea that the natural world constitutes a coherent, lawful, and objective whole.
For example, take the fact of one's own existence. You could read through this hypothetical exhaustive description of the world, and it would include a description of a body at a particular spatio-temporal location, with particular physiological (or even nonphysical) processes going on inside it.
However, what would be missing is that this is your body—this is you. No collection of facts stateable in objective terms seems to add up to this body being yours.
Or take the problem of personal identity. From a point of view outside the subject, what makes one the same person across time, whether it is a matter of bodily or psychological continuity, seems to admit of borderline cases, matters of degree, or other sorts of indeterminacy.
Thought experiments involving split brains, machines that take “memories” from one brain and implant them in another, and the like, reveal just how difficult it is to pin down personal identity as a determinate matter of objective fact.
Yet, from the point of view of the subject, what it is to be oneself seems to be a clear-cut, all-or-nothing matter. Either one continues to exist or one doesn't. It is hard to reconcile the objective and subjective perspectives on this question.
One particularly difficult manifestation of the problem of subjectivity is how to account for the fact that there is "something it is like" to be certain objects (say, a human being), or to occupy certain states (say, visual experiences), but not others (say, a rock and its states).
This is also known as the problem of “qualia.” From an objective point of view, there would seem to be nothing special about the neurological activity responsible for conscious experience that would explain what it’s like for the subject. Two influential thought experiments starkly illustrate the problem.
Nagel (1974) presents the problem this way. Bats navigate in the dark by emitting high-pitched sounds and detecting their echoes—a sensory system known as "echolocation." From an objective, third-person point of view, there is nothing especially difficult about understanding how this system works.
While there are of course difficult technical questions, the idea that the bat extracts information concerning the location and movement of its target from the returning sound waves bouncing off of it is fairly clear. The problem emerges when one considers what echolocation is like for the bat, from its point of view.
People know that there is something particular it is like to see a sunset, smell a rose, or feel a pain. There is every reason to believe that there is also something particular it is like to sense by echolocation.
Yet, when the question is posed this way, it doesn't seem as if any of the details learned about the information-processing capabilities of the bat are helpful in answering this simple question: What is it like for the bat? It seems as if only by adopting the bat's point of view, by experiencing echolocation oneself, could one obtain a clue concerning what it is like.
Jackson (1982) asks people to consider the following situation. Imagine Mary, a neuroscientist who learns everything there is to know about the physiology and information processing involved in color vision.
However, she learns this while restricted to a completely black and white environment, so that she herself never experiences color sensations. In a sense, she would be in the same position vis-à-vis everyone else that everyone else is vis-à-vis bats.
At some point Mary is released from her purely black and white environment and allowed to see color. Suppose she now sees a red rose for the first time. It seems undeniable that her reaction would be one of wonder and novelty. “So that’s what red looks like!” she might say.
But now, if the subjective experience were adequately captured by the objective, third-person descriptions presented in her science texts, why should she experience novelty and wonder? That she would have this experience seems to demonstrate that what is apprehended from the first-person, subjective point of view is distinct from what is describable in objective, third-person terms.
Many philosophers argue that subjectivity does not present a special puzzle. For some (e.g., Searle 1992), it is just a fact that the world contains both objective facts and irreducibly subjective facts; their relation requires no explanation and produces no mystery.
For most, though, the demystification of the subjective is accomplished by some sort of reductionist strategy (e.g., Lycan 1987 and 1990, and Rosenthal 1986), one that shows how to incorporate so-called subjective facts into an all-embracing, naturalistic and objective scientific framework.
One influential model of subjectivity is the internal monitoring, or higher-order thought model. In this view, which fits well with a functionalist approach to the mind-body problem in general, subjectivity is principally a matter of some mental states representing other mental states.
That is, to be aware of, or to apprehend from the first-person point of view, that one is having a certain experience, is merely to occupy a mental state that represents one as having that experience.
If this is what subjectivity amounts to, then any model of the mind that builds in the requisite architectural features will explain subjectivity. Models of this sort of internal scanning already exist in computers.
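The architectural point of the internal monitoring model can be given a schematic illustration. The sketch below is a toy program, not a model of any actual mind or of any particular theorist's proposal; the names (`Monitor`, `sense`, `scan`) are invented for illustration. It shows only the structural claim at issue: one set of a system's states can represent another set of that same system's states.

```python
# Toy illustration of the higher-order ("internal monitoring") model:
# a system whose scanner produces states that represent the system
# as being in its own first-order states. Purely schematic.

class Monitor:
    def __init__(self):
        self.first_order = {}   # first-order states, e.g. sensory registrations
        self.higher_order = []  # states that represent other states

    def sense(self, label, value):
        """Record a first-order state (an 'experience')."""
        self.first_order[label] = value

    def scan(self):
        """Internal scan: generate higher-order states representing
        the system as being in each of its first-order states."""
        self.higher_order = [
            f"I am in the state '{label}'" for label in self.first_order
        ]
        return self.higher_order

m = Monitor()
m.sense("seeing red", 0.9)
print(m.scan())  # → ["I am in the state 'seeing red'"]
```

On the functionalist reading sketched here, nothing more than this representational architecture is required for a state to be apprehended "from the first-person point of view"; critics of the model deny that such self-representation captures what the experience is like.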
Advocates for the view that subjectivity presents no special mystery sometimes point to the perspectival character of indexical expressions such as “I” and “here” for support.
The idea is that it is generally acknowledged that the meaning of such expressions cannot be captured in nonindexical terms (Perry 1979), yet this doesn’t give rise to any special philosophical problem or mystery.
Although one cannot derive a statement containing an indexical expression from statements free of indexicals, one need not conclude that there are any special indexical facts that are indescribable in indexical-free terms. There are theories that account for the special behavior of such terms while remaining consistent with a general theory that applies to nonindexical terms as well.
In the same way, goes the argument, subjective mental phenomena can be incorporated into a more general theory of the world that applies to nonsubjective phenomena as well.
For instance, whereas it may be true that Mary, in the Jackson example described above, could not predict what it would be like to see red from her knowledge of the neurophysiology of color vision, this need not be taken to show that there are irreducibly subjective facts.
It could be that human beings possess a distinct representational system that is employed only when information comes directly from the sensory systems (Rey 1993). It is no surprise that the same fact can be represented in distinct ways, and that being represented in distinct ways may obscure its identity from the subject.
Yet another approach to the problem of subjectivity is eliminativism (e.g., Churchland 1985, Dennett 1991). Proponents of this view will grant that none of the models proposed to account for subjectivity really explains it; but, they argue, that is due to the human intuitive conception of subjectivity—indeed of consciousness in general—being too confused, or incoherent, to be susceptible to scientific explanation. Subjectivity just isn’t a real phenomenon, so there’s nothing in the end to explain.