Legend has it that when Chilon of Sparta asked, “What is best for man?” Apollo replied, “Know thyself.” Thus, carved into the lintel of the Oracle of Apollo at Delphi were the Greek words “gnothi seauton”—“Know thyself” (Parke 1933). We can try to follow this Delphic injunction because we are self-conscious beings, capable of self-reflection.
Sigmund Freud (1923) maintained that we have unconscious beliefs, desires, motives, and intentions, and that extensive use of psychoanalytic techniques is often required to uncover them. Whether there is a Freudian unconscious is controversial, as is whether or not there is suppression or repression in the psychoanalytic senses.
Nevertheless, our mental lives can be dissociated. And self-reflection can be as biased as reflection on any topic. Too charitable an attitude towards ourselves can leave us overly sanguine about the strength of our characters or the goodness of our intentions. Too uncharitable an attitude can lead to an exaggerated view of our frailties: We may see ourselves as more selfish, less kind, and less well-intentioned than we really are.
We can engage in wishful thinking, believing something about ourselves on less than adequate evidence because we want it to be true; evasive thinking, which involves turning our attention to other matters when thoughts about ourselves arise that conflict with our self-image; and skeptical thinking, in which we construct hypotheses on the fly to explain away evidence that conflicts with our self-image. Arguably, we can practice self-deception about our motives and reasons, but, in any event, we can certainly unintentionally mislead ourselves about them (McLaughlin and Rorty 1986).
Social psychologists have found that we have a tendency to confabulate. Asked to explain our decisions or actions, we sometimes fabricate explanations, not with an intent to deceive, but apparently with an eye toward justifying or making sense of those decisions or actions; the end result is we are taken in by our own fabrications.
Evidence for this proclivity has led some philosophers to maintain that confabulation is so pervasive that our self-reports are just unreliable stories that we tell about ourselves for a variety of aims (Dennett 1991). Many philosophers, however, deny that our fallibility warrants skepticism about the possibility of self-knowledge. And many follow Socrates in holding that there is virtue in heeding the Delphic injunction.
The quest for self-knowledge is a way of taking responsibility for ourselves. But it should, of course, be balanced with other activities of value. Yet another Delphic injunction is “Everything in moderation.” Narcissism or self-hatred can result in self-absorption, which is a vice.
Other people are better judges of certain aspects of our character than we are. They sometimes read our emotional state better than we do or remember our view about a certain topic better than we do. Moreover, they can tell us when we are confabulating or not being honest with ourselves. Nevertheless, it seems that each of us is able to know some things about ourselves in ways unavailable to others.
Gilbert Ryle denied this claim, arguing that “the sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.... John Doe’s ways of finding out about John Doe are the same as John Doe’s ways of finding out about Richard Doe” (1949, p. 155).
He further claimed that “our knowledge of other people and ourselves depends on noticing how they and we behave” (1949, p. 181). According to behaviorism, we can know our own mental states only by observing our own behavior or relying on the testimony of others who have. Of course, often others are better positioned to observe our behavior than we are.
Hence, the joke “One behaviorist meeting another on the street said, ‘You feel fine! How do I feel?’” (Ziff 1958). The behaviorist view of self-knowledge seems untenable. We need not rely on observations of our behavior to know whether we are in pain, or are visualizing a red sunset, or are just now thinking to ourselves that behaviorism is untenable.
René Descartes (1985) drew attention to an area of mental life to which we seem to have first-person privileged access and with respect to which we seem authoritative: namely, our current conscious states. Conscious states include bodily sensations (aches, pains, itches, tickles, and the like), sense experiences (visual, auditory, and so on), mental imagery, felt emotions (feelings of fear, and so on), felt urges, and occurrent thoughts.
We seem able to know our current conscious states in a way different from the way in which we know those of others. Indeed it seems that to know whether we are in a certain conscious state, we need only turn our attention to whether we are. To know whether we are in pain, for instance, it seems that we need only turn our attention to whether we are in pain.
Of course, we are by no means omniscient about such matters. Beliefs about what conscious states we are in involve the exercise of concepts (Sellars 1963, sec. 62); and we may lack the requisite concepts to know that we are in a conscious state of a certain sort. Even when we have the requisite concepts, we can fail to know simply because of lack of attention to the matter. Moreover, our concepts of types of conscious states are vague.
Over the course of a morning we may move gradually from feeling cold to feeling warm, being unable to discern a difference in our thermal sensations from one moment to the next. En route to feeling warm we will pass through borderline cases of feeling cold; and in such cases we cannot know whether we feel cold. Theories of vagueness differ over why we cannot know in such borderline cases.
According to semantic theories, the reason is that there is no fact of the matter whether the concept of feeling cold applies, and so they present no limitation to self-knowledge. But given our inability to discriminate cases falling very near the borderline from cases on it, our ability to know whether we feel cold may stop short of the borderline (Williamson 2000).
We can make verbal mistakes in our reports of our conscious states (Broad 1925), and even perhaps conceptual mistakes in our judgments about what conscious states we are in because of less than a full mastery of a relevant concept (Burge 1979). But it has been held that if one has mastered the relevant concepts, one’s belief that one is in a certain conscious state will be infallible and incorrigible (Ayer 1940).
A belief or thought is infallible just in case it cannot be false; incorrigible just in case it cannot be shown to be false. René Descartes (1985) argued, “Cogito ergo sum”—“I think, therefore I am”—taking his first-person thought that he thinks to be infallible, incorrigible, and indeed indubitable, such that it cannot be rationally doubted.
“Cogito-thoughts” such as that I am now thinking, and that I am now thinking that P, are indeed infallible, and hence incorrigible: they are true by virtue of my thinking them (Burge 1988). (Similarly, the belief that one has beliefs is true by virtue of one’s having it.) Our infallibility in these cases, however, is not due to privileged access to the mental acts of thinking in question.
If I write in English that I am writing in English, then what I write is true by virtue of my so writing it; even though I lack privileged access to whether I am writing in English, or even to whether I am in fact writing at all. Indeed, there are scenarios in which I am writing that I am writing in English but in which I fail to know that I am writing in English.
The cogito-thought that one is thinking that P is (normally) an expression in consciousness of one’s belief that one is thinking that P. But one can believe that one is thinking that P, when the only thought one is having is the thought that P; indeed that is the typical case (McLaughlin and Tye 1998a).
Beliefs to the effect that one is thinking that P are not true by virtue of one’s having them. Moreover, they are fallible. To note just one reason: the longer it takes one to occurrently think that P, the more demand is put on short-term memory, and so the less reliable is one’s belief that one is thinking that P (Armstrong 1963). Even the belief that we are in pain is fallible.
Someone mesmerized by his guru might mistakenly believe that he is in pain solely on the basis of his guru’s testimony to that effect. To take a more mundane case, upon hearing the start of the dentist’s drill, one might momentarily mistake a feeling of pressure for a feeling of pain (Goldman 2002).
The term introspection is sometimes used very broadly to cover nearly any first-person, nonconsciously inferential avenue to knowledge of what mental states we are in. But on a more restricted usage (one to be followed here), introspecting a mental state is supposed to be a kind of direct act of awareness of the state.
According to introspectionism, we can attend to our current conscious states by introspecting them (Locke 1690; Broad 1925; Armstrong 1963; Hill 1991; Lycan 1996; Macdonald 1998, 1999; McLaughlin 2000, 2001, 2003c; Sturgeon 2000; Goldman 2002).
The term introspection derives from the Latin specere, which means “to look,” and the Latin prefix intro-, which means “within.” But the etymology is misleading. Introspectionists do not hold that we literally look within. There is no “mind’s eye” by which we observe our visual experiences, no “mind’s ear” or “mind’s toe” by which we observe, respectively, our auditory experiences and tactile experiences.
It is widely held that we see the scenes before our eyes by having visual experiences caused by them. We are not, however, aware of our visual experiences by having visual experiences caused by them. We do not see our visual experiences; they do not look any way to us. (Nor do they look any way to an internal homunculus; an untenable view that leads to an infinite regress of sighted homunculi embedded within sighted homunculi.)
Introspective access is direct in a way perceptual access is not. We experience our experiences, not by having experiences of them, but by having them. We can have them without introspecting them. But when we introspect, our attentional access to them is direct in that it is unmediated by any experiential states. Experiences are in that sense self-presenting. If this view is correct, then we are immune to a certain kind of error.
When our perceptual experiences are illusory, when things are not as they appear, we can be misled into believing that they are as they appear. If, however, our conscious states are self-presenting, then there is no appearance/reality distinction that pertains to them. We thus cannot be misled about them by their appearing to us some way that they are not.
Some introspectionists maintain that an act of introspective awareness of a conscious state is direct in yet another sense: it is unmediated by any causal mechanism. If, however, introspective awareness of a conscious state involves believing something of the state (for example, that it is a pain), the question arises as to whether this ofness connection requires causation.
It seems like mystery mongering to maintain that it is a primitive, fundamental relation. One view is that the relation is part-whole rather than causal: The conscious state is a constituent of the introspective belief. But there are constituents of the belief that the belief bears no of-ness relation to, for example, the concepts involved in it.
So, the constituency must be of a special sort. Proponents of this view are under an obligation to explicate it. There is also the issue of whether such an account can allow for mistaken introspective beliefs. These remain topics of investigation.
The more common view is that an introspective belief and the state introspected are linked by a causal mechanism. Causes and effects, however, must be “distinct existences,” and so capable of independent existence (Armstrong 1963).
This causal view thus seems to entail that there could be a being with beliefs that it is in conscious states of various sorts on various occasions yet is never in such states. But perhaps there could be a silicon-based robot that is such a being—possessed of the relevant concepts but entirely devoid of sentience.
The shock of such a possibility is somewhat lessened if primary possession of concepts of conscious states requires acquaintance with such states (Peacocke 1998), so that the robot could possess them only in a secondary way—by communicative interaction with conscious beings that possess them in a primary way.
Another “independent existence” concern with the causal view is that it entails the possibility of beings who are in conscious states but lack the capacity to be introspectively aware of them, and so who are “self-blind” with respect to them (Shoemaker 1984b, 1984c).
Introspectionists, however, maintain that introspective awareness of a conscious state consists of a belief that one is in the state, a belief formed by direct acquaintance with the state. Animals seem self-blind in the sense in question: they do not form beliefs about what conscious states they are in, for they lack the requisite concepts to do so.
Indeed, animals do not introspect their conscious states; they are conscious, but not self-conscious. So, this sort of self-blindness may seem not to count against introspectionism. Nevertheless, there is a sense in which animals are aware of their pains or itches, for instance; that is why the dog yelps or scratches.
Indeed it seems that their attention might be riveted on their sensation. It remains an open question whether the relevant mode of attention can be captured by a model of introspective attention as belief-formation or whether further distinctions are called for.
It has been claimed that when we try to direct our attention to our visual experience in order to introspect it, we seem to find ourselves only inspecting the scene before our eyes. It is thus claimed that visual experience is phenomenologically “transparent” or “diaphanous.” And some philosophers claim that all conscious states are diaphanous.
The phenomenological thesis of transparency seems most plausible for visual experiences and least plausible for bodily sensations. But it is maintained that even when we attend to a toothache, our attention seems focused on a feature of the tooth itself, however alarming we may find that feature.
In the light of these phenomenological considerations, a “displaced-perception model” of first-person knowledge of experience has been proposed. The leading idea in the visual case is this: when we are attentively aware that we are having a visual experience, our “awareness-that” is not based on direct awareness of the experience but rather on awareness of the scene before our eyes.
Our awareness of the experience is indirect, because we are aware of it by being aware of the scene. Nevertheless, if we have mastered the concept of visual experience, we can come to be aware that we are having a certain visual experience, without recourse to consciously drawing inferences.
Hallucination seems to pose no problem for the phenomenological transparency thesis itself: Perhaps, whenever we visually hallucinate, we seem to be aware only of a scene. But hallucination poses a problem for the displaced-perception model if, when we (completely) hallucinate, we are not actually aware of any scene at all.
If there are sense data, then we will actually be aware of a scene, even when we completely hallucinate, for sense data would constitute a scene. But the leading proponents of the displaced-perception model are physicalists and so deny that there are sense data.
Proponents of the model have tried to accommodate hallucination by maintaining that in such a case one is aware of a type of scene, despite not being aware of any actual instance of it. Whether this model applies to visual experience and all conscious states remains a topic of controversy.
Our ordinary epistemic practices seem to rely not only on the presumption that our (sincere) first-person ascriptions of conscious states (for example, “I am in pain”) are prima facie true but also on the presumption that our first-person ascriptions of beliefs (for example, “I believe that P”), desires, and intentions are prima facie true.
It has been claimed that the social-psychological data about confabulation show the latter presumption to be unfounded. But, arguably, the data show only that we have a tendency to confabulate when under pressure to explain how we arrived at our propositional attitudes or made choices; thus the data seem not to raise an unanswerable challenge to first-person authority.
In any case, many contemporary philosophers claim that whatever role introspection may play in explaining our first-person authority as self-ascribers of conscious states, it has little to do with our first-person authority concerning our propositional attitudes.
Even if we indeed introspect conscious states (as these were characterized earlier), we do not introspect our beliefs, desires, or intentions. Indeed, we do not even introspect our attitudinal emotions (fear that P, anger that Q, relief that R, and so on).
Such states can count as conscious, but only in the sense that they can have characteristic manifestations in consciousness; and (at best) we introspect only conscious states that manifest them. Thus, we may introspect an impulse, but not a desire; a feeling of anger, but not an attitude of anger; a thought that P, but not a belief that P.
Indeed, to be aware of one’s belief that P is just to be aware that one believes that P; and similarly for the other cases. Just as we can typically know what we believe without observing our behavior, we can typically know what we believe without introspecting.
Moreover, although we sometimes know that we believe something as a result of assessing evidence that we do, such a case seems atypical. When we ask ourselves whether we believe that P, want X, or intend to A, we usually do not reflect on evidence concerning whether we believe that P, want X, or intend to A. Of course, we sometimes do that. But in response to the questions we typically reflect, respectively, on whether P, whether X has some attractive feature, and whether we ought to do A.
Although we typically do that, reasons for believing that P is true are not reasons for believing that one believes that P; and reasons for believing that one ought to A are not reasons for believing that one intends to A (similarly for the desire case). Rather, they are, respectively, reasons to believe that P and reasons to intend to do A. So, the question of how such reflection leads to knowledge of our beliefs, desires, and intentions persists.
Philosophers who seek a role for introspection here will claim that, when we engage in such deliberative reflective reasoning, we can be introspectively aware of our occurrent thoughts. Philosophers who reject any role for introspection here will claim that even if we can indeed introspectively observe manifestations of propositional attitudes in consciousness and so have more “observational data” than others who can only observe manifestations of our attitudes in our overt verbal and nonverbal behavior, the fact that we have such additional observational data will not explain our first-person authority about our attitudes.
Moreover, occurrently thinking that P is a mental act—indeed a basic mental act: something we do, but not by doing something else. Our knowledge of what we are occurrently thinking is knowledge of something that we are doing. Our distinctively characteristic knowledge of our basic actions may not be introspective. What explains first-person authority about our propositional attitudes and basic actions remains an open issue.
Many philosophers have related first-person authority about attitudes and actions to the fact that attitudes and actions (unlike bodily sensations, imagery, or sense experiences) can be rational or irrational. One view is that our practice of attributing propositional attitudes is essentially an interpretive practice governed (in part) by constitutive principles of rationality, and the presumption of first-person authority is required for interpretation to be possible.
Another view is that the functional organization required to be a rational agent guarantees that a rational agent will, for the most part, be reliable in his or her beliefs about what propositional attitudes and experiences he or she has. Yet another view seeks to explain our first-person authority in terms of rational commitment and first-person deliberation. There are other very influential views.
Belief, desire, intention, and occurrent thought are modes of intentionality; states of these (and other intentional) types have representational content. One issue is how one knows which of these (or other intentional) types a given intentional state falls under; another issue is how one knows what the content of the state is.
Thus, there is, for instance, the issue of how one knows that one’s belief that P is a belief (rather, than, say, a desire); and there is the issue of how one knows that one’s belief is a belief that P (rather than a belief that something else is the case).
The leading contemporary theories of mental content are externalist theories, according to which the content of a mental state fails to supervene on intrinsic states of the subject. On these views, two intrinsic duplicates (for example, an inhabitant of Earth and her doppelgänger on Twin Earth) could be in mental states with different contents.
Some externalist theories hold that content depends on historical context, and according to others, it depends on social context. There has been extensive debate about whether content externalism is compatible with our having first-person authority or privileged first-person knowledge concerning what we think. Some philosophers argue for incompatibilism. Some argue for compatibilism.
Here is an example of one of the leading incompatibilist lines of argument. For any of the content-externalist theories in question, there will be some contingent environmental proposition E such that E can be known only on the basis of empirical evidence, yet the theory will entail that it is a conceptual truth that if we are thinking that P, then E. Thus, if we could have privileged first-person knowledge that we are thinking that P, it follows that we would be able to infer that E and thereby come to know it on some basis other than empirical evidence.
Some compatibilists have responded that the relevant contingent environmental propositions will be ones that can thereby be known on a basis other than empirical evidence, however surprising that might be. But by far the more prevalent compatibilist response is to try to show that combinations of the relevant content-externalist and privileged self-knowledge theses do not lead to this result.