Social Perception: Understanding Other People

Nobody outside of a baby carriage or a judge’s chamber believes in an unprejudiced point of view.
—Lillian Hellman
In July 1988, the U.S. guided missile frigate Vincennes was on patrol in the Persian Gulf. A state-of-the-art ship carrying the most sophisticated radar and guidance systems, the Vincennes became embroiled in a skirmish with some small Iranian naval patrol boats. During the skirmish, Captain Will Rogers III received word from the radar room that an unidentified aircraft was heading toward the ship. The intruder was on a descending path, the radar operators reported, and appeared to be hostile. It did not respond to the ship’s IFF (identify friend or foe) transmissions, nor were further attempts to raise it on the radio successful. Captain Rogers, after requesting permission from his superior, ordered the firing of surface-to-air missiles; the missiles hit and destroyed the plane. The plane was not an Iranian fighter. It was an Iranian Airbus, a commercial plane on a twice-weekly run to Dubai, a city across the Strait of Hormuz. The Airbus was completely destroyed, and all 290 passengers were killed.
Following the tragedy, Captain Rogers defended his actions. But Commander David Carlson of the nearby frigate Sides, 20 miles away, reported that his crew accurately identified the Airbus as a passenger plane. His crew saw on their radar screen that the aircraft was climbing from 12,000 to 14,000 feet (as tapes later verified) and that its flight pattern resembled that of a civilian aircraft (Time, August 15, 1988). The crew of the Sides did not interpret the plane’s actions as threatening, nor did they think an attack was imminent. When Commander Carlson learned that the Vincennes had fired on what was certainly a commercial plane, he was so shocked he almost vomited (Newsweek, July 13, 1992). Carlson’s view was backed up by the fact that the “intruder” was correctly identified as a commercial aircraft by radar operators on the U.S.S. Forrestal, the aircraft carrier and flagship of the mission (Newsweek, July 13, 1992).
What happened during the Vincennes incident? How could the crew of the Vincennes have “seen” a commercial plane as an attacking enemy plane on their radar screen? How could the captain have so readily ordered the firing of the missiles? And how could others—the crews of the Sides and the Forrestal, for instance—have seen things so differently?
The
answers to these questions reside in the nature of human cognition. The captain
and crew of the Vincennes constructed their own view of reality based on their
previous experiences, their expectations of what was likely to occur, and their
interpretations of what was happening at the moment—as well as their fears and
anxieties. All these factors were in turn influenced by the context of current
international events, which included a bitter enmity between the United States
and what was perceived by Americans as an extremist Iranian government.
The
captain and crew of the Vincennes remembered a deadly attack on an American
warship the previous year in the same area. They strongly believed that they
were likely to be attacked by an enemy aircraft, probably one carrying advanced
missiles that would be very fast and very accurate. If this occurred, the
captain knew he would need to act quickly and decisively. The radar crew saw an
unidentified plane on their screen. Suddenly they called out that the aircraft
was descending, getting in position to attack. The plane didn’t respond to
their radio transmissions. Weighing the available evidence, Captain Rogers
opted to fire on the intruder.
The
commander and crew of the Sides had a different view of the incident. They saw the incident through the filter of their belief that the Vincennes was itching for a fight. From their point of view, a passenger plane was shot down and 290
lives were lost as a result of the hair-trigger reaction of the overly
aggressive crew.
These
different views and understandings highlight a crucial aspect of human
behavior: Each of us constructs a version of social reality that fits with our
perception and interpretation of events (Jussim, 1991). We come to understand
our world through the processes of social perception, the strategies and
methods we use to understand the motives and behavior of other people.
This
chapter looks at the tools and strategies people use to construct social
reality. We ask, What cognitive processes are involved when individuals are
attempting to make sense of the world? What mechanisms come into play when we
form impressions of others and make judgments about their behavior and motives?
How accurate are these impressions and judgments? And what accounts for errors
in perception and judgment that seem to inevitably occur in social
interactions? How do we put all of the social information together to get a
whole picture of our social world? These are some of the questions addressed in
this chapter.
Impression Formation: Automaticity and Social Perception
The
process by which we make judgments about others is called impression formation.
We are primed by our culture to form impressions of people, and Western culture
emphasizes the individual, the importance of “what is inside the person,” as the
cause of behavior (Jones, 1990). We also may be programmed biologically to form
impressions of those who might help or hurt us. It is conceivable that early humans who were better at making accurate inferences about others had superior survival chances, and that those abilities are part of our genetic heritage (Flohr, 1987). It makes sense that they were able to form relatively accurate
impressions of others rather effortlessly. Because grossly inaccurate
impressions—is this person dangerous or not, trustworthy or not, friend or
foe—could be life threatening, humans learned to make those judgments
efficiently. Those who could not were less likely to survive. So, efficiency
and effortlessness in perception are critical goals of human cognition.
Social
psychologists interested in cognition are primarily concerned with how the
individual tries to make sense out of what is occurring in his or her world
under the uncertain conditions that are a part of normal life (Mischel, 1999).
Much of our social perception involves automatic processing—forming impressions
without much thought or attention (Logan, 1989). Thinking that is conscious and
requires effort is referred to as controlled processing.
Automatic Processing
Automatic processing is thinking that occurs primarily outside consciousness. It is
effortless in the sense that it does not require us to use any of our conscious
cognitive capacity. We automatically interpret an upturned mouth as a smile,
and we automatically infer that the smiling person is pleased or happy (Fiske
& Taylor, 1991). Such interpretations and inferences, which may be built
into our genetic makeup, are beyond our conscious control.
Running
through all our social inference processes—the methods we use to judge other
people—is a thread that seems to be part of our human makeup: our tendency to
prefer the least effortful means of processing social information (Taylor,
1981). This is not to say we are lazy or sloppy; we simply have a limited
capacity to understand information and can deal with only relatively small
amounts at any one time (Fiske, 1993). We tend to be cognitive misers in the
construction of social reality: Unless motivated to do otherwise, we use just
enough effort to get the job done. In this business of constructing our social
world, we are pragmatists (Fiske, 1992). Essentially we ask ourselves, What is
my goal in this situation, and what do I need to know to reach that goal?
Although
automatic processing is the preferred method of the cognitive miser, there is
no clear line between automatic and controlled processing. Rather, they exist
on a continuum, ranging from totally automatic (unconscious) to totally
controlled (conscious), with degrees of more and less automatic thinking in
between.
The Importance of Automaticity in Social Perception
Recall the
work of Roy Baumeister discussed in Chapter 2. His work concluded that even
small acts of self-control such as forgoing a tempting bite of chocolate use up
our self-control resources for subsequent tasks. However, Baumeister and Sommer
(1997) suggested that although the conscious self is important, it plays a
causal and active role in only about 5% of our actions. This suggests that
despite our belief in free will and self-determination, it appears that much if
not most of our behavior is determined by processes that are nonconscious, or
automatic (Bargh & Chartrand, 1999). Daniel Wegner and his coworkers showed
that people mistakenly believe they have intentionally caused a behavior when
in fact they were forced to act by stimuli of which they were not aware
(Wegner, Ansfield, & Pilloff, 1998). Wegner and Wheatley (1999) suggested
that the factors that actually cause us to act are rarely, if ever, present in
our consciousness.
Bargh
(1997) wrote that automatic responses are learned initially from experience and
then are used passively, effortlessly, and nonconsciously each time we
encounter the same object or situation. For example, Chartrand and Bargh (1996)
showed that when individuals have no clear-cut goals to form impressions of
other people, those goals can be brought about nonconsciously. It is possible
to present words or images so quickly that the individual has no awareness that
anything has been presented, and furthermore the person does not report that he
or she has seen anything (Kunda, 1999). But the stimuli can still have an
effect on subsequent behavior. Employing this technique of presenting stimuli
subliminally in a series of experiments, Chartrand and Bargh (1996) “primed”
participants to form an impression of particular (target) individuals by
presenting some subjects with words such as judge and evaluate and other
impression-formation stimuli. These primes were presented on a screen just
below the level of conscious awareness. Other experiment participants were not
primed to form impressions subliminally. Soon thereafter, the participants in
the experiment were given a description of behaviors that were carried out by a
particular (target) individual but were told only that they would be questioned
about it later. Chartrand and Bargh reported that those participants who were
primed by impression-formation words (judge, evaluate, etc.) below the level of
conscious awareness (subliminally) were found to have a fully formed impression
of the target. Subjects not primed and given the same description did not form
an impression of the target. Therefore, the participants were induced
nonconsciously to form an impression, and this nonconsciously primed goal
guided subsequent cognitive behavior (forming the impression of the target
person presented by the experimenter).
Nonconscious Decision Making: Sleeping on It
Buying a
can of peas at the grocery store usually doesn’t strain our
intellect. After all, peas
are peas. While we might prefer one brand over another, we won’t
waste a lot of time on
this decision. If the decision, however, involves something really
important—what car should we buy, who should we marry, where shall we live—then
we may agonize over the choice. But, according to new research, that is exactly
the wrong way to go about it. For one thing, difficult decisions often present
us with a dizzying number of facts and options. Four Dutch psychologists
(Dijksterhuis, Bos, Nordgren, & van Baaren, 2006) suggest that the best way
to deal with complex decisions is to rely on the unconscious mind. These
researchers describe unconscious decision making or thought as thinking about
the problem while your attention is directed elsewhere. In other words, “sleep
on it.”
In one
part of their research, Dijksterhuis and his co-researchers asked shoppers
and college students to make judgments about simple things (oven mitts) and
more complex things (buying automobiles). Participants, given the qualities of certain automobiles, were asked to choose the best car. The problems were
presented quickly, and the researchers varied the complexity of the problems.
For example, for some people, the cars had four attributes (age, gasoline mileage,
transmission, and handling), but for others, 12 attributes for each automobile
were presented. Some participants were told to “think carefully” about the
decisions, while others were distracted from thinking very much about their
choices by being asked to do anagram puzzles. The results were that if the task
was relatively simple (four factors), thinking carefully resulted in a more
correct decision than when the person was distracted. But if the task became
much more complex (12 factors), distraction led to a better decision. What’s
the explanation? Unconscious thought theory (UTT) suggests that while conscious thought is really precise
and allows us to follow strict patterns and rules, its capacity to handle lots
of information is limited. So conscious thought is necessary for doing, say,
math, a rule-based exercise, but may not be as good in dealing with complex
issues with lots of alternatives (Dijksterhuis et al., 2006).
Should we
always rely on our “gut” feelings when making complex and important life
decisions? We do not yet have a complete answer to that question. For
example, we don’t know precisely how emotions or previous
events might enter into the
mix. There is, however, a growing body of research that gives us some
confidence that too much contemplation about our loves and careers and other
aspects of our lives that are important to us may not be helpful.
Social
psychologist Timothy Wilson has examined these issues in novel, even charming
ways. Wilson (2002) has argued, and demonstrated, that we have a “powerful,
sophisticated, adaptive” unconscious that is crucial for survival but largely,
to ourselves, unknowable. Fortunately Wilson and others have devised
experimental methods to probe our unconscious. In one study, Wilson, Kraft, and
Dunn asked one group of people to list the reasons why their current romantic
relationship was going the way it was (described in Wilson, 2005). Then they
were asked to say how satisfied they were with that relationship. A second
group was just asked to state their “gut” reactions to the questions without
thinking about it. Both groups were asked to predict whether they would still
be in that relationship several months later. Now you might hypothesize that
those who thought about how they felt would be more accurate in their
predictions (Wilson, 2005). However, those who dug deep into their feelings and
analyzed their relationships did not accurately predict the outcome of those
relationships, while those who did little introspection got it pretty much
right. Again there appears to be a kind of “wisdom” inherent in not thinking
too much about complex issues and feelings. These findings and others about the
power of the nonconcious mind raise the issue among cognitive psychologists
about what precisely do we mean by consciousness.
Automaticity and Behavior
Just as
impressions can be formed in a nonconscious manner, so too can behavior be
influenced by nonconscious cues. That is to say, our behavior can be affected
by cues—stimuli—that are either below the level of conscious awareness or may
be quite obvious, although we are not aware of their effects upon us. Priming
can also be used to affect perceptions nonconsciously. Psychologists have found
that priming, “the nonconscious activation of social knowledge,” is a very
powerful social concept and affects a wide variety of behaviors (Bargh, 2006).
For example, Kay, Wheeler, Bargh, and Ross (2004) found that the mere presence
of a backpack in a room led to more cooperative behaviors in the group, while
the presence of a briefcase prompted more competitive behaviors. The backpack
or the briefcase is a “material prime,” an object that brings out behaviors
consistent with the “prime” (executives carry briefcases and compete; backpackers
climb mountains and cooperate). Similarly, “norms can be primed,” as
demonstrated by Aarts and Dijksterhuis (2003) in a study in which people who
were shown photographs of libraries tended to speak more softly.
Priming
affects our behavior in a wide variety of social situations. These “automatic
activations,” as Bargh (2006) notes, include the well-known “cocktail party
effect.” Imagine you are at a loud party and can barely hear the people that
you are speaking with. Suddenly, across the room, you hear your name spoken in
another conversation. Your name being spoken automatically catches your
conscious attention without any cognitive effort.
In another
example of nonconscious behavior, imagine a couple, married for a quarter of a
century, sitting at the dinner table vigorously discussing the day’s
events. The dinner
guest cannot help but notice how husband and wife mimic, clearly unconsciously,
each other’s gestures. When he makes a strong point, the husband
emphasizes his comments
by hitting the table with his open hand. His wife tends to do the same, though
not quite so vigorously. Neither is aware of the gestures.
Indeed, there
is evidence that such mimicry is common in social interaction (Macrae et al.,
1998). Chartrand and Bargh (1999) termed this nonconscious mimicry the
chameleon effect, indicating that like the chameleon changing its color to
match its surroundings, we may change our behavior to match that of people with
whom we are interacting.
Perception
may also automatically trigger behaviors. Chartrand and Bargh (1999) had two
people interact with each other; however, one of the two was a confederate of
the experimenter. Confederates either rubbed their face or shook their foot.
Facial expressions were varied as well, primarily by smiling or not. The
participant and the confederate sat in chairs half-facing each other, and the
entire session was videotaped and analyzed. Figure 3.1 shows the results of
this experiment. Experimental subjects tended to rub their faces when the
confederate did so, and the subjects tended to shake their foot when the
confederate did. Frank Bernieri, John Gillis, and their coworkers also showed
that when observers see two people in synchrony—that is, when their physical
movements and postures seem to mimic or follow each other—the observers assume
that the individuals have high compatibility or rapport (Bernieri, Gillis,
Davis, & Grahe, 1996; Gillis, Bernieri, & Wooten, 1995).
In another
experiment, Chartrand and Bargh showed the social value of such mimicry. For
individuals whose partner mimicked their behavior, the interaction was rated as
smoother, and they professed greater liking for that partner than did
individuals whose partner did not mimic their expression or behavior. These
experiments and others demonstrate the adaptive function of nonconscious
behavior. Not only does it smooth social interactions, but it does away with
the necessity of actively choosing goal-related behavior at every social
encounter. Because our cognitive resources are limited and can be depleted, it
is best that these resources are saved for situations in which we need to
process social information in a conscious and controlled manner.
Automaticity and Emotions
If
cognitive activity occurs below the level of conscious awareness, we can ask
whether the same is true of emotion. We all know that our emotional responses
to events often are beyond our conscious control. We may not be aware of why we
reacted so vigorously to what was really a small insult or why we went into a
“blue funk” over a trivial matter. Where we need conscious control is to get
out of that bad mood or to overcome that reaction. It appears that our
emotional responses are not controlled by a conscious will (LeDoux, 1996). As
Wegner and Bargh (1998) indicated, the research on cognition and emotion
focuses primarily on what we do after we express an emotion, not on how we
decide what emotion to express.
Sometimes
we can be aware of what we are thinking and how those thoughts are affecting us
but still not know how the process started or how we may end it. For example,
have you ever gotten a jingle stuck in your mind? You can’t
say why the jingle started,
nor can you get it out of your mind, no matter how hard you try. You think of
other things, and each of these distractors works for a while. But soon the
jingle pops up again, more insistent than ever. Suppressing an unwanted thought
seems only to make it stronger.
This
phenomenon was vividly demonstrated in an experiment in which subjects were
told not to think of a white bear for 5 minutes (Wegner, 1989). Whenever the
thought of a white bear popped into mind, subjects were to ring a bell. During
the 5-minute period, subjects rang the bell often. More interesting, however,
was the discovery that once the 5 minutes were up, the white bears really took
over, in a kind of rebound effect. Subjects who had tried to suppress thoughts
of white bears could think of little else after the 5 minutes expired. The
study demonstrates that even if we successfully fend off an unwanted thought
for a while, it may soon return to our minds with a vengeance.
Because of
this strong rebound effect, suppressed thoughts may pop up when we least want
them. A bigot who tries very hard to hide his prejudice when he is with members
of a particular ethnic group will, much to his surprise, say something stupidly
bigoted and wonder why he could not suppress the thought (Wegner, 1993). This
is especially likely to happen when people are under pressure. Automatic
processing takes over, reducing the ability to control thinking.
Of course,
we do control some of our emotions but apparently only after they have
surfaced. If our boss makes us angry, we may try to control the expression of
that anger. We often try to appear less emotional than we actually feel. We may
moderate our voice when we are really angry, because it would do us no good to
express that emotion. However, as Richards and Gross (1999) showed, suppressing
emotion comes at a cost. These researchers demonstrated that suppressing
emotions impairs memory for information during the period of suppression and
increases cardiovascular responses. This suggests, as does Wegner’s
work, that suppressing emotions depletes one’s
cognitive resources.
Emotions: Things Will Never Get Better

We can see now that nonconscious factors affect
both our behavior and our emotions. Daniel Gilbert and his co-researchers have
demonstrated in a series of inventive experiments that we are simply not very good at predicting how emotional events will affect us in the future. For one thing, we tend not to take into account the fact that the more intense the emotion, the less staying power it has. We tend to underestimate our tendency to return to an even keel (homeostasis), which diminishes the impact of even the most negative, or for that matter the most positive, emotions. We think that if we don’t get a particularly great job, or if we are rejected by a person we’d love to date, it will take forever to recover. Gilbert, Lieberman, Morewedge, and
Wilson (2004) were especially interested in how individuals thought they would
respond emotionally (hedonically) to events that triggered very emotional
responses. These researchers point out that when extreme emotions are
triggered, psychological processes are stimulated that serve to counteract the
intensity of emotions such that one may expect that intense emotional states
will last a shorter time than will milder ones. How does this happen? Gilbert
et al. (2004) note that people may respond to a highly traumatic event by
cognitively dampening the depth of their feelings. So they note that a married person wanting to keep a marriage intact might rationalize her mate’s infidelity, whereas her anger over a lesser annoyance, say, messiness, lasts longer. In a series of studies, Gilbert et al. examined people’s forecasts of how they would feel after one of a number of bad things happened to them (being stood up, suffering a romantic betrayal, having their car dented). The more serious the event, as you would
expect, the stronger the emotional response. But, as Gilbert et al. predicted,
the stronger the initial emotional reaction, the quicker the emotion
dissipated. Now this doesn’t mean that people learn to love their
tormentors, but the intensity of the emotion is much less than people forecast.
Controlled Processing
As
mentioned earlier, controlled processing involves conscious awareness,
attention to the thinking process, and effort. It is defined by several
factors: First, we know we are thinking about something; second, we are aware
of the goals of the thought process; and third, we know what choices we are
making. For example, if you meet someone, you may be aware of thinking that you
need to really pay attention to what this person is saying. Therefore, you are
aware of your thinking process. You will also know that you are doing this
because you expect to be dealing with this person in the future. You may want
to make a good impression on the person, or you may need to make an accurate
assessment. In addition, you may be aware that by focusing on this one person,
you are giving up the opportunity to meet other people.
People are
motivated to use controlled processing—that is, to allocate more cognitive
energy to perceiving and interpreting. They may have goals they want to achieve
in the interaction, for example, or they may be disturbed by information that
doesn’t fit their
expectancies. Processing becomes more controlled when thoughts and behavior are
intended (Wegner & Pennebaker, 1993).
The Impression Others Make on Us: How Do We “Read” People?
It is
clear then that we process most social information in an automatic way, without
a great deal of effort. As we said earlier, perhaps only 5% of the time do we
process it in a controlled and systematic way. What does this mean for accurate
impression formation?
How Accurate Are Our Impressions?
How many
times have you heard, “I know just how you feel”? Well, do we really know how
someone else feels? King (1998) noted that the ability to recognize the
emotions of others is crucial to social interaction and an important marker of
interpersonal competence. King found that our ability to accurately read other
individuals’ emotions depends
on our own emotional socialization. That is, some individuals have learned,
because of their early experiences and feedback from other people, that it is
safe to clearly express their emotions. Others are more conflicted, unsure, and
ambivalent about expressing emotions. Perhaps they were punished somehow for
emotional expression and learned to adopt a poker face. This personal
experience with emotional expressivity, King reasoned, should have an effect on
our ability to determine the emotional state of other people.
King
(1998) examined the ability of people who were unsure or ambivalent about
emotional expressivity to accurately read others’ emotions. She
found that compared to individuals
who had no conflict about expressing emotions, those who were ambivalent about
their own emotional expression tended to be confused about other people’s
expression of emotion.
The ambivalent individuals, when trying to read people in an emotional situation or to read their facial expressions, quite often inferred the opposite of the emotion the individuals actually felt and reported. Ambivalent individuals, who spend much energy being inexpressive or suppressing emotional reactions, readily inferred that others also were hiding their emotions, so that what they saw was not what was meant. This simply may mean that
people who are comfortable with their own emotional expressiveness are more
accurate in reading other people’s emotional expressions.
King’s work, then, suggests that our ability to accurately read other people depends a great deal on our own emotional life. Consider another example: Weary and Edwards (1994) suggested that
mildly or moderately depressed people are much more anxious than others to
understand social information. This is because depressives often feel that they
have little control over their social world and that their efforts to effect
changes meet with little success.
Edwards
and his coworkers have shown that depressives are much more tuned to social
information and put more effort into trying to determine why people react to
them as they do. Depressives are highly vigilant processors of social
information (Edwards, Weary, von Hippel, & Jacobson, 1999). One would think
that depressives’ vigilance would make them more accurate in reading people.
Depressed people often have problems with social interactions, and this
vigilance is aimed at trying to figure out why and perhaps alter these
interactions for the better. But here again, we can see the importance of
nonconscious behavior. Edwards and colleagues pointed out that depressed people
behave in ways that “turn others off.” For example, depressives have trouble
with eye contact, voice pitch, and other gestures that arouse negative
reactions in others. In fact, Edwards and colleagues suggested that all this
effortful processing distracts depressed individuals from concentrating on
enjoyable interactions.
Confidence and Impression Formation
Our
ability to read other people may depend on the quality of our own emotional
life, but the confidence we have in our impressions of others appears to
depend, not surprisingly, on how much we think we know about the other person.
Confidence in our impressions of other people is important because, as with
other beliefs held with great conviction, we are more likely to act on them.
If, for example, we are sure that our friend would not lie to us, we then make
decisions based on that certainty. The commander of the Vincennes certainly was
confident in his interpretation of the deadly intent of the aircraft on his
radar screen.
However,
confidence in our judgment may not necessarily mean that it is accurate. Wells
(1995) showed that the correlation between accuracy and confidence in
eyewitness identification is very modest, and sometimes there is no
relationship at all. Similarly, Swann and Gill (1997) reported that among dating partners and roommates, the link between confidence and accuracy of perception was not very strong.
Gill and
his colleagues found that when individuals were required to form a careful
impression of an individual, including important aspects of the target’s
life—intellectual
ability, social skills, physical attractiveness, and so forth—and they had
access to information derived from a videotaped interview with the target
person, they had high confidence in their judgments of the target. This is not
surprising, of course. But what might be surprising is that confidence had no
impact on the accuracy of the participants’ judgment (experiment 1; Gill, Swann,
& Silvera, 1998). In another series of
studies, these researchers amply demonstrated that having much information
about a target makes people even more confident of their judgments, because
they can recall and apply information about these people easily and fluently.
But, the judgments are no more accurate than when we have much less information
about someone. What is most disturbing about these findings is that it is
precisely those situations in which we have much information and much
confidence that are most important to us. These situations involve close
relationships of various kinds with people who are very significant in our
lives. But the research says we make errors nevertheless, even though we are
confident and possess much information.
Our modest
ability to read other people accurately may be due to the fact that our
attention focuses primarily on obvious, expressive cues at the expense of more
subtle but perhaps more reliable cues. Bernieri, Gillis, and their coworkers
showed in a series of experiments that observers pay much attention to overt cues
such as extraversion and frequent smiling. Bernieri and Gillis
suggested that expressivity (talking, smiling, gesturing) drives social
judgment but that people may not recognize that expressivity determines their
judgments (Bernieri et al., 1996).
If at First You Don't Like Someone, You May Never Like Them
Certainly,
this heading is an overstatement but probably not by much. Let’s
state the obvious: We
like to interact with those people of whom we have a really positive
impression. And, we stay away from those we don’t like very much.
That makes sense. But
as Denrell (2005) has suggested, one problem with that approach is that there
is a “sample bias,” which happens when the level of interaction between people
is determined by first impressions. This sample bias goes something like this:
Imagine you are a member of a newly formed group, and you begin to interact
with others in the group. You meet Person A, who has low social skills. Your
interaction with him is limited, and your tendency, understandably, is to avoid
him in the future. Now Person B is different.
She has excellent social skills, and conversation with her is easy and fluid.
You will obviously sample more of Person B’s behavior than Person A’s. As a result,
potentially false negative impressions
of Person A never get changed, while a false positive impression of B could
very well be changed if you were to “sample” more of her behavior (Denrell,
2005).
An
important point that Denrell (2005) makes, then, about impression formation is
that if there are biases in the sampling (the kind and amount of interaction
with somebody), then systematic biases in impression formation will occur. This
may be especially true of individuals who belong to groups with which we have
limited contact. We never get the opportunity to interact with those members in
enough situations to form fair impressions based upon a representative sample
of their behavior. Therefore, we never have enough evidence to correct a
negative or a positive false first impression because we rarely interact again
with a person with whom we have had a negative initial interaction (Plant &
Devine, 2003).
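Denrell's sampling bias can be made concrete with a toy simulation. The sketch below is my own construction, not from Denrell's paper; the function names, noise model, and stopping rule are all illustrative assumptions. Perceivers keep interacting with a person only while their running impression stays positive, so a negative first impression freezes and is never corrected, even for a genuinely likeable person.

```python
import random

# Toy simulation (illustrative only) of Denrell's (2005) sampling bias:
# perceivers stop interacting once their running impression turns negative.
random.seed(1)

def interact(true_quality):
    """One interaction: observed behavior is true quality plus random noise."""
    return true_quality + random.gauss(0, 1.0)

def final_impression(true_quality, max_meetings=20):
    """Keep meeting the person only while the average impression is positive."""
    observations = [interact(true_quality)]
    while len(observations) < max_meetings:
        if sum(observations) / len(observations) < 0:
            break  # avoid the person from now on; the impression is frozen
        observations.append(interact(true_quality))
    return sum(observations) / len(observations)

# Person A is genuinely likeable (true quality +0.5), yet a sizable share of
# simulated perceivers end up stuck with a false negative impression of him.
impressions = [final_impression(0.5) for _ in range(10_000)]
stuck_negative = sum(imp < 0 for imp in impressions) / len(impressions)
print(f"perceivers frozen on a negative impression: {stuck_negative:.2f}")
```

Note the asymmetry the simulation produces: positive impressions keep being tested against new behavior, while negative ones never are, which is exactly the pattern Denrell describes.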
Person
Perception: Reading Faces and Catching Liars
When we
say that we can "read" others' emotions, what we really mean is that we can "read" their faces. The face
is the prime stimulus for not only recognizing someone but forming an
impression of them as well. Recent neuroscience research has yielded a wealth
of information about face perception and its neural underpinnings. For example,
we know that human face processing occurs in the occipital temporal cortex and
that other parts of the brain are involved in determining the identity of the
person (Macrae, Quinn, Mason, & Quadflieg, 2005). We also know that we are
quite good at determining basic information about people from their faces even
under conditions that hinder optimal perception. For example, Macrae and his
colleagues, in a series of three experiments, presented a variety of male,
female, and facelike photographs, some in an inverted position, and in spite of
the “suboptimal” presentation of these stimuli, their subjects could reasonably
report the age and sex of the person. In this case, Macrae et al. suggest that
acquisition of fundamental facial characteristics (age, sex, race) appears to
be automatic.
So we know
that getting information from faces is hard-wired in our brains and we know
where that wiring is. But there is also evidence for the early start of facial
perception. Even newborns have rudimentary abilities that allow them to
distinguish several facial expressions, although it is only at the end of the
first year that infants seem to be able to assign meaning to emotional
expressions (Gosselin, 2005).
It Is Hard
to Catch a Liar: Detecting Deception
If, as the
research shows, we are not very good at reading people, even those with whom we
have close relationships, then you might suspect that we are not very good at
detecting lies and liars. In general, you are right. But some people can learn
to be quite accurate in detecting lies. Paul Ekman and his coworkers asked 20
males (ages 18 to 28) to indicate how strongly they felt about a number of
controversial issues. These males were then asked to speak to an interrogator
about the social issue about which they felt most strongly. Some were asked to
tell the truth; others were asked to lie about how they felt (Ekman, O’Sullivan,
& Frank, 1999). If the truth tellers were believed, they were rewarded with $10; liars who were
believed were given $50. Liars who were caught and truth tellers who were
disbelieved received no reward. So, the 20 males were motivated to do a good
job. Ekman and his colleagues filmed the faces of the 20 participants and found
that there were significant differences in facial movements between liars and
truth tellers.
The
researchers were interested in whether people in professions in which detection
of lies is important were better than the average person in identifying liars
and truth tellers. Ekman tested several professional groups, including federal
officers (CIA agents and others), federal judges, clinical psychologists, and
academic psychologists. In previous research, the findings suggested that only
a small number of U.S. Secret Service agents were better at detecting lies than
the average person, who is not very effective at recognizing deception. Figure
3.2 shows that federal officers were most accurate at detecting whether a
person was telling the truth. Interestingly, these officers were more accurate
in detecting lies than truth. Clinical psychologists interested in deception
were next in accuracy, and again, they were better at discerning lies than
truth telling.
The best
detectors focused not on one clue but rather on a battery of clues or symptoms.
Ekman notes that no one clue is a reliable giveaway. Perhaps the most difficult
obstacle in detecting liars is that any one cue or series of cues may not be
applicable across the board. Each liar is different; each detector is different
as well. Ekman found a wide range of accuracy within each group, with many
detectors being at or below chance levels.
If people
are not very good at detecting lies, then they ought not to have much
confidence in their ability to do so. But as DePaulo and her colleagues have
shown, people’s confidence in their judgments as to whether someone else is
telling the truth is
not reliably related to the accuracy of their judgments (DePaulo, Charlton,
Cooper, Lindsay, & Muhlenbruck, 1997). People are more confident in their
judgments when they think that the other person is telling the truth, whether
that person is or not, and men are more confident, but not more accurate, than
are women. The bottom line is that we cannot rely on our feelings of confidence
to reliably inform us if someone is lying or not. As suggested by the work of
Gill and colleagues (1998) discussed earlier, being in a close relationship
and knowing the other person well is no great help in detecting lies (Anderson,
Ansfield, & DePaulo, 1998). However, we can take some comfort in the results
of research that shows that people tell fewer lies to the individuals with whom
they feel closer and are more uncomfortable if they do lie. When people lied to
close others, the lies were other-oriented, aimed at protecting the other
person or making things more pleasant or easier (DePaulo & Kashy, 1999).
In a book
by neurologist Oliver Sacks, The Man Who Mistook His Wife for a Hat, there is
a scene in which brain-damaged patients, all of whom had suffered a stroke,
accident, or tumor to the left side of the brain (aphasics) and therefore had
language disorders, were seen laughing uproariously while watching a TV speech
by President Ronald Reagan. Dr. Sacks speculated that the patients were picking
up lies that others were not able to catch.
There is
now some evidence that Sacks’s interpretation may have been right. Etcoff, Ekman, and Frank (2000)
suggested that language may hide the cues that would enable us to detect lying,
and therefore those with damage to the brain’s language centers may be better at detecting
lies. The indications are that when people lie, their true intent is reflected
by upper facial expressions, whereas the part of the face around the mouth
conveys the false emotional state the liar is trying to project. It may be that
aphasics use different brain circuitry to detect liars. For the rest of us, it’s
pretty much pure
chance.
Figure 3.2
A recent
examination of over 1,300 studies concerning lying has shown how faint the
traces of deception are (DePaulo, Lindsay, Malone, Charlton, & Cooper,
2003). This massive review indicates that there are 158 cues to deception,
but many of them are faint or counterintuitive, things that you might not
expect. So, liars say less than truth tellers and tell stories that are less
interesting, less compelling. The stories liars tell us, however, are more
complete, more perfect. Clearly, liars think more about what they are going to
say than do truth tellers. Cues that would allow us to detect lying are
stronger when the liar is deceiving us about something that involves his or her
identity (personal items) as opposed to when the liar is deceiving about
nonpersonal things.
To
illustrate the difficulties, consider eye contact. According to DePaulo et al.
(2003) motivated liars avoid eye contact more than truth tellers and
unmotivated liars. So, the motivation of the liar is important. To further
complicate matters, other potential cues to lying, such as nervousness, may not
help much in anxiety-provoking circumstances. Is the liar or the truth teller
more nervous when on trial for her life? Perhaps nervousness is a cue in
traffic court but maybe not in a felony court (DePaulo et al., 2003).
We know,
then, that the motivation of the liar may be crucial in determining which cues
to focus on. Those who are highly motivated may just leave some traces of their
deception. DePaulo’s question about what cues liars signal
if they are at high risk and therefore
highly motivated was examined by Davis and her colleagues (2005), who used
videotaped statements of criminal suspects who were interviewed by assistant
district attorneys (DAs). This was after the suspects had been interviewed by
the police, who had determined that a crime had been committed by these
individuals. These were high-stakes interviews because the assistant DAs would
determine the severity of the charge based on the results of the interviews.
All the criminals claimed some mitigating circumstances (Davis, Markus,
Walters, Vorus, & Connors, 2005).
In this
study, the researchers knew the details of the crimes so they, by and large,
knew when the criminal was lying and could match his or her behavior (language
and gestures) against truthful and deceitful statements. While the researchers
determined that the criminals made many false statements, the deception cues
were few, limited, and lexical (e.g., saying no while also shaking the head no)
(Davis et al., 2005, p. 699). Shakespeare's line from Act 3 of Hamlet, "The
lady doth protest too much, methinks," has the ring of truth, for those
criminals who did protest too much, by repeating phrases and vigorously shaking
their heads, were in fact lying. Curiously, nonlexical sounds (sighing, saying
umm or er) were indicators of truth telling. This latter finding may relate to
DePaulo et al.’s observation that liars try to present a
more organized story
than do truth tellers.
And
sometimes, the liar may be a believer. True story: Not long ago an elderly
gentleman was unmasked as a liar when his story of having won a Medal of Honor
in combat during World War II was shown to be false. By all newspaper accounts,
he was a modest man, but every Memorial Day he would wear his Medal and lead
the town’s parade.
The Medal was part of his identity, and the town respected his right not to
talk about his exploits. It is a federal crime to falsely claim to be a Medal
of Honor winner. Those who questioned the man about his false claims came to
understand that he had played the role for so long it truly became a part of
him, and thus, after a while, he was not being deceptive. He had come to
believe he was who he said he was.
The
Attribution Process: Deciding Why People Act
As They Do
We make
inferences about a person’s behavior because we are interested in
the cause of that
behavior. When a person is late for a meeting, we want to know if the individual
simply didn’t care or
if something external, beyond his or her control, caused the late appearance.
Although there is a widespread tendency to overlook external factors as causes
of behavior, if you conclude that the person was late because of, say, illness
at home, your inferences about that behavior will be more moderate than if you
determined he or she didn’t care (Vonk, 1999).
Each of
the theories developed to explain the process provides an important piece of
the puzzle in how we assign causes and understand behavior. The aim of these
theories is to illuminate how people decide what caused a particular behavior.
The theories are not concerned with finding the true causes of someone’s
behavior. They are concerned with
determining how we, in our everyday lives, think and make judgments about the
perceived causes of behaviors and events.
In this
section, two basic influential attribution theories or models are introduced,
as well as additions to those models:
• Correspondent inference theory
• Covariation theory
• Dual-process models
The first
two, correspondent inference theory and covariation theory, are the oldest and
most
general attempts to describe the attribution process. Others represent more
recent,
less
formal approaches to analyzing attribution.
Heider’s
Early Work on Attribution
The first
social psychologist to systematically study causal attribution was Fritz
Heider.
He assumed
that individuals trying to make sense out of the social world would follow simple
rules of causality. The individual, or perceiver, operates as a kind of “naïve
scientist,” applying a set of rudimentary scientific rules (Heider, 1958).
Attribution theories are an attempt to discover exactly what those rules are.
Heider
made a distinction between internal attribution, assigning causality to
something about the person, and external attribution, assigning causality to
something about the situation. He believed that decisions about whether an
observed behavior has an internal (personal) or external (situational) source emerge
from our attempt to analyze why others act as they do (causal analysis).
Internal sources involve things about the individual—character, personality,
motives, dispositions, beliefs, and so on. External sources involve things
about the situation—other people, various environmental stimuli, social
pressure, coercion, and so on. Heider (1944, 1958) examined questions about the
role of internal and external sources as perceived causes of behavior. His work
defined the basic questions that future attribution theorists would confront.
Heider (1958) observed that perceivers are less sensitive to situational
(external) factors than to the behavior of the individual they are observing or
with whom they are interacting (the actor). We turn now to the two theories
that built directly on Heider’s work.
Correspondent
Inference Theory
Assigning
causes for behavior also means assigning responsibility. Of course, it is
possible to believe that someone caused something to happen yet not consider
the individual responsible for that action. A 5-year-old who is left in an
automobile with the engine running, gets behind the wheel, and steers the car
through the frozen food section of Joe’s convenience store caused the event but
certainly is not responsible for it, psychologically or legally.
Nevertheless,
social perceivers have a strong tendency to assign responsibility to the
individual who has done the deed—the actor. Let’s say your brakes
fail, you are unable to
stop at a red light, and you plow into the side of another car. Are you
responsible for those impersonal brakes failing to stop your car? Well, it
depends, doesn’t it? Under what circumstances would you
be held responsible, and when would you
not?
How do
observers make such inferences? What sources of information do people use when
they decide someone is responsible for an action? In 1965, Edward Jones and
Keith Davis proposed what they called correspondent inference theory to explain
the processes used in making internal attributions about others, particularly
when the observed behavior is ambiguous—that is, when the perceiver is not sure
how to interpret the actor’s behavior. We make a correspondent
inference when we conclude that a person’s
overt behavior is caused by or corresponds to the person’s internal characteristics
or beliefs. We might believe, for example, that a person who is asked by others to
write an essay in favor of a tax increase really believes that taxes should be
raised (Jones & Harris, 1967). There is a tendency not to take into account
the fact that the essay was determined by someone else, not the essayist. What
factors influence us to make correspondent inferences? According to
correspondent inference theory, two major factors lead us to make a
correspondent inference:
1. We perceive that the person freely chose the
behavior.
2. We perceive that the person intended to do
what he or she did.
Early in
the Persian Gulf War of 1991, several U.S.-coalition aircraft were shot down
over Iraq. A few days later, some captured pilots appeared in front of cameras
and denounced the war against Iraq. From the images, we could see that it was
likely the pilots had been beaten. Consequently, it was obvious that they did
not freely choose to say what they did. Under these conditions, we do not make
a correspondent inference. We assume that the behavior tells us little or
nothing about the true feelings of the person. Statements from prisoners or
hostages always are regarded with skepticism for this reason. The perception
that someone has been coerced to do or say something makes an internal
attribution less likely. The second factor contributing to an internal
attribution is intent. If we conclude that a person’s
behavior was intentional rather than accidental, we are likely to make an internal
attribution for that behavior. To say that a person intended to do something
suggests that the individual wanted the behavior in question to occur. To say
that someone did not intend an action, or did not realize what the consequences
would be, is to suggest that the actor is less responsible for the outcome.
Covariation
Theory
Whereas
correspondent inference theory focuses on the process of making internal
attributions, covariation theory, proposed by Harold Kelley (1967, 1971), looks
at external attributions—how we make sense of a situation, the factors beyond
the person that may be causing the behavior in question (Jones, 1990). The
attribution possibilities that covariation theory lays out are similar to those
that correspondent inference theory proposes. What is referred to as an
internal attribution in correspondent inference theory is referred to as a
person attribution in covariation theory. What is called an external
attribution in correspondent inference theory is called a situational
attribution in covariation theory.
Like
Heider, Kelley (1967, 1971) viewed the attribution process as an attempt to
apply some rudimentary scientific principles to causal analysis. In
correspondent inference theory, in contrast, the perceiver is seen as a moral
or legal judge of the actor. Perceivers look at intent and choice, the same
factors that judges and jurors look at when assigning responsibility. Kelley’s
perceiver is more a scientist: just the facts, ma’am.
According
to Kelley, the basic rule applied to causal analysis is the covariation
principle, which states that if a response is present when a situation (person,
object, event) is present and absent when that same situation is absent, then
that situation is the cause of the response (Kelley, 1971). In other words,
people decide that the most likely cause of any behavior is the factor that
covaries—occurs at the same time—most often with the appearance of that
behavior.
As an
example, let's say your friend Keisha saw the hit movie Crash and raved
about it. You are
trying to decide whether you would like it too and whether you should go see
it. The questions you have to answer are, What is the cause of Keisha’s
reaction? Why did she
like this movie? Is it something about the movie? Or is it something about
Keisha? In order to make an attribution in this case, you need information, and
there are three sources or kinds of relevant information available to us:
1. Consensus information
2. Distinctiveness information
3. Consistency information
Consensus information tells us about how other people reacted to the same
event or
situation. You might ask, How did my other friends like Crash? How are the
reviews? How did other people in general react to this stimulus or situation?
If you find high consensus—everybody liked it—well, then, it is probably a good
movie. In causal attribution terms, it is the movie that caused Keisha’s
behavior. High consensus leads to a situational
attribution.
Now, what
if Keisha liked the movie but nobody else did? Then it must be Keisha and not
the movie: Keisha always has strange tastes in movies. Low consensus leads to a
person attribution (nobody but Keisha liked it, so it must be Keisha). The
second source or kind of data we use to make attributions is distinctiveness
information. Whereas consensus information deals with what other people think,
distinctiveness information concerns the situation in which the behavior
occurred: We ask if there is something unique or distinctive about the
situation that could have caused the behavior. If the behavior occurs when
there is nothing distinctive or unusual about the situation (low
distinctiveness), then we make a person attribution: If Keisha likes all
movies, then we have low distinctiveness: There’s nothing special
about Crash—it must be
Keisha. If there is something distinctive about the situation, then we make a
situational attribution. If this is the only movie Keisha has ever liked, we
have high distinctiveness and there must be something special about the movie.
Low distinctiveness leads us to a person attribution; high distinctiveness
leads us to a situational attribution. If the situation is unique—very high
distinctiveness—then the behavior probably was caused by the situation and not
by something about the person. The combination of high consensus and high distinctiveness
always leads to a situational attribution. The combination of low consensus and
low distinctiveness always leads to a person attribution.
The third
source or kind of input is consistency information, which confirms whether the
action occurs over time and situations (Chen, Yates, & McGinnies, 1988). We
ask, Is this a one-time behavior (low consistency), or is it repeated over time
(high consistency)? In other words, is this behavior stable or unstable?
Consistency is a factor that correspondent inference theory fails to take into
account.
What do we
learn from knowing how people act over time? If, for example, the next time we
see Keisha, she again raves about Crash, we would have evidence of consistency
over time (Jones, 1990). We would have less confidence in her original
evaluation of the movie if she told us she now thought the movie wasn’t
very good (low consistency).
We might think that perhaps Keisha was just in a good mood that night and that
her mood affected her evaluation of the movie. Consistency has to do with
whether the behavior is a reliable indicator of its cause.
The three
sources of information used in making attributions are shown in Figures 3.3 and
3.4. Figure 3.3 shows the combination of information—high consensus, high consistency,
and high distinctiveness—that leads us to make a situational attribution. Go
see the movie: Everybody likes it (high consensus); Keisha, who likes few, if
any, movies, likes it as well (high distinctiveness of this movie); and Keisha
has always liked it (high consistency of behavior).
Figure 3.4
shows the combination of information—low consensus, high consistency, and low
distinctiveness—that leads us to a person attribution. None of our friends
likes the movie (low consensus); Keisha likes the movie, but she likes all
movies, even The Thing That Ate Newark (low distinctiveness); and Keisha has
always liked this movie (high consistency). Maybe we ought to watch TV tonight.
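The two canonical patterns just described can be treated as a simple lookup from information to attribution. The sketch below is a minimal illustration (the function name and "high"/"low" encoding are mine, not the theory's notation); it covers only the two clear-cut combinations the chapter singles out.

```python
# Minimal sketch of Kelley's covariation rule as a lookup table.
def covariation_attribution(consensus, distinctiveness, consistency):
    """Map high/low covariation information onto an attribution."""
    pattern = (consensus, distinctiveness, consistency)
    if pattern == ("high", "high", "high"):
        return "situational"  # everyone likes it, only this movie, every time
    if pattern == ("low", "low", "high"):
        return "person"       # only Keisha likes it, and she likes everything
    return "ambiguous"        # mixed patterns call for more information

# Figure 3.3's pattern: go see the movie.
print(covariation_attribution("high", "high", "high"))  # situational
# Figure 3.4's pattern: maybe watch TV instead.
print(covariation_attribution("low", "low", "high"))    # person
```

Mixed patterns fall through to "ambiguous" here, which mirrors the point that real perceivers need additional information before such combinations yield a confident attribution.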
Not
surprisingly, research on covariation theory shows that people prefer to make
personal rather than situational attributions (McArthur, 1972). This conforms
with the correspondence bias we found in correspondent inference theory and
highlights again the tendency toward overemphasizing the person in causal
analysis. It also fits with our tendency to be cognitive misers and take the
easy route to making causal attributions.
Figure 3.3
Dual-Process
Models
We have
emphasized that people are cognitive misers, using the least effortful strategy
available. But they are not cognitive fools. We know that although impression
formation is mainly automatic, sometimes it is not. People tend to make
attributions in an automatic way, but there are times when they need to make
careful and reasoned attributions (Chaiken & Trope, 1999).
Trope (1986)
proposed a theory of attribution that specifically considers when people make
effortful and reasoned analyses of the causes of behavior. Trope assumed, as
have other theorists, that the first step in our attributional appraisal is an
automatic categorization of the observed behavior, followed by more careful and
deliberate inferences about the person (Trope, Cohen, & Alfieri, 1991).
The first
step, in which the behavior is identified, often happens quickly,
automatically, and with little thought. The attribution made at this first
step, however, may be adjusted in the second step. During this second step, you
may check the situation to see if the target was controlled by something
external to him. If “something made him do it,” then you might hold him less
(internally) responsible for the behavior. In such instances, an inferential
adjustment is made (Trope et al., 1991).
What
information does the perceiver use to make these attributions? Trope plausibly
argued that perceivers look at the behavior, the situation in which the
behavior occurs, and prior information about the actor. Our knowledge about
situations helps us understand behavior even when we know nothing about the
person. When someone cries at a wedding, we make a different inference about the
cause of that behavior than we would if the person cried at a wake. Our prior
knowledge about the person may lead
us to
adjust our initial impression of the person’s behavior.
Figure 3.4
A somewhat
different model was developed by Gilbert (1989, 1991) and his colleagues.
Influenced by Trope’s two-step model, they proposed a model with three
distinct stages. The first stage is the familiar automatic categorization of
the behavior (that action was aggressive); the second is characterization of
the behavior (George is an aggressive guy); and the third, correction, consists
of adjusting that attribution based on situational factors (George was provoked
needlessly). Gilbert essentially divided Trope’s first step, the identification process, into two
parts: categorization and characterization. The third step is the same as Trope’s
inferential-adjustment second step.
For
example, if you say “Good to see you” to your boss, the statement may be
categorized as friendly, and the speaker may be characterized as someone who
likes the other person; finally, this last inference may be corrected because
the statement is directed at someone with power over the speaker (Gilbert,
McNulty, Giuliano, & Benson, 1992). The correction is based on the
inference that you had better be friendly to your boss. Gilbert suggests that
categorization is an automatic process; characterization is not quite automatic
but is relatively effortless, requiring little attention; but correction is a
more cognitively demanding (controlled and effortful) process (Gilbert &
Krull, 1988). Of course, we need to have the cognitive resources available to
make these corrections. If we become overloaded or distracted, then we are not
able to make these effortful corrections, and our default response is to make
internal and dispositional attributions and to disregard situational
information (Gilbert & Hixon, 1991; Trope & Alfieri, 1997).
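Gilbert's three-stage sequence, and the way cognitive load cuts it short, can be sketched as follows. This is my own rough framing, not Gilbert's formalism; the function and parameter names are illustrative assumptions.

```python
# Rough sketch (illustrative only) of Gilbert's categorize -> characterize ->
# correct sequence, where the effortful third stage is skipped under load.
def attribute(behavior_category, situational_pressure, cognitively_busy=False):
    # Stage 1 (automatic): categorize the behavior, e.g., "friendly"
    category = behavior_category
    # Stage 2 (nearly effortless): characterize the actor from the behavior
    impression = f"actor is {category}"
    # Stage 3 (effortful): correct for the situation, if resources allow
    if situational_pressure and not cognitively_busy:
        impression += ", but the situation may explain it"
    return impression

# A distracted perceiver stops at characterization:
print(attribute("friendly", situational_pressure=True, cognitively_busy=True))
# An undistracted perceiver corrects for the boss's power over the speaker:
print(attribute("friendly", situational_pressure=True, cognitively_busy=False))
```

The design choice worth noticing is that the dispositional impression is the default: correction only ever adds to it, which captures why overloaded perceivers fall back on internal attributions.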
Intentionality
and Attributions
Malle
(2006) has filled some gaps in our understanding of how individuals make
attributions by considering the relationship between intentionality (did the
individual intend to do what she actually did?) and judgments about the causes
of a behavior. Judging intent has many implications for our sense of what
defines blame and morality. The offender who cries, “I didn’t
know the gun was loaded,” however falsely, is making a claim on our understanding of
intentionality and blame. If I thought the gun was not loaded, I could not have
meant to kill the victim, and hence, I am blameless, or should be held
blameless legally, if not morally.
Malle
asked, What constitutes ordinary folks' notions of an "intentional" action?
The responses to Malle's question revealed four factors: desire, belief,
intention, and awareness. Desire refers to a hope for a particular outcome;
belief was defined as thoughts about what would happen
before the act actually took place; intention meant that the action was meant
to occur; and awareness was defined as “awareness of the act while the person
was performing it” (Malle, 2006, p. 6). Further research, however, showed that
there was a fifth component of ordinary notions of intentionality. We judge
whether the person actually has the skill or ability to do what was desired.
Thus, if I am a lousy tennis player, which I am, and I serve several aces in a
row, it is clear that while I desired to do so, observers, knowing my skill
level, will be unlikely to conclude that I intended to serve so well. Note
here: There is a difference between attributions of intention and attributions
of intentionality. An intention to do something is defined by wanting to do
something (desire) and beliefs about which actions will provide me with the
outcome that I want. But intentionality requires the first two components plus
the skill or ability to be able to do what is desired as well as the intention
to do it.
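The distinction between intention and intentionality can be encoded as two predicates. The components come from Malle (2006); the function names and boolean framing are my own illustrative assumptions.

```python
# Hypothetical encoding of Malle's (2006) components of intentionality.
def has_intention(desire, belief):
    """An intention needs wanting the outcome plus beliefs about the act."""
    return desire and belief

def is_intentional(desire, belief, intention, awareness, skill):
    """Full intentionality adds the intention itself, awareness, and skill."""
    return has_intention(desire, belief) and intention and awareness and skill

# The lousy tennis player serving aces: everything but skill is present,
# so observers do not judge the aces intentional.
print(is_intentional(desire=True, belief=True, intention=True,
                     awareness=True, skill=False))  # False
```

Flip `skill` to `True` and the same act counts as intentional, which is the whole point of Malle's fifth component.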
Malle
offers us the following situation: A nephew plans to kill his uncle by running
him over with his car. While driving around, the nephew accidentally hits and
kills a man who turns out, unbeknownst to the nephew, to be his uncle. So what
we have here is the comparison between actions performed as intended (he
planned to kill the uncle) and actions that were unintended (he accidentally
ran someone over who happened to be his uncle). Malle asked people to judge
whether the killing was intentional murder or unintentional manslaughter.
There is
no right answer here, but when people returned a murder verdict, it was because
they concluded that the intent to murder had been there and the actual event,
the accident, was less crucial than the attribution of the original murderous
intent. Others who voted for “unintentional” manslaughter concluded that the
action (running uncle over) was separate from the intent to murder (Malle,
2006).
While the
circumstances of the case Malle has used are rather unusual, the results show
that observers may make attributions based upon different interpretations of
intent.
Attribution
Biases
We know
that individuals are not always accurate in determining what other people are
really like. Although these attribution models assume people generally can make
full use of social information, much of the time we take shortcuts, and we make
a number of predictable errors. These errors or biases are examples of the
cognitive miser as social perceiver. We deviate from the rules that a “pure
scientist” would apply as outlined in the correspondent inference and
especially the covariation models. Note, however, that some theorists argue
that these biases are a consequence of the fact that people use a somewhat
different attribution model than earlier theorists had assumed. In other words,
there are no biases in the sense that people do something wrong in the way they
make attributions; people just use the models in a different way than the
earlier theorists thought they did.
Misattributions
A famous
example of how our attributions may be misdirected is illustrated by a now
classic experiment by Schachter and Singer (1962). Schachter and Singer
demonstrated that two conditions are required for the production of an
emotional response: physiological arousal and cognitions that label the
arousal and therefore identify the emotion for the person experiencing it.
Schachter and Singer injected participants with epinephrine, a hormone that produces
all the symptoms of physiological arousal—rapid breathing, increased heart
rate, palpitations, and so on. Half these people were accurately informed that
the injection would create a state of arousal, and others were told the
injection was only a vitamin and would not have any effect. In addition,
subjects in a control group were not given any drug.
Participants
were then placed in a room to await another part of the experiment. Some
subjects were in a room with a confederate of the experimenters, who acted in a
happy, excited, even euphoric manner, laughing, rolling up paper into balls,
and shooting the balls into the wastebasket. Others encountered a confederate
who was angry and threw things around the room. All subjects thought that the
confederate was just another subject.
Schachter
and Singer (1962) argued that the physiological arousal caused by the injection
was open to different interpretations. The subjects who had been misinformed
about the true effects of the injection had no reasonable explanation for the
increase in their arousal. The most obvious stimulus was the behavior of the
confederate. Results showed that aroused subjects who were in a room with an
angry person behaved in an angry way; those in a room with a happy confederate
behaved in a euphoric way. What about the subjects in the group who got the
injection and were told what it was? These informed subjects had a full
explanation for their arousal, so they simply thought that the confederate was
strange and waited quietly.
The research
shows that our emotional state can be manipulated. When we do not have readily
available explanations for a state of arousal, we search the environment to
find a probable cause. If the cues we find point us toward anger or aggression,
then perhaps that is how we will behave. If the cues suggest joy or happiness,
then our behavior may conform to those signals. It is true, of course, that
this experiment involved a temporary and not very involving situation for the
subjects. It is probable that people are less likely to make misattributions
about their emotions when they are more motivated to understand the causes of
their feelings and when they have a more familiar context for them.
The
Fundamental Attribution Error
One
pervasive bias found in the attributional process is the tendency to attribute
causes to people more readily than to situations. This bias is referred to as
the fundamental attribution error.
If you
have ever watched the television game show Jeopardy, you probably have seen the
following scenario played out in various guises: A nervous contestant selects
“Russian history” for $500. The answer is, “He was known as the ‘Mad
Monk.’” A contestant
rings in and says, “Who was Molotov?” Alex Trebek, the host, replies, “Ah,
noooo, the correct question is ‘Who was Rasputin?’” As the show continues,
certain things become evident. The contestants, despite knowing a lot of
trivial and not so trivial information, do not appear to be as intelligent or
well informed as Trebek.
Sometimes
we make attributions about people without paying enough attention to the roles
they are playing. Of course, Trebek looks smart—and in fact, he may be smart,
but he also has all the answers in front of him. Unfortunately, this last fact
is sometimes lost on us. This so-called quiz show phenomenon was vividly shown
in an experiment in which researchers simulated a TV game show for college
students (Ross, Amabile, & Steinmetz, 1977). A few subjects were picked to
be the questioners, not because they had any special skill or information but
by pure chance, and had to devise a few fairly difficult but common-knowledge
questions. A control group of questioners asked questions formulated by others.
Members of both groups played out a simulation quiz game. After the quiz
session, all subjects rated their own knowledge levels, as well as the
knowledge levels of their partners.
Now, all
of us can think of some questions that might be hard for others to answer. Who
was the Dodgers’ third baseman in the 1947 World Series?
Where is Boca Grande? When
did Emma Bovary live? Clearly, the questioners had a distinct advantage: They
could rummage around in their storehouse of knowledge, trivial and profound,
and find some nuggets that others would not know.
When asked
to rate the knowledge levels of the questioners as opposed to the contestants,
both the questioners and the contestants rated the questioners as more
knowledgeable, especially in the experimental group in which the questioners
devised their own questions. Only a single contestant rated herself superior in
knowledge to the questioner.
The
fundamental attribution error can be seen clearly in this experiment: People
attribute behavior to internal factors, even when they have information
indicating situational factors are at work. Because the questioners appeared to
know more than the contestants, subjects thought the questioners were smarter.
The great majority of participants failed to account for the situation.
The quiz
show phenomenon occurs in many social situations. The relationship between
doctor and patient or teacher and student can be understood via this effect.
When we deal with people in positions of high status or authority who appear to
have all the answers, we attribute their behavior to positive internal
characteristics such as knowledge and intelligence. Such an attribution
enhances their power over us.
Why We
Make the Fundamental Attribution Error
Why do we
err in favor of internal attributions? Several explanations have been offered
for the fundamental attribution error, but two seem to be most useful: a focus
on personal responsibility and the salience of behavior. Western culture
emphasizes the importance of individual personal responsibility (Gilbert &
Malone, 1995); we expect individuals to take responsibility for their behavior.
We expect to be in control of our fates—our behavior—and we expect others to
have control as well. We tend to look down on those who make excuses for their
behavior. It is not surprising, therefore, that we perceive internal rather
than external causes to be primary in explaining behavior (Forgas, Furnham,
& Frey, 1990).
The second
reason for the prevalence of the fundamental attribution error is the salience
of behavior. In social situations as in all perception situations, our senses
and attention are directed outward. The “actor” becomes the focus of our
attention. His or her behavior is more prominent than the less commanding
background or environment. The actor becomes the “figure” (focus in the
foreground) and the situation, the “ground” (the total background) in a complex
figure-ground relationship. A well-established maxim of perceptual psychology
is that the figure stands out against the ground and thus commands our
attention.
The
perceiver tends to be “engulfed by the behavior,” not the surrounding
circumstances (Heider, 1958). If a person is behaving maliciously, we conclude
that he or she is a nasty person. Factors that might have brought on this
nastiness are not easily available or accessible to us, so it is easy, even
natural, to disregard or slight them. Thus, we readily fall into the
fundamental attribution error.
Correcting
the Fundamental Attribution Error
So, are we
helpless to resist this common misattribution of causality? Not necessarily. As
you probably already know from your own experience, the fundamental attribution
error does not always occur. There are circumstances that increase or decrease
the chances of making this mistake. For example, you are less likely to make
the error if you become more aware of information external to another person
that is relevant to explaining the causes for his or her behavior. However,
even under these circumstances, the error does not disappear; it simply becomes
weaker. Although the error is strong and occurs in many situations, it can be lessened
when you have full information about a person’s reason for doing something and are
motivated to make a careful analysis.
The
Actor-Observer Bias
Actors
prefer external attributions for their own behavior, especially if the outcomes
are bad, whereas observers tend to make internal attributions for the same
behavior. The actor-observer bias is especially strong when we are trying to
explain negative behaviors, whether our own or that of others. This bias alerts
us to the importance of perspective when considering attributional errors,
because differing perspectives affect the varied constructions of reality that
people produce.
A simple
experiment you can do yourself demonstrates the prevalence of the actor-observer
bias (Fiske & Taylor, 1984). Using a list of adjectives such as those shown
in Table 3.1, rate a friend on the adjectives listed and then rate yourself. If
you are like most people, you will have given your friend higher ratings than
you gave yourself.
[Table 3.1 appears here.]
Why these
results? It is likely that you see your friend’s behavior as
relatively consistent across situations, whereas you see your own behavior as
more variable. You probably
were more likely to choose the 0 category for yourself, showing that sometimes you
see yourself as aggressive, thoughtful, or warm and other times not. It depends
on the situation. We see other people’s behavior as more stable and less
dependent on
situational factors.
The
crucial role of perspective in social perception situations can be seen in a
creative experiment in which the perspectives of both observer and actor were
altered (Storms, 1973). Using videotape equipment, the researcher had the actor
view his own behavior from the perspective of an observer. That is, he showed
the actor a videotape of himself as seen by somebody else. He also had the
observer take the actor’s perspective by showing the observer a
videotape of how the world looked
from the point of view of the actor. That is, the observer saw a videotape of
herself as seen by the actor, the person she was watching.
When both
observers and actors took these new perspectives, their attributional analyses
changed. Observers who took the visual perspective of the actors made fewer
person attributions and more situational ones. They began to see the world as
the actors saw it. When the actors took the perspective of the observers, they
began to make fewer situational attributions and more personal ones. Both
observers and actors got to see themselves as others saw them—always an
instructive, if precarious, exercise. In this case, it provided insight into
the process of causal analysis.
The False
Consensus Bias
When we
analyze the behavior of others, we often find ourselves asking, What would I
have done? This is our search for consensus information (What do other people
do?) when we lack such information. In doing this, we often overestimate the
frequency and popularity of our own views of the world (Ross, Greene, &
House, 1977). The false consensus bias is simply the tendency to believe that
everyone else shares our own feelings and behavior (Harvey & Weary, 1981).
We tend to believe that others hold similar political opinions, find the same
movies amusing, and think that baseball is the distinctive American game.
The false
consensus bias may be an attempt to protect our self-esteem by assuming that
our opinions are correct and are shared by most others (Zuckerman, Mann, &
Bernieri, 1982). That is, the attribution that other people share our opinions
serves as an affirmation and a confirmation of the correctness of our views.
However, this overestimation of the trustworthiness of our own ideas can be a
significant hindrance to rational thinking, and if people operate under the
false assumption that their beliefs are widely held, the false consensus bias
can serve as a justification for imposing one’s beliefs on others (Fiske & Taylor, 1991).
Constructing
an Impression of Others
After
attributions are made, we are still left with determining what processes
perceivers use to get a whole picture of other individuals. We know that
automatic processing of social information is widely used. We also know how
people make attributions and what their biases are in making those
attributions. Let’s see how they might put all this social information together in
a coherent picture.
The
Significance of First Impressions
How many
times have you met someone about whom you formed an immediate negative or
positive impression? How did that first impression influence your subsequent interactions
with that person? First impressions can be powerful influences on our
perceptions of others. Researchers have consistently demonstrated a primacy
effect in the impression-formation process, which is the tendency of early
information to play a powerful role in our eventual impression of an individual.
Furthermore,
first impressions can, in turn, bias the interpretation of later information.
This was shown in a study in which individuals watched a person take an
examination (Jones, Rock, Shaver, Goethals, & Ward, 1968). Some of the
observers saw the test-taker do very well at the start and then get worse as
the test continued. Other observers saw the test-taker do poorly at the
beginning and then improve. Although both test-takers wound up with the same
score, the test-taker who did well in the beginning was rated as more
intelligent than the test-taker who did well at the end. In other words, the
initial impression persisted even when later information began to contradict
it.
This
belief perseverance, the tendency for initial impressions to persist despite
later conflicting information, accounts for much of the power of first
impressions. A second reason that initial impressions wear well and long is
that people often reinterpret incoming information in light of the initial
impression. We try to organize information about other people into a coherent
picture, and later information that is inconsistent with the first impression
is often reinterpreted to fit the initial belief about that person. If your
first impression of a person is that he is friendly, you may dismiss a later
encounter in which he is curt and abrupt as an aberration—“He’s
just having a bad day.”
We can see, then, how the primacy effect shapes the way early social information
is woven into our overall impression of a person.
Schemas
The aim of
social perception is to gain enough information to make relatively accurate
judgments about people and social situations. Next, we need ways of organizing
the information we do have. Perceivers have strategies that help them know what
to expect from others and how to respond. For example, when a father hears his
infant daughter crying, he does not have to make elaborate inferences about
what is wrong. He has in place an organized set of cognitions—related bits of
information—about why babies cry and what to do about it. Psychologists call
these sets of organized cognitions schemas. A schema concerning crying babies
might include cognitions about dirty diapers, empty stomachs, pain, or anger.
Origins of
Schemas
Where do
schemas come from? They develop from information about or experience with some
social category or event. You can gain knowledge about sororities, for example,
by hearing other people talk about them or by joining one. The more experience
you have with sororities, the richer and more involved your schema will be.
When we are initially organizing a schema, we place the most obvious features
of an event or a category in memory first. If it is a schema about a person or
a group of people, we begin with physical characteristics that we can see:
gender, age, physical attractiveness, race or ethnicity, and so on.
We have
different types of schemas for various social situations (Gilovich, 1991). We
have self-schemas, which help us organize our knowledge about our own traits
and personal qualities. Person schemas help us organize people’s
characteristics and store them
in our memory. People often have a theory—known as an implicit personality
theory—about what kinds of personality traits go together. Intellectual
characteristics, for example, are often linked to coldness, and strong and
adventurous traits are often thought to go together (Higgins & Stangor,
1988). An implicit personality theory may help us make a quick impression of
someone, but, of course, there is no guarantee that our initial impression will
be correct.
The
Relationship between Schemas and Behavior
Schemas
sometimes lead us to act in ways that serve to confirm them. In one study, for
example, researchers convinced subjects that they were going to interact with
someone who was hostile (Snyder & Swann, 1978). When the subjects did
interact with that “hostile” person (who really had no hostile intentions),
they behaved so aggressively that the other person was provoked to respond in a
hostile way. Thus, the expectations of the subjects were confirmed, an outcome
referred to as a self-fulfilling prophecy (Jussim, 1986; Rosenthal &
Jacobson, 1968). The notion of self-fulfilling prophecies suggests that we
often create our own realities through our expectations. If we are interacting
with members of a group we believe to be hostile and dangerous, for example,
our actions may provoke the very behavior we are trying to avoid.
This does
not mean that we inhabit a make-believe world in which there is no reality to
what we think and believe. It does mean, however, that our expectations can
alter the nature of social reality. Consider the effect of a teacher’s
expectations on students. How
important are these expectations in affecting how students perform? In one
study, involving nearly 100 sixth-grade math teachers and 1,800 students,
researchers found that about 20% of the results on the math tests were due to
the teachers’ expectations (Jussim
& Eccles, 1992). Twenty percent is not inconsiderable: It can certainly
make the difference between an A and a B or a passing and a failing grade. The
researchers also found that teachers showed definite gender biases. They rated
boys as having better math skills and girls as trying harder. Neither belief
proved accurate in this study, but the second one explains why girls received
better grades in math: The teachers incorrectly thought that girls tried
harder, and therefore rewarded them with higher grades for that presumed
greater effort.
The other
side of the self-fulfilling prophecy is behavioral confirmation (Snyder, 1992).
This phenomenon occurs when perceivers behave as if their expectations are
correct, and the targets then respond in ways that confirm the perceivers’ beliefs.
Although behavioral
confirmation is similar to the self-fulfilling prophecy, there is a subtle
distinction. When we talk about a self-fulfilling prophecy, we are focusing on
the behavior of the perceiver in eliciting expected behavior from the target.
When we talk about behavioral confirmation, we are looking at the role of the target’s
behavior in confirming
the perceiver’s beliefs. In behavioral confirmation, the social perceiver
uses the target’s
behavior (which is partly shaped by the perceiver’s expectations) as evidence that the expectations are correct.
The notion of behavioral confirmation emphasizes that both perceivers and
targets have goals in social interactions. Whether a target confirms a
perceiver’s expectations depends on what they both want from the
interaction.
As an
example, imagine that you start talking to a stranger at a party. Unbeknownst
to you, she has already sized you up and decided you are likely to be
uninteresting. She keeps looking around the room as she talks to you, asks you
few questions about yourself, and doesn’t seem to hear some of the things you
say. Soon you start to withdraw from
the interaction, growing more and more aloof. As the conversation dies, she
slips away, thinking, “What a bore!”
You turn
and find another stranger smiling at you. She has decided you look very
interesting. You strike up a conversation and find you have a lot in common.
She is interested in what you say, looks at you when you’re
speaking, and laughs at your humorous comments. Soon you are talking in a
relaxed, poised way, feeling and acting both confident and interesting. In each case, your behavior
tends to confirm the perceiver’s expectancies.
Because someone shows interest in you, you become interesting. When someone
thinks you are unattractive or uninteresting, you respond in kind, confirming
the perceiver’s expectations (Snyder, Tanke, & Berscheid, 1977).
As can be
seen, whether the perceiver gets to confirm her preconceptions depends on what
the target makes of the situation. To predict the likelihood of behavioral
confirmation, we have to look at social interaction from the target’s
point of view. If the goal
of the interaction from the target’s viewpoint is simply to socialize with the other
person, behavioral confirmation is likely. If the goal is more important, then
behavioral disconfirmation is likely (Snyder, 1993). Note that the decision to
confirm or disconfirm someone’s expectations is by no means always a conscious one.
Assimilating
New Information into a Schema
Schemas
have some disadvantages, because people tend to accept information that fits
their schemas and reject information that doesn’t fit. This
reduces uncertainty and ambiguity,
but it also increases errors. Early in the formation of a schema of persons,
groups, or events, we are more likely to pay attention to information that is
inconsistent with our initial conceptions because we do not have much
information (Bargh & Thein, 1985). Anything that doesn’t
fit the schema surprises us and makes us take notice. However, once the schema is well formed, we
tend to remember information that is consistent with that schema. Remembering
schema-consistent evidence is another example of the cognitive miser at work.
Humans prefer the least effortful method of processing and assimilating
information; it helps make a complex world simpler (Fiske, 1993).
If new
information continually and strongly suggests that a schema is wrong, the
perceiver will change it. Much of the time we are uncomfortable with
schema-inconsistent information. Often we reinterpret the information to fit
with our schema, but sometimes we change the schema because we see that it is
wrong.
The
Confirmation Bias
When we
try to determine the cause or causes of an event, we usually have some
hypothesis in mind. Say your college football team has not lived up to
expectations, or you are asked to explain why American students lag behind
others in standardized tests. When faced with these problems, we may begin by
putting forth a tentative explanation. We may hypothesize that our football
team has done poorly because the coach is incompetent. Or we may hypothesize
that the cause of American students’ poor
performance is that they watch too much TV. How do we go about testing these
hypotheses in everyday life?
When we
make attributions about the causes of events, we routinely overestimate the
strength of our hypothesis (Sanbonmatsu, Akimoto, & Biggs, 1993). We do
this by the way we search for information concerning our hypothesis, typically
tending to engage in a search strategy that confirms rather than disconfirms
our hypothesis. This is known as the confirmation bias.
One
researcher asked subjects to try to discover the rule used to present a series
of three numbers, such as 2, 4, 6. The question was, What rule is the
experimenter using? What is your hypothesis? Let’s say the
hypothesis is consecutive even numbers. Subjects could test their hypothesis
about the rule by presenting a set of three numbers to see if it fit the rule.
The experimenter would tell them if their set fit the rule, and then they would
tell the experimenter what they hypothesized the rule was.
How would you test your hypothesis? Most individuals would
present a set such as 8, 10, 12. Notice the set is aimed at confirming the
hypothesis, not disconfirming it. The experimenter would say, Yes, 8, 10, 12
fits the rule. What is the rule? You would say, Any three ascending even
numbers. The experimenter would say, That is not the rule. What happened? You
were certain you were right.
The rule could have been any three ascending numbers. If you
had tried to disconfirm your hypothesis, you would have gained much more
diagnostic information than simply trying to confirm it. If you had said 1, 3,
4 and were told it fit the rule, you could throw out your hypothesis about even
numbers. We tend to generate narrow hypotheses that do not take into account a
variety of alternative explanations. In everyday life we tend to make
attributions for causes that have importance to us. If you hate the football
coach, you are more likely to find evidence for his incompetence than to note
that injuries to various players affected the team’s performance. Similarly, we
may attribute American students’ poor performance to their
TV-watching habits, rather than search for evidence that parents do not
motivate their children or that academic performance is not valued among
students’ peers. Of course, we should note that there may be times when
confirmation of your hypothesis is the perfectly rational thing to do. But, to
do nothing but test confirmatory hypotheses leaves out evidence that you might
very well need to determine the correct answer.
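The logic of the 2, 4, 6 task can be sketched in a few lines of Python. This is only an illustration: the rule function encodes the experimenter’s actual rule (any three ascending numbers), and the test triples are the ones discussed above.

```python
# Illustrative sketch of the 2, 4, 6 rule-discovery task described above.
# The experimenter's real rule: any three ascending numbers.
def fits_rule(triple):
    a, b, c = triple
    return a < b < c

# A confirmatory strategy tests only triples that match the guesser's
# hypothesis ("consecutive even numbers"), so every answer comes back "yes".
confirmatory_tests = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]
print([fits_rule(t) for t in confirmatory_tests])  # [True, True, True]: nothing learned

# A disconfirmatory test deliberately violates the hypothesis. Because this
# triple also fits, the "even numbers" hypothesis can be thrown out.
print(fits_rule((1, 3, 4)))  # True
```

Notice that the confirmatory tests can never distinguish “consecutive even numbers” from the broader true rule; only the triple that breaks the hypothesis is diagnostic.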
Shortcuts to Reality: Heuristics
As cognitive misers, we have a grab bag of tools that help us
organize our perceptions effortlessly. These shortcuts—handy rules of thumb
that are part of our cognitive arsenal—are called heuristics. Like illusions,
heuristics help us make sense of the social world, but also like illusions,
they can lead us astray.
The Availability Heuristic
If you are asked how many of your friends know people who are
serving in the armed forces in Iraq, you quickly will think of those who do.
The availability heuristic is defined as a shortcut used to estimate the
frequency or likelihood of an event based on how quickly examples of it come to
mind (Tversky & Kahneman, 1973). If service in Iraq is uncommon in your
community, you will underestimate the overall number of soldiers; if you live
in a community with many such individuals, you will overestimate the incidence
of military service. The availability heuristic tends to bias our
interpretations, because the ease with which we can imagine an event affects
our estimate of how frequently that event occurs. Television and newspapers,
for example, tend to cover only the most visible, violent events. People
therefore tend to overestimate incidents of violence and crime as well as the
number of deaths from accidents and murder, because these events are most
memorable (Kahneman, Slovic, & Tversky, 1982). As with all cognitive
shortcuts, a biased judgment occurs, because the sample of people and events
that we remember is unlikely to be fair and full. The crew and captain of the
Vincennes undoubtedly had the recent example of the Stark in mind when they had
to make a quick decision about the Iranian airbus.
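The way a biased sample of memories inflates frequency estimates can be illustrated with a small simulation. The event probabilities and the “newsworthiness” weights below are invented purely for illustration; only the qualitative pattern matters.

```python
import random
random.seed(42)

# Hypothetical causes of death with invented base rates and invented
# media-coverage weights (how vividly each event is reported and remembered).
causes = {"car accident": 0.45, "heart disease": 0.50, "homicide": 0.05}
newsworthiness = {"car accident": 2, "heart disease": 1, "homicide": 20}

# Draw the events that actually occur in the population...
population = random.choices(list(causes), weights=causes.values(), k=10_000)
# ...but sample what we *remember* in proportion to how vividly it was covered.
remembered = random.choices(
    population, weights=[newsworthiness[e] for e in population], k=1_000)

true_rate = population.count("homicide") / len(population)
recalled_rate = remembered.count("homicide") / len(remembered)
print(f"true: {true_rate:.2f}  recalled: {recalled_rate:.2f}")
```

Because memorable events are oversampled, the recalled homicide rate comes out far above its true rate, which is exactly the pattern the availability heuristic predicts.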
The
Representativeness Heuristic
Sometimes
we make judgments about the probability of an event or a person falling into a
category based on how representative it or the person is of the category
(Kahneman & Tversky, 1982). When we make such judgments, we are using the
representativeness heuristic. This heuristic gives us something very much like
a prototype (an image of the most typical member of a category).
To
understand how this heuristic works, consider Steve, a person described to you
as ambitious, argumentative, and very smart. Now, if you are told that Steve is
either a lawyer or a dairy farmer, what would you guess his occupation to be?
Chances are, you would guess that he is a lawyer. Steve seems more
representative of the lawyer category than of the dairy farmer category. Are
there no ambitious and argumentative dairy farmers? Indeed there are, but a
heuristic is a shortcut to a decision—a best guess.
Let’s
look at Steve again. Imagine now that Steve, still ambitious and argumentative,
is 1 of 100 men; 70 of
these men are dairy farmers, and 30 are lawyers. What would you guess his
occupation to be under these conditions? The study that set up these problems
and posed these questions found that most people still guess that Steve is a
lawyer (Kahneman & Tversky, 1982). Despite the odds, they are misled by the
powerful representativeness heuristic.
The
subjects who made this mistake failed to use base-rate data, information about
the population as opposed to information about just the individual. They knew
that 70 of the 100 men in the group were farmers; therefore, there was a 7 out
of 10 chance that Steve was a farmer, no matter what his personal
characteristics. This tendency to underuse base-rate data and to rely on the
special characteristics of the person or situation is known as the base-rate
fallacy.
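The base-rate logic in the Steve problem can be made explicit with Bayes’ rule. The 70/30 base rates come from the problem itself; the two likelihoods (how well the description fits each group) are assumptions invented here for illustration.

```python
# Base rates from the Steve problem: 30 lawyers, 70 dairy farmers.
p_lawyer, p_farmer = 0.30, 0.70
# Assumed likelihoods of an "ambitious, argumentative" description
# (illustrative values only, not from the original study).
p_desc_given_lawyer = 0.50
p_desc_given_farmer = 0.10

# Bayes' rule: P(lawyer | description).
p_desc = p_desc_given_lawyer * p_lawyer + p_desc_given_farmer * p_farmer
p_lawyer_given_desc = p_desc_given_lawyer * p_lawyer / p_desc
print(round(p_lawyer_given_desc, 2))  # 0.68
```

Even with a description five times more typical of lawyers, the posterior is only about 0.68, not the near-certainty most subjects report; ignoring the 70% farmer base rate is what makes “lawyer” feel like a sure bet.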
Counterfactual
Thinking
The
tendency to run scenarios in our head—to create positive alternatives to what
actually happened—is most likely to occur when we easily can imagine a
different and more positive outcome. For example, let’s
say you leave your house a bit later than you had planned on your way to the airport
and miss your plane. Does it make a difference whether
you miss
it by 5 minutes or by 30 minutes? Yes, the 5-minute miss causes you more
distress, because you can easily imagine how you could have made up those 5
minutes and could now be on your way to Acapulco. Any event that has a negative
outcome but allows for a different and easily imagined outcome is vulnerable to
counterfactual thinking, an imagined scenario that runs opposite to what really
happened.
As another
example, imagine that you took a new route home from school one day because you
were tired of the same old drive. As you drive this unfamiliar route, you are
involved in an accident. It is likely that you will think, “If only I had stuck
to my usual route, none of this would have happened!” You play out a positive
alternative scenario (no accident) that contrasts with what occurred. The
inclination of people to do these counterfactual mental simulations is
widespread, particularly when dramatic events occur (Wells & Gavanski,
1989).
Generally,
we are most likely to use counterfactual thinking if we perceive events to be
changeable (Miller, Turnbull, & McFarland, 1989; Roese & Olson, 1997).
As a rule, we perceive dramatic or exceptional events (taking a new route home)
as more mutable than unexceptional ones (taking your normal route). Various
studies have found that it is the mutability of the event—the event that didn’t
have to be—that affects the perception of causality (Gavanski & Wells,
1989; Kahneman & Tversky, 1982). People’s reactions to their own
misfortunes and those of others may be determined, in great part, by the
counterfactual alternatives evoked by those misfortunes (Roese & Olson,
1997).
Positive Psychology: Optimism, Cognition,
Health, and Life
Social psychology, after years of studying interesting but
rather negative behaviors such as violence and aggression, prejudice, and evil
(Zimbardo, 2005), has turned its eyes, like Mrs. Robinson, to a more uplifting
image, and that image is called positive psychology. Prodded by the arguments
of Martin Seligman (Simonton & Baumeister, 2005), psychologists over the
past decade have begun to study what makes people happy and how optimism and
happiness affect how people think and act. The findings suggest that one
manifestation of happiness—an optimistic outlook on life—has rather profound
effects on our health, longevity, and cognition.
Optimism and Cognition
We seem to maintain an optimistic and confident view of our
abilities to navigate our social world even though we seem to make a lot of
errors. Perhaps this is because our metacognition—the way we think about
thinking—is primarily optimistic. We know that in a wide variety of tasks,
people believe they are above average, a logical impossibility because, except
in Lake Wobegon, Garrison Keillor’s mythical hometown, not everyone can be
above average. So let’s examine the possibility that the pursuit of happiness,
or at least optimism and confidence, is a fundamental factor in the way we
construct our social world.
Metcalfe (1998) examined the case for cognitive optimism and
determined from her own research and that of others that in most cognitive
activities individuals express a consistent pattern of overconfidence. Metcalfe
found, among other results, that individuals think they can solve problems that
they cannot; that they are very confident they can produce an answer when they
are in fact about to make an error; that they think they know the answer to a
question when in fact they do not; and they think the answer is on the “tip of
their tongue” when there is no right or wrong answer.
It is fair to say that optimists and pessimists do in fact see
the world quite differently. In a very clever experiment, Isaacowitz (2005)
used eye tracking to test the idea that pessimists pay more attention to
negative stimuli than do optimists. College students were asked to track visual
stimuli (skin cancers, matched schematic drawings, and neutral faces). The
experimenter measured the amount of fixation time—the time students spent
tracking the stimuli. Optimists showed “selective inattention” to the skin
cancers. Optimists avert their gaze from negative stimuli, so they may, in
fact, wear "rose-colored glasses," or rather take their glasses off when
negative stimuli are in their field of vision. Such is the gaze of the
optimist, says Isaacowitz (2005).
Optimism and Health
We know that optimism is sometimes extraordinarily helpful in
human affairs. Laughter and a good mood appear to help hospitalized patients
cope with their illnesses (Taylor & Gollwitzer, 1995). An optimistic coping
style also appears to help individuals recover more rapidly and more
effectively from coronary bypass surgery. Research demonstrates that optimistic
bypass patients had fewer problems after surgery than pessimistic patients
(Scheier et al., 1986). Following their surgery, the optimists reported more
positive family, sexual, recreational, and health-related activities than did
pessimistic patients.
Many individuals react to threatening events by developing
positive illusions, beliefs that include unrealistically optimistic notions
about their ability to handle the threat and create a positive outcome (Taylor,
1989). These positive illusions are adaptive in the sense that ill people who
are optimistic will be persistent and creative in their attempts to cope with
the psychological and physical threat of disease. The tendency to display
positive illusions has been shown in individuals who have tested positive for
HIV but have not yet displayed any symptoms (Taylor, Kemeny,
Aspinwall, & Schneider, 1992). These individuals often expressed the belief
that they had developed immunity to the virus and that they could “flush” the
virus from their systems. They acted on this belief by paying close attention
to nutrition and physical fitness.
However, the cognitive optimism discussed by Metcalfe is
different from that of AIDS or cancer patients. In these instances, optimism is
both a coping strategy (I can get better, and to do so, I must follow the
medical advice given to me) and a self-protective or even self-deceptive shield.
Metcalfe argued that the cognitive optimism seen in everyday life, however, is
not self-deceptive but simply a faulty, overoptimistic methodology. The result
of this optimistic bias in cognition is that people often quit working on a
problem too soon because they think they will get the answer, or they convince themselves they
have really learned new material when in fact they have not. Optimism may simply
be the way we do our cognitive daily business.
Positive emotions seem not only to help us fight disease;
some evidence suggests that these positive, optimistic emotions may forestall
the onset of certain diseases. Richman and her colleagues studied the effects
of hope and curiosity on hypertension, diabetes mellitus, and respiratory
infections. They reasoned that if negative emotions negatively affected disease
outcomes, then positive ones may be helpful. As is well known, high levels of
anxiety are related to a much higher risk of hypertension (high blood
pressure). This research studied 5,500 patients, ages 55 to 69. All patients
were given scales that measured “hope” and “curiosity.” Independently of other
factors that affected the health of the patients, there was a strong
relationship between positive emotions and health. The authors hypothesize that
the experience of positive emotions bolsters the immune system. Also, it is
reasonable to assume that people with hope and curiosity and other positive
emotions may very well take steps to protect their health (Richman, Kubzansky,
Kawachi, Choo, & Bauer, 2005). One way of looking at these studies is to
observe that happy people are resilient. They take steps to protect their
health, and they respond in a positive manner to threats and disappointments.
Optimism and Happiness
Diener and Diener (1996) found that about 85% of Americans rate
their lives as above average in satisfaction. More than that, 86% of the
population place themselves in the upper 35% of contentment with their lives
(Klar & Gilardi, 1999; Lykken & Tellegen, 1996). It is clearly quite
crowded in that upper 35%. Although 86% obviously cannot all be in the top 35%,
Klar and Gilardi (1999) suggest that people feel this way because they have
unequal access to other people’s states of happiness compared to their own.
Therefore, when a person says that he or she is really happy, it is difficult
for him or her to know whether others are quite so happy, and so
most (although certainly not all) people may conclude that they are well above
average.
The
pursuit of happiness, enshrined no less in the Declaration of Independence, is
a powerful if occasionally elusive motive and goal. But what factors account
for happiness? Can it be the usual suspects: money, sex, baseball? Edward
Diener’s longtime research concerning happiness suggests that
subjective factors (feeling in control, feeling
positive about oneself) are more important than objective factors such as
wealth (Diener, Suh, Lucas, & Smith, 1999). Yes, wealth counts, but not as
much as one would think. For example, one of Diener’s
studies showed that Americans earning millions of dollars are only slightly happier
than those who are less fortunate. Perhaps part of the reason those with more
are not significantly happier than those with less is that bigger and better
“toys” simply satiate; they gratify no more, and so one needs more and more and
better and better to achieve a positive experience (Lyubomirsky & Ross,
1999). One’s first automobile, as an example, may bring greater
gratification than the one we
buy if and
when money is no object.
Knutson
and his colleagues have examined how money affects our happiness.
Knutson is
a neuroscientist and is therefore interested in how the brain reacts both to
the anticipation of obtaining money and actually having the money (Kuhnen &
Knutson, 2005). The brain scans revealed that anticipation of financial rewards
makes one happier than actually obtaining that reward. You may be just as happy
anticipating future rewards as actually getting those rewards, and it saves you
the trouble of getting them. Money doesn’t buy bliss, but it does buy a chunk of happiness.
How much of a chunk? Economists have reported that money and sex may be
partially fungible commodities (Blanchflower & Oswald, 2004). These
researchers found that if you are having sex only once a month and you get
lucky and increase it to twice a week, it is as good as making an extra $50,000
a year. This does not necessarily mean that you would give up $50,000 to have
four times as much sex. Lyubomirsky and Ross (1999) examined how happy and
unhappy individuals dealt with situations in which they either obtained goals
they wanted or were rejected or precluded from reaching those goals, such as
admission to a particular college. In one study, these researchers examined how
individuals dealt with either being accepted or rejected from colleges. Figure
3.5 shows what happened. Notice that happy participants (self-rated) show a
significant increase in the desirability of their chosen college (the one that
accepted them, and they in turn accepted), whereas unhappy (self-rated)
participants show no difference after being accepted and, in fact, show a
slight decrease in the desirability ratings of their chosen college.
Furthermore, happy seniors sharply decreased the desirability of colleges that
rejected them, whereas their unhappy counterparts did not.
These
results, according to Lyubomirsky and Ross (1999), illustrate the way happy and
unhappy individuals respond to the consequences of choices that they made and
were made for them (being accepted or rejected). Happy seniors seemed to make
the best of the world: If they were accepted to a college, well then, that was
the best place for them. If they were rejected, then maybe it wasn’t
such a good choice after all. Unhappy people
seem to live in a world of unappealing choices, and perhaps it seems to them
that it matters not which alternative they pick or is chosen for them. It also
appears that if unhappy people are distracted or stopped from ruminating—from
focusing on the dark state of their world—they tend to respond like happy
people: Obtained goals are given high ratings; unobtainable options are
downgraded.
It may be
a cliché but even a cliché can be true: Americans are generally optimistic.
Chang and Asakawa (2003) found that European Americans, at least, held an
optimistic bias (they expected that good things were more likely to happen to
them), whereas Japanese respondents had a pessimistic bias, expecting negative events. This
cultural difference seems to project the notion that many Americans expect the
best, while many Japanese expect the worst.
Figure 3.5 The Effects of Distressing and Joyful Events on Future Happiness
Lou
Gehrig, the great Yankee first baseman afflicted with amyotrophic lateral
sclerosis (ALS; also known as Lou Gehrig’s disease), told a full house at Yankee
Stadium in July 1939
that, all in all, he considered himself the luckiest man on the face of the
earth. Gehrig spoke bravely and movingly, but surely he must have thought his
luck had turned bad.
Perhaps
not, according to Gilbert and his associates. Gilbert suggested that there is a
“psychological immune” system, much like its physiological counterpart, that
protects us from the ravages of bacterial and viral invasions. The
psychological immune system fights off doom and gloom, often under the most
adverse circumstances (Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 1998).
In the
classic movie Casablanca (which, no doubt, none of you has seen), Humphrey
Bogart’s character “Rick” gallantly (foolishly, I thought) gives up
Ingrid Bergman so that
she can stay with her Nazi-fighting husband. Rick himself was heading down to
Brazzaville to join the French fighting the Nazis (this was World War II, for
those of you who have taken a history course). Will she regret giving up the
dashing Rick? Was she happier with her husband? Gilbert (2006) suggests that
either choice would have made her happy. Gilbert asks, Is it really possible
that the late actor Christopher Reeve was better off in some
ways after his terrible accident than before, as Reeve claimed?
Gilbert
says, yes, it is possible.
Gilbert
and his colleagues suggested that the psychological immune system works best
when it is unattended, for when we become aware of its functioning, it may
cease to work. Gilbert notes that we may convince ourselves that we never
really cared for our ex-spouse, but that protective cover won’t
last long if someone reminds us of the 47
love sonnets that we forgot we wrote. In an initial series of studies, Gilbert
and colleagues asked their participants to predict their emotional reactions to
both good and bad events. First, the subjects reported on their happiness. All
individuals were asked if they were involved in a romantic relationship and
whether they had experienced a breakup of a relationship. Those in a
relationship who had not experienced a breakup (“luckies”) were asked to
predict how happy they would be 2 months after a breakup. Those who had been in
a romantic relationship but were no longer (“leftovers”) were asked to report
how happy they were. Others not in a relationship (“loners”) were asked to
predict how happy they would be 6 months after becoming involved romantically.
First, we
find that being in a romantic relationship means greater happiness than not
being in one. Loners thought that 6 months after being in a relationship, they
would be as happy as people in a romantic relationship. So loners were accurate
in their predictions, because people in relationships report as much happiness
as loners predicted they would experience if they were in a 6-month
relationship. But, most interestingly, luckies were no happier than were
leftovers. Luckies thought that if their relationship broke up, they would be
very unhappy. But, those who experienced a breakup—the archly named
leftovers—were in fact pretty happy, so the luckies were wrong.
The
college students in the first study made grave predictions about the state of
their happiness after the end of a romantic involvement. Gilbert and colleagues
found that professors denied tenure and voters whose candidate lost an
important election all overestimated the depth of their future unhappiness
because of the negative outcome; in fact, about 3 months later all were
at much the same state of happiness as before the negative event.
Indeed, Gilbert’s research suggests that even more harmful events yield the same
results.
One of the
curious aspects of optimism is that we don’t seem to quite know what will make us happy or how happy something
will make us feel. Wilson, Meyers, and Gilbert (2003) reported that people may
overestimate the importance of future events on their happiness. For example,
these investigators found that supporters of George W. Bush overestimated how
happy they would be when Mr. Bush won the election. Similarly, there is a
“retrospective impact bias,” which refers to overestimating the impact of past
events on present happiness. People overestimate how durable their negative
reactions will be (the “durability bias”) and don’t take into
account that the psychological immune system tends to regulate our emotional state. Rather, they may
explain their ability to bounce back afterward by saying something like, “I can
deal with things better than I thought,” to explain why they mispredicted their
long-range emotional reactions. It appears that most of us can rely on this
immune system to maintain a degree of stability in the face of life’s
ups and downs. Much
research remains to be done, but it may be that there are significant
individual differences in the workings of the psychological immune system, and
that may account for different perceptions of happiness among individuals
(Gilbert, 2006).
The Incompetent, the Inept: Are They Happy?
Kruger and
Dunning (1999) found in a series of studies that incompetent people are at
times supremely confident in their abilities, perhaps even more so than
competent individuals. It seems that the skills you need to behave competently
are the same skills you need to recognize incompetence. If incompetent people
could recognize incompetence, they would be competent. Life is indeed unfair.
For example, students who scored lowest in a test of logic were most likely to
wildly overestimate how well they did. Those scoring in the lowest 12% of the
test-takers estimated that their scores fell in the low 60s in percentile terms. In tests of
grammar and humor, the less competent individuals again overestimated their
performance.
The less
competent test-takers, when given the opportunity to compare their performance
with high-performing individuals, did not recognize competence: That is, the
inept thought that their own performance measured up. The competent subjects,
in contrast, when confronted with better performances, revised estimates of
their own work in light of what they accurately saw as really competent
performances by others.
These
results, although intriguing, may be limited by a couple of factors. It may be
that the nature of the tasks (which involved logic, grammar, and humor) was
rather vague, so it may not have been intuitively clear to everyone what was
being tested. Also, when you ask people to compare themselves to “average”
others, they may have varying notions of what average is. In any event, we see an
example of the false consensus effect here: Other people must be performing
about as well as I am, so the 60% level (a bit better than average; remember
Lake Wobegon) is okay. Alternatively, if you go bowling and throw 20 straight
gutter balls, the evidence is undeniable that you are inept.
Cognitive
Optimism: An Evolutionary Interpretation
Clearly we
humans do not judge the world around us and our own place in that world with a
clear, unbiased eye. We have listed many cognitive biases, and the question arises
as to what purpose these biases serve. Haselton and Nettle (2006) persuasively
argue that these biases serve an evolutionary purpose. For example, males tend
to overestimate the degree of sexual interest they arouse in females. Haselton
and Nettle (2006) observe that this is an “adaptive” bias in that
overestimation of sexual interest will result in fewer missed opportunities.
Consider
the sinister attribution error that we discussed earlier—this is a kind of
paranoid cognition in which certain individuals develop a suspicious, vigilant
perception style. When someone is new to a group, or is of a different racial
or ethnic background than other members of the group, that individual is very
attentive to any signs of discrimination, however subtle or even nonexistent
they may be. These “paranoid” reactions are likely hard-wired in our brain,
derived from ancestral environments when moving into a new group or new village
required exquisite attention to the reactions of other people. One mistake and
you might be asked to leave, or worse (Haselton & Nettle, 2006).
Even the
most extreme positive illusions may serve important evolutionary purposes. The
adaptive nature of these illusions can be observed when individuals face
diseases that are incurable. The illusion that one may “beat” the disease is
adaptive in the sense that individuals may take active health-promoting steps
that at the very least increase their longevity, even if they cannot beat the
disease in the long term (Haselton & Nettle, 2006).
Bottom
Line
Much of
what we discussed in this chapter suggests that we, as social perceivers, make
predictable errors. Also, much of what we do is automatic, not under conscious
control. The bottom line is that we are cognitive tacticians who expend energy
to be accurate when it is necessary but otherwise accept a rough approximation.
Accuracy in perception is the highest value, but it is not the only value;
efficiency and conservation of cognitive energy also are important. And so, we
are willing to make certain trade-offs when a situation does not demand total
accuracy. The more efficient any system is, the more its activities are carried
out automatically. But when we are motivated, when an event or interaction is
really important, we tend to switch out of this automatic, nonconscious mode
and try to make accurate judgments. Given the vast amount of social information
we deal with, most of us are pretty good at navigating our way.
The Vincennes Revisited
The events
that resulted in the U.S.S. Vincennes firing a missile that destroyed a
civilian aircraft are clear in hindsight. The crew members of the Vincennes
constructed their own view of reality, based on their previous
experiences, their expectations of what was likely to occur, and their
interpretations of what was happening at the moment, as well as their fears and
anxieties. All of these factors were in turn influenced by the context of
current international events, which included a bitter enmity between the United
States and what was perceived by Americans as an extremist Iranian government.
The crew members of the Vincennes had reason to expect an attack from some
quarter, and that is how they interpreted the flight path of the aircraft.
This was true despite the fact that later analysis showed that the aircraft had
to be a civilian airliner. The event clearly shows the crucial influence of our
expectations and previous experience on our perception of new events.
Chapter
Review
1. What is
impression formation?
Impression
formation is the process by which we form judgments about others. Biological
and cultural forces prime us to form impressions, which may have adaptive
significance for humans.
2. What
are automatic and controlled processing?
Much of
our social perception involves automatic processing, or forming impressions
without much thought or attention. Thinking that is conscious and requires
effort is referred to as controlled processing. If, however, we have important
goals that need to be obtained, then we will switch to more controlled processing
and allocate more energy to understanding social information. Automatic and
controlled processing are not separate categories but rather form a continuum,
ranging from complete automaticity to full allocation of our psychic energy to
understand and control the situation.
3. What is
meant by a cognitive miser?
The notion
of a cognitive miser suggests that humans process social information by
whatever method leads to the least expenditure of cognitive energy. Much of our
time is spent in the cognitive miser mode. Unless motivated to do otherwise, we
use just enough effort to get the job done.
4. What
evidence is there for the importance of nonconscious decision making?
Recent
research implies that the best way to deal with complex decisions is to rely on
the unconscious mind. Conscious thought is precise and allows us to
follow strict patterns and rules, but its capacity to handle lots of
information is limited. So conscious thought is necessary for doing, say, math,
a rule-based exercise, but may not be as good in dealing with complex issues
with lots of alternatives.
5. What is
the effect of automaticity on behavior and emotions?
Behavior
can be affected by cues that are below the level of conscious awareness.
Evidence indicates that priming, “the nonconscious activation of social
knowledge,” is a very powerful social concept and affects a wide variety of
behaviors. Recall the research showing that the mere presence of a backpack in
a room led to more cooperative behavior in the group, while the presence of a
briefcase prompted more competitive behaviors. It has also become clear that
often our emotional responses to events are not under conscious control.
Researchers have demonstrated that we are not very good at predicting how
current emotional events will affect us in the future. For one thing, we tend
not to take into account the fact that the more intense the emotion, the less
staying power it has. We tend to underestimate our tendency to get back to an
even keel (homeostasis), to diminish the impact of even the most negative or
for that matter the most positive of emotions. It appears that when extreme emotions
are triggered, psychological processes are stimulated that serve to counteract
their intensity, such that one may expect that intense emotional
states will last a shorter time than milder ones.
6. Are our
impressions of others accurate?
There are
significant differences among social perceivers in their ability to accurately
evaluate other people. Those who are comfortable with their own emotions are
best able to express those emotions and to read other people. Individuals who
are unsure of their own emotions, who try to hide their feelings from others,
are not very good at reading the emotions of other people.
Despite
distinct differences in abilities to read others, most of us are apparently
confident in our ability to accurately do so. This is especially true
if we have
a fair amount of information about that person. However, research shows that no
matter the information at our disposal, our accuracy levels are less than we
think. In part, this appears to be true because we pay attention to obvious
cues but do not attend to more subtle nonverbal ones. We are especially
incompetent at determining if someone is lying, even someone very close to us.
7. What is
the sample bias?
The sample
bias suggests that our initial interaction with individuals is crucial to
whether any further interaction will occur. Imagine you are a member of a newly
formed group, and you begin to interact with others in the group.
You meet
Person A, who has low social skills. Your interaction with him is limited, and
your tendency, understandably, is to avoid him in the future. Now Person B is
different. She has excellent social skills, and conversation with her is easy
and fluid. You will obviously sample more of Person B’s
behavior than Person A’s.
As a result, potentially false negative impressions of Person A never get changed, while a false positive
impression of B could very well be changed if you “sample” more of her
behavior. That is, the initial interaction determines whether you will sample
more of that person’s behavior or not. This seems especially true for persons
belonging to different racial or ethnic groups.
8. Can we
catch liars?
Not very
well. A massive review of the literature on detecting lies shows that while
there are many cues to lying, they are unexpected and very
subtle. When people lie about themselves, the cues may be a bit stronger, but
it is still a guessing game for most of us.
9. What is
the attribution process?
The
attribution process involves assigning causes for the behavior we observe, both
our own and that of others. Several theories have been devised to uncover how
perceivers decide the causes of other people’s behaviors. The correspondent inference and the
covariation models were the most general attempts to describe the attribution
process.
10. What
are internal and external attributions?
When we
make an internal attribution about an individual, we assign the cause for
behavior to an internal source. For example, one might attribute failure on an
exam to a person’s intelligence or level of motivation.
External attribution
explains the cause for behavior as an external factor. For example, failure on
an exam may be attributed to the fact that a student’s
parents were killed in an automobile
accident a few days before the exam.
11. What
is the correspondent inference theory, and what factors enter into forming
a
correspondent inference?
Correspondent
inference theory helps explain the attribution process when perceivers are
faced with unclear information. We make a correspondent inference if we
determine that an individual entered into a behavior freely (versus being
coerced) and conclude that the person intended the behavior. In this case, we
make an internal attribution. Research shows that the perceiver acting as a
cognitive miser has a strong tendency to make a correspondent inference—to
assign the cause of behavior to the actor and downplay the situation—even when
the evidence suggests otherwise.
12. What
are covariation theory and the covariation principle?
The
covariation principle states that people decide that the most likely cause for
any behavior is the factor that covaries, or occurs at the same time, most often
with the appearance of that behavior. Covariation theory suggests that people
rely on consensus (What is everyone else doing?), consistency (Does this person
behave this way all the time?), and distinctiveness (Does this person display
the behavior in all situations or just one?) information.
13. How do
consensus, consistency, and distinctiveness information lead to an
internal
or external attribution?
When
consensus (Everyone acts this way), consistency (The target person always acts
this way), and distinctiveness (The target person only acts this way in a
particular situation) are high, we make an external attribution. However, if
consensus is low (Nobody else behaves this way), consistency is high (The
target person almost always behaves this way), and distinctiveness is low (The
target person behaves this way in many situations), we make an internal
attribution.
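The two attribution patterns just described can be written as a simple decision rule. This is a sketch of my own (the function name and "ambiguous" fallback are not Kelley's), mapping the three kinds of covariation information onto an attribution:

```python
# Minimal sketch (names are mine, not from covariation theory itself) of the
# two diagnostic patterns described above, each dimension rated high or low.

def covariation_attribution(consensus, consistency, distinctiveness):
    """Return 'external', 'internal', or 'ambiguous' for a pattern of cues."""
    pattern = (consensus, consistency, distinctiveness)
    if pattern == ("high", "high", "high"):
        return "external"   # everyone does it, always, but only in this situation
    if pattern == ("low", "high", "low"):
        return "internal"   # only this person, always, and across situations
    return "ambiguous"      # other patterns call for more information

print(covariation_attribution("high", "high", "high"))  # external
print(covariation_attribution("low", "high", "low"))    # internal
```

Treating the remaining six patterns as "ambiguous" is a simplification; the theory's claim is only that the two patterns above are the clearest cases.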
14. What
is the dual-process model of attribution, and what does it tell us about the
attribution process?
Trope’s
two-stage model recognized that the initial stage of assigning causality is an automatic categorization of
behavior; a second stage may lead to a readjustment of that initial
categorization, especially when the behavior or the situation is ambiguous.
Trope’s model led theorists to think about how and when people readjust their initial
inferences.
15. What
is meant by attribution biases?
Both the
correspondent inference and covariation models emphasize that people often
depart from the (causal) analysis of the attribution models they present and
make some predictable errors in their causal analyses.
16. What
is the fundamental attribution error?
The
fundamental attribution error highlights the fact that people prefer internal
to external attributions of behavior. The fundamental attribution error may be
part of a general tendency to confirm what we believe is true and to avoid
information that disconfirms our hypotheses. This is known as the confirmation
bias.
17. What
is the actor-observer bias?
The
actor-observer bias occurs when observers emphasize internal attributions,
whereas actors favor external attributions. That is, when we observe someone
else, we make the familiar internal attribution, but when we ourselves act, we
most often believe that our behavior was caused by the situation in which we
acted. This seems to occur because of a perspective difference. When we observe
other people, what is most obvious is what they do. But when we try to decide
why we did something, what is most obvious are extrinsic factors, the situation.
18. What
is the false consensus bias?
The false
consensus bias occurs when people tend to believe that others think and feel
the same way they do.
19. What is the importance of first impressions?
First impressions can powerfully influence our perceptions of others. Researchers have consistently demonstrated a primacy effect in the impression-formation process: the tendency of early information to play a powerful role in our eventual impression of an individual. First impressions, in turn, can bias the interpretation of later information.
20. What are schemas, and what role do they play in social cognition?
The aim of social perception is to gain enough information to make relatively accurate judgments about people and social situations. One major way we organize this information is by developing schemas, sets of organized cognitions about individuals or events. One type of schema important for social perception is implicit personality theories, schemas about what kinds of personality traits go together. Intellectual characteristics, for example, are often linked to coldness, and strong and adventurous traits are often thought to go together.
21. What is the self-fulfilling prophecy, and how does it relate to behavior?
Schemas also influence behavior, as illustrated by the notion of self-fulfilling prophecies: we often create our own realities through our expectations. If we are interacting with members of a group we believe to be hostile and dangerous, for example, our actions may provoke the very behavior we are trying to avoid. This is the process of behavioral confirmation, which occurs when perceivers behave as if their expectations are correct and the targets of those perceptions respond in ways that confirm the perceivers' beliefs.
When we make attributions about the causes of events, we routinely overestimate the strength of our hypothesis about why events happened the way they did. This bias in favor of our own interpretations occurs because we tend to use a search strategy that confirms, rather than disconfirms, our hypothesis. This is known as the confirmation bias.
22. What are the various types of heuristics that often guide social cognition?
A heuristic is a shortcut, or rule of thumb, that we use when constructing social reality. The availability heuristic is a shortcut used to estimate the likelihood or frequency of an event based on how quickly examples of it come to mind. The representativeness heuristic involves judging the probability that an event or a person falls into a category based on how representative the event or person is of that category. The simulation heuristic is the tendency to play out alternative scenarios in our heads. Counterfactual thinking involves taking a negative event or outcome and running scenarios in our heads to create positive alternatives to what actually happened.
23. What is meant by metacognition?
Metacognition is the way we think about our own thinking, which can be primarily optimistic or pessimistic.
24. How do optimism and pessimism relate to social cognition and behavior?
We tend to maintain an optimistic and confident view of our ability to navigate our social world, even though we make many errors. Many individuals react to threatening events by developing positive illusions: beliefs that include unrealistically optimistic notions about their ability to handle the threat and create a positive outcome. These positive illusions are adaptive in the sense that optimistic people will be persistent and creative in their attempts to handle threat or illness. Most people think they are very happy with their lives, certainly happier than others. Happy and unhappy individuals respond differently to both positive and negative events. For example, happy individuals accepted by a college believe it is the best place for them; if they are rejected, they decide it wasn't such a good choice after all.
Unhappy people seem to live in a world of unappealing choices, and perhaps it seems to them that it doesn't matter which alternative they pick or is chosen for them. It also seems that incompetent people maintain happiness and optimism in part because they are unable to recognize their own incompetence.
Indeed, it is fair to say that optimists and pessimists see the world quite differently. In a clever experiment, Isaacowitz (2005) used eye tracking to test the idea that pessimists pay more attention to negative stimuli than optimists do. Positive emotions seem not only to help us fight disease; some evidence also suggests that positive, optimistic emotions may forestall the onset of certain diseases.
25. How do distressing events affect happiness?
Research also suggests that we may have a psychological immune system that regulates our reactions and emotions in response to negative life events. Social psychological experiments suggest that this psychological immune system—much like its physiological counterpart that protects us from the ravages of bacterial and viral invasions—fights off doom and gloom, often under the most adverse circumstances. The effects of negative events thus wear off over time, no matter how long people expect them to last.
26. What does evolution have to do with optimistic biases?
Haselton and Nettle (2006) persuasively argue that these biases serve an evolutionary purpose. For example, males tend to overestimate the degree of sexual interest they arouse in females. This is an "adaptive" bias in that overestimating sexual interest results in fewer missed opportunities. Similarly, the illusion that one can "beat" a deadly disease may work to prolong life longer than anyone could have expected.
*********************************************
Social Psychology
Third Edition
Kenneth S. Bordens - Indiana University–Purdue University Fort Wayne
Irwin A. Horowitz - Oregon State University
Social Psychology, 3rd Edition
Copyright ©2008 by Freeload Press
Illustration used on cover © 2008 JupiterImages Corporation
ISBN 1-930789-04-1
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or
by any means, electronic, mechanical, recording, photocopying, or otherwise, without the prior written
permission of the publisher.
Printed in the United States of America by Freeload Press.
10 9 8 7 6 5 4 3 2 1