Groupthink chapter with examples. From: http://www.afirstlook.com/archive/groupthink.cfm?source=archther

(From the Third Edition of A First Look at Communication Theory by Em Griffin, © 1997, McGraw-Hill, Inc. This text-only version of the article appears on the World Wide Web site www.afirstlook.com. The text version does not contain any figures. A facsimile of the original article, which includes all figures, is also available in PDF format.)

Chapter 18: Groupthink of Irving Janis

On the morning of January 28, 1986, the space shuttle Challenger blasted off from the Kennedy Space Center in Florida. Seventy-three seconds later, millions of adults and school children watched on television as the rocket disintegrated in a fiery explosion and the crew cabin plunged into the Atlantic Ocean. The death of all seven crew members, and particularly teacher Christa McAuliffe, shocked the nation. For many Americans, the Challenger disaster marked the end of a love affair with space. As they learned in the months that followed, the tragedy could have been—should have been—avoided.

President Reagan immediately appointed a select commission to determine the probable cause(s) of the accident. The panel heard four months of testimony from NASA officials, rocket engineers, astronauts, and anyone else who might have knowledge about the failed mission. In a five-volume published report, the presidential commission identified the primary cause of the accident as a failure in the joint between two segments of the rocket booster that allowed hot gases to escape during the "burn." Volatile rocket fuel spewed out when a rubber O-ring failed to seal the joint.

The average citizen could understand the mechanics of the commission's finding. After all, everyone knows what happens when you pour gasoline on an open flame. What people found difficult to fathom was why NASA had launched the Challenger when there was good reason to believe the conditions weren't safe. In addition to the defective seal, the commission also concluded that a highly flawed decision process was an important contributing cause of the disaster. Communication, as well as combustion, was responsible for the tragedy.

The Challenger Launch: A Model of Defective Decision Making

As the person in charge of the Flight Readiness Review for NASA, Jesse Moore had the ultimate authority to approve or scrub the shuttle mission. He relied on the assessments of managers at the Kennedy, Johnson, and Marshall Space Centers, who in turn consulted with engineers from the companies that designed the Challenger's subsystems. The film Apollo 13 dramatized the final phase of this "go/no-go" launch procedure.1 NASA has always taken the position that "a launch should be canceled if there is any doubt of its safety."2 The day before the launch, Morton Thiokol engineers warned that the flight might be risky. As the team responsible for the performance of the rocket booster, they worried about the below-freezing temperature that was forecast for the morning of the launch.
The O-ring seals had never been tested below 53 degrees Fahrenheit, and as Thiokol engineer Roger Boisjoly later testified, getting the O-rings to seal gaps with the temperature in the 20s was like "trying to shove a brick into a crack versus a sponge."3 The O-ring seals had long been classified as a critical component on the rocket motor, "a failure point—without back-up—that could cause a loss of life or vehicle if the component failed."4 Yet when Thiokol engineers raised the safety issue in a teleconference, NASA personnel discounted their concerns and urged them to reconsider their recommendation. After an off-line caucus with company executives, Thiokol engineers reversed their "no-go" position and announced that their solid rocket motor was ready to fly. When the Kennedy, Johnson, and Marshall Space Center directors later certified that the Challenger was flight ready, they never mentioned any concern about the O-rings. At the top of the flight readiness review chain, Jesse Moore had every reason to believe that the shuttle was "A-OK."

Irving Janis, a Yale social psychologist, was fascinated with the question of how an acknowledged group of experts could make such a terrible decision. He was convinced that their grievous error wasn't an isolated instance limited to NASA decisions, corporate boardrooms, or matters of a technical nature. He believed he could spot the same group dynamic at work in other tragic decisions. He was especially interested in White House fiascos—Roosevelt's complacency before Pearl Harbor, Truman's invasion of North Korea, Kennedy's Bay of Pigs invasion, Johnson's escalation of the Vietnam War, Nixon's Watergate break-in, and Reagan's Iran-Contra cover-up. If Janis were alive today, he would probably also examine Clinton's approval of the raid on the Branch Davidian compound in Waco, Texas. Janis didn't regard chief executives or their advisors as stupid, lazy, or evil. Rather, he saw them as victims of "groupthink."

Groupthink: A Concurrence-Seeking Tendency

Janis originally defined groupthink as "a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses of action."5 According to his definition, groupthink occurs only when cohesiveness is high. It requires that members share a strong "we-feeling" of solidarity and desire to maintain relationships within the group at all costs. When colleagues operate in a groupthink mode, they automatically apply the "preserve group harmony" test to every decision they face.6 Janis pictured this kind of group as having a "warm clubby atmosphere." This description captures the image a minority businessman had in mind when a friend asked him what clubs he would like to join when racial integration became a reality. His answer: "Only one. I'd like to be part of the 'good ole boys club.' That's where the 'insider' deals are made."7

Most students of group process regard members' mutual attraction to each other as an asset.
Marvin Shaw, a University of Florida psychologist and the author of a leading text in the field, states this conviction in the form of a general hypothesis that has received widespread research support: "High-cohesive groups are more effective than low-cohesive groups in achieving their respective goals."8 But Janis consistently held that the "superglue" of solidarity that bonds people together often causes their mental process to get stuck:

The more amiability and esprit de corps among members of a policy-making in-group, the greater is the danger that independent critical thinking will be replaced by groupthink. . . . The social constraint consists of the members' strong wish to preserve the harmony of the group, which inclines them to avoid creating any discordant arguments or schisms.9

Janis was convinced that the concurrence-seeking tendency of close-knit groups can cause them to make inferior decisions.

Symptoms of Groupthink

What are the signs that group loyalty has caused members to slip into a groupthink mentality? Janis listed eight symptoms that show that concurrence seeking has led the group astray. The first two stem from overconfidence in the group's prowess. The next pair reflect the tunnel vision members use to view the problem. The final four are signs of strong conformity pressure within the group. I'll illustrate many of the symptoms with quotes from the Report of the Presidential Commission on the Space Shuttle Challenger Disaster.10

1. Illusion of Invulnerability. Despite the launchpad fire that killed three astronauts in 1967 and the close call of Apollo 13, the American space program had never experienced an in-flight fatality. When engineers raised the possibility of catastrophic O-ring blow-by, NASA manager George Hardy nonchalantly pointed out that this risk was "true of every other flight we have had." Janis summarizes this attitude as "everything is going to work out all right because we are a special group."11

2. Belief in Inherent Morality of the Group. Under the sway of groupthink, members automatically assume the rightness of their cause. At the hearing, engineer Brian Russell noted that NASA managers had shifted the moral rules under which they operated: "I had the distinct feeling that we were in the position of having to prove that it was unsafe instead of the other way around."

3. Collective Rationalization. Despite the written policy that the O-ring seal was a critical failure point without backup, NASA manager George Hardy testified that "we were counting on the secondary O-ring to be the sealing O-ring under the worst case conditions." Apparently this was a shared misconception. NASA manager Lawrence Mulloy confirmed that "no one in the meeting questioned the fact that the secondary seal was capable and in position to seal during the early part of the ignition transient." This collective rationalization supported a mindset of "hear no evil, see no evil, speak no evil."12

4. Out-group Stereotypes. Although there is no direct evidence that NASA officials looked down on Thiokol engineers, Mulloy was caustic about their recommendation to postpone the launch until the temperature rose to 53 degrees. He reportedly asked whether they expected NASA to wait until April to launch the shuttle.

5. Self-Censorship. We now know that Thiokol engineer George McDonald wanted to postpone the flight. But instead of clearly stating "I recommend we don't launch below 53 degrees," he offered an equivocal opinion.
He suggested that "lower temperatures are in the direction of badness for both O-rings. . . ." What did he think they should do? From his tempered words, it's hard to tell.

6. Illusion of Unanimity. NASA managers perpetuated the fiction that everyone was fully in accord on the launch recommendation. They admitted to the presidential commission that they didn't report Thiokol's on-again/off-again hesitancy to their superiors. As often happens in such cases, the flight readiness review team interpreted silence as agreement.

7. Direct Pressure on Dissenters. Thiokol engineers felt pressure from two directions to reverse their "no-go" recommendation. NASA managers had already postponed the launch three times and were fearful the American public would regard the agency as inept. Undoubtedly that strain triggered Hardy's retort that he was "appalled" at Thiokol's recommendation. Similarly, the company's management was fearful of losing future NASA contracts. When they went off-line for their caucus, Thiokol's senior vice president urged Robert Lund, vice president of engineering, to "take off his engineering hat and put on his management hat."

8. Self-Appointed Mindguards. "Mindguards" protect a leader from assault by troublesome ideas. NASA managers insulated Jesse Moore from the debate over the integrity of the rocket booster seals. Even though Roger Boisjoly was Thiokol's expert on O-rings, he later bemoaned that he "was not even asked to participate in giving input to the final decision charts."

It Doesn't Always Happen / It's Not Always Bad

Janis introduced the concept of groupthink through the popular press in 1971.13 The idea struck a responsive chord with policy planners who had hastily approved courses of action that just as quickly turned out to be major blunders. The term groupthink paralleled the ominous expression doublethink in George Orwell's novel 1984, and it immediately caught on among business and government leaders as a catch-all term to refer to any ill-conceived group plan. In later extensions of his theory, Janis emphasized that not all bad decisions are the result of groupthink, and not all cases of groupthink end up failing.

Figure 18.2 diagrams Janis's extended theory of groupthink. The boxes on the left lay out the preconditions for a concurrence-seeking tendency to emerge, and the boxes on the right show the path the group takes when groupthink is present. Box A shows that cohesiveness is a major contributor to groupthink. Yet even though Janis regarded groups that are highly attractive to members as especially prone to making bad policy decisions, he didn't believe that all cohesive groups end up succumbing to groupthink. Cohesiveness is a necessary but not sufficient condition for excessive concurrence seeking. The likelihood of groupthink increases when there are structural faults within the organization (box B-1) and the policy decision has to be made during a time of high stress and low self-esteem (box B-2).

The secret of short-circuiting the process lies in altering the factors in the B boxes that act as catalysts in cohesive groups. The items that a wise leader can change are the first three in box B-1, concerning insulation of the group, lack of impartial leadership, and lack of procedural norms. Because a close-knit group at the top of an organization is insulated from outside opinions, Janis suggested breaking the group up into subgroups that work simultaneously on the same issue.
Each subgroup can draw on the expertise of trusted subordinates who are encouraged to give their advice freely.

Leaders climb to the top by being "take-charge" people. Unfortunately, the very force of personality that placed them in authority can have a chilling effect on group candor. Some leaders are able to lead an impartial discussion without imposing their opinions, but Janis's prescription for open inquiry is to have the leader periodically leave the group so that members will feel free to express their personal views.

Since many groups have no set procedures to ensure close scrutiny of favored solutions, Janis recommended assigning the role of critical evaluator to every member. Instead of representing his or her own constituency or narrow area of expertise, each participant would take responsibility for the entire plan. Of course, a leader's request for critical comments is a hollow exercise if he or she shows irritation or cuts off debate when the group starts to carve up a cherished idea.

If these measures fail, we can spot the presence of groupthink by its observable effects listed on the right side of Figure 18.2. We've already looked at the symptoms of groupthink in box C. Janis claimed that these inevitably lead to the seven flawed procedures cataloged in box D. Does all this automatically produce a ruinous outcome like the Challenger disaster? Not necessarily. Groups that do everything wrong may luck out from time to time. There are also many routine occasions when a groupthink mode is actually helpful because it makes for a speedy and amicable consensus on issues of minor importance. But according to Janis, when a group confronts a great threat or a grand opportunity, concurrence seeking almost always produces an inferior solution.

Participant-Observation of Groupthink in Action . . . Or Was It?

Groupthink researchers typically identify a grievous case of poor decision making like the Challenger disaster and then comb through historical records to see if the theory applies. Janis warned against jumping to conclusions on the basis of just a few signs. He had to spot all or most of the symptoms before he would make a diagnosis of groupthink. In the following pages I outline the events leading up to a crucial boardroom decision that could cost a charity up to one million dollars. As I sketch the events that led to this fiasco, see how many of the eight symptoms of groupthink (box C) and the seven symptoms of defective decision making (box D) are evident. Did the virus of groupthink infect an otherwise healthy body?

The Grand Opportunity. For the past ten years I've served on the board of directors of a Christian nonprofit organization committed to serving kids raised in poverty.14 A longtime benefactor offered to donate a half-million dollars if we could match his gift. In the world of charitable giving, big gifts like this are typically used to leverage other contributions. He also urged us to place the funds for six months with the Foundation for New Era Philanthropy in Philadelphia, which promised to pair our gift with that of an anonymous megabuck donor. After six months we'd end up with a total of two million dollars to start a camp for inner-city kids. For their part, New Era would get the interest from our million-dollar principal to use for the expenses of running a foundation. And the anonymous donor would have the satisfaction of stimulating others to be generous, yet she or he wouldn't have the hassle of dealing with daily requests for money.
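Before turning to the board's deliberations, it is worth pausing over the arithmetic of the offer. The sketch below is not part of Griffin's chapter; it is a hypothetical illustration (in Python, with the function name and the period-by-period doubling model invented for the example) of why a promise to return every deposit doubled in six months, if no real anonymous donor stands behind it, can only be kept by attracting ever-larger waves of new money.

```python
# Hypothetical sketch: if there is no anonymous donor, each "doubled" payout
# must be funded entirely by fresh deposits from the next wave of charities.

def required_deposits(initial_deposit: float, periods: int) -> list[float]:
    """Fresh deposits needed in each successive six-month period."""
    owed = initial_deposit * 2   # every dollar sent in is promised back doubled
    needed = []
    for _ in range(periods):
        needed.append(owed)      # new money has to cover the payout now due
        owed = owed * 2          # and that new money is itself owed double next period
    return needed

if __name__ == "__main__":
    # Start from the $1 million principal described in the chapter.
    for period, amount in enumerate(required_deposits(1_000_000, 5), start=1):
        print(f"six-month period {period}: ${amount:,.0f} in new deposits required")
```

On these simplified assumptions the required inflow passes $30 million within five periods. That geometric growth is one way to see why the treasurer quoted in the next paragraph suspected "all the earmarks of a Ponzi scheme," and why New Era's obligations had swollen to the half billion dollars reported in May 1995 by the time new money stopped arriving.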
The Decision. Our initial reaction was similar to that of the treasurer of the University of Pennsylvania: "It sounds too good to be true, and it's got all the earmarks of a Ponzi scheme."15 Yet his school and most of our sister agencies were already in the program. Since our benefactor urged us to place his funds with New Era, we thought we should at least check it out.

We formed a committee to perform "due diligence," the legal term for the kind of vigilant investigation Janis encouraged. A lawyer, a money manager, and a partner in one of the Big Six accounting firms spent two months gathering a thick batch of financial records, tax returns, and references. Although I wasn't on the research team, I had three hour-long phone conversations with friends in Philadelphia who knew Jack Bennett, the founder and CEO of New Era.

What did we find? The good news was that people we knew intimately trusted Jack Bennett implicitly. Money sent to New Era was always matched dollar for dollar six months later. Not one charity had lost a dime; to the contrary, for every dime they invested, they now had twenty cents. The bad news was that we could learn nothing about New Era's anonymous million-dollar donors. Only Bennett knew their names, and he warned that any group that pressed him for their identity would no longer be eligible for a matching grant. Wealthy board members who were giving freely had no trouble believing that such megabuck donors existed. They said that if they had vast resources, they would do the same. The difference was just a matter of scale. During a break in our deliberations, one of these members pulled me aside and confided, "Em, this is so big that there are only six or seven people around the country who'd be willing and able to put up that kind of money. I think I know who four of the mystery donors are."

After ten hours of lively discussion spanning a three-week period, we decided to take the plunge. I wish I could say that I was a prophetic voice denouncing the folly of my colleagues, but I wasn't. (Another member and I did insist that we use only money from our contributors who gave us written approval to place their funds in the risky venture.) Amidst much soul-searching, I voted to send the money to New Era for the matching grant. I thought it was worth the risk.

The Reality. New Era was the front-page story of The Wall Street Journal for the entire week of May 15–19, 1995. On successive days the paper reported that New Era was in financial trouble, that Jack Bennett now admitted there were no anonymous donors, and that New Era was bankrupt, with obligations of over a half billion dollars to three hundred nonprofits and individual contributors. I personally felt shocked, ashamed, and incredibly stupid. By the end of the week the Journal asked, "Why did so many smart people entrust [Bennett] with so much money on so little evidence regarding his background and with so many red flags flying over his double-your-money program?"16 A good question. To what extent is groupthink the answer?

The Assessment. The volunteer board of our organization is a prime example of the cohesive in-group with a warm clubby atmosphere that Janis described. Most members are white male business executives. We're encouraged to bring our spouses to the meetings, and as couples we enjoy the nonagenda times together. I've never talked with an ex-director who didn't want to be asked back.

The small world of charitable giving has the same cozy feel.
As fund-raisers know, $100,000 gifts are made on the basis of long-term personal relationships. Due to interlocking directorships, when organizations undertook their "due diligence" on New Era, they were in effect talking to themselves and other members of the in-group. It took an outsider—a South African accounting instructor at a small liberal arts college—to blow the whistle on the whole scam.17

In terms of Janis's symptoms of defective decision making (box D), two items stand out. Our board showed a selective bias in processing the information that we gathered by interpreting New Era's flawless payout history as evidence that the plan was legitimate. Instead, it was the classic mark of a well-conceived pyramid swindle. We also failed to work out contingency plans. Although we joked darkly about New Era being a Ponzi scheme, I don't think we ever discussed what we'd do if it were.

On the other hand, the decision was no rush to judgment. In his book Crucial Decisions, Janis characterizes defective decision making as "premature closure,"18 a label that certainly doesn't describe our board process. After two months of seeking every scrap of information we could get, we vigorously discussed the relative merits of each option and worked to create new options. At no point did I feel that our leadership tried to impose a solution or close out debate. I sensed, rather, a desire for more creative input and a hesitancy to act on the take-it-or-leave-it proposition that New Era offered.

There's no doubt that we made a horrendous mistake with tragic consequences. But the question still remains, Was this groupthink? As you decide, consider that 115 supposedly savvy individuals, including former Secretary of the Treasury William Simon and philanthropist Laurance Rockefeller, reached the same decision without benefit or curse of group involvement. Also remember that Jack Bennett conned 185 other nonprofits into sending money for the supposed match. Janis never suggested that groupthink was a mass phenomenon. Is it likely that a concurrence-seeking tendency explains why all of these groups were taken in? Wishful thinking, excessive trust, or a "greed to do good" seem to be equally powerful and vastly simpler explanations.

Critique: Avoiding Uncritical Acceptance of Groupthink

Janis calls for greater critical assessment of proposals lest they be adopted for reasons other than merit. Since his description of groupthink has received great popular approval (perhaps because we're fascinated with colossal failure), it seems only fair to note that efforts to validate the theory have been sparse and not particularly successful. Most students of groupthink pick a high-profile case of decision making where things went terribly wrong and then use Janis's model as a cookie cutter to analyze the disaster—much as I've done with the Challenger and New Era. They seem to take the existence of groupthink for granted and employ the theory to warn against future folly or suggest ways to avoid it. This kind of retrospective analysis is great for theory construction, but it provides no comparative basis for accepting or rejecting the theory. For example, is the lack of evidence that NASA managers formed a cohesive in-group when they approved the Challenger launch a good reason to drop or revise the theory? Or does my report of extensive "due diligence" on New Era invalidate the claim that groupthink was a reason so many people fell for the fraud?
Janis thought it made sense to test the groupthink hypothesis in the laboratory prior to trying to prove it in the field.19 His suggestion is curious, however, because a minimal test of his theory that controls for the antecedent conditions shown on the left side of Figure 18.2 would require over 7,000 willing participants.20 As it is, the few reported groupthink experiments have tended to focus on cohesiveness—a quality that's hard to create in the laboratory. The results are mixed at best. Janis's quantitative study of nineteen international crises is problematic as well. When he and two co-authors linked positive outcomes with high-quality decision-making procedures during international crises, they never assessed the cohesiveness of the groups in charge.21

You may never be a power broker on the international scene, but you could check out the effects of high cohesiveness in groups close to home. I suggest you gauge the desire for consensus in your family, fraternity or sorority, church group, team, or organizational committee. Then watch for the symptoms Janis described. Even though there doesn't seem to be a definitive way to prove Janis's theory right (or wrong), his concept of groupthink continues to capture the imagination of those who have seen close-knit groups make terrible decisions. After being ridiculed as a sky-is-falling alarmist, Thiokol engineer George McDonald could only say that launching the Challenger would be "an act away from goodness." As subsequent events made clear, so is the process of groupthink.

Questions to Sharpen Your Focus

1. Janis defines groupthink as a concurrence-seeking tendency. What alternative terms would you use to describe the same group phenomenon?

2. Suppose your instructor leads a discussion about whether communication theory should be a required course for majors. Which of the eight symptoms of groupthink do you think would emerge? Why?

3. Risk may be irrelevant to those who share an illusion of invulnerability22 ("These things happen, but not to people like us"). Do you think that groupthink explains the continued high rate of the sexual transmission of AIDS?

4. What other theories covered in earlier chapters are consistent with Janis's groupthink hypothesis? Can you spot five parallels?

A Second Look

Recommended resource: Irving Janis, Groupthink, 2d ed., Houghton Mifflin, Boston, 1982.

Original statement: Irving Janis, Victims of Groupthink, Houghton Mifflin, Boston, 1972.

Vigilant problem solving: Irving Janis, Crucial Decisions: Leadership in Policymaking and Crisis Management, Free Press, New York, 1989, pp. 89–117.

Decision-making context: Irving Janis and Leon Mann, Decision Making, Free Press, New York, 1977.

Historical test of the theory: Gregory Herek, Irving Janis, and Paul Huth, "Decision Making During International Crises," Journal of Conflict Resolution, Vol. 31, 1987, pp. 203–226.

Research review: Won-Woo Park, "A Review of Research on Groupthink," Journal of Behavioral Decision Making, Vol. 3, 1990, pp. 229–245.

Challenger disaster: Randy Hirokawa, Dennis Gouran, and Amy Martz, "Understanding the Sources of Faulty Group Decision Making: A Lesson from the Challenger Disaster," Small Group Behavior, Vol. 19, 1988, pp. 411–433.

Challenger disaster: James Esser and Joanne Lindoerfer, "Groupthink and the Space Shuttle Challenger Accident: Toward a Quantitative Case Analysis," Journal of Behavioral Decision Making, Vol. 2, 1989, pp. 167–177.
Challenger disaster: Gregory Moorhead, Richard Ference, and Chris Neck, "Group Decision Fiascoes Continue: Space Shuttle Challenger and a Revised Groupthink Framework," Human Relations, Vol. 44, 1991, pp. 539–550.

New Era fiasco: The Chronicle of Philanthropy, "A Debacle for Charities' Credibility," June 1, 1995, pp. 1, 24–30.

Critique: Jeanne Longley and Dean G. Pruitt, "Groupthink: A Critique of Janis's Theory," in Review of Personality and Social Psychology, Vol. 1, Ladd Wheeler (ed.), Sage, Beverly Hills, Calif., 1980, pp. 74–93.