23 January 2011

The Power of the Reviewer—Myth or Fact?: Part 1

[A number of years ago, I became curious about the proposition that producers closed shows when they got bad reviews because they believed that that meant audiences would stay away. I decided to see whether this belief was real or apocryphal. I combed through the literature and the field studies of arts that rely on published or broadcast reviews, and I found all the surveys and statistical analyses I could. I have from time to time rechecked the record to see if new studies or surveys have been published, but the last time I updated my data, George Wachtel, then Director of Research for the League of American Theatres and Producers (now the Broadway League), informed me that there’d been no industry-wide studies of press or reviews since the League published its 1980 study. In other words, however old my statistics are, they’re the newest available. Here’s Part 1 of my report.]


In “Reviewing a Play Under Injunction” (4 April 1915), the New York Times reported the following incident:

Beginning the day after there had been printed in The New York Times an unfavorable review of “Taking Chances,” a new farce presented on March 17 [1915] at the Thirty-ninth Street Theatre, Alexander Woollcott, dramatic critic of The Times, received several indirect notifications that he would thereafter be excluded from all theatres under the control of the Messrs. Shubert.

. . . .

Last Thursday evening the Shuberts executed their threat against Mr. Woollcott by excluding him from Maxine Elliott’s Theatre when he presented purchased tickets entitling him to orchestra seats.

Woollcott, arguably the most famous theater reviewer of his day, had bought tickets to Edward Locke’s The Revolt because the Shuberts, the most powerful producers in the country, had already ceased sending press seats to the Times for him. When the producers prevented a legal ticketholder from entering the theater, legislation was proposed in Albany making such action illegal. This may not have been the first case of a producer taking action against a reviewer, but it may have been the point at which their adversarial relationship solidified. Within days after J. J. Shubert and two house managers physically blocked Woollcott from seeing the play, the reviewer, backed by his paper, got an injunction prohibiting the Shuberts from keeping him out of their theaters and sued them under the Civil Rights Act of 1871. Times publisher Adolph Ochs canceled the Shuberts’ advertising, sued them for “prior restraint of the press,” and awarded Woollcott more space, a byline, and a raise. Within a few weeks, the injunction was lifted, Woollcott and the Times eventually lost their suits, and the Shuberts were able to bring pressure in Albany to defeat legislation prohibiting them from denying entry to any law-abiding person, but by that time the damage to the producers’ cause had long been done. All the New York papers lined up behind the Times, Woollcott was thrust to the forefront of New York theater journalism, and the paper began its rise to its present-day prominence.

Despite the Shuberts’ eventual victory in court, the battle ended badly for the producers: “The power of New York theater critics . . . was confirmed by the time the curtain came down and the Shuberts conceded,” reads one subsequent report. The question is, How did reviewers get this power, and is it based on fact—or assumptions shared by the producers and the general public? How much, in fact, do theatergoers rely on reviews to decide about going to a show?


Of one effect there’s no doubt: producers do close shows because of poor critical response. Producer Joseph Kipness said simply, “I found there’s no sense fighting if you get lousy reviews. You can’t fight it,” and former New York Times review writer Brooks Atkinson wrote, “When the notices are particularly bad, most producers close without further exploitation” of the show’s audience-drawing potential. The presumption is that bad reviews will stop people from coming to the box office. The problem is that it is a presumption. It has seldom been tested, since most badly reviewed shows close so quickly no one can see them. It’s simply “conventional wisdom” that bad reviews kill ticket sales.

So sure are producers that reviewers can damage a play’s run that several in recent years have taken to a new tactic—or a variation of an old one. In an echo of the 1915 Shubert-Woollcott clash, some non-commercial producers have begun to run shows in previews, which the press isn’t invited to and, by convention, cannot write about, virtually until the show’s scheduled to close. Some theaters outside of New York City adopted a policy of inviting only local reviewers to productions and actually refusing seats to New York-based or national writers. The message is, of course, that these producers are so sure that reviewers’ opinions will adversely affect the success of their shows that they have become irreconcilably hostile even to their presence among the theaters’ audiences.

The curious thing about all this certainty is that it has been questioned very little, either by the producers themselves, or by independent research.

Hundreds of audience surveys are, of course, conducted for every type of presentation, but few deal with the commercial theater audience, and even fewer ask about reviews as anything more than a way of finding out what’s playing. Most are quite old—little of the data has been updated since the 1980s—and none delves deeply into the matter of how, not just whether, potential theatergoers use reviews. This leaves a great gap between the actions of the producers and the provable facts upon which those actions should be based.


The other side of this issue—what the journalists believe their power is—is addressed in the United Church of Christ’s 1969 survey, “Criticism and Critics in the Mass Media,” conducted by the Louis Harris organization. This survey didn’t address the audience’s use of reviews, but it did examine the reviewers’ own opinions about their influence, concluding that “the critical profession believes they have considerable impact . . . .”

To the broadest question, whether or not the reviewers think “the public really pays attention to criticism and is affected by it,” 87% said yes. Only 5% said that the public pays no attention to criticism, and 8% said that it “depends on circumstances.”

When the reviewers were asked if they have too much, too little, or the right amount of influence on the public, most naturally responded, “the right amount.” However, in the national sample, 30% felt that they have too little; 11% felt they have too much.

“Overall, then,” the report concluded, “critics and editors see criticism as having impact on the public educationally, economically, and in terms of political and social attitudes.” It must be noted, however, that these were the reviewers’ subjective responses about their own work. There’s no proof that the public really shares these opinions; even prominent reviewers don’t agree among themselves about their influence. Rocco Landesman, then editor of Yale/Theatre (now Theater magazine) and now Chairman of the NEA, wrote that “Drama critics . . . have an inordinate amount of power within their field, for no other art depends so heavily on the brief quotation for the ad and marquee.” The late Nation review writer Joseph Wood Krutch likewise asserted, “That the professional reviewer wields enormous immediate and practical influence is plain enough from the growing tendency of managers to close, at once, any production which has received generally unfavorable notices.”

On the other hand, a considerable number of important review writers don’t feel they have much power or influence. The late New York Times reviewer Walter Kerr, for instance, was once asked, “[W]hy are producers so taken with the ‘myth’ of critic power? ‘Because,’ he says with a laugh, ‘they are fools.’” The late Clive Barnes, arguably the most influential reviewer when he was writing for the New York Times (he was reviewing for the New York Post at his death), said at the time he held the Times post:

I don’t think we make a play fail. A play fails because it fails. There are so many other factors in a run apart from the critic’s notice. . . . The reason a play fails is because the producer took it off.


The debate between journalists and producers about the power of reviews could go on forever. The only judgment that matters must come from the public, who read or hear the reviews and then decide to go or not to go to the theater. This is where there’s so little information. Many people believe that the reviewer has great power over what runs and what doesn’t. In The Season, a survey of Broadway in 1967 and 1968, William Goldman wrote that “the critics’ importance . . . is enormously variable from one kind of play to another. But, in any case, their influence is considerable . . . .” Yet Goldman’s own estimation reflects the conflicting attitudes among the general public: elsewhere in The Season he wrote that the Broadway reviewers “are individually meaningless in their importance to the theatre.” He even cited a study showing that only “20% of New Yorkers and 10% of out-of-towners say they’re chiefly influenced by the notices. . . . It’s probably fair to estimate roughly that one person in six [16.67%] attends a production because of critical enthusiasm.”

According to the study Criticizing the Critics, little research has probed this question, but one report confirmed “that critics’ opinions were a relatively unimportant factor in people’s decisions to go to a play or film.” Another Harris study, for the Associated Councils of the Arts, recorded “that about 60 percent of the more than 3,000 people interviewed said critics’ reviews were of minor importance in affecting their choice of entertainment fare.” In a survey of the way audiences hear about a performance, statistics showed that only 28.9% rely on newspaper stories, including reviews. This, however, isn’t the same as basing a choice on a review, and the survey didn’t single out reviews as a separate category.

The truth of the assertion that reviews lack influence is suggested by several cases of Broadway shows. First, in June 1988, August Wilson’s Joe Turner’s Come and Gone closed after playing 105 regular performances, despite “almost universally enthusiastic” reviews and designation by the New York Drama Critics Circle as the best play of 1987-88. Producer Elliot Martin reported, “I read the notices on opening night and I presumed there’d be a line the next day around the block. But it didn’t happen.” Word-of-mouth, apparently, wasn’t “uniformly favorable.” The same was true of the Steppenwolf stage adaptation of John Steinbeck’s The Grapes of Wrath, which received “overwhelmingly favorable” critical response and won the best-play Tony in 1990 but still closed after a scant 188 performances and 11 previews. Conversely, when Mikhail Baryshnikov opened in Steven Berkoff’s adaptation of Franz Kafka’s Metamorphosis, the play got generally “mediocre reviews.” Despite this, and “the usual disinterest in more serious drama,” it set box-office records in March 1989, playing to 93.9% of capacity. In more recent seasons, Broadway and Off-Broadway have both seen plays which, despite receiving excellent notices, foundered at the box office because there were no star names in the casts.

Possibly the ultimate ruling on this question comes from a theatergoer on line for discount seats at the Theatre Development Fund’s TKTS booth in Duffy Square. Asked by a reporter if she knew who David Richards was—the New York Times’s chief review writer, who’d just resigned his position—she asserted that she didn’t. Furthermore, she said she didn’t “really go by [a reviewer’s] opinion.” When it came to theater advice, she’d “rather hear it from a friend of mine.” For this woman, at least, aside from word-of-mouth, the half-price ticket was probably more of a motivating factor in her decision than the published views of any reviewer.


Most available audience surveys and consumer studies that dealt with theater at all saw it as one element in a broad spectrum of entertainment or cultural outlets. A few surveys did ask if reviews are an important factor in the spectators’ decisions about seeing a show, but none went further to determine how prospective theatergoers use, read, or base decisions on them. If reviews have the power to close shows, the logical presumption is that would-be spectators read them and then follow the recommendations of the reviewer. Where, however, is the proof of this contention?

The most useful audience survey was A Study of the New York Audience of the Broadway Theatre, prepared for the League of New York Theatres and Producers (now the Broadway League). This 1979 study divided the theater audience into four components according to theater-going habits. Among “Traditionalists,” described as “older” and “veteran theatregoers,” reviews were both a major source of information and a major influence. This group made up only 24% of the Broadway audience, however. The largest portion of the audience was the “Entertainment Seekers” who made up 35% and were the “oldest group” who’d been “attending theater over 10 years.” Reviews were neither a prime information source nor a major influence for this group, but they attended theater less often than the Traditionalists.

The group most nearly related in size and attendance frequency to the Traditionalists were the “Theatre Enthusiasts” who made up 23% of the audience. These were “younger” and “below average in years of attendance”; they “read reviews, but [are] not strongly influenced” by them.

The smallest group, 18% of the regular audience, were the “Dispassionate Theatregoers” who were also the “youngest group,” many of whom were “new patrons” of theater. The study found that reviews were a minor source of theater information for this group. In all, less than 20% of all theatergoers rated reviews a major influence on their choices.

In an audience survey by the League in 1990 at a performance of City of Angels, 46% of the respondents said they got their theater information from reviews, but only 34% said the reviews were a major influence on their decisions. The largest group, 58%, was influenced by “a friend”—in other words, by word-of-mouth. “A friend” was also the source of theater information for 66% of the polled audience.

Another interesting and useful study wasn’t an audience survey at all. In 1977, More magazine published a ten-year study of reviews by the then-major New York theater journalists to establish a statistical connection between the critical response and the length of a production’s run. The study hypothesized:

[I]f the critics had no power, then there would be no correlation between their reviews and the length of run. Shows panned by the critics would be just as likely to have long runs as they would be to close Saturday night. Conversely, if the critics had absolute power, then every play panned would close the morning after and every play raved would make a fortune for its backer.

In fact, they found that

when the critics expressed a strong negative or positive opinion about a play, there was a marked correlation with the length of run. Of all the pans written by all the critics that we examined, nearly three quarters of them were of plays that closed in less than 50 days.

The statistics for the opposite end of the scale weren’t so clear: only 32% of the rave reviews went to plays that ran 500 days or more. Where the reviews were mixed, the results were even less significant, giving the very clear impression that “it is at the extremes that [the reviewers] exercise what power they have.” The study determined that, taking all the reviews together, the difference between how well plays do when they’re panned and when they’re raved is significant. Significant enough, particularly at the negative end, to warrant the conclusion that the reviewers wield more power than they’re willing to admit. (Though there’s no concrete evidence to support this, it seems possible that theatergoers place more reliance on pans than raves, avoiding the former but not necessarily flocking to the latter. Of course, it’s easier to point to a show that was closed because of its reviews than to one that ran because of them, and though a short run’s obvious, a long run’s somewhat harder to define.)

There is, however, a flaw in the study’s logic. These statistics only prove an apparent correlation between the reviewers’ response and the length of the show’s run; it may be a case of post hoc, non propter hoc—the closings followed the bad notices without necessarily being caused by them. The authors didn’t consider the obvious possibility that producers, faced with bad notices and convinced of the reviewers’ influence, simply closed the shows. Neither did they entertain the possibility, however slim, that the shows ran or closed not because of the reviews at all, but because the potential audience decided independently that they were good or bad shows.

[Return to ROT for the conclusion of my report on the power of the reviewer. I’ll pick up with some examples of the marketing techniques that producers use to sell their shows in the face of poor notices. Look for Part 2 in the next few days.]
