04 October 2021

Computers and Actors, Part 1


[I’ve recently finished posting a series of articles on Rick On Theater on theatrical design and technology from a Special Section of American Theatre (see “Design & Tech: The Magic Of Design,” 9, 12, 15, 18, 21, and 24 September).  In my introduction to the first installment, I referenced an article I wrote called “Theater and Computers,” posted on ROT on 5 December 2009, shortly after I started this blog.

[Now, as a sort of companion post to “The Magic Of Design,” I’m posting a small collection of articles dealing with the interface of actors and acting with computers, CGI, and computer animation.  Some of this is straight-up fascinating and intriguing, but other aspects and implications are daunting—especially to actors.]

DREAMING THE IMPOSSIBLE AT M.I.T.
by Philip Elmer-Dewitt
reported by Robert Buderi/Cambridge 

[In “Theater and Computers,” I affirmed that the impetus for writing that post had been an article I’d read decades earlier in Time magazine “that reported on a new computer program, developed at MIT, that let playwrights test scenes on screen without hiring actors and a stage.”

[That report was “Dreaming the Impossible at M.I.T.” by Philip Elmer-Dewitt, which appeared in Time on 31 August 1987—and I’m posting it now on ROT as a sort of predecessor to the AT series, which is almost 30 years newer than the Time report (and six years newer than my own take on Elmer-Dewitt’s discussion).]

In the Media Lab, the goal is to put the audience in control.

What if television sets were equipped with knobs that let viewers customize the shows they watch? If they could adjust the sex content, for example, or regulate the violence, or shift the political orientation to the left or right? What if motion pictures were able to monitor the attention level of audiences and modify their content accordingly, lengthening some scenes while cutting others short if they evoke yawns? What if the newspapers that reach subscribers’ homes every morning could be edited with each particular reader in mind – filled with stories selected because they affected his neighborhood, or had an impact on his personal business interests, or mentioned his friends and associates?

There are a lot of “what ifs,” but none of these is mere futuristic fantasy. All of them, in fact, are the goals of research projects now under way at the Media Laboratory, a dazzling new academic facility at the Massachusetts Institute of Technology. The lab’s unique mission is to transform today’s passive mass media, particularly TV, into flexible technologies that can respond to individual tastes. Because of advances in computers, says Nicholas Negroponte [b. 1943], 43, the co-founder and director, “all media are poised for redefinition. Our purpose is to enhance the quality of life in an electronic age.”

Two years ago [i.e., September 1985], when the lab first opened its doors in Cambridge, Mass., the announced intention of “inventing the future” seemed like an impossibly vague undertaking. But Negroponte has made believers of much of the corporate and academic establishment. Bankrolled by more than 100 business and government sponsors, he has filled his $45 million facility with a group of 120 gifted researchers that includes some of the brightest and quirkiest minds in computer science: Marvin Minsky [1927-2016], dean of artificial-intelligence research; Seymour Papert [1928-2016], disciple of Child Psychologist Jean Piaget [1896-1980] and a leading advocate of computerized education; Alan Kay [b. 1940], one of the most influential designers of personal computers.

Some of the projects are still in the visionary stage, but several investigative teams have come up with working products and prototypes. In many cases, research relating to electronic media has led to spin-offs that could have wide applications for both individuals and businesses. Consider the following:

The lab’s Conversational Desktop is a voice-controlled computer system that acts like an automatic receptionist, personal secretary and travel agent – screening calls, taking messages, making airline reservations. “Get me two seats to the Bahamas,” says Research Scientist Chris Schmandt [b. 1952] to his computer. “When do you want to go?” replies the machine. 

NewsPeek is a selective electronic newspaper made of material culled daily from news services and television broadcasts. By sliding their fingers across the screen of a computer terminal, viewers can ask to see lengthier versions of particular stories, roll selected videotapes or call up related articles. The computer remembers what it has been asked to show and the next day tailors its news gathering to search for similar stories. Says Associate Director Andrew Lippman [b. 1949]: “It’s a newspaper that grows to know you.”

The lab has developed the world’s first computer-generated freestanding hologram – a three-dimensional image of a green Camaro sedan suspended in midair. Unlike most holographic images, which are put onto flat photographic plates, the Camaro is recorded on a concave plate and projected into the air by laser beams. The hologram was designed with funding from General Motors, which still painstakingly builds scale models of new car designs out of clay. In the future, GM and other automakers may be able to use holograms to see what a car will look like before it is actually manufactured. Eventually, such images may be made by laser-age copying machines for a few dollars apiece.

In the field of fine arts, the world-class music research center in the lab has already produced the Synthetic Performer, a computerized piano-playing accompanist. The system not only plays along with soloists but also adapts to changes in their tempo and cadence without losing a beat. The project is part of an ongoing effort to explore the mysteries of harmony and composition by teaching music appreciation to computers. 

Negroponte began raising funds for the Media Lab in 1980 with the help of Jerome Wiesner [1915-94], former M.I.T. president. The two men sought out publishers, broadcasters and electronics manufacturers whose businesses were being transformed by the advent of VCRs, cable television and personal computers. Then they hinted broadly that the faculty at M.I.T. knew precisely where all this was headed. Money came in from such leading sponsors as IBM, CBS, Warner Communications, 20th Century Fox, Mitsubishi, Time Inc. and the Washington Post. Sponsors can send scientists and other observers to the Media Lab and make commercial use of any of the facility’s research. Though many of the projects may never yield commercial or educational applications, only one company, Toshiba, has failed to renew its funding.

Visitors to the lab, a sleek four-story maze of gadget-filled work areas, are assaulted by strange sights. In a 64-ft.-high atrium, 7-ft.-long computer-controlled blimps may be flying overhead – part of a project to develop stimulating science activities for elementary and high schools. In another area visitors encounter computers that can read lips. After spending three months at M.I.T. last year, Stewart Brand [b. 1938], the counterculture guru who originated the Whole Earth Catalog [1968-71], was impressed enough to write a flattering book titled The Media Lab, which will be published next month by Viking Press [September 1987].

But the lab’s high-tech razzle-dazzle masks plenty of serious business. Investigators are experimenting with new forms of teleconferencing. One idea involved projecting video images of individuals onto plaster casts of their faces. The resulting “talking heads” were so lifelike that people using the system felt they were “meeting” with colleagues who were actually in another city. A major effort is also being made to enhance computer animation. Assistant Professor David Zeltzer [b. 1949], building on research he started at Ohio State, is developing new ways of simulating human figures and movement. One application would allow playwrights to see just how scenes would look without having to hire live actors to try them out.

                                    David Zeltzer’s “Skeletal Animation System,” ca. 1982

Within the Media Lab there is a lurking fear that the research might prove too successful. Some of the scientists, who point to TV’s mesmerizing impact, worry about creating new media so powerfully seductive that they might keep many viewers from venturing into the real world. Minsky, for one, has given that a lot of thought. “Imagine what it would be like if TV actually were good,” he told Brand. “It would be the end of everything we know.” Yet he and his groundbreaking colleagues seem more than willing to take that risk.

[Philip Elmer-DeWitt (b. 1949) is a writer and editor and was Time magazine’s first computer writer and for 12 years its science editor.  He produced much of the magazine’s early coverage of personal computers and the Internet.  He retired from Time Inc., which he joined in 1979, in 2008 and is currently writing a daily blog about Apple Inc.

[I’m sure all you ROTters caught the comment that exercised actors—and even directors and designers: the statement in the last sentence of the penultimate paragraph reporting on the work of David Zeltzer, the expert in computer graphics and 3D animation.

[As you might imagine, Zeltzer’s now-primitive computer theater was seen as the nose of a very scary camel inside the tent.  If playwrights figure they don’t need actors, directors, and designers to see their work come alive, what might ensue?  (Imagine: holographic actors performing on a CGI set being reviewed by robot critics!  Oy vey iz mir!)  We could all be out of business permanently.]

[The trepidation I recounted among actors in the face of the MIT experiments Philip Elmer-DeWitt reported in the article above is evident in Rick Lyman’s New York Times article on 8 July 2001, “Movie Stars Fear Inroads By Upstart Digital Actors.”  Computer animation had advanced considerably in the 14 years since the Time magazine report.]

Aki Ross, a versatile young actress who stars in a movie to be released this week, rakes slender fingers through wind-rippled hair. The light contracts her pupils and glistens on sweat-streaked cheeks, as her eyes sparkle with the eerie illusion of intelligence.

It feels eerie because Aki is composed only of pixels, and she is created and manipulated by a computer animator who works his mouse like a weaver at his loom. Aki makes her bid for stardom as the heroine of “Final Fantasy: The Spirits Within,” a fully digital film to be released on Wednesday [11 July 2001] by Columbia Pictures that is loosely based on a popular series of video games [Final Fantasy, a role-playing video game developed by Square Co., Ltd., in 1987].

“The eyes are one of the single biggest things that make people alive,” said Andy Jones, the animation director. “We’re moving the eyes around to make the character seem like it’s thinking and feeling for itself. Like there’s a soul.”

In Hollywood, many people believe digital production and distribution will revolutionize the way movies are shot, edited and sent to a multiplex near you. Already, the computer has made it possible to convincingly recreate ancient Rome and a dogfight over Pearl Harbor. Until now, just about all that has remained beyond the computer’s grasp has been the actor’s realm: too nuanced, too human, too unknowable for the animator’s skill.

But with movies like “Final Fantasy,” filmmakers are beginning to create photo-realistic computer characters that, at least in fleeting moments, will try to convince the audience that actual humans are on the screen.

It is called photo-realistic animation, and “Final Fantasy” promises to carry it further than any movie has.

Not everyone is overjoyed.

“I am very troubled by it,” said Tom Hanks, who does not like to think that his carefully chosen roles and hard-fought performances can be tampered with by after-the-fact computer auteurs, or that someone might make unwanted use of his digital self. “But it’s coming down, man. It’s going to happen. And I’m not sure what actors can do about it.”

The specter of the digital actor – a kind of cyberslave who does the producer’s bidding without a whimper or salary – has been a figure of terror for the last few years in Hollywood, as early technical experiments proved that it was at least possible to create a computer image that could plausibly replace a human being. But as “Final Fantasy” makes its way into theaters – the first of what promises to be a string of movies trying to put this challenge to the test – many wonder if the threat is as real as it once seemed, or if it simply takes computer animation down a fruitless cul-de-sac.

“I believe that I have used more digital characters than anyone,” said George Lucas, whose Jar Jar Binks, a virtual character in [1999’s] “Star Wars: Episode 1 – The Phantom Menace,” helped raise concerns in Hollywood. “But I don’t think I would ever use the computer to create a human character. It just doesn’t work. You need actors to do that.”

Steven Spielberg put it even more succinctly: “It’s a nonissue.”

But this has not alleviated the concerns of actors like Mr. Hanks, who are suspicious of the ways their images could be used in photo-real computer animation. And the Screen Actors Guild, which has closely monitored the use of digital actors since the emergence of Jar Jar Binks, says it will do so with even more vigor as photo-real characters actually begin to appear on the screen.

At Harbor Place, a new skyscraper on the downtown Honolulu waterfront, where Square Productions [a subsidiary of Square Co.], famous for its trend-setting video games, has set up a filmmaking division, the computer’s aquarium-blue glow filled a small cubicle on the 16th floor. It is as close to halfway between Tokyo and Hollywood as you can get without treading water.

“There is a Japanese saying that comes from the art of dollmaking, a sort of catch phrase, that the face is the life of the doll,” said Hironobu Sakaguchi, a celebrated Japanese video-game creator who is making his feature-film directing debut with this movie.

For now, it is impossible for computer-generated films to be made without actors. Actors are often used to capture the movement of characters, and as yet, no one has been able to figure out how to do without the voices of actors like James Woods, who personalizes the quasi villain General Hein in “Final Fantasy.”

The greater concern is not that digital actors will replace movie stars – even the most optimistic projections of the technology put that prospect far in the future – but that the technology may make it easier for the unscrupulous to make improper use of actors’ images (or of digital creations that are strikingly reminiscent of celebrities).

So far, the most significant legal challenge came in 1999, from Robyn Astaire, the widow of Fred Astaire [1899-1987]: she sued the Fred Astaire Dance Studios for using images from her late husband’s films in advertisements. Her suit failed, but she took her case to the California Legislature, which passed a bill [Astaire Celebrity Image Protection Act, 1999] making clear that the rights to celebrity images remained with the heirs for 70 years after the celebrity’s death.

Columbia Pictures showed about 17 minutes of “Final Fantasy” to some entertainment writers in the spring, and has only recently begun to screen the finished film for industry audiences. Many who have seen parts of the film have reacted with a mixture of astonishment and disappointment. “When it works, it works,” one rival studio marketing executive said. “And it works more often than I thought it would.”

[According to the movie website IMDb, due to the poor box-office performance of Final Fantasy, the first computer-animated motion picture with photo-realistic characters, Square Pictures announced its withdrawal from the film business in October 2001.]

Since 1982, when computer-generated images [were first used in entire sequences*] in a feature film with “Tron,” a not-so-secret goal of many computer animators has been to create convincingly lifelike human characters. Many have even dreamed of using the technology to bring long-dead stars back to life or, more intriguingly, to create virtual images of a performer in youth and then graft that digital skeleton over the shape of that same actor, now middle-aged or older. The prospect delights many directors, who dream of an endlessly pliable performer whose work can be digitally tweaked to generate exactly the desired effect.  

[*statement adjusted pursuant to a correction that appeared in the New York Times a week after this article was published.]

“Filmmaking is always going to be a collaborative art,” the director Ron Howard said. “But we are getting to the point where the director will have even greater control over the look and feel of the film, even down to the individual performances.”

And producers can see the benefits of a performer who requires no salary, no days off, no coterie of agents and publicists, one who could be called into service at any time to promote or endorse anything, with every nickel going into the producer’s pocket. The topic will be explored intriguingly in “Simone,” a film currently in production for release in 2002 from the writer-director Andrew Niccol (“The Truman Show” [1998]). Al Pacino plays a movie producer whose star storms off the set. He responds by secretly replacing her with a digital actress. (The filmmakers are being coy about whether Simone will be played in the film by a live actress, a digital one or some combination of the two.) The problem is that he succeeds too well. Simone – Sim One, get it? – becomes an overnight sensation, and the producer must prolong the illusion that she is an actual person [see also  “‘Virtual Superstars?’” in “Computers and Actors,” Part 2, coming 7 October].

[After seeing the photo-realism of the computer-generated actors in Final Fantasy, IMDb reports, the producers of Simone (styled on the poster and in advertising as S1mØne) started to lean toward the idea of having Simone actually be a computer-generated actress.  However, after opposition from the Screen Actors Guild (now merged with the American Federation of Television and Radio Artists to form SAG-AFTRA), arguing that replacement of actors in all movies would be the next logical step, the idea was abandoned.  Ultimately, Simone was played by (human) actress Rachel Roberts.  The film received mixed reviews from critics and was considered a minor box-office hit.]

The technological tools that might allow computer animators to create convincing digital actors would also give producers and directors the ability to alter or augment a performance, whether the actor likes it or not. In the climax of [1997’s] “Contact,” the director Robert Zemeckis wanted a long, emotional close-up of the star, Jodie Foster, as she stared into a visionary Eden beyond the stars. But in the last few seconds of the best take, one of Ms. Foster’s eyebrows involuntarily twitched upward, Mr. Zemeckis explained. “So I just went in and moved the eyebrow,” he said.

While this instance falls far short of actually reaching in and creating a performance out of whole cloth, it certainly points in that direction.

“I know Tom is worried about it,” said Mr. Zemeckis, who has frequently collaborated with Mr. Hanks, including on the Oscar-winning “Forrest Gump.” In that 1994 film, Mr. Zemeckis used what were then the latest digital tools to implant the actor’s image into actual historical scenes, though not to alter Mr. Hanks’s performance.

“But I’ve taken to making digital scans of all of the actors in my movies,” Mr. Zemeckis said. “I know some are worried about what uses will be made of it, but think of what we could have – complete digital versions of actors at various stages in their life.”

Many of the biggest leaps in computer animation are introduced at Siggraph, the annual gathering of the nation’s computer graphics and animation specialists. One of the most talked-about efforts of recent years was a film shot for Seattle’s new rock ’n’ roll museum. [The Artist’s Journey: Funk Blast, 2000; the museum, now called the Museum of Pop Culture, was the Experience Music Project.] The film appears to be a performance by a young James Brown. But it isn’t. The filmmakers cunningly superimpose the performance of the current-day Mr. Brown, who is 68, over a digital skeleton of the performer as a young man. [Brown died in 2006 at age 73.]

“The next hurdle, the next step, will be a soliloquy or a dramatic performance,” said Joshua Kolden, who worked on the James Brown project. “It won’t be long.”

But it will be trickier, he and his colleague Andre Bustanoby agreed, because people are accustomed to the signals that tell us whether someone is sincere, threatening, flirtatious, sober or plain off his noodle. “The problem with human faces is that you get just a little bit off, and it immediately becomes very disturbing,” Mr. Bustanoby said.

Eventually, if animation technology and artistry continue to improve, it will be possible for directors to reach deeper into a filmed performance – doing more than simply unarching an eyebrow. If preview audiences didn’t like the ending where the hero died, it’s easily fixed. And cheaply, too. The hero can just be digitally resuscitated and sent off into a virtual sunset.

“The advantage of computer-graphic actors is that they don’t do any complaining,” Mr. Sakaguchi said. “The vision I have is to take the characters that we have in this movie and basically help them be viewed as real actors and actresses. And so, we sort of become a talent agency.”

Well, exactly.

Once the goal of creating a photo-real human character is reached – if for no other reason than to show that it is indeed possible – many computer animators believe that the next generation of animated films will move away from photo-realism. Already, many projects under way are tending toward a warmer, almost impressionistic look.

“Once you sit down in front of that box, it’s infinity in there,” said Neil Eskuri, who was the digital effects supervisor for Disney’s [2000] “Dinosaur,” a hybrid of photo-real computer animation and live-action footage. “Given enough time, you can make that box do anything.”

*  *  *  *
THE ANCIENT ART OF KABUKI MADE NEW, WITH COMPUTER ANIMATION
by Micheline Maynard

[In that same 2009 post, I cited as an example of the collision of computers and theater a kabuki production by Koji Orita.  From the “Circuits” section (sec. G) of the New York Times of 2 May 2002, here’s a report on that experiment in computer graphics for the stage.]

KOJI ORITA was looking for a way to stage a kabuki play that tells of a mythical creature who escapes from a river and becomes the companion of a bumbling fisherman. He found his solution watching the movie “Terminator 2.”

Inspired by the computer graphics techniques used to turn Arnold Schwarzenegger into a cyborg, Mr. Orita created an animated creature that sang, ate and conversed onstage during the play, “Aki No Kappa,” performed in March at the National Theater of Japan [9-24 March 2002; the first time in Kabuki history CG animations were used in a performance].

The show was a milestone for kabuki, a 600-year-old art form [kabuki actually dates from the early 1600s, making it roughly 400 years old] in which men play all the parts and the sets and the acting style are guided by tradition [see my post “Kabuki: A Trip to a Land of Dreams,” 1 November 2010]. “We wanted to show an audience that we could create a new type of play, even within the old arts of Japan,” said Mr. Orita, who has been a kabuki director for 30 years.

“Aki No Kappa” had won a competition for new kabuki plays in 1982 but had never been staged. The script did not call for the water creature, or kappa, to be seen by the audience. It was meant to be imaginary, like the giant rabbit in “Harvey.” But Mr. Orita wanted it to be visible to the audience, like the ghost of Hamlet’s murdered father.

Years before, Mr. Orita had visited the Universal Studios theme park near Los Angeles, where technicians explained special effects techniques used in movies. But the kappa could not be filmed because it had to react to the other actors onstage. Mr. Orita began scouting in Tokyo for computer technicians familiar with real-time motion systems, which combine three-dimensional computer graphics with animation to make characters that can move and speak.

First, the theater’s art department made a detailed drawing of the kappa. The picture was scanned into a computer and translated into a series of mathematical calculations.

Mr. Orita, meanwhile, selected an actor from the Kabuki Company to play the kappa offstage. The actor was wired with sensors at 12 key joints on his body. The sensors fed movement data into the computer, which transmitted it to matching points on the cartoon version of the kappa. The information created a hologram that was projected on a 67-inch screen. Each time the actor moved, the kappa moved, too.

Getting from concept to live performance required some ingenuity and cooperation from the cast. It was the first time the computer experts had worked with kabuki actors, said Mr. Orita, who called the play “a test case.” Despite the complexity of the task, he said, the addition of the kappa required only one more day of rehearsal than other plays.

Along the way, the computers occasionally froze, leaving the kappa in suspended animation. And more than once, an error message was transmitted to the screen instead of the kappa. When that would happen, an actor would jump in front of the screen, arms outstretched, as if to keep the audience from seeing the mishap.

The play’s three-week run was glitch-free and almost a sellout. And Mr. Orita says the kappa will come back to the National Theater. He plans to use the kappa and its wife and child (who are not depicted in the play) as the virtual guides for an interactive kabuki education center that the theater expects to open next year. [In 2003, the Traditional Performing Arts Information Center opened behind the National Theater.]

Mr. Orita said that computerized characters were likely to appear in future kabuki shows. Already the technology is more refined, meaning that actors will not need to be tethered to sensors, he said. He believes that the animated characters will someday be able to move and speak on their own.

The new methods have enlivened an ancient tradition, Mr. Orita said, providing new proof that “kabuki is a performance art that is very much alive and kicking.”

[It’s one of the attributes of kabuki that alongside its ancient and codified acting and production practices, called kata, it allows for and even welcomes modern techniques—although computer animation is by far the most radical adoption of which I’d ever heard.

[In addition to “A Trip to a Land of Dreams,” I have one more kabuki article on ROT: “Grand Kabuki (July 1985),” a review of the 1985 performances of the Tokyo company at the Met, 6 November 2010.  There are also some reposted pieces from other authors on the subject: “Two Kabuki Reviews (2014)” by Charles Isherwood (New York Times) and Joan Acocella (New Yorker), 20 January 2018, and “‘Kabuki: Inside the Japanese Artform with its Biggest Star, Ebizo’” by Jon Wertheim (60 Minutes), 1 May 2020.]

