Lights, camera… cloud: How film is spreading its wings – by Zoe Kleinman

Jaylah and Scotty from Star Trek Beyond

We’re used to streaming TV and films to our digital devices over the cloud these days, using services such as Netflix, Amazon Prime, iPlayer and YouTube. But cloud computing is also having a big impact on how this entertainment is being created. Take US-Canadian visual effects studio Atomic Fiction, for example. It worked on films such as Star Trek Beyond, Deadpool, and upcoming Brad Pitt movie Allied.

But Laurent Taillefer, the firm’s computer graphics supervisor, believes his company would not have been able to compete with larger studios without access to outsourced cloud computing power.

Rendering – the process of assembling all the component elements of a film (video, audio, graphics, filters and so on) into one final version – can take an agonisingly long time and requires vast computing power, he says. “The number of shots we are dealing with… and the level of detail of their contents – the photo-real reconstruction of Manhattan for Robert Zemeckis’ movie The Walk, for example – require a computational power that would imply a massive investment which would make it impossible for a studio like ours to be competitive,” says Mr Taillefer.

Brad Pitt and Marion Cotillard
Image caption: Brad Pitt and Marion Cotillard’s new film Allied opens in November

So Atomic Fiction uses a cloud-rendering service called Conductor, which gives the firm access to turbo-charged computing power as and when it needs it.

“For Deadpool,” he says, “some shots of the city had so much detail in the models and textures that rendering the final images required more memory than available on standard computers. Cloud machines offered us that missing power, making extremely complex shots possible to render.”

‘Very enticing’

These cloud-based services – Google-owned Zync and Rayvision are two others – and their “pay-for-what-you-use” business models, are giving smaller studios the chance to compete with the biggest companies in the world. “A lot of businesses like the scalability of the cloud,” says Simon Robinson, chief scientist at The Foundry, a firm that makes software tools for the film industry.

“If you know you can produce something that a very large company can do – that’s very enticing. It gives you that combination of scalability and accessibility to play up there with the big firms.”

Drawing of sci-fi monster from film Is This Heaven
Image caption: The Foundry collaborates with filmmakers via the cloud, for instance with Marauder Film on this sci-fi project, Is This Heaven

Before the cloud, some small studios found it difficult to handle the huge file sizes the switch to digital film-making entailed. The processing power required to create 21st Century film special effects is “up there with supercomputing”, Mr Robinson says.

For example, a film in production can grow to a petabyte of data – the equivalent of 1,000 one-terabyte hard drives. And all this data needs to be moved around, manipulated, uploaded and downloaded by the various teams involved in the stages of movie post-production. So the benefits of sticking it somewhere remote and secure, yet accessible, may seem obvious.
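To put that petabyte in perspective, here is a rough sketch of the transfer time it implies; the 10 Gbit/s link speed is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope: how long does it take to move a petabyte
# of production data over a fast network link?

def transfer_time_hours(data_bytes: float, link_bits_per_sec: float) -> float:
    """Ideal transfer time in hours, ignoring protocol overhead and congestion."""
    return data_bytes * 8 / link_bits_per_sec / 3600

PETABYTE = 10**15  # 1 PB = 1,000 TB (decimal units)

# Even a dedicated 10 Gbit/s line, running flat out, needs over nine days:
hours = transfer_time_hours(PETABYTE, 10 * 10**9)
print(f"{hours:.0f} hours ({hours / 24:.1f} days)")  # 222 hours (9.3 days)
```

The real figure would be worse still, since production links are shared and transfers rarely run at line rate.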

‘Complete lockdown’

But while the public cloud giants – Amazon, Microsoft and Google – already have vast data centres with rentable capacity, Hollywood studios have been slow to make use of this “public space”, preferring instead to build their own cloud infrastructures. Why? One reason is that the studios have invested large sums in their own private data centres and private clouds, and are reluctant to write off that investment, despite cheaper alternatives becoming available.

Security is another concern.

Hollywood director King Vidor and wife Eleanor Boardman
Image caption: Hollywood technology has changed a lot since King Vidor (left) directed films

“As you can imagine the film industry is highly paranoid about security and data,” says Mr Robinson. “The security that a lot of the cloud vendors can offer now is as good as anything else… but what people worry about comes back to our old friends the humans – mistakes and lapses that humans make.”

Dr Richard Southern, senior lecturer in computer animation at Bournemouth University, agrees, saying: “In our crime-focused world, studios are in complete lockdown. Take [visual effects company] MPC, which is working on the Marvel films. No way would they permit a public system to be used in their production management.”

Another concern is the speed and reliability of cloud networks.

Technician working on scene from Gravity
Image caption: A designer for the visual effects firm Framestore works on a scene from the film Gravity

“Let’s say you are rendering at 30 frames per second,” says Dr Southern. “Each frame can be up to 100 megabytes. So the amount of data you are transferring becomes completely unmanageable if the network you are on is poor.” This is why many studios with deeper pockets have decided to keep the data in-house and build their own “server farms”.
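Dr Southern’s figures translate into a simple bandwidth estimate, sketched below:

```python
# Dr Southern's numbers: 30 frames per second, up to 100 MB per frame.
fps = 30
frame_mb = 100

# Sustained throughput needed to keep up, in gigabits per second:
throughput_gbit = fps * frame_mb * 8 / 1000  # megabits -> gigabits
print(f"Sustained link needed: {throughput_gbit:.0f} Gbit/s")  # 24 Gbit/s
```

That is well beyond an ordinary office internet connection, which is exactly why a poor network makes the workflow unmanageable.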

And uploading and downloading data in the huge quantities demanded by a film project still takes time, and that time would be billed for by a cloud provider, Mr Robinson adds. “It’s still a slight barrier today. We’re just near the edge now where people can say: ‘I can either go and buy [my own hardware] or I can go to large [cloud] vendors.’ It’s almost worth their while doing it, but the cost is still such that they often say: ‘Ah forget it, I’ll just buy my own.’”

Entire Studios in the Cloud: The Future of Entertainment and Cloud-Computing Adoption – by Maurice Patel

In a scene from the recent film The Walk, artist Philippe Petit, portrayed by Joseph Gordon-Levitt, takes his first steps onto a high wire strung between the Twin Towers of the World Trade Center. Clouds glide gently past his feet as the audience experiences the dizzying heights of drama.

Based on a true story of the ultimate high-wire stunt in 1974, director Robert Zemeckis’ film is a monumental achievement in visual effects production. Not too coincidentally, visual effects studio Atomic Fiction used technology in the cloud to create these epic scenes.

As a whole, the entertainment industry has been slower to adopt cloud computing than other major industries. Security has been a major concern, as has cost and performance—especially given the unique requirements of the entertainment industry, where file sizes tend to be large and data storage needs can quickly reach terabytes. On the other hand, large-scale computing has become so essential to movie production that many visual effects companies have set up their own private data centres (render farms), typically comprising hundreds to thousands of servers. However, industry attitudes toward the cloud have recently started to change.

Atomic Fiction’s Kevin Baillie is an early pioneer in the use of cloud technology for movie production. He attributes much of the challenge to the cost of transitioning from the current infrastructure and to concerns about security. But for him, the benefits are sky-high.

Behind the scenes of visual effects production for The Walk, starring Joseph Gordon-Levitt. Courtesy Atomic Fiction.

“From a business perspective, one of the biggest advantages of the cloud is the cost savings, but there actually ends up being a creative advantage, too,” Baillie says. “On The Walk, we saved about 50 percent over what we would have spent using traditional infrastructure. And that’s just if we leased 600 servers, which is the capacity we needed to hit the deadlines and get the turnaround that the artists needed. Instead of electricity, people, and leases, we only paid for what we used.”

The savings can be greater still when compared with buying the computer equipment outright rather than leasing it. Cloud services can substantially reduce both the capital expenditure and the fixed costs of a visual effects company. They can also get things done a lot faster. According to Baillie, a scene from The Walk that would take 10 hours to get back using 100 computers could just as easily be done in an hour on 1,000 computers, and for the same cost. “There’s no monetary penalty since you’re getting billed for a cumulative amount of time that was spent processing a task and not for the computers themselves,” he says. “You can actually get the results back much quicker when the ideas are still fresh in an artist’s head.
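Baillie’s 100-versus-1,000-machine point can be put in a few lines of arithmetic; the hourly rate below is a made-up placeholder, not a real cloud price:

```python
# Cloud rendering bills by cumulative machine-hours, so scaling out
# trades wall-clock time for parallelism at zero extra cost.

RATE_PER_MACHINE_HOUR = 0.50  # hypothetical $/machine-hour, for illustration

def render_cost(machines: int, hours: float) -> float:
    """Total bill for a job that occupies `machines` for `hours` each."""
    return machines * hours * RATE_PER_MACHINE_HOUR

slow = render_cost(machines=100, hours=10)   # overnight turnaround
fast = render_cost(machines=1000, hours=1)   # back within the hour
assert slow == fast  # same spend, 10x faster feedback for the artists
print(f"Either way: ${slow:,.2f}")  # Either way: $500.00
```

The asymmetry only appears with owned hardware, where 1,000 idle machines cost money whether or not a job is running.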

“When we explain this to people, you can see the light bulbs start to turn on, and they think, ‘Oh, going bigger is better,’” Baillie continues. “There is also a lot of benefit to having the improved creativity that comes from artists actually being able to work and iterate because they’re not sitting around waiting.”

Seeing the benefit doesn’t always equate to a massive exodus to the cloud, though. The infrastructure disruption and lack of expertise create significant hurdles. But many companies are working to provide solutions to these challenges.

“It does take time to create a massively scalable cloud implementation, which, to me, is the whole point of it,” Baillie says. “It’s not only years’ worth of development; it also takes web technologies that are really unfamiliar to people in the entertainment industry. It’s like speaking Greek to them. If you don’t work at Airbnb or Yelp, they’re not going to know about Docker and how containers work, which are key concepts to deploying applications on the cloud. But now there are tools in the industry, including Conductor [Atomic Fiction’s platform that the company has now spun off as a product], that can help people get over that hump and take advantage of all the awesome technologies without having to suffer the learning curve.”
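As a rough illustration of the container idea Baillie mentions, the sketch below builds (without running) a `docker run` command that would execute one render frame inside an isolated image; the image name, mount path, and renderer command line are all hypothetical:

```python
# Sketch of the container pattern: the renderer and its dependencies are
# baked into an image, so any cloud machine can execute a frame of work.
# Everything below the `docker` CLI itself is invented for illustration.
import shlex

def docker_render_command(image: str, scene: str, frame: int) -> str:
    """Build (but do not execute) a `docker run` invocation for one frame."""
    args = [
        "docker", "run", "--rm",             # throwaway container per task
        "-v", "/mnt/project:/work",          # mount the shared scene data
        image,
        "render", f"/work/{scene}", "--frame", str(frame),
    ]
    return shlex.join(args)

print(docker_render_command("studio/renderer:2016.1", "shot_042.scn", 101))
```

A scheduler such as the Conductor platform described above would emit thousands of such isolated tasks rather than one, which is what makes the approach scale.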

And then there’s the biggest elephant in the room: security. Nothing is more under wraps than a film and its assets, considering the concerns about potential leaks before a release. It literally can make or break a production. The Motion Picture Association of America (MPAA) publishes guidelines on security compliance for cloud deployments, but this does little to alleviate the unease in the industry when it comes to security. However, there is a growing awareness that public cloud providers such as Amazon, Google, and Microsoft are spending far more time, money, and resources on data security and protection than even the largest IT departments in entertainment. As the Sony Entertainment hack in December 2014 showed, even the data behind the corporation’s firewall is far from invulnerable.

Baillie acknowledges this and agrees that the real security problem is in the traditional setup.

“The more we work with the cloud, the more I become convinced that a good cloud implementation can actually be even safer than the vast majority of local installations,” he says. “I feel that with a lot of local installations, there’s a false sense that because everything is inside the company’s walls and because they’re a good company, everything is going to be safe. That’s absolutely not the case.”

As the cloud continues to become more and more affordable and bandwidth continues to ramp up, smaller studios will be able to compete more effectively with bigger studios. The “little guys” will soon be able to access data centres as large as those of Digital Domain, Weta, or Industrial Light & Magic at a fraction of the cost. And when it comes to replacing equipment, the bigger studios will increasingly need to evaluate whether it still makes financial sense to buy the computer hardware themselves.

“Right now, it’s still mainly the midsize and larger companies that can access the kind of bandwidth needed to make the cloud work really well and quickly for their artists,” Baillie says. “Within a few years, that’s going to change. It’ll be the everyday Joe who will have this kind of creativity-enhancing access. When that happens, it’s really going to democratize computing and visual effects because Joe in his garage will have the ability to render files of the same size and complexity as Industrial Light & Magic or others.”

And it’s not just who is creating the visual effects; it’s how the entertainment industry will approach the entire production process. This trend is already clear in the industry’s rapid adoption of cloud-based tools for production management, collaboration, and review and approval, such as Autodesk Shotgun and RV.


“The big, big picture for the industry is, we’re going to be moving everything to the cloud, and, by that, I mean every application is going to be running in the cloud,” Baillie says. “You’ll no longer have to synchronize data to the local sites. It will save people the cost of workstations, storage, render farms, and all the security infrastructure. I think it’s pretty amazing because it will enable companies to start up an entire studio at a snap of the fingers—fully armed and operational within a day and with a state-of-the-art studio structure.”

How James Cameron’s Innovative New 3D Tech Created Avatar – by Anne Thompson

Director James Cameron is known for his innovations in movie technology and his ambition to make CG look and feel real. Avatar put his reputation to the test. How did Cameron make blue alien creatures look real on the big screen?

Director James Cameron holding an antique stereoscope.

The 280,000-square-foot studio in Playa Vista, Calif., has a curious history as a launching pad for big, risky ideas. In the 1940s, Howard Hughes used the huge wooden aeroplane hangar to construct the massive plywood H-4 Hercules seaplane—famously known as the Spruce Goose. Two years ago, movie director James Cameron was in the Playa Vista studio at a crucial stage in his own big, risky project. He was viewing early footage from Avatar, the sci-fi epic he had been dreaming about since his early 20s. Cameron’s studio partner, Twentieth Century Fox, had already committed to a budget of $200 million (the final cost is reportedly closer to $300 million) on what promised to be the most technologically advanced work of cinema ever undertaken. But as Cameron looked into his computer monitor, he knew something had gone terribly wrong.

The film—although “film” seems to be an anachronistic term for such a digitally intense production—takes place on a moon called Pandora, which circles a distant planet. Jake Sully, a former Marine paralyzed from the waist down during a battle on Earth, has travelled to this lush, green world teeming with exotic, bioluminescent life to take part in the military’s Avatar program. The human settlers are interested in mining Pandora’s resources but can’t breathe its toxic atmosphere, so to help explore the moon and meet with the native Na’vi who live there, Sully has his consciousness linked with a genetically engineered 9-foot-tall human-alien hybrid.

Cameron wrote his first treatment for the movie in 1995 with the intention of pushing the boundaries of what was possible with cinematic digital effects. In his view, making Avatar would require blending live-action sequences and digitally captured performances in a three-dimensional, computer-generated world. Part action–adventure, part interstellar love story, the project was so ambitious that it took 10 more years before Cameron felt cinema technology had advanced to the point where Avatar was even possible.

The scene on Cameron’s screen at Playa Vista—an important turning point in the movie’s plot—showed the Na’vi princess Neytiri, played by Zoë Saldana, as she first encounters Sully’s Avatar in the jungles of Pandora. Everything in the forest is luminous. Glowing sprites float through Pandora’s atmosphere, landing on Sully as Neytiri determines if he can be trusted. Playing Sully is Sam Worthington, an Australian actor whom Cameron had plucked from obscurity to play the movie’s hero. Cameron was staring directly into Worthington’s face—or, rather, he was looking into the face of a digitally rendered Worthington as a creature with blue skin and large yellow eyes—but he might as well have been staring into a Kabuki mask.

The onscreen rendering of Worthington was supposed to be a sort of digital sleight of hand—a human character inhabiting an alien body so that he could blend into an alien world, played by a human actor inhabiting a digital body in a digital world. To make the whole thing work, Worthington’s performance, those subtle expressions that sell a character to the audience, had to come through the face of his Avatar. But after millions of dollars of research and development, the Avatar‘s face was not only lifeless, it was downright creepy. It “scared the crap out of me,” Cameron recalls. “Horrible! It was dead, it was awful, it wasn’t Sam. God, I thought. We’ve done everything right and this is what it looks like?”

The reaction Cameron was feeling has a name. It’s called the uncanny valley, and it’s a problem for roboticists and animators alike. Audiences are especially sensitive to renderings of the human face, and the closer a digital creation gets to a photorealistic human, the higher expectations get. If you map human movements and expression to cute furry creatures that dance and sing like people, then audiences willingly suspend disbelief and go along with it. (Think of the penguins in Happy Feet.) But if you try to give a digital character a humanoid face, anything short of perfection can be uncanny—thus the term. Sometimes audience unease is to a character’s advantage; in The Lord of the Rings, the creature Gollum was supposed to be unsettling. But Cameron was looking for empathy, and in the first footage, that’s not what he got.

Why is the computer-generated face of a blue, cat-eyed human-alien hybrid so important? Well, for one thing, lots of money is riding on it. But so, to an extent, is James Cameron’s stature as an unstoppable force in Hollywood. Cameron has built up enormous fame and power based on his reputation as a technical innovator—pushing the science and technology of modelmaking, digital animation and camera engineering. But Cameron is perhaps even more famous as the industry’s biggest risk-taker, which might have made him a lot of enemies if his risks hadn’t been so spectacularly rewarded in the past. In 1997, the film Titanic taught Hollywood a powerful lesson in Cameronomics: The director’s unquenchable thirst for authenticity and technological perfection required deep-sea exploratory filming, expensive scale models and pioneering computer graphics that ballooned the film’s budget to $200 million. This upped the ante for everyone involved and frightened the heck out of the studio bean counters, but the bet paid off—Titanic went on to make $1.8 billion and win 11 Academy Awards.

A unique hybrid of scientist, explorer, inventor and artist, Cameron has made testing the limits of the possible part of his standard operating procedure. He dreams almost impossibly big and then invents ways to bring those dreams into reality. The technology of moviemaking is a personal mission to him, inextricably linked with the art. Each new film is an opportunity to advance the science of cinema, and if Avatar succeeds, it will change the way movies are captured, edited and even acted.

Filmmakers, especially those with a technical bent, admire Cameron for “his willingness to incorporate new technologies in his films without waiting for them to be perfected,” says Bruce Davis, the executive director of the Academy of Motion Picture Arts and Sciences. It adds to the risky nature of Cameron’s projects, but his storytelling has reaped enormous benefits. There’s a term in Hollywood for Cameron’s style of directing, Davis says: “They call this ‘building the parachute on the way down.'”

But repeatedly pulling off these feats of derring-do requires both the drive of an ambitious egomaniac and an engineer’s plodding patience. “You have to eat pressure for breakfast if you are going to do this job,” Cameron says. “On the one hand, the pressure is a good thing. It makes you think about what you’re doing, your audience. You’re not making a personal statement, like a novel. But you can’t make a movie for everybody—that’s the kiss of death. You have to make it for yourself.”

Gonzo Effects

Cameron’s dual-sided personality has roots in his upbringing—the brainy sci-fi geek from Chippawa, Ontario, was raised by a painter mother and an engineer father. “It was always a parallel push between art and technology,” he says. “My approach to filmmaking was always very technical. I started off imagining not that I would be a director, but a special-effects practitioner.”

Unable to afford to go to film school in Los Angeles, Cameron supported himself as a truck driver and studied visual effects on weekends at the University of Southern California library, photocopying dissertations on optical printing and the sensitometry of film stocks. “This is not bull,” he says. “I gave myself a great course on film FX for the cost of the copying.”

Cameron eventually landed a job on the effects crew of Roger Corman’s low-budget 1980 film Battle Beyond the Stars, but he didn’t tell anyone that he was an autodidact with no practical experience. When he was exposed to the reality of film production, it was very different from what he had imagined, he recalls: “It was totally gonzo problem-solving. What do you do when Plans A, B and C have all crashed and burned by 9 am? That was my start. It wasn’t as a creative filmmaker—it was as a tech dude.”

Over the years, Cameron’s budgets have increased to become the biggest in the business, and digital technology has changed the realm of the possible in Hollywood, but Cameron is still very much the gonzo engineer. He helped found the special-effects company Digital Domain in the early 1990s, and he surrounds himself with Hollywood inventors such as Vince Pace, who developed special underwater lighting for Cameron’s 1989 undersea sci-fi thriller, The Abyss. Pace also worked with Cameron on Ghosts of the Abyss, a 2003 undersea 3D documentary that explored the wreck of the Titanic. For that movie, Pace and Cameron designed a unique hi-def 3D camera system that fused two Sony HDC-F950 HD cameras 2½ inches apart to mimic the stereoscopic separation of human eyes. The Fusion Camera System has since been used for 3D movies such as Journey to the Center of the Earth and the upcoming Tron Legacy, and at sporting events such as the 2007 NBA finals.

The 3D experience is at the heart of Avatar. (In fact, some suspect that Cameron cannily delayed the movie’s release to wait for more theatres to install 3D screens—there will be more than 3000 for the launch.) Stereoscopic moviemaking has historically been the novelty act of cinema. But Cameron sees 3D as a subtler experience. To film the live-action sequences of Avatar, he used a modified version of the Fusion camera. The new 3D camera creates an augmented-reality view for Cameron as he shoots, sensing its position on a motion-capture stage, then integrating the live actors into CG environments on the viewfinder. “It’s a unique way of shooting stereo movies,” says visual-effects supervisor Stephen Rosenbaum. “Cameron uses it to look into the environment; it’s not about beating people over the head with visual spectacle.” This immersive 3D brings a heightened believability to Avatar‘s live-action sequences—gradually bringing viewers deeper into the exotic world of Pandora. In an early scene, Sully looks out the window as he flies over the giant trees and waterfalls of the jungle moon, and the depth afforded by the 3D perspective gives the planet mass and scale, making it as dizzyingly real for viewers as it is for him.

Shooting the Virtual World

Yet live-action 3D was hardly the biggest technical challenge. Only about 25 percent of the movie was created using traditional live performances on sets. The rest takes place in an entirely computer-generated world—combining performance capture with virtual environments that have never before been realized on film. Conjuring up this exotic world allowed Cameron to engage in “big-time design,” he says, with six-legged hammerhead thanators, armoured direhorses, pterodactyl-like banshees, hundreds of trees and plants, floating mountains and incredible landscapes, all created from scratch. He drew upon his experience with deep-sea biology and plant life for inspiration. Sigourney Weaver, who plays botanist Grace Augustine, calls it “the most ambitious movie I’ve ever been in. Every single plant and creature has come out of this crazy person’s head. This is what Cameron’s inner 14-year-old wanted to see.”

To bring his actors into this world, Cameron collaborated with Weta Digital, an effects house founded by The Lord of the Rings director Peter Jackson. Weta has created some of the most groundbreaking characters in recent years, using human performances to animate digital creatures such as Gollum in the Rings series and the great ape in Jackson’s 2005 version of King Kong. By now, the process of basic motion capture is well-established. Actors are dressed in “mocap” suits studded with reflective reference markers and stripes, then cameras capture the basic movements of a performance, which are later mapped to digital characters in a computer.

For actors, the process of performing within an imaginary world, squeezed into a leotard while pretending to inhabit an alien body, is a challenge. Motion-capture technology is capable of recording a 360-degree view of performances, so actors must play scenes with no idea where the “camera” will eventually be. Weaver found the experience liberating. “It’s simpler,” she says. “You just act. There’s no hair or makeup, nothing. It’s just you and the material. You forget everything but the story you’re telling.” Directing within a virtual set is more difficult. Most directors choose their angles and shots on a computer screen in postproduction. But by then, most of the immediacy of the performance is lost. Cameron wanted to be able to see his actors moving within the virtual environments while still on the motion-capture stage (called the volume). So he challenged his virtual-production supervisor Glenn Derry to come up with a virtual camera that could show him a low-resolution view of Pandora as he shot the performances.

The resulting swing camera (so-called because its screen could swing to any angle to give Cameron greater freedom of movement) is another of Avatar‘s breakthrough technologies. The swing camera has no lens at all, only an LCD screen and markers that record its position and orientation within the volume relative to the actors. That position information is then run through an effects switcher, which feeds back low-resolution CG versions of both the actors and the environment of Pandora to the swing cam’s screen in real time.
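At its core, the swing cam’s feedback loop is a matter of projecting CG geometry through the tracked pose of the handheld screen. A toy version, which assumes only a camera position and a yaw angle rather than the full tracked orientation a real system would use:

```python
# Toy swing-cam projection: the tracked pose of the handheld screen is
# used to project a CG world point to 2D preview coordinates.
import math

def project(point, cam_pos, cam_yaw, focal=1.0):
    """Project a 3D world point to 2D screen coords for the tracked camera."""
    # Translate into camera-relative coordinates.
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # Rotate by -yaw around the vertical (y) axis.
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    xr, zr = c * x + s * z, -s * x + c * z
    if zr >= 0:          # point is behind the camera (camera looks down -z)
        return None
    return (focal * xr / -zr, focal * y / -zr)

# An actor marker 2 m straight ahead projects to the centre of the screen:
print(project(point=(0, 0, -2), cam_pos=(0, 0, 0), cam_yaw=0.0))  # (0.0, 0.0)
```

Run for every vertex of the low-resolution scene, at the frame rate of the screen, this is the feedback loop that let Cameron frame shots inside an empty soundstage.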

This virtual camera allowed Cameron to shoot a scene simply by moving through the volume. Cameron could pick up the camera and shoot his actors photographically, as the performance occurred, or he could reshoot any scene by walking through the empty soundstage with the device after the actors were gone, capturing different camera angles as the scene replayed.

But all of this technology can lead right back into the uncanny valley because capturing an actor’s movements is only a small step toward creating a believable digital character. Without the subtle expressions of the face, Cameron might as well be playing with marionettes. Getting this crucial element right required him to push Weta’s technology far beyond anything the company had done before.

In fact, Cameron doesn’t even like the term “motion capture” for the process used on Avatar. He prefers to call it “performance capture.” This may seem like semantics, but to Cameron, the subtle facial expressions that define an actor’s performance had been lost for many of the digital characters that have come before. In those films, the process of motion capture served only as a starting point for animators, who would finish the job with digital brush strokes. “Gollum’s face was entirely animated by hand,” says Weta Digital effects master Joe Letteri. “King Kong was a third or so straight performance capture. It was never automatic.” This time, Cameron wanted to keep the embellishment by animators to a minimum and let the actors drive their own performances.

In order to pull more data from the actors’ faces, Cameron reworked an old idea he had sketched on a napkin back in 1995: fasten a tiny camera to the front of a helmet to track every facial movement, from darting eyes and twitching noses to furrowing eyebrows and the tricky interaction of jaw, lips, teeth and tongue. “I knew I could not fail if I had a 100 percent closeup of the actor 100 percent of the time that travelled with them wherever they went,” he says. “That really makes a closeup come alive.”

The information from the cameras produced a digital framework, or rig, of an actor’s face. The rig was then given a set of rules that applied the muscle movements of each actor’s face to that of the Avatar or the Na’vi that he or she was playing. To make a CG character express the same emotion as a human actor, the rig had to translate every arch of a human eyebrow directly to the digital character’s face.
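The rule set Weta built was vastly more elaborate than anything that fits here, but the retargeting idea can be sketched as a table of per-channel mappings; every channel name and gain below is invented for illustration:

```python
# Toy facial retargeting: each captured channel (a 0-1 activation solved
# from the head-rig camera) is remapped onto the digital character's
# corresponding control. All names and gains are hypothetical.

RIG_RULES = {
    # actor channel        (character control, gain)
    "brow_raise_left":     ("navi_brow_L",  1.2),  # alien brows travel further
    "jaw_open":            ("navi_jaw",     1.0),
    "lip_corner_pull_R":   ("navi_smile_R", 0.8),
}

def retarget(capture: dict) -> dict:
    """Map captured channel weights through the rig rules, clamped to [0, 1]."""
    out = {}
    for channel, weight in capture.items():
        if channel in RIG_RULES:
            control, gain = RIG_RULES[channel]
            out[control] = min(1.0, max(0.0, weight * gain))
    return out

frame = {"brow_raise_left": 0.5, "jaw_open": 0.3, "cheek_puff": 0.9}
print(retarget(frame))  # {'navi_brow_L': 0.6, 'navi_jaw': 0.3}
```

The hard part, as the next paragraph makes clear, is not the mapping machinery but tuning the rules until the result reads as the actor rather than a mask.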

But it turns out there is no magic formula that can supplant hard work and lots of trial and error. After Cameron complained about the uncanny-valley effect, Weta spent another year perfecting the rig on Worthington’s Avatar by tweaking the algorithms that guided its movements and expressions until he came alive enough to meet Cameron’s sky-high standards. “It was torturous,” Letteri admits. But when Weta was finished, you could pour the motion-capture data into the rig and it would come out the other side right.

With all the attention focused on Avatar, anything short of perfection may not be good enough. Cameron is asking moviegoers to believe in a deep new universe of his own design and to buy the concept that 9-foot-tall blue aliens can communicate human emotions. If Cameron is wrong, then Avatar may be remembered as the moment when the battle for the uncanny valley was lost.