January 2, 2003



Like any other industry, the SFX industry has a fair amount of vocabulary unique to itself. We have used many of these terms in our story. Let’s see what they mean.

But before that, let us take a quick look at how the SFX industry operates. Typically, a movie with about 300 or more special effects shots is considered very SFX intensive. Different effects companies work on different effects shots for the same movie.

So, a movie like Spiderman or Titanic will have at least two or three effects companies working on it. On the other hand, an effects company could be working on more than one movie at the same time.

One exception to this rule in recent times has been Weta Digital of New Zealand, which has been working exclusively on The Lord of the Rings trilogy for some years now. Some of the big names in this business are ILM (Industrial Light and Magic), the company formed by George Lucas of Star Wars fame; Cinesite, a Kodak subsidiary; Sony Imageworks; and Pixar Animation Studios, part-owned by Steve Jobs of Apple. Of these, Pixar is more or less focused on animated movies like Toy Story and Monsters Inc. There are a number of other effects houses that work on movies; you can catch their names in the credits.

2D or 3D?
You will most commonly come across these terms when you talk of animation. Animation is one of the key elements of special effects, and can be done in two dimensions (2D) or in three (3D). As a rule, 3D animation is more complex, time-consuming and costly than 2D animation.

What is the difference between the two? As the terms themselves suggest, 2D animation is done along only two axes, length and breadth. For example, a robot moving across the screen, from left to right at a constant distance from the viewer, can be done in 2D. On the other hand, if you want to show the robot walking away from you, into the screen, or turning around to face you, then you opt for 3D animation, where perspective and a sense of depth also come into play.

3D Studio Max
Usually referred to as 3DS Max, a 3D animation package from Discreet used for 3D modeling, animation and rendering. Version 5 is now available, and the software has been used in movies like Tomb Raider, Mission Impossible II and The Mummy Returns. It was originally created by Autodesk, the makers of AutoCAD, who spun off Discreet as a separate division in 1999 after acquiring Discreet Logic and merging the activities of its Kinetix division into it. 3DS Max was earlier under Kinetix.

Alpha channel
The alpha channel contains transparency information about an image. This is used when compositing images. For example, when you shoot something against a solid (blue/green) background, you can composite in a different background shot by keying out the solid color and treating those areas as transparent in the alpha channel.
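The standard way software blends a foreground over a background using an alpha channel is the "over" operator. Here is a minimal per-pixel sketch in Python; the function name and the 0.0–1.0 colour convention are our own illustration, not any particular package's API:

```python
def composite_over(fg_pixel, bg_pixel, alpha):
    """Blend a foreground RGB pixel over a background pixel.

    alpha is the foreground's opacity: 1.0 keeps the foreground,
    0.0 lets the background show through completely.
    """
    return tuple(alpha * f + (1.0 - alpha) * b
                 for f, b in zip(fg_pixel, bg_pixel))

# A half-transparent red pixel over a blue background:
print(composite_over((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5))
# -> (0.5, 0.0, 0.5)
```

Real compositing packages apply this same blend to every pixel of the two images, with the alpha value read from the alpha channel.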

Blue screening 
Suppose you want to film a set of events in a background that you cannot control or do not have live access to, say a particular cloud formation. The way out is to use blue screen photography.

The actors are filmed in action against a solid blue screen. The background sequence is separately shot or created. The two are then composited, and during compositing, the blue screen is replaced (keyed out, in industry parlance) with the background you originally wanted (clouds, in this case).

Blue was used because removing it did not affect the colors of human skin. If there is blue in the shot, as in an actor’s costume (Superman, for example), then green is used as the background.

But with today’s digital capabilities, almost any color can be used as the background. Or, as was done for the latest Bond movie, Die Another Day, you do not need any specific background at all: a background made up of multiple colors can also be replaced during compositing.
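The core of a keyer can be sketched in a few lines of Python. This is a toy version built on our own assumptions (a pure-blue key colour, colours as 0.0–1.0 RGB tuples, an arbitrary distance threshold); real keyers work in better colour spaces and soften the edges:

```python
KEY = (0.0, 0.0, 1.0)   # the solid blue of the screen
THRESHOLD = 0.3         # how close to the key colour counts as "screen"

def distance(c1, c2):
    """Straight-line distance between two RGB colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def key_out(frame_pixel, background_pixel):
    """Return the new background wherever the frame shows blue screen."""
    if distance(frame_pixel, KEY) < THRESHOLD:
        return background_pixel   # screen showed through: use the clouds
    return frame_pixel            # part of the actor: keep it
```

Run over every pixel of every frame, this swaps the blue screen for the cloud footage while leaving the actors untouched.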

CGI
Computer-generated imagery (CGI) is a generic term for computer-generated effects, including 3D animation and 2D backgrounds.

Character animation
Character animation is when a character, modelled physically or in software, is animated on film by moving it between frames. Obviously, creating the model in software provides much more flexibility and better control over movement. Early animators used toys and mechanical models for character animation, in movies like King Kong. A computer-animated character can do much more than a mechanical one, and moviemakers are increasingly moving to software animation.

Compositing
Compositing is key to most of the effects work that is done in movies. It is about combining two or more images to create a new one. Complicated scenes can be shot separately and finally composited into one. In our context, SFX can be created separately in software and composited into live shots. For example, the images of the Green Goblin flying about on his glider in Spiderman were animated in software (Maya) and digitally composited into the live crowd shots.

Compositing is now almost exclusively done in software.

Chroma background
See blue screening.

Matte painting
Matte painting is when a background is painted into a movie scene. Earlier, it used to be a laborious hand process, but it is now a less tedious, software-driven affair.

Maya
Software from Alias/WaveFront that is extensively used in film making for doing animations and other SFX. Now into version 4.5, Maya is originally from SGI (Silicon Graphics) and has been used in movies like Spiderman and Harry Potter.

Morphing
The smooth, digitally animated transformation of one object into another. Morphing can produce striking effects, as demonstrated in movies like Terminator.
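A full morph has two halves: warping the geometry of both images so that their features line up, and cross-dissolving between them. The dissolve half is simple enough to sketch as a per-pixel blend (the warp, which does the real magic, is omitted here; names and conventions are our own illustration):

```python
def dissolve(pixel_a, pixel_b, t):
    """Cross-dissolve between two RGB pixels; t runs from 0.0 to 1.0.

    At t = 0.0 you see only the first image, at t = 1.0 only the
    second; a morph steps t from 0 to 1 over successive frames,
    while also warping both images so features stay aligned.
    """
    return tuple((1.0 - t) * a + t * b for a, b in zip(pixel_a, pixel_b))
```

Without the warp this is just a fade; the warp is what makes one face appear to flow into another.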

Motion capture
A way of realistically animating a digital character. Here, a person is made to act out the basic motions that the animated character is supposed to perform. The characteristics of this motion (not of the person, but of the motion), like the gait and the length of the stride, are captured by cameras into special software. These characteristics are then used to animate the character in the animation software.

Motion control
Suppose a scene is shot as two separate shots, for compositing into one later on. Or suppose a shot is made up of one live shot and one computer animation. The camera movements and settings in both sequences have to be exactly the same (or proportional) for the two sequences to match in the final composite.

This is achieved by using motion-control cameras. These cameras use a computer-controlled jig and are capable of highly precise and repeatable movements.

Behind the scenes
A very detailed presentation was made at the O’Reilly Open Source Convention (July 2002) on the challenges of using open source in visual effects. It discusses how the effects for The Lord of the Rings were done, and includes footage of how the effects were applied. The entire presentation, including the footage, is available at http://conferences.oreillynet.com/cs/os2002/view/e_sess/3118 as a 127 MB video file. If you are really interested, and can afford such a huge download, then this one is definitely worth watching.

Rotoscoping
When you composite two images, it is fairly easy if the objects in the two remain at the same distance from each other and from the viewer. But if any of these distances change, say if the goblin in the foreground comes towards the viewer while the crowd in the background scatters away, then you need to rotoscope them in order to maintain continuous perspective, and to create the feeling that the action in the composited image is happening in 3D space, with the goblin amidst the crowd.

Rendering
Digital 3D animations are created as a combination of polygons, splines and NURBS, which are somewhat akin to line drawings in 2D space. The surfaces we see, like skin, clothing and leaves, are rendered on top of this polygon model and then captured on to film. Rendering complex scenes is a very processor-intensive activity, and companies doing SFX for movies often use render farms for the purpose.
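The simplest shading step a renderer performs can be sketched as follows: the brightness of a surface point depends on the angle between the surface normal and the direction of the light (classic Lambertian shading). This is our own minimal illustration, assuming unit-length vectors; production renderers layer textures, shadows and reflections on top of basics like this:

```python
def lambert(normal, light_dir):
    """Brightness of a surface point under a single light.

    Both arguments are unit-length 3D vectors. The dot product is
    1.0 when the surface faces the light head-on, and falls to 0.0
    (black) as the surface turns away from it.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)   # surfaces facing away get no light
```

Repeating a calculation like this, plus far more elaborate ones, for every point of every surface in every frame is what makes rendering so processor-intensive.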

Render farm
A collection of computers networked together with high bandwidth connections that are exclusively used to render animations for movies.
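Frames of an animation are independent of one another, which is what makes a render farm work: each machine renders its own share of the frames. A hypothetical round-robin scheduler sketches the idea (the function and machine names are our own illustration, not how any real farm manager works):

```python
def assign_frames(num_frames, machines):
    """Hand out frame numbers to render machines round-robin."""
    jobs = {m: [] for m in machines}
    for frame in range(num_frames):
        # frame N goes to machine N modulo the number of machines
        jobs[machines[frame % len(machines)]].append(frame)
    return jobs

# Six frames split across two machines:
print(assign_frames(6, ["nodeA", "nodeB"]))
# -> {'nodeA': [0, 2, 4], 'nodeB': [1, 3, 5]}
```

With hundreds of machines, a shot that would take months on one computer renders in days.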

RenderMan
From Pixar studios, RenderMan is both a set of technologies and a set of products, and both are widely used in movie animation. As the name suggests, RenderMan software is used for rendering the animation in a movie. The RenderMan interface is supported by most animation software, making work done in one package compatible with others. RenderMan has been used in movies like The Mummy, Star Wars, Minority Report and A Bug’s Life.

A Titanic Effort

In 1997, Titanic won the Oscar for achievement in visual effects. That may sound strange. Titanic has none of the high-speed chases or dramatic action commonly associated with effects-heavy movies. It doesn’t quite fit the genre of a James Bond or a Spiderman. Then what was the award for?

The fact is that Titanic has some of the most realistic SFX working for it. But, unlike action movies where effects are the key to holding the audience, in Titanic, they are in the background, often lurking just at the periphery of the viewers’ vision. This adds that extra touch of realism to the movie, making it different from the rest of the SFX movies.

Take, for example, the dolphins that follow the Titanic at sea. Some of those dolphins were shot separately and then digitally composited in. And some of them, particularly the close-ups, do not exist in real life. They exist only inside computers, nothing more than CGI images digitally created and composited in. Many of the crowd scenes are peopled by animations created through motion capture.

Perhaps the most famous shot in the movie is the one of Jack and Rose at the bow of the ship–Rose with her arms stretched wide, Jack holding her, and both leaning perilously out into the ocean. The fact is that there was no real ship. They did not stand at the bow and, in all probability, the water that you see also did not exist. So, how was that dramatically romantic shot done? This is nothing but compositing, rotoscoping and green-screening at their best.

The actors acted out the sequence on a green-screen stage. The camera movements from this sequence were repeated on a 1/20-scale miniature model of the Titanic. The two were then composited in software, with the effects of smoke and water thrown in. Remember that the shooting is in 3D space, and there is movement in both sequences (ship and actors). To seamlessly combine the two into one synchronized video, computerized rotoscoping comes into play.

Some of the other breathtaking computerized effects in the movie include the transitions between scenes, like from the grand ship at sea to the rusted, sunken wreck of later days towards the end.
