Monday 6 June 2011

Present Practice & Future of Compositing & 3D

By: Pulasthi Gunawardhana
University of Kent
May 2011


An easy way to comprehend 3D is to imagine putting a flat image onto the screen. When an image appears to sit exactly where the theatre screen is, there is no horizontal offset and no difference between the left-eye and right-eye views. In real life, the only scene that is truly flat in this way is a wall or a billboard poster. If a real scene were filmed with two cameras, its points would not line up on top of each other: every object would be offset differently between the two views. Within the same shot, a distant car appears only slightly different to each eye, while someone close to the camera shows a large difference between left and right. Only if a poster on a wall were filmed could the two images be made to line up and overlap completely; the audience would then see the poster as occupying the same 3D space as the projection screen itself.

The 3D effect, or 'parallax', is a function of horizontal displacement. Because of this, shifting one eye's image to the left or right forces objects to appear in front of or behind the screen. Shifting an image sideways, however, leaves an opening at one edge. A useful way to imagine the 3D world in movies is as a left and a right frustum: a viewing pyramid for each eye.
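
As a rough illustration, the following Python/NumPy sketch shifts one eye's image horizontally to push the fused image behind or in front of the screen plane; the `shift_px` value and frame size are illustrative, not figures from the text. Note how the shift leaves an empty strip at one edge, the "opening" described above.

```python
import numpy as np

def reconverge(eye_image, shift_px):
    """Shift one eye's view horizontally (positive = right).

    Changing the horizontal offset between the eyes moves the fused
    image in front of or behind the screen plane. The vacated strip
    at one edge is filled with black, which is exactly the gap a
    sideways shift creates.
    """
    h, w, c = eye_image.shape
    out = np.zeros_like(eye_image)
    if shift_px >= 0:
        out[:, shift_px:] = eye_image[:, :w - shift_px]
    else:
        out[:, :w + shift_px] = eye_image[:, -shift_px:]
    return out

# Example: push the scene apparently deeper by shifting the left eye.
left = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder frame
left_shifted = reconverge(left, shift_px=12)
```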

A scene filmed with parallel cameras leaves a small strip on one side that is not covered by both frustums. The same thing happens when one eye's image is shifted sideways to increase the 3D effect.

In 2D, moving a background plate sideways would leave a black strip along one edge. In 3D there is no black edge as such, but there is a section of the screen that carries information for only one eye: the right eye might see image there while the left eye sees nothing. This could be cropped away, but cropping is not strictly necessary, because the gap becomes nearly invisible in 3D projection. Even a large area of missing information at the frame edge of one eye, obvious when viewed without glasses, disappears when watching with glasses.


Colour Adjustment

From the start, the left eye has to match the right eye in a basic grade. A beam splitter or mirror rig will generally tint the image slightly, usually in the shadow areas, so each eye needs to be balanced accurately. Automated tools that perfectly match the left and right eyes are an active area of research.
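
One simple automated approach, sketched below as a general idea rather than any tool named in the text, is to match the per-channel statistics of one eye to the other:

```python
import numpy as np

def match_eye_colour(source_eye, reference_eye):
    """Balance one eye against the other by matching per-channel
    mean and standard deviation. A crude stand-in for the automated
    eye-matching tools mentioned above; real tools are far more
    sophisticated (and usually weight the shadow areas, where
    beam-splitter tinting is worst)."""
    src = source_eye.astype(np.float64)
    ref = reference_eye.astype(np.float64)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        s_mu, s_sd = src[..., ch].mean(), src[..., ch].std()
        r_mu, r_sd = ref[..., ch].mean(), ref[..., ch].std()
        out[..., ch] = (src[..., ch] - s_mu) * (r_sd / (s_sd + 1e-8)) + r_mu
    return np.clip(out, 0, 255).astype(source_eye.dtype)
```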


Colour Grading

Once the eyes have been balanced, both can be graded. If the system cannot grade the project in stereo, or does not automatically apply a matching grade to each eye straight away, the colourist risks headaches. Frequently one eye is graded at a time, with the grade applied on top of the balancing offset grade described above. In the Scratch program, for instance, a grade can be transferred from one eye to the other with a hot key; in programs without stereo support, this type of grade transfer is not feasible.
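
A minimal sketch of that workflow: one creative grade defined once and applied identically to both pre-balanced eyes. The lift/gamma/gain values are illustrative, not from the text.

```python
import numpy as np

def grade(frame, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a basic lift/gamma/gain grade to a float image in [0, 1]."""
    out = np.clip(frame * gain + lift, 0.0, 1.0)
    return out ** (1.0 / gamma)

# The creative grade is defined once and applied to both eyes,
# each of which already carries its own balancing offset grade.
creative = dict(lift=0.02, gamma=1.1, gain=0.95)
# left_out  = grade(left_balanced,  **creative)
# right_out = grade(right_balanced, **creative)
```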


Stereo Adjustments

The first adjustment is matching the right and left eyes vertically. Even when the head tilts, the eyes stay fixed in relation to each other, so vertical alignment has to be verified in every shot.
Even when the on-set team pulls convergence correctly, edits can still benefit from alteration, and scenes occasionally need re-converging, which is basically a horizontal change. A shot may also be technically ideal in terms of convergence yet not cut well against the shot before or after it. Convergence can be tapered across a cut to ease the edit and avoid irritating convergence jumps.
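
A tapered re-convergence is just a horizontal shift eased in over a handful of frames around the cut. A minimal sketch, with assumed frame counts and shift values:

```python
def convergence_ramp(shift_in, shift_out, frames):
    """Per-frame horizontal shifts that ease one shot's convergence
    toward the next, avoiding a hard convergence jump at the cut.
    Uses a smoothstep ease; frame counts and shifts are illustrative."""
    ramp = []
    for f in range(frames):
        t = f / max(frames - 1, 1)
        t = t * t * (3.0 - 2.0 * t)          # smoothstep easing
        ramp.append(shift_in + (shift_out - shift_in) * t)
    return ramp

# Ease the last 12 frames of a shot from its own convergence shift
# (0 px) toward the next shot's (+8 px). Each value would feed a
# horizontal reconverge step like the one sketched earlier.
tail_shifts = convergence_ramp(0.0, 8.0, 12)
```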

Forcing the audience's eyes to jump rapidly between near and far objects is never desirable. For this reason, Tim Baier created a graph from a Python script driven by a database, which he uses to follow shots graphically throughout the edit. The graph plots the nearest and most distant objects in each scene. If they jump too much from cut to cut, he can relieve eyestrain by bringing the depths closer together on each side of the cut and feathering the convergence and divergence, guided by where in the scene the audience is likely to be looking.
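
Baier's actual script is not reproduced in the text; the following is only a guess at the general shape of such a tool, with made-up shot names and disparity values:

```python
import matplotlib.pyplot as plt

# Hypothetical per-shot depth budget pulled from a shot database:
# (shot name, nearest-object disparity, farthest-object disparity),
# disparities in pixels (negative = in front of the screen).
shots = [
    ("sh010", -18, 6), ("sh020", -4, 22), ("sh030", -25, 2),
    ("sh040", -6, 18), ("sh050", -20, 4),
]

names = [s[0] for s in shots]
near = [s[1] for s in shots]
far = [s[2] for s in shots]

plt.plot(names, near, marker="o", label="nearest object")
plt.plot(names, far, marker="o", label="farthest object")
plt.axhline(0, linestyle="--", linewidth=0.8)   # screen plane
plt.ylabel("disparity (px)")
plt.legend()
plt.title("Depth jumps across the edit")
plt.show()
```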


Effects in Stereo

In 3D, straightforward tasks such as paint and roto are more complicated than they are in 2D. The right-eye and left-eye rotos have to match exactly, and it is not feasible simply to copy and paste from one eye to the other; roto work demands close attention because the change in perspective is so fine in its detail.
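
A plain copy can at best be shifted by local disparity as a starting point, as the sketch below illustrates; `disparity_at` is an assumed helper (for example, a lookup into a disparity map), and the result still needs hand refinement because the perspective change between eyes is more than a horizontal shift.

```python
def offset_roto_to_other_eye(points, disparity_at):
    """Carry a roto shape from one eye toward the other by shifting
    each control point horizontally by the local disparity.

    points: list of (x, y) control points in the source eye.
    disparity_at: callable returning disparity in pixels at (x, y).

    This yields only a first approximation; an artist still has to
    refine the shape per eye, as noted above.
    """
    return [(x + disparity_at(x, y), y) for x, y in points]
```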


Creative Stereo Effects

Even once the technical problems are solved, the director can still set the convergence per shot for comic or dramatic effect.


Light Plays and Lens Flares

Two things that do not film well in stereo are light plays and lens flares. Because a lens flare is so dependent on position, even the standard interocular separation of around 64mm can cause the flares to differ from one eye to the other.


Special Passes

The silver screen used for RealD can leak a ghost of a bright highlight from one eye into the other, which reads like a faint, offset second image; even so, RealD remains extremely well liked and is gaining popularity. A "ghost-busting" pass is done for the RealD version: it subtracts the bright highlights of the primary eye from the secondary eye to counteract the leak. Although this defeats the ghosting, it leaves the secondary eye with odd dark spots. Because of these spots, and because the secondary eye is slightly softened by the beam splitter mirrors and by stereoscopic adjustment, the untouched primary eye is used as the standard TV and DVD version.
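
The idea of such a pass can be sketched in a few lines; the `threshold` and `leak` numbers here are illustrative, not measured RealD values.

```python
import numpy as np

def ghost_bust(secondary_eye, primary_eye, threshold=0.8, leak=0.1):
    """Crude 'ghost-busting' pass: pre-subtract a fraction of the
    primary eye's bright highlights from the secondary eye, so the
    light the silver screen leaks across adds back to roughly the
    intended level. Works on float images in [0, 1]. The dark spots
    this leaves in the secondary eye are why the primary eye is the
    one used for 2D deliverables."""
    highlights = np.clip(primary_eye - threshold, 0.0, None)
    return np.clip(secondary_eye - leak * highlights, 0.0, 1.0)
```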


3D Scanning

A number of consumer solutions are available for 3D object scanning, whether laser/LIDAR or photogrammetry based, for making digital doubles, but all of them lack one thing: the extraction of 3D meshes and textures from live actors or animals.


Compositing in 3D Space

While nothing stops 2D techniques from being used for 3D, compositing at the correct depth has to be given particular thought. When elements are composited with 3D live action, it is possible to create objects whose stereo placement in the scene is wrong. For instance, a car could be composited so that it is layered in front of other on-screen objects, while its stereo parallax places the illusion of the car behind those objects in 3D space. The same conflict arises with green-screen layering. Layer priority and 3D stereo priority therefore have to be kept consistent.

A wise way to track scenes shot with fixed-relationship cameras is to track only the primary eye and derive the second camera by offsetting it using the known interocular distance and convergence. When the convergence is unknown, or changes dynamically during the shot, both eyes have to be tracked separately.
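
The geometry of that offset can be sketched as follows. A toed-in, y-up rig looking down -z is assumed here; it is one common configuration, not the only one, and sign conventions vary between packages, so treat this as a sketch of the geometry rather than drop-in code.

```python
import math

def secondary_camera(primary_pos, primary_yaw_deg,
                     interocular=0.064, convergence_dist=None):
    """Derive the second camera of a fixed-relationship stereo rig
    from the tracked primary camera. Positions in metres, yaw in
    degrees about the vertical axis."""
    yaw = math.radians(primary_yaw_deg)
    # Right vector of a y-up camera looking down -z, rotated by yaw.
    right = (math.cos(yaw), 0.0, -math.sin(yaw))
    pos = tuple(p + interocular * r for p, r in zip(primary_pos, right))
    if convergence_dist is None:
        return pos, primary_yaw_deg                  # parallel rig
    # Approximate relative toe-in so the optical axes cross at the
    # convergence distance.
    toe_in_deg = math.degrees(math.atan2(interocular, convergence_dist))
    return pos, primary_yaw_deg + toe_in_deg
```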


3D Compositing Techniques

The proposed techniques are:

a) 2D video and texture acquisition using chroma-key or rotoscoping techniques;
b) Separate editing of each video layer, with mask creation;
c) Use of an existing 3D modelling application, in which the 2D video layers are mapped as textures and placed in a 3D virtual environment amid 3D models;
d) Seamless incorporation of 2D and 3D elements by matching the virtual camera to the live camera;
e) Independent colour correction of the elements using an external video editing application (the linked videos in the virtual environment update automatically when edited);
f) Export (rendering), producing either a stereo pair for anaglyph projection (see the sketch after this list) or a 2D + Depth format; and
g) Final editing in a conventional 2D editing application.
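
For step (f), a red-cyan anaglyph can be built by taking the red channel from the left eye and the green and blue channels from the right. Anaglyph recipes vary; this is the simplest one, shown only as a sketch:

```python
import numpy as np

def anaglyph(left_eye, right_eye):
    """Simple red-cyan anaglyph: red from the left eye, green and
    blue from the right. Both inputs are (H, W, 3) arrays."""
    out = right_eye.copy()
    out[..., 0] = left_eye[..., 0]
    return out
```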




Stereoscopic Visual Effects Compositing

Stereo filmmaking, focused mainly on documentaries and educational productions, was all but forgotten until its recent revival in the Hollywood studios; the reasons were high production cost and inadequate delivery methods (red-cyan anaglyphs). IMAX dual-projection systems combined with comfortable polarizing glasses have since made the stereo viewing experience far more exciting. Today, stereo 3D content design focuses mostly on 3D animation, because the virtual nature of the genre makes stereo simple to create. Existing 2D video/film visual effects compositing techniques have advanced considerably over the last ten years, but they are still lacking in the stereo 3D production area and are very costly, even for a Hollywood-budget studio. With the introduction of auto-stereoscopic 3DTV and volumetric displays, the next sensible development step in entertainment is to address the need for suitable content creation techniques.


Free Viewpoint Video

Research has shown that Free Viewpoint Video (FVV) offers the same abilities that are taken for granted in 3D computer graphics. Interactive free navigation means users can choose their viewpoint and viewing path inside a visual scene, but FVV applies to real-world scenes captured on camera rather than to computer-generated ones. Multi-view video coding, together with the related MPEG standardization work on multi-view coding (MVC), is an important component of free viewpoint video and 3D systems. FVV systems are also the natural input for future volumetric display systems.


Product Objectives

To generate and demonstrate innovative and creative techniques and methodologies: a final video composition in stereoscopic or 3DTV format. (A multi-user lenticular-lens auto-stereoscopic display is required to demonstrate the effectiveness of the 2D + Depth format; if none is accessible at the time of the showcase, a red-cyan stereo anaglyph video with suitable glasses will be presented instead.)


Art of Roto (compositing)

Phoenix Editorial's Matt Silver has provided the background of roto and an explanation of roto elements.

Rotoscoping is the method of manually altering video or film footage frame by frame. It can be used to make custom animated effects such as light-sabers or lightning, to paint directly onto footage, to construct holdout mattes for compositing elements into a scene, or to trace frames as the basis for realistic conventional animation. Visual effects and motion graphics are the two things VFX artists generally make, and without a detailed knowledge of rotoscoping and how it fits into the modern digital pipeline, their effects and designs are limited.

Rotoscoping is done very differently today, because of digital tools such as After Effects (AE), Combustion, Digital Fusion (DF), Commotion, and Shake. When digital artists understand rotoscoping in detail, they can make far better visual effects and live-action or CG composites. The main rotoscoping techniques are matte creation, digital cloning, paint touch-up, effects painting, and motion tracking; these are covered below, along with a short history of the craft.


Flint, Flame, Inferno, Fire, Smoke


Discreet's Advanced Systems line, which includes Flint, Flame, Inferno, Fire, and Smoke, runs on SGI workstations. These products offer a total post-production solution with strong, high-end rotoscoping tools. The painting and cloning tools are top quality thanks to superior features, including brush-based warping and a broad set of brushes. Although the rotosplining is good, the inability to play splines over a moving image in real time and the lack of B-splines leave Commotion ahead in this area. Facilities using Discreet advanced systems therefore often offload roto work to PCs and Macs running Shake, Combustion, and Commotion.


Digital Fusion

Today Eyeon holds one of the strongest positions in NT/Windows desktop compositing solutions, although in the past a version of Fusion was also offered through Alias.

Eyeon has two chief products, Digital Fusion and DFX+.

Digital Fusion 4 was Eyeon's ninth major release and its flagship product; Digital Fusion is Eyeon's full image-processing package, while DFX+ 4 is the 8-bit version. DFX+, built on the DF4 architecture, adds important enhancements over its predecessor DFX, including superior character generation, a flexible flow view, and PSD import to separate layers for animation. Ever since Shake moved away from NT/Windows, DF has been the cost-effective solution on that platform.


Shake

Shake offers three roto options: QuickShape, QuickPaint, and RotoShape. QuickPaint is a procedural paint package within Shake: it is possible to paint with interpolation, or to paint frame by frame and then view the result in real time, and because every paint element can be animated over time it serves as a sensible roto tool. RotoShape, an essential roto tool, now almost entirely overshadows QuickShape. RotoShape allows logical operations between roto shapes and variable edge softness, and its rotos, which are classic spline shapes, support velocity-based motion blur and complex parent-child relationships; this gives especially precise results on difficult rotoscoping. Shake's 2D trackers can drive both QuickPaint and RotoShape. It is worth noting that, even though Shake uses a node workflow model, it is possible to roto or paint through an image transform or track.


Photoshop

Photoshop is the world's most widely used graphics application and most likely the first digital rotoscoping tool used in film and video post-production. Although it was originally intended for still images, Photoshop can work with motion by importing filmstrip files from video applications or frames one at a time. Its brush engine is the target everyone else strives for, and with a pressure-sensitive Wacom tablet it gives great control. The leading drawback is the lack of a real-time preview of sequential frames: until a clip is played back in real time at full resolution, the quality of cloning work cannot be judged.

After painting frames in Photoshop, the sequence has to be taken back into a compositing or editing application, such as Final Cut Pro, to observe real-time playback. This makes for a time-consuming workflow, and because Photoshop was not intended for video, it lacks moving-matte capabilities and motion tracking.


Nuke

Nuke is the software currently used in the field for compositing and VFX, and it is well suited to 3D VFX and stereoscopic work. It ships with more than 200 nodes that make most tasks easier and more creative, including roto, Keylight, Primatte, vector blur, multiple colour-correction approaches, an animated curve editor, several merge operations, and a vector paint tool. Nuke also has Python scripting, which lets users customize the software as they need, mostly to tailor the user interface and to control its processes. With Nuke, anything from simple pixel displacement to 3D match-move geometry can be delivered.
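
A small taste of that Python scripting follows; it is a sketch assuming a running Nuke session, and the file paths and knob values are illustrative.

```python
# Run inside Nuke's Script Editor.
import nuke

# Read a plate, blur it, and write the result out: a trivial
# script, but the same API can build and drive whole comps.
read = nuke.createNode("Read")
read["file"].setValue("plate.####.exr")   # illustrative path

blur = nuke.createNode("Blur")
blur.setInput(0, read)
blur["size"].setValue(5)

write = nuke.createNode("Write")
write.setInput(0, blur)
write["file"].setValue("out.####.exr")
```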





Bibliography:

Vafeiadis, Pan. Content creation methods for stereoscopic/3DTV: compositing in a 3D environment, and a multi-purpose camera array acquisition system for multi-viewpoint video.

McKay, Allan. Compositing: Camera Matching.

Tang, Chi-Keung; Brown, Michael S.; Yeung, Sai-Kit; Kang, Sing Bing. Matting and Compositing of Transparent and Refractive Objects.
