
In cinematography, match moving is a technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as the Kinect camera have begun to change this). Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera.

Match moving is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3D animation program. When new animated elements are composited back into the original live-action shot, they will appear in perfectly matched perspective and therefore look seamless.

As it is mostly software-based, match moving has become increasingly affordable as the cost of computing power has declined; it is now an established visual-effects tool and is even used in live television broadcasts as part of effects such as the virtual yellow first-down line in American football.





Principles

The match moving process can be broken down into two steps.

Tracking

The first step is identifying and tracking features. A feature is a specific point in the image that a tracking algorithm can lock onto and follow through multiple frames (SynthEyes calls them blips). Often features are selected because they are bright or dark spots, edges or corners, depending on the particular tracking algorithm. Popular programs use template matching based on NCC score and RMS error. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a "track". Once tracks have been created they can be used immediately for 2D motion tracking, or then be used to calculate 3D information.
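
As a rough illustration of the template-matching idea just described, the sketch below follows a single feature from one frame to the next by maximizing a normalized cross-correlation (NCC) score over a small search window. It is a minimal, generic example in Python/NumPy; the function name and the patch and search sizes are illustrative assumptions, not taken from any particular tracker.

```python
import numpy as np

def track_feature_ncc(prev_frame, next_frame, feature_rc, patch=7, search=15):
    """Follow one feature from prev_frame to next_frame by maximizing NCC.
    Frames are 2D greyscale float arrays; feature_rc is the (row, col) of the
    feature in prev_frame. Returns the best (row, col) in next_frame and its score."""
    r, c = feature_rc
    tpl = prev_frame[r - patch:r + patch + 1, c - patch:c + patch + 1]
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-8)         # zero-mean, unit-variance template

    best_score, best_rc = -np.inf, feature_rc
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            win = next_frame[rr - patch:rr + patch + 1, cc - patch:cc + patch + 1]
            if win.shape != tpl.shape:
                continue                                   # candidate window ran off the image
            win = (win - win.mean()) / (win.std() + 1e-8)
            score = float((tpl * win).mean())              # NCC score, roughly in [-1, 1]
            if score > best_score:
                best_score, best_rc = score, (rr, cc)
    return best_rc, best_score
```

Running this for every feature over every consecutive pair of frames yields the 2D tracks described above; real trackers add sub-pixel refinement, template updating, and outlier rejection.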

Calibration

The second step involves solving for 3D motion. This process attempts to derive the motion of the camera by solving the inverse projection of the 2D tracks back to a camera position. This process is referred to as calibration.

When a point on the surface of a three-dimensional object is photographed, its position in the 2D frame can be calculated by a 3D projection function. We can think of a camera as an abstraction that holds all the parameters necessary to model a camera in the real or virtual world. A camera is therefore a vector that includes as its elements the position of the camera, its orientation, focal length, and other possible parameters that define how the camera focuses light onto the film plane. Exactly how this vector is constructed does not matter as long as there is a compatible projection function P.

The projection function P takes as its input a camera vector (denoted camera) and another vector, the position of a 3D point in space (denoted xyz), and returns a 2D point that has been projected onto a plane in front of the camera (denoted XY). We can express this as:

XY = P(camera, xyz)
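
A minimal pinhole version of such a projection function might look like the sketch below. The camera is represented here as a hypothetical dictionary holding a position, a world-to-camera rotation matrix R, and a focal length; this is just one possible encoding of the camera vector, not a required one.

```python
import numpy as np

def project(camera, xyz):
    """Pinhole sketch of the projection function P: map a 3D world point to the
    2D image plane. 'camera' holds position (3,), R (3x3 world-to-camera rotation)
    and focal_length; xyz is a 3D point in world space."""
    p_cam = camera["R"] @ (np.asarray(xyz, dtype=float) - camera["position"])  # into camera space
    x, y, z = p_cam
    f = camera["focal_length"]
    return np.array([f * x / z, f * y / z])               # perspective divide discards depth

cam = {"position": np.zeros(3), "R": np.eye(3), "focal_length": 35.0}
print(project(cam, [1.0, 0.5, 10.0]))                      # -> [3.5, 1.75]
```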

The projection function transforms the 3D point and discards the depth component. Without knowing the depth of the point, an inverse projection function can only return a set of possible 3D points that form a line emanating from the nodal point of the camera lens and passing through the projected 2D point. We can express the inverse projection as:

xyz ∈ P'(camera, XY)

or P'(camera, XY) = {xyz : P(camera, xyz) = XY}
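
In code, the inverse projection is conveniently represented as a ray: without a known depth, a 2D point only constrains the 3D point to a line through the camera's nodal point. The sketch below reuses the same hypothetical camera dictionary as the earlier projection sketch and parameterizes the ray by an assumed depth.

```python
import numpy as np

def back_project(camera, XY, depth):
    """Sketch of the inverse projection P': return one candidate 3D point lying on
    the ray through the camera's nodal point and the 2D image point XY, at the
    given (assumed) depth. Varying depth sweeps out the whole set P'(camera, XY)."""
    x, y = XY
    f = camera["focal_length"]
    dir_cam = np.array([x / f, y / f, 1.0])        # ray direction in camera space
    dir_world = camera["R"].T @ dir_cam            # rotate back into world space
    return camera["position"] + depth * dir_world  # one xyz on the ray
```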

Let us say we are in a situation where the features we are tracking lie on the surface of a rigid object such as a building. Since we know that the real point xyz will remain in the same place in real space from one frame of the image to the next, we can treat the point as a constant even though we do not know where it is. So:

xyz_i = xyz_j

where the subscripts i and j refer to arbitrary frames in the shot we are analyzing. Since this is always true we know that:

P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}

Because the value of XY_i has been determined by the tracking program for all frames in which the feature is tracked, we can solve the reverse projection function between any two frames as long as P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) is a small set. The set of possible camera vector pairs that solve the equation at i and j is denoted C_ij:

C_ij = {(camera_i, camera_j) : P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}}

So there is a set of camera vector pairs C_ij for which the intersection of the inverse projections of the two points XY_i and XY_j is a non-empty, hopefully small, set centered on a theoretically stationary point xyz.

In other words, imagine a black dot floating in a white void and a camera. For any position in space at which we place the camera, there is a corresponding set of parameters (orientation, focal length, etc.) that will photograph that black dot in exactly the same way. Since C has an infinite number of members, one point is never enough to determine the actual camera position.
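
In practice the non-empty-intersection condition that defines C_ij is tested with a tolerance, since noisy tracks mean the two back-projected rays never intersect exactly. The sketch below, using the same hypothetical camera dictionary as the earlier sketches, keeps a pair of candidate camera vectors only if their rays through the two observations of a feature pass close to each other.

```python
import numpy as np

def ray_through(camera, XY):
    """Unit direction of the back-projected ray for image point XY (camera is the
    same hypothetical dict as before: position, R, focal_length)."""
    x, y = XY
    f = camera["focal_length"]
    d = camera["R"].T @ np.array([x / f, y / f, 1.0])
    return d / np.linalg.norm(d)

def rays_nearly_intersect(cam_i, XY_i, cam_j, XY_j, tol=0.01):
    """Practical stand-in for the set condition defining C_ij: accept the pair of
    candidate camera vectors if their back-projected rays pass within 'tol' of
    each other (in scene units)."""
    o_i, d_i = cam_i["position"], ray_through(cam_i, XY_i)
    o_j, d_j = cam_j["position"], ray_through(cam_j, XY_j)
    n = np.cross(d_i, d_j)
    if np.linalg.norm(n) < 1e-12:                       # parallel rays: point-to-line distance
        dist = np.linalg.norm(np.cross(o_j - o_i, d_i))
    else:                                               # skew lines: distance along common normal
        dist = abs(np.dot(o_j - o_i, n)) / np.linalg.norm(n)
    return dist < tol
```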

When we start adding tracking points, we can narrow down the possible camera positions. For example, if we have a series of points {xyz_(i,0), ..., xyz_(i,n)} and {xyz_(j,0), ..., xyz_(j,n)}, where i and j still refer to frames and n is an index into one of the many tracking points we are following, we can derive a set of camera vector pair sets {C_(i,j,0), ..., C_(i,j,n)}.

In this way multiple tracks allow us to narrow down the possible camera parameters. The set of camera parameters that fit, F, is the intersection of all of the sets:

F = C_(i,j,0) ∩ ... ∩ C_(i,j,n)

The fewer elements in this set, the closer we can come to extracting the actual parameters of the camera. In reality, errors introduced by the tracking process require a more statistical approach to determining a good camera vector for each frame; optimization algorithms and bundle block adjustment are often utilized. Unfortunately there are so many elements to a camera vector that when every parameter is free we still might not be able to narrow F down to a single possibility, no matter how many features we track. The more we can restrict the various parameters, especially the focal length, the easier it becomes to pinpoint the solution.
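
The statistical approach mentioned above amounts to choosing the cameras and 3D points that jointly minimize reprojection error over all tracks. The toy sketch below does this with SciPy's generic least-squares solver; it is a deliberately simplified stand-in for bundle adjustment (cameras reduced to translation only, rotation fixed, shared focal length), not any product's actual solver, and the function and argument names are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_cameras(tracks, n_frames, n_points, focal=35.0):
    """Toy bundle-adjustment-style solve. 'tracks' is a list of
    (frame_index, point_index, (u, v)) observations from the 2D tracking step.
    Returns per-frame camera positions and per-feature 3D points."""
    def residuals(params):
        cams = params[:n_frames * 3].reshape(n_frames, 3)   # one xyz position per frame
        pts = params[n_frames * 3:].reshape(n_points, 3)    # one xyz per tracked feature
        res = []
        for fi, pi, (u, v) in tracks:
            x, y, z = pts[pi] - cams[fi]                    # point in (axis-aligned) camera space
            res += [focal * x / z - u, focal * y / z - v]   # reprojection error in the image
        return np.asarray(res)

    x0 = np.zeros(n_frames * 3 + n_points * 3)
    x0[n_frames * 3 + 2::3] = 10.0       # start every 3D point a little in front of the cameras
    fit = least_squares(residuals, x0)   # statistical fit over all tracks at once
    cams = fit.x[:n_frames * 3].reshape(n_frames, 3)
    points = fit.x[n_frames * 3:].reshape(n_points, 3)
    return cams, points
```

A real solver also estimates rotation, focal length and lens distortion, handles the gauge ambiguity (overall scale and origin are arbitrary), and rejects outlier tracks.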

In all, the 3D solving process is the process of narrowing down the possible solutions to the motion of the camera until we reach one that suits the needs of the composite we are trying to create.

Point cloud projection

Once the camera position has been determined for every frame, it is then possible to estimate the position of each feature in real space by inverse projection. The resulting set of points is often referred to as a point cloud because of its raw appearance, like a nebula. Since point clouds often reveal some of the shape of the 3D scene, they can be used as a reference for placing synthetic objects, or by a reconstruction program to create a 3D version of the actual scene.
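
One simple way to estimate such a point, sketched below under the same hypothetical camera representation as the earlier examples, is to intersect the back-projected rays from every frame in which the feature was tracked, in the least-squares sense; repeating this for every track produces the raw point cloud.

```python
import numpy as np

def triangulate_track(cameras, observations):
    """Build one point of the point cloud: given the solved camera for each frame
    and the tracked 2D position (x, y) of one feature in those frames, find the
    3D point closest (in least squares) to all of the back-projected rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for cam, (x, y) in zip(cameras, observations):
        f = cam["focal_length"]
        d = cam["R"].T @ np.array([x / f, y / f, 1.0])   # ray direction in world space
        d /= np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)                   # projector perpendicular to the ray
        A += M
        b += M @ cam["position"]
    return np.linalg.solve(A, b)

# The raw point cloud is simply this repeated over every track, e.g.:
# cloud = [triangulate_track(cams, track_xy) for track_xy in tracks_2d]
```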

Ground-plane determination

The camera and point cloud need to be oriented in some kind of space. Therefore, once calibration is complete, it is necessary to define a ground plane. Normally, this is a unit plane that determines the scale, orientation, and origin of the projected space. Some programs attempt to do this automatically, though more often the user defines this plane. Since shifting ground planes performs a simple transformation of all of the points, the actual position of the plane is really a matter of convenience.
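
A common manual workflow is for the user to pick a few point-cloud points known to lie on the floor; the sketch below builds the rigid transform that maps that plane onto y = 0 with the first picked point at the origin. The axis conventions, the function name, and the choice of exactly three points are illustrative assumptions.

```python
import numpy as np

def ground_plane_transform(p0, p1, p2):
    """Given three point-cloud points chosen on the floor, build a rotation R and
    translation t so that p_new = R @ p_old + t places p0 at the origin and the
    floor on the y = 0 plane."""
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    x_axis = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.cross(p1 - p0, p2 - p0)
    y_axis = normal / np.linalg.norm(normal)       # the plane normal becomes the new "up"
    z_axis = np.cross(x_axis, y_axis)
    R = np.stack([x_axis, y_axis, z_axis])         # rows = new axes expressed in old coordinates
    return R, -R @ p0

# Applying (R, t) to every camera and point-cloud point merely re-expresses the
# scene; as noted above, where the plane sits is a matter of convenience.
```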

Reconstruction

Reconstruction is the interactive process of recreating a photographed object using tracking data. This technique is related to photogrammetry. In this particular case we are referring to using match moving software to reconstruct a scene from incidental footage.

A reconstruction program can create three-dimensional objects that mimic the real objects in the photographed scene. Using data from the point cloud and the user's estimation, the program can create a virtual object and then extract a texture from the footage that can be projected onto the virtual object as a surface texture.



2D vs. 3D

Match moving has two forms. Some compositing programs, such as Shake, Adobe After Effects, and Discreet Combustion, include two-dimensional motion tracking capabilities. Two-dimensional match moving only tracks features in two-dimensional space, without any concern for camera movement or distortion. It can be used to add motion blur or image stabilization effects to footage. This technique is sufficient to create realistic effects when the original footage does not include major changes in camera perspective. For example, a billboard deep in the background of a shot can often be replaced using two-dimensional tracking, as in the sketch below.
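
A minimal form of that two-dimensional case is sketched below: a replacement element is simply pinned to a single tracked feature frame by frame, with no attempt to account for perspective or distortion (exactly the limitation described above). The function name, data layout, and naive paste are illustrative only.

```python
import numpy as np

def attach_overlay_2d(frames, track, overlay):
    """Pin a small overlay image to one tracked 2D feature, frame by frame.
    'frames' is a list of HxWx3 arrays, 'track' a list of (row, col) per frame,
    'overlay' a smaller hxwx3 array. No blending, warping or perspective."""
    h, w = overlay.shape[:2]
    out = []
    for frame, (r, c) in zip(frames, track):
        composited = frame.copy()
        composited[r:r + h, c:c + w] = overlay   # naive paste at the tracked position
        out.append(composited)
    return out
```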

Three-dimensional match moving tools make it possible to extrapolate three-dimensional information from two-dimensional photography. These tools allow users to derive camera movement and other relative motion from arbitrary footage. The tracking information can be transferred to computer graphics software and used to animate virtual cameras and simulated objects. Programs capable of 3D match moving include:

  • 3DEqualizer from Science.D.Visions (which won an Academy Award for Technical Achievement)
  • Blender (open source; using libmv)
  • Voodoo
  • ACTS, an automatic camera tracking system with dense depth recovery for image/video editing
  • LS-ACTS, a robust and efficient structure-from-motion system that can handle large video/sequence datasets in near real time and works well in challenging cases (e.g. loopback sequences and multiple sequences)
  • VISCODA VooCAT
  • Icarus (University of Manchester research project, now discontinued but still popular)
  • Maya MatchMover
  • The Pixel Farm PFTrack, PFMatchit, PFHoe (based on PFTrack algorithms)
  • SynthEyes by Andersson Technologies
  • Boujou (which won an Emmy award in 2002)
  • NukeX from The Foundry
  • fayIN, a plug-in for Adobe After Effects, from fayteq
  • CameraTracker (a plug-in for Adobe After Effects) from The Foundry
  • VideoTrace from Punchcard (software for generating 3D models from videos and images)
  • IXIR 2D Track Editor, which handles 2D track and mask files from software such as 3DEqualizer, PFTrack, Boujou, SynthEyes, MatchMover, Movimento, Nuke, Shake, Fusion, After Effects, Combustion, Mocha, Silhouette
  • mocha Pro from Imagineer Systems, a planar-tracker-based utility for post production



Automatic vs. interactive tracking

There are two methods by which motion information can be extracted from an image. Interactive tracking, sometimes referred to as "supervised tracking", relies on the user to follow features through a scene. Automatic tracking relies on computer algorithms to identify and track features through a shot. The movements of the tracked points are then used to calculate a "solution". This solution is composed of all the camera's information, such as its motion, focal length, and lens distortion.

The advantage of automatic tracking is that the computer can create many points much faster than a human can. A large number of points can be analyzed statistically to determine the most reliable data. The disadvantage of automatic tracking is that, depending on the algorithm, the computer can easily become confused as it tracks objects through the scene. Automatic tracking methods are particularly ineffective in shots involving fast camera motion, such as that seen with hand-held camera work, and in shots with repetitive subject matter such as small tiles or any sort of regular pattern where one area is not very distinct. This tracking method also suffers when a shot contains a large amount of motion blur, making the small details it needs harder to distinguish.

The advantage of interactive tracking is that a human user can follow features through an entire scene and will not be confused by features that are not rigid. A human user can also determine where features are in a shot that suffers from motion blur; it is extremely difficult for an automatic tracker to correctly find features with high amounts of motion blur. The disadvantage of interactive tracking is that the user will inevitably introduce small errors as they follow objects through the scene, which can lead to what is called "drift".

Professional-level motion tracking is usually achieved using a combination of interactive and automatic techniques. An artist can remove points that are clearly anomalous and use "tracking mattes" to block confusing information out of the automatic tracking process. Tracking mattes are also employed to cover areas of the shot that contain moving elements such as an actor or a spinning ceiling fan.

Tracking mattes

A tracking matte is similar in concept to a garbage matte used in traveling matte compositing. However, the purpose of a tracking matte is to prevent the tracking algorithm from using unreliable, irrelevant, or non-rigid tracking points. For example, in a scene where an actor walks in front of a background, the tracking artist will want to use only the background to track the camera through the scene, knowing that the motion of the actor will throw off the calculations. In this case, the artist will construct a tracking matte to follow the actor through the scene, blocking that information from the tracking process, as in the sketch below.
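
In data terms, a tracking matte simply marks pixels that the tracker must ignore. A minimal sketch of applying one, with hypothetical data layouts (a boolean mask per frame, True meaning blocked, and tracks as lists of per-frame positions), might look like this:

```python
def filter_tracks_with_matte(tracks, mattes):
    """Drop any track whose 2D position ever falls inside the matte, so points on
    the rotoscoped actor (or other moving element) never reach the camera solve.
    'tracks' is a list of tracks, each a list of (frame_index, (row, col));
    'mattes' maps frame_index to a boolean mask (True = blocked)."""
    kept = []
    for track in tracks:
        blocked = any(mattes[fi][r, c] for fi, (r, c) in track)
        if not blocked:
            kept.append(track)
    return kept
```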



Refine

Because there are often multiple possible solutions to the calibration process and a significant amount of error can accumulate, the final step in match moving often involves refining the solution by hand. This could mean altering the camera motion itself or giving hints to the calibration mechanism. This interactive calibration is referred to as "refining".

Most match moving applications are based on similar algorithms for tracking and calibration. Often, the initial results obtained are similar. However, each program has different refining capabilities.



Real time

Real-time, on-set camera tracking is becoming more widely used in feature film production to allow elements that will be inserted in post-production to be visualized live on set. This has the benefit of helping directors and actors improve their performances by actually seeing set extensions or CGI characters while (or shortly after) they do a take. They no longer need to perform to green/blue screens with no feedback of the end result. Eye-line references, actor positioning, and CGI interaction can now be done live on set, giving everyone confidence that the shot is correct and will work in the final composite.

To achieve this, a number of components from hardware to software need to be combined. The software collects all six degrees of freedom of camera motion as well as metadata such as zoom, focus, iris, and shutter elements from many different types of hardware devices, ranging from motion capture systems such as the active-LED-marker-based system from PhaseSpace or passive systems such as Motion Analysis or Vicon, to rotary encoders fitted to camera cranes and dollies such as Technocranes and Fisher dollies, or inertial and gyroscopic sensors mounted directly to the camera. There are also laser-based tracking systems that can be attached to anything, including Steadicams, to track cameras outside in the rain at distances of up to 30 meters. A sketch of the kind of per-frame data packet such a pipeline might stream is shown below.
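
Whatever hardware is used, the result is a stream of per-frame camera samples combining the six degrees of freedom with the lens metadata listed above. The sketch below shows one hypothetical shape such a packet might take; the field names and units are illustrative and do not correspond to any vendor's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class CameraSample:
    """One frame of real-time camera tracking data streamed to the renderer."""
    timecode: str          # e.g. "01:02:03:04"
    position: tuple        # (x, y, z) in metres, from the tracking system
    rotation: tuple        # (pan, tilt, roll) in degrees
    zoom: float            # focal length in mm, from the lens encoder
    focus: float           # focus distance in metres
    iris: float            # aperture (f-stop)
    shutter: float         # shutter angle in degrees
```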

Motion control cameras can also be used as a source of, or destination for, 3D camera data. Camera moves can be pre-visualized in advance and then converted into motion control data that drives a camera crane along precisely the same path as the 3D camera. The encoders on the crane can also be used in real time on set to reverse this process and generate a live 3D camera. The data can be sent to any number of different 3D applications, allowing 3D artists to modify their CGI elements live on set as well. The main advantage is that set design issues that would be time-consuming and costly to address later down the line can be sorted out during the shooting process, ensuring that the actors "fit" within each environment for each take as they perform.

Real-time motion capture systems can also be mixed into the camera data stream, allowing virtual characters to be inserted into live shots on set. This dramatically improves the interaction between real and MoCap-driven virtual characters, as both plate and CG performances can be choreographed together.



See also

  • 1st & Ten (graphics system)
  • PVI Virtual Media Services
  • Structure from motion
  • Virtual studio






External links

  • Matchmoving explained on the FLIP Animation blog. Retrieved May 2013.

Source of the article: Wikipedia
