
Search Results


  • Understanding lenses & camera tracking

    In this article, we focus purely on the lens, a component many consider to be the most influential factor in a film's look. But how does it influence the way we matchmove? What is lens distortion, and how does it affect camera tracking?

    When taking a picture or filming, the job of the lens is to direct beams of light onto the film or image sensor. In reality, lenses are not perfect at performing this job, and light from a straight-line object often ends up as a curved line, which results in a distorted image. This is called lens distortion. The most straightforward types of lens distortion are barrel distortion, where straight lines curve outwards, and pincushion distortion, where straight lines curve inwards. Lens distortion is usually more pronounced towards the edges of the frame.

    Lens distortion causes features in the captured image to deviate from their position in the real world, which does not suffer from lens distortion. Matchmoving applications, however, often assume such an ideal, distortion-free camera as the underlying model used to reconstruct the camera and movement of a shot. Where image features deviate from the position a perfect camera would record, their corresponding reconstructed 3D positions will not match their real-world locations. In the worst cases, this can cause your camera track to fail.

    But that's not where lens distortion's influence in visual effects ends. The mathematically perfect cameras in 3D animation packages do not exhibit any lens distortion either, so undistorted CG renders would not fit the distorted live-action plate. Even where 3D packages can artificially distort their renders, the distortion must match the physical lens's distortion for the composite to work. In practice, the effects of lens distortion on the plate (the live-action image) are removed during camera tracking, which makes the matchmoving artist responsible for dealing with lens distortion.
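Barrel and pincushion distortion are often illustrated with a simple one-term radial polynomial. The sketch below is a hedged, generic example of that idea (not PFTrack's actual distortion model): a point at radius r from the optical centre is moved to r(1 + k1 r^2), and the sign of k1 decides whether lines bow outwards or inwards.

```python
import math


def distort(x, y, k1):
    """Apply one-term radial distortion to a normalised image point.

    (x, y) are coordinates relative to the optical centre, with the frame
    edges around |x|, |y| ~ 1.  With this sign convention, k1 < 0 pulls
    points towards the centre (barrel) and k1 > 0 pushes them outwards
    (pincushion).  Illustrative only, not PFTrack's internal model.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale


# The displacement grows with r**3, so a point near the edge of frame moves
# 27x further than a point a third of the way out -- which is why distortion
# is most visible towards the edges.
edge = distort(0.9, 0.0, -0.1)  # barrel: pulled inwards
mid = distort(0.3, 0.0, -0.1)
print(edge[0], mid[0])
```

Real lens models add higher-order terms (k2 r^4 and so on); when k1 and k2 have opposite signs, the bend reverses direction across the frame, which is one way moustache distortion arises.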
    As a result, you get a mathematically perfect virtual camera and undistorted plates. The virtual camera is used to render the CG elements, which are then composited into the undistorted plates. At this point, we have perfectly matched CG integrated into the undistorted live-action plate. However, with other (non-VFX) parts of the footage still exhibiting lens distortion, your undistorted VFX shots may stand out, even if the CG is perfectly matched. That's why, at the end of this process, the original lens distortion is re-applied to the composited frames. Consequently, a matchmoving application not only needs the ability to remove lens distortion and export undistorted plates but must also provide a means to re-apply the same lens distortion to the composited result.

    Types of lenses

    There are (at least) two ways of classifying lenses: prime (or fixed focal length) versus zoom, which can be further complicated by the lens being spherical or anamorphic. Prime lenses cannot change their focal length (more on focal length below), whereas zoom lenses can do so within their zoom range. Not being able to change the focal length comes with some advantages for prime lenses. The more straightforward design and fewer optical elements in the lens commonly result in a higher-quality image, for example, exhibiting less distortion than comparable zoom lenses.

    A rule of thumb for matchmoving is that the more information about the real live-action camera you have, the easier it is to get a good solution. When it comes to collecting this camera information to assist camera tracking, prime lenses have the advantage that if you know which lens was used for a shot, you automatically know its focal length. This is much harder with zoom lenses: even if you know which lens was used, you still don't know the focal length the lens was set to, and it is much harder to keep track of any focal length changes, ideally with frame accuracy.
    The good news is that knowing the type of zoom lens can still help matchmoving. If nothing more, knowing the range of a zoom lens provides boundaries when calculating the actual focal length for a frame during matchmoving.

    Anamorphic lenses' breakthrough in filmmaking began with the adoption of widescreen formats: the scene was squeezed horizontally to utilise as much of the film's surface area as possible. With digital sensors, the need for anamorphic lenses is reduced to aesthetic considerations. Common anamorphic lenses squeeze the horizontal by a factor of 2, which means that in the digitised image, a single pixel represents an area twice as wide as it is high, compared to the square pixels of spherical lenses. When matchmoving anamorphic footage, make sure to account for the correct pixel aspect ratio. In the above example, this ratio would be the common 2:1, but there are also lenses with different squeeze factors. Anamorphic lenses are available as both prime and zoom lenses.

    Focal length in matchmoving

    Focal length is a lens's most prominent property; it is often the first thing used to distinguish one lens from another, and primes from zooms. The focal length, usually denoted in millimetres (mm), defines, for a given camera, the extent of the scene that is captured through the lens. This is also called the (angular) field of view (FOV). It is no surprise that focal length also plays a part in matchmoving. It may surprise you, however, that focal length is only half the story when it comes to camera tracking: matchmoving applications are interested in the field of view rather than any focal length value in mm. To calculate this field of view, they need to know both the focal length and the size of the camera's sensor or film back. You may have encountered this relationship with the term 35mm equivalent focal length. For example, the iPhone 5S' primary camera's sensor size is 4.89 x 3.67mm, and its lens has a focal length of 4.22mm.
    Its 35mm equivalent focal length, however, is 29mm, which means that to get the same FOV with a full-frame 36x24mm sensor, you would need a 29mm lens rather than the 4.22mm lens for the iPhone's smaller sensor. This relationship is sometimes called the crop factor, as RED Digital Cinema explains in Understanding Sensor Crop Factors. Luckily, the sensor sizes of most digital cameras can easily be found online, for example, in the VFX Camera Database, so always note the camera model as well as the lens when collecting information on set.

    The matter gets a bit more complicated through today's plethora of different sensor sizes and the fact that, depending on the format, not all of the sensor is used to capture images. In the above illustration, it doesn't matter whether the sensor in the bottom camera is smaller than in the top camera or whether just a smaller part of the sensor is used due to the chosen format. For example, your camera's resolution may be 4500 x 3000, a 3:2 aspect ratio. If you plan to shoot HD video with an aspect ratio of 16:9, some parts of the sensor will not be recorded in the video. For a full-frame sensor, this reduces the effective sensor size for HD video from 36 x 24mm to 36 x 20.25mm, as illustrated below. Depending on the sensor size and format, cropping may occur at the top and bottom, as in the example above, or at the sides of the sensor.

    Conclusion: Lenses & Camera Tracking

    The camera's lens significantly impacts the VFX pipeline, and the matchmove artist's job is to mitigate most of this impact. The Pixel Farm's matchmoving application, PFTrack, has a wide range of tools to use information about the lens and camera and to handle situations where no such information is available. It also provides the tools required to manage all aspects of lens distortion.

    Links: Head back to Learning Articles. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros.
You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.
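The focal-length arithmetic in the article above is easy to sketch. The following hedged Python example reproduces the quoted numbers: the iPhone 5S field of view, its 35mm-equivalent focal length (computed here from the sensor diagonal, which lands near the quoted 29mm), and the effective sensor size when a full-frame 3:2 sensor records 16:9 video.

```python
import math


def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """Angular horizontal field of view from focal length and sensor width."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))


def equivalent_35mm_focal(focal_mm, sensor_w_mm, sensor_h_mm):
    """35mm-equivalent focal length via the diagonal crop factor."""
    crop_factor = math.hypot(36.0, 24.0) / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop_factor


def effective_sensor(width_mm, height_mm, target_aspect):
    """Sensor area actually used when recording a different aspect ratio.

    A wider target format crops the top and bottom of the sensor;
    a narrower one crops the sides.
    """
    if target_aspect > width_mm / height_mm:
        return width_mm, width_mm / target_aspect
    return height_mm * target_aspect, height_mm


# iPhone 5S main camera (figures from the article): 4.89 x 3.67 mm, 4.22 mm lens.
print(horizontal_fov_deg(4.22, 4.89))           # ~60 degrees
print(equivalent_35mm_focal(4.22, 4.89, 3.67))  # ~30 mm (Apple quotes 29 mm)

# Full-frame 36 x 24 mm sensor shooting 16:9 HD video: 36 x 20.25 mm effective.
print(effective_sensor(36.0, 24.0, 16 / 9))
```

Since a matchmoving application ultimately needs the FOV, the practical takeaway is to enter the effective (format-cropped) sensor size rather than the full sensor dimensions.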

  • Camera angles and shot sizes explained

    Understanding the language used for shot sizes and camera angles, and combining it with the terms used for camera motion, can help you identify and communicate the type of shot you have been assigned. Additionally, it can help you break a shot into components to get a better estimate of how long something will take. This article examines and demystifies the industry terms used to describe camera shot sizes and angles.

    Shot Size: what is it?

    It may not initially seem like the shot size will affect your matchmoving in the same way camera movement does, but depending on how we arrive at a specific framing choice, it can provide some useful clues that help during the matchmoving process. Let's take a look at the terms we use to describe shot size and angle. The shot size determines how large or small a character or subject is in the frame relative to their surroundings. How we arrive at the various framing sizes can also impact matchmoving. Traditionally, the director or director of photography would choose a favoured single focal length for a scene, and the camera would be physically moved backwards and forwards until the correct shot size was achieved. Alfred Hitchcock famously used 50mm throughout most of his films, building the sets to accommodate the focal length. The benefit of a single focal length is that the distortion will be consistent across all shots.

    Sometimes, due to a scene's physical location, a given focal length is not possible. For example, to achieve the angle of view required for a particular framing size in a small room, the DOP may have to swap to a shorter focal length, which can distort trackable features across the frame. Occasionally, the shot size needs to be adjusted after shooting has finished. This could be for a technical reason, such as cropping an undesired element out of a shot, or for a thematic reason, perhaps because there wasn't coverage from a specific frame size for that section.
    This is where one of the trickiest situations for matchmoving can arise. Panning and scanning a clip in post-production means the optical centre no longer coincides with the centre of the image, which can adversely affect how the camera motion, focal length, and distortion are calculated. We will start by looking at the widest perspective and move towards the narrowest.

    Extreme long shot (ELS) / Very long shot (VLS)

    Starting with the extreme/very long shot: this type of shot is used to establish a scene, usually the geography of where a character or subject may be. As long as there is sufficient parallax and low distortion, the ELS can provide many trackable features due to its large angle of view.

    Long shot (LS)

    This shot size, sometimes referred to as a wide shot or full shot, is frequently used for action shots showing the character or subject in full and in context with their surroundings. Often used for master shots, this shot size, along with the ELS, is one of the more common shots you will encounter when creating set extensions for a scene.

    Medium long shot (MLS)

    The medium long shot, also known as the three-quarters shot, refers to framing a character from the knees up. Wider than a medium shot and closer than a long shot, this shot type allows multiple characters and elements to be in frame at the same time while being close enough for dialogue.

    Medium shot (MS)

    The medium shot frames the character from the waist up, which is why it is sometimes called a waist shot. It is a general-purpose shot intended to direct the viewer's attention to the character and their motions rather than the surroundings.

    Medium close-up (MCU)

    Closer than the medium shot, the medium close-up is usually framed from the chest or shoulders up and is used to showcase a character's face. It is used mostly for dialogue shots, and the surroundings generally don't feature heavily in this framing set-up.
    Close-up (CU)

    Framed from the neck up, the character's face will almost fill the entire frame. The close-up is used to focus the viewer more intensely on the character's facial detail and expressions. You might see a shot like this where geometry tracking is required to apply digital makeup effects or to replace the head entirely. It can be tricky to establish camera placement with a close-up, as the trackable features may be obscured or blurred.

    Extreme close-up (ECU)

    Sometimes referred to as a big close-up, the extreme close-up will frame only a portion of a character's face. A famous example is the opening of Blade Runner (1982), where a character's eye fills most of the frame. As with the close-up, you might come across this type of shot where geometry tracking is required to change or replace a key part of the character's face.

    Insert (INS)

    The definition of an insert varies greatly depending on whom you talk to, but the traditional definition is a detail shot of an inanimate object or a part of the body other than the head. The purpose is to look closely at something in the scene; an example is a hand operating a dial on a radio. Inserts can be taken from multiple angles but are generally a tight shot size similar to a close-up. You might come across this shot size when matchmoving the camera so that a digital object can be placed in the scene.

    Angles

    Knowledge of the camera's angle can be very useful when matchmoving, as it helps determine the orientation of the camera relative to the subject. Below are some of the more common angles you are likely to encounter.

    Eye level

    Sometimes referred to as neutral, this angle is filmed from the viewer's perspective or the character's eyeline. With this type of shot, you can usually make an educated guess that the camera will be around 5-6 feet from the ground.
    High angle

    Taken from above eye level with the camera pointed downward, this angle is often used to convey the vulnerability of a character or subject in the scene. The elevated perspective can sometimes make it easier than other angles to establish a ground plane.

    Top shot

    Also called a bird's-eye view, this shot is taken from a straight-down perspective, usually from quite a high elevation, to show the context of the character and their surroundings. Due to the flattened perspective, it can be tricky to matchmove, especially if shot from a high elevation on a longer focal length with little change in the elevation of the geography.

    Low angle

    Shot from a low position and angled up, this is perhaps one of the trickier angles to matchmove, as there is no easy way of determining where the ground plane is. Additional cameras filming the same scene can potentially be used in PFTrack to accurately determine the low angle's orientation.

    Canted angle

    Also known as a Dutch tilt, this framing rolls the camera on its side axis so that the horizon is not parallel with the bottom of the frame. Canted angles are often used to convey unease within a scene.

    While specific shot sizes and angles can initially seem problematic, some useful tools in PFTrack can help you find a solution. For example, you can matchmove multiple cameras into the same 3D scene by looking for similar features in each shot. You can even use set photos and witness cameras to help.

    Conclusion

    Now, when you hear someone say they are working on a low-angle long shot tracking into a medium close-up, you can already picture what this might look like and the components that make up the shot.

    Links: Head back to Learning Articles. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros. You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.

  • Types of camera movement explained

    Knowledge of camera movement, shot size, and angle is essential for every skilled matchmove artist. When directors, editors, and camera operators refer to particular types of camera shots, the terminology can sound like a foreign language if you're unfamiliar with it. Our essential guide to camera movement will help demystify some of the common terminology used in film production.

    Core Camera Movement Types

    This article examines some of the more common terms we use to describe how a camera moves through a scene. Camera motion is a fundamental part of how we narrate a story visually and has created some of the defining moments in popular films, such as the contra-zoom in Jaws (1975) or the Steadicam shots in The Shining (1980). For matchmoving, camera motion is essential in determining the correct scale, position, and orientation of the camera within a 3D scene. This article will help you identify the types of camera motion that make up your shots. While it is not a complete summary of all camera movement types and terms, it covers the widely used core essentials of every production.

    Static / Lock-off

    The static shot, sometimes called a lock-off, has no intentional camera movement. While this might seem an easy shot to matchmove, as there is no camera movement to match, it can be tricky to match the perspective exactly when integrating CGI. However, you can eliminate the guesswork and position a static camera accurately with an application like PFTrack, which can use multiple cameras to solve a scene even if one camera isn't moving.

    Pan

    A panning shot rotates the camera horizontally to the right or left of a given starting position. Depending on the choice of focal length, the relative motion of objects near to and far from the lens will be exaggerated: wide-angle lenses will make distant objects move slowly and seem far away, while longer focal lengths will make objects in the distance seem closer and move more quickly.
    With good parallax, matchmoving a panning shot can be relatively easy.

    Nodal pan

    A nodal pan involves the same movement to the left or right as a standard pan. The difference is that a nodal pan rotates the camera around the entrance pupil of the optics, which eliminates parallax in the shot. This type of movement is useful for stitching plates together for visual effects shots or for generating a large digital matte where parallax would be an issue, and it was sometimes used in the past to disguise foreground miniatures in forced-perspective shots. These shots can be tricky to generate a virtual camera from, as there are little to no clues to the depth of the scene.

    Tilt

    A tilt is the camera's vertical rotation up or down, usually from a fixed position, while keeping the horizontal axis consistent. Tilts are often used in establishing shots or in a reveal. Depending on the lens used and the position of the camera on the tripod, these shots can be trickier to matchmove than a pan.

    Pan and tilt

    This is a combination of horizontal and vertical motion from a fixed point. An example shot may follow a character as they walk from one end of a room to the other, panning and tilting the camera to keep the framing consistent.

    Track / dolly

    A tracking shot, also known as a dolly shot, is the forward and backward motion of the camera, commonly used to follow a character as they traverse a scene. While these shots can seem daunting to matchmove, with suitable masking it can actually be quite easy to find a solution.

    Lateral track / crab / truck

    Similar to a standard tracking shot, a lateral track – or crab – is the sideways movement of the camera. Depending on the scene, this type of shot can provide a large amount of parallax, which is useful when calculating depth and solving a camera.
    Some good examples of lateral tracking shots can be found in the films of Wes Anderson and Steven Spielberg.

    Crane / pedestal / jib

    This is the vertical raising or lowering of the camera, which will normally remain in relatively the same position while moving up or down. On some rigs, the camera can be boomed out to create a more complex motion. These shots are often used to establish the geography of a scene, starting high and lowering to eye level. Thanks to the elevated perspective, crane shots can make it more straightforward than other shots to establish a good ground plane when matchmoving.

    Handheld

    Handheld is as it sounds – the camera operator is hand-holding the camera, usually shoulder-mounted or slung underarm. Movement of the camera is completely free because there are no mechanical axial restrictions. Some good examples of handheld camera work can be found in Paul Greengrass's films. Motion blur can become a factor when attempting to matchmove handheld shots, and the motion can be hard to predict due to its non-linear nature.

    Stabilised

    Usually mounted on a Steadicam, gimbal, or a combination of the two, a stabilised camera moves through the scene, performing many, if not all, of the same moves as a handheld camera but with the high-frequency movement removed. Smooth, stable shots with linear motions are generally much easier to matchmove.

    Aerial / drone

    Aerial shots, taken from either a helicopter or a drone, allow the camera to reach a greater elevation than a crane or jib while being stabilised via a gimbal to remove high-frequency movement. They are usually combined with other camera moves and tracked forwards or backwards through the scene to establish an environment or to follow the action from a greater elevation. Due to the elevated perspective, these shots often provide plenty of trackable detail and parallax when matchmoving.
    Conclusion

    Of course, shots can combine many of the techniques above, and there are many more complex camera movements, but it's good to be able to identify the basic components that make up a shot. In part two, we will examine the common terms used to describe the framing of a scene in both size and angle.

    Links: Head back to Learning Articles. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros. You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.

  • Why you need to know about distortion calibration

    What is lens distortion, and what is distortion calibration? Why is it crucial for your camera tracking and visual effects workflow? In this article, we'll explore lens distortion and distortion calibration and explain how they affect your camera tracking process. We'll also highlight why properly calibrating distortion is essential for achieving accurate results in visual effects workflows. Understanding and correcting lens distortion ensures seamless integration between real-world footage and CG elements, making it a key step in any VFX project.

    What is lens distortion? Why should I care?

    Lens distortion comes in several forms, each affecting the image uniquely. The three most common types are barrel distortion, pincushion distortion, and moustache distortion. These can significantly impact visual effects (VFX), particularly camera tracking and the integration of CGI elements within a scene.

    Barrel Distortion

    In barrel distortion, straight lines appear to bulge outward from the centre of the image, creating a "barrel" effect. This type is common with wide-angle lenses and can cause real-world objects to look unnaturally curved, especially near the edges of the frame.

    Pincushion Distortion

    Pincushion distortion causes straight lines to bend inward, creating a pinched effect toward the centre of the image. This distortion is often seen in telephoto or zoom lenses and compresses the visual field, making objects near the edges appear stretched toward the corners.

    Moustache (or Complex) Distortion

    Moustache distortion is a combination of barrel and pincushion distortion, where the image curves outward near the centre and inward toward the edges. This more complex form is often found in some wide or zoom lenses, creating a wavy, uneven warping across the image.

    Why It Matters for VFX Camera Tracking

    In VFX, camera tracking is the process of matching the movement of a virtual camera to that of the real-world camera used to film live-action footage.
    Lens distortion introduces inconsistencies that can throw off this critical match-up. For example, straight edges in a scene may curve unexpectedly, making it challenging to track points and solve them into their correct 3D positions. If lens distortion is ignored, the camera solution may work, but it won't be accurate.

    Correcting lens distortion is one of the first steps in the visual effects pipeline. By accounting for and removing distortion, VFX artists ensure that their computer-generated imagery (CGI) elements integrate seamlessly with the live-action footage. If distortion is ignored, tracking points can become misaligned, leading to digital assets that appear unnatural or misplaced in the scene. This makes understanding distortion calibration essential, from accurately tracking a real-world camera right through to realistically integrating CGI into a final VFX composite. (In this simplified overview of a visual effects (VFX) pipeline, we can see how crucial distortion calibration is and why matchmoving, or camera tracking, forms the foundation of VFX work.)

    STMaps

    An STMap in visual effects is a type of UV map used to warp or distort images based on texture coordinates. It stores pixel values representing positions from one image, allowing you to remap or undistort another image using those coordinates. STMaps are commonly used for tasks like lens distortion correction, texture mapping, and compositing. They help ensure images align properly when integrating 3D elements into live-action footage. PFTrack can generate STMaps to both undistort and redistort images.

    What is distortion calibration?

    Distortion calibration is the process of identifying and correcting optical distortions in camera footage. There are three main ways to approach distortion calibration in PFTrack:

    Automatic calibration, calculated during tracking and solving, doesn't need specially shot footage but may not always yield the best results.
    Calibration grids, on the other hand, require special footage shot with the same camera and optics as the original plates for accurate undistortion.

    Measuring distortions involves identifying straight lines in the footage, like a lamp post, and using a tool to map and correct any curvature.

    I will focus on calibration grids, one of the most common methods of distortion calibration. Using footage of calibration grids in software like PFTrack allows optical distortions to be analysed by detecting known patterns, such as a checkerboard, within the footage. Once the pattern is identified, the software calculates the degree and type of distortion affecting the image. This data is then used to adjust the footage, correcting lines and image geometry to more accurately reflect the scene, free from lens imperfections.

    Checkerboard Lens Chart

    A checkerboard lens chart is a common calibration tool for measuring lens distortion. It features a grid of alternating black and white squares designed with precise dimensions. This regular pattern allows the software to detect how the camera lens distorts straight lines and geometry, especially at the edges of the frame. It is one of the most widely used methods of distortion calibration.

    Calibration Pattern

    A calibration pattern is a general term for any structured design used to calibrate a camera lens. These patterns can include dots or grids with geometric shapes like circles or crosses. They provide reference points for software to measure lens distortion and other camera parameters. The pattern's known geometry enables precise calculation of lens characteristics, which is crucial for accurate 3D camera tracking and aligning computer-generated (CG) elements in visual effects. In a typical visual effects pipeline, this process creates industry-standard STMaps, which can later be used to undistort or redistort footage captured with the same camera and lens.

    Can I calibrate my lens by myself?
    Although calibration may seem complicated, it's actually quite simple to do on your own. One of the key advantages of PFTrack is its flexibility: it can work with standard checkerboard lens charts if you have them, or you can use the built-in calibration tools. The built-in tools offer a big benefit: you don't need an expensive checkerboard chart or a full lighting setup. A workstation monitor or TV, being self-illuminated and flat, makes it easy to calibrate distortion.

    I launched the built-in calibration pattern within PFTrack, set up a Sony Cine camera with a vintage Lomo 18mm prime (the same setup I used for the shot I planned to track), placed it on a sturdy, levelled tripod, and positioned it squarely in front of the 27" monitor. I then recorded a short 10-second clip. I loaded the calibration pattern footage into PFTrack and ensured that I entered accurate details for both the Sony camera and the Lomo OKC1-18-1 18mm spherical prime lens. To find the correct sensor size, I used the VFX camera database, which is an excellent resource for this information.

    Many people hesitate to use vintage optics because older lens designs often have challenges like soft edges, veiling, low contrast, and noticeable distortion, which can be difficult to manage in a visual effects pipeline. However, despite the optical quirks of the Lomo 18mm, PFTrack handled it with ease, successfully detecting the calibration pattern and accurately calculating the distortion in the image.

    Conclusion

    Distortion calibration is critical to ensuring that visual effects align seamlessly with live-action footage. By understanding and correcting lens distortion, you prevent issues in camera tracking and CGI integration, resulting in a more realistic final product. With PFTrack, distortion calibration is straightforward and efficient, allowing you to create precise corrections in just a few steps.

    Links: Head back to Learning Articles.
Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros. You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.
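To make the STMap idea from the article above concrete, here is a minimal, hypothetical sketch (not PFTrack's implementation): an STMap stores, for every output pixel, normalised (s, t) coordinates that say where in the source image to sample, so applying it is just a per-pixel lookup.

```python
def apply_stmap(src, stmap):
    """Warp `src` using an STMap.

    `src` is a 2D list of pixel values; `stmap` is a same-sized 2D list of
    (s, t) pairs in [0, 1], where (0, 0) is one corner of the source and
    (1, 1) the opposite corner.  Nearest-neighbour sampling keeps this
    sketch short; production tools interpolate (e.g. bilinearly).
    """
    h, w = len(src), len(src[0])
    out = []
    for row in stmap:
        out_row = []
        for s, t in row:
            # Clamp the normalised coordinates into the source image.
            x = min(w - 1, max(0, round(s * (w - 1))))
            y = min(h - 1, max(0, round(t * (h - 1))))
            out_row.append(src[y][x])
        out.append(out_row)
    return out


# An identity STMap (each pixel's own normalised position) leaves the image
# unchanged; an undistort or redistort STMap would bend these values instead.
src = [[1, 2], [3, 4]]
identity = [[(0.0, 0.0), (1.0, 0.0)], [(0.0, 1.0), (1.0, 1.0)]]
print(apply_stmap(src, identity))  # [[1, 2], [3, 4]]
```

This is why one calibration can drive both directions of the pipeline: the same camera and lens yield one STMap that undistorts the plate and another that redistorts the finished composite.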

  • Tutorial 04: A Closer Look at Tracking and Solving Issues

    Learn how to refine your solved camera using these essential techniques: manage moving objects using masks, calibrate the camera, and resolve tracking errors in the camera solver.

    Tracking and solving issues video overview

    0:00 - Brief overview of what we will be covering in this video
    0:28 - Analysing the clip and learning the camera motion
    1:07 - Performing the initial track & solve of the clip
    1:42 - Checking the results and discussing the issues
    4:03 - Using masking to occlude moving objects from the camera solver
    7:29 - Calibrating the virtual camera and re-solving the clip
    11:04 - Using the tools in the camera solver to remove errors in the solve
    17:00 - Refining the camera solve with all the fixes made so far
    17:31 - A look at the updated solve and video conclusion

    Before you begin, download the latest version of PFTrack and the media files linked below.

    Downloads

    Download the assets used in this video

    Links: Head back to PFTrack Tutorials. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros.

  • Tutorial 03 - Aligning the camera & exporting

    In this video, we'll cover how to reorient your virtual camera, correctly scale your 3D scene, and export the completed scene.

    Video Overview

    0:00 - Overview of what is covered in the video
    0:38 - Adding the Orient Camera node
    1:24 - The Orient Camera node
    1:49 - Setting the ground height
    2:19 - Rotating the axis
    3:42 - Playing the aligned clip
    3:55 - Setting the scene scale
    5:06 - Using the ground plane
    5:32 - Adding the Scene Export node
    6:40 - Scene Export brief overview
    7:20 - Adjusting the export
    7:50 - Export destination
    8:48 - Exporting your 3D scene
    8:58 - Outro

    Before you begin, download the latest version of PFTrack and the media files linked below.

    Downloads

    Download the assets used in this video

    Links: Head back to PFTrack Tutorials. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros.

  • Tutorial 02 - The basics of tracking & solving a clip

    Learn the basics of how to track and solve an example clip to enhance your VFX skills.

    Video Overview:
    0:00 - What we will be covering in the video
    0:32 - Adding the first node to the tracking tree
    1:13 - Connecting nodes together
    1:55 - Repositioning nodes
    2:15 - The Auto Track
    3:24 - Running the Auto Track
    3:48 - Cinema controls
    5:26 - Adding the Camera Solver
    6:05 - Solving the camera
    6:44 - Viewing the results
    7:23 - An introduction to the 3D viewer
    8:01 - Adjusting the windows
    9:18 - Checking the point cloud
    10:07 - The virtual camera
    10:23 - Verifying the results
    11:13 - Observing the ground plane
    11:38 - Outro

    Before you begin, download the latest version of PFTrack and the media files linked below.

    Downloads: Download the assets used in this video

    Links: Head back to PFTrack Tutorials. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros.

  • Tutorial 01 - Introduction to PFTrack & creating a project

    Learn how to create your first project, import example clips, and get a brief tour of the UI.

    Video Overview:
    0:00 - Brief overview of what we will be covering in this video
    0:12 - Introduction to the Project Manager
    0:31 - Creating a new project in PFTrack
    1:21 - Introduction to the node panel
    1:40 - How to import and export in PFTrack
    2:14 - Adding a node to the tree
    2:45 - Importing a clip into your new project
    3:38 - Using the playback controls in the cinema
    3:53 - How to automatically calibrate a clip
    4:52 - A tour of the PFTrack user interface
    6:40 - Video conclusion

    Before you begin, download the latest version of PFTrack and the media files linked below.

    Downloads: Download the assets used in this video

    Links: Head back to PFTrack Tutorials. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros.

  • Tracking Trees: Basic Presets

    What is it?
    A couple of basic tracking trees designed for simple camera tracking and object-solving tasks. These trees offer an efficient starting point for common tasks, allowing you to work quickly and effectively.

    Downloads:
    Camera Solve tree - consists of an Auto Track, Camera Solver, and Orient Camera
    Object Solve tree - consists of a User Track, Object Solver, and Test Object

    How do I use these trees?
    We have a video covering this and much more about using powerful tracking tree presets.

    Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.

  • Tracking Trees: Solver Presets

    What is it?
    A selection of tracking tree presets showing how to solve your camera using a mix of different nodes, ranging from simple to more complex configurations. These examples aren't the only possible solutions for each task, but they show how you can organise your nodes effectively.

    Tracking Trees Downloads:
    Basic Tracking and Solving
    Using the Survey Solver with Geometry/LIDAR
    Using the Survey Solver with Photogrammetry
    Solving Multiple Cameras
    Camera Solve with Helper Photos

    How do I use these trees?
    Easy! We have a video covering this and much more about using powerful tracking tree presets.

    Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.

  • Export Discreet Ascii Trackers

    What does it do?
    Exports the trackers as 2D points (one tracker per file).

    Downloads:

    Script:

    # Discreet ASCII trackers export
    #
    # Generates one file for each exported tracker
    #

    import pfpy

    def pfExportName():
        return 'Discreet ASCII trackers'

    def pfExportExtension():
        return 'ascii'

    def pfExportSupport():
        return ('trackers')

    def main():
        # set desired coordinate system
        pfpy.setExportCoordinateSystem('left', 'y')

        # check each group
        numGroups = pfpy.getNumGroups()
        for j in range(0, numGroups, 1):
            g = pfpy.getGroupRef(j)

            # export each tracker
            numTrackers = g.getNumTrackers()
            for i in range(0, numTrackers, 1):
                t = g.getTrackerRef(i)
                if t.getExport():
                    # construct a filename for this tracker
                    fname = pfpy.getExportFilename().replace(".ascii", "-" + t.getName() + ".ascii")
                    fobj = open(fname, 'w')

                    # tracked frame range
                    firstFrame = t.getInPoint()
                    lastFrame = t.getOutPoint()

                    # frame number padding
                    padding = 1 + len('%d' % (lastFrame - 1))

                    # export each valid track position
                    # (note: the frame loop uses its own variable so it does not
                    # shadow the group index above)
                    for f in range(firstFrame, lastFrame + 1, 1):
                        if t.validPosition(f):
                            # fetch the tracker position
                            pos = t.getTrackPosition(f)

                            # pad the frame number
                            fn = '%d' % f
                            fn = fn.rjust(padding) + '.0'

                            # x and y coordinates
                            xp = '%+.2f' % pos[0]
                            yp = '%+.2f' % pos[1]

                            # write to file
                            fobj.write(fn + ' : ' + xp + ', ' + yp + '\n')

                    # finished this tracker
                    fobj.close()

    Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.
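    The exporter writes one line per frame in the form `  12.0 : +0.50, -0.25`. As a quick sanity check outside PFTrack, a line in that format can be parsed back into numbers with a short sketch like this (the helper name `parse_discreet_line` is ours, not part of PFTrack or the Discreet format):

    ```python
    def parse_discreet_line(line):
        """Parse one exported line, e.g. '  12.0 : +0.50, -0.25', into (frame, x, y)."""
        frame_part, coords = line.split(':')
        x_str, y_str = coords.split(',')
        return int(float(frame_part)), float(x_str), float(y_str)

    # example line in the same format the exporter writes
    frame, x, y = parse_discreet_line('  12.0 : +0.50, -0.25\n')
    print(frame, x, y)  # 12 0.5 -0.25
    ```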

  • PySide2 Example Script

    What does it do?
    This is an example PySide2 script. It allows you to select any node in your project from the popup window.

    Note: This script will only work with PFTrack Enterprise 24.08.22 and onwards.

    Downloads:

    Script:

    import pfpym
    import PySide2
    from PySide2 import QtWidgets, QtCore
    from PySide2.QtWidgets import QApplication, QDialog, QPushButton, QGridLayout, QVBoxLayout, QLabel
    from PySide2.QtCore import QSignalMapper

    def pfMacroName():
        return 'PySide2 Dialog'

    class Form(QDialog):

        def __init__(self, tree, parent=None):
            super(Form, self).__init__(parent)
            self.setWindowTitle("PySide2 Example")
            self.tree = tree

            vl = QVBoxLayout()
            vl.addWidget(QLabel("Jump to a node by clicking on a button"))

            gl = QGridLayout()
            vl.addLayout(gl)

            mapper = QSignalMapper(self)
            mapper.mapped[str].connect(self.button_clicked)

            # create a button for each node in the tree
            row = 0
            col = 0
            for i in range(self.tree.getNumNodes()):
                name = tree.getNode(i).getName()

                # create the button and store its name with the signal mapper
                b = QPushButton(name)
                b.clicked.connect(mapper.map)
                mapper.setMapping(b, name)

                # add to the grid layout
                gl.addWidget(b, row, col)
                col += 1
                if col == 4:
                    col = 0
                    row += 1

            if row == 0 and col == 0:
                l = QLabel("Error: No nodes in the tree.\nCreate some nodes and try again")
                gl.addWidget(l, 1, 0)

            self.setLayout(vl)

        @QtCore.Slot(str)
        def button_clicked(self, name):
            # find the node in the tree
            for i in range(self.tree.getNumNodes()):
                if self.tree.getNode(i).getName() == name:
                    print("Activating node " + name)
                    n = self.tree.getNode(i)

                    # fetch the node position
                    pos = n.getPosition()

                    # move to the centre of the viewport
                    self.tree.setPosition(pos[0], pos[1])

                    # select the node and activate the editor
                    self.tree.activateEditor(n)

                    # uncomment to close the dialog as soon as the editor is activated
                    #self.accept()
                    break

    def main(tree):
        # QApplication is already built, so just exec the dialog and
        # wait for it to run inside the application's event loop
        form = Form(tree)
        form.exec()

        # cleanup
        form.deleteLater()

    Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.
