
Search Results


  • Precision in a Click: PFTrack Camera Sensor Database

In high-stakes matchmoving, the conflict between speed and sub-pixel accuracy is constant. Manually inputting camera data is a notorious drain on production time: it requires searching disparate sources for specs, cross-referencing active sensor areas and shooting modes, and then hoping the information found online is reliable. The PFTrack Camera Sensor Database fundamentally changes this workflow. It turns repetitive, error-prone technical setup into a professional standard in just a few clicks, letting you dedicate your focus to achieving the highest-quality camera solve, not tedious data entry. This article details why the Database is indispensable for your pipeline, how to leverage its functionality, and the methods for extending it to accommodate proprietary or custom camera assets. Why Accuracy Matters: The Foundation of Your Solve Field of view (FOV) is governed by two core parameters: the camera's sensor size and its focal length. For example, a 50 mm lens on a full-frame (36×24 mm) sensor gives about a 39.6° horizontal FOV; change the sensor to APS-C (24×16 mm), and the same lens yields a narrower FOV of only ≈27° (the "crop factor" effect). These values determine how 2D image data is projected into 3D space, directly influencing perceived scale, perspective, and depth. While PFTrack is highly capable of estimating FOV, and even lens distortion, when explicit sensor or focal length data is unknown, it performs best when supplied with accurate physical measurements. Providing real-world camera metrics ensures the strongest foundation for precise, repeatable results. This level of accuracy becomes especially critical in multi-shot sequences, where consistent focal length interpretation across shots is essential. Without defined physical parameters, slight variations in estimated FOV can lead to inconsistencies in scene scale, misaligned assets, or solve drift across edits.
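The FOV relationship described above is simple trigonometry, and you can verify the figures yourself. A minimal sketch (the function name is ours, not a PFTrack API):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_mm: float) -> float:
    """Pinhole-model horizontal field of view: FOV = 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# 50 mm lens on a full-frame (36 mm wide) sensor
print(round(horizontal_fov_deg(36.0, 50.0), 1))  # 39.6
# The same lens on the APS-C (24 mm wide) sensor from the example
print(round(horizontal_fov_deg(24.0, 50.0), 1))  # 27.0
```

The same formula applied to the sensor height gives the vertical FOV, which is why getting both physical dimensions right matters for the solver.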
Even small discrepancies in sensor dimensions, on the order of tenths of a millimetre, can introduce meaningful errors in FOV calculations, which in turn degrade 3D track quality and overall solve fidelity. The bottom line is that supplying accurate, real-world camera metrics removes ambiguity: the solver has one less unknown to estimate, which yields more stable, production-grade tracking. Manually entering sensor details is error-prone, especially when relying on ambiguous manufacturer specifications, and PFTrack provides the solution: the Camera Sensor Database. The Workflow Advantage: Why Presets Matter Instead of entering the camera sensor size manually, switching to the Camera Sensor Database provides four immediate benefits: Eliminating Ambiguity: Manufacturer specifications can be vague, often listing "total" vs. "effective" pixels or using generalised terminology such as 'Super 35' to approximate size. Preventing Calculation Errors: Calculating the physical size of a sensor from pixel pitch or unverified online information is prone to error, and even a fraction of a millimetre of discrepancy can degrade the quality of a 3D solve. Managing Shoot Modes: Modern cameras frequently use specific shoot modes (cropping or windowing the sensor). Manually calculating these deviations is complex; the database provides accurate presets from major manufacturers for these modes in seconds. Team-Wide Consistency: On large features with multiple artists, the database ensures every user is working with identical, verified camera parameters, preventing drift between different shots. For a deeper look at sensor modes and terminology, we have a camera sensor size guide available here. Using the Fuji X-H2 as an example: switching to 4K 16:9 significantly reduces the active sensor area compared to the full sensor, causing a noticeable change in field of view (FOV) when using the same lens.
Presuming the manufacturer's full sensor specifications apply to all shoot modes can lead to inaccurate tracking results. The Camera Sensor Database addresses this by providing accurate shoot-mode data matched to your clip's resolution. How to Use the Sensor Database The sensor database is integrated into the areas where camera data is defined. You can access it in two primary locations: the Clip Input node and the Camera Lens Distortion Presets panel. It's designed to be a seamless part of your workflow and can be launched using the following button. It is broken down into three sections: Database selection and information. Camera make and model selection. Sensor information and shoot mode selection. Camera Sensor Database in action Quickly select verified sensor presets and ensure physically accurate FOV across your shots in PFTrack. Find Your Camera: Type your manufacturer or camera model into the search field, or browse the list (e.g. "RED Komodo" or "Sony Venice"). Select the 'Shoot Mode': Pick the sensor shoot mode used in the clip. Modes that don't match the clip's resolution will be greyed out. Click 'Use Sensor' to automatically fill in the sensor measurements in both the Clip Input node and the camera lens distortion presets. You can filter the list further by checking for 2x and 4x proxies as well as matching resolution and aspect ratio. Pro Tip: Need more flexibility? The database allows users to switch between The Pixel Farm's internal database and the Matchmove Machines database. This provides unparalleled flexibility: if a specific or niche camera body isn't found in one, a quick toggle often finds it in the other. Expanding the Database: Custom XML Data For proprietary camera rigs, prototypes, or bespoke setups, PFTrack allows you to expand the database with your own metrics. By creating a custom sensor XML file, you can define custom sensor sizes for your projects that can be selected directly within the Camera Sensor Database.
File structure: To simplify the process, PFTrack ships with an example-sensors.xml file, which serves as a template for defining your own entries. You can find the template in the installation folder on your system under /media/example-sensors.xml. The custom XML file you create can be named whatever you want. Where to put your custom file? Once you have created your custom XML file, place it in the directory listed below; once in place, relaunch PFTrack and the updated file will be available via the Camera Sensor Database dropdown. macOS: /Users/USERNAME/Documents/The Pixel Farm/PFTrack/presets/sensors Windows: C:\Users\USERNAME\Documents\The Pixel Farm\PFTrack\presets\sensors Linux: /home/USERNAME/Documents/The Pixel Farm/PFTrack/presets/sensors For a deep dive into the XML schema and specific formatting rules, please refer to our User Defined File Formats Documentation. Pro Tip: Centralised Management for Multi-User Pipelines For studios managing multiple PFTrack seats, maintaining a "single source of truth" for camera data is essential for pipeline integrity. Rather than managing local files on every machine, you can centralise your custom presets: Place your custom XML file in a centralised network location accessible by all workstations. In the PFTrack software settings, navigate to 'Additional file locations' and point the 'Sensor presets' path to your network directory. Once configured, any update made to the central XML file is propagated across the entire facility and available in PFTrack on relaunch. This centralised approach allows Pipeline TDs to deploy updated camera information for a specific feature or production once, ensuring every artist, regardless of their workstation, is working with identical, verified technical specifications. You can also use multiple XML files, each defining the cameras used for a specific production; this can be further customised with a project or company logo.
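If you deploy preset files from a pipeline script, the three per-OS locations above can be resolved programmatically. A minimal sketch, assuming the default Documents location (the helper name and the example username are ours, not part of PFTrack):

```python
import platform
from pathlib import Path

def sensor_preset_dir(username: str) -> Path:
    """Return the documented per-OS location for custom sensor XML files."""
    system = platform.system()
    if system == "Darwin":      # macOS
        base = Path("/Users") / username
    elif system == "Windows":
        base = Path("C:/Users") / username
    else:                       # Linux and others
        base = Path("/home") / username
    return base / "Documents" / "The Pixel Farm" / "PFTrack" / "presets" / "sensors"

print(sensor_preset_dir("artist01"))
```

A deployment tool would copy the custom XML into this directory (or point 'Additional file locations' at a network share, as described above) before the artist relaunches PFTrack.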
Conclusion The Camera Sensor Database isn't just a convenience; it is a precision tool. By moving away from manual entry and unverified data, you safeguard your solves and streamline your pipeline.  Ready to upgrade your tracking? Ensure you are running the latest version of PFTrack to access the most recent camera sensor database updates. Want to add a camera model to our database, or discuss a camera system you're using for tracking with other professional matchmovers? Join our support community here .

  • PFTrack 26.02.11 available for download now.

Build 26.02.11 is now live for Solo, Studio, and Enterprise. This update boosts pipeline reliability, incorporating key refinements driven by our global user community. Key Enhancements: Maya 2026 Export Script: Updated export scripts ensure background image planes are perfectly organized upon import. More information here. Enhanced LiDAR Reliability: Advanced logging now identifies corrupted or incomplete survey files instantly. Robust Asset Support: Consistent loading for OBJ mesh textures with numbered filenames within the Survey Solver. Expanded Node Logic: Optimised connectivity for photogrammetry nodes and secondary Survey Solver inputs (Studio/Enterprise). Refined Navigation: Improved UI responsiveness with resolved shortcut conflicts for the Set-Axis tool. Solo users can download the latest version of PFTrack directly from within the application. Studio users download the software from within the Studio Account. Enterprise customers log in to the PFAccount Portal and download the software from there. Join the Conversation: Help us build the next generation of tracking tools by joining our PFTrack Support Community. We've also introduced a new PFTrack Solo Trial Mode. Whether you are on an older build or brand new to the ecosystem, we encourage you to download the latest version and explore the industry's leading tracking tools firsthand.

  • Maya 2026 export scripts

What does it do? We have added new export scripts for Autodesk Maya that improve compatibility by automatically parenting the camera image plane to the moving camera. Just unzip them into: Documents/The Pixel Farm/PFTrack/exports Contents: mayaASCII-2026-zxy.py mayaASCII-2026.py Once installed, choose "Maya 2026" as the export format. These scripts should work with earlier versions of Maya as well. Downloads: Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.

  • Master PFTrack Solo in 4 Minutes: Fast 3D Camera Tracking & Solving

In this tutorial, I will guide you through the process of creating a project, importing a clip, performing tracking, solving and orienting your camera, and exporting your 3D scene. Link to the footage: The Rock of Dunamase by Jay's Photography Step 01 - Create a Project To begin, click on 'Create New Project'. Enter a project name and set a project path. Once done, click 'Create'. Step 02 - Importing Your Clip Next, double-click the 'Clip Input' node and click the media import button. Browse to your clip and click Open. Click on the Undistort tab and set the mode to Automatic. Step 03 - Track Your Clip The node tree is a unique feature of PFTrack in matchmoving. It sets PFTrack apart as an industry leader. To create a tree quickly, double-click on the nodes you want to use. For this step, click on the 'Tracking' tab and double-click the 'Auto Track' node. With the Auto Track parameters open at the bottom, click 'Start Tracking'. Step 04 - Solve Your Camera Enter the node panel and click on the 'Solving' tab. Double-click the camera solver and then select 'Solve All'. Press play to review the result. Step 05 - Orient the Camera Enter the node panel and click the 'Utilities' tab. Select a point on the ground near the castle door and click 'Set Origin'. Use the 'Edit Mode' and the 'Rotation' controls to align the virtual horizon with the real horizon. Click and drag the scrub bar for a better view. Using the size of the doorway as a reference, scale the ground plane to the correct size. Each ground plane square represents 1m². The approximate width of the doorway is 1m. Controls - Cinema: Pan viewer: 2nd mouse button. Zoom viewer: middle mouse button. Now press play to view the results. Use the maximise option on the viewer to see your 3D scene clearly. Adjust the controls for a better perspective.
Controls - 3D Viewer: Rotate viewer: 1st mouse button. Pan viewer: 2nd mouse button. Zoom viewer: middle mouse button. Step 06 - Exporting Your Scene Finally, add a 'Scene Export' node. Choose a format from the available options, set your export path and file name, and click Export to deliver your 3D scene. If you are a Discovery Mode user, you will not be able to complete this step until you subscribe. Additional Resources For further learning, head back to PFTrack Tutorials. You can also explore our Learning Articles for a deeper understanding of camera tracking and matchmoving concepts. Alternatively, check out our extensive Resources for valuable presets, Python scripts, and macros. In conclusion, mastering 3D camera tracking with PFTrack opens up new possibilities in visual effects and forensic analysis. With practice, you will enhance your skills and create stunning visualizations.

  • PFTrack Integrates Matchmove Machines CamDB via New Sensor Preset System

    Building on the internal Camera Library introduced in our last major release, we are excited to introduce a powerful new capability: we are opening up our Sensor Preset Database to third parties. This feature extends our existing library architecture, allowing third-party experts to integrate their own precision sensor data directly into PFTrack. To launch this, we are proud to feature Matchmove Machines CamDB  as the first available integration. Why add Matchmove Machines to your library? Enhanced Precision: Sensor sizes are meticulously recalculated from original manufacturer data to eliminate rounding errors. Workflow Efficiency: Instantly access a vast, verified catalog of modern camera bodies and modes without leaving PFTrack. Seamless Extension: Instantly expands your native PFTrack library with specialized third-party data. Upgrade your sensor accuracy today. Learn more about Matchmove Machines CamDB Happy Tracking, The PFTrack Team

  • Postshot export script

#postshot #exportscripts What does it do? This is an experimental export script for exporting cameras and points from PFTrack 24.12.19 and later to Jawset Postshot for Gaussian Splat training: https://www.jawset.com/ Before using the script, please review the usage guidelines below to get the best results. Download: Shot Setup The script can export movie or photogrammetry cameras, along with tracking points and point clouds. Download and unzip the file into your Documents/The Pixel Farm/PFTrack/exports folder and relaunch PFTrack. This will create a new export format in the Scene Export node called "Jawset Postshot (.json)". When setting up your cameras in PFTrack, it is important to ensure you are correcting for lens distortion. Gaussian Splatting is initialised from your point dataset and trains a radiance field to match your image data, so the results you get will depend strongly on the quality of your input cameras and points. A sparse set of points may not initialise the training as well as a dense point cloud, and datasets with low parallax or coverage may not give the best results when viewed from angles other than your original cameras, so please refer to the Postshot user guide for capturing guidelines: https://www.jawset.com/docs/d/Postshot+User+Guide Point density You should make sure your shot has enough tracking points to initialise the Postshot training. If you've just used a few User Track points or a small number of Auto Track points, you will probably get better results by adding some more to your shot. You can do this easily by placing an empty Auto Track node upstream from your Camera Solver before solving. Then, after you've solved your camera and are happy with the result, go back to your Auto Track node and generate more tracking points, increasing the Target Number up to 500 or more.
In your camera solver, select all your tracking points and click the Solve Trackers button to solve for their 3D positions whilst keeping the camera fixed. Alternatively, you can use the Select Frames node to decimate your movie clip into a set of photos, then use the Photo Cloud node to create a dense point cloud, and attach both the camera and dense point cloud to the export node. The Select Frames node can also be used before exporting your movie camera to reduce the number of frames being loaded into Postshot if you have a very long image sequence. Exporting from PFTrack Select the "Jawset Postshot (.json)" export format and make sure to enable the "Undistorted clip" Distortion export option, setting the image format to either JPEG, TIFF or OpenEXR with suitable frame number padding. We recommend using TIFF or OpenEXR, as this will ensure invalid pixels around the boundary of your undistorted images are written with a zero in the alpha channel and ignored by Postshot. Alternatively, make sure your undistorted images are cropped to the original image size during the solve to reduce empty pixels as much as possible. After exporting, you will find a .json file and a .ply file in the export folder containing your camera and point data respectively, along with your undistorted images in the clips folder. Importing into Postshot You can drag-and-drop the entire export folder directly into Postshot, but it is important to ensure no other files are present in the folder. macOS users in particular should remove all .DS_Store files that are created when opening the folder in Finder, as they will prevent the dataset from loading and give an "invalid string position" error message. In Postshot, make sure to enable the "Treat Zero Alpha as Mask" option to ensure the boundary pixels in the undistorted images are ignored during training. Please refer to the Postshot user guide for all other settings. Links: Head back to PFTrack Resources.
Or check out our Learning Articles  for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials  for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.

  • Lock Object Motion script

What does it do? This script is for PFTrack 24.12.19 and later, and can be used to transfer the motion from a moving object geometry track to a camera, keeping the object locked in position in the first frame. To use the script, download and unzip the file into your Documents/The Pixel Farm/PFTrack/nodes folder and relaunch PFTrack. This will create a new node called Lock Object Motion in the Python node category. Download: Script:

    #
    # PFTrack python script lockObjectMotion.py
    #
    # Takes object motion from a geometry track and converts to camera
    # motion, keeping the object locked in position in the first frame
    #

    import math
    import pfpy

    def pfNodeName():
        return 'Lock object motion'

    def quaternionInverse(q):
        return (-q[0], -q[1], -q[2], q[3])

    def quaternionNormalize(q):
        n = 1.0 / math.sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3])
        return (n*q[0], n*q[1], n*q[2], n*q[3])

    def quaternionMult(a, b):
        return (a[3]*b[0] + a[0]*b[3] + a[1]*b[2] - a[2]*b[1],
                a[3]*b[1] - a[0]*b[2] + a[1]*b[3] + a[2]*b[0],
                a[3]*b[2] + a[0]*b[1] - a[1]*b[0] + a[2]*b[3],
                a[3]*b[3] - a[0]*b[0] - a[1]*b[1] - a[2]*b[2])

    def quaternionToMatrix(q):
        return (1.0-2.0*(q[1]*q[1]+q[2]*q[2]), 2.0*(q[0]*q[1]-q[3]*q[2]), 2.0*(q[0]*q[2]+q[3]*q[1]),
                2.0*(q[0]*q[1]+q[3]*q[2]), 1.0-2.0*(q[0]*q[0]+q[2]*q[2]), 2.0*(q[1]*q[2]-q[3]*q[0]),
                2.0*(q[0]*q[2]-q[3]*q[1]), 2.0*(q[1]*q[2]+q[3]*q[0]), 1.0-2.0*(q[0]*q[0]+q[1]*q[1]))

    def matrixMult4x4(a, b):
        return (a[0]*b[0]+a[1]*b[4]+a[2]*b[8]+a[3]*b[12],
                a[0]*b[1]+a[1]*b[5]+a[2]*b[9]+a[3]*b[13],
                a[0]*b[2]+a[1]*b[6]+a[2]*b[10]+a[3]*b[14],
                a[0]*b[3]+a[1]*b[7]+a[2]*b[11]+a[3]*b[15],
                a[4]*b[0]+a[5]*b[4]+a[6]*b[8]+a[7]*b[12],
                a[4]*b[1]+a[5]*b[5]+a[6]*b[9]+a[7]*b[13],
                a[4]*b[2]+a[5]*b[6]+a[6]*b[10]+a[7]*b[14],
                a[4]*b[3]+a[5]*b[7]+a[6]*b[11]+a[7]*b[15],
                a[8]*b[0]+a[9]*b[4]+a[10]*b[8]+a[11]*b[12],
                a[8]*b[1]+a[9]*b[5]+a[10]*b[9]+a[11]*b[13],
                a[8]*b[2]+a[9]*b[6]+a[10]*b[10]+a[11]*b[14],
                a[8]*b[3]+a[9]*b[7]+a[10]*b[11]+a[11]*b[15],
                a[12]*b[0]+a[13]*b[4]+a[14]*b[8]+a[15]*b[12],
                a[12]*b[1]+a[13]*b[5]+a[14]*b[9]+a[15]*b[13],
                a[12]*b[2]+a[13]*b[6]+a[14]*b[10]+a[15]*b[14],
                a[12]*b[3]+a[13]*b[7]+a[14]*b[11]+a[15]*b[15])

    def vectorMult4x4(m, v):
        n = 1.0 / (v[0]*m[12] + v[1]*m[13] + v[2]*m[14] + m[15])
        return ((v[0]*m[0] + v[1]*m[1] + v[2]*m[2] + m[3]) * n,
                (v[0]*m[4] + v[1]*m[5] + v[2]*m[6] + m[7]) * n,
                (v[0]*m[8] + v[1]*m[9] + v[2]*m[10] + m[11]) * n)

    def buildTransformationMatrix(t, q):
        T = (1.0, 0.0, 0.0, t[0],
             0.0, 1.0, 0.0, t[1],
             0.0, 0.0, 1.0, t[2],
             0.0, 0.0, 0.0, 1.0)
        r = quaternionToMatrix(quaternionInverse(q))
        R = (r[0], r[1], r[2], 0.0,
             r[3], r[4], r[5], 0.0,
             r[6], r[7], r[8], 0.0,
             0.0, 0.0, 0.0, 1.0)
        return matrixMult4x4(T, R)

    def buildInverseTransformationMatrix(t, q):
        T = (1.0, 0.0, 0.0, -t[0],
             0.0, 1.0, 0.0, -t[1],
             0.0, 0.0, 1.0, -t[2],
             0.0, 0.0, 0.0, 1.0)
        r = quaternionToMatrix(q)
        R = (r[0], r[1], r[2], 0.0,
             r[3], r[4], r[5], 0.0,
             r[6], r[7], r[8], 0.0,
             0.0, 0.0, 0.0, 1.0)
        return matrixMult4x4(R, T)

    def main():
        if pfpy.getNumCameras() > 0 and pfpy.getNumMeshes() > 0:
            # fetch the first camera and mesh
            cam0 = pfpy.getCameraRef(0)
            mesh0 = pfpy.getMeshRef(0)
            inp = cam0.getInPoint()
            outp = cam0.getOutPoint()
            # take copies to read from safely
            c = cam0.copy()
            m = mesh0.copy()
            # keep the camera in position in the first frame, but transfer the
            # relative object motion in other frames to the camera
            objT0 = m.getTranslation(inp)
            objQ0 = m.getQuaternionRotation(inp)
            objM0 = buildTransformationMatrix(objT0, objQ0)
            f = inp + 1
            while f <= outp:
                # object pose in this frame
                objT = m.getTranslation(f)
                objQ = m.getQuaternionRotation(f)
                objiM = buildInverseTransformationMatrix(objT, objQ)
                # map the relative camera position back to the object in the first frame
                t = vectorMult4x4(objM0, vectorMult4x4(objiM, c.getTranslation(f)))
                # and likewise for the camera rotation
                q = quaternionNormalize(quaternionMult(c.getQuaternionRotation(f),
                                                       quaternionMult(quaternionInverse(objQ), objQ0)))
                # position the camera
                cam0.setTranslation(f, t)
                cam0.setQuaternionRotation(f, q)
                # object is no longer moving
                mesh0.setTranslation(f, objT0)
                mesh0.setQuaternionRotation(f, objQ0)
                print('Positioned camera in frame %d' % f)
                f += 1
            # cleanup
            c.freeCopy()
            m.freeCopy()

Links: Head back to PFTrack Resources. Or check out our Learning Articles for a deeper look at camera tracking and matchmoving concepts. Or visit our PFTrack Tutorials for step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack.

  • Sensor Size: A Practical Guide for Camera Tracking

Navigate the learning article What is Sensor Size? How do I find out the size of a camera sensor? Why is 'Sensor Size' important for camera tracking? Sensor Size Considerations Loose terminology Full Area Vs Active Area Windowed Sensor Mode Scaled Sensor Mode Metadata and sensor size A multiformat sensor? Full-frame equivalent Intro What exactly is sensor size, and why does it matter for VFX, particularly when calculating the field of view for camera tracking? This post explores the intricacies of sensor size. We'll demystify key terminology, including "full-frame equivalent," "windowed," and "cropped" sensors. We'll also examine how metadata and other resources can help you out of a tight spot if you don't have the information to hand. By understanding and applying these concepts, you can gain greater control over your camera tracking projects. What is Sensor Size? Perhaps a good place to start is defining what we mean by sensor size. Sensor size refers to the physical dimensions of a camera's imaging sensor: the part that turns incoming light into digital images. Its height and width are usually measured in millimetres (mm), and the sensor is where the lens projects an image to be captured and converted into a digital signal. Sensor size affects how much of the lens's image is recorded, influencing the field of view, depth of field, and overall image quality. How do I find out the size of a camera sensor? While a quick Google search will provide the necessary information for most professional cine and mirrorless camera systems, it is worth delving deeper into the manufacturer's website to find the exact size. However, remember the considerations discussed later in this article when finding out the sensor's 'actual' size. If a simple search doesn't yield results, the following links lead to excellent websites that cover the essential details you may need.
Matchmove Machine matchmovemachine.com is run by a knowledgeable team experienced in all aspects of camera tracking and matchmoving. Their standout resource is a comprehensive camera sensor database, covering a wide variety of cinema, consumer, and drone cameras with detailed specs essential for accurate tracking work. The site also offers helpful guides and insights, making it a useful reference point for anyone involved in matchmoving. https://camdb.matchmovemachine.com VFX camera database  The VFX Camera Database is a valuable resource that offers an extensive, mostly up-to-date collection of professional and prosumer cameras. What sets this site apart is its inclusion of detailed measurements, not only for the full active sensor area but also for windowed sizes in different recording modes.  https://vfxcamdb.com/ DXOMARK   This website is a database primarily focused on testing camera sensor performance. However, under the specifications tab, it also provides key details like the actual sensor size and other useful data for matchmoving, such as rolling shutter performance. This resource is especially helpful for tracking phone footage, as it includes information on most of the latest phone cameras, including sensor sizes and field-of-view equivalence.  https://www.dxomark.com/Cameras/ CINED CINED is a production-focused website offering in-depth testing and reviews of the latest camera gear. While it primarily focuses on sensor performance, much like DXOMARK, it also provides valuable details on sensor size and windowing for various popular camera systems, including some phones and mirrorless cameras—making it a useful resource for camera tracking tasks. https://www.cined.com/camera-database/ Why is ‘Sensor Size’ important for camera tracking? Camera tracking applications like PFTrack require the camera's field of view (FoV) to accurately track and solve a shot. The FoV is determined by both the lens's focal length and the sensor size. 
However, precise knowledge of the sensor size remains crucial for ensuring the virtual camera accurately replicates the real-world camera's perspective and movement. Without this, discrepancies in scale, position, and motion can misalign digital assets, disrupting the realism of the final shot. Sensor Size Considerations Loose terminology for the sizing of an imaging sensor You've probably come across terms like Super35, Full Frame, and large format to describe the size of an imaging sensor in cine or still cameras. While it might seem straightforward to search "What is the size of a Super35 sensor?" and rely on the results, the information can often be inconsistent due to manufacturers' generalised and imprecise terminology. To illustrate, suppose we have a camera with a 24mm focal length, and the camera in question is a Sony PMW-F3, which uses a "Super35" sensor. If we rely solely on Google's definition of Super35 based on the traditional film format, we might calculate the following horizontal and vertical fields of view: Super35 format size: 24.89 mm x 18.66 mm Horizontal FoV: 54.82° Vertical FoV: 42.49° Delving deeper into the manufacturer's sensor specifications reveals that its size is not identical to a true Super35mm sensor; instead, it has been rounded up and features a different aspect ratio. This discrepancy impacts calculations, resulting in a field of view that is 2.46° narrower horizontally and 11.52° narrower vertically than anticipated. Actual sensor size: 23.6 mm x 13.3 mm Horizontal FoV: 52.36° Vertical FoV: 30.97° While this may not seem significant, even small sensor size discrepancies can impact your solve's overall accuracy. This is especially true for smaller sensors, such as those used in drones and cameras built into phones, where precise field of view (FoV) calculations are critical.
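All four figures come straight from the pinhole relationship FoV = 2·atan(d / 2f), applied independently to the sensor width and height. A quick check (the function name is ours):

```python
import math

def fov_deg(sensor_dim_mm: float, focal_mm: float) -> float:
    """Field of view along one sensor axis for a pinhole camera."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# Assumed Super35 film dimensions vs the PMW-F3's actual active area, 24 mm lens
print(round(fov_deg(24.89, 24.0), 2))  # horizontal, assumed: 54.82
print(round(fov_deg(23.6, 24.0), 2))   # horizontal, actual:  52.36
print(round(fov_deg(18.66, 24.0), 2))  # vertical, assumed:   42.49
print(round(fov_deg(13.3, 24.0), 2))   # vertical, actual:    30.97
```

Note how the vertical error dwarfs the horizontal one: the assumed and actual sensors have different aspect ratios, so even a "close enough" width can hide a badly wrong height.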
The term "Large Format" adds further confusion, as it has come to refer to any sensor larger than 36x24mm without clearly defined upper limits, complicating efforts to strictly define sensor size for camera tracking. The situation becomes even more complex when considering sensor crop and various windowed shoot modes.  Windowed, Scaled, and Crop Modes: Effects on FoV If you have looked up the size of the image sensor in the camera that shot your clip and entered the information, and things just don’t seem to be making sense or working, it might be because your camera is shooting in a mode that affects the FoV of your image.  Full Area Vs Active Area The difference between the active imaging area and the full sensor area lies in how much of the sensor's surface is actually used for capturing an image versus the total physical size of the sensor itself. Full Sensor Area:  This refers to the total physical dimensions of the sensor, including all of its pixels and regions, whether they are used for capturing an image or not. The full sensor area accounts for every part of the sensor's surface, including pixels reserved for other functions (such as calibration or stabilisation) or areas that may be masked out during image capture. Active Imaging Area:  This is the portion of the sensor that is actively used to capture an image. It defines the region where incoming light is collected and converted into a digital image. Due to manufacturer-specific design choices, cropping, or masking, the active imaging area can be smaller than the full sensor area. This distinction is important in applications like camera tracking, as it directly affects how the image is projected onto the sensor and impacts field-of-view calculations. When entering information about the sensor in your camera, it is always important to use the ‘Active Imaging Area’ over the ‘Full Sensor Area’ where possible for best accuracy. 
Windowed Sensor Mode Sensor windowing occurs when only a portion of the imaging sensor is used to capture an image, effectively "cropping" the sensor's active area. This is common in cine cameras when recording RAW and selecting a resolution or format other than the sensor's native resolution. Instead of resampling the full sensor, the camera activates a smaller portion of it, which alters the field of view. Similarly, sensor windowing is often used to achieve very high frame rates, as processing data from a smaller sensor area reduces the hardware's workload. For instance, the RED MONSTRO 8K sensor, measuring 40.96 x 21.6 mm, utilises its full area when shooting at its maximum resolution of 8K (8192 x 4320). However, the camera applies sensor windowing to achieve lower resolutions, using only a portion of the sensor's area. RED MONSTRO windowed shooting modes: 6K shooting mode (6144 x 3240), area is 30.72 x 16.20 mm 5K shooting mode (5120 x 2700), area is 25.6 x 13.5 mm 4K shooting mode (4096 x 2160), area is 20.48 x 10.80 mm Sensor windowing may also be necessary when using lenses with smaller imaging circles, such as a Super35 (31.1mm) optic on a large format sensor. When using optics designed for the Super 35 format, the RED camera can use the 5K Super 35 windowed mode. Scaled Sensor Mode Sensor scaling is a simpler concept compared to sensor windowing. It involves resampling the image captured by the entire sensor area to a lower resolution while preserving the full active sensor area on one or more axes. Full Area Resampling Full-area resampling takes the image captured by the entire sensor and downsamples it to a lower resolution without altering the active area or the FoV. For example, a sensor with a native resolution of 3840x2160 might be resampled to 1920x1080, maintaining the full sensor area while reducing the pixel count.
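The distinction is easy to compute. In a windowed mode the pixel pitch is fixed, so the active area shrinks in proportion to the pixel counts used, whereas full-area resampling changes only the pixel count and leaves the physical area (and FoV) untouched. A sketch reproducing the RED MONSTRO figures above (the function name is ours):

```python
def windowed_area(full_w_mm, full_h_mm, full_res, win_res):
    """Active area of a windowed shoot mode: pixel pitch is constant, so the
    physical area shrinks in proportion to the pixel counts actually used."""
    fw, fh = full_res
    ww, wh = win_res
    return full_w_mm * ww / fw, full_h_mm * wh / fh

# RED MONSTRO: full 8K area is 40.96 x 21.60 mm at 8192 x 4320
for mode in ((6144, 3240), (5120, 2700), (4096, 2160)):
    w, h = windowed_area(40.96, 21.6, (8192, 4320), mode)
    print(mode, round(w, 2), round(h, 2))
# (6144, 3240) 30.72 16.2
# (5120, 2700) 25.6 13.5
# (4096, 2160) 20.48 10.8
```

Running the same calculation for a full-area resample would be pointless: both dimensions stay at the full values because every pixel of the sensor still contributes to the image.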
Despite resampling from UHD to HD, the sensor's full FoV is maintained

Aspect

Sensor scaling can also account for changes in aspect ratio. For instance, a sensor with a native aspect ratio of 1.78:1 (16:9) may crop or scale the image at the top and bottom to produce a 1.85:1 aspect ratio while maintaining the full sensor width and horizontal FoV. You can enter the sensor's width, and PFTrack will calculate the height automatically. Changing aspect scale can also happen vertically; for example, if you have a native 1.85:1 sensor, the scaling may crop the sides to reach a 1.78:1 aspect ratio (see anamorphic).

Camera with a native 1.78:1 aspect sensor with a 1.85:1 ‘scaled’ sensor mode active

Anamorphic

Anamorphic scaling preserves the full sensor height and vertical FoV, while the sides are scaled/cropped to achieve the desired anamorphic recording ratio, such as 1.33:1. You can enter the sensor's height, and PFTrack will calculate the width automatically.

RED VV Raptor shooting in its “Scaled” 8K 6:5 anamorphic 2X mode

It's important to note that resampling a 3840x2160 resolution sensor using the full sensor area to 1920x1080 is not the same as using a 1920x1080 windowed shoot mode, where only a portion of the sensor's full area is utilised. The two methods will result in very different fields of view (FoV).

Camera Sensor Database

PFTrack 25.11.13 introduces a smart new camera sensor database, adding support for both online and locally-hosted options. No more hunting through specs or scouring the web: finding the right sensor size for your camera is now quick and effortless, letting you focus on the creative side of tracking. The database is built to evolve, with a growing collection of camera presets and the ability to build your own custom lists of frequently used cameras. This means your most-used setups are always at your fingertips, and new cameras can be added as your toolkit expands.
By centralising sensor information and putting it directly in the app, PFTrack removes a common source of friction, helping you set up shots faster and keep your workflow running smoothly.

Metadata

A key advantage of using an application like PFTrack is its ability to read metadata from formats like DPX and EXR and many camera RAW files. But why is this important for determining a camera's sensor size? Metadata often contains critical details, including the camera model and shooting mode, which can help quickly and accurately identify the sensor size from the data or use it to select an appropriate sensor preset.

A multiformat sensor?

Don’t worry, this sounds more complicated than it is. The term refers to using a slightly larger sensor than standard, allowing the camera to “window” the sensor to achieve various aspect ratios, formats, and frame rates directly in-camera rather than capturing the full sensor area and cropping it later.

ARRI Alexa 35 cinema camera, which has an Open Gate sensor size of 27.99 x 19.22 mm

A prime example is the ARRI Alexa 35, which utilises its full active sensor area of 27.99mm x 19.22mm in Open Gate mode and dynamically windows/scales the sensor to accommodate common standards at the correct measurements. For instance, the 1.78:1 mode uses a 24.88mm x 14.00mm area, while the anamorphic 6:5 mode employs a 20.22mm x 16.95mm area. This adaptability ensures the sensor can handle diverse applications, leveraging its entire surface or specific regions to deliver the desired field of view and resolution for each format using the correct imaging circle.

Full-frame equivalent or the actual sensor size?

If you've searched everywhere but can’t find information about your sensor size, you might still have a "full-frame equivalent" focal length to work with.
The term "full-frame equivalent" refers to the focal length of a lens on a camera with a sensor size other than full-frame (36mm x 24mm) that produces a similar field of view to a lens on a full-frame camera. Essentially, it allows for comparing how a lens on a smaller sensor camera would behave if mounted on a full-frame camera. Manufacturers often use full-frame equivalence to simplify marketing, particularly in systems with integrated optics, such as handheld gimbals and drones. However, this approach can obscure the true sensor size. So, can you rely on full-frame equivalence instead? The answer is both yes and no.

For example, the DJI Osmo Pocket 3 does not readily disclose its sensor size, but it states that the combination of its sensor and optics produces a full-frame equivalent field of view to a 20mm lens. Using this information, you could input the horizontal size of a full-frame sensor (36mm) and a 20mm focal length to estimate the field of view. However, this assumes the 20mm equivalence is precise. Manufacturers often round up or down to the nearest common photographic focal length for simplicity. While such minor differences are negligible for everyday filming or photography, they can lead to inaccuracies in precision workflows like camera tracking. Full-frame equivalence can provide a starting point if no other data is available. However, be cautious, as it might not deliver the accuracy required for tasks like camera tracking.

Wrap Up

In conclusion, we hope this post has clarified some of the challenges in identifying the correct sensor size for your camera while providing a foundation in key concepts and terminology. Whether working with high-end cine cameras or smaller devices like drones, understanding and applying sensor size information is essential for accurate tracking. Leverage resources like the VFX Camera Database and DXOMARK to quickly access precise sensor specifications for your projects.
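Returning to the Osmo Pocket 3 example above: when only a full-frame-equivalent focal length is known, you can pair it with full-frame sensor dimensions to estimate the FoV. A hedged sketch (the 20 mm figure is the manufacturer's rounded equivalence, so treat the result as approximate):

```python
import math

def fov_from_ff_equivalent(equiv_focal_mm: float, ff_width_mm: float = 36.0) -> float:
    """Approximate horizontal FoV (degrees) from a full-frame-equivalent focal length."""
    return math.degrees(2.0 * math.atan(ff_width_mm / (2.0 * equiv_focal_mm)))

# DJI Osmo Pocket 3: quoted as a 20 mm full-frame equivalent
print(round(fov_from_ff_equivalent(20.0), 1))  # ~84.0 degrees
```

Because manufacturers round equivalence values to common photographic focal lengths, even a 1 mm error in the quoted figure shifts the result by a couple of degrees, which is exactly the kind of inaccuracy that can surface in a camera solve.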
Finally, remember that PFTrack offers powerful tools for calibrating your camera body, and the Auto camera model can be a reliable fallback when all else fails. Armed with this knowledge and the right tools, you'll be well-equipped to tackle your next camera tracking challenge with confidence.

Links: Head back to Learning Articles. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros. You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.

  • The How and Why of Feature Tracking in PFTrack

What’s the difference between automatic and manual tracking? Which is better? When should I use one instead of the other? And how do the differences affect the camera solver? In this article we’ll take a look at some of the more technical details of how trackers are used in PFTrack, and suggest some ways of getting the most out of PFTrack’s advanced feature tracking tools.

What is a tracker?

A tracker defines the location of a single point in 3D space, as viewed by a camera in multiple frames. In PFTrack, trackers are generally created using two nodes: Auto Track and User Track. The Auto Track node is able to generate a large number of estimated trackers automatically, and the User Track node provides manual tracking tools for precise control over exactly where each tracker is placed in each frame. Trackers form the backbone of any camera solve, and they are used to work out how the camera is moving along with its focal length and lens distortion if they are unknown. But how many trackers do you need, and what is the best way of generating them?

How are trackers used to solve the camera?

When solving for the camera motion in a fixed scene under normal circumstances, PFTrack needs a minimum of 6 trackers to estimate the motion from one frame to the next. This is the bare minimum, however, and we generally recommend using at least 8 or 10, especially if you’re not sure of the focal length, sensor size, or lens distortion of your camera. Using a few more than the minimum can also help smooth out transitions in the camera path from one frame to the next, where one tracker might vanish and another one appears in the next frame. Trackers should be placed at points that are static in the real world (i.e. do not move in 3D space), such as the corner of a window frame or a distinguishable mark in an area of brickwork. This allows the 3D coordinates of the point to be estimated, which in turn helps to locate where the camera is in each frame.
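The definition above (a single 3D point observed as 2D positions across multiple frames) maps naturally onto a small data structure. A minimal illustrative sketch, not PFTrack's actual internals; all names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Tracker:
    """One tracked point: per-frame 2D observations plus one solved 3D position."""
    name: str
    positions_2d: dict = field(default_factory=dict)  # frame number -> (x, y) in pixels
    position_3d: Optional[tuple] = None               # (x, y, z), estimated by the solver
    hard_constraint: bool = False                     # manually placed trackers default to hard

t = Tracker("window_corner")
t.positions_2d[1] = (512.3, 288.7)  # where the point appears in frame 1
t.positions_2d[2] = (514.1, 289.2)  # ...and in frame 2
print(len(t.positions_2d))  # 2 frames tracked so far; position_3d stays None until solved
```

The key property the solver relies on is that every 2D observation of one tracker corresponds to the same static 3D point.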
To help with estimating camera motion, trackers also need to be placed in both the foreground and background of your shot, especially when trying to estimate focal length, as this provides essential parallax information to help the solve. It’s also important to have trackers placed in as many parts of the frame as possible, rather than just bunching them together in a single area. Think of your camera’s grid display as dividing your frame into a 3x3 grid of boxes - try to have at least one tracker in each box in every frame, and you’ll have good overall coverage.

Not every tracker is equal

We’ll get into the details of how to generate trackers shortly, but before we do it’s important to understand that not every tracker is considered equally when solving the camera. The most significant distinction is whether a tracker is defined as being a soft or hard constraint on the camera motion. Hard constraints mean the placement of the tracker in every frame is assumed to be exact. If you’ve generated trackers manually using a User Track then these will be set as hard constraints by default. The solver will try to adjust the camera position and orientation to make the tracker’s 3D position line up with its 2D position exactly in every frame when viewed through the camera lens. On the other hand, trackers that are generated automatically with the Auto Track node are marked as soft constraints and don’t have to be placed exactly in every frame. The camera solver is able to recognise that some errors in the 2D positions exist and ignore them. These are often referred to as “outliers” and might correspond to a temporary jump in the tracker position for a couple of frames or the subtle motion of a background tree in the wind, resulting in the 3D location of the tracking point changing from frame to frame. So now that we’ve explained some of the details about how the camera solver uses trackers, what is the best way of generating them?

Auto-Track? User-Track? Or both?
Ultimately, the answer to this comes down to experience with the type of shot you’re tracking, how much time you have to spend on it, and the final level of accuracy you need to complete your composite. To get started, here are some guidelines that should help you quickly get the most out of PFTrack’s tools.

Automatic feature tracking

If you have all the time in the world to track your shot, then of course, manually placing each tracker in every frame is the way to go, as this ensures each one is placed exactly where it should be. Alternatively, automatic feature tracking is a way of generating a large number of trackers very quickly, but because the tracking algorithm is attempting to quickly analyse the image data and work out the best locations to place them, not every tracker is going to be perfect. The Auto Track node picks out a large number of "interesting" points and corners in each image, and tracks those points bi-directionally between each pair of frames (i.e. from frame 1 to 2 and then from 2 back to 1). It compensates for any differences in exposure or lighting whilst doing this, and also tries to ensure that jumps and inconsistencies in the motion of each point between frames are avoided wherever possible. After all the points are tracked, it filters them down to select around 40 trackers in each frame (using the default settings). The trackers are chosen in a way that tries to distribute them evenly over the image area whilst also ensuring tracks with the longest length are used wherever possible, so each tracker is visible in many frames of the clip to help out the camera solver. However, these trackers may end up being placed on objects that are moving independently from the camera, or at other locations that cannot be resolved to a single point in 3D space.
For example, so-called “false corners” that result from the intersection of two lines at different distances from the camera can often be indistinguishable from real corners when looking at a single image. Whilst the camera solver will ignore these outliers to a certain extent, having too many trackers falling into these categories can adversely affect the solve, so how should you deal with them?

Identifying errors

Whilst PFTrack will attempt to detect when tracking fails, not every glitch can be easily detected, especially when your shot contains motion blur or fast camera movement. It’s always worth reviewing automatic tracking results to check whether there are any obvious errors. For example, the motion graphs in the Auto Track node can be used to quickly identify trackers that are moving differently from the others.

Trackers can be selected for adjustment or deletion

The “Centre View” tool can also be used to lock the viewport onto a single tracker. Scrubbing forwards and backwards through the shot will often expose motion that is subtly different from the background scene, which may indicate a false corner or other gradual object movement.

Adjusting trackers

So now you’ve identified some trackers that need some attention. What’s next? If you just need to make a few quick adjustments, such as adjusting tracker visibility or re-positioning it in a couple of frames, the Auto Track node provides some Tracker Adjustment tools directly in the Cinema window you can use to get the job done:

Tracker adjustment tools

You can use these tools to make any minor adjustments to your tracking points before passing them downstream to the camera solver. If you want finer-grain control over your adjustments you can also use the Fetch tool in the User Track node to convert an automatic tracker into a manual one, and all the tools of the User Track node are available to you to adjust the tracker as needed.

To adjust or disable?
You can manually correct every single one of your automatic trackers if you wish, but as we mentioned earlier, the Auto Track node generates many more trackers than are actually needed to solve the camera motion. This means you may well be spending a lot of time unnecessarily correcting trackers if you have a particularly tricky shot. It can often be just as effective to quickly disable the bad trackers, especially if time is short. This is certainly the case if you’ve only got a few outliers, and also have other trackers nearby that don't need fixing. You could also use the masking tools in PFTrack to mask out any moving objects before automatic tracking, although it’s important to weigh the time it will take you to draw the mask against the time it takes to identify and disable a handful of trackers afterwards. Remember that trackers should be distributed over as much of the frame as possible, and we recommend a minimum of around 10 in each frame, so keep this in mind when disabling. If you end up having to disable a lot and are approaching single-figures, then maybe a different strategy is going to be necessary: supervised tracking.

Supervised feature tracking

Ultimately, a lot of shots will need some level of manual, or 'supervised', tracking using the User Track node. This is especially important if you’re tracking an action shot with actors temporarily obscuring the background scene. One limitation of automatic feature tracking is that it can’t connect features from widely different parts of the shot together if something is blocking their view or the feature point moves out of frame for a significant length of time. In these cases, human intervention is often necessary, and this is where the User Track node comes into play, allowing you to create trackers from scratch to perform specific tasks. For example, you may have a shot where the camera pans away from an important area for a few seconds and then pans back.
Or an actor may walk in front of an important point before moving out of frame. In these cases, you want to make sure the 3D coordinates of points at the beginning are the same as at the end. Creating a single tracker and manually tracking over frames where it is visible (whilst hiding the tracker in frames where it is not visible) will achieve this goal. The same guidelines apply when creating tracking points manually - try to distribute them over your entire frame, and make sure that you’ve got a good number of trackers in each frame. Also, try not to have many trackers stop or start on the same frame (especially when they are treated as hard constraints), as this can sometimes cause jumps in your camera path during the solve that will require smoothing out. If you do, adding a couple of “bridging” trackers elsewhere in the image that are well tracked before and after the frame in question can often help out.

Wrap Up

Hopefully, this article has shed some light on things to consider when tracking your points. In the end, this all comes down to experience, and as you track more shots, you’ll get a better feel for when to use specific tools, and whether to start with supervised tracking straight away or give the Auto Track node a go first of all. If you are using automatic tracking, you can easily place an empty User Track node between the Auto Track and Camera Solver to hold any user tracks that you may want to create manually as you solve your camera. Also, don’t worry about getting every tracker perfect before you first attempt a camera solve. It’s often possible to try auto tracking first and see where that gets you, then consider how to address any problems and add a few user tracks to help the solver out. PFTrack lets you adjust and change your trackers however you want.
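The coverage guideline from earlier in this article (divide the frame into a 3x3 grid and aim for at least one tracker in each box) is easy to automate when reviewing a solve. A small sketch, assuming tracker positions are given in pixels:

```python
def grid_coverage(positions, frame_w, frame_h, rows=3, cols=3):
    """Return the set of (row, col) cells of a rows x cols grid containing at least one tracker."""
    covered = set()
    for x, y in positions:
        # Clamp to the last cell so points on the right/bottom edge stay in range.
        col = min(int(x / frame_w * cols), cols - 1)
        row = min(int(y / frame_h * rows), rows - 1)
        covered.add((row, col))
    return covered

# Four trackers bunched in the top-left of an HD frame cover only one cell of nine:
pts = [(100, 80), (150, 90), (120, 200), (60, 50)]
cells = grid_coverage(pts, 1920, 1080)
print(len(cells), "of 9 cells covered")  # 1 of 9 cells covered
```

A frame where several cells come back empty is a good candidate for a few extra user tracks before solving.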
If you’ve almost got a solve but can see a bit of drift in a few frames, try creating a single manual tracker over those frames in a sparsely populated area of the image, then solve for the 3D position of that tracker alone, fix your focal length and refine your solution - you don’t have to solve from scratch every time.

  • PFTrack 25.11.13: Elevating 3D camera tracking to the highest professional standards

We are thrilled to announce the immediate release of the first in a series of significant updates to our professional camera tracking and matchmoving solution. These planned enhancements deliver powerful new features and refinements designed to boost productivity and provide our users with even greater precision and reliable results.

What’s New and Amazing?

Revolutionary Anamorphic Lens Distortion: Experience unmatched accuracy with our new anamorphic lens distortion model and smart calibration system, generating better presets for a wider range of lenses.

Enhanced Tracking Toolsets: The User Track and Auto Track nodes have been completely overhauled with updated UIs, new tracking and editing tools located directly in the Cinema window, and a new ‘Localized’ motion prediction algorithm for superior performance.

Unified Solver Adjustments: All solver nodes now feature a unified Tracker Adjustment toolset and a new Parameter Refinement toolset in the Camera and Survey Solvers, giving you direct, intuitive control over your solve refinement process.

Major Photogrammetry Performance Boost: The Photo Mesh node now boasts greatly improved memory usage and processing speed, especially when building very large meshes, with support for displaying 50M+ triangle meshes on macOS.

Smarter Media Handling: The Clip Input node now offers a more flexible distortion model for ‘Measured’ estimation and caches original input frames for improved interactivity and performance.

Workflow Refinements: The Workpage and Node Panel have been updated for a smoother experience, and a new online AI assistant is available to help you quickly find documentation and learning resources.

Community-Driven Improvements: This release also incorporates numerous fixes and feature requests directly addressing customer-reported issues and needs, making the application more stable and productive than ever.
The full detailed changelog can be found here: https://pftrack.thepixelfarm.co.uk/documentation/changelog.html

  • 5 Issues that Could Derail Your Camera Tracking

Matchmoving is becoming more and more of an automated process of tracking and solving. But there are still cases where the keen eye of the matchmove artist can save time by spotting potential issues that could derail your tracks and solves. This post will list what to look for to identify which clips need your attention.

1 — Lens distortion

Lens distortion is an optical aberration that causes straight lines to appear curved in photos or films, and it is easy to see how this can cause issues for the matchmove artist. Trackers along a straight line in the real world are no longer on a straight line in the resulting distorted image, and the effect on a camera track can, at best, produce false positives or, in the worst case, cause the 2D tracking to fail altogether. Lens distortion can be recognised by looking for straight objects at the edges of the frame, such as the beam in the image below. Due to the 3D representation of trackers that should be in a straight line now being on an arc and not truly reflecting the real-world scene, the camera solve will fail when it becomes impossible to line the virtual camera up with the distorted tracking points.

Fix?

Film and television audiences are used to a certain amount of lens distortion in their viewing experience, and any CG must be distorted in the same way as the background plate to blend in perfectly. The trick is to undistort the image plate BEFORE carrying out any tracking/matchmove operations, then use the calculated distortion models further into the VFX pipeline. All good matchmoving software has distortion pipeline tools built in, which allow the distortion of the background plate prior to tracking and the ability to pass the distortion metrics (more commonly supported through ST Maps) further into the VFX pipeline, usually the compositing software.

2 — Rolling shutter

Like lens distortion, rolling shutter results from limitations of the image capture technology employed to shoot the footage.
The effect of rolling shutter occurs when different lines of the image sensor are recorded at slightly different times, which commonly happens with CMOS sensors. The effects of shutter roll are most noticeable with whip-pans or rapid translations. If the camera sensor records the image line by line during such fast movements, different parts of a frame are recorded at different times and from different camera locations. Unfortunately, severe rolling shutter can render your footage almost unusable for motion effects such as tracking and titling, not just because the distorted image will cause tracking to fail but also because it is virtually impossible to match any form of CG to the unpredictable distortion.

Fix

The best fix is to sidestep any capture technology that produces this particular effect and opt for a better-quality device. However, the fix-it-in-post mentality that can sometimes occur means the VFX departments get what they are given. Fortunately, there are fixes out there. To make a usable image, you must reverse-engineer a unique camera position for a single frame when no such position exists. Shutter roll must be treated before the tracking, so matchmoving applications can rely on all scanlines of a single frame to represent the same time and location. Shutter roll became such a big issue that numerous plugins from third-party vendors are available to provide fixes, with varying results. PFTrack has a tool built in to undistort the background, which can be passed down the tracking tree, and other matchmove apps can deal with footage similarly. After undistorting the rolling shutter and tracking, you will need to provide the resulting undistorted background plate further into the VFX pipeline for any compositing, etc. to be carried out. Unlike lens distortion, it is not usual to re-introduce the distortion characteristics.

PFTrack’s Shutter Fix node can be used to reduce the effects of rolling shutter.
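The scanline timing behind rolling shutter can be made concrete: if rows are read out sequentially over a readout window, each row is captured at a slightly later moment, so a fast pan smears vertical edges into slants. A simplified sketch (the 15 ms readout and the pan speed are hypothetical figures for illustration, not specs for any particular camera):

```python
# Hypothetical 1080-row sensor with a 15 ms top-to-bottom rolling readout.
ROWS = 1080
READOUT_MS = 15.0

def row_capture_offset_ms(row: int) -> float:
    """Time offset of a given row's capture relative to the top row."""
    return row * READOUT_MS / (ROWS - 1)

# During a whip pan of 500 px of horizontal motion per frame at 25 fps (40 ms per frame),
# the bottom row is captured 15 ms after the top row, displacing a vertical edge by:
pan_px_per_ms = 500 / 40.0
skew_px = pan_px_per_ms * row_capture_offset_ms(ROWS - 1)
print(round(skew_px, 1))  # 187.5 px of slant between top and bottom rows
```

This is why the correction has to happen before tracking: the camera solver assumes every scanline of a frame was captured from a single camera position at a single instant.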
additional rolling shutter ref — https://en.wikipedia.org/wiki/Rolling_shutter

3 — Lack of features

Matchmoving applications rely on tracking static object features within the image. From the way these features move through an image sequence, the matchmoving application reverse engineers how the camera was moved to film it and even some properties of the camera itself, such as focal length. Ideally, the features to be tracked will be well distributed over the entire 2D image, as well as the 3D space of the scene. So, the key to a successful auto-track and camera-solve is to have plenty of well-spread, trackable features in your clip. A trackable feature can be virtually anything that stands out in the image, such as the corner of a window.

No background detail

Uniform backgrounds, such as a green screen used in many VFX shots, however, don’t have as many features as in the example above. There is nothing to track in the worst cases, such as in the clip below. This clip will require some manual work to get a working camera. On the other hand, even green screens do have tracking markers in many cases, but due to the nature of green screens, these markers will not always sufficiently stand out.

Fix

In many cases, the clip can be altered to make the markers more visible, as in the example above.

Motion blur

Another common case that can result in a lack of trackable features is motion blur caused by a fast-moving camera. As such, motion blur makes it harder for an algorithm to locate trackable features. Any features that may be found are also harder to track due to the fast camera motion.

Fix

You may be able to recover enough detail for a track through image processing, but in many cases, clips with heavy motion blur will require manual trackers to get the best result.

4 — Incorrect features

In some cases, there may be plenty of features to track, yet these features would not feed the correct information to the matchmoving applications.
To be of any use in solving for a camera, trackers must represent the same real-world 3D position throughout the clip. Below are some examples of where this is not the case.

Too much movement

One obvious example of trackers not sticking at what represents the same real-world position is when there is movement inside the shot, such as moving cars or people. In an exterior scene, these could also be branches of trees subtly swaying in the wind. Even though they may appear not to move very much, they can pose a problem if too many trackers are on them. The clip below shows an example of a moving person. While these trackers cannot be used to solve a camera, they would still be helpful to solve the object’s motion in a later step.

Fix

If a shot contains too much motion, the moving objects may have to be masked out before tracking or any trackers on such objects removed before feeding them into the camera solver. In many cases, however, the consistency parameter in PFTrack’s Auto Track node can automatically eliminate independently moving trackers.

False corners

Another example where trackers do not provide helpful information, neither for camera nor object tracks, is false corners. False corners occur when two objects at different distances from the camera overlap. Tracking algorithms could interpret the intersection of these two objects as a trackable feature. Solving algorithms, however, expect features to represent the same 3D real-world position, which is not the case for false corners.

Fix

This issue requires an observant operator to spot suspicious trackers. Turning on tracker motion prediction in PFTrack’s Auto Track or Auto Match node may help avoid tracking false corners, as can the Auto Track node’s consistency setting.

5 — No Parallax

Matchmoving relies heavily on parallax, the familiar effect that objects far away appear to move more slowly than objects closer to us.
For camera tracking, the application uses this knowledge to estimate the relative distance of trackers from the camera and determine how the camera moves. But there are types of shots that do not exhibit any parallax.

Locked off shots

Without any camera motion, background features will not move at all, which means features further away cannot move slower than features closer to the camera.

Zoom shots

At first glance, a zoom may look like camera motion, but zooming with a locked-off camera does not exhibit parallax. Zooming only magnifies a part of the image, and all objects inside that part keep their relative positions. The following example shows the different results you get from a zoom shot compared to a dolly shot, where the camera moves forward. Note how in the dolly shot, the objects move relative to each other, and, as a result, more of the circled object is revealed at the end of the shot.

Nodal pan

Nodal pans are a third example of shots that don’t contain parallax. The easiest way to imagine a nodal pan is a camera mounted on a tripod, rotating in place without any translational movement. This rotational motion of the camera does not create any parallax, as illustrated in the clip below. A true nodal pan rotates the camera exactly around the optical centre of its lens; while most tripod shots do not rotate precisely around this point, they often still do not contain enough parallax to solve for accurate 3D tracker positions.

Fix

Introducing additional views of the scene, such as still images shot from a different position, will let you extract 3D data from nodal pans.

Conclusion

Spotting these issues early can help you distinguish easy-to-track shots from those that need extra care in matchmoving. The Pixel Farm’s matchmoving application, PFTrack, provides tools to help you mitigate these issues (as outlined in the fix suggestions) and solve many difficult situations.

  • Camera sensors and their effect on matchmoving

The stress levels are rising, the deadline is looming, and the shot you’re working on is taking far longer to matchmove than you first thought. Don’t worry – we’ve all been there! Matchmoving is a technique used to track how the camera moves through the shot so that an identical virtual camera can be reproduced inside a software package, a process crucial in visual effects for integrating and matching the perspective of CGI (computer-generated images) with live-action plates. In this article, we examine some of the camera acquisition types commonly used for film, television, and VR and outline some key factors and limitations that can make a seemingly straightforward matchmove take much longer than expected.

Camera Acquisition

Cinema or cinema-style cameras usually have a high resolution, high dynamic range, large format sensor with RAW data recording and the ability to capture high frame rates for slow motion. Commonly used for feature films, television dramas and commercials, this type of camera offers the very peak in acquisition technology. Using an industry-standard, positive lock (PL) lens mount enables using the same cinema primes and zooms on different manufacturers’ cameras. Nearly all cine-style cameras record to common UHD broadcast and DCI spec film standards, along with non-standard raw frame sizes beyond 4K. Super 35 and Full Frame sensors have become the standard for high-end acquisition and will be the formats you will most likely come across when matchmoving. One thing I’ve noticed is the loose definition manufacturers use to describe the size of the sensor. For example, you will see Super 35 within their marketing, referring to the motion picture film format size of 24.89mm x 18.66mm. However, if we delve deeper into the actual specifications, we will see that this description only approximates the actual physical size of the sensor plane.
    While slight differences in the field of view are not hugely important for camera operators, they are very important for VFX professionals such as matchmovers, compositors and 3D artists.

    Slow motion can cause problems for matchmovers in certain circumstances. To achieve high frame rates, some camera systems have to window the sensor, effectively cropping it to increase the sensor’s readout performance, resulting in a reduced field of view. This means the measurements in the manufacturer’s specifications describe the full sensor rather than the imaging area used to capture a given format. The same thing can happen when selecting a different recording standard. For example, DCI 2K resolution (2048×1080) might use more of the sensor’s imaging area than HD (1920×1080), meaning HD effectively has a narrower field of view.

    Factors to consider when handling footage

    Resolution

    Image resolution defines the amount of detail in footage or a still image. Modern high-end cine camera systems, such as those from Red and Arri, offer resolutions of 6K and beyond. However, optics and sensor characteristics play a part in the fidelity of the final recorded footage. Not all 4K/HD cameras are born equal: some use pixel binning and interpolation to arrive at a given resolution. While it’s not essential to know how this works, it is important to know that it can dramatically affect the overall quality. In the example below, I simultaneously shot a scene in 4K (4096×2160) and HD 720p (1280×720). Notice how fine details in the stonework are visible in the 4K version, whereas they have disappeared in the 720p HD footage.

    How does this affect matchmoving? With good-quality footage, high-resolution plates can be a joy to work with. Fine details in the scene, which would have been completely lost in lower-resolution formats, suddenly become a rich array of trackable features. High resolution is not without its downsides, though.
    Apart from the obvious increase in processing time, you have to increase your feature sizes accordingly to avoid ending up with very small feature windows with limited useful data inside them. We can see this in the example below: the left-hand image is the feature window from the HD (1920×1080) clip and the right is from UHD (3840×2160). Ultimately, increasing resolution does not always lead to increased tracking accuracy. Soft or poorly calibrated optics can have a similar effect on your footage.

    Dynamic Range

    One area with many variances is dynamic range. In simple terms, dynamic range is the range of light/brightness that a camera can see. Have you ever taken a photo with your mobile phone on a bright sunny day and wondered why the sky looks so bright and the clouds have disappeared? This is caused by a limitation in the sensor’s ability to reproduce the brightest and darkest parts of the scene at the same time. Some sensors are better at reproducing a wide range of brightness than others. I shot the image below using the same exposure settings, once with an HD cine camera and again with a mobile phone in HD video mode. Ignoring the lack of sharpness and depth-of-field differences for the moment, we can see the phone image has a complete lack of detail in the sky and the roof compared to the cine camera’s image. Additionally, all the detail in the foreground blinds is absent where they intersect with the sky in the phone image. The difference between the two images is that the cine camera sensor can capture two-thirds of the total brightness range in the scene, whereas the phone camera sensor can capture only a quarter of that range at best. Detail not captured by the sensor will rapidly clip to white in the highlights and crush to black in the shadows. It’s important to note that incorrect handling of recorded footage can also result in a loss of dynamic range. Let’s look at another example below.
    Notice the lack of trackable detail in the shadow portion of the image on the right.

    How does this affect matchmoving? Good contrast is essential to matchmoving, but not at the expense of detail. Put simply, it’s the difference between a few trackable features and many trackable features. While tracking a low-dynamic-range scene is far from impossible and can still yield great results, a feature-rich, high-dynamic-range scene can make your life much easier and get you closer to the results you desire more quickly.

    Rolling Shutter

    There are two types of sensor readout: global shutter, which reads the image data from the sensor all at once, and rolling shutter, which reads each line of image data sequentially from top to bottom. A slow rolling shutter readout time causes the image to skew during fast motion, commonly seen on low-end cameras. You can see the effects of rolling shutter for yourself using a mobile phone set to video mode. Point the camera towards a vertical surface like a door frame. Record with the phone held steady for a few seconds, then pan left and right, gradually increasing the rate at which you pan. When you play back the footage, you will notice that the door frame tilts as the pan speed increases rather than remaining perfectly vertical. Below are some stills from footage of a brick wall I recorded with my phone camera demonstrating the issue. Most cameras, especially consumer and semi-professional models, suffer from rolling shutter, sometimes severely. In simple terms, rolling shutter artefacts are caused by the image being read off the sensor row by row: by the time the readout reaches the bottom, the camera orientation has changed slightly, so the top of the image represents a slightly different point in time from the bottom. High-end cameras from Red and Arri do suffer from the effects of rolling shutter but reduce them dramatically by increasing the speed at which the image is read off the sensor.
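The size of the skew can be estimated with simple arithmetic: it is roughly the sensor’s top-to-bottom readout time multiplied by the on-screen pan speed. A back-of-envelope sketch, where the readout times and pan rate are illustrative values rather than measured specs:

```python
def rolling_shutter_skew_px(readout_time_s, pan_speed_px_per_s):
    """Horizontal offset between the top and bottom rows of one frame:
    the scene keeps moving while the sensor is read top to bottom."""
    return readout_time_s * pan_speed_px_per_s

# A hypothetical phone sensor with a slow 30 ms readout, panning at 500 px/s:
slow_sensor = rolling_shutter_skew_px(0.030, 500)  # 15.0 px of skew
# The same pan on a hypothetical fast 8 ms cine-camera readout:
fast_sensor = rolling_shutter_skew_px(0.008, 500)  # 4.0 px of skew
print(slow_sensor, fast_sensor)
```

This is why faster readout on high-end cameras shrinks the artefact rather than eliminating it: the skew scales linearly with readout time.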
    How does this affect matchmoving? Rolling shutter introduces movement where there should be none, which leads to false results when we matchmove the footage. Rolling shutter is a complex problem to correct: foreground elements skew to a greater degree than the background. However, advanced matchmoving software like The Pixel Farm’s PFTrack offers tools to correct or minimise it.

    Image Noise

    When you take a photo in a dimly lit environment with your camera phone, the pictures can look noisy and lacking in fidelity. This is because the camera is gaining up the signal by increasing the ISO in order to reach an adequate exposure level. Lower ISO values generally mean lower noise levels, while higher ISOs increase the noise. High-end cinema and stills cameras perform much better in this regard than consumer-grade camera systems. They are not immune to excessive noise at high ISO, but they can reach a higher ISO before noise becomes a limiting factor. Underexposing footage can have the same effect as a high ISO, revealing more of the noise floor when the image is corrected back to its proper exposure level. The example below shows a crop from a shot exposed first at 800 ISO and then at 3200 ISO. Notice how quickly fine details are obscured and lose microcontrast as we increase through the range.

    How does this affect matchmoving? Noise can be a big problem during matchmoving, especially with footage from cameras with smaller sensors shot in less than adequate lighting conditions. Fine details are lost to interpolation errors in the debayering process, which we can see clearly in the 3200 ISO example above. Excessive noise can affect how tracking points are located (e.g. when auto-tracking) and how accurately they are tracked. However, a lot of noise must be present before it becomes a real problem for matchmoving.
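The link between exposure and visible noise can be sketched with a toy signal-to-noise model, assuming shot noise grows with the square root of the light captured and a fixed read-noise floor. The electron counts below are illustrative, not any real sensor’s figures.

```python
import math

def snr_db(photons, read_noise_e=5.0):
    """Per-pixel signal-to-noise ratio in decibels. ISO gain amplifies
    signal and noise together, so SNR is set by the light captured."""
    noise = math.sqrt(photons + read_noise_e ** 2)
    return 20 * math.log10(photons / noise)

well_exposed = snr_db(4000)  # e.g. correctly exposed at ISO 800
pushed = snr_db(1000)        # two stops less light, gained up to ISO 3200
print(f"{well_exposed:.1f} dB vs {pushed:.1f} dB")
```

The gained-up shot ends up several decibels noisier even though both frames can be brought to the same apparent brightness, which is exactly the loss of microcontrast seen in the 3200 ISO crop.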
    Compression

    Have you ever been streaming your favourite series when, suddenly, the internet connection dropped and you were left with a mess of blocks and squares, making it difficult even to make out people’s faces? This is the result of compression. A similar effect can happen during a shoot when there is significant camera movement and a highly compressed codec is used to record the footage. Most cameras offer the option to record to a compressed codec to save space on memory cards when longer recording durations are required; point-of-view (POV) cameras frequently use highly compressed codecs. Modern high-end broadcast codecs deliver images almost indistinguishable from the uncompressed version. They do this by compressing the footage just enough that it throws away information we are unlikely to need while maintaining the bits we do need. While the footage may look great when the camera is still, this might not be the case when it is moving. In the handheld panning shot example below, I recorded to a highly compressed AVCHD codec at 28 Mbps (3.5 MB/s) and simultaneously recorded uncompressed with the same camera as a comparison. Notice how some fine details have completely disappeared in the compressed recording in the image on the right. Additionally, the edges have become unrefined and, when viewed in motion, appear to dance around and jitter.

    How does this affect matchmoving? Camera movement is everything in matchmoving, and to give the software the best chance of finding an accurate solution, we want to give it the highest-quality footage. Unfortunately, camera movement, or any movement, is the worst enemy of compression. This presents itself as mosquito noise around fine detail and macroblocking around areas of movement, as we have seen in the example above.
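The 28 Mbps figure above translates into a stark per-frame budget compared with uncompressed video. A quick sketch, where the 25 fps frame rate and the 10-bit RGB raw format are assumptions chosen for illustration:

```python
def compressed_frame_mb(bitrate_mbps, fps):
    """Average compressed budget for one frame, in megabytes."""
    return bitrate_mbps / 8 / fps

def raw_frame_mb(width, height, bits_per_pixel):
    """Size of one uncompressed frame, in megabytes."""
    return width * height * bits_per_pixel / 8 / 1e6

avchd = compressed_frame_mb(28, 25)    # 0.14 MB per frame
raw_hd = raw_frame_mb(1920, 1080, 30)  # about 7.78 MB per frame (10-bit RGB)
print(f"{raw_hd / avchd:.0f}x compression")
```

A ratio in the region of 50:1 is only achievable by discarding information, and during camera movement much of what gets discarded is precisely the fine, frame-to-frame detail a tracker relies on.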
    Some video codecs group frames together, comparing them against each other, storing only the information that has changed between frames and interpolating or averaging detail that hasn’t. Matchmoving with compressed footage is still possible and will provide adequate results, but it can take a lot longer due to errors created by false details from interpolation and compression artefacts. In any situation, RAW data recording is always preferable to compression.

    Spherical 360 video

    360 video is real-world video shot with a 360-degree camera that allows viewers to change their viewing angle at any point during playback. These videos can be enhanced further with computer-generated imagery in post-production in the same way as a conventional 2D production. However, this requires specialist matchmoving software and toolsets like The Pixel Farm’s PFTrack. VR 360 cameras commonly involve two or more cameras recording at least HD resolution. The clips from each camera are then stitched together, either in-camera or in post, to form a 360-degree spherical panorama that can be viewed in a desktop viewer or VR headset. The two main types of VR camera systems are back-to-back and multi-camera rigs.

    Back-to-back rigs are simply two optics and sensors in one housing, or two separate cameras placed back to back, with combined optics that cover 360 degrees. The benefits of these systems are low parallax, size, ease of use and a small footprint, making them perfect for situations where a larger 360 rig would not be practical. The downside is that the somewhat limited resolution, combined with the extreme nature of the optics, can lead to aberrations and relatively soft results. Multi-camera rigs share many of the same principles as back-to-back systems but add more cameras to achieve better-quality results. These rigs can comprise multiple cinema cameras or a single housing with many integrated sensors and optics.
    The distinct advantage of multi-camera systems comes from the larger number of higher-quality cameras: the optics don’t have to cover such an extreme angle of view, which makes them less susceptible to complex distortions, aberrations, flaring and softening towards the extreme edges. Clearer, higher-resolution images with greater dynamic range will always have the potential to provide better results during the matchmoving process.

    Unique factors with 360 video

    360 camera systems can run into the same issues we discussed above, but they also have a few unique problems.

    Parallax

    Parallax is a common problem shared by both back-to-back and multi-camera systems. It presents itself as errors of overlapping detail along the stitch line, with objects closer to the camera rig being the worst affected. To achieve a perfect stitch, all cameras would have to rotate around the entrance pupil of the optics. Unfortunately, this is physically impossible, as all cameras would have to occupy the same space simultaneously. We can see the effect of parallax in the frame below, where the wall is close enough to the rig for parallax to be an issue: detail on the wall is misregistered along the stitch line. The effects of parallax can be minimised by making sure the cameras are as close to the central axis as possible and the rig is not too close to the subject you wish to track. This is achieved successfully in systems where optics and sensors are built into the same unit, although image-quality compromises must be made to shrink the cameras and sensors enough to do so. Parallax errors can be problematic, as they can cause camera registration errors and create accuracy problems when positioning tracking points in 3D space.

    Camera Synchronisation

    Camera synchronisation is a big problem with some VR 360 camera rigs. We used a back-to-back VR system comprising two separate cameras during our testing.
    Despite extensive experimentation, we struggled to achieve sufficient synchronisation between the front and rear cameras. While it was still possible to track the clip, we could never get a perfect sync between the stitched clips due to slight variances in sensor timing. This ultimately led to accuracy errors during the tracking process caused by independent movement between the cameras. The example below shows a 360 clip manually adjusted for correct sync on the left and the recorded, incorrect sync along the stitch line on the right. Larger single-housing multi-cam rigs, and rigs made up of professional cinema cameras, solve this problem by using a locking signal and timecode to sync the clips during recording, but they do, on occasion, still fall out of sync.

    Links: Head back to Learning Articles. Alternatively, explore our extensive Resources for valuable presets, Python scripts, and macros. You can also find step-by-step video guides covering the fundamentals of camera tracking and matchmoving in PFTrack within our PFTrack Tutorials.
