2.2 Tracking, input filtering and the mirror mode
The area directly above the input preview contains controls for some of the most interesting features of Z Vector (from left to right):

  • Mirror mode
  • Automatic input center tracking
  • Background filtering based on depth
  • Automatic human form filtering or autofilter (only available if the prerequisite libraries have been enabled in Preferences)

Mirror mode flips the image horizontally. This can be useful depending on the setup (for example when rear-projecting onto a background screen at an event).
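Conceptually, a horizontal mirror is just a left-to-right flip of each frame. A minimal sketch of the idea (the function name and NumPy representation are illustrative assumptions, not Z Vector's internals):

```python
import numpy as np

def mirror_horizontal(frame: np.ndarray) -> np.ndarray:
    """Flip a frame (H x W, or H x W x C) left to right."""
    return frame[:, ::-1]

frame = np.array([[1, 2, 3],
                  [4, 5, 6]])
mirrored = mirror_horizontal(frame)
# mirrored: [[3, 2, 1], [6, 5, 4]]
```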

Automatic input center tracking calculates the input's center based on the available depth data and places the virtual camera's pivot point there.
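One common way to compute such a center is to take the mean position of all valid depth pixels (a sketch of the general technique, assuming zero means "no reading", as on most depth sensors; this is not Z Vector's actual algorithm):

```python
import numpy as np

def depth_centroid(depth: np.ndarray):
    """Mean (x, y) position of all non-zero depth pixels, or None if empty."""
    ys, xs = np.nonzero(depth)
    if xs.size == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

depth = np.zeros((4, 4))
depth[1:3, 1:3] = 1000  # a 2x2 blob of valid depth readings
center = depth_centroid(depth)
# center: (1.5, 1.5)
```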

Depth-based background filtering makes it possible to scan a background sample that is then used to filter out matching data from all subsequent frames. This can be useful depending on the use case (for example, taking a sample before people enter the dance floor or the performer takes the stage). The sample is taken automatically when the mode is first activated (after software startup), but it can be recreated at any point by shift-clicking the icon.
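The underlying idea is classic depth-based background subtraction: store one background frame, then keep only pixels that are clearly in front of it. A hedged sketch (function names and the tolerance value are assumptions for illustration):

```python
import numpy as np

def take_background_sample(depth: np.ndarray) -> np.ndarray:
    """Store a copy of the current depth frame as the background."""
    return depth.copy()

def filter_background(depth, background, tolerance=50):
    """Keep only pixels measurably closer than the background sample."""
    foreground = depth < (background - tolerance)
    valid = depth > 0  # zero typically means "no reading"
    return np.where(foreground & valid, depth, 0)

background = take_background_sample(np.full((2, 2), 3000))
frame = np.array([[1000, 3000],
                  [0,    2990]])
result = filter_background(frame, background)
# result: [[1000, 0], [0, 0]]  -- only the clearly closer pixel survives
```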

Automatic human form detection and "autofiltering" can be used to algorithmically detect, separate and mask humanoid forms in and out of the image in real time. Note that this mode is only available on particular sensor types.

Pro tip: When human form detection is supported, humanoid forms are overdrawn in blue inside each input preview. Forms can then also be masked out individually: shift-click a tracked (blue) form at any time to turn it green. From then on, only the green forms are used for the visuals, while everything else is masked out. You can add and remove individual tracked forms from the mask by shift-clicking on them. Again, this can be useful depending on the use case (for example, picking out a single dancer from a crowd). You can return to the "autofiltering" mode at any time simply by clicking the corresponding button.
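Per-form masking like this usually relies on a label map from the sensor's tracking library, where each pixel carries the ID of the tracked form it belongs to. A sketch of selecting specific forms by ID (the label-map layout and names are assumptions, not Z Vector's API):

```python
import numpy as np

def mask_selected_forms(depth, labels, selected_ids):
    """Keep depth only where the label map matches a selected form ID."""
    keep = np.isin(labels, list(selected_ids))
    return np.where(keep, depth, 0)

labels = np.array([[0, 1, 1],    # 0 = background, 1 and 2 = tracked forms
                   [2, 2, 0]])
depth = np.full_like(labels, 1500)
out = mask_selected_forms(depth, labels, {2})  # keep only form 2 ("green")
# out: [[0, 0, 0], [1500, 1500, 0]]
```

Adding or removing a form from the mask then amounts to adding or removing its ID from `selected_ids`.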

Other parts in this tutorial series:

2.0 Basics: Input parameters, the virtual camera and smoothing
2.1 Input colouring and parameters