Camera and object tracking
LightAct has integrated the following camera tracking protocols: MoSys, FreeD, Stype, Ncam and SPNet (Stage Precision). Most of them use the UDP protocol under the hood, while Ncam can use either UDP or TCP.
LightAct has integrated the following object tracking protocols: PSN, BlackTrax, OptiTrack, SPNet and Vive. All of them use the UDP protocol under the hood.
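Since most of these protocols are UDP-based, tracking data arrives as small binary datagrams that the Receiver device decodes for you. As an illustration of what such a datagram looks like, the sketch below parses a FreeD "type D1" camera pose message using the layout commonly given in FreeD references (29 bytes; angles with 15 fractional bits, positions with 6 fractional bits). This is an assumption-laden sketch for understanding only; LightAct performs this decoding internally.

```python
def s24(data, offset):
    """Read a signed 24-bit big-endian integer at the given offset."""
    return int.from_bytes(data[offset:offset + 3], "big", signed=True)

def parse_freed_d1(packet):
    """Parse a FreeD 'D1' camera pose packet (29-byte layout as commonly documented).

    Assumed layout: 0xD1, camera ID, pan/tilt/roll (degrees, 15 fractional
    bits), X/Y/Z (mm, 6 fractional bits), zoom, focus, 2 spare bytes, checksum.
    """
    if len(packet) != 29 or packet[0] != 0xD1:
        raise ValueError("not a FreeD D1 packet")
    # Per common references, all 29 bytes should sum to 0x40 modulo 256.
    if sum(packet) % 256 != 0x40:
        raise ValueError("checksum mismatch")
    return {
        "camera_id": packet[1],
        "pan_deg": s24(packet, 2) / 32768.0,
        "tilt_deg": s24(packet, 5) / 32768.0,
        "roll_deg": s24(packet, 8) / 32768.0,
        "x_mm": s24(packet, 11) / 64.0,
        "y_mm": s24(packet, 14) / 64.0,
        "z_mm": s24(packet, 17) / 64.0,
        "zoom": int.from_bytes(packet[20:23], "big"),
        "focus": int.from_bytes(packet[23:26], "big"),
    }
```

Note how position and rotation fit in a single compact packet; protocols with camera tracking support extend this idea with lens fields such as zoom and focus.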
In the table below you can find which protocols support camera tracking, which support object tracking, and which support both.
Good to know: by 'camera tracking' we mean that a protocol can transmit lens data, such as FOV and lens distortion, in addition to regular position and rotation data.
Protocol | Camera tracking | Object tracking
---|---|---
PSN | | ✓
MoSys | ✓ |
FreeD | ✓ |
BlackTrax | | ✓
OptiTrack | | ✓
Stype | ✓ |
Ncam | ✓ |
SPNet | ✓ | ✓
Vive | | ✓
Good to know: setting up any kind of tracking happens in two places: first in the Devices window, where you receive the data, and then in one of the Layer Layouts, where you decide what to do with that data.
In order to receive camera or object data via one of these protocols, we first need to create a Receiver device. Right-click in the Devices window and choose the appropriate Receiver device.
In order to use the values received in the Devices window, we need to create a Read node in one of the layers. Open the layout of one of the layers (by double-clicking on it), right-click in the Layer window and choose the appropriate Layer reader node.
In the next chapters, we will create camera and object tracking devices for demonstration purposes. We will also describe the specifics that differentiate each device from the others.