Overview of Content Pipeline
LightAct uses a flexible and powerful multi-stage content pipeline that has been fine-tuned for performance. In this article, we will go through the basic concepts and terminology and explain how it all comes together.
As a media server, LightAct most commonly ingests content through Layer nodes such as Get Texture (for image files), Play Video or Image Sequence (for video files and image sequences) and Play Notch Block (for Notch blocks).
It can also ingest content through Device nodes such as NDI, Spout or Unreal Texture Share.
Once you have your content in a layer, there are several paths it can take.
You can render your content (usually referred to as a Texture) to a Canvas using the Render to Canvas node. This is the most standard and recommended way, as it gives you the most options afterwards.
You can also push your content directly to Throwers, Viewport objects (such as video screens or 3D models) or Device Sender nodes (such as the NDI Sender node). To do that, use the Set Texture node.
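To make these two paths concrete, here is a minimal Python sketch of the flow described above. LightAct is configured through its node graph rather than a scripting API, so all of the names below (Texture, Canvas, render_to_canvas, set_texture) are purely illustrative:

```python
# A conceptual model only: this is not a LightAct scripting API.
from dataclasses import dataclass, field

@dataclass
class Texture:
    """Content held by a layer, e.g. from Play Video, NDI or Spout."""
    source: str

@dataclass
class Canvas:
    """A canvas collecting rendered textures; the recommended hub from
    which video screens, projectors, fixtures and 3D models can pull."""
    layers: list = field(default_factory=list)

def render_to_canvas(texture: Texture, canvas: Canvas) -> None:
    # Mirrors the Render to Canvas node: the texture is rendered onto
    # the canvas, keeping all downstream routing options open.
    canvas.layers.append(texture)

def set_texture(texture: Texture, target: str) -> str:
    # Mirrors the Set Texture node: pushes the texture straight to a
    # thrower, viewport object or Device Sender such as NDI Sender.
    return f"{texture.source} -> {target}"

canvas = Canvas()
tex = Texture(source="Play Video")
render_to_canvas(tex, canvas)   # the standard, recommended path
set_texture(tex, "NDI Sender")  # the direct path
```

The point the sketch tries to capture is the trade-off: rendering to a canvas keeps every downstream option open, while Set Texture is more direct but bypasses the canvas stage.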
Viewport objects are virtual replicas of real objects. There are 5 different types of them:
video screens,
DMX fixtures,
projectors,
throwers and
3D models.
For a diagram of the various flow options, please refer to the image at the top of this article; for a bulleted list, read on.
Video screens can get content from:
canvases
throwers
DMX fixtures can get content from:
canvases
video screens
Projectors can get content from:
canvases
video screens
throwers
3D models
3D models can get content from:
canvases
throwers
Please note that all of the above can also get content via the Set Texture node.
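If it helps to see these routing rules in one place, the sketch below restates the bulleted lists as plain data. Again, the names (SOURCES_BY_OBJECT, can_receive) are hypothetical and not part of LightAct:

```python
# Hypothetical encoding of the routing rules listed above.

# Which sources each viewport object type can pull content from.
SOURCES_BY_OBJECT = {
    "video screen": {"canvas", "thrower"},
    "DMX fixture":  {"canvas", "video screen"},
    "projector":    {"canvas", "video screen", "thrower", "3D model"},
    "3D model":     {"canvas", "thrower"},
    "thrower":      set(),  # throwers are fed via Set Texture (see note)
}

def can_receive(obj_type: str, source: str) -> bool:
    """Return True if `obj_type` can get content from `source`.
    The Set Texture node can push to any viewport object, so that
    path is always allowed."""
    if source == "set texture":
        return True
    return source in SOURCES_BY_OBJECT.get(obj_type, set())

# Checks mirroring the bulleted lists above:
assert can_receive("video screen", "canvas")
assert not can_receive("DMX fixture", "thrower")
assert can_receive("projector", "3D model")
assert can_receive("3D model", "set texture")
```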