Logemas now distributes the next must-have technology for your Virtual Production tech stack, Digital Camera Systems (DCS) lens encoders!
What are lens encoders and why do I need them for Virtual Production?
In short, lens encoders digitise manual camera lens settings for recording or streaming purposes. When you adjust the Focus, Iris, or Zoom (FIZ) values of a lens, the lens encoder translates those adjustments into digital signals that your virtual environment can interpret.
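Conceptually, an encoder reports raw counts for each lens ring, which are then mapped through a calibration onto real lens values. The sketch below is purely illustrative (the function names, count ranges, and calibration values are assumptions, not the actual DCS protocol):

```python
# Hypothetical sketch of mapping raw lens-encoder counts to FIZ values.
# Ranges and names are illustrative assumptions, not the real DCS API.

def normalise(raw: int, raw_min: int, raw_max: int) -> float:
    """Map a raw encoder count onto a 0.0-1.0 range."""
    return (raw - raw_min) / (raw_max - raw_min)

def counts_to_fiz(focus_raw: int, iris_raw: int, zoom_raw: int) -> dict:
    # Assumed 16-bit encoder range and an assumed 24-70 mm zoom lens.
    focus = normalise(focus_raw, 0, 65535)   # 0.0 = close focus, 1.0 = infinity
    iris = normalise(iris_raw, 0, 65535)     # 0.0 = wide open, 1.0 = fully stopped down
    zoom_mm = 24 + normalise(zoom_raw, 0, 65535) * (70 - 24)
    return {"focus": focus, "iris": iris, "zoom_mm": zoom_mm}

print(counts_to_fiz(0, 0, 65535))
```

In a real workflow this mapping comes from a per-lens calibration, and the resulting values are streamed each frame to the render engine driving the virtual camera.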
Why does this matter?
Adjusting the FIZ values on a camera lens changes the qualities of the image. This can be done for technical or creative reasons when filmmaking. Technically, lens settings are adjusted to achieve correct exposure and to focus accurately on a subject.
Here is an example of a focus pull from The Mandalorian, shifting the focus from near (live performer) to far (LED wall footage) and back:
Creatively, you could choose to draw the audience's attention to a subject by creating a shallow depth of field and focusing on a particular character or prop. Another option storytellers may use is adjusting the focal length (zoom value) to achieve a specific feel. Longer focal lengths create greater image compression, causing the background to appear closer to the subject.
Below is an example of the compression change of a zoom lens, shown using a dolly zoom in Jaws:
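The dolly zoom works because, to a good approximation, a subject's size in frame depends on the ratio of focal length to camera-to-subject distance: if the camera dollies so that distance scales with focal length, the subject stays the same size while the background compression changes around them. A minimal numeric sketch (the lens and distance values are illustrative assumptions):

```python
# Dolly-zoom sketch: keep the subject's framing constant while the focal
# length changes. Thin-lens approximation: image size is proportional to
# focal_length / distance, so distance must scale with focal length.

def dolly_distance(focal_mm: float, ref_focal_mm: float, ref_distance_m: float) -> float:
    """Camera-to-subject distance at focal_mm that matches the reference framing."""
    return ref_distance_m * (focal_mm / ref_focal_mm)

# Assumed reference: subject framed at 2.0 m on a 24 mm lens.
for f in (24, 35, 50, 85):
    print(f"{f} mm lens -> camera at {dolly_distance(f, 24, 2.0):.2f} m")
```

Doubling the focal length requires doubling the distance, which is exactly the coordinated dolly-and-zoom move the Jaws shot performs.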
For cinematographers, lens settings are the bread and butter of visual storytelling. One technical problem cinematographers face when working with LED screens is that real-time lens adjustments do not affect the rendered images displayed on the LED screens. As a result, the cinematographer cannot easily alter compression, field of view, exposure, or focus during takes. This limitation impacts the cinematographer's ability to make full use of their visual storytelling skillset.
Although it may be possible to animate the virtual camera's FIZ data and play that back on the screen, synchronising the physical camera's FIZ adjustments with this animation is incredibly difficult and time-consuming. Replicating a shot like the dolly zoom in Jaws in front of an LED screen, or pulling focus from virtual background elements to real actors in real time, becomes near impossible. Lens encoders, like the DCS, make all these shots possible, in real time and in-camera.
Lens Encoding for Virtual Production
The ability to stream lens data that affects the virtual world in real time puts creative control back in the hands of the creatives during production. The cinematographer gains control over everything they are viewing down the lens, with both the real and virtual worlds responding in sync to a single input. On set this allows for greater cohesion between all creatives, because final-pixel (or near-final-pixel) results can be seen in-camera live on set. For producers, lens encoding saves time in post-production because it reduces the need to resort to green/blue chromakey work.
However, lens encoders are not limited to use on large sets with LED walls. They can also be used in green screen live virtual production with real-time comping, or even to adjust virtual cameras in a fully digital storytelling workflow.
Lens Encoding + Camera Tracking = Worlds combined
Coupling DCS encoders with Vicon camera tracking creates a powerful tracking solution for many real-time virtual production workflows. The sub-millimetre accuracy of Vicon's camera tracking allows the virtual camera to identically copy the cinema camera's movement in real time. This produces camera movements that feel authentically real and helps ensure correct parallax is displayed.
The DCS lens encoders further increase the believability of the virtual camera by applying lens adjustments to the virtual lens in real time. This combination allows the entire virtual camera to behave identically to the real-world camera.
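Per frame, this combination amounts to merging two data streams into one virtual camera state: a tracked pose (position and rotation) and the encoded lens values. The sketch below shows the idea only; the structure names, field names, and sample values are assumptions, not the Vicon or DCS APIs:

```python
# Illustrative sketch: combining a tracked camera pose with encoded lens
# data into a single virtual camera update each frame. Names and values
# are assumptions, not actual Vicon or DCS interfaces.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    position: tuple          # (x, y, z) in metres, from camera tracking
    rotation: tuple          # (pitch, yaw, roll) in degrees, from camera tracking
    focal_length_mm: float   # from the zoom encoder
    focus_distance_m: float  # from the focus encoder
    aperture_f: float        # from the iris encoder

def update_virtual_camera(pose: dict, fiz: dict) -> VirtualCamera:
    """Merge one frame of tracking data and lens data into a camera state."""
    return VirtualCamera(
        position=pose["position"],
        rotation=pose["rotation"],
        focal_length_mm=fiz["zoom_mm"],
        focus_distance_m=fiz["focus_m"],
        aperture_f=fiz["iris_f"],
    )

# One example frame with assumed sample values.
pose = {"position": (1.2, 1.5, -3.0), "rotation": (0.0, 45.0, 0.0)}
fiz = {"zoom_mm": 35.0, "focus_m": 2.4, "iris_f": 2.8}
cam = update_virtual_camera(pose, fiz)
print(cam.focal_length_mm)  # 35.0
```

Because both streams update every frame, the virtual camera's movement and its lens behaviour stay in lockstep with the physical camera.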
Local success story
Here are some examples from one of our customers, Macquarie University in Sydney, using their DCS lens encoders and Vicon tracking system to perform a green-screen live comp in their film studio.