
Volinga Integrates with disguise for AI-Driven 3D Content Creation


disguise’s new integration with SaaS platform Volinga.ai enables filmmakers and virtual production specialists to generate 3D environments in minutes. Users can now capture real-world 2D content with their phones, upload the footage to Volinga’s AI Platform, and, through disguise’s RenderStream™ bi-directional protocol, translate it into immersive virtual scenes that anyone on set can interact with.

It’s now faster and easier than ever to recreate environments shot on location inside a virtual production volume. Users can simply upload their phone footage via the Volinga RenderStream™ plugin and modify the weather or atmosphere of a scene – without having to rebuild it in 3D from scratch. Scenes can run in real time with little optimization needed, cutting down on lengthy pre-production work. disguise has also published an overview video of Volinga’s AI platform.

This workflow is built on NeRFs (Neural Radiance Fields), an AI technique that reconstructs and represents 3D scenes from 2D images. By offering a robust, user-friendly pipeline for 3D content generation, it overcomes common bottlenecks in VFX and virtual production workflows, such as extensive 3D asset creation and complex photogrammetry.
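For readers new to the technique, the sketch below illustrates the core NeRF idea: a learned radiance field is queried along camera rays and composited into pixel colors via volume rendering. Everything in it (the toy radiance_field, the sphere, the sample counts) is purely illustrative and has nothing to do with Volinga’s actual implementation.

```python
# Minimal NumPy sketch of volume rendering along one camera ray, the core of NeRF.
# The tiny "radiance field" is a hard-coded stand-in for a trained neural network.
import numpy as np

def radiance_field(points):
    """Toy stand-in for a trained NeRF: returns (density, RGB) for each 3D point.
    Here, an opaque sphere of radius 1 around the origin, colored orange."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 10.0, 0.0)            # high density inside the sphere
    color = np.tile(np.array([1.0, 0.6, 0.2]), (len(points), 1))
    return density, color

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Composite color along one ray using standard volume-rendering weights."""
    t = np.linspace(near, far, n_samples)                 # sample depths along the ray
    points = origin + t[:, None] * direction              # 3D sample positions
    density, color = radiance_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))      # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)                # per-sample opacity
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * transmittance                       # contribution of each sample
    return (weights[:, None] * color).sum(axis=0)         # final RGB for this ray

# Example: a ray looking down the -z axis toward the toy sphere.
print(render_ray(np.array([0.0, 0.0, 3.0]), np.array([0.0, 0.0, -1.0])))
```

In a real NeRF, the radiance field is a neural network trained on the captured 2D images, so rendering from a new camera position produces a novel view with correct depth and parallax.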

The Volinga Suite end-to-end pipeline also enables fast, efficient creation of 3D environments with full parallax, which is essential for dynamic perspectives and depth in a 3D scene. Because of this, directors of photography can move the camera freely on a virtual production set without restrictions.

“With this new integration, disguise and Volinga users can easily create and navigate 3D environments for virtual production, dramatically reducing the time and effort required in creating 3D environments. This signifies a massive step-change in Virtual Production capabilities,” says Volinga Co-Founder Fernando Rivas-Manzaneque.

So far, 700+ beta users have provided positive feedback, noting the significant reduction in time and budget needed to create photorealistic environments. For example, they could capture a scene on an overcast day and load the resulting NeRF into content engines like Unreal Engine.

“By providing native support for 3D generative AI in disguise, we will ease the process of capturing, manipulating, and visualising 3D environments, bringing techniques previously exclusive to high-budget productions to mid and lower-budget productions. This will drive the adoption of digital twin creation and democratise virtual production for all,” says disguise Solutions Director Peter Kirkup.

To get started, download the disguise RenderStream™ plugin for Volinga, upload images or videos of a 3D environment to the Volinga.ai platform, and convert them into a NeRF. This will be in the form of an NVOL file that can be rendered in both Unreal Engine and disguise RenderStream™ software for use in virtual sets.
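For teams who want to script the capture-to-NVOL steps above, the sketch below shows what such automation could look like in Python. Volinga’s actual API surface is not documented here: the base URL, endpoints, field names, and authentication header are all placeholders and assumptions, not real calls.

```python
# Hypothetical automation of the upload -> train -> download-NVOL workflow.
# All endpoints, parameters, and the API key header below are placeholders.
import time
import requests

API_BASE = "https://api.volinga.example"   # placeholder, not the real endpoint
API_KEY = "YOUR_API_KEY"                   # placeholder credential

def upload_capture(video_path: str) -> str:
    """Upload phone footage and return a (hypothetical) job ID."""
    with open(video_path, "rb") as f:
        resp = requests.post(
            f"{API_BASE}/captures",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"footage": f},
        )
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_nvol(job_id: str, out_path: str = "scene.nvol") -> str:
    """Poll until the NeRF has finished training, then download the NVOL file."""
    while True:
        status = requests.get(
            f"{API_BASE}/captures/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json()
        if status["state"] == "ready":
            break
        time.sleep(30)
    nvol = requests.get(status["nvol_url"])
    with open(out_path, "wb") as f:
        f.write(nvol.content)
    return out_path

# The downloaded NVOL file is then loaded via the Volinga plugin
# in Unreal Engine or disguise RenderStream for use on a virtual set.
nvol_file = wait_for_nvol(upload_capture("location_scout.mp4"))
```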

Download the plugin

Learn more about RenderStream