Phil Galler is the Chief Technology Officer for NEP Virtual Studios, a division of the NEP Group. Galler and Zach Alexander founded, and served as co-presidents of, the innovative Lux Machina, which is now an independent brand under the NEP Group. Galler has long been known for pushing the boundaries of technology and design in the entertainment industry. Focusing on in-camera visual effects, real-time mixed reality, and futurecasting technology for filmmaking and live broadcast, he has worked on groundbreaking shows at the crossroads of technological innovation and creativity. Those projects include The Mandalorian: Season 1, Solo: A Star Wars Story, Top Gun: Maverick, the Golden Globes, the Emmys, the League of Legends World Championship 2020, and House of the Dragon, among many more across feature film, television, live events, fashion, and eSports. Before co-founding Lux Machina, Galler managed projects for Production Resource Group (PRG), where he and Alexander handled the boundary-pushing projection solution for the film Oblivion. Galler is passionate about merging tools such as high-end media servers, automated lighting, LED, camera, and projection systems into filmmaking to keep innovating unique in-camera visual effects solutions. Over the last decade, he has been widely recognized as a pioneer in spearheading the modern use of interactive, immersive lighting setups on stage to accompany ever-changing, advanced VFX workflows. Galler took some time around the recent NAB conference to share with PLSN some of his thoughts on the state of virtual production technology.
What are you focused on at NEP Virtual Studios?
We’re focused on five pillars: live TV, film and episodic TV, corporate installation work, research and development, and consulting. Those are our five key areas. We’re exploring all of them and continue to do some cool work in each. We have 10 years of history doing this work, so for us it’s not just what virtual production is, but also the history we bring to it. We see virtual production not just as an LED volume; it also includes real-time animation and rendering solutions for all sorts of applications. We do a lot of motion capture; we do a lot of visualization.
How do you define Virtual Production?
I think the current definition of Virtual Production, per the Virtual Production Glossary, is the blending of the physical and the virtual, the combination of those two worlds. But I think everyone should be striving to make tools that people in the real world are comfortable using in the digital world, and vice versa. We really want that back and forth. To blend those two worlds together correctly, we really need to understand how people work. And I don’t think that changes when you’re on set versus when you’re working in front of a computer.
Ultimately, at the end of the day, you’re trying to capture a performance in camera.
Yes, exactly. 100% right. Everything has to work to that.
Describe your role now.
I’m the Chief Technology Officer of NEP Virtual Studios, overseeing the technology and technological vision for Lux Machina; for Halon Entertainment, which is our pre-vis and content creation company; for Prysm Stages, which is our stages brand; and for Screen Scene, the effects company that just joined our division and focuses on more traditional VFX and cleanup work. So, I look across all four of those businesses and try to tie stuff together, take on some of the more complex jobs to help route that traffic, and hopefully build a unified infrastructure and ecosystem for the four businesses to work out of, so that we can be efficient, which is always the goal.
Talk about the evolution you’ve seen in the virtual production segment since the projection work you did back on the film Oblivion.
It’s been fascinating. With the advent of modern game engines and real-time rendering, access to creating content in a non-linear and much more dynamic fashion has exploded. Over the last 10 years, you start with some projectors on a screen, and you end up with projectors, and interactive lighting, and LED, and motion capture, and facial capture. There’s a bunch of these tentpole technologies that have existed in some way over the last decade; 20 years in some cases. I think now, really in the last five years, what we’re seeing is the convergence of production and real-time capabilities. For me, just as a person, I like to sit at that nexus.
When you look at the sky solution for Oblivion, where we provided projected content for the film, at one point it took three weeks to blend a lot of projectors; now we can do it in three days. That was the change going from Oblivion in 2013 to Top Gun: Maverick in 2022. Working again with [DP] Claudio Miranda, who also did Oblivion, we did a very similar setup on Top Gun with projectors. Something that would’ve taken us three or four weeks prior took us 48 hours total. That, to me, is the indication of where we’re headed: building on what’s been achieved in the last 10 years.
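Galler doesn’t walk through the blending math itself, but the soft-edge technique behind that kind of multi-projector work is well documented: each projector fades out across the shared overlap with an S-shaped ramp that is pre-compensated for display gamma, so the combined light stays constant across the seam. Below is a minimal Python/NumPy sketch of the idea; the function name and parameters are illustrative, and it assumes a simple power-law gamma (real deployments also handle black-level lift, color matching, and geometric warp).

```python
import numpy as np

def blend_ramp(width_px: int, power: float = 2.0, gamma: float = 2.2) -> np.ndarray:
    """Attenuation ramp for one projector across an overlap region.

    Classic soft-edge blend: an S-curve in light, raised to 1/gamma so the
    projected light (not the pixel values) sums to a constant at the seam.
    """
    x = np.linspace(0.0, 1.0, width_px)          # 0 = overlap start, 1 = overlap end
    s_curve = np.where(
        x < 0.5,
        0.5 * (2.0 * x) ** power,                # ease-in half
        1.0 - 0.5 * (2.0 * (1.0 - x)) ** power,  # ease-out half
    )
    return s_curve ** (1.0 / gamma)              # pre-compensate display gamma

# Two projectors sharing a 256 px overlap: one ramps up while the other ramps down.
a_edge = blend_ramp(256)     # right edge of projector A
b_edge = a_edge[::-1]        # mirrored ramp for the left edge of projector B

# After the displays re-apply gamma, the light contributions sum to ~1.0 everywhere.
assert np.allclose(a_edge ** 2.2 + b_edge ** 2.2, 1.0)
```

Much of the three-weeks-to-48-hours speedup Galler describes comes from automating exactly this kind of calibration with cameras rather than dialing ramps in by eye.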
We’re seeing even small shows take up the mantle of using tools like VR to help pre-vis shows. A great example is the guys over at Carbon for Unreal, Tom Thompson and David Perkins. Their pre-vis work in Unreal with real-time lighting has enabled a whole number of broadcast shows to use a tool that was previously unavailable to them at the fidelity level they’ve managed to achieve. It’s about achieving much higher quality results much faster, with much more flexibility than previously available technologies.
Everyone cites The Mandalorian as the example that immediately lets people understand the broad concept of Virtual Production. What are some other examples you’d point to that marked an advance in the technology or a change in workflow?
There’s a whole bunch of others. I think one of the most recent ones that was a really good use of technology was Bullet Train. That film used scenic automation to move the LED walls around the room in a way that enabled them to have less LED and get more shots done in a day. They also used a whole bunch of 10-bit content that was layered, so there was a real-time compositing element to the project. I think that was a really cool application of tech.
I would also cite the League of Legends World Championship 2020; it’s certainly one of the achievements we always like to talk about because it was such a phenomenal experience. I think it was the first time anyone had used any of this work in a live setting. We were live on air for 10 days straight, 24 hours a day, so 240 hours. It was basically multi-cam, all in XR, in the middle of the pandemic in October 2020. It was something that no one had ever even tried before. As we were saying about how quickly the technology advances, just going from something like The Mandalorian to League of Legends, being able to do something of that quality, in-camera and live, for TV directors in different countries and across significant language barriers, was an achievement that shows off the innovation possible, especially in a live setting. That to me is probably one of the biggest innovations we’ve seen in the last couple of years.
Also, the move to HDR [High Dynamic Range] was huge for us. All our shows are now done in HDR. We push HDR and then manage the conversion to SDR if we have to. But we are largely working in HDR for pretty much every single show that we do. I think the fidelity increase has been great for our clients, and they really appreciate that extra range, especially when they go into post, and in live TV. More and more TV is being made in some format that involves some type of HDR conversion. Those are some of the pretty cool things we’ve gotten to do over the last few years.
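For readers curious what that managed conversion actually involves: broadcast HDR is typically PQ-encoded (SMPTE ST 2084), so going to SDR means decoding to absolute light, rolling off the highlights, and re-encoding for an SDR display. The Python sketch below is a deliberately simplified, luminance-only illustration rather than a broadcast-grade pipeline; real conversions use standardized LUTs or BT.2408-style mappings and manage color gamut as well, and the simple Reinhard roll-off here is just one plausible choice.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal: np.ndarray) -> np.ndarray:
    """Decode a PQ-encoded signal (0..1) to absolute luminance in cd/m^2."""
    e = np.power(signal, 1.0 / M2)
    return 10000.0 * np.power(np.maximum(e - C1, 0.0) / (C2 - C3 * e), 1.0 / M1)

def tone_map(nits: np.ndarray, sdr_white: float = 100.0) -> np.ndarray:
    """Simple Reinhard roll-off into a 0..1 SDR light range."""
    x = nits / sdr_white
    return x / (1.0 + x)

def encode_sdr(linear: np.ndarray, gamma: float = 2.4) -> np.ndarray:
    """Re-encode linear SDR light with a pure power-law transfer function."""
    return np.power(np.clip(linear, 0.0, 1.0), 1.0 / gamma)

# A PQ code value of ~0.75 is roughly a 1000-nit highlight; after the
# roll-off it lands comfortably inside the SDR signal range instead of clipping.
sdr = encode_sdr(tone_map(pq_to_nits(np.array([0.75]))))
```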
As people cross over into Virtual Production, what are some foundational things they should focus on learning first?
I think there are two ways to look at it: whether you want to be an operator or a technician, or an artist. If you want to be an operator or a technician, having some understanding of computer science is important. I’ll stop short of saying everyone should go get a STEM degree, but everyone should go get a STEM degree. Some level of knowledge of computer science is really, really important in terms of understanding how to troubleshoot. Then on the artist’s side, understanding how light and photography work is super important.
No matter what, whether you’re trying to get into virtual production or physical production: how do you learn to troubleshoot? How do you think outside the box, and how do you work well on a team? Those are the really important skills. It’s the soft-skills aspect of it. I believe the hard skills, the technical skills, can be picked up. The soft skills are the part people should be trying to learn early on; navigating them early in a career can be really treacherous, but they’re the things I always tell people to work on building. Really figure out how to work as a team, really figure out how to troubleshoot something, and don’t be afraid to have an idea that’s outside the box.
Of course, everyone should learn some type of real-time rendering engine, just so they understand the fundamentals. But really, I think of virtual production as production. In my mind there isn’t a huge divide; in the digital world we still use cameras, we still use lighting, and in many cases we still use the same techniques we would use in the real world. So, understanding those and why they’re important from a production point of view will help you translate those tools into their digital equivalents.
What are some technology trends you’re seeing in the Virtual Production segment?
Lots of AI, of course. I think there’s some bluster around image generation and the like with Artificial Intelligence (AI), but that’s not really where we’re going to see it be the most prevalent in the short term. In the short term, machine learning will help us with things like increasing the quality of computer vision models, motion capture, and facial capture. We see that with Move.ai; they really help democratize motion capture, to some extent; it’s expensive, but they are helping to democratize it. We see a lot of advancements there. Then eventually machine learning will, of course, be used for image generation for backgrounds, frame segmentation for 3D environments, real-time rotoscoping, etc. I believe that’s a huge benefit to our industry. It’s also, of course, a risk, but I think it’s one of the biggest advancements, and others branch off of it.
I think unified asset platforms centered around things like USD and MaterialX, so that everyone can talk the same language when they’re sharing digital files, are super important. I see a lot of people starting to leverage graphics cards in ways that they hadn’t before. A person at home with $4,000 can start creating incredible environments. The real advancement is the low barrier to entry. We now have significantly easier access to really, really powerful tools, including free tools like Blender. I remember when Blender was the laughingstock of the industry, and I’m sitting here today pondering how to move our entire business over to Blender, as are many VFX houses. It’s incredible. For a free tool, it’s incredibly powerful.
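To make the “same language” point concrete, here is a minimal sketch using Pixar’s official pxr Python bindings (the usd-core package); the file and prim names are hypothetical. One department authors an asset, and a per-shot layer overrides it without ever modifying the original file, which is the core of the USD workflow Galler is pointing at.

```python
from pxr import Usd, UsdGeom, Gf  # pip install usd-core

# One department authors a shared asset (file names are hypothetical).
asset = Usd.Stage.CreateNew("set_piece.usda")
UsdGeom.Xform.Define(asset, "/SetPiece")
UsdGeom.Cube.Define(asset, "/SetPiece/Hero")
asset.GetRootLayer().Save()

# Any other USD-aware tool (Unreal, Blender, Houdini, ...) can layer a
# per-shot override on top without touching the original file.
shot = Usd.Stage.CreateNew("shot010_overrides.usda")
shot.GetRootLayer().subLayerPaths.append("set_piece.usda")
over = shot.OverridePrim("/SetPiece")
UsdGeom.Xformable(over).AddTranslateOp().Set(Gf.Vec3d(0, 0, 120))  # nudge upstage
shot.GetRootLayer().Save()

# The composed stage shows the shared asset plus the shot-level edit.
for prim in shot.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```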
Those are some of the advancements I see. But there are also ones more ubiquitous across the media entertainment space: laser projectors, higher quality HDR, RGBW LED panels, better lighting fixtures, denser pixel spacing on display devices, etc. SMPTE ST 2110 is a big one we’re focused on right now; we’re trying to bring that standard across our entire business because it’s become such an efficient transport mechanism for so many things that we work on. Also, our team is headed in a really exciting direction, focused more on asset management software and virtual production tools than on hardware these days. I think that also tells a pretty interesting story.
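A note on what makes ST 2110 tick: it carries each essence (video, audio, ancillary data) as its own RTP stream over managed IP, which is where the routing flexibility comes from. As a small illustration, the sketch below parses the fixed 12-byte RTP header (RFC 3550) that every ST 2110 packet begins with; a real receiver also deals with the payload headers, packet reordering, and PTP-locked timing, none of which is shown here.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550) at the front of
    every SMPTE ST 2110 essence packet (video, audio, or ancillary)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,           # always 2 for RTP
        "marker": bool(b1 & 0x80),    # ST 2110-20: marks the last packet of a frame
        "payload_type": b1 & 0x7F,
        "sequence": seq,              # per-stream loss/reorder detection
        "timestamp": ts,              # 90 kHz media clock for 2110-20 video
        "ssrc": ssrc,                 # identifies the stream
    }

# In a receiver loop this might look like: parse_rtp_header(sock.recv(2048))
```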
What are some misconceptions about Virtual Production, or areas where people don’t understand what it is?
I always like to talk about this idea. I get calls from producers, and there’s this idea that it will save them money. The reality is, I don’t know that it’s going to save you money. If you’re doing car process work, it can maybe save you money. If you’re doing something like The Mandalorian, it’s highly unlikely to save you money, but it might provide you with a better-looking result in less time at an equivalent cost to what you would spend on the effects. Ultimately, depending on how you look at your variables, travel time, costs, lodging, and whether you can build an environment in CG, you may save money; but fundamentally it should be about doing something that you can’t do any other way. There’s this hype cycle, created by a variety of things, certainly around the pandemic: ‘oh, we’ve got to do The Mandalorian.’
I think so many people forget that these processes have been around for 70 years, and the techniques we used 70 years ago, projection, and now projection and LED, are still completely valid when we’re just doing plate work. People forget that there’s this massive spectrum to virtual production. Even something as simple as a basic motion capture setup could save money in post-production and enhance the feeling of your movie. Putting a single LED wall outside the window of a house and just putting a cornfield image on it, instead of building an entire cornfield, could potentially save you money, be easier, be more flexible for your show, and allow you to shoot more digital places over the course of a day.
I think that’s something people generally forget. What they’re really looking for is that ‘easy button,’ the silver bullet. It just doesn’t exist; or at least it doesn’t exist right now. I’d like to get there, obviously, like Star Trek’s Holodeck tech, but we’re not quite there yet. That’s the thing I always want to remind people: it’s not one size fits all. There’s a massive spectrum of this work we should be looking at, because virtual production isn’t just camera tracking and 3D real-time environments. If that’s not what your show needs, it doesn’t mean virtual production doesn’t work for your show; in fact, a simpler flavor of it may work way better and be the thing you’re looking for. Basically, don’t succumb to the hype; know why you are choosing to use it.
Any final thoughts on where things are at or where things are headed?
We’re seeing a ton of this being used in rock and roll, which is awesome. Look, people are still using Notch, people are using Unreal Engine. Unreal as a world-building tool and Notch as a motion graphics tool, even in 3D, are very different tools, but they give people so much. Man, everyone just wants to noodle. They want to walk in and go, ‘I want to move that cloud to the left. I want to make that ray of light blue. I want crackling lightning there.’ Very skilled artists who are very familiar with their tools are granted the ability to execute those requests in very little time, which I think is amazing. That, to me, summarizes where we are as a media entertainment industry. We have all these tools at our fingertips, and we need a bunch of really skilled fingertips to do the work we’re being asked to do. But when it’s done with the right planning, the right budgets, and the right understanding of scope, we can achieve almost anything. And again, that to me is amazing. The advances, and where we are with the tech, affect all of us across the entire industry. I mean, how many corporate clients right now are using Unreal for data visualization in their corporate shows? We’re using Unity to create an interactive mobile app that ties into a second-screen experience for clients. Almost all of them are doing some version of this, and I think that really speaks to the growth of the industry and the direction of needed skills over the last three years.
I think as a virtual production industry, it’s important to remember that we’re in this weird place where we have a long history, but we’re also almost at this singularity moment where technology is advancing at such a pace that, while we did a bunch of cool stuff 10 years ago, tomorrow it’s going to be completely different; and the day after tomorrow, completely different again. Things we couldn’t do last week, we’re going to be able to do next week. I’m really excited for that, and for the innovation that comes out of an industry that hopefully has really been democratized across the globe. I think that’s a really cool thing to happen, and certainly to witness as someone who was at the nexus of creating a bunch of it early on, before it became what it is now. I’m excited to see where everyone heads with it.