frame:work LA Conference for the Creative Video Community this August
frame:work, the community organization for creative video professionals, announces its next conference, frame:work:losangeles 2023, which will take place this August. frame:work will partner with XR Studios to present a two-day conference at XR Studios’ newly launched campus in Hollywood on August 4 and 5, 2023. frame:work:losangeles is a conference for conversations about the unique challenges of creating video for physical spaces, live performance, and virtual worlds. Attendees range from expert practitioners and production leadership to new professionals and students, along with colleagues from partner creative disciplines. The conference is a place to share valuable insights, from distinctive points of view, into the practice of the work these fields hold in common. Ticket prices range from $39 to $99. For more information go to https://framework.video/la23/.
What is frame:work?
frame:work is a community organization for creative video professionals working in screen-dominated live events, installations, and virtual productions. They bring together all of the professions involved, from the design to the delivery of pixels to screen, for viewing by an audience or camera. Their tagline: We are Live Pixel People.
frame:work shares insights from leaders across the fields of creative video production, including video content creation, video & media server engineering, screens producing, real-time generative content, media server programming, virtual production, and mixed reality. They organize meetups and conferences and maintain an online community discussion on Discord.
At frame:work, they believe the way to improve the standard of work is through good communication and engagement with each other, no matter the discipline. Theater, film, television broadcast, rock tours, conferences, installations, online events: the more practitioners learn from each other about process, the more everyone can improve their relationships with the larger production community.
frame:work is a volunteer effort, founded on principles of community support.
For information about frame:work go to: https://framework.video/
disguise and ZeroSpace Announce New Spout Integration
disguise has announced integration with ZeroSpace’s SpoutBridge, a tool that bridges Spout technology with disguise RenderStream to facilitate production for xR and live events. The engineering team at ZeroSpace, a next-generation media production facility and research lab in downtown Brooklyn, NY, worked with disguise to open access to Spout technology within the disguise user community. Spout is integrated into most commercial VJ software and is available as a free plug-in for OBS Studio, the free, open-source, cross-platform screencasting and streaming app. Spout lets applications share the same GPU video texture with content render engines for uncompressed, zero-latency frame streaming.
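For readers unfamiliar with the mechanics, the sketch below shows roughly what publishing frames over Spout looks like from the sending application’s side. It is a minimal illustration assuming the open-source Spout 2.007 C++ SDK; the sender name and the render-loop hook are hypothetical, and the disguise/RenderStream side is not shown.

```cpp
// Minimal Spout sender sketch, assuming the open-source Spout 2.007 C++ SDK.
// "LiveVisuals" and onFrameRendered() are hypothetical names for illustration.
#include "SpoutSender.h"   // from the Spout2 SDK (https://github.com/leadedge/Spout2)

SpoutSender sender;

void initSender() {
    // Register a named sender that receivers (VJ apps, OBS, SpoutBridge) can discover.
    sender.SetSenderName("LiveVisuals");
}

void onFrameRendered(GLuint textureId, unsigned int width, unsigned int height) {
    // Publish the OpenGL texture via shared GPU memory. Receivers read the
    // same texture directly, so frames move with no CPU copy or compression.
    sender.SendTexture(textureId, GL_TEXTURE_2D, width, height);
}
```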
“Spout is a tool used by visualists and creative technologists that is integrated into every major live visual rendering software. It’s a simple-to-use interface that’s become the default standard in the live visuals environment for over a decade,” says Evan Clark, Head Research Engineer at ZeroSpace.

“SpoutBridge integrates Spout with RenderStream, allowing the distribution of video content over IP networks between disguise machines. It also offers 12-bit RGBA color, ultra-low latency and ease of setup while removing the need to distribute SDI inputs across all physical machines,” says disguise Lead Engineer Josh McNamee. To get started, users configure their Spout application on a disguise rx render node to target the RenderStream bridge.
Aimed at installations, festivals, and events with large LED displays that need an easy way to distribute live and interactive visuals across displays, this solution minimizes the need for physical cabling and enables resolutions and color depths far beyond what HDMI or SDI can carry. Today, visualists are constrained by the physical outputs of their machines; this solution gives them complete control over massive LED screens by creating an effectively limitless canvas. By taking advantage of cluster rendering with video over IP, it dramatically changes the output requirements for driving LEDs while minimizing extraneous wiring.
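To put the bandwidth claim in perspective, here is a back-of-envelope calculation; the canvas size and frame rate are illustrative assumptions, not disguise specifications:

```cpp
// Back-of-envelope bandwidth check for an uncompressed video-over-IP canvas.
// The canvas dimensions and frame rate below are illustrative assumptions.
#include <cstdio>

int main() {
    const double width  = 7680;        // hypothetical wide LED canvas, in pixels
    const double height = 2160;
    const double channels = 4;         // RGBA
    const double bitsPerChannel = 12;  // the 12-bit color quoted above
    const double fps = 60;

    const double gbps = width * height * channels * bitsPerChannel * fps / 1e9;
    std::printf("Uncompressed stream: %.1f Gb/s\n", gbps);  // ~47.8 Gb/s

    // For scale: a single 12G-SDI link carries ~12 Gb/s and HDMI 2.0 ~18 Gb/s,
    // so a canvas like this outgrows single baseband links and suits an IP fabric.
    return 0;
}
```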
The integration of SpoutBridge with disguise is the latest development in the disguise-ZeroSpace relationship. ZeroSpace began collaborating with disguise in early 2020 as part of disguise’s xR insider group. The company then became one of the first partners in disguise Metaverse Labs, a global network of experts who deliver hybrid experiences in the metaverse, drawing on the RenderStream infrastructure’s ability to connect real and virtual worlds.
ZeroSpace was also the first official Metaverse Labs partner site. It has collaborated closely with the disguise Labs team in New Zealand to create interconnected stage workflows and cross-continent communication, and to develop the future of truly hybrid events. The team is also working closely with disguise partner 4Wall Entertainment, a team known for its expertise in xR and ICVFX, to provide a truly unique space for real-time rendering, ICVFX, and xR workflows.
SpoutBridge’s integration with disguise doesn’t end with its current capabilities. “We are looking forward to what features will come next and are excited about the possibility of the motion capture features coming soon in RenderStream releases,” says Clark.
Further information from disguise: www.disguise.one
ROE Visual, Vizrt & GhostFrame Demonstrated at NAB 2023
ROE Visual’s Ruby LED panels, GhostFrame, and Vizrt’s Viz Engine 5 brought multi-layer virtual graphics and video wall control to the NAB 2023 show floor, demonstrating real-time virtual production solutions for the broadcast market. GhostFrame can receive up to four Viz Engine signals at one time, which enables a range of creative possibilities in virtual production for broadcast. The live XR demonstration used ROE Visual’s Ruby LED panel, with a 1.9 mm pixel pitch, wide color gamut, and 16-bit depth, to create brilliant visuals.
GhostFrame is a game-changing technology for virtual production developed by AGS AG, Megapixel VR, and ROE Visual. With GhostFrame, end users combine hidden chromakey, hidden tracking, and multiple source video feeds into a single production frame. The end user chooses which elements are visible to the human eye and which are visible only to the camera.
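In essence, GhostFrame time-multiplexes several sources into sub-slots of a single production frame, refreshed faster than the eye can resolve; each genlocked camera shutters on its assigned slot, while the audience perceives only the slot designated as visible. The sketch below illustrates that scheduling idea conceptually in C++; it is not the actual HELIOS® implementation, which runs inside the LED processing hardware, and all names and slot assignments are hypothetical.

```cpp
// Conceptual sketch of GhostFrame-style time multiplexing. This is NOT the
// actual HELIOS® implementation (which runs in the LED processing hardware);
// the types and slot assignments here are hypothetical, for illustration only.
#include <array>
#include <cstdint>

enum class Feed : std::uint8_t {
    MainBackground,   // the feed the live audience should see
    AltBackground,    // e.g. a regional sponsor background
    ChromaKey,        // hidden green screen for keying
    TrackingPattern   // markers visible only to the tracking system
};

// A production frame is divided into sub-slots displayed at a high refresh
// rate. Each genlocked camera shutters only during its assigned slot, so each
// camera "sees" a different feed, while only the slot flagged visibleToEye
// is perceived by people in the studio.
struct SubSlot {
    Feed feed;
    bool visibleToEye;
};

constexpr std::array<SubSlot, 4> productionFrame{{
    {Feed::MainBackground,  true},   // audience + camera 1
    {Feed::AltBackground,   false},  // camera 2 only
    {Feed::ChromaKey,       false},  // camera 3 only
    {Feed::TrackingPattern, false},  // tracking system only
}};
```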
When combined with Vizrt’s Virtual Window technology, broadcasters can preview other camera perspectives simultaneously, which was previously impossible. The demo visuals were captured by two RED Digital Komodo cameras: one mounted on a stYpe Human Crane with stYpe RedSpy tracking and Follower for interacting with AR graphics, the other mounted on a Blackcam rail dolly with AI-automated shots. The virtual scenes were rendered natively in Viz Engine 5.1 and in Unreal Engine 5.1. In this setup, both render blades ran with ultra-low latency, allowing for fast camera movements. The idea behind the demo was that a single studio shoot could send out four different shots, each with a separate background: different sponsors in different markets, different content for specific audiences, and so on. The demo illustrated well the ability to use eXtended Reality in a live multi-camera production without limitations.
Combining cutting-edge LED and camera technology, GhostFrame enables creative and innovative use of video and broadcast technology. United with Viz Engine 5’s advanced rendering capabilities, ROE Visual’s Ruby RB1.9Bv2 high-frequency video wall, driven by Megapixel VR’s HELIOS® LED processing, opens up new possibilities in virtual live production. GhostFrame works exclusively with ROE Visual LED panels and HELIOS® LED processing.
“GhostFrame’s technology can drive different features depending on the needs of a virtual set, and we think that Vizrt and ROE Visual are fantastic partners to show this off,” comments Jeremy Hochman, co-founder and CEO of Megapixel VR. “One camera could capture international graphics, and another one, two, or three cameras from additional perspectives can have personalized graphics, a hidden green screen, and virtual background locations based on the region. These features don’t consume bandwidth because they are generated in the HELIOS® processing card on the LED panel without requiring complex upstream content feeds.”
Related links: www.ghostframe.com, www.roevisual.com, www.vizrt.com, www.megapixelvr.com.