
Moving Video, Moving Data


Just about a year ago, I wrote a primer on LED digitizers, which included a lengthy discussion of the processing required between the video source and the LED wall itself.  True, you simply can’t connect a video signal directly to an LED wall and expect to see a picture.  In order to create an image of the desired size, shape and aspect ratio, the wall (regardless of the manufacturer) requires proprietary video processing and a proprietary input signal, rather than a standard DVI, HDMI, or HD-SDI format.  To put it another way, the wall is looking for data — not video.

In the months since, alongside shifts in the way we categorize devices, changes have been taking place that affect the capabilities of just about every video device we currently use.  Plans are also underway to move the industry (slowly and carefully) into the realm of data.

Transcoders and Displays

First of all (and I stand corrected), LED processors are primarily transcoders, not digitizers.  Better yet, we should call them “display processors,” and by doing so, we’re combining multiple categories of devices under the heading of displays — of which there are many types.  HD and Ultra HD monitors, plasmas, OLEDs, LEDs, LPDs, cubes of all sizes, and even good ol’ CRTs are all displays, and there are undoubtedly a host of acronyms to come as the industry evolves.  We’ve also dealt with the term “tiles” for years when working with LEDs, but the industry is moving towards a more generic and far-reaching term — modular displays.  Whether they’re projection cubes, LED tiles, laser phosphor displays or LCD monitors, they’re all modular, and they’re the building blocks required to meet the client’s request.  That’s enough about semantics for now.

Lossless and Lossy

By definition, a digitizer converts analog signals to their digital representations.  That function still plays a big part in the display processor’s tool kit, because legacy analog devices refuse to disappear from our clients’ portfolios.

However, digitizing is not the key function anymore.  Since the vast majority of the video signals we work with are already in digital format, the term “transcoder” fits better.  If you take in analog and output digital, that’s digitizing.  If you input digital and output digital, that’s transcoding — a function performed by any device that converts from one digital representation to another, with or without some loss of fidelity.

For example, transcoding DVI to HDMI is a “lossless” digital-to-digital process, in which the processor reformats the data from one standard to another.  The resulting transcoded data stream is a perfect “high fidelity” reconstruction of the original. We perform this function all the time when we’re connecting gear backstage, using various cables and your standard big box o’ adapters.
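DVI-to-HDMI conversion happens in hardware, of course, but the lossless property itself is easy to demonstrate in software.  Here’s a minimal Python sketch (using the standard zlib library as a stand-in for any lossless scheme) showing that a lossless round trip reconstructs the original bit-for-bit:

```python
import zlib

# A stand-in "frame" of raw pixel data (here, just a run of bytes).
frame = bytes(range(256)) * 4

# Lossless round trip: compress, then decompress.
packed = zlib.compress(frame)
restored = zlib.decompress(packed)

# The reconstruction is bit-for-bit identical to the original.
assert restored == frame
```

That assertion always holds — which is exactly what “lossless” means.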

On the other hand, transcoding MPEG2 to MPEG4 is a “lossy” scheme in which data is discarded in order to minimize file size and increase the storage capacity on devices and applications such as computers, DVRs, YouTube and your latest Facebook video post.  If you’re a Photoshop wizard, each time you perform a “save as” and adjust the quality of a JPEG downwards, you’re performing a lossy transcoding operation.  You can easily decrease the quality, but you can’t ever put the detail back in once you’ve increased the compression.
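The one-way nature of lossy compression comes down to quantization: the low-order detail is simply thrown away.  A toy Python sketch (not any real codec, just the principle) makes the point — quantize 8-bit sample values down to 4 bits, and there’s no operation that can recover the originals:

```python
# Illustrative lossy "transcode": quantize 8-bit samples to 4 bits.
samples = [12, 57, 130, 200, 255]

def quantize(value, bits=4):
    step = 256 // (1 << bits)        # 16 levels -> a step size of 16
    return (value // step) * step    # discard the low-order detail

lossy = [quantize(v) for v in samples]
print(lossy)  # [0, 48, 128, 192, 240]
# The discarded low bits are gone for good; no "save as" restores them.
```

Real codecs like MPEG and JPEG quantize frequency coefficients rather than raw samples, but the irreversibility is the same.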

Standard and Non-standard

All LED manufacturers use display processors of one form or another, but the processor-to-module (and module-to-module) link is different in each case.  Manufacturer A’s processor can’t connect to manufacturer B’s modules.  The primary reason is that these links have traditionally been optimized for each manufacturer’s key markets and tend to be proprietary.  When we’re connecting video for an I-Mag display at a concert or festival, we typically expect standard formats and aspect ratios (e.g., 1920×1080 and 16:9).  However, optimizing for that display makes no sense when dealing with a “perimeter” LED display at a soccer stadium that’s 500 meters long and two meters high, or in a creative stage design, dealing with arrays of LEDs that meet the set designer’s or lighting designer’s requirements — not the video engineer’s requirements.

The secondary reason is that the “data” requirements are different.  To make the display behave as a single, monolithic unit, you need more than video in the connection.  In addition to video, you also need control, synchronization, bi-directional diagnostics and more.  In other words, you need a data stream.
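What does a multiplexed data stream like that look like?  Every manufacturer’s link is proprietary, so the sketch below is purely hypothetical — but the general pattern is a packet header that tags each payload with its type (video, control, sync, diagnostics) and length, so one physical link can carry them all:

```python
import struct

# Hypothetical payload types on a display-processor data link.
VIDEO, CONTROL, SYNC, DIAG = 0, 1, 2, 3

def make_packet(ptype: int, payload: bytes) -> bytes:
    # 1-byte type + 2-byte big-endian length header, then the payload.
    return struct.pack("!BH", ptype, len(payload)) + payload

def parse_packet(packet: bytes):
    # Recover the type tag and the payload it frames.
    ptype, length = struct.unpack("!BH", packet[:3])
    return ptype, packet[3:3 + length]

pkt = make_packet(CONTROL, b"BRIGHTNESS=80")
ptype, payload = parse_packet(pkt)
assert (ptype, payload) == (CONTROL, b"BRIGHTNESS=80")
```

The header field names and sizes here are invented for illustration; the point is simply that once everything is framed as tagged data, video becomes just one payload type among several.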

Video as Data

Which brings me to an interesting observation.  At the dawn of the computer age, when the video industry was only making analog devices, trade journals touted the “convergence” of computers and video, and industry trade shows jumped on “convergence” as their theme-of-the-year.  At that point, there wasn’t an Ethernet connector in sight.

Fast forward, blink twice, and now everything is networked.  From the home to the workplace to the concert arena, the industry has left convergence in the dust and adopted a highly networked architecture that’s shifting rapidly towards IP (Internet Protocol).

Today, we’re sending video, audio, control and diagnostics through networked switches, and we’re extending video’s reach through fiber optics — all in real time.  We’re hiring IT professionals on the crew, people who actually understand advanced networking and can facilitate moving those bits around in a timely fashion.  We’re transcoding standard formats to IP using endpoints, shipping the data around, and transcoding back to video, using additional sets of endpoints.   Between endpoints, all of those signals are encapsulated in little packets of data — and whether it’s video, audio or control, each packet has a specific payload.

Video gear (from cameras to switchers to displays) is now being manufactured with IP inputs and outputs, side by side with traditional video connections.  True, the networked video architecture is certainly more complex, but it is also vastly more capable.

Big Issues

When dealing with video over IP, big issues are at hand, and they are by no means simple.  Manufacturers today are treating video as data in our day-to-day journeys (smartphones and tablets, to name two examples) — and in our industry, that data train is coming fast.  To the average consumer, that data layer is hidden behind intelligent and intuitive user interfaces, but as video technicians in the industry, we need a deeper understanding.

The transition to IP takes into account audio, video, bandwidth, compression, codecs, and computers with high-speed multi-core GPUs (Graphics Processing Units).  As technicians and artists in our creative realm, we have to stay current — and it’s not getting easier.  The industry needs savvy video technicians who not only understand the new networking tools, but also know how to apply them backstage, in the studio and in the home.

On a personal note, I am very comfy within the realm of pixels, but I have miles to go to fully grasp the realm of data — and to that end, I’ll offer a few proactive homework assignments.  When you see a seminar or a webcast on the subject of “Video over IP,” sign up.  When you see an article or a white paper about “Understanding Video over IP,” read it.  When you’re at a trade show and there’s a breakout session on “Video over IP,” take it.

In the coming years, a little networking knowledge will be power.