Do You See What I see? Getting the Picture and Getting It Right

The stage is set, the lights go up, the speaker walks out on stage, and the video screen is black. The producer begins yelling at the technical director, and the TD starts yelling at everyone. It’s a demonstration of what flows, and in which direction. It’s also a reminder that, as a camera operator or engineer, you are at the bottom of the hill down which it all flows. Assuming we remembered to remove the lens cover, what other kinds of problems could we run into? By understanding the basic signal flow through the camera, we can protect ourselves from doing something foolish and, more importantly, recover quickly when we already have.

All cameras have lenses, and they all do the same thing: gather light and focus it on the focal plane. The first adjustment we should make to the camera is the back focus, ensuring that the subject stays in focus as we zoom in and out. Make certain that when you make this adjustment you use the back focus ring and not the mounting ring. I know a young man who once dropped a lens off a camera in the middle of a shoot because he “adjusted” the wrong ring. Very embarrassing! (And no, it wasn’t me!)

The focal plane can either be a single CCD chip or a prism that splits the image into its red, green and blue components. Without getting heavily into physics, suffice it to say that the prism in a camera works the same way it does in a projector: light is split into its three component colors, each sent to its respective chip. This is part of the reason a three-chip camera looks better than a single-chip one. A three-chip camera has better resolution because there are physically more pixels looking at the image, and the individual pixels are located right next to each other. Every pixel in the final image will have a red, green, and blue component.

In a single-chip camera it takes four photosites on the chip to make one pixel in the final image, and a color filter mosaic, called the Bayer filter, separates the light into its red, green and blue components. Despite the name, the only thing this Bayer has in common with the aspirin maker is the headache of less-than-spectacular results. That is fine for a consumer camcorder making home movies, but if we expect to be paid well for good results, we should be able to produce them (good results, that is!).
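To make that four-to-one ratio concrete, here is a minimal sketch in Python. It assumes the common RGGB layout of the Bayer mosaic and uses deliberately naive averaging; real demosaicing algorithms interpolate far more cleverly, but the photosite count is the point here.

```python
# One 2x2 block of single-chip photosites yields one RGB output pixel.
# Assumed RGGB layout: [[R, G], [G, B]]. Illustration only.

def demosaic_block(block):
    """block: 2x2 list of raw sensor values laid out as [[R, G], [G, B]]."""
    r = block[0][0]
    g = (block[0][1] + block[1][0]) / 2.0   # average the two green sites
    b = block[1][1]
    return (r, g, b)

print(demosaic_block([[200, 90], [110, 40]]))  # -> (200, 100.0, 40)
```

Four photosites in, one RGB pixel out; a three-chip camera gets all three values at every pixel location instead.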


More about the chips: A “charge coupled device,” or CCD, is a chip that produces a voltage when light strikes its surface, similar to what happens in a solar cell. This voltage is stored in a capacitor until it is passed off to a buffer. A “hole accumulation diode,” or HAD, is a manufacturing technology developed by Sony to reduce noise in the video signal at its source (the chip), which also improves image quality by improving video black.

Once light comes into the camera, the CCDs are putting out signal. Now we have to gather that signal and turn it into something we can actually use. This can happen in the analog or digital domain, either in the camera itself or in a camera control unit, CCU for short. Here is where we begin tweaking the image. The goal is to provide the best possible signal to its final destination.

Professional-level cameras will typically have a switch somewhere to adjust the knee, most often labeled “autoknee,” and the same control will be found on the CCU. Think of it as a sort of automatic gain control for the brightest part of the video signal: above a set level, highlights are compressed instead of clipping. Visually, it removes the halo effect from around bright spots in an image. For example, if we have a close-up of someone standing near a lamp, the lamp will typically have a halo around it where it has washed out the image. Turn on autoknee and the effect goes away; the lamp looks natural.
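The knee is essentially a bend in the camera’s transfer curve. The sketch below is a simplified, hypothetical version; the knee point and slope values are illustrative, not taken from any particular camera.

```python
# Simplified knee transfer curve: signal below the knee point passes
# through unchanged; above it, highlights are compressed rather than
# clipped. knee_point and slope are illustrative values.

def apply_knee(level, knee_point=0.85, slope=0.25):
    """level: linear video level, 0.0-1.0+ (hot highlights exceed 1.0)."""
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

print(apply_knee(0.5))   # below the knee: unchanged -> 0.5
print(apply_knee(1.2))   # hot highlight: 0.85 + 0.35 * 0.25 = 0.9375
```

The compressed highlight stays below clipping, which is why the washed-out halo around the lamp disappears.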

On the CCU is an adjustment called “pedestal.” No, this is not a remote control for the camera stand, and it has even less to do with what the camera operator thinks they should be on. This is kind of like 0 dB for the audio guys. It sets the black (setup) level of the video signal; in NTSC, black sits 7.5 IRE above blanking (IRE units, used to measure video levels, are named for the Institute of Radio Engineers). This adjustment can be used to reduce noise and improve black levels in our pictures. Do not, however, confuse it with the black level adjustment. If we set the pedestal for the least noise, then set auto black, the end result will look much better.
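For reference, the NTSC scale puts blanking at 0 IRE, black (the setup/pedestal level) at 7.5 IRE, and peak white at 100 IRE, with the 140 IRE span from sync tip to peak white covering one volt. That arithmetic can be sketched directly:

```python
# NTSC IRE scale: 140 IRE (sync tip at -40 to peak white at 100)
# spans 1 volt, so 1 IRE is roughly 7.14 mV.

def ire_to_mv(ire):
    """Convert an IRE level to millivolts on the 1 V p-p NTSC signal."""
    return ire * (1000.0 / 140.0)

for name, ire in [("sync tip", -40), ("blanking", 0),
                  ("black/setup", 7.5), ("peak white", 100)]:
    print(f"{name:12s} {ire:6.1f} IRE = {ire_to_mv(ire):7.1f} mV")
```

Running it shows black sitting about 54 mV above blanking, the small offset the pedestal control moves.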

The CCU also has adjustments for color saturation, white levels, black levels, sync and phase. Color saturation behaves as a gain control for the individual colors. White level tells the camera how bright the signal is allowed to get; if the incoming signal goes above that, the image will flare and be ruined. Cameras typically require about 80% of the screen to be filled with white before the auto white feature will work. Auto black closes the iris in the lens to block out all incoming light and adjusts itself to set black to the appropriate level.
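The idea behind auto white can be sketched as follows. This is a hedged illustration, not any camera’s actual circuit: sample an area that should be white, then compute per-channel gains that equalize red, green and blue. The 80% coverage requirement is the camera’s way of making sure the sample really is white.

```python
# Gray-world-style white balance sketch: given channel averages from a
# region that should be white, return gains that equalize R, G and B,
# using green as the reference channel. Illustrative only.

def white_balance_gains(r_avg, g_avg, b_avg):
    """Return (r_gain, g_gain, b_gain) that map the sample to R = G = B."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)

# A sample that reads warm (too much red, too little blue): the gains
# pull red down and push blue up.
print(white_balance_gains(220.0, 200.0, 160.0))
```

Applying those gains to every pixel neutralizes the color cast the lighting put on the white sample.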

Now the signal is ready to leave home. If we connect it to a properly adjusted monitor and vectorscope, we can watch and verify that we have done our jobs. We have to pay constant attention to make subtle adjustments as needed. This will also help lower the director’s blood pressure and overall stress level on the job. We can be confident that when the producers and directors start yelling, it is not our fault.