This paper explores the challenges broadcasters face when assessing video quality. Many factors affect the video before it reaches the TV: compression, image processing, scaling, decoding, transmission, and so on. Video processing and compression algorithms change the characteristics of the original program in the quest to reduce the bandwidth needed to send the programming information to the home.
The art is to do this without allowing the audience to perceive a change in video quality. Successful video processing and compression algorithms perform the desired modifications while presenting a result to the viewer that, subjectively, looks natural and realistic. This sounds difficult, but it is necessary when transmitting many channels of high-quality programming. Each broadcaster - traditional or webcaster - must deal with rapidly changing varieties of programming, new video processing algorithms, and new compression algorithms. Video processing and compression companies continuously invent sophisticated ways to reduce the huge bandwidth requirements to manageable levels.
How can broadcasters know if a new algorithm is better than their current choice? Broadcasters invite the various video processing and compression companies into their R&D facilities and perform side-by-side tests, also known as a "bake-off". Each vendor starts with the same source material and does its best to reduce the bandwidth while keeping the video quality high. The broadcaster shows the results to a group of experts and asks them which one is best. This is termed subjective video analysis, and it measures the overall perceived video quality. The most commonly used video quality evaluation method is the Mean Opinion Score (MOS), recommended by the ITU. It consists of having several experts view known, distorted video sequences and rate their quality according to a predefined quality scale.
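The MOS for a sequence is simply the arithmetic mean of the individual ratings. A minimal sketch, using hypothetical ratings on the ITU five-point quality scale (5 = excellent, 1 = bad):

    # Minimal sketch: compute a Mean Opinion Score (MOS) from expert ratings
    # on the ITU five-point quality scale. The ratings are hypothetical
    # placeholders, not measured data.
    from statistics import mean, stdev

    ratings = [4, 5, 4, 3, 4, 4, 5, 3]   # one vote per expert viewer

    mos = mean(ratings)                  # the MOS is the average rating
    spread = stdev(ratings)              # spread hints at viewer disagreement

    print(f"MOS = {mos:.2f} (std dev {spread:.2f}, n = {len(ratings)})")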
Through this rating process, the expert viewers are trained to build a mapping between the quality scale and a set of processed video sequences. After the "training" is complete, the subjects are asked to rate the new video processing algorithms. Simply stated, the test setup is:
* Start with a known video sequence.
* The new video processing system alters the video sequence.
* Display the original and processed video sequences.
* Bring in experts to subjectively vote.
Complexity arises because:
* New video processing systems may need new equipment to play back the video sequences.
* The original and processed video sequences should be displayed in random order (a blinded shuffling sketch follows this list).
* Expert viewers are expensive and do not produce repeatable results.
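A minimal sketch of that blinded, random-order presentation, assuming the clips are simply files identified by name (the file names are placeholders):

    # Minimal sketch: build a blinded, randomized presentation order so viewers
    # cannot tell which clip is the original and which is processed.
    import random

    clips = {
        "A": "source_clip.yuv",       # original sequence (placeholder name)
        "B": "vendor1_output.yuv",    # processed sequence, vendor 1
        "C": "vendor2_output.yuv",    # processed sequence, vendor 2
    }

    order = list(clips)
    random.shuffle(order)             # new random display order per session

    print("Presentation order:", " -> ".join(order))
    # The operator keeps the label-to-file key; viewers only see A, B, C.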
Easier Solution

To streamline the process, equipment for video quality testing needs to be defined that can capture, play, and analyze any two video sequences. Further, as new input/output modules are continuously under development, the test equipment should use an open-architecture approach to ease upgradeability. Video Clarity defined the ClearView product line with these objectives in mind:
* Capture video sequences in as many formats as possible.
* Convert all video sequences to a user-selectable resolution.
* Translate all video sequences to uncompressed Y'CbCr 4:2:2 or RGB 4:4:4.
* Support 8- and 10-bit data paths with upgradeability to future 12-bit modes.
* Store the video sequences as frames (fields) so that they can be played at any rate.
* Display the video sequences in real time in multiple viewing modes.
* Apply objective metrics to the video sequences.
* Export pieces of video sequences for further analysis off-line.
* Use a standard operating system so that the operator can run 3rd-party analysis applications.
By working in the uncompressed domain, any two video processing algorithms can be compared independent of compression or other processing.
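To make that uncompressed working domain concrete, here is an illustrative conversion of one 8-bit R'G'B' pixel to limited-range Y'CbCr using the BT.709 coefficients. This is not Video Clarity's implementation; a real 4:2:2 pipeline would also subsample the chroma planes and carry 10-bit data.

    # Illustrative sketch: 8-bit full-range R'G'B' to limited-range Y'CbCr
    # using the BT.709 coefficients (Y' 16-235, Cb/Cr 16-240).
    KR, KG, KB = 0.2126, 0.7152, 0.0722        # BT.709 luma coefficients

    def rgb_to_ycbcr(r, g, b):
        rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
        y = KR * rn + KG * gn + KB * bn        # normalized luma in [0, 1]
        cb = 0.5 * (bn - y) / (1.0 - KB)       # normalized chroma in [-0.5, 0.5]
        cr = 0.5 * (rn - y) / (1.0 - KR)
        return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

    print(rgb_to_ycbcr(255, 255, 255))         # white -> (235, 128, 128)
    print(rgb_to_ycbcr(0, 0, 0))               # black -> (16, 128, 128)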
To further simplify the workflow, any video sequence can be played while another video sequence is being captured, thus combining the video server and capture device into one unit. By doing this, the original source is already inside the test equipment, so captured content alignment is easily obtained. The original and processed video sequences can be displayed - side-by-side, mirrored, or seamless split - on a single display. This eliminates the need to calibrate two separate displays.
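As a rough illustration of what content alignment involves (ClearView's own alignment method is not described here), the following finds the temporal offset between a reference and a captured sequence by minimizing the mean absolute luma difference, assuming both are already decoded into numpy arrays of equal resolution:

    # Rough illustration of temporal alignment: find the frame offset at which
    # the captured sequence best matches the reference, by minimizing the mean
    # absolute luma difference. Assumes arrays of shape (frames, height, width).
    import numpy as np

    def find_offset(reference, captured, max_offset=30):
        best_offset, best_err = 0, float("inf")
        for off in range(max_offset):
            n = min(len(reference), len(captured) - off)
            if n <= 0:
                break
            err = np.mean(np.abs(reference[:n].astype(np.int32) -
                                 captured[off:off + n].astype(np.int32)))
            if err < best_err:
                best_offset, best_err = off, err
        return best_offset

Once the offset is known, frame N of the capture can be compared against frame N of the original for every objective measurement.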
Further, the operator can play any stored video sequence, at any speed, for any duration, either manually or using automated play lists. ClearView applies various objective metrics to the video sequences, generates graphs, and calculates an objective score. While development of additional objective algorithms is ongoing, we have built a hybrid system that combines subjective testing with objective measurements. ClearView can easily be programmed to display video sequences for the expert viewers while recording the objective metric scores along with the MOS. While the MOS cannot be repeated, the objective metric can be, easily and readily. Since the system is based on an open, Windows-based architecture, any objective measurement algorithm can be modeled off-line using the stored video sequences.
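For example, a full-reference metric can be modeled off-line against the stored, aligned sequences. The sketch below uses PSNR on 8-bit luma frames purely as a stand-in for whichever objective metric the operator chooses, and pairs the repeatable objective score with the MOS from the subjective session:

    # Off-line full-reference metric sketch: per-frame PSNR on 8-bit luma,
    # averaged over the sequence and logged next to the subjective MOS.
    # PSNR stands in for whatever metric the operator models; it is not
    # presented here as ClearView's only metric.
    import numpy as np

    def psnr(original, processed, peak=255.0):
        mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    def sequence_score(orig_frames, proc_frames):
        scores = [psnr(o, p) for o, p in zip(orig_frames, proc_frames)]
        return sum(scores) / len(scores), scores  # average plus per-frame graph data

    # Hypothetical log entry pairing the repeatable objective score with the MOS:
    # avg, per_frame = sequence_score(orig_frames, proc_frames)
    # print(f"vendor1: PSNR = {avg:.2f} dB, MOS = 4.1")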
Benefits

* Repeatable tests, quantitative results, and a streamlined setup.
* Analyze 2 video sequences in real time up to 1080p.
* Input virtually any file type or capture from any digital or analog source.
* Multiple viewing modes are presented on a single display - no need to calibrate 2 separate television displays to compare video sequences.
* Integrated uncompressed, high-definition video server and capture device.
* Ability to play 2 fully uncompressed HD streams in real time.
* Hybrid solution with integrated objective metrics and subjective viewing modes.

Case Examples

A Broadcaster or Service Provider needs to:
* Play out "Source" video sequences at various resolutions and rates to their encoder and/or video processing equipment.
* Capture the output of the encoder and/or video processing equipment.
* Capture the output from the STB.
* Test the quality of the encoder, video processing, and set-top box equipment.
To do this, they need to:
- Bring in subjective experts to choose the appropriate vendor ("bake-off") or to choose the correct settings.
- Generate a score for repeatability (a sketch of such an automated scoring loop follows this list).
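Putting the case example together, a repeatable scoring loop can reuse the hypothetical helpers sketched earlier (find_offset, sequence_score). The loop below is illustrative only and is not a ClearView API:

    # End-to-end sketch of a repeatable scoring loop, reusing the hypothetical
    # helpers sketched above (find_offset, sequence_score); not a ClearView API.
    def score_vendor(source_frames, captured_frames, mos, name):
        off = find_offset(source_frames, captured_frames)     # temporal alignment
        n = min(len(source_frames), len(captured_frames) - off)
        avg, per_frame = sequence_score(source_frames[:n],
                                        captured_frames[off:off + n])
        print(f"{name}: objective = {avg:.2f} dB, subjective MOS = {mos}")
        return avg, per_frame

    # Hypothetical usage, one entry per vendor in the bake-off:
    # for name, (capture, mos) in vendor_results.items():
    #     score_vendor(source_frames, capture, mos, name)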
A Solution/System Integrator creates a broadcast solution using encoder, network, video processing, VOD, conditional access, and set-top box providers. They market and sell their solution to Broadcasters or Service Providers, so they perform all of the above tests.
When they cannot bring the Broadcaster or Service Provider into their viewing room to see the solution, they would like to take the test results to the Broadcaster. Thus, they have one additional requirement:
* A portable solution to show side-by-side comparisons to the Broadcaster or Service Provider.

Summary

ClearView takes advantage of the high reliability of today's off-the-shelf computer platforms. This ensures that products are made with state-of-the-art hardware, while at the same time avoiding the high cost of custom designs. ClearView provides broadcasters, video researchers, and compression developers with the unique ability to capture, play out, and analyze video sequences. Objective measurements are generated and logged for repeatable tests.
Bill Reckwerdt, Vice President, Marketing & Business Development

Bill's extensive career spans 20 years in the digital video industry. He brings to Video Clarity expertise in compression, digital transmission, and video servers. To contact Bill directly, send an email to bill@videoclarity.com or visit http://www.videoclarity.com/BroadcastSolutions.html#top