Reduction of visible defects in digital video systems

When a continuous stream of digital video is subjected to several compression/decompression processes along a distribution chain, it is perfectly possible that the encoder at each step does its job differently, and that is where the problems begin.

One of the main limitations of video signals converted into MPEG-compressed digital streams is that they cannot be edited directly. Although the bitstream we transmit carries a reduced digital representation of a video signal made up of separate frames, in practice that signal must be decompressed before it can be processed.

Paradoxically, this means that using MPEG streams to transport video signals creates significant potential for image deterioration, because several compression/decompression steps are applied along the distribution chain. Cascading several codecs produces unpredictable damage to the image, since many variables are involved in MPEG compression.

What makes the matter serious is that, in practice, it is impossible to control from the source what the decoded result will be when an MPEG stream reaches its destination after four or five compression/decompression processes that may be partially incompatible with one another. Let's briefly review how these situations are handled in the real world.

The ingredients...


Let us consider, from an engineering perspective, a typical production scenario that involves the use of MPEG streams: the transmission of a football match that must be delivered to a satellite television system.

The first part of the process is very complex but in practice does not present major problems. Installing the mobile unit, testing and delivering audio and video signals is something we know how to do. Now, these signals must go through some kind of link to reach the transmission centers, and these days it is very likely that we will use a digital link based on MPEG compression.

Once our signal reaches the channel, we decode it to convert it into a 601 (ITU-R BT.601) signal and put it on our router. That signal goes to a control room where the channel logo and some graphics are inserted. This "branded" feed then undergoes another compression process in order to be transmitted via satellite to our international client. We often forget that the signal we deliver must go through several "hops" before reaching its final destination. In this case, let's assume that we communicate directly with a large teleport that receives our feed, decompresses it and immediately recompresses it to send it over optical fiber to the transmission center of the DTH system. There it is decompressed again so that logos and graphics can be inserted, and then compressed once more to send it to the contribution center, where it is decompressed yet again so it can be compressed one final time and "packaged" together with the other 64 channels that make up the premium package our DTH provider proudly offers its customers.
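
To make the accumulation explicit, the following sketch (illustrative only; the step names simply restate the example above) counts how many MPEG compression generations the signal goes through before it reaches the viewer.

```python
# Hypothetical summary of the distribution chain described above.
CHAIN = [
    ("mobile unit -> transmission center", "encode"),   # digital contribution link
    ("channel ingest",                     "decode"),   # back to 601 for the router
    ("control room branding output",       "encode"),   # re-encode for the satellite hop
    ("teleport ingest",                    "decode"),
    ("teleport -> DTH center (fiber)",     "encode"),
    ("DTH center branding",                "decode"),
    ("DTH center output",                  "encode"),
    ("contribution center ingest",         "decode"),
    ("final 64-channel premium package",   "encode"),
]

generations = sum(1 for _, operation in CHAIN if operation == "encode")
print(f"Compression generations before the viewer: {generations}")   # prints 5
```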

The result...

After all of the above, the viewer receives a signal that, although it has the clean and stable look of digital processing, shows many visible defects caused by the concatenation of errors from each compression step. In the excitement of the match, the average viewer may not notice the small jumps in the image of the ball, the halo that appears around running players, or the strange look of the crowd in wide shots, but the flaws are there.

The use of successive compression/decompression steps in distribution chains introduces irreversible damage to the image, and this stems mainly from the large differences between the MPEG encoding processes used by each equipment manufacturer.

Possible solution...


One of the most important manufacturers of television equipment has brought to market a line of products that can minimize the errors caused by successive compression steps. The core of the Snell & Wilcox proposal is MOLE technology ("The Mole"), essentially a system that allows the original MPEG encoding to be reproduced at each step of the distribution chain.

MOLE-based systems take advantage of the ability of MPEG streams to carry additional information, using it to describe how the first encoding step was performed on the baseband signal. In our example, if a MOLE-enabled encoder were used at that first stage, the MPEG stream of the first link would include exact information about the size and position of the macroblocks, the GOP sequences used, and the data reduction procedure applied to each part of the image.

When the signal arrives at our control room, the MOLE information is embedded in the 601 signal that feeds the switcher inserting logos and titles, and it crosses the entire chain without any problem until it reaches the next encoder.
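
As a rough illustration, the kind of coding-decision record a MOLE-style system carries alongside the baseband picture might look like the sketch below. The field names are assumptions made for this example; the actual MOLE format is proprietary to Snell & Wilcox.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MacroblockDecision:
    position: Tuple[int, int]   # (x, y) of the macroblock's top-left corner
    quantizer_scale: int        # how aggressively this block was quantized
    coding_mode: str            # e.g. "intra", "forward", "bidirectional"

@dataclass
class FrameCodingInfo:
    frame_type: str                         # "I", "P" or "B"
    gop_position: int                       # index of the frame inside its GOP
    macroblocks: List[MacroblockDecision]   # one record per macroblock

# A downstream re-encoder that reads this record from the baseband signal can
# repeat the original coding decisions instead of inventing new ones.
```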

A MOLE-enabled encoder/re-encoder can retrieve the data embedded in a 601 signal and clone the original compression pixel by pixel. All macroblocks that were not affected on their way through the switcher reproduce exactly the original compression procedure, so no additional information is discarded in the new compression step. In practice, this is equivalent to a transparent transport system that greatly reduces the effect of error concatenation by preserving the image quality achieved in the first compression step that introduced the MOLE information.

What about the macroblocks that have been modified as they pass through the switcher? These image segments are compressed conventionally, and new MOLE information is generated to prevent damage in subsequent steps. In the case of a logo that appears in a corner of the screen, only the blocks under and around the logo are recompressed, which minimizes the overall deterioration of the image.
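
A minimal sketch of that decision, building on the hypothetical FrameCodingInfo record above: macroblocks left untouched by the switcher keep their original coding decisions, while modified ones are queued for conventional recompression. This is only a model of the behavior the article describes, not Snell & Wilcox's actual implementation.

```python
import numpy as np

def reencode_with_mole(frame_before_switcher: np.ndarray,
                       frame_after_switcher: np.ndarray,
                       coding_info,            # FrameCodingInfo from the sketch above
                       mb_size: int = 16):
    """Split macroblocks into 'clone original compression' and 'recompress'."""
    cloned, recompressed = [], []
    for mb in coding_info.macroblocks:
        x, y = mb.position
        before = frame_before_switcher[y:y + mb_size, x:x + mb_size]
        after = frame_after_switcher[y:y + mb_size, x:x + mb_size]
        if np.array_equal(before, after):
            # Untouched by the switcher: repeat the original coding decisions,
            # so no further information is discarded in this generation.
            cloned.append(mb)
        else:
            # Modified (logo, caption): compress conventionally and generate
            # fresh coding metadata for the next hop.
            recompressed.append(mb)
    return cloned, recompressed
```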

With the addition of procedures such as MOLE it is possible to significantly increase the quality of signals transmitted through conventional distribution chains, and the installation of contribution systems is greatly simplified, since they can use uncompressed environments for signal processing without fear of deteriorating the signals unnecessarily.


A final thought: Despite its apparent complexity, the MOLE system is nothing more than an intelligent application of the ability of digital video streams to carry metadata. Don't forget that an amorphous set of bits can represent anything... to an automated quality control program.

Why does the use of several successive MPEG compression steps cause image corruption?

The fundamental principle of digital video compression is the suppression of redundant information which, in theory, should not affect the perception of the image. This is usually achieved through statistical analysis of the image based on fixed-size regions of pixels known as macroblocks. The result of the analysis of each macroblock is filtered according to the bandwidth and/or image quality goals to be achieved, and the resulting information constitutes the compressed version of the original video stream.
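
As a toy illustration of that principle (not a real MPEG encoder), the sketch below splits a grayscale frame into fixed-size macroblocks and quantizes each one according to a quality target; a real codec would apply a transform such as the DCT plus entropy coding instead of plain pixel quantization.

```python
import numpy as np

def compress_frame(frame: np.ndarray, mb_size: int = 16, quant_step: int = 16) -> np.ndarray:
    """Toy block-based 'compression': a coarser quant_step means a lower bit rate."""
    out = frame.copy()
    height, width = frame.shape
    for y in range(0, height, mb_size):
        for x in range(0, width, mb_size):
            block = frame[y:y + mb_size, x:x + mb_size]
            # Discard precision judged "redundant" for the target quality.
            out[y:y + mb_size, x:x + mb_size] = (block // quant_step) * quant_step
    return out
```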

A codec (encoder/decoder) is the piece of hardware or software that does the intense arithmetic work of reducing the data rate of a continuous stream of digital information. Each manufacturer develops its own codecs, following the standards established by the standardization bodies with greater or lesser accuracy and always trying to improve quality or efficiency.

Some compression systems compare the macroblocks of each frame with those of the following frames, which also allows them to eliminate temporal redundancies in the image (for example, the repetition of a static element across several consecutive frames). Temporal image analysis is the basic principle of MPEG systems, which have become the de facto industry standard for digital video distribution applications.

MPEG compression systems encode digital video as a succession of three types of picture: I frames, which are essentially JPEG-like still images; P frames, which store predictions of changes with respect to previous frames; and B frames, which store changes referenced to both previous and following frames. I/P/B frames are grouped into fixed- or variable-length sequences known as GOPs (groups of pictures). In addition, the video sampling scheme may vary: some MPEG systems handle 4:2:2 video, others use 4:2:0... and "space" is also reserved for audio and metadata transport.
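
A minimal sketch of one possible GOP layout, assuming the common 12-frame IBBP pattern (the text notes that GOP length may be fixed or variable, so this is only one example):

```python
GOP = list("IBBPBBPBBPBB")   # one possible 12-frame group of pictures

for index, frame_type in enumerate(GOP):
    if frame_type == "I":
        note = "intra-coded, decodable on its own (JPEG-like)"
    elif frame_type == "P":
        note = "predicted from an earlier I or P frame"
    else:
        note = "bidirectionally predicted from neighboring I/P frames"
    print(f"frame {index:2d}: {frame_type}  ({note})")
```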

Immediate conclusion: encoding MPEG streams is a difficult task that requires a lot of processing power. This is why MPEG systems are said to be asymmetric. What does this mean? That a low-cost MPEG decoder can handle compressed video streams originated by different encoders, because the encoders are the complex and expensive equipment that do the difficult job: generating digital video streams that include the information necessary to decompress them successfully.

This means that (almost always) MPEG decoders can handle any compressed video stream they receive, even if those streams vary significantly in GOP size, macroblock structure or even the spatial resolution of the image (the number of pixels that make it up). It is also possible that in some cases one set-top box "understands" a given digital video stream better than another.

Now, when a "continuous" stream of digital video is subjected to several compression/decompression processes along a distribution chain, it is perfectly possible that the encoder at each step will do its job differently, and that is where the problems begin.

If the size of the macroblocks varies, the image is compressed in a completely different way. And if the structure of the GOP changes, the matter becomes even more complicated, because the quality loss inherent in the compression process is then applied to video frames that already contain the defects introduced by previous steps.

Bottom Line: Each compression step adds flaws to the image. And one of the most important factors in this "cascading" deterioration process is the intervention of codecs from several manufacturers that handle very different algorithms, even if they are compatible in the decoding phase.
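
A toy simulation of that cascading deterioration, re-quantizing the same frame with a different step size at each generation to stand in for codecs from different manufacturers; with this simple floor quantization the accumulated error never decreases from one pass to the next. This only illustrates the trend and does not model any specific codec.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.int32)   # stand-in picture

signal = frame.copy()
for generation, quant_step in enumerate([8, 10, 12, 9, 16], start=1):
    signal = (signal // quant_step) * quant_step                # lossy "compression" pass
    error = np.abs(signal - frame).mean()
    print(f"generation {generation}: mean absolute error = {error:.2f}")
```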

This is another reason why "digital" is not always synonymous with "good"... and although the industry already offers solutions for this type of problem, we must always take the problem of error concatenation into account when designing digital video transport systems.
