

Signal compression technology

A look back at how, in 1995, we viewed the future of compression technology.

Digital compression is neutral with respect to its applications: the technology and the benefits that flow from it can be used in any digital transmission or storage application, from sound and data to motion video. Digital compression is already widely used in voice communications over telephone networks and is a major component of compact disc recording technology. At present, however, the market's attention is focused on compression technologies that can be applied to video transmission.

New techniques for video compression will soon be used in domestic satellite services and, in the near future, in DBS, HDTV transmissions, cable television and the expected fiber-optic telephone services. Compression technologies will not only bring major changes to the country's media industry but will also drive major advances in new or developing video markets, including teleconferencing and videophones, and in interactive capacity for educational systems, distance learning, and public outreach/communications programs for the disabled and the elderly, among others.

Transmission bandwidth and data rates

Until very recently, digital video transmission by conventional means lay beyond existing technical capabilities. In addition to the large channel bandwidth required, digital video can demand immense amounts of storage space and transmission time. One minute of analog video converted to digital form at the rate required for television (30 frames per second) would occupy about 2,000 megabytes of storage on a computer's disk. Transmitting that one minute of digital video would take hours. Simply put, this is the essence of the digital video transmission problem. The solution is to squeeze, or compress, the signal to the point where it can be transmitted within the available channels. For NTSC television, and in the significantly more difficult case of HDTV signals, the video signal along with audio and other channels of related information must be compressed to fit into the 6 MHz bandwidth allocated to a television transmission.
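The storage figure above can be sanity-checked with simple arithmetic. The sampling parameters below (720x480 pixels, 24 bits per pixel) are illustrative assumptions, not figures from the article, but they land in the same range:

```python
# Rough sanity check of the "about 2,000 megabytes per minute" figure.
# Assumed sampling parameters (not from the article): 720x480 pixels,
# 24 bits (3 bytes) per pixel, 30 frames per second.
WIDTH, HEIGHT = 720, 480
BYTES_PER_PIXEL = 3
FPS = 30

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 1,036,800 bytes
bytes_per_minute = bytes_per_frame * FPS * 60
megabytes_per_minute = bytes_per_minute / 1_000_000

print(f"{megabytes_per_minute:.0f} MB per minute of uncompressed video")
# -> 1866 MB, the same order of magnitude as the article's figure
```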


A key point in the transmission of digital signals is that the bit rate to be transmitted is directly proportional to the channel bandwidth required. The bottom line, of course, is cost. Greater channel bandwidth, higher-quality images, live programming, and motion video can all be had, but at a higher cost to the end user; costs above consumer limits are prohibitive in most cases.

For motion digital video, the required bit rates and the constraints on channel bandwidth reach very high levels. An uncompressed live NTSC analog signal, once converted to digital, requires a channel capable of transmitting about 90 Mbits/second. This is an enormous pipeline for a single video signal, double the channel bandwidth currently allotted for analog video transmission. Uncompressed digital HDTV signals demand several times this capacity.
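The way raw sampling parameters turn into a channel bit rate can be sketched as below. The resolution and bit-depth choices are illustrative assumptions; different resolutions and chroma formats yield figures from roughly 70 to 170 Mbit/s, the same range as the ~90 Mbit/s quoted above:

```python
# Translate raw video sampling parameters into an uncompressed bit rate.
def bit_rate_mbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bit rate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1_000_000

# Illustrative case: 8-bit luma-only sampling of a 720x480 frame at 30 fps.
print(bit_rate_mbps(720, 480, 8, 30))   # 82.944 Mbit/s
```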

There are several ways to reduce the channel bandwidth needed to transmit a digital video signal. In essence, these consist of reducing the resolution of the images (e.g., accepting VCR-quality images instead of NTSC broadcast quality or true HDTV quality), the signal's transmission rate (reducing the frame rate from full motion to the limited motion used in teleconferencing), color accuracy (reducing full-color images to a limited palette), and redundancies in the transmitted images.

Two factors make high digital video compression ratios possible without sacrificing the qualitative advantages gained by converting from analog to digital. First is the ability of the human eye to compensate automatically for image characteristics that differ from what it would see directly. For example, the eye is less sensitive to variations in color or hue than to variations in luminous intensity. Consequently, large amounts of the color, and even luminance, data that make up a still video frame can be eliminated by taking advantage of the eye's natural capacity to compensate. Technically this is achieved with special digital encoding methods (intraframe coding) that mathematically discard visual detail within each transmitted video frame.
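A toy sketch of the intraframe idea described above: keep luminance at full resolution but store color at reduced resolution, exploiting the eye's lower sensitivity to color detail. This is a simplified 2x2-averaging illustration of chroma subsampling, not the method of any specific codec:

```python
def subsample_chroma(chroma):
    """Average each 2x2 block of a chroma plane into one sample,
    quartering the amount of color data transmitted."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (chroma[y][x] + chroma[y][x + 1] +
                     chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

plane = [[100, 102, 50, 52],
         [101, 101, 51, 51],
         [200, 200, 10, 10],
         [200, 200, 10, 10]]
print(subsample_chroma(plane))   # [[101, 51], [200, 10]]
```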

The second factor applies only to moving images. Video images are in reality nothing more than a series of still frames transmitted or displayed in rapid sequence. The eye sees motion, but each frame is actually a complete still image. At 30 frames per second on television, or 24 frames per second in film, the eye cannot perceive the small changes in image information from one frame to the next; the overall impression is one of motion on the screen. Often, however, about 99 percent of the information in one digital frame is identical to that in the next. By eliminating these redundancies and digitally encoding only the small fraction that changes (interframe coding), the amount of data that must be transmitted can be reduced significantly.
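The interframe idea above can be sketched minimally: transmit only the pixels that changed since the previous frame. This is an illustration of the principle, not a real codec:

```python
def encode_delta(prev, curr):
    """Return (index, new_value) pairs for pixels that changed."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def apply_delta(prev, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

prev = [10, 10, 10, 10, 10, 10]
curr = [10, 10, 99, 10, 10, 10]     # only one pixel changed
delta = encode_delta(prev, curr)
print(delta)                         # [(2, 99)] - far less than a full frame
assert apply_delta(prev, delta) == curr
```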

In essence, two different approaches are used to achieve signal compression: one uses a great deal of memory, the other a great deal of processing. In either case, compression techniques will allow two to six video channels to be squeezed into the bandwidth that today carries one. Depending on the video quality acceptable at the receiving end, compression systems handling up to eight or ten signals are possible. Fast-action video, taped or live (sports, for example), is likely to demand lower compression ratios, on the order of 2:1 to 3:1, while motion pictures projected at 24 frames per second instead of television's 30 frames per second allow higher compression ratios because less information is transmitted. The ultimate objective is the transmission of hundreds of compressed video channels, each with digital HDTV quality.
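The film advantage mentioned above is simple arithmetic: at 24 frames per second a movie transmits a fifth fewer frames than 30 frame-per-second television video, so proportionally less data crosses the channel before any other compression is applied:

```python
# Frame-count saving of film (24 fps) over television video (30 fps).
film_fps, tv_fps = 24, 30
savings = 1 - film_fps / tv_fps
print(f"{savings:.0%} fewer frames to transmit")   # 20% fewer frames
```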

Compression standards for digital video


Many international industry committees are developing technical standards for handling digital video images. These efforts transcend many of the video market's traditional boundaries and may ultimately lay the groundwork for treating video, in all its manifestations, as variations on a single theme of information. In the future, video will still mean movies and television, but it will also encompass all forms of computer-based graphics and virtual reality representations.

Three approaches to a digital video standard have been proposed so far. In 1991 the ISO (International Organization for Standardization) adopted the standard proposed by the Joint Photographic Experts Group (JPEG) for worldwide application. The JPEG standard is intended for compressing still video images with little or no loss of information. It can be applied to moving images only if each frame is treated as an independent entity, which requires transmitting every detail of every video frame. For this reason the JPEG standard is considered to have limited application where fast video transmission rates, such as those of over-the-air or cable television broadcasts, are required.

A second standard is under development by a similar body established to represent the moving image industry. The Moving Picture Experts Group (MPEG) is working on a standard that uses additional digital encoding techniques to achieve higher compression ratios, offering roughly a 3:1 improvement over JPEG compression. It also includes specifications for digital audio and for data synchronization. The goal for MPEG is to deliver VHS-quality video and audio within a 1.5 Mbit/second channel.
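The compression that the MPEG goal implies can be estimated by dividing a raw bit rate by the 1.5 Mbit/s target channel. The raw rate below is an illustrative assumption (8-bit luma-only 720x480 sampling at 30 fps), not a figure from the article:

```python
# Compression ratio implied by fitting raw video into a 1.5 Mbit/s channel.
raw_mbps = 82.944        # assumed raw rate: 720*480*8*30 / 1e6
mpeg_channel_mbps = 1.5  # target channel stated in the article
ratio = raw_mbps / mpeg_channel_mbps
print(f"about {ratio:.0f}:1 compression required")   # about 55:1
```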

The third standard was adopted in 1989 by the International Telegraph and Telephone Consultative Committee (CCITT) of the International Telecommunication Union. It is used mainly for teleconferencing and videophone applications carried over public and private switched telecommunications media (especially telephone and satellite carriers). The standard is designed around the characteristic 64 kilobit/second rate of voice telecommunications channels; informally it is called PX64 (P times 64), where P represents the multiple of 64 kbit/s at which images can be transmitted. For videoconferencing, the CCITT standard is reported to be similar to the MPEG standard but to allow faster transmission (with a corresponding decrease in image quality).
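The P times 64 scheme described above yields a ladder of data rates, each a multiple of the 64 kbit/s voice channel (in the formal CCITT Recommendation, H.261, P ranges from 1 to 30):

```python
def px64_kbps(p):
    """Data rate in kbit/s for a channel of p times 64 kbit/s."""
    return p * 64

# A few points on the ladder, from a single voice channel up to p = 30.
rates = [px64_kbps(p) for p in (1, 2, 6, 30)]
print(rates)   # [64, 128, 384, 1920] kbit/s
```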


© National Association of Broadcasters.


Reproduced from Advanced Broadcast/Media Technologies: Market developments and impacts in the 90s and beyond.
