Visual Technology Tutorial, Part 3: Compression

This is Part 3 of a video processing technology training series extracted from RGB Spectrum's Design Guide.

Digital Video

Digital technologies have revolutionized the way we work with both audio and video signals. However, representing information as groups of binary numbers requires an enormous amount of computing power, specifically memory capacity and processing capability. These requirements become especially challenging when audio and video signals are involved, because massive amounts of data are necessary to translate the characteristics of sound and light into bits.

Digital sound and video have created entirely new industries for both consumer and professional/commercial applications. One of the most important differences between these two types of applications is that professional/commercial users don’t just use content like consumers do; rather, they often need to work with, manipulate, and combine this content with other sources. Information from any number of content sources frequently must be shared with co-workers who may be located in the same room or in remote locations around the globe. Digital technologies and networks have made these tasks significantly more effective than was ever possible in the analog domain.

The rise of digital technology has introduced a new set of challenges, primarily related to the vast amount of data required to represent digital video. For example, an image size of 1920x1080 pixels at 24-bit color depth translates to about 6 MB per frame. At a frame rate of 60 fps, just one second of this video amounts to roughly 360 MB of data (about 3 Gbit/s), which is impractical for most current networks and storage systems. This example illustrates why video compression technology is often necessary when working with digital signals in these contexts.
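As a quick sanity check on these figures, the sketch below works through the same arithmetic (pixels per frame × bit depth × frame rate). The raw_video_rate helper is an illustrative name, not part of any standard API, and the numbers ignore real-world details such as blanking intervals or chroma subsampling.

```python
def raw_video_rate(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Return the uncompressed data rate in bytes per second."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps

frame_bytes = 1920 * 1080 * 24 / 8           # bytes in one 1080p, 24-bit frame
per_second = raw_video_rate(1920, 1080, 24, 60)

print(f"Per frame : {frame_bytes / 1e6:.1f} MB")                       # ~6.2 MB
print(f"Per second: {per_second / 1e6:.0f} MB/s "
      f"(~{per_second * 8 / 1e9:.0f} Gbit/s)")                         # ~373 MB/s, ~3 Gbit/s
```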

Compression

Video compression is a process that reduces and removes redundant video information so that a digital video file/stream can be sent across a network and stored more efficiently. An encoding algorithm is applied to the source video to create a compressed stream that is ready for transmission, recording, or storage. To decode (play) the compressed stream, an inverse algorithm is applied. The time it takes to compress, send, decompress, and ultimately display a stream is known as latency.
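The round trip below is a minimal sketch of that encode-and-decode cycle, using Python's general-purpose zlib module as a stand-in for a real video codec (an analogy only, not an actual video compression standard). The dummy frame buffer and timing are illustrative, but they show where encoding and decoding contribute to latency.

```python
import time
import zlib

frame = bytes(1920 * 1080 * 3)               # one dummy 1080p frame, 24-bit color

start = time.perf_counter()
encoded = zlib.compress(frame)               # encoding algorithm
# ... transmission or storage would happen here ...
decoded = zlib.decompress(encoded)           # inverse (decoding) algorithm
elapsed = time.perf_counter() - start        # encode + decode portion of the latency

print(f"Compressed {len(frame)} bytes down to {len(encoded)} bytes")
print(f"Encode/decode round trip: {elapsed * 1000:.1f} ms")
```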

A video codec (encoder/decoder) employs a pair of algorithms that work together. The process for encoding and decoding must be matched; video content that is compressed using one standard cannot be decompressed with a different standard. Different video compression standards utilize different methods of reducing data, and hence, results may differ in bit rate (i.e. bandwidth), latency, and image quality.
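The snippet below illustrates that matching requirement with two general-purpose compressors standing in for different video standards (again an analogy, not real video codecs): a stream encoded with one cannot be decoded by the other.

```python
import bz2
import zlib

data = b"example video payload" * 100
encoded = zlib.compress(data)                # "encode" with one standard

assert zlib.decompress(encoded) == data      # matched decoder: recovers the data

try:
    bz2.decompress(encoded)                  # mismatched decoder: rejects the stream
except OSError as err:
    print(f"Mismatched decoder rejected the stream: {err}")
```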

Types of compression are often categorized by how much of the original data is preserved through the stages of processing. "Lossless" refers to a compression method in which no data is lost as a video signal travels from source to display; the displayed image is identical to the original source image. "Visually lossless" means that the displayed image appears identical to the original, even though some data may have been discarded during compression. "Lossy" compression discards data during the reduction process, and the resulting quality degradation may or may not be noticeable.
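A byte-for-byte round-trip test is the simplest way to see what "lossless" means in practice. The sketch below uses zlib (a lossless general-purpose compressor) purely as an illustration; a lossy video codec would fail this test even when its output looks identical on screen.

```python
import zlib

original = bytes(range(256)) * 1000
restored = zlib.decompress(zlib.compress(original))

# Lossless: every byte of the restored data matches the original.
print("Byte-identical after round trip:", restored == original)   # True
```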

RGB Spectrum is a leading designer and manufacturer of mission-critical, real-time audio-visual solutions for a civilian, government, and military client base. The company offers integrated hardware, software, and control systems to satisfy the most demanding requirements. Since 1987, RGB Spectrum has been dedicated to helping its customers achieve Better Decisions. Faster.