Latency and Protocols

When a digital video source is connected directly to a display over an interface such as DVI, the video passes through without compression, and transmission is virtually instantaneous because no additional image processing is involved. When compression is used to fit the stream within network bandwidth constraints, however, that processing takes time. This delay is known as latency, and it is one of the key considerations when evaluating streaming products, processes, and applications.
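End-to-end latency is the sum of delays added at each stage of the pipeline. The sketch below illustrates this with hypothetical, round-number figures (the function name and every value are assumptions for illustration, not measurements of any particular product):

```python
# Illustrative latency budget for a compressed network stream.
# All figures are assumed example values, not real measurements.
def total_latency_ms(encode_ms, packetize_ms, network_ms,
                     jitter_buffer_ms, decode_ms):
    """Sum the per-stage delays that make up glass-to-glass latency."""
    return encode_ms + packetize_ms + network_ms + jitter_buffer_ms + decode_ms

# A direct DVI link has essentially none of these stages;
# a compressed network stream accumulates each one.
budget = total_latency_ms(
    encode_ms=30,         # encoder lookahead and processing
    packetize_ms=2,       # splitting frames into network packets
    network_ms=20,        # propagation plus queuing on the network
    jitter_buffer_ms=50,  # receiver buffering to smooth arrival variance
    decode_ms=15,         # decoder processing
)
print(budget)  # 117
```

The point of the exercise: the jitter buffer and encoder, not the network itself, often dominate the budget, which is why low-latency products focus on shrinking those stages.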
Many protocols exist for streaming media over networks. Each balances trade-offs among reliable delivery, latency, and bandwidth, and the choice of protocol stack depends on the specific application.

One of the most common is the Real-time Transport Protocol (RTP), which defines a standardized packet format for delivering audio and video over IP networks. RTP is often used in conjunction with the RTP Control Protocol (RTCP), which monitors transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. Both protocols work independently of the underlying Transport-layer and Network-layer protocols. A third protocol, the Real Time Streaming Protocol (RTSP), is an Application-layer protocol used to establish and control real-time media sessions between endpoints, with VCR-style commands such as play, pause, and stop.
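The RTP packet format mentioned above begins with a 12-byte fixed header (defined in RFC 3550) carrying the fields a receiver needs to reassemble and time the stream. A minimal parser sketch, assuming a well-formed packet:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header defined in RFC 3550."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,   # identifies the codec payload
        "sequence": seq,             # detects loss and reorders packets
        "timestamp": ts,             # media clock, drives playout and sync
        "ssrc": ssrc,                # identifies the stream source
    }

# Example header: version 2, payload type 96, seq 1, ts 3000, ssrc 0x1234
sample = struct.pack("!BBHII", 0x80, 96, 1, 3000, 0x1234)
print(parse_rtp_header(sample)["payload_type"])  # 96
```

The sequence number and timestamp are what RTCP reports are built on: receivers use them to compute loss and jitter statistics and to synchronize related streams.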

RTP typically runs on top of the User Datagram Protocol (UDP), a Transport-layer protocol that can deliver live streams with low latency but provides no delivery guarantees or retransmission. RTP is commonly used for multi-point, multicast streaming.
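The RTP-over-UDP layering can be seen in a few lines of socket code. This is a loopback sketch for illustration only (the addresses, payload type, and field values are assumptions): UDP simply fires the datagram with no acknowledgment, and it is RTP's sequence numbers and timestamps that let the receiver cope with loss and reordering.

```python
import socket
import struct

# Receiver: bind a UDP socket on loopback, letting the OS pick a free port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: prepend an RTP fixed header (version 2, payload type 96,
# seq 1, timestamp 3000, SSRC 0x1234) and send fire-and-forget.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
header = struct.pack("!BBHII", 0x80, 96, 1, 3000, 0x1234)
send_sock.sendto(header + b"payload", ("127.0.0.1", port))

# Receive the datagram: 12-byte RTP header plus 7-byte payload.
data, _ = recv_sock.recvfrom(2048)
print(len(data))  # 19

send_sock.close()
recv_sock.close()
```

Note that nothing here confirms delivery: a TCP-based transport would add that reliability, at the cost of retransmission delays that are usually unacceptable for live video.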
