Speeding up video Time to First Play (TTFP)

At LiveQoS, we get lots of folks asking us how they can speed up video streaming and improve Time to First Play (TTFP). If you’re in the video streaming business, you know how critical it is for the video to start playing as soon as the user taps play: every added delay erodes the user engagement you worked so hard to capture.

Moving your content nearer to the end user (e.g. with a CDN) is usually what you’ll hear the experts say, and usually what you want to do. But this only solves the problem for very popular videos in that CDN’s region, the ones that are cached locally. Some apps offer such diversified content, and the virality of videos can be so geographically dispersed, that a CDN may not be the ideal solution for everyone.

When your CDN cache misses, and your video has to be fetched from the origin server and transferred to the CDN node at the edge, you’re at the mercy of TCP, a transport protocol designed when the dinosaurs roamed the earth, to carry your video from coast to coast.

Here’s the reality: TCP was never designed for quick-start video playback. It was designed to be bandwidth-friendly, in a long-past era when bandwidth was expensive and networks were slow. But the world has changed, and that’s why we designed LiveONE, an SD QoE service purpose-built to overcome the QoE challenges imposed by Demanding Internet Applications.

TCP stacks are too slow. Ironically, you may have heard of TCP’s “slow start” algorithm: TCP begins transmitting with a tiny window and roughly doubles it every round trip, ramping up quickly until it hits a wall in the form of perceived network congestion (typically packet loss), at which point it dramatically cuts the transmission rate it worked so long to achieve. This leads to inconsistent performance, even when there’s plenty of bandwidth out there.
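To see the sawtooth this produces, here is a toy simulation of classic slow start and congestion avoidance. It is purely illustrative: window sizes are in segments, the bottleneck “capacity” is a made-up number, and real stacks (CUBIC, BBR) behave differently.

```python
# Toy model of TCP congestion window growth (illustrative only).
# Slow start doubles the window each RTT; after a loss, the window
# is halved and growth becomes linear (congestion avoidance).

def simulate_cwnd(capacity=64, rtts=20):
    """Return the congestion window (in segments) at each RTT."""
    cwnd, ssthresh = 1, float("inf")
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd > capacity:            # perceived congestion: loss
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh            # drastic rate reduction
        elif cwnd < ssthresh:          # slow start: double per RTT
            cwnd *= 2
        else:                          # congestion avoidance: +1 per RTT
            cwnd += 1
    return history

print(simulate_cwnd())
# Starts at 1 segment, overshoots the capacity of 64, halves,
# then climbs back linearly: the classic sawtooth.
```

Note the cost for short transfers: the first several round trips move almost no data, which is exactly the window in which a video player is waiting to start.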

The other problem with regular TCP: it does a poor job of detecting network congestion. Today’s high-speed networks are quite bursty, as most applications transmit in bursts and then settle down for a while, waiting for some task to complete. The gaps in network traffic are an opportunity. But sensing congestion the way regular TCP does misreads this volatility, producing misleading results and squandering those transfer opportunities.

The net results are potentially costly delays and inefficient, erratic throughput. These are big challenges if you’re streaming video: delayed playback for the user (TTFP) and uneven buffering that starves and pauses the video player.
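A back-of-the-envelope calculation shows how these delays add up before the first frame appears. All the numbers here are hypothetical, chosen only to illustrate the arithmetic: connection setup costs a few round trips, and then the initial buffer must be fetched at whatever throughput TCP actually achieves.

```python
# Rough startup-delay estimate for a video player.
# Every parameter below is an illustrative assumption, not a
# measurement: buffer target, bitrate, throughput, RTT.

def startup_delay(buffer_s=2.0, bitrate_mbps=4.0,
                  avg_throughput_mbps=8.0,
                  handshake_rtts=3, rtt_ms=80):
    """Seconds until playback can begin: connection setup plus
    the time to download the initial buffer."""
    setup = handshake_rtts * rtt_ms / 1000.0
    fetch = buffer_s * bitrate_mbps / avg_throughput_mbps
    return setup + fetch

print(round(startup_delay(), 2))   # 1.24 seconds with these numbers
```

The key term is `avg_throughput_mbps`: if TCP spends its first round trips ramping up from a tiny window, the achieved average is far below the link’s capacity, and the fetch term, and therefore TTFP, grows accordingly.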

LiveTCP (a component of the LiveONE solution) addresses these concerns by ramping up faster and being much smarter about how and when to give the network a break. The result is faster playback start and more predictable buffering, as the throughput curve is smoothed out near the maximum available bitrate.