We here at LiveQoS have been following Robert X. Cringely for many years and have particularly enjoyed his insights into topics that don’t always get much exposure. Over the past several years he has been running a series of articles on bufferbloat, one of the most significant problems on the Internet today. Bufferbloat refers to the dramatic increase in buffer sizes in network devices, which is hurting VoIP, video chat and other real-time applications.
All network devices have buffers, which you can think of as holding areas for packets before they are sent on their way. The reason you need them is that the amount of traffic entering a network device can easily exceed the speed of the outbound link. For example, your laptop at home is able to send data at hundreds of megabits per second on your home network, but it is very unlikely that your residential ISP service supports such high data rates.
If you didn’t have a buffer in your DSL or cable modem, the only recourse for too much traffic coming in and not enough leaving would be to throw away a lot of the incoming packets as there would be no place to store the excess. No matter how large you make a buffer, it will always be limited in size so packet loss due to congestion (too much incoming traffic and not enough outbound speed) will always be there.
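To make that trade-off concrete, here is a toy sketch (in Python, with made-up numbers, not any real device's behaviour) of a tail-drop buffer: a burst of traffic arrives faster than the outbound link can drain it, the queue absorbs the excess up to its capacity, and anything beyond that is simply thrown away.

```python
from collections import deque

BUFFER_SIZE = 8                 # packets the device can hold (hypothetical)
LINK_RATE = 2                   # packets the outbound link sends per time step
ARRIVALS = [5, 5, 5, 0, 0, 0]   # bursty incoming traffic, packets per step

queue, dropped, sent = deque(), 0, 0
for incoming in ARRIVALS:
    for pkt in range(incoming):
        if len(queue) < BUFFER_SIZE:
            queue.append(pkt)   # room left: buffer the packet
        else:
            dropped += 1        # buffer full: tail drop
    for _ in range(min(LINK_RATE, len(queue))):
        queue.popleft()         # drain at the outbound link rate
        sent += 1

print(f"sent={sent} dropped={dropped} still_queued={len(queue)}")
# → sent=12 dropped=3 still_queued=0
```

A bigger `BUFFER_SIZE` absorbs a longer burst before dropping anything, but if the incoming rate stays above the link rate for long enough, drops happen no matter how large the buffer is.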
So, why are buffers getting so big? It turns out that TCP, the protocol that underlies almost all applications, requires large buffers to deliver maximum performance. As network speeds have increased, so has the need for larger and larger buffers to ensure that TCP can fill the pipe. Carriers are going overboard with buffer sizes, but some growth is inevitable as link speeds rise. As Cringely points out in his articles on bufferbloat, these large buffers play havoc with any type of real-time traffic like VoIP, video conferencing and online gaming.
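The rule of thumb at work here is the bandwidth-delay product: to keep a link fully utilized, TCP needs roughly one round-trip time's worth of data in flight, and the buffer is sized accordingly. A quick back-of-the-envelope calculation (the link speeds and round-trip time below are just illustrative):

```python
def bdp_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product: bytes 'in flight' when the link is full."""
    return int(link_bps * rtt_seconds / 8)

# Illustrative numbers: a 100 ms round trip at two link speeds.
rtt = 0.100
for mbps in (10, 1000):
    print(f"{mbps:>5} Mbit/s -> ~{bdp_bytes(mbps * 1_000_000, rtt):,} byte buffer")
# →    10 Mbit/s -> ~125,000 byte buffer
# →  1000 Mbit/s -> ~12,500,000 byte buffer
```

A hundred-fold increase in link speed implies a hundred-fold increase in buffer size, which is exactly the trend that produced bufferbloat.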
I think that ISPs are also quite happy to configure very large buffers that make your over-the-top VoIP service perform much worse than their own voice services. They really aren’t motivated to help competing services perform well.
Although some solutions have been tried in the past, they typically only impact traffic that you are sending into the Internet. Traffic coming from the Internet is normally uncontrollable and, unfortunately, this is usually the direction where you find the biggest buffers and most congestion.
So, what’s the solution? One of LiveQoS’s unique features is that it prevents end devices from causing upstream bufferbloat. It is able to detect when buffers are getting too full and automatically tunes itself to keep the buffers as empty as possible.
LiveQoS is also developing a solution that will automatically control bufferbloat in both the upstream and downstream directions without requiring any end-user intervention.
Apart from using LiveQoS technology on all of your end devices, there are currently a few other ways to address bufferbloat. Before making a VoIP or video-chat call or playing your favourite online game, stop (or rate-limit) all of your TCP flows. Rate limiting, a setting that may be buried deep in each client’s preferences, helps avoid bufferbloat by capping how fast an application can send data.
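Rate limiting is typically implemented with something like a token bucket: a sender may only transmit when it has accumulated enough "tokens", which refill at the target rate. A toy sketch of the idea (the class, names and rates here are ours for illustration, not from any particular client):

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: allow traffic out at `rate` bytes/sec."""

    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity      # start with a full burst allowance
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes   # spend tokens: packet may be sent now
            return True
        return False                # over the limit: hold the packet back

bucket = TokenBucket(rate=125_000, capacity=10_000)  # ~1 Mbit/s, 10 KB burst
print(bucket.allow(1500))  # → True: the first packets fit in the burst
```

By keeping the send rate below the bottleneck link's speed, the upstream buffer never fills, so your real-time packets don't queue behind a wall of bulk TCP traffic.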
Remember, to be effective, you need to do this on all of the computers and devices that share your Internet connection; any single device on your network can fill the buffers on its own.
Although this will help avoid local packet loss, latency and jitter due to downstream congestion, your VoIP and video chat will still be impacted by the network issues that are inherent in the core of the Internet and elsewhere in your ISP.