RTP (over UDP) playing fair with TCP

Real-time multimedia communication wants (adapted from ):

  • timely delivery (vs. reliable but late delivery via TCP)
  • smooth & predictable throughput

This led to proposals to use a transport-layer protocol such as the Datagram Congestion Control Protocol (DCCP) [RFC 4340], as it implements TCP Friendly Rate Control (TFRC) [RFC 5348].
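
As a reference point, here is a minimal Python sketch (mine, not from the slides) of the TCP throughput equation that TFRC uses to compute an allowed sending rate, per RFC 5348, Section 3.1; the parameter names follow the RFC, and the example values are illustrative only.

    import math

    def tfrc_rate_bps(s, R, p, b=1, t_rto=None):
        """Allowed sending rate in bytes/second from the TCP throughput
        equation that TFRC uses (RFC 5348, Section 3.1).
        s: segment size (bytes), R: round-trip time (seconds),
        p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK,
        t_rto: retransmission timeout, defaulting to 4*R as the RFC suggests."""
        if t_rto is None:
            t_rto = 4 * R
        denom = (R * math.sqrt(2 * b * p / 3)
                 + t_rto * (3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
        return s / denom

    # Illustrative example: 1460-byte segments, 100 ms RTT, 1% loss event rate
    print(tfrc_rate_bps(s=1460, R=0.100, p=0.01))   # roughly 1.6e5 bytes/s

The point of the equation is that a TFRC sender paces itself to roughly the rate a conforming TCP would achieve under the same loss event rate and RTT, which is what "TCP friendly" means here.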

However, this approach has some problems, including [Choi 2009]:

  • when a flow traverses a network link with a low degree of statistical multiplexing (e.g., a DSL link) that uses drop-tail queuing, TFRC traffic can starve TCP traffic
  • oscillation on a short time scale
  • if the RTT is less than the CPU interrupt cycle, then TFRC is hard to implement! (see the sketch after this list)
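
As a back-of-the-envelope illustration of the third bullet, here is a small Python sketch; the roughly 1 kHz interrupt granularity is an assumption taken from the transcript below, and the RTT values are arbitrary examples.

    # Arithmetic behind the "RTT < CPU interrupt cycle" bullet.
    timer_granularity = 1e-3      # seconds between timer interrupts (~1,000/s)

    for rtt in (100e-3, 10e-3, 1e-3, 200e-6, 50e-6):
        ticks_per_rtt = rtt / timer_granularity
        print(f"RTT = {rtt * 1e6:6.0f} us -> {ticks_per_rtt:6.2f} timer ticks per RTT")

    # Once the ratio drops below 1, several RTTs' worth of feedback arrive
    # between consecutive interrupts, so a controller that expects to react
    # once per RTT (as TFRC does) cannot be timed from interrupts alone.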

Slide Notes

S. Floyd, M. Handley, J. Padhye, and J. Widmer, TCP Friendly Rate Control (TFRC): Protocol Specification, IETF, Network Working Group, RFC 5348, September 2008. http://www.ietf.org/rfc/rfc5348.txt

E. Kohler, M. Handley, and S. Floyd, Datagram Congestion Control Protocol (DCCP), IETF, Network Working Group, RFC 4340, March 2006. Updated by RFCs 5595 and 5596. http://www.ietf.org/rfc/rfc4340.txt

Soo-Hyun Choi and Mark Handley, Designing TCP-Friendly Window-based Congestion Control for Real-time Multimedia Applications, slides from their presentation at the 7th PFLDNeT workshop, May 2009. http://www.hpcc.jp/pfldnet2009/Program_files/3-3.pdf


Transcript

[slide476] So, the general notion we've used in Internet technology to talk about fairness is TCP fairness. Right? So, can we make a UDP that's TCP fair? Kohler and others proposed the Datagram Congestion Control Protocol in RFC 4340, and it implements TCP Friendly Rate Control. But Choi showed it has some problems, because if a flow traverses a network link with low statistical multiplexing, for instance a DSL link or something like that, that uses drop-tail queuing, oops, this supposedly fair traffic will starve the other TCP traffic going across the same link. Why is that? Well, because the UDP traffic keeps arriving at a steady rate. So, even though we're being fair, the TCP traffic that we're competing against keeps seeing competition, so it keeps backing off more and more. It also introduces oscillations on a short timescale.

And if the RTT is less than the CPU interrupt cycle, it turns out TCP-friendly rate control is actually really hard to implement. Now, what does the CPU interrupt cycle have to do with it? Why should that be a problem? Our CPUs got faster and faster. The bad news is their maximum interrupt rates didn't, because the CPUs got faster largely by building deeper and deeper instruction pipelines. What happens when an interrupt comes? It ends up interfering with everything we have in flight in the instruction pipeline, because we have to transfer control somewhere else, so we introduce a big bubble in the pipeline. The result is that it's very difficult to sustain more than around 1,000 interrupts per second on most of these processors. That means a millisecond. Right? So if we had an RTT of less than a millisecond, we can't even manage to implement this well.
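
To make the starvation argument above concrete, here is a deliberately crude, round-based Python sketch (my toy, not a model of real TFRC or TCP): a flow that holds its rate steady over these timescales, as a smooth TFRC-like flow does, shares a drop-tail bottleneck with an AIMD flow. All capacities, rates, and window values are made-up illustrative numbers.

    LINK_CAPACITY = 100      # packets the bottleneck carries per round (one RTT)
    SMOOTH_RATE   = 90       # the smooth flow offers this many packets every round
    ROUNDS        = 200

    cwnd = 10.0              # AIMD congestion window of the TCP-like flow
    tcp_delivered = smooth_delivered = 0

    for _ in range(ROUNDS):
        offered_tcp = int(cwnd)
        total = SMOOTH_RATE + offered_tcp
        if total > LINK_CAPACITY:
            # Drop-tail: the excess is lost; the TCP-like flow halves its
            # window, while the smooth flow simply keeps its rate.
            cwnd = max(1.0, cwnd / 2)
            # Share the capacity in proportion to what each flow offered.
            tcp_delivered    += offered_tcp * LINK_CAPACITY // total
            smooth_delivered += SMOOTH_RATE * LINK_CAPACITY // total
        else:
            cwnd += 1.0                      # additive increase, one per RTT
            tcp_delivered    += offered_tcp
            smooth_delivered += SMOOTH_RATE

    print(f"smooth flow: {smooth_delivered} packets, "
          f"TCP-like flow: {tcp_delivered} packets")

Running this, the smooth flow ends up with almost all of the link while the AIMD flow oscillates between backing off and probing, which is the behaviour the slide describes.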