Underlying Quality
Some statistics from Qwest for POP-to-POP measurements†
February Monthly Averages
Each cell shows loss (%) / latency (ms) / jitter (ms).

|           | Atlanta | Chicago | Dallas | Denver | Los Angeles (LA) | New York (NY) | Sunnyvale |
|-----------|---------|---------|--------|--------|------------------|---------------|-----------|
| Atlanta   | — | 0.00 / 39.64 / 0.05 | 0.00 / 24.13 / 0.05 | 0.00 / 45.21 / 0.05 | 0.00 / 52.10 / 0.08 | 0.00 / 20.35 / 0.00 | 0.00 / 61.10 / 0.14 |
| Chicago   | 0.00 / 39.46 / 0.09 | — | 0.00 / 24.10 / 0.08 | 0.00 / 23.32 / 0.07 | 0.00 / 56.13 / 0.33 | 0.00 / 20.17 / 0.01 | 0.00 / 48.10 / 0.09 |
| Dallas    | 0.00 / 24.13 / 0.05 | 0.00 / 24.12 / 0.05 | — | 0.00 / 21.21 / 0.07 | 0.00 / 40.77 / 0.07 | 0.00 / 44.24 / 0.02 | 0.00 / 46.14 / 0.10 |
| Denver    | 0.00 / 45.16 / 0.09 | 0.00 / 23.32 / 0.05 | 0.00 / 21.23 / 0.08 | — | 0.00 / 32.16 / 0.06 | 0.00 / 44.13 / 0.00 | 0.00 / 25.08 / 0.06 |
| LA        | 0.00 / 52.07 / 0.06 | 0.00 / 56.09 / 0.20 | 0.00 / 40.68 / 0.07 | 0.00 / 32.22 / 0.06 | — | 0.01 / 76.09 / 0.02 | 0.00 / 8.01 / 0.07 |
| NY        | 0.00 / 20.36 / 0.00 | 0.00 / 20.21 / 0.01 | 0.00 / 44.23 / 0.00 | 0.00 / 44.24 / 0.00 | 0.01 / 76.17 / 0.00 | — | 0.00 / 68.19 / 0.00 |
| Sunnyvale | 0.02 / 61.14 / 0.09 | 0.01 / 48.20 / 0.08 | 0.00 / 46.17 / 0.10 | 0.01 / 25.12 / 0.08 | 0.00 / 8.14 / 0.09 | 0.00 / 68.24 / 0.00 | — |
† Numbers taken from http://209.3.158.116/statqwest/statistics.jsp
Transcript
[slide465] If we actually look at the underlying statistics, and this is from Qwest, between a number of major cities in the U.S.: packet loss in percent, latency in milliseconds, jitter in milliseconds. We see an enormous number of zeros. They have a fiber network, so there is very, very little packet loss. Interestingly, in a number of cases there is basically zero milliseconds of jitter. So extremely low jitter, because the available bandwidth is so high that there is very low probability [of contention]. Now why would you expect that? With a fiber network you can expect relatively small amounts of packet loss, but why would we expect such a small amount of jitter? Even the non-zero values are pretty small: five hundredths of a millisecond, so 50 microseconds of jitter.

You all know that if we plot delay against load on the network, the curve starts out nearly flat: as we add more load, the delay doesn't change very much. But it suddenly goes non-linear and basically grows exponentially as we get near the limit of our capacity. Well, it turns out that most operators never run their networks above 50% utilization. That means you are always down in the flat portion of the curve, and that is how you get such low jitter variance. So basically all you see is the path delay plus a small amount for switching; sometimes about half of that can be accounted for by the switching or routing along the way. But you basically don't have other competing traffic, so the jitter is very tiny.

Yes? [student asks: But this means that you lose 50% of the utilization. That's an additional cost.] Yes. [student asks: So is it a trade-off between the delay, that is, the quality of service you're delivering, and the utilization?] It would be nice to think about it that way, and you do get that. But it actually turns out to happen for a much simpler reason, and that is provisioning new capacity. If I get into the steep part of the curve, I never want to operate there. So what do I need to do? I need to provision a new path to carry that load, but I typically can't do that very fast. So I need to make sure there is enough time, while the load grows toward the knee of the curve, to get the new capacity installed, so that I don't end up operating in the steep region, because then I have a lot of unhappy people.

But it's not clear what happens in a network where, for instance, new paths are provisioned by SDN-like techniques. We don't know, because Qwest was running their own network. Do these same things apply when you have wavelength-division multiplexing and SDN-like management of the network, or do people manage such networks to run at higher utilizations? What we do know is that, in general, there is more unlit fiber already in the ground than there is lit fiber, so there are vast amounts of capacity just sitting there completely idle. That, again, argues for over-provisioning.
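The delay-versus-load behaviour described above can be made concrete with a simple queueing formula. The sketch below is a minimal illustration, assuming an M/M/1-style model (an assumption; the lecture only describes the shape of the curve): the mean delay grows as 1/(1 − utilization), so at 50% load it is only twice the unloaded service time, while near full load it blows up.

```python
# Minimal sketch of the delay-vs-load curve, assuming an M/M/1 queueing
# model (an assumption; the transcript only describes the curve's shape).
# Mean time in system: W = S / (1 - rho), where S is the service time at
# zero load and rho is the utilization (offered load / capacity).

def mean_delay(service_time_ms: float, utilization: float) -> float:
    """M/M/1 mean delay in ms; blows up as utilization approaches 1."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

if __name__ == "__main__":
    S = 0.05  # assumed per-hop service time in ms, purely illustrative
    for rho in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99):
        print(f"utilization {rho:4.0%}: mean delay {mean_delay(S, rho):7.3f} ms")
    # At 50% utilization queueing adds only one extra service time, which
    # is consistent with jitter in the tens-of-microseconds range; past
    # ~90% the delay (and its variance) grows without bound.
```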
But I had a clever student a number of years ago who said: if I do find that I'm moving in that direction, what can I do? I can purposely switch CODECs, and then I move my operating point down on the curve. So, for instance, if I find that 4K video is too much and my performance is getting very bad, I switch down to just HD quality, or to an even lower quality. That moves my load back down. And I can do a similar thing for my voice CODECs: I can switch to CODECs that have a higher cost of compression and coding but a lower data rate. Or I may even increase my amount of forward error correction so that I tolerate some of the packets simply being dropped; since I've provided enough error correction, it doesn't matter that I'm throwing away some of my packets. That, again, has the property of moving us down on the curve. And so he built a system that did this adaptively. The result was that everybody who followed this adaptive scheme moved in the direction that made things better for everyone, because the delays got better for everyone.
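A rough sketch of what such an adaptive scheme could look like is given below. The codec names, data rates, and thresholds are invented for illustration and are not the original student's system; the point is only that when measured delay or loss rises, the sender steps down to a cheaper codec (or one carrying more forward error correction), moving its operating point back down the delay curve.

```python
# Hypothetical sketch of adaptive down-shifting, in the spirit of the
# adaptive scheme described above. Codec names, rates, and thresholds
# are illustrative assumptions, not the original system's values.
from dataclasses import dataclass

@dataclass
class Codec:
    name: str
    rate_kbps: int       # data rate put on the wire
    fec_overhead: float  # fraction of extra FEC packets sent

# Ordered from most to least demanding of the network.
LADDER = [
    Codec("video-4k", 15000, 0.00),
    Codec("video-hd",  5000, 0.00),
    Codec("video-sd",  1500, 0.05),
    Codec("audio-wideband", 64, 0.10),
    Codec("audio-lowrate",  16, 0.20),  # high compression cost, low rate
]

def choose_codec(current: int, delay_ms: float, loss_pct: float) -> int:
    """Return the new index into LADDER given measured path conditions."""
    if delay_ms > 100 or loss_pct > 1.0:
        return min(current + 1, len(LADDER) - 1)  # shift load down the curve
    if delay_ms < 30 and loss_pct < 0.1:
        return max(current - 1, 0)                # conditions good: shift back up
    return current                                # otherwise hold steady

# Example: delay creeping up while sending 4K video -> drop to HD.
idx = 0
idx = choose_codec(idx, delay_ms=120.0, loss_pct=0.0)
print(LADDER[idx].name)  # video-hd
```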