VoIP quality over IEEE 802.11b

Two Master’s theses:

Juan Carlos Martín Severiano, “IEEE 802.11b MAC layer’s influence on VoIP quality: Measurements and Analysis” [Severiano 2004]

Victor Yuri Diogo Nunes, “VoIP quality aspects in 802.11b networks” [Nunes 2004]


Slide Notes

Juan Carlos Martín Severiano, “IEEE 802.11b MAC layer’s influence on VoIP quality: Measurements and Analysis”, MS thesis, Royal Institute of Technology (KTH)/IMIT, Stockholm, Sweden, October 2004. https://urn.kb.se/resolve?urn=urn%3Anbn%3Ase%3Akth%3Adiva-92577

Victor Yuri Diogo Nunes, “VoIP quality aspects in 802.11b networks”, MS thesis, Royal Institute of Technology (KTH)/IMIT, Stockholm, Sweden, August 2004.


Transcript

[slide478] Now, some time ago, there were two Master's theses looking at VoIP quality over 802.11b networks, both with measurements out here on Isafjordsgatan and in-building measurements. Of course, as we know, voice over Wi-Fi has since grown tremendously. But initially, many people didn't think it would behave very well, and some very peculiar properties were observed. In particular, many systems, when they saw that they were losing packets, switched from a high data rate to a low data rate. Logically, you would think this is a good idea: the symbols get longer, so the probability of a symbol arriving successfully and being decoded correctly increases, right? Makes sense. Except that it's wrong. It's actually better to stay at the high data rate. Why? Yes? [student: If you have a burst of errors, if the link goes down even for milliseconds, then the quality will get really, really low if you have low quality; whereas if you have good quality, you will save it.] Yeah, but it turns out that the key problem is that all of the stations are contending using the same algorithm. They're all using the same MAC protocol, including the access point! If I have a number of mobile devices all talking to the same access point, and the access point has the same probability of getting the channel as any of the devices, what happens when one of them slows down? If one device switches to a lower data rate, its frames take longer on the channel, which slows down everyone using the same access point, because it's taking a bigger slice out of the whole channel's capacity.
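The airtime argument above can be put into numbers with a toy model. This is a minimal sketch assuming ideal per-frame fairness (each station gets one transmission opportunity per round) and an assumed fixed per-frame overhead; the rates are nominal 802.11b PHY rates, and none of the numbers are measurements from the theses.

```python
# Toy model of the 802.11 "performance anomaly": per-frame fairness
# means one slow station drags every station's throughput down.
# All constants here are illustrative assumptions.

PAYLOAD_BITS = 1500 * 8   # one frame's payload (assumed 1500-byte frames)
OVERHEAD_S = 0.000850     # rough per-frame MAC/PHY overhead in seconds (assumed)

def frame_time(rate_bps):
    """Airtime one frame occupies at a given PHY rate, including overhead."""
    return PAYLOAD_BITS / rate_bps + OVERHEAD_S

def per_station_throughput(rates_bps):
    """With per-frame fairness, each station sends one frame per round,
    so every station's throughput is one payload per total round time."""
    round_time = sum(frame_time(r) for r in rates_bps)
    return PAYLOAD_BITS / round_time

# Three stations all at 11 Mbit/s:
fast = per_station_throughput([11e6, 11e6, 11e6])
# Same three stations, but one has fallen back to 1 Mbit/s:
mixed = per_station_throughput([11e6, 11e6, 1e6])

print(f"all at 11 Mbit/s: {fast / 1e6:.2f} Mbit/s each")
print(f"one at 1 Mbit/s:  {mixed / 1e6:.2f} Mbit/s each")
```

With these assumed numbers, one station backing off to 1 Mbit/s cuts every station's throughput to roughly a third of what it was: the slow station's long frames dominate the round time that everyone shares.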
And so it actually turned out, in real-life measurements, that the best thing to do was simply to retransmit at the high data rate rather than back off to a lower one. If the errors weren't correlated, then losing one frame told you nothing about whether you would lose the adjacent frame. And with forward error correction on top of it, an occasional lost frame didn't matter at all. So you might even reduce the retransmission count: normally the link would be configured to retransmit up to about 8 times before giving up, but instead it was better to say, hey, try; if it fails once, try again very quickly; if that fails, just forget about it. A very, very different approach from the traditional way of thinking about it.
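The "try twice, then drop" idea can be sketched with a short calculation. This assumes frame losses are independent with some per-attempt loss probability; the probability used here is an assumption for illustration, not a value from the measurements.

```python
# Sketch: why a small retry cap is enough for voice when losses are
# independent and FEC/concealment can mask the occasional dropped frame.

def residual_loss(p, max_tries):
    """Probability a frame is still lost after max_tries attempts,
    assuming independent losses with per-attempt loss probability p."""
    return p ** max_tries

def expected_attempts(p, max_tries):
    """Expected number of transmissions per frame, capped at max_tries.
    Attempt k+1 happens only if the first k attempts all failed."""
    return sum(p ** k for k in range(max_tries))

p = 0.1  # assumed per-attempt frame loss probability at the high rate

for tries in (8, 2):
    print(f"{tries} tries: residual loss {residual_loss(p, tries):.2e}, "
          f"expected attempts {expected_attempts(p, tries):.3f}")
```

With a 10% per-attempt loss rate, two quick tries already bring the residual loss down to 1%, which FEC or concealment can cover, while the deeper retry limit mostly adds worst-case delay: for interactive voice, a frame that arrives after many retries is too late to be useful anyway.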