Traditional Telecom vs. Datacom

Telecom: circuit-switched
Datacom: packet-switched

Telecom: standardized interfaces
Datacom: standardized protocols and packet formats

Telecom: lots of internal state (in each switch and other network nodes)
Datacom: very limited internal state
  • caches and other state are soft state, built dynamically from observed traffic (see the sketch below)
  • no session state in the network
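
To make the soft-state idea concrete, here is a minimal sketch (mine, not from the slides) of a cache in the style of an ARP table: entries are learned from traffic, refreshed by traffic, and silently expire, so a node can always rebuild its state just by watching packets. The class and field names are illustrative only.

    import time

    class SoftStateCache:
        """Hypothetical soft-state table: entries are learned from traffic
        and expire after a TTL, so nothing needs explicit teardown."""

        def __init__(self, ttl_seconds=60.0):
            self.ttl = ttl_seconds
            self.entries = {}          # key -> (value, expiry time)

        def learn(self, key, value):
            # (Re)install state whenever matching traffic is observed;
            # each packet refreshes the entry's lifetime.
            self.entries[key] = (value, time.monotonic() + self.ttl)

        def lookup(self, key):
            # Expired entries are treated as absent: if the traffic stops,
            # the state evaporates on its own and can be rebuilt later.
            value, expiry = self.entries.get(key, (None, 0.0))
            if time.monotonic() >= expiry:
                self.entries.pop(key, None)
                return None
            return value

    cache = SoftStateCache(ttl_seconds=5.0)
    cache.learn("192.0.2.7", "00:11:22:33:44:55")   # learned from an observed frame
    print(cache.lookup("192.0.2.7"))                # hit while the entry is fresh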

Telecom: long setup times, since the route (with QoS) has to be set up end to end before any traffic can flow

Datacom: the End-to-End Argument ⇒ integrity of communications is the responsibility of the end nodes, not the network [RFC 1958, RFC 3439] (illustrated below)
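
As a toy illustration of the end-to-end argument (my sketch, not from the slides or the RFCs): the network in between may corrupt data, and it is the end hosts that detect this and repair it, here with an end-to-end checksum. The function names are hypothetical.

    import hashlib

    def send(payload: bytes) -> bytes:
        # The end node attaches an end-to-end checksum; the network in
        # between is not trusted (and not required) to preserve integrity.
        return payload + hashlib.sha256(payload).digest()

    def receive(frame: bytes):
        payload, digest = frame[:-32], frame[-32:]
        if hashlib.sha256(payload).digest() != digest:
            return None          # corruption detected by the *end node*,
        return payload           # which would then ask for a retransmission

    frame = send(b"hello")
    corrupted = b"jello" + frame[5:]     # simulate in-network corruption
    assert receive(frame) == b"hello"
    assert receive(corrupted) is None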

Telecom: services built into the network ⇒ hard to add new services
  • operators decide what services users can have
  • all elements of the network have to support a service before it can be introduced
  • application programming interfaces (APIs) are often vendor specific or even proprietary

Datacom: services can be added by anyone
  • since they can be provided by any node attached to the network (see the sketch below)
  • users control their choice of services
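
To illustrate "services can be added by anyone" (a sketch under the assumption of ordinary socket access; the service, class name, and port 9000 are all made up): a brand-new service is just a program listening on a port at some host attached to the network. No network element has to be upgraded and no operator has to approve it.

    import socketserver

    class UpperCaseService(socketserver.StreamRequestHandler):
        # A hypothetical new "service": uppercase whatever the client sends.
        # Deploying it requires nothing from the network operator.
        def handle(self):
            data = self.rfile.readline()
            self.wfile.write(data.upper())

    if __name__ == "__main__":
        # Any host attached to the network can offer this on any free port.
        with socketserver.TCPServer(("0.0.0.0", 9000), UpperCaseService) as srv:
            srv.serve_forever()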

Telecom: centralized control

Datacom: no central control ⇒ no one can easily turn it off

Telecom: “carrier class” equipment and specifications
  • target: very high availability, 99.999% (≈5 min./year of unavailability)
  • all equipment, links, etc. must operate with very high availability

Datacom: a mix of “carrier class”, business, & consumer equipment
  • backbone target: high availability, >99.99% (≈50 min./year of unavailability)
  • local networks: availability >99% (several days/year of unavailability)
  • in aggregate, availability is extremely high, because the network elements fail independently: the chance that redundant parts are all down at once is the product of their individual unavailabilities (worked out below)
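
The aggregate-availability claim is just arithmetic over independent failures; here is a quick check using the slide's own numbers (the variable names are mine):

    # Unavailability of one element with 99% availability:
    u = 1 - 0.99                      # 10^-2

    # Two independent, redundant elements (either one can carry the traffic)
    # are both down only u * u of the time:
    both_down = u * u                 # 10^-4 -> 99.99% available as a pair
    print(both_down)                  # 0.0001

    # Converting availability targets into downtime per year:
    minutes_per_year = 365.25 * 24 * 60
    for availability in (0.99999, 0.9999, 0.99):
        downtime_min = (1 - availability) * minutes_per_year
        print(f"{availability:.5f} -> {downtime_min:8.1f} min/year "
              f"({downtime_min / (24 * 60):.2f} days)")

Note that the multiplication only holds if failures really are independent; correlated failures (a shared power feed, a common fiber duct) break the argument.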

Telecom: long tradition of slow changes
  • PBXs stay in service > ~10 years; public exchanges ~30 years

Datacom: short tradition of very fast change
  • Moore’s Law doublings every 18 (or even 9) months! (see the quick calculation below)
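
To put the pace difference in numbers (a back-of-the-envelope sketch, not from the slides): over the ~30-year service life of a public exchange, capacity doubling every 18 months compounds to roughly a millionfold improvement on the datacom side.

    def growth_factor(years, doubling_months):
        # Number of doublings in the period, as a capacity multiplier.
        return 2 ** (12 * years / doubling_months)

    print(f"{growth_factor(30, 18):,.0f}x over 30 years at 18-month doublings")
    print(f"{growth_factor(30, 9):,.0f}x over 30 years at 9-month doublings")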

Telecom: clear operator role (well enshrined in public law)

Datacom: unclear what the role of operators is (or even who is an operator)


Slide Notes

 

[RFC 1958] B. Carpenter, ‘Architectural Principles of the Internet’, Internet Request for Comments, vol. RFC 1958 (Informational), Jun. 1996. [Online]. Available: http://www.rfc-editor.org/rfc/rfc1958.txt

[RFC 3439] R. Bush and D. Meyer, ‘Some Internet Architectural Guidelines and Philosophy’, Internet Request for Comments, vol. RFC 3439 (Informational), Dec. 2002. [Online]. Available: http://www.rfc-editor.org/rfc/rfc3439.txt


Transcript

[slide76] The first thing to do is look at traditional telecom versus voice over IP. In traditional telecom, we're circuit switched; in VoIP, we're packet switched. In traditional telecom, everything is about standard interfaces. It's about a standard plug. It's all defined down to things that I can plug together. In the packet-switched world, all we cared about was standard protocols. I don't care what your plugs look like. We can figure that out, but we'd better have an agreed-upon packet format. Lots of internal state in switches; in the packet-switched world, we usually try to have very little state, or only soft state. Really long call setup times in the traditional telephony world, but the end-to-end argument applies in the packet-switched world. The service is built into the network versus being added by anyone. Centralized versus no central control.

And here's a really fascinating one: so-called carrier class equipment and specifications. In the telephony world, they talk about five nines of reliability. That means five minutes per year that the system is unavailable. That's extremely high reliability. If you looked at a Class 5 switch built by AT&T's Western Electric arm, everything was redundant. You had two processors and all of this, so that you could ensure very high reliability. If we look at the packet-switched case, what happens? Well, the backbone is mostly around 99.99%. That's 50 minutes per year. Local networks, 99%. That's three or four days per year that it's unavailable. But the interesting thing is, in the aggregate, what happens? What happens if I have one part of the system with 99% availability, and another independent, redundant part with 99% availability? How often are both unavailable at once? Right: one times ten to the minus four, if I did my arithmetic correctly. That's ten to the minus two times ten to the minus two. The good thing is that the product of these unavailabilities means the system, as a whole, has very high availability, even though the individual pieces may be pretty crappy. The system as a whole survives because we have no central control, we have no central clock. In the telephony system, if the central clock goes out, the entire system doesn't work.

Okay, a number of years ago, there was a big storm here in Sweden called Gudrun. Anyone remember that? It wiped out an enormous part of southern Sweden's telecom infrastructure and knocked down trees. In fact, just the felled trees produced a pile so big you could see it from space. For months, there were areas that didn't have cellular coverage. How long will it take for them to get back to 99.999% availability? Centuries, right? But in the telecom world, we don't talk about that. That's a disaster; it's exceptional. Right? It still happens.

So the key point is that 99.999% is the goal. That's what people want to have; it's a marketing ploy. And they had to have very high reliability of everything, because it was all centrally controlled and it all had to work, or the system didn't work. This huge thing of suddenly having independent parts that can fail without causing a system failure is an entirely different world. Long tradition of slow changes, short tradition of very fast change. And of course, the operator role was enshrined in public law, but it's not even clear who an operator is in the packet-switched case.