
Rate-Limiting State

By Paul Vixie

Communications of the ACM, Vol. 57 No. 4, Pages 40-43


By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we are at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills—and the quality assurance budgets—that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working "well enough" in spite of its failings.

Witness the hundreds of millions of customer-premises equipment (CPE) boxes with literally too much memory for buffering packets. As Jim Gettys and Dave Taht have been demonstrating in recent years, more is not better when it comes to packet memory.1 Wireless networks in homes, coffee shops, and businesses all degrade shockingly as the traffic load increases. Rather than the "fair-share" scheduling we expect, where N network flows each get roughly 1/Nth of the available bandwidth, network flows end up in quicksand where each gets only 1/N² of the available bandwidth. This is not because CPE designers are incompetent; rather, it is because the Internet is a big place with a lot of subtle interactions that depend on every device and software designer having the same—largely undocumented—assumptions.
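To make the arithmetic concrete, here is a small illustrative sketch (not from the article; the 100-Mbit/s link capacity and function names are assumptions for illustration) comparing the per-flow bandwidth under ideal fair-share scheduling with the degraded "quicksand" regime described above:

```python
def fair_share(link_mbps: float, n_flows: int) -> float:
    """Ideal fair-share scheduling: each of N flows gets 1/N of the link."""
    return link_mbps / n_flows

def quicksand_share(link_mbps: float, n_flows: int) -> float:
    """Degraded (bufferbloat) regime: each flow gets only 1/N^2 of the link."""
    return link_mbps / (n_flows ** 2)

if __name__ == "__main__":
    link = 100.0  # Mbit/s -- an assumed link capacity for illustration
    for n in (1, 2, 4, 8, 16):
        print(f"N={n:2d}  fair={fair_share(link, n):7.2f} Mbit/s  "
              f"quicksand={quicksand_share(link, n):7.2f} Mbit/s")
```

At N=16 flows the difference is stark: fair-share scheduling would give each flow 6.25 Mbit/s, while the quicksand regime leaves each with roughly 0.39 Mbit/s, and the aggregate throughput actually delivered collapses rather than staying near link capacity.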
