This article is concerned with congestion avoidance algorithms that coordinate the nodes of a network using a single-bit signal indicating whether the network is operating above or below its desired level of load. In the paper's model, time is divided into discrete intervals, and each user of the network sets its load for the next interval based on the feedback generated by the previous interval.
The algorithms primarily evaluated are those where, at each time t, the new load is a(signal) + b(signal)*(old load), where signal is the single-bit indicator of whether the network was over the desired load at time t-1. The appropriate values of a and b for each case are analyzed in light of convergence to both the desired level of load and the desired fairness. The main result is that additive increase/multiplicative decrease (i.e. a(underloaded) > 0, b(underloaded) = 1, a(overloaded) = 0, 0 < b(overloaded) < 1) is the combination of a and b functions that achieves convergence on both criteria. With respect to the rate of convergence, there is a tradeoff between converging quickly and keeping oscillation around the goal load small.
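To make the update rule concrete, here is a minimal simulation sketch of that model. The constants, starting loads, and variable names are my own assumptions, not values from the paper; the paper only requires the signs and ranges of a and b given above.

```python
# A minimal sketch of the paper's model (constants and names are assumed,
# not taken from the paper): n users share a link, every interval each
# user receives the same one-bit "overloaded?" signal, and applies
#   new_load = a(signal) + b(signal) * old_load.

N_USERS = 2
CAPACITY = 100.0      # desired total load
A_INCREASE = 1.0      # a(underloaded) > 0
B_DECREASE = 0.5      # 0 < b(overloaded) < 1

loads = [10.0, 80.0]  # deliberately unfair starting point

for t in range(50):
    overloaded = sum(loads) > CAPACITY  # the single-bit feedback
    if overloaded:
        # multiplicative decrease: a = 0, b = B_DECREASE
        loads = [B_DECREASE * x for x in loads]
    else:
        # additive increase: a = A_INCREASE, b = 1
        loads = [x + A_INCREASE for x in loads]
    print(t, [round(x, 1) for x in loads])
```

Running this shows both convergence properties at once: the total load oscillates around CAPACITY, while the gap between the two users shrinks, since additive increase grants equal absolute amounts and multiplicative decrease shrinks both loads proportionally, so the ratio of loads drifts toward 1.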
For me, the most interesting part of the paper is the idealized throughput and response-time curves. TCP-style congestion control has a boolean signal of sorts (a dropped packet), but where on those throughput/response-time curves does it fall?