Selected Papers
1. "A Minimum Delay Routing Algorithm Using Distributed Computation," [Gallager 1977]. In this paper, the multipath routing problem (of determining, at each node, the fraction of traffic destined for a given destination that should be routed over each of the node’s outgoing links) is formulated as an optimization problem. An iterative, distributed algorithm, in which marginal delay information is passed to upstream nodes that then readjust their routing fractions, is shown to converge to minimize the overall average cost (e.g., delay) in the network. This paper (as well as [Kelly 1998] below) is a nice example of how network protocols (e.g., routing, rate control) can be naturally derived from well-posed optimization problems.
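The marginal-delay idea can be sketched in a few lines. This is a toy gradient-descent illustration, not the paper's full distributed algorithm: one source splits a fixed load across two links with M/M/1-style delay f/(c - f), shifting traffic toward the link with the smaller marginal delay until the marginal delays equalize. All capacities, loads, and step sizes below are illustrative values, not from the paper.

```python
def marginal_delay(f, c):
    """Derivative of the M/M/1-style link delay f/(c - f) with respect to load f."""
    return c / (c - f) ** 2

def balance(total=6.0, c1=10.0, c2=5.0, eta=0.01, iters=20000):
    """Gradient descent on phi, the fraction of traffic sent over link 1."""
    phi = 0.5  # start with an even split
    for _ in range(iters):
        # gradient of total delay w.r.t. phi is proportional to the
        # difference of marginal delays on the two links
        g = marginal_delay(phi * total, c1) - marginal_delay((1 - phi) * total, c2)
        phi = min(0.95, max(0.05, phi - eta * g))  # clamp away from overload
    return phi

phi = balance()
# At the optimum, the two links' marginal delays are (nearly) equal.
```

At convergence the faster link (capacity 10) carries most, but not all, of the load, exactly the equal-marginal-delay condition the paper's distributed iteration achieves network-wide.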
2. "The Design Philosophy of the DARPA Internet Protocols," [Clark 1988]. This paper provides a thoughtful retrospective view of the goals and design principles of the Internet architecture and its protocols. It has been a favorite among students in my networking classes, and paired with [Molinero-Fernandez 2003] has made for many lively and interesting class discussions.
3. "A Calculus for Network Delay, Part I: Network Elements in Isolation; Part II: Network Analysis," [Cruz 1991]. During the 1990’s, there was considerable foundational research on providing quality of service guarantees for flows that are multiplexed within the network. This paper describes an elegant "calculus" that provides provable worst-case performance (delay) bounds on per-session, end-end performance. Many important works followed this seminal work; a nice survey is [LeBoudec 2001].
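The flavor of the calculus can be conveyed by its most basic result: a flow constrained by a leaky-bucket arrival curve A(t) = sigma + rho*t, served by a work-conserving element of constant rate C >= rho, has worst-case backlog at most sigma and worst-case delay at most sigma/C. A minimal sketch of those two bounds (illustrative numbers, not from the paper):

```python
def delay_bound(sigma, rho, C):
    """Worst-case delay for a (sigma, rho)-constrained flow
    through a constant-rate server of rate C >= rho."""
    assert C >= rho, "bound requires service rate at least the sustained rate"
    return sigma / C

def backlog_bound(sigma, rho, C):
    """Worst-case backlog under the same assumptions: the burst size itself."""
    assert C >= rho
    return sigma

# e.g., a flow with burst 10 kb and sustained rate 5 kb/s through a
# 20 kb/s server waits at most 10/20 = 0.5 s and queues at most 10 kb
```

The power of the calculus is that such per-element bounds compose: the output of one element is itself burstiness-constrained, so bounds can be pushed through a network of elements.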
4. "A generalized processor sharing approach to flow control in integrated services networks," [Parekh 1993]. In many ways a companion paper to [Cruz 1991], this two-part paper demonstrates how provable per-node and end-end per-session performance bounds can be guaranteed, given a weighted fair-queueing discipline at each node.
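The paper's celebrated end-to-end result can be stated compactly. For a (sigma, rho)-constrained session given guaranteed rate g >= rho at each of K packetized-GPS (weighted fair queueing) nodes, the end-to-end delay is bounded (in its commonly quoted form) by sigma/g + (K-1)*L/g plus a per-link packetization term. A sketch of that bound as a function, with illustrative numbers:

```python
def pgps_delay_bound(sigma, g, L_session, link_caps, L_max):
    """End-to-end delay bound for a (sigma, rho)-constrained session with
    guaranteed rate g (>= rho) through K PGPS (weighted fair queueing) nodes:

        sigma/g + (K - 1) * L_session/g + sum_k L_max/C_k

    L_session: the session's maximum packet size;
    L_max: the maximum packet size at each link; link_caps: link rates C_k."""
    K = len(link_caps)
    return sigma / g + (K - 1) * L_session / g + sum(L_max / c for c in link_caps)

# e.g., burst 10 kb, guaranteed rate 1 Mb/s, 1 kb packets, three 10 Mb/s links
bound = pgps_delay_bound(1e4, 1e6, 1e3, [1e7, 1e7, 1e7], 1e3)
```

Notably, the bound depends on the session's own burst and guaranteed rate, not on the detailed behavior of competing traffic.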
5. "Equivalent capacity and its application to bandwidth allocation in high speed networks," [Guerin 1991]. The notion of effective bandwidth, an approximate characterization of the queueing behavior of a session when multiplexed with others, was developed by numerous researchers (see, e.g., [Kelly 1996]) throughout the 1990’s. This early paper introduced me to the idea, and sparked my interest in the area.
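The core of the idea fits in one formula. In the formulation popularized by [Kelly 1996], the effective bandwidth of a source with instantaneous rate X is alpha(s) = (1/s) * log E[exp(s*X)], where the space parameter s reflects how stringent the QoS target is: as s -> 0 it recovers the mean rate, and as s -> infinity the peak rate. A sketch for a simple two-state on-off source (illustrative parameters):

```python
import math

def effective_bandwidth(s, peak, p_on):
    """alpha(s) = (1/s) * log E[exp(s*X)] for a source that transmits at
    rate `peak` with probability p_on and is silent otherwise."""
    return math.log((1.0 - p_on) + p_on * math.exp(s * peak)) / s

# small s -> close to the mean rate p_on * peak;
# large s -> close to the peak rate
```

A session can then be treated, for admission control, as if it consumed a fixed bandwidth alpha(s) between its mean and peak, which is exactly the "equivalent capacity" viewpoint.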
6. "On the self-similar nature of Ethernet traffic (extended version)," [Leland 1994]. While long-range dependence, self-similarity, and heavy-tailed distributions are now a standard part of the lexicon of those interested in traffic characterization and descriptive network models, this paper introduced these ideas widely, launching many subsequent research efforts that have taken such an approach towards modeling.
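One of the diagnostic tools used in this line of work is the aggregated-variance (variance-time) estimate of the Hurst parameter H: for an H-self-similar series, the variance of block means scales as m**(2H - 2) in the block size m, so the slope of log-variance versus log-m reveals H. A sketch of that estimator, run here on i.i.d. noise (for which H should come out near 0.5, in contrast to the H well above 0.5 the paper measured on Ethernet traces):

```python
import math
import random
import statistics

def hurst_variance_time(series, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: fit the slope of
    log Var(block means) vs. log(block size); H = 1 + slope/2."""
    xs, ys = [], []
    for m in block_sizes:
        means = [statistics.fmean(series[i:i + m])
                 for i in range(0, len(series) - m + 1, m)]
        xs.append(math.log(m))
        ys.append(math.log(statistics.variance(means)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 + slope / 2.0

random.seed(7)
iid = [random.gauss(0.0, 1.0) for _ in range(20000)]
# short-range-dependent data: estimate should be near 0.5
```

For genuinely self-similar traffic the block-mean variance decays much more slowly than 1/m, which is precisely the "burstiness at every timescale" the paper documented.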
7. "Sharing the cost of multicast trees: an axiomatic analysis," [Herzog 1997]. Axiomatic methods – in which one poses a desired set of system properties or behaviors, and then develops a protocol that meets these properties (or proves an impossibility result – that there is no protocol that meets the requirements) – are a well-known technique in fields such as mathematical economics and social welfare theory. This paper was an elegant application of this set of tools in the networking domain.
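To give the flavor of the rules analyzed in this line of work, here is a sketch of an equal-link-split-downstream style allocation: each link's cost is divided equally among the receivers downstream of it, so shared links are shared and private links are paid for privately. The tree and costs below are a made-up toy example, not from the paper:

```python
def elsd_shares(links, receivers_below):
    """links: {link_name: cost}.
    receivers_below: {link_name: set of receivers downstream of that link}.
    Returns each receiver's total charge: an equal share of every
    link it uses, split among all receivers below that link."""
    shares = {}
    for link, cost in links.items():
        group = receivers_below[link]
        for r in group:
            shares[r] = shares.get(r, 0.0) + cost / len(group)
    return shares

# toy tree: a shared trunk (cost 6) then private links to r1 (2) and r2 (4)
links = {"root-A": 6.0, "A-r1": 2.0, "A-r2": 4.0}
below = {"root-A": {"r1", "r2"}, "A-r1": {"r1"}, "A-r2": {"r2"}}
```

Note the rule is budget-balanced by construction (the shares sum to the total tree cost); the axiomatic question is which such desirable properties can hold simultaneously, and at what protocol cost.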
8. "Rate control in communication networks: shadow prices, proportional fairness and stability," [Kelly 1998]. This paper formulates the rate control (congestion control) problem as a problem of allocating bandwidth to flows so as to optimize overall system "utility," showing that Jacobson’s TCP congestion control protocol (developed 10 years earlier using tremendous engineering insight) can be naturally interpreted as a distributed algorithm that iteratively solves this global optimization problem.
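The paper's "primal" dynamics can be illustrated in miniature: each flow adjusts its rate as dx/dt = kappa*(w/x - price), with log utility (weight w = 1) and a link price that grows when load exceeds capacity. A toy sketch for two flows sharing one unit-capacity link, with a simple penalty-function price and illustrative parameter values (not from the paper):

```python
def primal(capacity=1.0, beta=100.0, kappa=0.1, dt=0.01, steps=20000):
    """Euler-integrate Kelly-style primal rate dynamics for two flows with
    log utility sharing a single link of the given capacity."""
    x = [0.1, 0.1]  # initial sending rates
    for _ in range(steps):
        load = sum(x)
        price = beta * max(0.0, load - capacity)  # penalty price when overloaded
        # dx/dt = kappa * (1/x - price): marginal utility minus path price
        x = [xi + dt * kappa * (1.0 / xi - price) for xi in x]
    return x
```

The rates converge to the proportionally fair allocation (here, each flow gets roughly half the link, slightly above capacity/2 because the penalty price only approximates a hard constraint), mirroring how additive-increase pressure balances against congestion feedback in TCP.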
9. "Multicast-based Inference of Network-Internal Loss Characteristics," [Caceres 1999]. Many research efforts in network measurement through the mid-to-late 1990’s were descriptive in nature – taking active or passive measurements at various points in the network, and interpreting the observed performance (e.g., packet delay, packet loss, aggregate traffic mix, or throughput). This paper elegantly used statistical methods (maximum likelihood estimation) together with end-to-end multicast measurements to infer the (unseen) loss characteristics of individual links inside the network, between the measurement endpoints. Inference techniques have since become an important and widely-used part of the measurement toolkit.
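The two-receiver special case makes the idea concrete and has a simple closed-form estimator. A multicast probe crosses a shared link (pass probability a0) and then branches over two leaf links (pass probabilities a1, a2); since P(receiver 1 hears) = a0*a1, P(receiver 2 hears) = a0*a2, and P(both hear) = a0*a1*a2, the three link probabilities can be recovered from end-to-end observations alone. A sketch with simulated probes (loss rates and probe counts are illustrative):

```python
import random

def simulate_probes(a0, a1, a2, n, seed=1):
    """Multicast probes through a 2-leaf tree: a shared link with pass
    probability a0, then independent leaf links a1 and a2."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n):
        shared = rng.random() < a0
        outcomes.append((shared and rng.random() < a1,
                         shared and rng.random() < a2))
    return outcomes

def estimate(outcomes):
    """Closed-form estimates from marginal receipt rates p1, p2 and the
    joint rate p12: a0 = p1*p2/p12, a1 = p12/p2, a2 = p12/p1."""
    n = len(outcomes)
    p1 = sum(y1 for y1, _ in outcomes) / n
    p2 = sum(y2 for _, y2 in outcomes) / n
    p12 = sum(y1 and y2 for y1, y2 in outcomes) / n
    return p1 * p2 / p12, p12 / p2, p12 / p1
```

The key trick is the correlation that multicast induces: both receivers share the fate of the probe on the common link, and that shared fate is what makes the interior link statistically identifiable.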
10. "Internet indirection infrastructure," [Stoica 2004]. It’s been said (in a quote often attributed to Butler Lampson) that nearly every problem in computer science can be solved by adding another level of indirection. I had always thought that this quote applied to data structures and algorithms. This paper, however, opened my eyes to how indirection can be used in an elegant and clean distributed network architecture for providing a variety of overlay services.
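The essence of the rendezvous abstraction can be sketched in a few lines. This is a hypothetical toy (a single in-process table, not the paper's distributed overlay): receivers insert "triggers" mapping an identifier to where they want packets delivered, and senders address data to identifiers rather than hosts.

```python
class Indirection:
    """Toy rendezvous point: senders send to ids; receivers install triggers."""

    def __init__(self):
        self.triggers = {}  # identifier -> list of delivery callbacks

    def insert_trigger(self, ident, deliver):
        """A receiver registers interest in an identifier."""
        self.triggers.setdefault(ident, []).append(deliver)

    def send(self, ident, data):
        """A sender addresses data to an identifier, not a host."""
        for deliver in self.triggers.get(ident, []):
            deliver(data)  # multicast falls out of multiple triggers per id

inbox = []
net = Indirection()
net.insert_trigger("video-42", inbox.append)
net.send("video-42", "frame-1")
```

With this one level of indirection, mobility becomes a trigger update, and multicast and anycast become policies over the set of triggers for an identifier, rather than separate network-layer mechanisms.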