P. Karbhari, M. Rabinovich, Z. Xiao, F. Douglis
A CDN for Applications
Content delivery networks (CDNs) improve the scalability of access to static and, more recently, streaming content. However, forward caching platforms can improve access to these types of content as well. A unique value of CDNs lies in scaling up access to dynamic content and other computer applications. This talk outlines the components, goals, and challenges involved in building a CDN for applications. We also describe our prototype implementation of such a system.
David Starobinski, David Tse
Probabilistic Methods for Web Caching
We introduce and analyze new randomized policies for the management of web caches. The proposed policies are fully scalable, that is, handling a hit or an eviction requires only O(1) time. Our analysis is probabilistic in nature, based on an extended version of the independent reference (IR) model. The extension is needed in order to deal with the varying-cost and varying-size features of web documents. Under this assumption, we derive closed-form expressions for the stationary probabilities of finding each possible arrangement of the documents within the cache. Our analysis shows that the performance of the proposed algorithms is close to that of the optimal off-line algorithm. Using simulations and real traces, we also show that the new algorithms perform at least as well as existing algorithms of higher complexity.
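As a rough illustration of the O(1) property claimed above, the following Python sketch implements a toy randomized cache; the admission rule and uniform random eviction are our own simplifications for illustration, not the authors' policies:

    import random

    class RandomizedCache:
        """Toy O(1) randomized cache (an illustration of the scalability
        property above, not the authors' algorithms)."""

        def __init__(self, capacity):
            self.capacity = capacity   # number of document slots
            self.slots = []            # cached document ids, unordered
            self.index = {}            # document id -> position in slots

        def access(self, doc_id, cost=1.0, size=1.0):
            """Returns True on a hit. Hits need no bookkeeping, and
            evictions pick a uniformly random victim, so both are O(1)."""
            if doc_id in self.index:
                return True
            # Illustrative admission rule biased toward high-cost, small
            # documents (the varying-cost/varying-size aspect of the model).
            if random.random() > min(1.0, cost / size):
                return False
            if len(self.slots) >= self.capacity:
                pos = random.randrange(len(self.slots))
                del self.index[self.slots[pos]]   # evict a random victim
                self.slots[pos] = doc_id          # reuse its slot
                self.index[doc_id] = pos
            else:
                self.index[doc_id] = len(self.slots)
                self.slots.append(doc_id)
            return False

Because no recency ordering is maintained, both hits and evictions touch only a constant number of entries, which is the scalability property the abstract emphasizes.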
Paul Barford, Jin-Yi Cai, Jim Gast
Cache Placement Methods Based on Client Demand Clustering
One of the principal motivations for content delivery networks, such as those that have been deployed over the past three years, is to improve performance from the perspective of the client. Ideally, this is achieved by placing caches close to groups of clients and then routing client requests to the nearest cache. In the first part of this paper we present a new method for identifying regions of client demand. Our method uses best path information from Border Gateway Protocol (BGP) routing tables to create a hierarchical clustering of autonomous systems (AS's). The method iteratively adds small clusters to larger clusters based on minimizing the Hamming distance between the neighbor sets of the clusters. This method results in a forest of AS trees where we define each tree root as an Internet backbone node. This forest representation of AS connectivity is an idealization of the Internet's true structure. We test for fidelity by comparing AS hop distances to the Internet backbone. One of the strengths of our AS clustering method is that it naturally lends itself to the cache placement problem. In the second part of this paper, we present two cache placement algorithms based on a tree graph of demand. The algorithms address the problems of placing single caches and multiple caches so as to minimize inter-AS traffic and client response time. We evaluate the effectiveness of our cache placement algorithms using Web server logs and show that they can greatly improve performance over random cache placement.
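A minimal sketch of the clustering step, assuming neighbor sets extracted from BGP best paths; the greedy merge order and stopping threshold are our simplifications of the iterative procedure described above:

    def hamming(a, b):
        """Hamming distance between neighbor sets = |symmetric difference|."""
        return len(a ^ b)

    def cluster_ases(neighbors, threshold):
        """Greedy sketch: repeatedly merge the smallest cluster into the
        cluster whose neighbor set is nearest in Hamming distance, while
        that distance stays below `threshold`.
        neighbors: dict mapping AS number -> set of neighbor AS numbers
        (taken from BGP best paths)."""
        members = {asn: {asn} for asn in neighbors}
        neigh = {asn: set(ns) for asn, ns in neighbors.items()}
        while len(members) > 1:
            small = min(members, key=lambda c: len(members[c]))
            best, best_d = None, threshold
            for other in members:
                if other != small:
                    d = hamming(neigh[small], neigh[other])
                    if d < best_d:
                        best, best_d = other, d
            if best is None:
                break                       # nothing close enough: stop
            members[best] |= members.pop(small)
            neigh[best] |= neigh.pop(small)
        return list(members.values())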
Steven Low
High Density CDN Model for Server Placement & Allocation
It is well-known that the problem of optimal server placement and allocation in content distribution networks is NP-hard. In this talk, we describe an approximate model for the case where both the users and servers are dense. This model leads to a simple lower bound on the network cost of server placement and allocation, and suggests an optimal strategy where the spatial density (location) of servers should be proportional to the square-root of the spatial density of user requests and where the allocation of servers should be weighted by the square-root of file popularity. The model highlights the importance of spatial distribution of user requests.
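In symbols (notation ours), the square-root strategy stated above can be written as

    \rho_{\mathrm{server}}(x) \;\propto\; \sqrt{\rho_{\mathrm{request}}(x)},
    \qquad
    n_j \;\propto\; \sqrt{p_j},

where \rho_{\mathrm{request}}(x) is the spatial density of user requests at location x, n_j is the number of servers allocated to file j, and p_j is the popularity of file j.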
Geoff Voelker
Internet Content Distribution: Trends and Implications
Large-scale infrastructure, such as caching and content delivery systems, has been widely deployed to improve the performance of the delivery of Web content. In this talk we survey work that evaluates the effectiveness of Web optimization and delivery systems at large scales, as well as the sometimes subtle interactions of these systems. We then consider the implications of emerging workloads, such as streaming media, and speculate on the future of large-scale content distribution and the underlying Internet.
Peter Marbach
A Data Transfer Model for Peer-to-Peer Networks
We consider a network that is formed by independent peers in a self-organized way, without relying on any established infrastructure or centralized administration. All services essential to the operation of the network are provided by the peers themselves. For example, the transmission of data between two distant peers A and B relies on intermediate peers that forward packets for the benefit of A and B. In such a network, however, there is no good reason to assume that peers cooperate and provide services to each other. Indeed, the contrary is true: service provisioning is not in the interest of the peers, because it reduces the communication resources available for their own traffic. Wireless ad-hoc networks are an example of such peer-to-peer networks. We propose a data transfer model for peer-to-peer networks which aims at stimulating cooperation among nodes, and study its resource-sharing properties.
Bobby Bhattacharjee and Suman Banerjee
Scalable Application-Layer Multicast for Content Distribution
We describe a new application-layer multicast protocol that is specifically designed to scale to large groups. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can be used to produce a number of different data delivery trees with specific properties. On average, group members using our protocol maintain only a constant amount of state about other group members, and incur a constant amount of control overhead. The trees produced by our protocol have relatively low latency and link stress, and can be used to implement scalable content delivery protocols.
John Byers
Streamlining Unicast Delivery of Popular Content via Fast Forward Error Correcting Codes
Forward error correcting codes (aka erasure-resilient codes) have recently seen wide use as an enabling technology for providing resilience to packet loss, in content delivery applications ranging from reliable multicast to delivery of streaming media to voice over IP. In this talk, we sketch these contributions and present several other significant benefits of transmitting encoded content -- most notably in the context of optimizing unicast delivery of large, popular files via TCP connections from a single webserver. The Cyclone webserver architecture which we describe alleviates many of the most serious scalability problems associated with conventional delivery of such files by leveraging codes to 1) maximize sharing of state across request threads and 2) eliminate the need for TCP retransmission buffers. If time permits, we will also describe applications of FEC codes to content delivery applications on overlay networks, such as reliable application-level multicast.
No prior familiarity with coding techniques will be assumed; the talk will be self-contained. Work on the Cyclone server is joint with Stan Rost (Boston University and MIT) and Azer Bestavros (Boston University).
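For readers new to erasure-resilient codes, the toy (k+1, k) parity code below (our own minimal Python example, far weaker than the fast codes used in Cyclone) shows the property the talk builds on: any k of the k+1 encoded blocks reconstruct the original data, so the sender need not know which particular block was lost:

    def xor_blocks(blocks):
        """Bytewise XOR of equal-length byte strings."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    def encode(data_blocks):
        """k data blocks -> k+1 encoded blocks (data plus one XOR parity)."""
        return list(data_blocks) + [xor_blocks(data_blocks)]

    def recover(received):
        """received: the k+1 blocks with at most one lost entry set to None.
        Any k surviving blocks reconstruct the k data blocks."""
        data = list(received[:-1])
        if None not in received:
            return data
        lost = received.index(None)
        survivors = [b for b in received if b is not None]
        rebuilt = xor_blocks(survivors)     # XOR of survivors = lost block
        if lost < len(data):
            data[lost] = rebuilt            # a lost data block is rebuilt
        return data                         # if only parity was lost: done

For example, encode([b"ab", b"cd"]) yields three blocks, and recover() returns the two data blocks even if any one of the three is lost. Production codes generalize this to tolerate many losses at high encoding speed.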
Arun K Somani
Beyond SONET and SDH: Challenges and Issues
SONET has played an important role in the design and operation of a number of networks around the globe. However, it is fast approaching its limits. SONET was originally built for transporting 64-kbps voice channel traffic and is not well suited for the efficient transport of Internet data traffic. In addition, difficulties in bandwidth scaling, the need for quick provisioning, and the need for service differentiation all motivate evolving traditional SONET/SDH toward newer technologies. We will examine some of the issues that need to be considered in the evolution of SONET/SDH to newer network technologies that use wavelength-division multiplexing (WDM) based optical networking. We will address issues related to traffic grooming to support IP over WDM, satisfying demands for different Quality of Service (QoS), and survivability in WDM-based networks.
Fabio Neri
Interconnected WDM Rings for Metro Networks
The talk focuses on measurement-based resource allocation schemes for interconnected WDM rings in a metropolitan area network. These algorithms were proposed for the European project DAVID (Data And Voice Integration over D-WDM), which aims at the design of an optical packet-switched network for the transport of IP traffic. The DAVID network has a two-level hierarchical structure, with a backbone of optical packet routers interconnected in a mesh, and metropolitan areas served by sets of optical rings connected to the backbone through devices called Hubs. The talk focuses on the operation of the media access protocol and on resource allocation schemes to be used in the metropolitan sections of the DAVID network. The Hub implements ring-to-ring permutations, with neither buffering nor contention problems. Network nodes are equipped with tunable transceivers to receive and transmit packets on the WDM ring they are attached to. A simple scheme for datagram (non-guaranteed) traffic will be described, and its performance will be studied by simulation.
Martin Maier
Next-Generation Metro WDM Networks
Current metro networks are mostly based on inefficient SONET/SDH rings that prevent local broadband clients such as GbE from taking full advantage of the large WDM pipes available in the backbone. Next-generation metro WDM networks will transport IP packets directly over the optical layer. Apart from ring topologies, e.g., HORNET and RPR, arrayed-waveguide grating (AWG) based star topologies have attracted much attention as candidates for future metro WDM networks. In this talk we discuss various metro WDM architectures and protocols and outline their merits and drawbacks.
Joe Touch
An Optically Turbocharged Internet Router
Optical technologies have substantially increased the capacity of communication links and circuit switches. It has been difficult, however, to apply optical capabilities to accelerate routers to support packet forwarding, the critical function of the core of the Internet. Current mechanisms map flows of packets onto optical circuits and automate circuit configuration. Such techniques try to wedge the Internet's packet model onto a circuit model, and work only where optical circuits are contiguous. Even then, they meet with only limited success. This talk considers the design of an optically turbocharged router that forwards packets among several single-channel terabit-per-second links without internal congestion or packet loss. The router consists of an optical booster integrated with a conventional electronic Internet router. The optical booster processes a subset of Internet Protocol (IP) packets at a much higher speed, and the remainder of the packets (those with options, or requiring complex lookups) fall through to the electronic router. Optical boosting is an inexpensive upgrade that can be deployed incrementally in a backbone IP network. This talk presents simulation results and a discussion of the feasibility of implementing an optical forwarder with existing technology.
Eytan Modiano
Circuit vs. packet switching in future optical networks
Wavelength Division Multiplexing (WDM) is emerging as a dominant technology for use in backbone networks. With WDM, the capacity of a fiber is significantly increased by allowing simultaneous transmission on multiple wavelengths (channels), each operating at the maximum electronic rate. With tens of wavelengths per fiber and transmission rates of up to 10 Gbps per wavelength, capacities that approach a terabit per second can be achieved. While such enormous capacity is very exciting, it also places a tremendous burden on the switches and routers at each node, which must somehow process all of this information. One way to deal with all of this traffic is through the use of optical packet switching technology, which presumably can process much greater traffic loads than its electronic counterparts. However, such technology is far from mature. An alternative for alleviating this electronic bottleneck is the use of wavelength switches, which allow each wavelength either to be dropped (and processed) at a node or to bypass that node. There are a number of ways in which wavelength switching can be used to reduce the electronic processing load in the network.
First, a configurable WDM optical layer can be used to dynamically reconfigure the "logical" (i.e., electronic) topology of the network in response to changes in traffic conditions. In a packet switched network, traffic between a source and destination node typically goes through a number of intermediate electronic switches and routers. As the traffic in the network changes, so does the load on the routers. With a configurable WDM topology, it is possible to change the way in which the routers are connected in order to balance the load in the network in response to changes in traffic conditions. This ability can be used to significantly reduce network loads and lead to improved quality of service.
The ultimate form of optical bypass in the network is optical flow switching (OFS). With OFS, an all-optical (WDM) connection is established between end-users, bypassing all of the electronics in the network. Optical flow switching is particularly effective for very large transactions (e.g., gigabit file transfers) that might otherwise clog the network. One of the main bottlenecks in the present Internet is routing at the IP layer, and several methods have been proposed to alleviate this bottleneck by switching long-duration flows at lower layers (e.g., MPLS). This concept of lower-layer switching can be extended to switching large-volume and/or long-duration flows at the optical layer. That is, a lightpath can be established for large data transactions such as the transfer of large files or long-duration, high-bandwidth streams across the network. This optical flow will bypass all of the electronics in the network and be switched at the WDM layer. We will briefly discuss some of the important problems that arise in optical flow switching, such as routing, wavelength assignment, and connection establishment.
Mario Baldi and Yoram Ofek
Dynamic Optical Networking
Electronic asynchronous packet switching has been developed since the early 1960s and now, at the dawn of the 21st century, it is a well-understood paradigm. All-optical packet switching, on the other hand, is a new paradigm, currently undergoing research and development. In this paper we show that the transition from electronic to optical is not as straightforward as one would like to think.
The main thesis of this paper is that the transition to dynamic all-optical switching will require changing the current asynchronous packet switching paradigm. This is shown by focusing the analysis on the simple problem of optical memory, while assuming that the more complex problems associated with optical packet header processing and switching have been resolved, which is indeed not the case.
In order to evaluate optical memory, various comparative measures are used. For example, far more material is needed for optical memory (silica glass) than for electronic memory (silicon), in fact a factor of 1 million! If we assume that silicon circuitry shrinks by a factor of two every 18 months, this translates into reversing Moore's law by 30 years.
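The 30-year figure follows directly from the stated assumptions:

    10^{6} \approx 2^{20}
    \quad\Longrightarrow\quad
    20 \times 18~\text{months} = 360~\text{months} = 30~\text{years},

that is, a millionfold increase in required material corresponds to undoing about twenty doublings of circuit density.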
It will be shown that a common time reference (CTR) is needed for realizing the optical random access memory (O-RAM) required for dynamic optical networking. Furthermore, it will be shown that if the CTR can be (globally) distributed, then it is possible to (globally) distribute the O-RAM and, consequently, to realize a new dynamic optical networking architecture called Fractional Lambda Switching (FλS).
Daniel Lee
FOSSIL (Forest for OVSF-Sequence-Set-Inducing Lineages) and its Use
We present a spreading gain adaptation protocol between a transmitter and a receiver communicating with each other through CDMA. This protocol does not require the transfer of an explicit control message indicating the change of CDMA spreading gain from the transmitter to the receiver. Also, under this protocol the transmitter can change the spreading gain symbol-by-symbol rather than frame-by-frame; thus, faster adaptation to the channel condition than the existing frame-by-frame approach is enabled. The requirement is that the transmitter and receiver must have a set of OVSF code sequences of different lengths that are cyclically orthogonal to each other. We propose such a set of code sequences. We then analyze the performance of the protocol under the assumptions of Nakagami-m/Rayleigh/one-sided Gaussian fading channels with AWGN.
John N. Daigle
Performance Modeling of STDM Systems with Reservation
We discuss here an ad hoc method of assessing the performance of a statistical time division multiplexing system that requires reservations prior to transmission. Time is organized as a sequence of equal-length frames, each frame having a multi-slot contention period and a multi-slot service period. Messages are drawn from a general distribution having finite support. Users wanting to send messages join a contention group and then choose a contention-slot at random in each successive frame until they are successful. Users who have successfully contended join a service queue where they wait until their turn for service, which may be multiple frame times in the future. We obtain estimates of throughput and average waiting time as a function of the system parameters via an ad hoc iterative technique.
Dan Rubenstein
Pricing Multicast in More Practical Network Paradigms
In this talk, we explore the complexity of using a bid-based approach to price multicast sessions in heterogeneous networking environments. A justifiable pricing mechanism must guarantee the network a profit and should be implementable via simple and efficient protocols. We present algorithms that implement a bid-based approach in heterogeneous networking environments where 1) a session is offered at multiple rates and receivers place bids for each offered rate and 2) the transmission is performed upon an overlay where there is a cost to the network for a node to join the overlay. We demonstrate that these algorithms, which require a single pass up and down the nodes that might participate in the multicast tree, choose the session rates for all receivers so as to maximize network profit. We also summarize our results that show how to extend these algorithms into strategyproof forms in which receivers have no incentive to bid more or less on each offered rate than the value of receiving the transmission at that rate.
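The following Python sketch illustrates the single up-and-down pass for the special case of a single-rate session (our simplification; the talk's algorithms handle multiple rates and per-rate bids):

    def prune_for_profit(children, root, cost, bid):
        """Single-rate sketch of bid-based pricing on an overlay tree.
        children: node -> list of child nodes; cost: node -> cost to the
        network of having that node join the overlay; bid: node -> bid of
        the receiver at that node (omit for pure forwarding nodes)."""
        subtree_profit = {}

        def up(v):                          # pass up the tree
            p = bid.get(v, 0.0) - cost.get(v, 0.0)
            for c in children.get(v, []):
                p += max(0.0, up(c))        # drop unprofitable subtrees
            subtree_profit[v] = p
            return p

        keep = set()

        def down(v):                        # pass down the tree
            keep.add(v)
            for c in children.get(v, []):
                if subtree_profit[c] > 0:
                    down(c)

        if up(root) > 0:
            down(root)
        return keep, max(0.0, subtree_profit[root])

The upward pass computes each subtree's net contribution to profit; the downward pass retains exactly the subtrees that pay their own way, so the network's profit is maximized for this simplified setting.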
Spatial and temporal information about traffic dynamics is central to the design of effective traffic engineering practices for IP backbones. We study backbone traffic dynamics using data collected at a major POP on a tier-1 IP backbone. We develop a methodology that combines packet-level traces from access links in the POP and BGP routing information to build components of POP-to-POP traffic matrices. Our results show that there is wide disparity in the volume of traffic headed towards different egress POPs. At the same time, we find that current routing practices in the backbone tend to constrain traffic between ingress-egress POP pairs to a small number of paths. As a result, there is wide variation in the utilization level of links in the backbone. Our findings indicate that there is a great deal of room for improvement in load balancing techniques in the backbone. We identify traffic aggregates based on destination address prefixes and find that this criterion isolates a few aggregates that account for an overwhelmingly large portion of inter-POP traffic. We also demonstrate that these aggregates exhibit stability throughout the day on per-hour time scales, and thus they form a natural basis for splitting traffic over multiple paths in order to improve load balancing.
Marwan Krunz and Turgay Korkmaz
Computation of QoS routes in the presence of state inaccuracies
One important aspect of quality-of-service (QoS) routing is the computation of cost-effective paths that satisfy end-to-end constraints. This problem has often been addressed in the literature under the assumption that the link-state information is precisely known (i.e., deterministic). In practice, however, this information may be inaccurate due to network dynamics, state aggregation, and the approximate nature of link-state computation. The uncertainty in the link-state values can be probabilistically modeled. In this paper, we consider the path selection problem under bandwidth and delay constraints, and with normally distributed bandwidth and delay link metrics. Our goal is to find the "most probable" bandwidth-delay constrained path (MP-BDCP). Since this is a multi-objective problem that may not even have an optimal solution, we reduce it to the problem of finding a set of "nondominated paths." For that, we first consider the most probable delay-constrained path (MP-DCP) problem, which in itself is an NP-complete problem. We use the central limit theorem and Lagrange relaxation techniques to provide efficient heuristics for this problem. Then, we combine these heuristics with existing solutions for the most probable bandwidth-constrained path (MP-BCP) problem to obtain our set of nondominated paths. Decision makers can select one or more of these paths based on a specific utility function. Extensive simulations are used to demonstrate the efficiency of our solutions.
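With independent, normally distributed link delays d_l ~ N(\mu_l, \sigma_l^2), the probability objective for MP-DCP has a closed form (notation ours):

    \Pr\Big[ \sum_{l \in P} d_l \le D \Big]
    \;=\;
    \Phi\!\left( \frac{D - \sum_{l \in P} \mu_l}
                      {\sqrt{\sum_{l \in P} \sigma_l^{2}}} \right),

where \Phi is the standard normal CDF and D the delay bound. Maximizing this over paths P requires trading off two additive path quantities inside the argument, which is the source of the NP-completeness noted above.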
The Internet has experienced explosive growth since its commercialization. The sizes of the routing tables have increased by an order of magnitude over the past six years. This dramatic growth of the routing table can decrease the packet forwarding speed and demand more router memory space. In this talk, we present the extent that various factors contribute to the routing table growth and predict the future growth of the routing table.
Ness Shroff
Analysis of Large System Network Dynamics
In this talk I will describe a network decomposition methodology to estimate QoS metrics at an arbitrary node in the network. This work is motivated by the assumption that most network routers will have the capacity to multiplex a moderate to large number of network flows. I will show that the queue length at the network point at which a large number of traffic flows are multiplexed decreases to zero, almost surely. I will also describe how to estimate the QoS at a downstream node (within an error term) by only measuring the traffic at the edges of the network. These results are independent of the scheduling discipline at each node, as long as it is work conserving. We will provide numerical examples to illustrate the efficacy of our results under different network and traffic configurations.
Jorg Liebeherr, Steve Patek and Almut Burchard
Effective Service Curves for End-to-end Statistical QoS
The deterministic network calculus offers an elegant framework for determining delays and backlog in a network with deterministic service guarantees to individual traffic flows. A drawback of the deterministic network calculus is that it only provides worst-case bounds. Here we present a network calculus for statistical service guarantees, which can exploit the statistical multiplexing gain of sources. We introduce the notion of an effective service curve as a probabilistic bound on the service received by an individual flow, and construct an effective service curve for a network where capacities are provisioned exclusively to aggregates of flows. Numerical examples demonstrate that the calculus is able to extract a significant amount of multiplexing gain in networks with a large number of flows.
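One standard way to formalize the notion (notation ours): given the arrival process A and departure process D of a flow, S^\varepsilon is an effective service curve if for all t \ge 0

    \Pr\big[\, D(t) \,\ge\, (A \otimes S^{\varepsilon})(t) \,\big] \;\ge\; 1 - \varepsilon,
    \qquad
    (A \otimes S)(t) \;=\; \inf_{0 \le s \le t} \big\{ A(s) + S(t-s) \big\},

so the min-plus service guarantee of the deterministic calculus is required to hold only with probability 1 - \varepsilon, rather than in the worst case.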
In a typical aggregate traffic model, such as the classical form of the ON/OFF model, traffic bursts arise from many connections being active simultaneously. In this talk, we go beyond an aggregate network traffic analysis by incorporating connection-level information. A careful study of many traffic traces acquired in different networking situations reveals that traffic bursts typically arise from a single high-volume connection that dominates all others. We term such dominating connections alpha traffic. We find that alpha traffic is caused by large file transmissions over high-bandwidth links and that it is extremely bursty (non-Gaussian). Stripping the alpha traffic from an aggregate trace leaves a beta traffic residual that is Gaussian, LRD, and shares the same fractal scaling exponent as the aggregate traffic. Beta traffic is caused by both small file transmissions and large file transmissions over low-bandwidth links. In our alpha/beta traffic model, the heterogeneity of the network resources gives rise to burstiness, and heavy-tailed connection durations give rise to LRD.
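A toy version of the stripping step in Python; the dominance test is our own illustrative criterion, not necessarily the one used by the authors:

    from collections import defaultdict

    def split_alpha_beta(packets, bin_size, dominance=0.5):
        """Label a connection alpha if, in some time bin, it alone carries
        more than `dominance` of that bin's bytes; all remaining traffic is
        the beta residual.  packets: list of (timestamp, conn_id, num_bytes)."""
        bins = defaultdict(lambda: defaultdict(int))
        for ts, conn, nbytes in packets:
            bins[int(ts // bin_size)][conn] += nbytes

        alpha_conns = set()
        for per_conn in bins.values():
            total = sum(per_conn.values())
            conn, top = max(per_conn.items(), key=lambda kv: kv[1])
            if top > dominance * total:    # one connection dominates the bin
                alpha_conns.add(conn)

        alpha = [p for p in packets if p[1] in alpha_conns]
        beta = [p for p in packets if p[1] not in alpha_conns]
        return alpha, beta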
Konstantina Papagiannaki, Sue Moon, Chuck Fraleigh, Patrick Thiran, Fouad Tobagi and Christophe Diot
Analysis of Measured Single-Hop Delay from an Operational Backbone Network
In this work we measure and analyze the single-hop packet delay through operational routers in a backbone IP network. First we present our delay measurements through a single router. Then we identify step-by-step the factors contributing to single-hop delay. In addition to packet processing, transmission, and queueing delays, we identify the presence of very large delays due to non-work-conserving router behavior. We use a simple output queue model to separate those delay components. Our step-by-step methodology for obtaining the pure queueing delay is easily applicable to any single-hop delay measurements. After obtaining the queueing delay, we analyze the tail of its distribution, and find that it is long-tailed and fits a Weibull distribution with scale parameter a = 0.5 and shape parameter b = 0.58 to 0.6. The measured average queueing delay is larger than predicted by M/M/1, M/G/1, and FBM models when the link utilization is below 70%, but its absolute value is quite small.
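In the usual Weibull parameterization, the fitted tail reads

    \Pr[\, d > x \,] \;=\; \exp\!\big[ -\,(x/a)^{b} \big],
    \qquad a = 0.5, \quad b \in [0.58,\, 0.6],

where a is the scale parameter and b the shape parameter; since b < 1, the tail decays more slowly than an exponential, which is the long-tail behavior reported above.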
Knowledge of the Internet topology is of vital concern to network researchers. We present an analysis of the Internet topology at both the AS and router levels. We give results concerning major topology properties (numbers of nodes and edges, average degree and distance, policy routing, etc.) and distributions (degree, distance, etc.). We also present many results on the tree part of the Internet. Four new power laws concerning the number of shortest paths between node pairs and the tree size distributions are presented.
Protocols designed for the Internet should be simulated over Internet-like topologies. Many topology generators currently exist, but the discovery of power laws in the Internet has established a new topology model, and only a few generators comply with this new model, not always accurately. We introduce a novel way to generate Internet-like topologies, based on an algorithm that performs a sampling of a real Internet map. Generated topologies comply with most of the topology properties recently found in the Internet, particularly the distance properties.
Hyunseok Chang
On the Completeness of Observed AS-level Internet Topology
Researchers have characterized the AS-level Internet topology by exploiting connectivity information contained in BGP routing tables, mainly those collected by the University of Oregon. The AS-level topology constructed from these routing tables is believed to reflect existing AS-level Internet connectivity reasonably well. However, the completeness of the Oregon topology has not been validated by any formal proof or actual measurements. Our measurement results show that a nonnegligible number of existing AS-level connections are not advertised in many BGP routing tables. We found that the observability of AS-level connectivity in BGP routing tables depends to a large extent on existing AS relationships. Our overall results suggest that the Oregon route server may not capture a significant number of existing AS-level connections.
Ramesh Govindan
Topology Discovery and the Impact of Policy on Internet Paths
I will describe the design of Mercator, a program that uses hop-limited probes (the same primitive used in traceroute) to infer the router-level Internet graph. I will also describe some recent experiences with Mercator.
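For concreteness, a minimal Python version of the hop-limited probe primitive (our sketch; Mercator's probing adds source-routed probes, alias resolution, and careful matching of replies to probes):

    import socket

    def probe(dest_ip, ttl, port=33434, timeout=2.0):
        """One hop-limited UDP probe: returns the address of the router at
        hop `ttl` that answers with ICMP Time Exceeded, or None on timeout.
        Requires raw-socket privileges; reply-to-probe matching is
        simplified away here."""
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.settimeout(timeout)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)  # limit hops
        try:
            send.sendto(b"", (dest_ip, port))
            _, addr = recv.recvfrom(512)   # ICMP from the hop-`ttl` router
            return addr[0]
        except socket.timeout:
            return None                    # that hop did not answer
        finally:
            send.close()
            recv.close()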
The impact of routing policy on Internet paths is poorly understood. In theory, policy can inflate shortest router-hop paths. Using a Mercator-derived map and a simplified model of routing policy in the Internet, we obtain approximate indications of the impact of policy routing on Internet paths. I will describe our somewhat surprising findings.
Jennifer Rexford
Network Monitoring for Traffic Engineering
Managing a large IP backbone requires a comprehensive and up-to-date view of the network topology, routing configuration, and traffic load. The network topology includes the connectivity between the routers and the capacity of the links. Routing depends on the configuration of intradomain routing parameters and interdomain routing policies, as well as the routing advertisements received from neighboring domains. The traffic load includes the utilization of the individual links and the offered load between various points in the network. Together, the topology, routing, and traffic data enable network operators to detect and diagnose performance problems (such as a congested link) and evaluate possible remedies (such as a change in the intradomain routing configuration). However, this diverse information is not available from any individual network element. Rather, constructing this view requires several different types of measurement data from multiple locations in the network. This talk presents an overview of techniques and tools for collecting this data, including our experiences in constructing a network-wide view of AT&T's commercial IP backbone.
Sally Floyd
A Survey of Recent Developments of TCP
This talk discusses several changes to TCP's congestion control, either proposed or in progress. The changes to TCP include a Limited Transmit mechanism for transmitting new packets on the receipt of one or two duplicate acknowledgements, and a SACK-based mechanism for detecting and responding to unnecessary Fast Retransmits or Retransmit Timeouts. These changes to TCP are designed to avoid unnecessary Retransmit Timeouts, to correct unnecessary Fast Retransmits or Retransmit Timeouts resulting from reordered or delayed packets, and to assist the development of viable mechanisms for Corruption Notification.
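A sketch of the Limited Transmit rule in Python, in the spirit of RFC 3042; the fields of `conn` (smss, cwnd, rcv_wnd, and so on) are illustrative names, not any real stack's API:

    def on_duplicate_ack(conn):
        """On the first or second duplicate ACK, send one previously
        unsent segment without inflating cwnd or entering Fast Retransmit."""
        conn.dupacks += 1
        in_flight = conn.snd_nxt - conn.snd_una
        if conn.dupacks in (1, 2):
            # A new segment may go out only if both the receiver's window
            # and the cwnd + 2*SMSS limit allow it; cwnd itself is untouched.
            if (conn.has_unsent_data()
                    and in_flight + conn.smss <= conn.cwnd + 2 * conn.smss
                    and in_flight + conn.smss <= conn.rcv_wnd):
                conn.send_new_segment()
        elif conn.dupacks == 3:
            conn.fast_retransmit()          # standard Fast Retransmit

Sending new data on the first duplicate ACKs keeps the ACK clock alive, which is what lets the sender avoid a Retransmit Timeout when the window is small.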
Changes in the network include Explicit Congestion Notification (ECN), which builds upon the addition of Active Queue Management. We also discuss the relationships between the following: the evolving transport protocol TCP; the new transport protocol SCTP; the proposed new transport protocol DCP for unreliable unicast transfer; and the Congestion Manager for sharing congestion control state among multiple connections.
Neil Spring
Robust ECN Signaling with Nonces
We present an improved Explicit Congestion Notification (ECN) mechanism that enables a router to signal congestion to the sender without trusting the receiver or other network devices along the signaling path. Without our mechanism, ECN-based transports can be manipulated to undermine congestion control. Web clients seeking faster downloads, for example, can trivially conceal congestion signals from Web servers. This presentation is an overview of the ECN-nonce approach, with special attention to policy issues and implementation challenges.
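A conceptual sketch of the nonce idea in Python, abstracting away the ECT(0)/ECT(1) codepoint encoding and TCP header details:

    import random

    class NonceSender:
        """One-bit nonce sketch: each packet carries a random bit that a
        congestion mark destroys, so a receiver concealing a mark must
        guess the bit and is caught with probability 1/2 per concealment."""

        def __init__(self):
            self.sent = {}                 # seq -> nonce bit

        def send(self, seq):
            bit = random.randint(0, 1)     # carried as ECT(0)/ECT(1) in ECN
            self.sent[seq] = bit
            return bit

        def verify(self, acked_seqs, reported_sum):
            """The receiver reports the XOR of the nonce bits of all packets
            it claims arrived unmarked; a mismatch exposes cheating."""
            expected = 0
            for seq in acked_seqs:
                expected ^= self.sent[seq]
            return expected == reported_sum

Because a router signaling congestion overwrites the codepoint, an honest receiver simply cannot know the nonce of a marked packet; only a receiver truthfully reporting congestion can keep the running sum consistent.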
Srinivasan Seshan
The Congestion Manager
While TCP incorporates congestion control machinery and has largely been responsible for the stability of the Internet to date, several problems remain. Motivated by these problems, we take a fresh look at Internet congestion management from an end-system perspective and propose a new architecture built around the Congestion Manager (CM). The CM maintains network statistics across flows, orchestrates data transmissions governed by robust control principles, and obtains feedback via updates from applications. It also exports a simple API that lets applications learn about network state and adapt their data transmissions to obtain the best possible performance. The CM framework provides a modular implementation platform for ideas such as streaming real-time audio and video.
In this talk we will present the CM architecture, followed by an implementation and evaluation of the CM. This will be followed by a discussion of challenges that arise from real-world deployments. In particular, we will discuss what we term "false sharing", which occurs when two or more flows that share congestion state do not, in fact, share the same bottleneck. We conclude our presentation with a look at future directions of the CM project.
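A toy sketch of the CM's callback pattern in Python; the names and the round-robin scheduler are our illustrative choices, not the CM's actual API:

    class CongestionManager:
        """Toy callback-style CM: flows to one destination share a single
        congestion window; the CM upcalls a flow when it may send, and
        applications report congestion feedback back to the CM."""

        def __init__(self):
            self.cwnd = 1.0                # shared window, in segments
            self.callbacks = []            # per-flow "may send" upcalls

        def register(self, send_callback):
            self.callbacks.append(send_callback)

        def schedule(self):
            """Grant sends round-robin while the shared window allows."""
            if not self.callbacks:
                return
            for i in range(int(self.cwnd)):
                self.callbacks[i % len(self.callbacks)]()

        def update(self, congested):
            """Applications feed back what receivers observed (AIMD here)."""
            if congested:
                self.cwnd = max(1.0, self.cwnd / 2)   # multiplicative decrease
            else:
                self.cwnd += 1.0                      # additive increase

The key architectural point this mirrors is the inversion of control: applications do not send at will but are called back when the shared congestion state permits, which is what lets concurrent flows to the same destination share one window.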