Internet Engineering Task Force                               Phil Karn
INTERNET DRAFT                                               Aaron Falk
                                                               Joe Touch
                                                   Marie-Jose Montpetit
                                                         Jamshid Mahdavi
                                                       Gabriel Montenegro
File: draft-ietf-pilc-link-design-01.txt                  October, 1999
                                                 Expires: April, 2000

              Advice for Internet Subnetwork Designers

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This document provides advice to the designers of digital communication equipment, link-layer protocols and packet-switched subnetworks (collectively referred to as subnetworks) who wish to support the Internet protocols but who may be unfamiliar with the architecture of the Internet and the implications of their design choices on the performance and efficiency of the Internet. This document represents an evolving consensus of the members of the IETF Performance Implications of Link Characteristics (PILC) working group.

Introduction and Overview

The Internet Protocol [RFC791] is the core protocol of the world-wide Internet; it defines a simple "connectionless" packet-switched network. The success of the Internet is largely attributed to the simplicity of IP, the "end-to-end principle" on which the Internet is based, and the resulting ease of carrying IP on a wide variety of subnetworks not necessarily designed with IP in mind. But while many subnetworks carry IP, they do not necessarily do so with maximum efficiency, minimum complexity or minimum cost. Nor do they necessarily implement the features needed to efficiently support newer Internet capabilities of increasing importance, such as multicasting and quality of service.

With the explosive growth of the Internet, IP is an increasingly large fraction of the traffic carried by the world's telecommunications networks. It therefore makes sense to optimize both existing and new subnetwork technologies for IP as much as possible.

Optimizing a subnetwork for IP involves three complementary considerations:

1. Providing functionality sufficient to carry IP.

2. Eliminating unnecessary functions that increase cost or complexity.

3. Choosing subnetwork parameters that maximize the performance of the Internet protocols.

Because IP is so simple, consideration 2 is more of an issue than consideration 1. That is, subnetwork designers make many more errors of commission than errors of omission. But certain enhanced Internet features, such as multicasting and quality-of-service, rely on support from the underlying subnetworks beyond that necessary to carry "traditional" unicast, best-effort IP.
A major consideration in the design of any layered communication network is the choice of the layer(s) in which to implement a given feature. This issue was first addressed in the seminal paper "End-to-End Arguments in System Design" [SRC81]. That paper argued that many -- if not most -- network functions are best implemented on an end-to-end basis, i.e., at the higher protocol layers. Duplicating these functions at the lower layers is usually redundant, and can even be harmful. However, certain low-level functions can sometimes be justified as a performance enhancement; an example is link-layer retransmission on an unusually lossy channel, e.g., mobile radio.

The architecture of the Internet was heavily influenced by the end-to-end principle, and in our view it was crucial to the Internet's success.

The remainder of this document discusses the various subnetwork design issues that the authors consider relevant to efficient IP support.

Maximum Transmission Units (MTUs) and IP Fragmentation

IP packets (datagrams) vary in size from 20 bytes (the size of the IP header alone) to a maximum of 65535 bytes. Subnetworks need not support maximum-sized (64KB) IP packets, as IP provides a scheme that breaks packets that are too large for a given subnetwork into fragments that travel as independent packets and are reassembled at the destination. The maximum packet size supported by a subnetwork is known as its Maximum Transmission Unit (MTU).

Subnetworks may, but are not required to, indicate the lengths of the packets they carry. One example is Ethernet with the widely used DIX (not IEEE 802.3) header, which lacks a length field to indicate the true data length when the packet is padded to the 60-byte minimum. This is not a problem for uncompressed IP because it carries its own length field. If optional header compression [RFC1144] [RFC2507] [RFC2508] is used, however, the link framing must indicate the frame length, as it is needed to reconstruct the original header.

In IP version 4 (current IP), fragmentation can occur at either the sending host or in an intermediate router, and fragments can be further fragmented at subsequent routers if necessary. In IP version 6, fragmentation can occur only at the sending host; it cannot occur in a router.

Both IPv4 and IPv6 provide a "Path MTU Discovery" procedure [RFC1191] [RFC1435] [RFC1981] that allows the sending host to avoid fragmentation by discovering the minimum MTU along a given path and reducing its packet sizes accordingly. This procedure is optional in IPv4 but mandatory in IPv6, where there is no router fragmentation.

The Path MTU Discovery procedure (and the deletion of router fragmentation in IPv6) reflects a consensus of the Internet technical community that IP fragmentation is best avoided. This requires that subnetworks support MTUs that are "reasonably" large. The smallest MTU that IPv4 can use is 28 bytes, but this is clearly unreasonable: because each IP header is 20 bytes, only 8 bytes per packet would be available to carry transport headers and application data.

If a subnetwork cannot directly support a "reasonable" MTU with native framing mechanisms, it should internally fragment. That is, it should transparently break IP packets into internal data elements and reassemble them at the other end of the subnetwork, as the sketch below illustrates.
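The following short sketch (Python) shows the kind of transparent internal fragmentation and reassembly intended. The element size, the first/last flag layout, and the function names are illustrative assumptions, not taken from any particular subnetwork standard:

   # Minimal sketch of subnetwork-internal fragmentation/reassembly.
   # ELEMENT_SIZE and the first/last flag layout are illustrative
   # assumptions, not taken from any particular subnetwork standard.

   ELEMENT_SIZE = 128  # payload bytes carried per internal data element

   def fragment(ip_packet: bytes):
       """Split one IP packet into internal elements, tagging each
       with 'first' and 'last' flags so the far edge can reassemble."""
       elements = []
       for offset in range(0, len(ip_packet), ELEMENT_SIZE):
           chunk = ip_packet[offset:offset + ELEMENT_SIZE]
           first = offset == 0
           last = offset + ELEMENT_SIZE >= len(ip_packet)
           elements.append((first, last, chunk))
       return elements

   def reassemble(elements):
       """Rebuild IP packets from an in-order, at-most-once element
       stream; a 'first' flag starts a new packet, 'last' completes it."""
       packets, buffer = [], b""
       for first, last, chunk in elements:
           if first:
               buffer = b""
           buffer += chunk
           if last:
               packets.append(buffer)
       return packets

   # Example: a 300-byte packet crosses the subnetwork as three elements.
   assert reassemble(fragment(bytes(300))) == [bytes(300)]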
This leaves the question of what constitutes a "reasonable" MTU. Ethernet (10 and 100 Mb/s) has an MTU of 1500 bytes, and because of its ubiquity few Internet paths have MTUs larger than this value. This severely limits the utility of the larger MTUs provided by other subnetworks. But larger MTUs are increasingly desirable on high-speed subnetworks to reduce the per-packet processing overhead in host computers, and implementers are encouraged to provide them even though they may not be usable when Ethernet is also in the path.

Choosing the MTU in Slow Networks [Stevens94, RFC1144]

In slow networks, the time required to transmit the largest possible packet may be considerable. Interactive response time should not exceed the well-known human-factors limit of 100 to 200 ms. This includes all sources of delay: electromagnetic propagation delay, queueing delay, and the store-and-forward time, i.e., the time to transmit a packet at link speed.

At low link speeds, store-and-forward delays can dominate total end-to-end delay, and these are in turn directly influenced by the maximum transmission unit (MTU). Even when an interactive packet is given a higher queueing priority, it may have to wait for a large bulk-transfer packet to finish transmission. This worst-case wait can be set by an appropriate choice of MTU. For example, if the MTU is set to 1500 bytes, then an MTU-sized packet will take about 8 milliseconds to send on a T1 (1.536 Mb/s) link. But if the link speed is 19.2 kb/s, then the transmission time becomes 625 ms -- well above our 100-200 ms limit. A 256-byte MTU would lower this delay to a little over 100 ms. However, care should be taken not to lower the MTU excessively, as this will increase header overhead and trigger IP fragmentation (if Path MTU Discovery is not in use).

One way to limit delay for interactive traffic without imposing a small MTU is to preempt (abort) the transmission of a lower-priority packet when a higher-priority packet arrives in the queue. However, the link resources used to send the aborted packet are lost, and overall throughput will decrease.

Another way is to implement a link-level multiplexing scheme that allows several packets to be in progress simultaneously, with transmission priority given to segments of higher-priority IP packets. ATM (Asynchronous Transfer Mode) is an example of this technique. However, ATM is generally used on high-speed links where the store-and-forward delays are already minimal, and it introduces significant (~9%) additional overhead due to the 5-byte cell header added to each 48-byte payload.

To summarize, there is a fundamental tradeoff between efficiency and latency in the design of a subnetwork, and the designer should keep it in mind; the sketch below makes the arithmetic concrete.
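A minimal sketch of the arithmetic used above (Python; the link speeds and MTU values are just the examples from the text):

   # Store-and-forward time for one maximum-sized packet, as in the
   # examples above.  Link speeds and MTUs are the text's examples.

   def transmit_ms(mtu_bytes: int, link_bps: float) -> float:
       """Time (ms) to clock one MTU-sized packet onto the link."""
       return mtu_bytes * 8 / link_bps * 1000

   print(transmit_ms(1500, 1_536_000))  # T1: ~7.8 ms
   print(transmit_ms(1500, 19_200))     # 19.2 kb/s: 625 ms, too slow
   print(transmit_ms(256, 19_200))      # 256-byte MTU: ~107 ms, acceptable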
Framing on Connection-Oriented Subnetworks

IP needs a way to mark the beginning and end of each variable-length, asynchronous IP packet. Some examples of links and subnetworks that do not provide this as an intrinsic feature include:

1. leased lines carrying a synchronous bit stream;

2. ISDN B-channels carrying a synchronous octet stream;

3. dialup telephone modems carrying an asynchronous octet stream; and

4. Asynchronous Transfer Mode (ATM) networks carrying an asynchronous stream of fixed-sized "cells".

The Internet community has defined packet framing methods for all these subnetworks. The Point-to-Point Protocol (PPP) [RFC1661] is applicable to bit-synchronous, octet-synchronous and octet-asynchronous links (i.e., examples 1-3 above). ATM has its own framing methods, described in [RFC2684] [RFC2364].

At high speeds, a subnetwork should provide a framed interface capable of carrying asynchronous, variable-length IP datagrams. The maximum packet size supported by this interface is discussed above in the MTU/Fragmentation section. The subnetwork may implement this facility in any convenient manner. In particular, IP packet boundaries may, but need not, coincide with any framing or synchronization mechanisms internal to the subnetwork.

When the subnetwork implements variable-sized data units, the most straightforward approach is to place exactly one IP packet into each subnetwork data unit (SDU), and to rely on the subnetwork's existing ability to delimit SDUs to also delimit IP packets. A good example is Ethernet. But some subnetworks have SDUs of one or more fixed sizes, as dictated by switching, forward error correction and/or interleaving considerations. Examples of such subnetworks include ATM, with a single cell size of 48 payload bytes plus a 5-byte header, and IS-95 digital cellular, with two "rate sets" of four fixed frame sizes each that may be selected on 20-millisecond boundaries.

Because IP packets are variable sized, they may not necessarily fit into an integer multiple of fixed-sized SDUs. An "adaptation layer" is needed to convert IP packets into SDUs while marking the boundary between each IP packet in some manner. There are two traditional approaches to the problem.

The first is to encode each IP packet into one or more SDUs, with no SDU containing pieces of more than one IP packet, and to pad out the last SDU of the packet as needed. Bits in a control header added to each SDU indicate where it belongs in the IP packet. If the subnetwork provides in-order, at-most-once delivery, the header can be as simple as a pair of bits indicating whether the SDU is the first and/or the last in the IP packet. Alternatively, only the last SDU of the packet could be marked, as this implicitly marks the next SDU as the first in a new IP packet. The AAL5 (ATM Adaptation Layer 5) scheme used with ATM is an example of this approach, though it adds other features, including a payload length field and a payload CRC.

The second approach is to insert a special flag sequence into the data stream between each IP packet, and to pack the resulting data stream into SDUs without regard to SDU boundaries. The flag sequence can also pad unused space at the end of an SDU. If the special flag appears in the user data, it is escaped to an alternate sequence (usually larger than a flag) to avoid being misinterpreted as a flag. The HDLC-based framing schemes used in PPP are all examples of this approach (a simplified sketch of the flag/escape transformation follows below).

Both adaptation schemes introduce overhead; how much depends on the distribution of IP packet sizes, the size(s) of the SDUs, and, in the HDLC-like approaches, on the content of the IP packet (since flags occurring in the packet must be escaped, which expands them). The designer must also weigh implementation complexity in the choice and design of an adaptation layer.
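The flag/escape idea can be illustrated with a simplified sketch (Python). It follows the general HDLC-style convention used by PPP (0x7E flag, 0x7D escape, escaped byte XOR 0x20), but omits the FCS and the configurable escaping of control characters that real PPP framing performs:

   # Simplified flag/escape framing in the style of PPP's HDLC-like
   # framing: 0x7E marks frame boundaries, and any 0x7E or 0x7D inside
   # the payload is escaped as 0x7D followed by the byte XOR 0x20.
   # Real PPP framing also escapes control characters and appends an
   # FCS; this sketch shows only the boundary-marking idea.

   FLAG, ESC = 0x7E, 0x7D

   def frame(packet: bytes) -> bytes:
       out = bytearray([FLAG])
       for b in packet:
           if b in (FLAG, ESC):
               out += bytes([ESC, b ^ 0x20])   # escape, expanding the data
           else:
               out.append(b)
       out.append(FLAG)
       return bytes(out)

   def deframe(stream: bytes):
       """Recover packets from a byte stream; empty frames (back-to-
       back flags, e.g., inter-frame padding) are discarded."""
       packets, cur, escaped = [], bytearray(), False
       for b in stream:
           if b == FLAG:
               if cur:
                   packets.append(bytes(cur))
               cur, escaped = bytearray(), False
           elif b == ESC:
               escaped = True
           elif escaped:
               cur.append(b ^ 0x20)
               escaped = False
           else:
               cur.append(b)
       return packets

   assert deframe(frame(b"\x7e data \x7d")) == [b"\x7e data \x7d"]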
Connection-Oriented Subnetworks

IP has no notion of a "connection"; it is a purely connectionless protocol. When a connection is required by an application, it is usually provided by TCP, the Transmission Control Protocol, running atop IP on an end-to-end basis.

Connection-oriented subnetworks can be (and are) widely used to carry IP, but often with considerable complexity. Subnetworks with a few nodes can simply open a permanent connection between each pair of nodes, as is frequently done with ATM. But the number of connections grows as the square of the number of nodes, so this is clearly impractical for large subnetworks. A "shim" layer between IP and the subnetwork is therefore required to manage connections in the latter. These shim layers typically open subnetwork connections as needed when an IP packet is queued for transmission and close them after an idle timeout (a sketch of such a shim appears below). There is no relation between subnetwork connections and any connections that may exist at higher layers (e.g., TCP).

Because Internet traffic is typically bursty and transaction-oriented, it is often difficult to pick an optimal idle timeout. If the timeout is too short, subnetwork connections are opened and closed rapidly, possibly over-stressing the subnetwork's call management system (especially if it was designed for voice-call holding times). If the timeout is too long, subnetwork connections sit idle much of the time, wasting any resources dedicated to them by the subnetwork.

The ideal subnetwork for IP is connectionless. Connection-oriented networks that dedicate minimal resources to each connection (e.g., ATM) are a distant second, and connection-oriented networks that dedicate a fixed amount of bandwidth to each connection (e.g., the PSTN, including ISDN) are the least efficient. If such subnetworks must be used to carry IP, their call-processing systems should be capable of rapid call set-up and tear-down.
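Such a shim can be sketched as a small cache of open subnetwork connections with an idle-timeout reaper. This is only an illustration; subnet_open, subnet_send, and subnet_close are hypothetical stand-ins for whatever signalling interface the subnetwork actually provides:

   import time

   # Sketch of an IP-over-connection-oriented-subnet "shim": open a
   # subnetwork connection on demand when an IP packet is queued, and
   # close it after an idle timeout.  subnet_open/subnet_send/
   # subnet_close are hypothetical stand-ins for the subnetwork's
   # actual signalling interface.

   IDLE_TIMEOUT = 60.0  # seconds; the hard-to-choose knob discussed above

   class ConnectionShim:
       def __init__(self, subnet_open, subnet_send, subnet_close):
           self._open, self._send, self._close = subnet_open, subnet_send, subnet_close
           self._conns = {}  # destination -> (handle, last_use_time)

       def send(self, dest, ip_packet):
           entry = self._conns.get(dest)
           handle = entry[0] if entry else self._open(dest)  # open on demand
           self._conns[dest] = (handle, time.monotonic())    # refresh idle timer
           self._send(handle, ip_packet)

       def reap_idle(self):
           """Call periodically: close connections idle past the timeout."""
           now = time.monotonic()
           for dest, (handle, last) in list(self._conns.items()):
               if now - last > IDLE_TIMEOUT:
                   self._close(handle)
                   del self._conns[dest]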
Bandwidth on Demand (BoD) Subnets (Aaron Falk)

Wireless networks, both satellite and terrestrial, may use Bandwidth on Demand (BoD). Bandwidth on demand, implemented at the link layer by Demand Assignment Multiple Access (DAMA) in TDMA systems, is currently one of the proposed mechanisms for efficiently sharing limited spectrum resources among a large number of users. The design parameters for BoD are similar to those of connection-oriented subnetworks, although the implementations may be very different.

In BoD, the user typically requests access to the shared channel for some duration. Access may be allocated in terms of a period of time at a specific rate, a certain number of packets, or until the user chooses to release the channel. Access may be coordinated through a central management entity or through a distributed algorithm among the users. The resource shared may be a terrestrial wireless hop, a satellite uplink, or an end-to-end satellite channel.

Long-delay BoD subnets pose problems similar to those of connection-oriented networks in anticipating traffic arrivals. While connection-oriented subnets hold idle channels open expecting new data to arrive, BoD subnets request channel access based on buffer occupancy (or expected buffer occupancy) at the sending port. Poor performance will likely result if the sender does not anticipate additional traffic arriving at that port during the time it takes to grant a transmission request. It is recommended that the allocation algorithm be able to extend a hold on the channel for data that arrives after the original request was generated (this may be done by piggybacking new requests on user data).

There is a wide variety of BoD protocols, but there has been relatively little comprehensive research on the interactions between BoD mechanisms and Internet protocol performance. A tradeoff exists between the time a user is allowed to hold a channel to drain its port buffers and the additional latency imposed on other users who must wait for access to the channel. It is desirable to design mechanisms that constrain the BoD-imposed latency variation; this helps prevent spurious TCP timeouts.

Reliability and Error Control

In the Internet architecture, the ultimate responsibility for error recovery lies at the end points. The Internet may occasionally drop, corrupt, duplicate or reorder packets, and the transport protocol (e.g., TCP) or application (e.g., if UDP is used) must recover from these errors on an end-to-end basis. Error recovery in the subnetwork is therefore justified only to the extent that it can enhance overall performance. It is important to recognize that a subnetwork can go too far in attempting to provide error recovery services in the Internet environment. Subnet reliability should be "lightweight", i.e., it only has to be "good enough", *not* perfect. In this section we discuss how to analyze the characteristics of a subnetwork to determine what is "good enough".

The discussion below focuses on TCP, the most widely used transport protocol in the Internet. It is widely believed (and is in fact a stated goal within the IETF community) that non-TCP transport protocols should attempt to be "TCP-friendly" and have many of the same performance characteristics. Thus, the discussion below should be applicable even to portions of the Internet where TCP may not be the predominant protocol.

How TCP Works

One of TCP's functions is end-host-based congestion control for the Internet. This is a critical part of the overall stability of the Internet, so it is important that link-layer designers understand TCP's congestion control algorithms.

TCP assumes that, at the most abstract level, the network consists of links and queues. Queues provide output buffering on links that are momentarily oversubscribed. They smooth instantaneous traffic bursts to fit the link bandwidth. When demand exceeds link capacity long enough to fill the queue, packets must be dropped. The traditional action of dropping the most recent packet ("tail dropping") is no longer recommended (see [RED93]), but it is still widely practiced.

TCP uses sequence numbering and acknowledgements (ACKs) on an end-to-end basis to provide reliable, sequenced, once-only delivery. TCP ACKs are cumulative, i.e., each one implicitly ACKs every segment received so far. If a packet is lost, the cumulative ACK will cease to advance.

Since the most common cause of packet loss is congestion, TCP treats packet loss as a network congestion indicator. This happens automatically, and the subnetwork need not know anything about IP or TCP. It simply drops packets whenever it must, though [RED93] shows that some packet-dropping strategies are fairer than others; a rough sketch of the RED idea follows.
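As a rough illustration of the idea (not the complete algorithm in [RED93], which adds refinements such as spacing drops by counting packets since the last drop), a sketch with illustrative parameters:

   import random

   class RedQueueSketch:
       """Rough sketch of the RED idea from [RED93]: drop arriving
       packets with a probability that rises with the *average* queue
       length, signalling congestion before the queue overflows.
       Parameters are illustrative, not recommended values."""

       def __init__(self, w=0.002, min_th=5.0, max_th=15.0, max_p=0.1):
           self.w, self.min_th, self.max_th, self.max_p = w, min_th, max_th, max_p
           self.avg = 0.0  # exponentially weighted average queue length

       def should_drop(self, queue_len: int) -> bool:
           self.avg = (1 - self.w) * self.avg + self.w * queue_len
           if self.avg < self.min_th:
               return False    # uncongested: never drop
           if self.avg >= self.max_th:
               return True     # sustained congestion: always drop
           frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
           return random.random() < self.max_p * frac  # early, random drop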
TCP recovers from packet losses in two different ways. The most important is the retransmission timeout: if an ACK fails to arrive after a certain period of time, TCP retransmits the oldest unacknowledged packet. Taking this as a hint that the network is congested, TCP waits for the retransmission to be ACKed before it continues, and it gradually increases the number of packets in flight as long as no further timeout occurs. A retransmission timeout can impose a significant performance penalty, as the sender is idle during the timeout interval and restarts with a congestion window of 1 following the timeout. To allow faster recovery from the occasional lost packet in a bulk transfer, an alternate scheme known as "fast recovery" was introduced [ref?].

Fast recovery relies on the fact that when a single packet is lost in a bulk transfer, the receiver continues to return ACKs for subsequent data packets, but they do not actually ACK any new data. These are known as "duplicate acknowledgments" or "dupacks". The sending TCP can use dupacks as a hint that a packet has been lost and retransmit it without waiting for a timeout. Dupacks effectively constitute a negative acknowledgement (NAK) for the packet whose sequence number equals the acknowledgement field in the incoming TCP packet. TCP currently waits until a certain number of dupacks (currently 3) has been seen before assuming a loss has occurred; this helps avoid unnecessary retransmissions in the face of out-of-sequence delivery.

A new technique called Explicit Congestion Notification (ECN) allows routers to signal congestion directly to hosts without dropping packets, by setting a bit in the IP header. Since this is currently optional behavior (and, longer term, there will always be the possibility of congestion in portions of the network that do not support ECN), the absence of the ECN bit MUST NEVER be interpreted as a lack of congestion. Thus, for the foreseeable future, TCP MUST interpret a lost packet as a signal of congestion.

The TCP "congestion avoidance" [RFC2581] algorithm is the end-system congestion control algorithm used by TCP. It maintains a congestion window (cwnd), which controls the amount of data TCP may have in flight at any given time. Reducing cwnd reduces the overall bandwidth obtained by the connection; similarly, raising cwnd increases performance, up to the limit of the available bandwidth.

TCP probes for available network bandwidth by setting cwnd to one packet and then increasing it by one packet for each ACK returned from the receiver. This is TCP's "slow start" mechanism. When a packet loss is detected (or congestion is signalled by other mechanisms), cwnd is reset to one and slow start is repeated until cwnd reaches one half of its setting before the loss. Cwnd then continues to increase past this point, but at a much slower rate than before. If no further losses occur, cwnd will ultimately reach the window size advertised by the receiver.

This is referred to as an "Additive Increase, Multiplicative Decrease" (AIMD) algorithm. The steep decrease in response to congestion provides for network stability; the AIMD algorithm also provides for fairness between long-running TCP connections sharing the same path. The toy simulation below sketches these dynamics.
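The following toy, round-per-RTT simulation sketches the window dynamics just described. It follows this document's description (slow start up to half the pre-loss window, then linear growth), not any particular TCP implementation:

   # Toy, round-by-round sketch of the cwnd dynamics described above:
   # slow start doubles cwnd each RTT; after a loss, cwnd restarts at 1
   # and slow-starts up to half the pre-loss value (ssthresh), then
   # grows linearly (congestion avoidance).  This follows the text's
   # description, not any particular TCP implementation.

   def evolve_cwnd(loss_rounds, total_rounds, rwnd=64):
       cwnd, ssthresh, history = 1.0, float(rwnd), []
       for rnd in range(total_rounds):
           history.append(cwnd)
           if rnd in loss_rounds:
               ssthresh = max(cwnd / 2, 1.0)    # multiplicative decrease
               cwnd = 1.0                       # restart slow start
           elif cwnd < ssthresh:
               cwnd = min(cwnd * 2, ssthresh)   # slow start: double per RTT
           else:
               cwnd = min(cwnd + 1, rwnd)       # additive increase
       return history

   # One loss in round 8: cwnd climbs to the receiver window, drops to
   # 1, slow-starts back to half the pre-loss value, then grows linearly.
   print(evolve_cwnd(loss_rounds={8}, total_rounds=16))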
TCP Performance Characteristics

Caveat

In this section, we present the current state-of-the-art understanding of TCP performance. This analysis attempts to characterize the performance of TCP connections over links of varying characteristics.

Link designers may wish to use the techniques in this section to predict what performance TCP/IP may achieve over a new link-layer design. Such analysis is encouraged. Because this analysis is relatively new, and the theory is based on single-stream TCP connections under "ideal" conditions, the results may differ from actual performance in the Internet. That said, we have done our best to provide information that will help designers get an accurate picture of the capabilities and limitations of TCP under various conditions.

The Formulae

The performance of TCP's AIMD Congestion Avoidance algorithm has been extensively analyzed. The current best formula for the performance of the specific algorithms used by Reno TCP is given by Padhye, et al. [PFTK98]:

                               MSS
   BW = --------------------------------------------------------
        RTT*sqrt(1.33*p) + RTO*p*[1+32*p^2]*min[1,3*sqrt(.75*p)]

In this formula, the variables are as follows:

   MSS is the segment size being used by the connection
   RTT is the end-to-end round trip time of the TCP connection
   RTO is the packet timeout (based on RTT)
   p   is the packet loss rate for the path (i.e., .01 if there is
       1% packet loss)

This is currently considered the best approximate formula for Reno TCP performance. A further simplification is generally made by assuming that RTO is approximately 5*RTT.

TCP is constantly being improved. A simpler formula, which gives an upper bound on the performance of any AIMD algorithm likely to be implemented in TCP in the future, was derived by Ott, et al. [MSMO97] [OKM96]:

             MSS      1
   BW = 0.93 --- * -------
             RTT   sqrt(p)

Assumptions of these formulae

Both formulae assume that the TCP receiver window is not limiting the performance of the connection in any way. Because the receiver window is entirely determined by the end hosts, we assume that hosts will maximize the announced receiver window in order to maximize their network performance.

Both formulae allow BW to become infinite if there is no loss. This cannot happen in practice, because an Internet path will drop packets at bottleneck queues if the load is too high. Thus, a completely lossless TCP/IP network can never occur (unless the network is being underutilized).

The RTT used is the average RTT, including queueing delays.

The formulae are calculations for a single TCP connection. If a path carries many TCP connections, each will follow the formulae above independently.

The formulae assume long-running TCP connections. For connections that are extremely short (<10 packets) and lose no packets, performance is driven by the TCP slow start algorithm. For connections of medium length, where on average only a few segments are lost, single-connection performance will actually be slightly better than given by the formulae above.

The difference between the simple and complex formulae is that the complex formula includes the effects of TCP retransmission timeouts. For very low levels of packet loss (significantly less than 1%), timeouts are unlikely to occur, and the two formulae give very similar results. At higher packet losses (1% and above), the complex formula gives a more accurate estimate of performance (which will always be significantly lower than the result from the simple formula). Note that both formulae break down as p approaches 100%.
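For convenience, both formulae are transcribed below exactly as given above (Python). Results are in bytes/sec given MSS in bytes and times in seconds; multiply by 8 for bits/sec. RTO defaults to the 5*RTT simplification mentioned in the text, and the example values are illustrative:

   from math import sqrt

   # The two throughput formulae above, transcribed directly.

   def bw_padhye(mss, rtt, p, rto=None):
       """Reno TCP throughput approximation from [PFTK98]."""
       rto = 5 * rtt if rto is None else rto  # the text's simplification
       return mss / (rtt * sqrt(1.33 * p) +
                     rto * p * (1 + 32 * p**2) * min(1, 3 * sqrt(.75 * p)))

   def bw_aimd_bound(mss, rtt, p, c=0.93):
       """Upper bound for AIMD algorithms from [MSMO97] [OKM96]."""
       return c * mss / (rtt * sqrt(p))

   # Example: MSS = 1000 bytes, RTT = 120 ms, 1% loss (the wireless-link
   # example in the next section), with the constant C taken as 1:
   print(bw_aimd_bound(1000, 0.120, 0.01, c=1) * 8)  # ~666,667 bit/s
   print(bw_padhye(1000, 0.120, 0.01) * 8)           # lower: includes
                                                     # timeout effects;
                                                     # exact value depends
                                                     # on the RTO assumed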
Analysis of Link-Layer Effects on TCP Performance

Link-layer designers who wish to understand the performance of TCP over their links can use these formulae to do so. Consider the following example:

A designer invents a new wireless link layer which, on average, loses 1% of IP packets. The link layer supports packets of up to 1040 bytes, and has a one-way delay of 20 msec. If this link layer were used on an Internet path that otherwise had a round-trip time of 80 msec, an upper bound on the performance could be computed as follows:

   For MSS, use 1000 bytes (subtract the 40 bytes of TCP/IP headers,
   which do not contribute to performance).

   For RTT, use 120 msec (80 msec for the Internet part, plus 20 msec
   each way for the new wireless link).

   For p, use .01. For the constant in the simple formula (0.93
   above), assume 1 to obtain a round-number upper bound.

The simple formula gives:

   BW = (1000 * 8 bits) / (.120 sec * sqrt(.01)) = 666 kbit/sec

The more complex formula gives:

   BW = 402.9 kbit/sec

If this were a 2 Mb/s wireless LAN, the designers might be somewhat disappointed.

Some observations on performance:

1. We have assumed that packet losses on the link layer are interpreted as congestion by TCP. This is a "fact of life" that must be accepted.

2. Note that the equations for TCP performance are all expressed in terms of packet loss, while many link-layer designers think in terms of bit-error rate (BER). *If* bit errors were uniformly and independently distributed, the probability of a packet being corrupted would be:

      p = 1 - ([1 - BER]^[MSS * 8])

   (here MSS is expressed in bytes). If the inequality

      BER * MSS * 8 << 1

   holds, p can be approximated by:

      p = BER * MSS * 8

   These equations can be used to apply BER to the performance equations above (see the sketch following this list). Note that links with Forward Error Correction (FEC) generally have very non-uniform bit-error distributions; the distribution is a strong function of the types and combinations of FEC algorithms used. In such cases these equations cannot be used to map BER to packet loss. If the error distribution under the FEC scheme is known, the same type of analysis could be applied using the correct distribution function. It is more likely in these FEC cases, however, that empirical methods will be needed to determine the actual packet loss rate.

3. Note that packet size plays an important role: larger packet sizes allow improved performance at the same *packet loss* rate. Assuming constant, uniform bit errors (instead of packet errors), and assuming the BER is small enough for the approximation p = BER*MSS*8 to apply, a simple derivation shows that larger packet sizes still result in increased TCP performance: plugging p = BER*MSS*8 into the simple formula gives BW = O(sqrt(MSS)), i.e., higher performance for larger packet sizes. For this reason (and others) it is advisable to support larger packet sizes where possible. If the approximation breaks down, and in particular if the BER is high enough that BER*MSS*8 approaches (or exceeds) 1, the packet loss rate p tends to 100%, resulting in zero throughput.
4. We have chosen a specific RTT that might occur on a wide-area Internet path within the USA. In the Internet, RTT varies considerably. In a wired LAN environment, RTTs are typically under 10 msec. International connections (between hosts in different countries) may have RTTs of 200 msec or more. Modems and other low-capacity links can add considerable delay to the end-to-end RTT because of their long packet transmission times. Links running over geostationary repeater satellites have one-way times of around 250 msec (125 msec up to the satellite, 125 msec down), so the RTT of an end-to-end TCP connection that includes such a link can be expected to exceed 250 msec. Heavily congested links may have queues that back up, increasing RTTs. Finally, VPNs and other forms of encryption and tunneling can add significant end-to-end delay to network connections.

   Increased delay decreases the overall performance of TCP at a given loss rate. A good rule of thumb is that you can't do anything about the laws of physics: the propagation delay cannot be reduced. Many link-layer designers, however, face a tradeoff in the other direction: adding delay to reduce the probability of packet loss (through FEC, ARQ, or other methods). Increasing the delay somewhat in order to decrease packet loss is probably a worthwhile investment; increases of up to double the existing delay, or, for very-low-delay links, an added 10-20 msec, will have little effect on a typical Internet path.
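The BER-to-packet-loss conversion from observation 2 can be written directly (Python). It is valid only under the uniform, independent bit-error assumption stated there, and does not apply to FEC links with bursty residual errors:

   # BER-to-packet-loss conversion from observation 2 above, assuming
   # uniformly distributed, independent bit errors (NOT valid for FEC
   # links with bursty residual errors).

   def packet_loss_from_ber(ber: float, mss_bytes: int) -> float:
       """Exact form: p = 1 - (1 - BER)^(MSS*8), MSS in bytes."""
       return 1 - (1 - ber) ** (mss_bytes * 8)

   def packet_loss_approx(ber: float, mss_bytes: int) -> float:
       """Approximation, valid when BER * MSS * 8 << 1."""
       return ber * mss_bytes * 8

   # Example: BER of 1e-6 on 1000-byte packets.
   print(packet_loss_from_ber(1e-6, 1000))  # ~0.00797
   print(packet_loss_approx(1e-6, 1000))    # 0.008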
Buffering, flow & congestion control [atm dropping individual cells in a packet means the entire packet must be dropped unless EPD/PPD is used] Compression Karn, Falk, Touch, Mahdavi, Montpetit & Montenegro [Page 15] INTERNET DRAFT October 22, 1999 User data compression is a function that can usually be omitted at the subnetwork layer. The endpoints typically have more CPU and memory resources to run a compression algorithm and a better understanding of what is being compressed. End-to-end compression benefits every network element in the path, while subnetwork-layer compression, by definition, benefits only a single subnetwork. Data presented to the subnetwork layer may already be in compressed format (e.g., a JPEG file), compressed at the application layer (e.g., the optional "gzip", "compress", and "deflate" compression in HTTP/1.1 [RFC2616]), or compressed at the IP layer (the IP Payload Compression Protocol [RFC2393] supports DEFLATE [RFC2394] and LZS [RFC2395]). In any of these cases, compression in the subnetwork is of no benefit. The subnetwork may also process data that has been encrypted at the application protocol layer (OpenPGP [RFC2440] or S/MIME [RFCs-2630-2634]), the transport layer (SSL, TLS [RFC2246]), or the IP layer (IPSEC ESP [RFC2406]). Ciphers generate random-looking bit streams lacking any patterns that can be exploited by a compression algorithm. If a subnetwork decides to implement user data compression, it must detect when the data is encrypted or already compressed and transmit it without further compression. This is important because most compression algorithms increase the size of encrypted data or data that has already been compressed. In contrast to user data compression, subnetworks that operate at low speed or with small packet size limits are encouraged to compress IP and transport-level headers (TCP and UDP). An uncompressed 40-byte TCP/IP header takes about 33 milliseconds to send at 9600 bps. "VJ" TCP/IP header compression [RFC1144] compresses most headers to 3-5 bytes, reducing transmission time to several milliseconds. This is especially beneficial for small, latency-sensitive packets, such as in interactive sessions. Designers should consider the effect of the subnetwork error rate on performance when considering header compression. TCP ordinarily recovers from lost packets by retransmitting only those packets that were actually lost; packets arriving correctly after a packet loss are kept on a resequencing queue and do not need to be retransmitted. In VJ TCP/IP [RFC1144] header compression, however, the receiver cannot explicitly notify a sender about data corruption and subsequent loss of synchronization between compressor and decompressor. It relies instead on TCP retransmission to resynchronize the decompressor. After a packet is lost, the decompressor must discard every subsequent packet, even if the Karn, Falk, Touch, Mahdavi, Montpetit & Montenegro [Page 16] INTERNET DRAFT October 22, 1999 subnetwork makes no further errors, until the sending TCP retransmits to resynchronize the decompressor. This effect can substantially magnify the effect of subnetwork packet losses if the sending TCP window is large, as it will often be on a path with a large bandwidth*delay product. Alternative header compression schemes such as those described in [RFC2507] include an explicit request for retransmission of an uncompressed packet to allow decompressor resynchronization without waiting for a TCP retransmission. 
However, these schemes are not yet in widespread use.

Packet Reordering

The Internet architecture does not guarantee that packets will arrive in the order in which they were originally transmitted, and transport protocols like TCP must take this into account. However, we recommend that subnetworks not gratuitously deliver packets out of sequence.

Since TCP returns a cumulative acknowledgment (ACK) indicating the last in-order segment that has arrived, out-of-order segments cause a TCP receiver to transmit duplicate acknowledgments. When the TCP sender notices three duplicate acknowledgments, it assumes that a segment was dropped by the network and uses the fast retransmit algorithm [Jac90] [APS99] to resend the segment. In addition, the congestion window is reduced by half, effectively halving TCP's sending rate. If a subnetwork reorders segments badly enough that three duplicate ACKs are generated, the TCP sender needlessly reduces the congestion window, and therefore performance. The toy example below illustrates the effect.
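A toy cumulative-ACK receiver makes the effect concrete: delaying one segment past three later segments produces the three duplicate ACKs that trigger fast retransmit, even though nothing was lost:

   # Toy cumulative-ACK receiver: shows how a subnetwork that reorders
   # a segment by three or more positions produces the three duplicate
   # ACKs that trigger TCP's fast retransmit, even though nothing was
   # actually lost.

   def acks_for(arrival_order):
       """Return the cumulative ACK emitted for each arriving segment."""
       received, next_expected, acks = set(), 0, []
       for seg in arrival_order:
           received.add(seg)
           while next_expected in received:
               next_expected += 1
           acks.append(next_expected)  # ACK = next in-order seg expected
       return acks

   # Segment 2 delayed past three later segments:
   acks = acks_for([0, 1, 3, 4, 5, 2, 6])
   print(acks)                          # [1, 2, 2, 2, 2, 6, 7]
   dupacks = max(acks.count(a) - 1 for a in set(acks))
   print(dupacks >= 3)                  # True: sender falsely infers loss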
Mobility

[best provided at a higher layer, for performance and flexibility reasons, but some subnet mobility can be a convenience as long as it's not too inefficient with routing]

Multicasting

As with broadcast and discovery, multicast is more efficient on shared links where it is supported natively. Native multicast support requires a reasonable number (?? - over 10, under 1000?) of separate link-layer broadcast addresses. One such address SHOULD be reserved for native link broadcast; other addresses SHOULD be provided to support separate multicast groups (and there SHOULD be at least 10?? such addresses).

The other criterion for native multicast is a link-layer filter that can select individual broadcast addresses or sets of them. Such link filters avoid having every host parse every multicast message in the driver; a host receives, at the network layer, only those packets that pass its configured link filters. A shared link SHOULD support multiple, programmable link filters, to support efficient native multicast.

[Multicasting can be simulated over unicast subnets by sending multiple copies of packets, but this is wasteful. If the subnet can support native multicasting in an efficient way, it should do so.]

Broadcasting and Discovery

Link layers fall into two categories: point-to-point and shared link. A point-to-point link has exactly two endpoint components (hosts or gateways); a shared link has more than two, either on an inherently broadcast medium (e.g., Ethernet, radio) or on a switching layer hidden from the network layer (switched Ethernet, Myrinet, ATM).

A number of Internet protocols make use of link-layer broadcast capabilities, including link-layer address lookup (ARP), auto-configuration (RARP, BOOTP, DHCP), and routing (RIP). These protocols require broadcast-capable links. Shared links SHOULD support native, link-layer subnet broadcast; the lack of broadcast can impede the performance of these protocols, or in some cases render them inoperable. ARP-like link address lookup can be provided by a centralized database rather than by owner response to broadcast queries, at the expense of potentially higher response latency and the need for explicit knowledge of the ARP server address (no automatic ARP discovery). For other protocols, such as DHCP, a link without broadcast renders them inoperable.

Routing

[what is proper division between routing at the Internet layer and routing in the subnet? Is it useful or helpful to Internet routing to have subnetworks that provide their own internal routing?]

Security

[Security mechanisms should be placed as close as possible to the entities that they protect. E.g., mechanisms that protect host computers or users should be implemented at the higher layers and operate on an end-to-end basis under control of the users. This makes subnet security mechanisms largely redundant unless they are to protect the subnet itself, e.g., against unauthorized use.]

References

References of the form RFCnnnn are Internet Request for Comments (RFC) documents, available online at www.rfc-editor.org.

[APS99] M. Allman, V. Paxson, W. R. Stevens. "TCP Congestion Control". RFC 2581, April 1999.

[BPK98] H. Balakrishnan, V. Padmanabhan, R. H. Katz. "The Effects of Asymmetry on TCP Performance". ACM Mobile Networks and Applications (MONET), 1998.

[Bra89] R. Braden, editor. "Requirements for Internet Hosts -- Communication Layers". RFC 1122, October 1989.

[Jac90] V. Jacobson. "Modified TCP Congestion Avoidance Algorithm". Email to the end2end-interest mailing list, April 1990. URL: ftp://ftp.ee.lbl.gov/email/vanj.90apr30.txt.

[SRC81] J. H. Saltzer, D. P. Reed, D. D. Clark. "End-to-End Arguments in System Design". Second International Conference on Distributed Computing Systems (April 1981), pages 509-512. Published with minor changes in ACM Transactions on Computer Systems 2, 4 (November 1984), pages 277-288. Reprinted in: Craig Partridge, editor, Innovations in Internetworking, Artech House, Norwood, MA, 1988, pages 195-206, ISBN 0-89006-337-0. Also scheduled to be reprinted in: Amit Bhargava, editor, Integrated Broadband Networks, Artech House, Boston, 1991, ISBN 0-89006-483-0. http://people.qualcomm.com/karn/library.html.

[RFC791] J. Postel. "Internet Protocol". RFC 791, September 1981.

[RFC1144] V. Jacobson. "Compressing TCP/IP Headers for Low-Speed Serial Links". RFC 1144, February 1990.

[RFC1191] J. Mogul, S. Deering. "Path MTU Discovery". RFC 1191, November 1990.

[RFC1435] S. Knowles. "IESG Advice from Experience with Path MTU Discovery". RFC 1435, March 1993.

[RFC1577] M. Laubach. "Classical IP and ARP over ATM". RFC 1577, January 1994.

[RFC1661] W. Simpson. "The Point-to-Point Protocol (PPP)". RFC 1661, July 1994.

[RFC1981] J. McCann, S. Deering, J. Mogul. "Path MTU Discovery for IP version 6". RFC 1981, August 1996.

[RFC2364] G. Gross et al. "PPP Over AAL5". RFC 2364, July 1998.

[RFC2393] A. Shacham et al. "IP Payload Compression Protocol (IPComp)". RFC 2393, December 1998.

[RFC2394] R. Pereira. "IP Payload Compression Using DEFLATE". RFC 2394, December 1998.

[RFC2395] R. Friend, R. Monsour. "IP Payload Compression Using LZS". RFC 2395, December 1998.

[RFC2440] J. Callas et al. "OpenPGP Message Format". RFC 2440, November 1998.

[RFC2246] T. Dierks, C. Allen. "The TLS Protocol Version 1.0". RFC 2246, January 1999.

[RFC2507] M. Degermark, B. Nordgren, S. Pink. "IP Header Compression". RFC 2507, February 1999.

[RFC2508] S. Casner, V. Jacobson. "Compressing IP/UDP/RTP Headers for Low-Speed Serial Links". RFC 2508, February 1999.

[RFC2581] M. Allman, V. Paxson, W. Stevens. "TCP Congestion Control". RFC 2581, April 1999.

[RFC2406] S. Kent, R. Atkinson. "IP Encapsulating Security Payload (ESP)". RFC 2406, November 1998.

[RFC2616] R. Fielding et al. "Hypertext Transfer Protocol -- HTTP/1.1". RFC 2616, June 1999.

[RFC2684] D. Grossman, J. Heinanen. "Multiprotocol Encapsulation over ATM Adaptation Layer 5". RFC 2684, September 1999.
[PFTK98] J. Padhye, V. Firoiu, D. Towsley, J. Kurose. "Modeling TCP Throughput: A Simple Model and its Empirical Validation". UMass CMPSCI Tech Report TR98-008, February 1998.

[MSMO97] M. Mathis, J. Semke, J. Mahdavi, T. Ott. "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm". Computer Communication Review, volume 27, number 3, July 1997.

[OKM96] T. Ott, J. H. B. Kemperman, M. Mathis. "The Stationary Behavior of Ideal TCP Congestion Avoidance". ftp://ftp.bellcore.com/pub/tjo/TCPwindow.ps

[RED93] S. Floyd, V. Jacobson. "Random Early Detection Gateways for Congestion Avoidance". IEEE/ACM Transactions on Networking, V.1 N.4, August 1993. http://www.aciri.org/floyd/papers/red/red.html

[Stevens94] W. R. Stevens. "TCP/IP Illustrated, Volume 1". Addison-Wesley, 1994 (section 2.10).

Security Considerations

[comment here]

Authors' Addresses:

   Phil Karn (karn@qualcomm.com)
   Aaron Falk (afalk@panamsat.com)
   Joe Touch (touch@isi.edu)
   Marie-Jose Montpetit (marie@teledesic.com)
   Jamshid Mahdavi (mahdavi@novell.com)
   Gabriel Montenegro (Gabriel.Montenegro@eng.sun.com)