Internet Engineering Task Force                                 M. Welzl
Internet-Draft                                        University of Oslo
Intended status: Experimental                             D. Damjanovic
Expires: February 1, 2011                        University of Innsbruck
                                                             S. Gjessing
                                                      University of Oslo
                                                           July 31, 2010


                  MulTFRC: TFRC with weighted fairness
                    draft-irtf-iccrg-multfrc-01.txt

Abstract

This document specifies the MulTFRC congestion control mechanism. MulTFRC is derived from TFRC and can be implemented by carrying out a few small and simple changes to the original mechanism. Its behavior differs from TFRC in that it emulates a number of TFRC flows with more flexibility than what would be practical or even possible using multiple real TFRC flows. Additionally, MulTFRC better preserves the original features of TFRC than multiple real TFRCs do.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”

This Internet-Draft will expire on February 1, 2011.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.



Table of Contents

1.  Introduction
2.  Specification
    2.1.  Section 3 of RFC 5348
    2.2.  Section 4 of RFC 5348
    2.3.  Section 5 of RFC 5348
    2.4.  Section 6 of RFC 5348
    2.5.  Section 8 of RFC 5348
    2.6.  Appendix A of RFC 5348
3.  Usage Considerations
    3.1.  Which applications could use MulTFRC?
    3.2.  Setting N
4.  Security Considerations
5.  Acknowledgements
6.  References
    6.1.  Normative References
    6.2.  Informative References
Appendix A.  X_Bps implementation considerations
Authors' Addresses





1.  Introduction

"TCP-friendliness", the requirement for a flow to behave under congestion like a flow produced by a conformant TCP (introduced by the name of "TCP-compatibility" in [RFC2309] (Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, “Recommendations on Queue Management and Congestion Avoidance in the Internet,” April 1998.)), has been put into question in recent years (cf. [Bri07] (Briscoe, B., “Flow rate fairness: dismantling a religion,” April 2007.)). As an illustrative example, consider the fact that not all data transfers are of equal importance to a user. A user may therefore want to assign different priorities to different flows between two hosts, but TCP(-friendly) congestion control would always let these flows use the same sending rate. Users and their applications are now already bypassing TCP-friendliness in practice: since multiple TCP flows can better saturate a bottleneck than a single one, some applications open multiple connections as a simple workaround. The "GridFTP" (Allcock, W., “GridFTP: Protocol Extensions to FTP for the Grid,” 2003.) [All03] protocol explicitly provides this function as a performance improvement.

Some research efforts were therefore carried out to develop protocols where a weight can directly be applied to the congestion control mechanism, allowing a flow to be as aggressive as a number of parallel TCP flows at the same time. The first, and best known, such protocol is MulTCP [Cro+98], which emulates N TCPs in a rather simple fashion. Improved versions were later published, e.g. Stochastic TCP [Hac+04] and Probe-Aided (PA-)MulTCP [Kuo+08]. These protocols could be called "N-TCP-friendly", i.e. as TCP-friendly as N TCPs.

MulTFRC, defined in this document, does with TFRC [RFC5348] what MulTCP does with TCP. In [Dam+09] and [Dam10], it was shown that MulTFRC achieves its goal of emulating N flows better than MulTCP (and its improved versions) and has a number of other benefits. For instance, MulTFRC with N=2 is more reactive than two real TFRC flows are, and it has a smoother sending rate than two real TFRC flows do. Moreover, since it is only one mechanism, a protocol that uses MulTFRC can send a single data stream with the congestion control behavior of multiple data streams, without the need to split the data and spread it over separate connections. Depending on the protocol in use, N real TFRC flows can also be expected to have N times the overhead of a MulTFRC flow with the same value of N for, e.g., connection setup and teardown.

The core idea of TFRC is to achieve TCP-friendliness by explicitly calculating an equation that approximates the steady-state throughput of TCP and sending no more than the calculated rate. The core idea of MulTFRC is to replace this equation in TFRC with the algorithm from [Dam+08] and [Dam10], which approximates the steady-state throughput of N TCP flows. MulTFRC can be implemented via a few simple changes to the TFRC code. It is therefore defined here by specifying how it differs from the TFRC specification [RFC5348].




2.  Specification

This section lists the changes to [RFC5348] that must be applied to turn TFRC into MulTFRC. The small number of changes ensures that many original properties of a single TFRC flow are preserved, which is often the most appropriate choice (e.g. it would probably not make sense for a MulTFRC flow to detect a data-limited interval differently than a single TFRC flow would). It also makes MulTFRC easy to understand and implement. Experiments have shown that these changes are enough to attain the desired effect.




2.1.  Section 3 of RFC 5348

While the TCP throughput equation requires the loss event rate, round-trip time and segment size as input, the algorithm to be used for MulTFRC additionally needs the number of packets lost in a loss event. The equation, specified in [RFC5348] as

                             s
X_Bps = ----------------------------------------------------------
        R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8)*p*(1+32*p^2)))

is replaced with the following algorithm, which returns X_Bps, the average transmit rate of N TCPs in bytes per second:

If (N < 12) {
    af = N * (1 - (1 - 1/N)^j);
}
Else {
    af = j;
}
af = max(min(af, ceil(N)), 1);
a = p*b*af*(24*N^2 + p*b*af*(N - 2*af)^2);
x = (af*p*b*(2*af - N) + sqrt(a)) / (6*N^2*p);
z = t_RTO*(1 + 32*p^2)/(1 - p);
q = min(2*j*b*z/(R*(1 + 3*N/j)*x^2), N*z/(x*R), N);
X_Bps = ((1 - q/N)/(p*x*R) + q/(z*(1 - p)))*s;

Where:

s is the segment size in bytes (excluding IP and transport protocol headers).

R is the round-trip time in seconds.

b is the maximum number of packets acknowledged by a single TCP acknowledgement.

p is the loss event rate, between 0 and 1.0, of the number of loss events as a fraction of the number of packets transmitted.

j is the number of packets lost in a loss event.

t_RTO is the TCP retransmission timeout value in seconds.

N is the number of TFRC flows that MulTFRC should emulate. N is a positive rational number; a discussion of appropriate values for this parameter, and reasons for choosing them, is provided in Section 3.2 (Setting N).

ceil(N) gives the smallest integer greater than or equal to N.

x, af, a, z and q are temporary floating point variables.

Appendix A (X_Bps implementation considerations) contains an argument that shows why the calculations in the algorithm will not overflow, underflow or produce significant rounding errors.
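The following Python sketch of the above algorithm is for illustration only and is not part of the specification; it assumes 0 < p < 1, N > 0 and j >= 1, and uses the variable names defined above:

import math

def multfrc_x_bps(s, R, b, p, j, t_RTO, N):
    # Sketch of the MulTFRC average transmit rate calculation (Section 2.1).
    # s: segment size in bytes, R: RTT in seconds, b: packets per ACK,
    # p: loss event rate (0 < p < 1), j: packets lost per loss event,
    # t_RTO: retransmission timeout in seconds, N: number of emulated flows.
    if N < 12:
        af = N * (1 - (1 - 1.0 / N) ** j)
    else:
        af = j
    af = max(min(af, math.ceil(N)), 1)
    a = p * b * af * (24 * N ** 2 + p * b * af * (N - 2 * af) ** 2)
    x = (af * p * b * (2 * af - N) + math.sqrt(a)) / (6 * N ** 2 * p)
    z = t_RTO * (1 + 32 * p ** 2) / (1 - p)
    q = min(2 * j * b * z / (R * (1 + 3 * N / j) * x ** 2),
            N * z / (x * R),
            N)
    return ((1 - q / N) / (p * x * R) + q / (z * (1 - p))) * s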

Section 3.1 of [RFC5348] contains a recommendation for setting the t_RTO parameter, and a simplification of the equation as a result of setting this parameter in a specific way. This part of the TFRC specification could be applied here too. Section 3.1 of [RFC5348] also contains a discussion of the parameter b for delayed acknowledgements and concludes that the use of b=1 is RECOMMENDED. This is also the case for MulTFRC.

Section 3.2.2 of [RFC5348] specifies the contents of feedback packets. In addition to the information listed there, a MulTFRC feedback packet also carries j, the number of packets lost in a loss event.




2.2.  Section 4 of RFC 5348

The procedure for updating the allowed sending rate in section 4.3 of [RFC5348] ("action 4") contains the statement:

Calculate X_Bps using the TCP throughput equation.

which is replaced with the statement:

If (p==1) {
    X_Bps=s*N/t_mbi;
} Else {
    Calculate X_Bps using the algorithm defined in section 3.
}
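
As an illustration only (not part of the specification), a sender-side implementation might wrap the rate calculation as in the following Python sketch, where multfrc_x_bps() is the sketch from Section 2.1 and t_mbi is the maximum backoff interval of 64 seconds defined in [RFC5348]:

t_mbi = 64  # maximum backoff interval in seconds, as in RFC 5348

def allowed_rate(s, R, b, p, j, t_RTO, N):
    # With p == 1, every packet is lost or marked; fall back to sending
    # N segments per t_mbi instead of evaluating the algorithm of Section 3.
    if p == 1:
        return s * N / t_mbi
    return multfrc_x_bps(s, R, b, p, j, t_RTO, N)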




2.3.  Section 5 of RFC 5348

Section 5.2 of [RFC5348] explains how a lost packet that starts a new loss event should be distinguished from a lost packet that is part of the previous loss event interval. Here, the number of packets lost in a loss event is additionally counted; this section is therefore extended with:

If S_new is a part of the current loss interval, LP_0 (the number of lost packets in the current interval) is increased by 1. On the other hand, if S_new starts a new loss event, LP_0 is set to 1.

Section 5.4 of [RFC5348] contains the algorithm for calculating the average loss interval that is needed for the calculation of the loss event rate, p. MulTFRC also requires the number of lost packets in a loss event, j. In [RFC5348], the calculation of the average loss interval is done using a filter that weights the n most recent loss event intervals, and setting n to 8 is RECOMMENDED. The same algorithm is used here for calculating the average loss interval. For the number of lost packets in a loss event interval, j, the weighted average number of lost packets in the n most recent loss intervals is taken, using the same filter.

For calculating the average number of packets lost in a loss event interval we use the same loss intervals as for the p calculation. Let LP_0 to LP_k be the number of lost packets in the k most recent loss intervals. The algorithm for calculating I_mean in Section 5.4 of [RFC5348] (page 23) is extended by adding, after the last line ("p = 1 / I_mean;"):

LP_tot = 0;
If (I_tot0 > I_tot1) {
    for (i = 0 to k-1) {
        LP_tot = LP_tot + (LP_i * w_i);
    }
}
Else {
    for (i = 1 to k) {
        LP_tot = LP_tot + (LP_i * w_(i-1));
    }
}
j = LP_tot/W_tot;
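
The following Python sketch (illustrative only) shows how p and j are obtained from the same filtered loss interval history, assuming the n = 8 weights recommended in [RFC5348] and no history discounting:

W = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]  # RFC 5348 weights for n = 8

def loss_event_rate_and_j(I, LP):
    # I[i]:  length of loss interval i (I[0] is the current, open interval).
    # LP[i]: number of packets lost in loss interval i.
    # Both lists need at least n+1 entries. Returns (p, j).
    n = len(W)
    W_tot = sum(W)
    I_tot0 = sum(I[i] * W[i] for i in range(n))             # intervals 0..n-1
    I_tot1 = sum(I[i] * W[i - 1] for i in range(1, n + 1))  # intervals 1..n
    I_mean = max(I_tot0, I_tot1) / W_tot
    p = 1.0 / I_mean
    if I_tot0 > I_tot1:
        LP_tot = sum(LP[i] * W[i] for i in range(n))
    else:
        LP_tot = sum(LP[i] * W[i - 1] for i in range(1, n + 1))
    j = LP_tot / W_tot
    return p, j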

In section 5.5 of [RFC5348] (page 25), the algorithm that ends with "p = min(W_tot0/I_tot0, W_tot1/I_tot1);" is extended by adding:

LP_tot = 0;
If (I_tot0 > I_tot1) {
    for (i = 0 to k-1) {
        LP_tot = LP_tot + (LP_i * w_i * DF_i * DF);
    }
    j = LP_tot/W_tot0;
}
Else {
    for (i = 1 to k) {
        LP_tot = LP_tot + (LP_i * w_(i-1) * DF_i);
    }
    j = LP_tot/W_tot1;
}
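
A corresponding sketch of the discounted variant (illustrative only; DF_i and DF are the history discount factors of Section 5.5 of [RFC5348], and I_tot0, I_tot1, W_tot0 and W_tot1 are computed there) might look like:

def average_lost_packets_discounted(LP, w, DF_i, DF,
                                    I_tot0, I_tot1, W_tot0, W_tot1):
    # Same weighted average of lost packets as above, but applying the
    # history discount factors, mirroring the extended Section 5.5 algorithm.
    n = len(w)
    if I_tot0 > I_tot1:
        LP_tot = sum(LP[i] * w[i] * DF_i[i] * DF for i in range(n))
        return LP_tot / W_tot0
    LP_tot = sum(LP[i] * w[i - 1] * DF_i[i] for i in range(1, n + 1))
    return LP_tot / W_tot1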




2.4.  Section 6 of RFC 5348

The steps to be carried out by the receiver when a packet is received, listed in section 6.1 of [RFC5348], contain the statement:

3)  Calculate p: Let the previous value of p be p_prev.  Calculate
the new value of p as described in Section 5.

which is replaced with the statement:

3)  Calculate p and j: Let the previous values of p and j be p_prev
and j_prev.  Calculate the new values of p and j as described in
Section 5.

The steps to be carried out by the receiver upon expiration of the feedback timer in section 6.2 of [RFC5348] ("action 1") contain the statement:

1)  Calculate the average loss event rate using the algorithm
       described in Section 5.

which is replaced with:

1)  Calculate the average loss event rate and average number of
lost packets in a loss event using the algorithm described in
Section 5.

This statement is added at the beginning of the list of initial steps to take when the first packet is received, in section 6.3 of [RFC5348]:

Section 6.3.1 of [RFC5348] discusses how the loss history is initialized after the first loss event. TFRC approximates the rate to be the maximum value of X_recv so far, assuming that a higher rate introduces loss. Therefore, j for this rate is approximated by 1 and the number of packets lost in the first interval is set to 1. This is accomplished by the following change. The first sentence of the fourth paragraph (in section 6.3.1) is:

TFRC does this by finding some value, p, for which the throughput
equation in Section 3.1 gives a sending rate within 5% of X_target,
given the round-trip time R, and the first loss interval is then set
to 1/p.

which is replaced with:

TFRC does this by finding some value, p, for which the throughput
equation in Section 3.1 gives a sending rate within 5% of X_target,
given the round-trip time R, and j equal to 1. The first loss
interval is then set to 1/p.

The second-to-last paragraph in section 6.3.1 ends with:

Thus, the TFRC receiver calculates the loss interval that would be
required to produce the target rate X_target of 0.5/R packets per
second, for the round-trip time R, and uses this synthetic loss
interval for the first loss interval.  The TFRC receiver uses 0.5/R
packets per second as the minimum value for X_target when
initializing the first loss interval.

which is replaced with:

Thus, the TFRC receiver calculates the loss interval that would be
required to produce the target rate X_target of 0.5/R packets per
second, for the round-trip time R, and for j equal to 1. This
synthetic loss interval is used for the first loss interval. The TFRC
receiver uses 0.5/R packets per second as the minimum value for
X_target when initializing the first loss interval.




2.5.  Section 8 of RFC 5348

Section 8.1 of [RFC5348] explains details about calculating the original TCP throughput equation, which is replaced with a new algorithm in this document. It is therefore obsolete for MulTFRC.




2.6.  Appendix A of RFC 5348

Appendix A of [RFC5348] provides a terminology list for TFRC, which is extended as follows:

N:  number of emulated TFRC flows.

j:  number of packets lost in a loss event.




3.  Usage Considerations

The "weighted fairness" service provided by a protocol using MulTFRC is quite different from the service provided by traditional Internet transport protocols. This section intends to answer some questions that this new service may raise.




3.1.  Which applications could use MulTFRC?

Like TFRC, MulTFRC is suitable for applications that require a smoother sending rate than standard TCP provides. Since these are likely to be multimedia applications, TFRC has largely been associated with them (and [RFC5348] mentions "streaming media" as an example). Since timely transmission is often more important to them than reliability, multimedia applications usually do not keep retransmitting packets until their successful delivery is ensured. Accordingly, TFRC usage was specified for the Datagram Congestion Control Protocol (DCCP) [RFC4342], but not for reliable data transfers.

MulTFRC, on the other hand, provides an altogether different service. For some applications, a smoother sending rate may not be particularly desirable, but it might also not be considered harmful, while the ability to emulate the congestion control of N flows may be useful to them. This could include reliable transfers such as the transmission of files. Possible reasons to use MulTFRC for file transfers include the assignment of priorities according to user preferences, increased efficiency with N > 1, the implementation of low-priority "scavenger" services, and resource pooling [Wis+08].




3.2.  Setting N

N MUST be set at the beginning of a transfer; it MUST NOT be changed while a transfer is ongoing. The effects of changing N during the lifetime of a MulTFRC session on the dynamics of the mechanism are yet to be investigated; in particular, it is unclear how often N could safely be changed, and how "safely" should be defined in this context. Further research is required to answer these questions.

N is a positive floating point number which can also take values between 0 and 1, making MulTFRC applicable as a mechanism for what has been called a "Lower-than-Best-Effort" (LBE) service. Since it does not reduce its sending rate early, when delay increases, as some alternative proposals for such a service do (e.g. TCP-LP [Kuz+06], TCP Nice [Ven+02] or 4CP [Liu+07]), it can be expected to be more aggressive than these mechanisms when sharing a bottleneck with them. This, however, also means that MulTFRC is less likely to be prone to starvation. Values between 0 and 1 could also be useful if MulTFRC is used across multiple paths to realize resource pooling [Wis+08].

Setting N to 1 is also possible. In this case, the only difference between TFRC and MulTFRC is that the underlying model of TFRC assumes that, within a "round" (less than one round-trip time apart), all packets following a dropped packet are also dropped, whereas the underlying model of MulTFRC does not make this assumption. Whether the assumption is correct depends on the specific network situation; large windows and queuing schemes other than Drop-Tail make it less likely for the assumption to match reality. This document does not make any recommendation about which mechanism to use if only one flow is desired.

Since TCP has been extensively studied, and the aggressiveness of its congestion control mechanism is emulated by TFRC, we can look at the behavior of a TCP aggregate in order to find a reasonable upper limit for N in MulTFRC. From [Alt+06], N TCPs (assuming non-synchronized loss events across connections) can saturate a bottleneck link to roughly 100-100/(1+3N) percent. This means that a single flow can only achieve 75% utilization, whereas 3 flows already achieve 90%. The theoretical gain that can be achieved by adding a flow declines with the total number of flows - e.g., while going from 1 to 2 flows is a 14.3% performance gain, the gain becomes less than 1% beyond 6 flows (which already achieve roughly 95% link utilization). Since the link utilization of MulTFRC can be expected to be roughly the same as that of multiple TCPs, this approximation also holds for MulTFRC. Thus, setting N to a much larger value than the values mentioned above yields only a marginal benefit in isolation but can significantly affect other traffic. Therefore, the maximum value that a user can set for N SHOULD NOT exceed 6.
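
As a quick illustration of this approximation (not part of the specification), the estimated utilization for small values of N can be computed directly:

# Approximate bottleneck utilization of N parallel TCPs with non-synchronized
# loss events, from [Alt+06]: 100 - 100/(1 + 3*N) percent.
for N in range(1, 8):
    print(N, round(100 - 100.0 / (1 + 3 * N), 1))
# Prints: 1 75.0, 2 85.7, 3 90.0, 4 92.3, 5 93.8, 6 94.7, 7 95.5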

Note that the model in [Alt+06], and hence the above discussion, considers the long-term steady-state behavior of TCP, which may not always be seen when the bandwidth*delay product is very large [RFC3649]. This is due to TCP's slow congestion window growth in the congestion avoidance phase. While a MulTFRC flow with N > 1 can generally be expected to outperform a single standard TCP flow if N is large enough, such usage of MulTFRC is not recommended as a fix for the problem of saturating paths with a very large bandwidth*delay product: in order to always achieve good bottleneck utilization with MulTFRC under such conditions, N would have to be a function of the bandwidth*delay product. In other words, the mechanism does not scale with bandwidth and delay; very large bandwidth or delay values may require very large values for N, leading to behavior that is overly aggressive yet possibly worse in terms of performance than mechanisms such as HighSpeed TCP [RFC3649], which are specifically designed for such situations. The same argument applies to running multiple TCP flows, as in [All03].




4.  Security Considerations

It is well known that a single uncontrolled UDP flow can cause significant harm to a large number of TCP flows that share the same bottleneck. This potential danger is due to the total lack of congestion control in UDP. Because this problem is well known, and because UDP is easy to detect, UDP traffic will often be rate limited by service providers.

If MulTFRC is used within a protocol such as DCCP, which will normally not be considered harmful and will therefore typically not be rate-limited, its tunable aggressiveness could theoretically make it usable for a Denial-of-Service (DoS) attack. In order to avoid such usage, the maximum value of N MUST be restricted. If, as recommended in this document, the maximum value for N is restricted to 6, the impact of MulTFRC on TCP is roughly the same as the impact of 6 TCP flows would be. The combined congestion control behavior of 6 TCPs is clearly far from constituting such an attack.

With transport protocols such as TCP, SCTP or DCCP, users can already be more aggressive than others by opening multiple connections. If MulTFRC is used within a transport protocol, this effect becomes more pronounced - e.g., 2 connections with N set to 6 each exhibit roughly the same congestion control behavior as 12 TCP flows. The limit on N SHOULD therefore be implemented as a system-wide parameter such that the sum of the N values of all MulTFRC connections does not exceed it. Alternatively, the number of connections that can be opened could be restricted.




5.  Acknowledgements

This work was partially funded by the EU IST project EC-GIN under the contract STREP FP6-2006-IST-045256.

The authors would like to thank the following people whose feedback and comments contributed to this document (in alphabetic order): Lachlan Andrew, Dirceu Cavendish, Soo-Hyun Choi, Wes Eddy.




6.  References




6.1. Normative References

[RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, “TCP Friendly Rate Control (TFRC): Protocol Specification,” RFC 5348, September 2008.



6.2. Informative References

[All03] Allcock, W., “GridFTP: Protocol Extensions to FTP for the Grid,” Open Grid Forum Document GFD.20, 2003.
[Alt+06] Altman, E., Barman, D., Tuffin, B., and M. Vojnovic, “Parallel TCP Sockets: Simple Model, Throughput and Validation,” Proceedings of Infocom 2006, April 2006.
[Bri07] Briscoe, B., “Flow rate fairness: dismantling a religion,” ACM SIGCOMM Computer Communication Review, vol. 37, no. 2, pp. 63-74, April 2007.
[Cro+98] Crowcroft, J. and P. Oechslin, “Differentiated end-to-end Internet services using a weighted proportional fair sharing TCP,” ACM SIGCOMM Computer Communication Review, vol. 28, no. 3, pp. 53-69, July 1998.
[Dam+08] Damjanovic, D., Welzl, M., Telek, M., and W. Heiss, “Extending the TCP Steady-State Throughput Equation for Parallel TCP Flows,” University of Innsbruck, Institute of Computer Science, DPS NSG Technical Report 2, August 2008.
[Dam+09] Damjanovic, D. and M. Welzl, “MulTFRC: Providing Weighted Fairness for Multimedia Applications (and others too!),” ACM SIGCOMM Computer Communication Review, vol. 39, no. 3, July 2009.
[Dam10] Damjanovic, D., “Parallel TCP Data Transfers: A Practical Model and its Application,” Ph.D. thesis, University of Innsbruck, Austria, February 2010.
[Hac+04] Hacker, T., Noble, B., and B. Athey, “Improving Throughput and Maintaining Fairness using Parallel TCP,” Proceedings of Infocom 2004, March 2004.
[Kuo+08] Kuo, F. and X. Fu, “Probe-Aided MulTCP: an aggregate congestion control mechanism,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 1, pp. 17-28, January 2008.
[Kuz+06] Kuzmanovic, A. and E. Knightly, “TCP-LP: low-priority service via end-point congestion control,” IEEE/ACM Transactions on Networking, vol. 14, no. 4, pp. 739-752, August 2006.
[Liu+07] Liu, S., Vojnovic, M., and D. Gunawardena, “Competitive and Considerate Congestion Control for Bulk Data Transfers,” Proceedings of IWQoS 2007, June 2007.
[RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S., Wroclawski, J., and L. Zhang, “Recommendations on Queue Management and Congestion Avoidance in the Internet,” RFC 2309, April 1998.
[RFC3649] Floyd, S., “HighSpeed TCP for Large Congestion Windows,” RFC 3649, December 2003.
[RFC4342] Floyd, S., Kohler, E., and J. Padhye, “Profile for Datagram Congestion Control Protocol (DCCP) Congestion Control ID 3: TCP-Friendly Rate Control (TFRC),” RFC 4342, March 2006.
[Ven+02] Venkataramani, A., Kokku, R., and M. Dahlin, “TCP Nice: a mechanism for background transfers,” Proceedings of OSDI '02, 2002.
[Wis+08] Wischik, D., Handley, M., and M. Braun, “The Resource Pooling Principle,” ACM SIGCOMM Computer Communication Review, vol. 38, no. 5, October 2008.



Appendix A.  X_Bps implementation considerations

In this appendix, we show that the algorithm for calculating X_Bps in Section 2.1 (Section 3 of RFC 5348) contains integer and floating point arithmetic that will not give underflow, overflow or rounding errors that adversely affect the result. Note that the algorithm is not invoked when p == 1. p is computed as described in section 5.4 of [RFC5348]. If p is equal to 1, there is exactly one packet in each loss interval (and this will be an ECN-marked packet).

Assuming that the other parameters are within their conceivable ranges, the calculation of X_Bps in Section 2.1 (Section 3 of RFC 5348) could lead to arithmetic errors or large imprecision only if p is very close to 1. However, this will never happen, because p is calculated as 1/I_mean in section 5.4 of [RFC5348]. The algorithm that calculates I_mean does so by taking a weighted average of the number of packets in the last n loss intervals. If n == 8 and the weights are set as defined in [RFC5348], the smallest possible value of I_mean greater than 1 is 1.033333. The reason is that the number of packets in a loss interval is a positive integer, and the smallest value (not equal to 1) is obtained when all recent intervals have length 1 and only the oldest interval (number 8 in this case) has length 2. All other loss interval sizes and smaller values of n give higher values for I_mean and hence lower values for p.
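
This bound can be checked numerically (a quick illustration only, using the n = 8 weights from [RFC5348]):

w = [1.0, 1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2]  # RFC 5348 weights, n = 8
intervals = [1, 1, 1, 1, 1, 1, 1, 2]          # all of length 1, oldest of length 2
I_mean = sum(i * wi for i, wi in zip(intervals, w)) / sum(w)
print(I_mean, 1 / I_mean)  # 1.0333..., so p stays below roughly 0.968 when p != 1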

Below we analyze the algorithm that calculates X_Bps and show that it will always execute without overflow, underflow or large rounding errors. In this analysis we assume that no parameter has an improper value. We say that a calculation is "sound" when no overflow, underflow or significant rounding error can occur during the calculation.

In the first lines of the algorithm, af is calculated, and it is easy to see that this calculation is sound and that the value of af will end up between 1 and N+1. In the calculation of a, N may be equal to or close to 2*af, and then the expression (p*b*af*(N-2*af)^2) may evaluate to a number very close to 0. However, (24*N^2) is added, and hence the calculation of a is sound, as the values of p, b and N are neither extremely small nor extremely large. Note that a becomes smaller as p becomes smaller.

When x is calculated, there is no problem in taking the square root of a and adding it to (af*p*b*(2*af-N)), which again might be a very small value. The final calculation of x involves a division by (6*N^2*p). However, when p is small, the dividend is not very large either (it is dominated by p and a); hence, the calculation of x is sound and the result will be neither a very large nor a very small number. However, we need to show that x cannot be 0 or negative; otherwise, the algorithm would divide by 0 or, if x were negative, the calculation of X_Bps could produce a negative value.

We show that x is a positive number (and not 0) by defining W = p*b*af, Y = 2*af-N and D = 6*N^2*p. The assignment to x can then be written as x = (W*Y + sqrt(W*24*N^2 + W^2*Y^2)) / D. If we subtract W*24*N^2 from the argument of the sqrt function, we get an expression that is strictly smaller; hence, after the assignment to x, the inequality x > (W*Y + sqrt(W^2*Y^2)) / D holds. Since sqrt(W^2*Y^2) = |W*Y| and both W and D are positive, the right-hand side is (W*Y - W*Y) / D = 0 when Y is non-positive, and is positive when Y is positive. In either case, x > 0.

From this argument it is also clear that x is not close to 0, so the divisions by x (or x*R) that appear later in the algorithm give a sound result. Then z is found by dividing by 1-p, which is sound when p is not close to 1 (which, as argued above, it is not). Below we also need the fact that z is not close to 0, which is true because t_RTO is not close to 0 and the factor (1+32*p^2)/(1-p) is at least 1.

Since neither R, j nor x is 0 or close to 0, the calculation of the first argument of the min function is sound. The second argument of the min function is sound because x*R is not close to 0; hence the evaluation of all three arguments of the min function is sound.

Finally, X_Bps is calculated, and since N, p, x, R and z are not close to zero, and p is not close to one (hence 1-p is not close to 0), this final calculation is also sound.




Authors' Addresses

   Michael Welzl
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 85 24 20
   Email: michawe@ifi.uio.no


   Dragana Damjanovic
   University of Innsbruck
   Technikerstr. 21 A
   Innsbruck, A-6020
   Austria

   Phone: +43 512 507 96803
   Email: dragana.damjanovic@uibk.ac.at


   Stein Gjessing
   University of Oslo
   PO Box 1080 Blindern
   Oslo, N-0316
   Norway

   Phone: +47 22 85 24 44
   Email: steing@ifi.uio.no