IP Performance Working Group                                  M. Mathis
Internet-Draft                                               Google, Inc
Intended status: Experimental                               Oct 15, 2012
Expires: April 18, 2013


               Model Based Internet Performance Metrics
             draft-mathis-ippm-model-based-metrics-00.txt

Abstract

We introduce a new class of model based Internet metrics designed to determine if a long path can be expected to meet a predefined end-to-end application performance target by applying a suite of single property tests to successive sections of the long path. In many cases these single property tests are based on existing IPPM metrics, with the addition of success and validity criteria. The sub-path at a time tests are designed to eliminate all known conditions that might prevent the full path from meeting the target performance.

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 18, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Background
   3.  Common Models and Parameters
       3.1.  End-to-end parameters
       3.2.  Per sub-path parameters
       3.3.  Common Calculations for Single Property Tests
       3.4.  Parameter Derating
       3.5.  Single Property Tests Results
   4.  Single Property Tests
       4.1.  Verify the absence of cross traffic
             4.1.1.  Parameter Calculation
             4.1.2.  Cross traffic Measurement
       4.2.  Full Data Rate Loss Rate Tests
             4.2.1.  Loss Rate Measurement
       4.3.  Background Loss Rate Tests
             4.3.1.  Background Loss Rate Measurement
       4.4.  Queue Capacity Test
             4.4.1.  Model Calculation
             4.4.2.  Queue Capacity Measurement
       4.5.  AQM Test
             4.5.1.  Model Calculation
             4.5.2.  AQM Measurement
       4.6.  Reordering Test
             4.6.1.  Model Calculation
             4.6.2.  Reordering Measurement
   5.  Calibration
   6.  References
       6.1.  Normative References
       6.2.  Informative References
   Appendix A.  Model Derivations
   Appendix B.  Old text from an earlier document
   Author's Address
1.  Introduction

We introduce a new class of model based metrics designed to determine if a long path can be expected to meet a predefined end-to-end application performance target by applying a suite of single property tests to successive sections of the long path. In many cases these single property tests are based on existing IPPM metrics, with the addition of success and validity criteria. The sub-path at a time tests are designed to eliminate all known conditions that might prevent the full path from meeting the target performance. The end-to-end target performance must be specified in advance, in order to be able to open-loop the control systems (such as congestion control) that are present in all Internet transport protocols and applications. Since a singleton (see [RFC2330]) is only a pass/fail measurement of a sub-path, these metrics are most useful in composition over large pools of samples, such as across a collection of paths or a time interval [RFC5835].

For Bulk Transport Capacity (BTC) the target performance to be measured is a data rate. TCP's ability to compensate for less than ideal network conditions is fundamentally affected by the RTT and MTU of the end-to-end Internet path that it traverses. Since the minimum RTT and maximum MTU are both fixed properties of the path, they are also taken as parameters. The target values for these three parameters, Data Rate, RTT and MTU, are determined by the application, its intended use and the physical infrastructure that the traffic traverses. They are described in more detail in Section 3, together with the models used to infer the required performance of the underlying Internet fabric.

Traditional end-to-end BTC metrics have proven to be difficult or unsatisfactory for the reasons described in Section 2. Rather than testing the end-to-end path with TCP or some other BTC metric, each sub-path is evaluated using a suite of far simpler and more predictable single property tests, described in Section 4. For BTC the following tests are sufficient: raw data rate, background loss rate, queue burst capacity, reordering extent, onset of congestion/AQM and return path quality. If every sub-path passes all of these tests, then an end-to-end application using any reasonably modern TCP or similar protocol should be able to attain the specified target data rate over the full end-to-end path at the specified RTT and MTU.

There exists the potential that model based metrics might fail, in the sense that every sub-path of an end-to-end path passes every single property test and yet an application might still fail to attain its performance target.
If so, then a traditional BTC needs to be used to validate the tests for each sub-path, as described in Section 5.

Future text (or, more likely, a future document) will describe model based metrics for real time traffic. The salient point will be that concurrently meeting the goals of both real time and throughput maximizing traffic implicitly requires some form of traffic segregation, such that the two traffic classes are not placed in the same queue. A technique as simple as SFQ [SFQ] might be a sufficient alternative to full QoS.

TODO: Add discussion of protocol overhead: MSS vs IP MTU vs link MTU.

2.  Background

(Fragments of earlier text)

The holy grail of IPPM has been BTC measurement, but it has proven to be a very hard problem for a number of reasons:

   TCP is a control system - everything affects performance, including components that are explicitly not part of the test.

   Congestion control is an equilibrium process: transport protocols change the network (raising the loss probability and/or the RTT) to conform to their own behavior.

   TCP's ability to compensate for network flaws is directly proportional to the number of round trips per second (i.e. inversely proportional to the RTT). As a consequence, a flawed link that passes a local test is likely to completely fail when the path is extended by a perfect network to some larger RTT.

   TCP has a meta Heisenberg problem - measurement and cross traffic interact in unknown and ill defined ways. The situation is actually worse than the traditional physics problem, where one can at least estimate the relative masses of the measuring and measured particles. For network measurement one cannot in general determine the relative "masses" of the measurement traffic and the cross traffic, so one cannot even gauge the relative magnitude of their effects on each other.

The new approach is to "open loop" congestion control: defeat it, typically by throttling TCP to a lower rate, such that it does not respond to network conditions. In this mode the measurement software explicitly controls TCP's state variables (e.g. cwnd) to create controlled traffic patterns, which are manipulated to measure the network. Models are used to determine the actual test parameters (loss rate, etc.) from the target parameters.

The basic method is to use models to estimate the simple network properties required to sustain a given transport flow (or set of flows), and to use a suite of simpler metrics to confirm that the network meets the required properties. For example, a network can sustain a bulk TCP flow of a given data rate, MTU and RTT when four (and probably more) conditions are met:

   The raw link rate is higher than the target data rate.

   The raw packet loss rate is lower than required by a suitable TCP performance model.

   There is sufficient buffering at any bottleneck to smooth bursts.

   When the link is overfilled (congested), the onset of packet loss is progressive.

These conditions can all be verified with simple tests, using model parameters and acceptance thresholds derived from the target data rate, MTU and RTT. Note that this procedure is not invertible: a singleton measurement is a pass/fail evaluation of a given path or sub-path at a given performance. Measurements to confirm that a link passes at one particular performance may not generally be useful to predict whether the link will pass at a different performance.
Although they are not invertible, these metrics do have several other valuable properties, such as natural ways to define several different composition metrics.

3.  Common Models and Parameters

Transport performance models are used to derive the test parameters for each single property test from the end-to-end target parameters and additional ancillary parameters. It is envisioned that the modeling phase (to compute the test parameters) and the testing phase will be decoupled.

This section covers common derived parameters used by multiple single property tests. For some tests, additional modeling is described with the tests themselves. Since some aspects of the models are very conservative, the modeling framework permits some latitude in derating test parameters, as described in Section 3.4. For certain sub-paths (e.g. common types of access links) it would be appropriate for the single property test parameters to be documented as a "measurement profile", together with the modeling assumptions and derating factors described in Section 3.3 and Section 3.4.

3.1.  End-to-end parameters

These parameters are determined by the needs of the application or the ultimate end user and by the end-to-end Internet path.

   Target Data Rate: The application or ultimate user's performance goal (in aggregate across all connections).

   Permitted Number of Connections: The target rate can be more easily attained by dividing the traffic across more than one connection. In general the number of concurrent connections is determined by the application; however, see the comments below on multiple connections.

   Target RTT (Round Trip Time): For fundamental reasons a long path makes it more difficult for TCP or any other transport protocol to meet the target rate. The target RTT must be representative of the actual applications expected to use the network. This parameter may be subject to a future convention (e.g. continental scale paths might be assumed to have some fixed RTT, such as 100 ms) or may alternatively be a property of an ISP's topology (e.g. an ISP with richer and better placed peering may actually have lower RTTs for typical users).

   Target MTU (Maximum Transmission Unit): Assume 1500 Bytes unless otherwise specified. If some sub-path forces a smaller MTU, then all sub-paths must be tested with the same smaller MTU.

The use of multiple connections has been very controversial since the beginning of the World-Wide-Web [first complaint]. Modern browsers open many connections [BScope]. Experts in the IETF transport area have frequently spoken against this practice [long list]. It is not inappropriate to assume some small number of concurrent connections (e.g. 4 or 6) to compensate for limitations in TCP. However, choosing too large a number risks being taken as a signal by the web browser community that this practice has been embraced by the Internet community. It may not be desirable to send such a signal.

The following optional parameters apply when testing generalized end-to-end paths that include sub-paths with known specific types of behaviors that are not well represented by simple queueing models:

   Bottleneck link clock rate: This applies to links that use virtual queues or other techniques to police or shape user traffic at rates below the full link rate. The bottleneck link clock rate should be representative of queue drain times for short bursts of packets on an otherwise unloaded link.
   Channel hold time: For channels that have relatively expensive channel arbitration algorithms, this is the typical (maximum?) time that data and/or ACKs are held pending acquisition of the channel. Under heavy load, the RTT may be inflated by this parameter, unless it is already built into the target RTT.

   Preload traffic volume: If the user's traffic is shaped on the basis of average traffic volume, this is the volume necessary to invoke "heavy hitter" policies.

   Unloaded traffic volume: If the user's traffic is shaped on the basis of average traffic volume, this is the maximum traffic volume that a test can use and stay within "light user" policies.

Note that on a ConEx enabled network [ConEx], the word "traffic" in the last two items should be replaced by "congestion", i.e. "preload congestion volume" and "unloaded congestion volume".

3.2.  Per sub-path parameters

Some single property tests also need parameters of the sub-path:

   sub-path RTT: RTT of the sub-path under test.

   sub-path link clock rate: If different from the bottleneck link clock rate.

3.3.  Common Calculations for Single Property Tests

The most important derived parameter is target_pipe_size (in packets), which is the number of packets needed to exactly meet the target rate, with no cross traffic, for the specified RTT and MTU. It is given by:

   target_pipe_size = target_rate * target_RTT / target_MTU

If the transport protocol (e.g. TCP) average window size is smaller than this, the link will be underfilled. If target_data_rate is equal to the bottleneck link_data_rate, then target_pipe_size also predicts the onset of queueing: if the transport protocol's average window size is larger than target_pipe_size, the excess packets will form a standing queue at the bottleneck.

If the transport protocol is using Reno congestion control [RFC5681], then there must be target_pipe_size round trips between losses. Otherwise the multiplicative window reduction triggered by a loss would cause the network to be underfilled. Following [MSMO97], we derive that losses must be no more frequent than 1 in (3/2)(target_pipe_size^2) packets. This provides the reference value for target_run_length, which is typically the number of packets that must be delivered between loss episodes in the tests below:

   reference_target_run_length = (3/2)(target_pipe_size^2)

Note that this calculation is based on a number of assumptions that may not apply. Appendix A discusses these assumptions and provides some alternative models. The actual method for computing target_run_length MUST be published along with the rationale for the underlying assumptions and the ratio of the chosen target_run_length to reference_target_run_length.

Although this document gives a lot of latitude for calculating target_run_length, people specifying profiles for suites of single property tests need to consider the effect of their choices on the ongoing conversation and tussle about the relevance of "TCP friendliness" as an appropriate model for capacity allocation. Choosing a target_run_length that is substantially smaller than reference_target_run_length is equivalent to saying that it is appropriate for the research community to abandon "TCP friendliness" as a fairness model and to develop more aggressive Internet transport protocols, and for applications to continue (or even increase) the number of connections that they open.
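For illustration only, the common calculation can be expressed as a short program (Python; the units assumed here, bytes and seconds, and the function name are conventions of this sketch, not part of this document):

   # Illustrative sketch of the common calculations above.
   # Assumed units: target_rate in bytes/second, target_RTT in
   # seconds, target_MTU in bytes.

   def common_parameters(target_rate, target_RTT, target_MTU):
       # Window (in packets) needed to exactly meet the target rate
       # with no cross traffic at the specified RTT and MTU.
       target_pipe_size = target_rate * target_RTT / target_MTU
       # Reno must go target_pipe_size round trips between losses
       # [MSMO97], giving the reference run length in packets.
       reference_target_run_length = 1.5 * target_pipe_size ** 2
       return target_pipe_size, reference_target_run_length

For example, common_parameters(1e6, 0.1, 1500) yields a pipe size of about 67 packets and a reference run length of about 6.7e3 packets.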
The calculations for individual test parameters are presented with each single property test. In general these calculations are permitted some derating, as described in Section 3.4.

3.4.  Parameter Derating

Since some aspects of the models are very conservative, the modeling framework permits some latitude in derating specific test parameters, as indicated in Section 4. For example, classical performance models suggest that in order to be sure that a single TCP stream can fill a link, it needs a full bandwidth-delay product worth of buffering at the bottleneck [QueueSize]. In real networks with real applications this is sometimes overly conservative. Rather than trying to formalize more complicated models, we permit some test parameters to be relaxed as long as they meet some additional procedural constraints:

   The method used to compute and justify the derated metrics is published in such a way that it becomes a matter of public record.

   The calibration procedures described in Section 5 are used to demonstrate the feasibility of meeting the performance targets with the derated test parameters.

   The calibration process itself is documented in such a way that other researchers can duplicate the experiments and validate the results.

Note that some single property test parameters are not permitted to be derated.

3.5.  Single Property Tests Results

TBD. Define: Pass, Fail and Inconclusive test results. The Inconclusive outcome is needed to address the case where a test failed to attain the specified test conditions. This is important to the extent that the tests themselves have built-in control systems which might interfere with some aspect of the test. It is required, for example, when using TCP for testing.

4.  Single Property Tests

The single property tests confirm that each sub-path can sustain the normal traffic patterns caused by TCP running at the specified target performance. Specifically they confirm that each sub-path has sufficient raw capacity (e.g. sufficient data rate); has a background loss rate low enough that mandatory congestion control stays out of the way; has large enough queue space to absorb TCP's normal bursts; does not cause unreasonable packet reordering; and has progressive AQM to appropriately invoke congestion control. Appropriately invoking congestion control requires that packet losses or ECN marks start progressively, before TCP creates an excessive sustained queue [BufferBloat] or excessively bursty losses. The return path must also be subject to a similar suite of tests, although potentially with different test parameters.

Note that many of the sub-path tests resemble metrics that have already been defined in the IPPM context, with the addition of criteria for passing or failing the test. The models used to derive the test parameters make specific assumptions about network conditions; a test is deemed "inconclusive" (as opposed to failing) if the tester does not meet the underlying assumptions. For example, a loss rate test at a specified data rate is inconclusive if the tester fails to send data at the specified rate for some reason. This concept of an inconclusive test is necessary to build tests out of protocols or technologies that themselves have built-in or implicit control systems.

Some single property tests can be combined, since their parameters are not mutually exclusive.
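Since the result definitions in Section 3.5 are still TBD, the following is only a sketch of how the three-way outcome discussed above might be represented (Python; all names are illustrative, not part of any specification):

   # Minimal sketch of the three-way test outcome (illustrative).
   from enum import Enum

   class Result(Enum):
       PASS = "pass"                  # conditions attained, criteria met
       FAIL = "fail"                  # conditions attained, criteria not met
       INCONCLUSIVE = "inconclusive"  # test conditions were not attained

   def evaluate(attained_conditions, met_criteria):
       # A test may only pass or fail if the tester actually attained
       # the specified test conditions (e.g. sent at the full target
       # rate); otherwise the result carries no verdict on the path.
       if not attained_conditions:
           return Result.INCONCLUSIVE
       return Result.PASS if met_criteria else Result.FAIL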
4.1.  Verify the absence of cross traffic

Use passive packet monitoring or SNMP monitoring to verify that the traffic volume on the sub-path agrees with the traffic generated by each test. Ideally this should be performed before, during and after each test.

The goal is to provide quality assurance on the overall measurement process, and specifically to detect the following measurement failure: a user observes unexpectedly poor application performance, while the ISP observes that the access link is running at the rated capacity; both fail to observe that the user's computer has been infected by a virus which is spewing traffic as fast as it can.

Parameters:

   Maximum Cross Traffic Data Rate: The amount of excess traffic permitted. Note that this might be different for different tests.

   Maximum Data Rate Underage: The permitted amount by which the traffic can be less than predicted for the current test. Normally this would just be a statement of the maximum permitted measurement error; however, it might also detect cases where the passive and active tests are misaligned, i.e. testing different subscriber lines. This is important because the vantage points are so different: in-band active measurement vs. out-of-band passive measurement.

4.1.1.  Parameter Calculation

TBA

4.1.2.  Cross traffic Measurement

One possible method is an adaptation of D. Agarwal et al., "An Infrastructure for Passive Network Monitoring of Application Data Streams" (www-didc.lbl.gov/papers/SCNM-PAM03.pdf). Use the same technique as that paper to trigger the capture of SNMP statistics for the link.

4.2.  Full Data Rate Loss Rate Tests

We propose two versions of the loss rate test. One, performed at the full data rate, is intrusive and recommended for infrequent testing, such as when a service is first turned up or as part of an auditing process. Note that this test also implicitly confirms that the sub-path has sufficient capacity to carry the target_data_rate. The second, the background loss rate test described below, is designed for ongoing monitoring for changes in sub-path quality.

Parameters:

   Run Length: Same as target_run_length.

   Data Rate: Same as target_data_rate.

Note that these parameters MUST NOT be derated. If the default parameters are too stringent, use an alternate model for target_run_length as described in Appendix A.

4.2.1.  Loss Rate Measurement

Data is sent at the specified data rate. The receiver accumulates the total data delivered and the packets lost [and ECN marks, which are nominally treated as losses by conforming transport protocols]. The observed average_run_length is computed as total_data_delivered divided by the total number of losses. A [TBD] statistical test is applied to determine whether the average_run_length is larger than target_run_length. TODO: add language about monitoring cross traffic.

The test is deemed to have passed only if the observed data rate matches the target_data_rate and it is statistically significant that the average_run_length is larger than target_run_length. It is deemed inconclusive if: the statistical test is inconclusive; there is too much background load; or the target_data_rate could not be attained.
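The statistical test above is still [TBD]; purely as an illustration, a one-sided normal approximation at roughly 95% confidence might be applied as follows (Python; the method, the threshold, and counting both quantities in packets are assumptions of this sketch, not of this document):

   # Illustrative evaluation of a loss rate measurement.
   import math

   def loss_rate_test(total_data_delivered, total_losses,
                      target_run_length):
       # Both counts are in packets for this sketch.
       n = total_data_delivered + total_losses   # packets sent
       average_run_length = (float("inf") if total_losses == 0
                             else total_data_delivered / total_losses)
       # Null hypothesis: the true loss probability is
       # 1/target_run_length.
       p0 = 1.0 / target_run_length
       z = (total_losses - n * p0) / math.sqrt(n * p0 * (1.0 - p0))
       # Significantly fewer losses than the null hypothesis predicts
       # implies average_run_length is larger than target_run_length.
       return average_run_length, z < -1.645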
4.3.  Background Loss Rate Tests

The background loss rate test is designed for ongoing monitoring for changes in sub-path quality. It should be used in conjunction with the full rate test above.

Parameters:

   Run Length: Same as target_run_length.

   Data Rate: Some small fraction of target_data_rate, such as 1%.

4.3.1.  Background Loss Rate Measurement

The receiver accumulates the total data delivered and the packet losses [and ECN marks, which are nominally treated as losses by conforming transport protocols]. The observed average_run_length is computed as total_data_delivered divided by the total number of losses. A [TBD] statistical test is applied to determine whether the average_run_length is larger than target_run_length. TODO: add language about monitoring cross traffic.

The test is deemed to have passed if it is statistically significant that the average_run_length is larger than target_run_length. It is deemed inconclusive if there is too much background traffic or the statistical test is inconclusive.

4.4.  Queue Capacity Test

Parameters: TBA

4.4.1.  Model Calculation

TBA

4.4.2.  Queue Capacity Measurement

TBA

4.5.  AQM Test

Parameters: TBA

4.5.1.  Model Calculation

TBA

4.5.2.  AQM Measurement

TBA

4.6.  Reordering Test

Parameters: TBA

4.6.1.  Model Calculation

TBA

4.6.2.  Reordering Measurement

TBA

5.  Calibration

If derated metrics are used, or when something goes wrong, the results must be calibrated against a traditional BTC......

6.  References

6.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2026]  Bradner, S., "The Internet Standards Process -- Revision 3", BCP 9, RFC 2026, October 1996.

6.2.  Informative References

   [RFC2330]  Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, "Framework for IP Performance Metrics", RFC 2330, May 1998.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion Control", RFC 5681, September 2009.

   [RFC5835]  Morton, A. and S. Van den Berghe, "Framework for Metric Composition", RFC 5835, April 2010.

   [MSMO97]   Mathis, M., Semke, J., Mahdavi, J., and T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communications Review, volume 27, number 3, July 1997.

   [BScope]   Browserscope, "Browserscope Network tests", Sept 2012. See the Max Connections column.

Appendix A.  Model Derivations

This appendix describes several different ways to calculate target_run_length and the implications of the chosen calculation.

Rederive MSMO97 under two different assumptions: target_rate = link_rate and target_rate < 2 * link_rate. Show the equivalent derivation for CUBIC. Commentary on the consequences of the choice.

Appendix B.  Old text from an earlier document

To be moved, removed or absorbed.

Step 0: select target end-to-end parameters: a target rate and target RTT. The primary test will be to confirm that the link quality is sufficient to meet the specified target rate for the link under test, when extended to the target RTT by an ideal network. The target rate must be below the actual link rate, and nominally the target RTT would be longer than the link RTT. There should probably be a convention for the relationship between link and target rates (e.g. 85%).

For example, on a 10 Mb/s link, the target rate might be 1 MByte/s, at an RTT of 100 ms (a typical continental scale path).
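Anticipating step 1 below, the model numbers for this example can be checked with a few lines of arithmetic (a sketch under the [MSMO97] model; the units, bytes and seconds, are an assumption of this sketch):

   # Worked example: 1 MByte/s target rate at 100 ms RTT, 1500 Byte MTU.
   target_rate = 1.0e6   # bytes/s
   target_RTT  = 0.100   # seconds
   target_MTU  = 1500    # bytes

   pipe = target_rate * target_RTT / target_MTU  # ~67 packets
   required_run_length = 1.5 * pipe ** 2         # ~6.7e3 packets
   loss_rate = 1.0 / required_run_length         # ~1.5e-4

   # i.e. on the order of 1 loss in 1E4 packets, consistent with the
   # model loss rate of about 0.01% quoted in step 1 below.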
Step 1: On the basis of the target rate and RTT and your favorite TCP performance model, compute the "required run length", which is the required number of consecutive non-losses between loss episodes. The run length resembles one over the loss probability, if clustered losses only count as a single event. Also select a "test duration" and a "test rate". The latter would nominally be the same as the target rate, but might be different in some situations. There must be documentation connecting the test rate, duration and required run length to the target rate and RTT selected in step 0.

Continuing the above example, and assuming a 1500 Byte MTU: the calculated model loss rate for a single TCP stream is about 0.01% (1 loss in 1E4 packets).

Step 2: the actual measurement proceeds as follows. Start an unconstrained bulk data flow using any modern TCP (with large buffers and/or autotuning). During the first interval (no rate limits) observe the slowstart (e.g. with tcpdump) and measure: peak burst size; link clock rate (delivery rate for each round); peak data rate for the fastest single RTT interval; and the fraction of segments lost at the end of slowstart. After the flow has fully recovered from the slowstart (details not important), throttle the flow down to the test rate (by clamping cwnd or by application pacing at the sender or receiver). While clamped to the test rate, observe the losses (run length) for the chosen test duration.

The link passes the test if the slowstart ends with less than approximately 50% losses and no timeouts, the peak rate is at least the target rate, and the measured run length is better than the required run length. There will also need to be some ancillary metrics, for example to discard tests where the receiver closes the window, invalidating the slowstart test. [This needs to be separated into multiple subtests.]

Optional step 3: In some cases it might make sense to compute an "extrapolated rate", which is the minimum of the observed peak rate and the rate computed from the specified target RTT and the observed run length by using a suitable TCP performance model (see the sketch below). The extrapolated rate should be annotated to indicate whether it was run length or peak rate limited, since these have different predictive values.

Other issues:

If the link RTT is not substantially smaller than the target RTT and the actual run length is close to the required run length, a standards compliant TCP implementation might not be effective at accurately controlling the data rate. To be independent of the details of the TCP implementation, failing to control the rate has to be treated as a spoiled measurement, not an infrastructure failure. This can be overcome by "stiffening" TCP with a non-standard congestion control algorithm. For example, if the rate is controlled by clamping cwnd, then use "relentless TCP" style reductions on loss and lock ssthresh to the cwnd clamp. Alternatively, implement an explicit rate controller for TCP. In either case the test must be abandoned (aborted) if the measured run length is substantially below the target run length.

If the test is run "in situ" in a production environment, there also need to be baseline tests using alternate paths to confirm that there are no bottlenecks or congested links between the test end points and the link under test.
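Returning to optional step 3 above, the following is a sketch of the extrapolated rate computation, assuming the [MSMO97] model with loss probability p = 1/run_length (Python; the function name and units are illustrative, not part of this document):

   # Extrapolated rate: the minimum of the observed peak rate and the
   # model rate at the target RTT, annotated with which limit applied.
   import math

   def extrapolated_rate(peak_rate, run_length, target_RTT, target_MTU):
       # [MSMO97]: rate = (MTU/RTT) * sqrt(3/2) / sqrt(p),
       # with p = 1/run_length.
       model_rate = (target_MTU / target_RTT) * math.sqrt(1.5 * run_length)
       if model_rate < peak_rate:
           return model_rate, "run length limited"
       return peak_rate, "peak rate limited"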
It might make sense to run multiple tests with different parameters, for example infrequent tests with the test rate equal to the target rate, and more frequent, less disruptive tests with the same target rate but a test rate equal to 1% of the target rate. To observe the required run length, the low rate test would take 100 times longer to run.

Returning to the example: a full rate test would entail sending 690 pps (1 MByte/s) for several tens of seconds (e.g. 50k packets), and observing that the total loss rate is below 1:1e4. A less disruptive test might send at 6.9 pps for 100 times longer, and observe the same run length criterion.

Author's Address

   Matt Mathis
   Google, Inc
   1600 Amphitheatre Parkway
   Mountain View, California 94043
   USA

   Email: mattmathis@google.com