Network Working Group                                          J. Zhang
Internet-Draft                         Cisco Systems, Inc. and Cornell
Intended status: Informational                               University
Expires: September 6, 2007                                    A. Charny
                                                             V. Liatsos
                                                         F. Le Faucheur
                                                    Cisco Systems, Inc.
                                                          March 5, 2007


 Performance Evaluation of CL-PHB Admission and Pre-emption Algorithms
              draft-zhang-pcn-performance-evaluation-01.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 6, 2007.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   The Pre-Congestion Notification (PCN) approach of
   [I-D.briscoe-tsvwg-cl-architecture] proposes using Admission Control
   to limit the amount of real-time PCN traffic to a configured level
   during normal operating conditions, and using Preemption to tear
   down some of the flows to bring the PCN traffic level down to a
   desired amount during unexpected events such as network failures,
   with the goal of maintaining the QoS assurances of the remaining
   flows.  Preliminary performance evaluation results on example
   Admission and Preemption mechanisms were presented in
   [I-D.briscoe-tsvwg-cl-phb].  This draft presents the results of a
   follow-up simulation study and identifies a number of open issues.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction
     1.1.  Changes from the previous version
     1.2.  Terminology
   2.  Simulation Setup and Environment
     2.1.  Network Models
     2.2.  Call Signaling Model
     2.3.  Traffic Models
       2.3.1.  Voice CBR
       2.3.2.  VBR Voice
       2.3.3.  Synthetic "Video" - High Peak-to-Mean Ratio VBR
               Traffic (SVD)
       2.3.4.  Real Video Traces (VTR)
       2.3.5.  Randomization of Base Traffic Models
     2.4.  Simulation Environment
   3.  Admission Control
     3.1.  Parameter Settings
       3.1.1.  Virtual queue settings
       3.1.2.  Egress measurements
     3.2.  Basic Bottleneck Aggregation Results
     3.3.  Sensitivity to Call Arrival Assumptions
     3.4.  Sensitivity to Marking Parameters at the Bottleneck
       3.4.1.  Ramp vs Step Marking
       3.4.2.  Sensitivity to Virtual Queue Marking Thresholds
     3.5.  Sensitivity to RTT
     3.6.  Sensitivity to EWMA weight and CLE
     3.7.  Effect of Ingress-Egress Aggregation
     3.8.  Effect of Multiple Bottlenecks
   4.  Pre-Emption
     4.1.  Pre-emption Model and Key Parameters
     4.2.  Effect of RTT Difference
     4.3.  Ingress-Egress Aggregation Experiments
       4.3.1.  Motivation for the Investigation
       4.3.2.  Detailed results
       4.3.3.  Discussion of the Ingress Aggregation Results
     4.4.  Multiple Bottlenecks Experiments
       4.4.1.  Motivation for the Investigation
       4.4.2.  Detailed Results
   5.  Summary of Results
     5.1.  Summary of Admission Control Results
     5.2.  Summary and Discussion of Pre-emption Results
   6.  Future work
   7.  IANA Considerations
   8.  Security Considerations
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

The Pre-Congestion Notification (PCN) approach of
[I-D.briscoe-tsvwg-cl-architecture] proposes using Admission Control
to limit the amount of real-time PCN traffic to a configured level
during normal operating conditions, and using Preemption to tear down
some of the flows to bring the PCN traffic level down to a desired
amount during unexpected events such as network failures, with the
goal of maintaining the QoS assurances of the remaining flows.  In
[I-D.briscoe-tsvwg-cl-architecture], Admission and Preemption use two
different markings and two different metering mechanisms in the
internal nodes of the PCN region.

An initial simulation study was reported in
[I-D.briscoe-tsvwg-cl-phb], where it was shown that both the
Admission and the Preemption mechanisms discussed there have
reasonable performance in a limited set of experiments.  This draft
reports the next installment of the simulation results.  For
completeness and convenience of exposition, most of the results
earlier presented in [I-D.briscoe-tsvwg-cl-phb] have been moved into
this draft.  The new results presented in the current draft further
confirm that the Admission and Preemption algorithms of
[I-D.briscoe-tsvwg-cl-phb] perform well under a range of operating
conditions and are relatively insensitive to parameter variations
around a chosen operating range.
Perhaps the most interesting (and quite unexpected) conclusion that
can be drawn from these results is that both the Admission and the
Preemption algorithms appear to be less sensitive to low per
ingress-egress-pair aggregation than one might fear.  This result is
quite encouraging: while it seems reasonable to assume sufficient
bottleneck link aggregation, it is much less clear whether one can
safely assume high levels of aggregation on a per ingress-egress-pair
basis.  However, more work is necessary to evaluate whether this
moderate sensitivity to ingress-egress aggregation can be safely
relied upon under a broader range of conditions.  Thus, low levels of
ingress-egress aggregation remain a potential concern, especially for
the Preemption mechanism.  More discussion of this is presented in
Section 4.  Other conclusions are presented in Section 5.

Section 2 describes the simulation environment and models, Admission
and Preemption simulation results are presented in Sections 3 and 4
respectively, and Section 5 summarizes the results of the simulations
so far and lists areas for further study.

1.1.  Changes from the previous version

   o  Added multi-bottleneck topology simulations.

   o  Added experiments with real video traces.

   o  Miscellaneous editorial changes and clarifications based on
      feedback to the previous version.

1.2.  Terminology

   o  Pre-Congestion Notification (PCN): two algorithms that determine
      when a PCN-enabled router Admission Marks and Preemption Marks a
      packet, depending on the traffic level.

   o  Admission Marking condition - the traffic level is such that the
      router Admission Marks packets.  The router provides an "early
      warning" that the load is nearing the engineered admission
      control capacity, before there is any significant build-up in
      the queue of packets belonging to the specified real-time
      service class.

   o  Preemption Marking condition - the traffic level is such that
      the router Preemption Marks packets.  The router warns
      explicitly that Preemption may be needed.

   o  Configured admission rate - the reference rate used by the
      Admission marking algorithm in a PCN-enabled router.

   o  Configured preemption rate - the reference rate used by the
      Preemption marking algorithm in a PCN-enabled router.

   o  CLE - the Congestion Level Estimate, computed by the egress node
      as the fraction of admission-marked packets it receives.

2.  Simulation Setup and Environment

2.1.  Network Models

We use three types of topologies, described in this section.

In the simplest topology, shown in Fig. 2.1, the network is modelled
as a single link between an ingress node A and an egress node B, with
all flows sharing that link.

                               A-----B

    Fig. 2.1  Simulated Single Link Network (Referred to as Single
              Link Topology)

A subset of simulations uses a network structured like the one shown
in Figure 2.2: a set of ingresses (A, B, C) is connected to an
interior node (D) by links of different propagation delays, and this
node in turn is connected to the egress (F).  In this topology,
different sets of flows between each ingress and the egress converge
on the single link D-F, where the Pre-Congestion Notification
algorithm is enabled.  The ingress link capacities are assumed to be
sufficiently large that neither the Admission nor the Preemption
mechanism has any effect on them.
All links are assigned a propagation delay.  The point of congestion
(the link D-F connecting the interior node to the egress node) is
modelled with a 1ms or 10ms propagation delay.  In our simulations,
the number of ingress nodes in the network ranges from 2 to 1800,
with each ingress connected to the interior node by a link with a
propagation delay in the range of 1ms to 100ms.  In some experiments
all ingress links have the same propagation delay, and in others the
delays of different ingresses vary in the range from 1 to 100 ms.

                          A
                           \
                          B - D - F
                           /
                          C

    Fig. 2.2  Simulated Multi-Link Network (Referred to as RTT
              Topology)

Another type of network of interest is the multi-bottleneck topology
that we call the Parking Lot (PLT).  The simplest PLT, with 2
bottlenecks, is illustrated in Fig. 2.3(a).  An example traffic
matrix on this topology is as follows:

   o  an aggregate of "2-hop" flows entering the network at A and
      leaving at C (via the two links A-B and B-C)

   o  an aggregate of "1-hop" flows entering the network at D and
      leaving at E (via A-B)

   o  an aggregate of "1-hop" flows entering the network at E and
      leaving at F (via B-C)

In the 2-hop PLT of Fig. 2.3(a) the points of congestion are the
links A-B and B-C; the capacity of all other links is not limiting.
This topology and traffic matrix model a network where some flows
cross multiple bottlenecks, each with a substantial amount of
cross-traffic.

    A--B--C      A--B--C--D      A--B--C--D--E--F
    |  |  |      |  |  |  |      |  |  |  |  |  |
    |  |  |      |  |  |  |      |  |  |  |  |  |
    D  E  F      E  F  G  H      G  H  I  J  K  L

      (a)            (b)                (c)

    Figure 2.3: Simulated Multiple-bottleneck (Parking Lot) Topologies

We also experiment with larger PLT topologies, with 3 bottlenecks
(Fig. 2.3(b)) and 5 bottlenecks (Fig. 2.3(c)).  In all cases, we
simulated one ingress-egress pair that carries the aggregate of
"long" flows traversing all N bottlenecks (where N is the number of
bottleneck links in the PLT topology, shown as "horizontal" links in
Fig. 2.3), and N ingress-egress pairs that carry flows traversing a
single bottleneck link and exiting at the next "hop".  In all cases,
the capacities of all "vertical" links are non-limiting, so neither
the Preemption nor the Admission mechanism is ever triggered on these
links.  Propagation delays for all links in all PLT topologies are
set to 1ms.  Due to time limitations, other possible traffic matrices
(e.g., some flows traversing a subset of several bottleneck links in
Fig. 2.3) have not yet been considered and remain an area for future
investigation.

Our simulations concentrated primarily on 'bottleneck' link
capacities with a sufficient level of bottleneck aggregation - above
10 Mbps for voice and 622 Mbps for "video", up to 2.4 Gbps.  However,
we also investigated slower 'bottleneck' links, down to 512 Kbps, in
some experiments.

2.2.  Call Signaling Model

In the simulation model of admission control, a call request arrives
at the ingress, which immediately sends a message to the egress.  The
message arrives at the egress after the propagation time plus the
link processing time (but with no queuing delay).  When the egress
receives this message, it immediately responds to the ingress with
the current Congestion Level Estimate (CLE).  If the CLE is below the
specified CLE-threshold, the call is admitted; otherwise it is
rejected.
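In code form, the decision rule amounts to the following minimal,
self-contained Python sketch (ours, not the authors' ECLiPSe
simulator; the class and function names, and the stub CLE source, are
illustrative assumptions):

   CLE_THRESHOLD = 0.5       # CLE-threshold; see Sections 3.1.2, 3.6

   class Egress:
       """Stub egress holding the current Congestion Level Estimate;
       how the CLE is computed is described in Section 3.1.2."""
       def __init__(self):
           self.cle = 0.0

       def current_cle(self):
           return self.cle

   def handle_call_request(egress, one_way_delay_s, now_s=0.0):
       """Probe the egress; admit iff its CLE is below the threshold.

       Returns (admitted, decision_time).  The decision reaches the
       ingress one round trip (2 * one-way delay) after the request,
       since the model adds no queuing delay."""
       cle = egress.current_cle()       # sampled when the probe arrives
       decision_time = now_s + 2 * one_way_delay_s
       return cle < CLE_THRESHOLD, decision_time

   # Example: an egress reporting CLE = 0.6 causes a rejection.
   eg = Egress()
   eg.cle = 0.6
   print(handle_call_request(eg, one_way_delay_s=0.010))  # (False, 0.02)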
For Preemption, once the ingress node of a PCN region decides to
preempt a call, that call is preempted immediately and sends no more
packets from that time on.

The life of a call outside the PCN domain is not modelled.
Propagation delay from the source to the ingress and from the
destination to the egress is assumed negligible and is not modelled.

2.3.  Traffic Models

We simulated four models of real-time traffic: two voice models and
two video models.  The voice models were CBR voice and on-off traffic
approximating voice with silence compression.  For video, we
simulated on-off traffic with peak and mean rates corresponding to an
MPEG-2 video stream (we term this Synthetic Video (SVD)), and a real
video trace (VTR).  The flow duration was chosen to be exponentially
distributed with mean 1 min, regardless of the traffic type.  In most
of the experiments flows arrived according to a Poisson process, with
the mean arrival rate chosen to achieve a desired amount of overload
over the configured admission limit in each experiment.  Overloads in
the range 1x to 5x, as well as an underload of 0.95x, have been
investigated.  For on-off traffic, on and off periods were
exponentially distributed with the specified means.  Traffic
parameters for each flow are summarized below.

2.3.1.  Voice CBR

This traffic is intended to closely approximate a CBR voice codec,
and is referred to in the simulation study as "CBR".  Its parameters
are:

   o  Average rate 64 Kbps

   o  Packet length 160 bytes

   o  Packet inter-arrival time 20ms

2.3.2.  VBR Voice

This traffic is intended to approximate voice with silence
compression.  It is referred to in the simulation study as "VBR", and
uses the following parameters:

   o  Packet length 160 bytes

   o  Long-term average rate 21.76 Kbps

   o  On period mean duration 340ms; during the on period traffic is
      sent with the CBR voice parameters described above

   o  Off period mean duration 660ms; no traffic is sent during the
      off period

2.3.3.  Synthetic "Video" - High Peak-to-Mean Ratio VBR Traffic (SVD)

This model is on-off traffic with a video-like peak-to-mean ratio and
a mean rate approximating that of an MPEG-2 video stream.  No attempt
is made to simulate any other aspects of a video stream; this model
is merely that of on-off traffic.  Although there is no claim that
this model adequately represents the behavior of actual video traffic
under the algorithms in question, intuitively this model should be
more challenging for a measurement-based algorithm than actual MPEG
video.  As a result, "good" or "reasonable" performance on this
traffic model indicates that MPEG traffic should perform at least as
well.  We term this type of traffic SVD, for "Synthetic Video".
Parameters used for this traffic model are:

   o  Long-term average rate 4 Mbps

   o  On period mean duration 340ms; during the on period packets are
      sent at 12 Mbps

   o  1500-byte packets, packet inter-arrival time 1ms

   o  Off period mean duration 660ms
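For concreteness, the following self-contained Python sketch (ours,
purely illustrative) generates packet send times for such an
exponential on-off source.  With the parameters shown it approximates
the SVD model above; with the values of Section 2.3.2 it yields the
VBR voice model instead:

   import random

   def on_off_packet_times(duration_s, mean_on_s, mean_off_s,
                           pkt_bytes, rate_on_bps, seed=None):
       """Yield packet send times for an exponential on-off source.

       During an on period, fixed-size packets are sent back to back
       at rate_on_bps; nothing is sent during off periods.  This is a
       sketch of the base models only; the randomization of Section
       2.3.5 is not applied here."""
       rng = random.Random(seed)
       gap = pkt_bytes * 8.0 / rate_on_bps   # inter-packet time when on
       t = 0.0
       while t < duration_s:
           on_end = t + rng.expovariate(1.0 / mean_on_s)
           while t < min(on_end, duration_s):
               yield t
               t += gap
           t = on_end + rng.expovariate(1.0 / mean_off_s)

   # SVD-like source: 12 Mbps on-rate, 340/660 ms on/off, 1500-byte
   # packets (so the on-period packet gap is 1ms, as in the text).
   times = list(on_off_packet_times(60.0, 0.340, 0.660, 1500, 12e6,
                                    seed=1))
   avg_bps = len(times) * 1500 * 8 / 60.0
   print("mean rate ~ %.2f Mbps" % (avg_bps / 1e6))   # roughly 4 Mbps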
2.3.4.  Real Video Traces (VTR)

We used a publicly available library of frame-size traces of long
MPEG-4 and H.263 encoded videos obtained from
http://www.tkn.tu-berlin.de/research/trace/trace.html (courtesy of
the Telecommunication Networks Group of the Technical University of
Berlin).  Each trace is roughly 60 minutes long and consists of a
list of per-frame records.  Among the 160 available traces, we picked
the two with the highest average rate (averaged over the 60-minute
trace length); the two also have similar average rates.  The trace
file used in the simulation is the concatenation of the two.

Since the duration of a flow is much smaller than the length of the
trace, we needed to check how the expected rate of a flow relates to
the trace's long-term average.  To do so, we simulated a number of
flows starting at random locations in the trace, with durations
exponentially distributed with mean 1 min.  The results show that the
expected rate of a flow is roughly the same as the trace's average.
Traffic characteristics are summarized below:

   o  Average rate 769 Kbps

   o  Each frame is sent in 1500-byte packets with a packet
      inter-arrival time of 1ms

   o  No traffic is sent between frames

2.3.5.  Randomization of Base Traffic Models

To emulate some degree of disruption of these arrival patterns by the
queuing that a traffic stream encounters before its arrival at the
PCN region, we implemented a limited randomization of the base
models, randomly moving each packet by a small amount of time around
its transmission time in the corresponding base traffic model.  We
implemented randomized versions of all 4 traffic streams (CBR, VBR,
SVD and VTR) by randomizing the CBR portion of each model.  All
multi-bottleneck experiments in this document use the randomized
versions of the traffic models, while most of the single-bottleneck
experiments use the base traffic models, unless stated otherwise.
Although we expect to be able to run all topologies with both
randomized and non-randomized models in future work, we believe that
there should be no qualitative difference between the performance
with randomized and unrandomized models in all cases except a subset
of Preemption experiments with low ingress-egress aggregation levels,
which have already been examined under both randomized and
unrandomized models and are reported in this draft.

2.4.  Simulation Environment

The simulation study reported here used a purpose-built
discrete-event simulator implemented in the ECLiPSe language
(http://eclipse.crosscoreop.com/eclipse).  The latter is intended for
general programming tasks and is especially suitable for rapid
prototyping.  Simulations were run on Red Hat Enterprise Linux on an
IBM eServer x335 (3.2GHz Intel Xeon, 4GB RAM).

3.  Admission Control

3.1.  Parameter Settings

3.1.1.  Virtual queue settings

Unless otherwise specified, most of the simulations were run with the
following virtual queue thresholds:

   o  min-marking-threshold: 5ms at the virtual queue rate

   o  max-marking-threshold: 15ms at the virtual queue rate

   o  virtual-queue-upper-limit: 20ms at the virtual queue rate

The virtual-queue-upper-limit puts an upper bound on how much the
virtual queue can grow.  Note that the virtual queue is drained at
the configured rate, which is smaller than the link speed.
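The following Python sketch (ours; the threshold names follow this
draft, while the code structure and the byte-based accounting are
assumptions) illustrates how a PCN node might maintain such a virtual
queue and "ramp"-mark packets between the two thresholds (see Section
3.4.1 for ramp vs step marking):

   import random

   class VirtualQueueMarker:
       """Illustrative virtual-queue admission marker (ramp variant).

       The virtual queue is drained at the configured-admission-rate
       (below the link speed); the thresholds are configured in
       seconds at that rate, as in Section 3.1.1."""
       def __init__(self, admission_rate_bps, min_thresh_s=0.005,
                    max_thresh_s=0.015, limit_s=0.020):
           self.rate = admission_rate_bps / 8.0   # drain rate, bytes/s
           self.min_b = min_thresh_s * self.rate  # thresholds in bytes
           self.max_b = max_thresh_s * self.rate
           self.limit_b = limit_s * self.rate
           self.vq = 0.0                          # virtual queue, bytes
           self.last_t = 0.0

       def on_packet(self, t, size_bytes):
           """Update the virtual queue on a packet arrival; return
           True if the packet is Admission Marked."""
           self.vq = max(0.0, self.vq - (t - self.last_t) * self.rate)
           self.last_t = t
           # virtual-queue-upper-limit caps the virtual queue growth
           self.vq = min(self.vq + size_bytes, self.limit_b)
           if self.vq <= self.min_b:
               return False                  # below the ramp: no marking
           if self.vq >= self.max_b:
               return True                   # above the ramp: always mark
           # linear marking probability between the two thresholds
           p = (self.vq - self.min_b) / (self.max_b - self.min_b)
           return random.random() < p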
In most of the simulations, the configured-admission-rate of the
virtual queue was set to half the link speed.  Note that, as long as
there is no packet loss, the admission control scheme successfully
keeps the load of admitted flows at the desired level regardless of
the actual setting of the configured-admission-limit.  However, it is
not clear whether this remains true when the configured-admission-
rate is close to the link speed (i.e., to the actual queue service
rate).  Further work is necessary to quantify the performance of the
scheme at smaller ratios of the service rate to the virtual queue
rate, where packet loss may become an issue.

3.1.2.  Egress measurements

The CLE is computed as an exponentially weighted moving average
(EWMA).  In the simulation results presented in Sections 3.2 and 3.3,
the CLE is computed on a per-packet basis, as that is the setting
that was used in [I-D.briscoe-tsvwg-cl-phb], from which these results
are taken.  For those experiments a CLE threshold of 0.5 and an EWMA
weight of 0.01 are used unless otherwise specified.  Our subsequent
study indicated that there is no significant difference between the
observed performance of interval-based and per-packet egress
measurements.  Since interval-based measurements for a large number
of ingresses are substantially easier for hardware implementations,
the subsequent studies reported in the rest of this draft
concentrated on interval-based egress measurement.  The measurement
interval was chosen to be 100ms, and a range of CLE thresholds and
EWMA weights was explored, as specified in the individual experiment
descriptions.
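A minimal sketch of the interval-based egress measurement (ours; the
update rule is a standard EWMA and the class name is an assumption):

   class EgressCleMeter:
       """Interval-based CLE measurement at the egress (illustrative).

       At the end of every measurement interval (100ms in this draft),
       the fraction of admission-marked packets received in that
       interval is folded into an EWMA, which serves as the CLE
       returned to ingresses.  Section 3.6 explores weights of 0.1-0.9
       for this variant."""
       def __init__(self, weight=0.3, interval_s=0.100):
           self.weight = weight
           self.interval_s = interval_s
           self.cle = 0.0
           self.marked = 0
           self.total = 0

       def on_packet(self, admission_marked):
           self.total += 1
           if admission_marked:
               self.marked += 1

       def on_interval_end(self):
           """Called every interval_s seconds; returns the new CLE."""
           fraction = self.marked / self.total if self.total else 0.0
           self.cle += self.weight * (fraction - self.cle)
           self.marked = self.total = 0
           return self.cle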
3.2.  Basic Bottleneck Aggregation Results

One of the assumptions in [I-D.briscoe-tsvwg-cl-architecture] is that
there is sufficient aggregation on the "bottleneck" links.  Our first
set of experiments revolved around getting some preliminary intuition
of what constitutes "enough bottleneck aggregation" for the traffic
models we chose.  To that end, we fixed the configured admission rate
at half the link speed for links in the range of T1 (1.5 Mbps)
through 1 Gbps, and examined the level of aggregation corresponding
to the chosen configured admission rate at those speeds for the
different traffic models.  Further, to eliminate the question of
whether ingress-egress pair aggregation has any significant effect,
the experiments in this section used the Single Link topology only,
so that all flows shared the same ingress-egress pair.

We found that on links of capacity from 10 Mbps to OC3, admission
control for CBR voice and on-off voice (VBR) traffic works reliably
with the range of parameters we simulated, both with Poisson and
batch call arrivals.  As the performance of the algorithm was quite
good at these speeds, and generally improves with the degree of
traffic aggregation, we chose not to investigate higher link speeds
for CBR and VBR voice within the time constraints of this effort.
The performance at lower link speeds was substantially worse, and
those results are not presented here.  These results suggest, as a
rule of thumb, that the admission control algorithm described in
[I-D.briscoe-tsvwg-cl-architecture] should not be used at aggregation
levels substantially below 5 Mbps of aggregate rate, even for voice
traffic (with or without silence compression).

For the higher-rate on-off SVD traffic, due to time limitations we
simulated 1 Gbps and OC12 (622 Mbps) links and Poisson arrivals only.
Note that, due to the high mean and peak rates of this traffic model,
slower links are unlikely to yield a sufficient level of aggregation
of this type of traffic to satisfy the flow aggregation assumptions
of [I-D.briscoe-tsvwg-cl-architecture].

Our simulations indicated that this model also behaved quite well at
these levels of aggregation, although the deviation from the
configured-admission-rate is slightly higher in this case than for
the less bursty traffic models.  Recalling that the simulated SVD
model is in fact just on-off traffic with a high peak rate and a
video-like peak-to-mean ratio, we believe that actual video will
behave only better; hence, with bottleneck aggregation on the order
of 150 SVD flows, the admission control algorithm is expected to
perform reasonably well.  Note, however, that this statement assumes
sufficient per ingress-egress pair aggregation as well.

For these link speeds and traffic models, we investigated demand
overloads of 2x-5x.  Performance at lower levels of overload is
expected to be only better, and higher levels of overload have not
been studied due to time limitations.  Table 3.1 below summarizes the
worst-case difference between the admitted load and the configured
admission rate (which we refer to as the over-admission percentage).
The worst-case difference was taken over all experiments with the
corresponding range of link speeds and demand overloads.  In general,
the higher the demand, the more challenging it is for the admission
control algorithm, due to the larger number of near-simultaneous
arrivals at higher overloads; as a result, the worst-case results in
Table 3.1 correspond to the 5x demand overload experiments.

 ---------------------------------------------------------------------
 |               |         |         | overadmission | standard      |
 | Link type     | traffic | call    | percent       | deviation to  |
 |               | type    | arrival |               | conf-adm-rate |
 |               |         | process |               | ratio         |
 ---------------------------------------------------------------------
 |T3,100Mbps,OC3 | CBR     | POISSON | 0.5%          | 0.005         |
 ---------------------------------------------------------------------
 |T3,100Mbps,OC3 | VBR     | POISSON | 2.5%          | 0.025         |
 ---------------------------------------------------------------------
 |T3,100Mbps,OC3 | CBR     | BATCH   | 1.0%          | 0.01          |
 ---------------------------------------------------------------------
 |T3,100Mbps,OC3 | VBR     | BATCH   | 3.0%          | 0.03          |
 ---------------------------------------------------------------------
 | 1Gbps         | SVD     | POISSON | 2.0%          | 0.08          |
 ---------------------------------------------------------------------
 | OC12          | SVD     | POISSON | 0.0%          | 0.1           |
 ---------------------------------------------------------------------

 Table 3.1.  Summary of the admission control results for links of T3
 speed and above.  Note: T3 = 45Mbps, OC3 = 155Mbps, OC12 = 622Mbps.

3.3.  Sensitivity to Call Arrival Assumptions

In the previous section we noted that, at sufficient levels of
aggregation, the Poisson call arrival assumption was not critical, in
the sense that even a burstier, batch arrival process resulted in
reasonable performance for all traffic models.  In this section we
investigate to what extent the Poisson call arrival assumption
affects the accuracy of the admission control algorithm.  The results
presented here show that the call arrival assumption matters at all
levels of aggregation, and that at lower levels of aggregation it
makes the difference between poor but possibly tolerable performance
and completely unacceptable performance (see below).  To that end, we
investigated the comparative performance of the algorithm with
Poisson and batch call arrival processes for the CBR and VBR voice
traffic.  The mean call arrival rate was the same for both processes,
with demand overloads ranging from 2x to 5x.
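The sketch below (ours) contrasts the two call arrival processes.
The draft does not specify the batch-size distribution, so the
uniform batch size used here is purely our assumption; only the mean
call arrival rate is matched between the two processes:

   import random

   def poisson_call_arrivals(rate_per_s, horizon_s, rng):
       """Poisson arrivals: i.i.d. exponential inter-arrival times."""
       t, calls = 0.0, []
       while True:
           t += rng.expovariate(rate_per_s)
           if t >= horizon_s:
               return calls
           calls.append(t)

   def batch_call_arrivals(rate_per_s, horizon_s, rng, mean_batch=5):
       """Batch arrivals with the same mean rate: whole batches of
       calls arrive at Poisson epochs, so arrivals are burstier.  The
       uniform batch-size distribution (mean mean_batch) is our
       assumption."""
       t, calls = 0.0, []
       while True:
           t += rng.expovariate(rate_per_s / mean_batch)
           if t >= horizon_s:
               return calls
           calls.extend([t] * rng.randint(1, 2 * mean_batch - 1))

   rng = random.Random(7)
   print(len(poisson_call_arrivals(100.0, 10.0, rng)),  # ~1000 calls
         len(batch_call_arrivals(100.0, 10.0, rng)))    # similar mean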
Table 3.2 below summarizes the difference between the admitted load
and the configured-admission-rate for CBR voice in the case of
Poisson and batch arrivals.  Table 3.3 provides a similar summary for
the on-off traffic simulating voice with silence compression.  The
results in the tables correspond to the worst case across all
overload factors (and, where multiple link speeds are listed, across
all those link speeds).

 -----------------------------------------------------------
 | Link type    | arrival | overadmission | standard       |
 |              | model   | percent       | deviation to   |
 |              |         |               | conf-adm-rate  |
 |              |         |               | ratio          |
 -----------------------------------------------------------
 | 1Mbps, T1    | BATCH   | 30.0%         | 0.30           |
 -----------------------------------------------------------
 | 10 Mbps      | BATCH   | 5.0%          | 0.08           |
 -----------------------------------------------------------
 |T3,100Mbps,OC3| BATCH   | 1.0%          | 0.01           |
 -----------------------------------------------------------
 | 1Mbps, T1    | POISSON | 5.0%          | 0.10           |
 -----------------------------------------------------------
 | 10 Mbps      | POISSON | 1.0%          | 0.02           |
 -----------------------------------------------------------
 |T3,100Mbps,OC3| POISSON | 0.5%          | 0.005          |
 -----------------------------------------------------------

 Table 3.2.  Comparison of Poisson and batch call arrival models for
 CBR voice.  Note: T1 = 1.5Mbps, T3 = 45Mbps, OC3 = 155Mbps.

 -----------------------------------------------------------
 | Link type    | arrival | overadmission | standard       |
 |              | model   | percent       | deviation to   |
 |              |         |               | conf-adm-rate  |
 |              |         |               | ratio          |
 -----------------------------------------------------------
 | 1Mbps, T1    | BATCH   | 40.0%         | 0.30           |
 -----------------------------------------------------------
 | 10 Mbps      | BATCH   | 8.0%          | 0.06           |
 -----------------------------------------------------------
 |T3,100Mbps,OC3| BATCH   | 3.0%          | 0.03           |
 -----------------------------------------------------------
 | 1Mbps, T1    | POISSON | 15.0%         | 0.20           |
 -----------------------------------------------------------
 | 10 Mbps      | POISSON | 7.0%          | 0.06           |
 -----------------------------------------------------------
 |T3,100Mbps,OC3| POISSON | 2.5%          | 0.025          |
 -----------------------------------------------------------

 Table 3.3.  Comparison of Poisson and batch call arrival models for
 VBR voice with silence compression.  Note: T1 = 1.5Mbps, T3 = 45Mbps,
 OC3 = 155Mbps.

3.4.  Sensitivity to Marking Parameters at the Bottleneck

3.4.1.  Ramp vs Step Marking

[I-D.briscoe-tsvwg-cl-architecture] gives the option of "ramp" or
"step" marking at the bottleneck.  The behavior of the admission
control algorithm in all the simulation experiments we performed did
not differ substantially between the two: that is, between "ramp"
marking, where separate min-marking-threshold and max-marking-
threshold values are used with a linear marking probability between
these thresholds, and "step" marking, where the min-marking-threshold
and max-marking-threshold are collapsed at the max-marking-threshold
value and all packets above this collapsed threshold are marked with
probability 1.
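Expressed as code (our paraphrase; the function names are ours), the
two marking-probability curves as a function of the virtual queue
size are:

   def ramp_mark_prob(vq_bytes, min_thresh_bytes, max_thresh_bytes):
       """'Ramp': linear marking probability between the thresholds."""
       if vq_bytes <= min_thresh_bytes:
           return 0.0
       if vq_bytes >= max_thresh_bytes:
           return 1.0
       return ((vq_bytes - min_thresh_bytes)
               / (max_thresh_bytes - min_thresh_bytes))

   def step_mark_prob(vq_bytes, max_thresh_bytes):
       """'Step': both thresholds collapsed at max-marking-threshold."""
       return 1.0 if vq_bytes >= max_thresh_bytes else 0.0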
However, the difference between "ramp" and "step" may be more visible
in the multiple-congestion-point case (recall that only single-
congestion-point experiments had been performed at that point).
Another possible reason for this apparent lack of difference between
"ramp" and "step" may relate to the choice of the egress measurement
parameters and the relatively high CLE threshold of 0.5.  Choosing a
lower CLE acceptance threshold and a faster measurement timescale may
result in better sensitivity to lower levels of marked traffic.
Investigating the interaction between the settings of the marking
thresholds, the CLE threshold, and the measurement parameters at the
egress remains an area of future investigation.

3.4.2.  Sensitivity to Virtual Queue Marking Thresholds

The limited number of simulation experiments we performed indicates
that the choice of the absolute values of the min-marking-threshold,
the max-marking-threshold and the virtual-queue-upper-limit can have
a visible effect on the algorithm's performance.  Specifically,
choosing the min-marking-threshold and the max-marking-threshold too
small may cause substantial under-utilization, especially on slow
links.  At larger values of the min-marking-threshold and the
max-marking-threshold, however, preliminary experiments suggest that
the algorithm's performance is insensitive to their values.  The
choice of the virtual-queue-upper-limit affects the amount of
over-admission (above the configured-admission-rate threshold) in
some cases, although this effect is not consistent throughout the
experiments.

Table 3.4 below summarizes the difference between the admitted load
and the configured-admission-rate as a function of the virtual queue
parameters, for the SVD traffic model.  The results in the table
represent the worst case among the experiments with demand overloads
in the range of 2x-5x; typically, the higher deviations of the
admitted load from the configured-admission-rate occur at the higher
degrees of demand overload.  The sensitivity of the smoother CBR and
VBR voice traffic models to variation of these parameters is not as
significant as that shown in Table 3.4 for SVD.

 -----------------------------------------------------------
 |           |               |               | standard     |
 | Link type |min-threshold, | overadmission | deviation to |
 |           |max-threshold, | percent       | conf-adm-rate|
 |           |upper-limit(ms)|               | ratio        |
 -----------------------------------------------------------
 | 1Gbps     | 5, 15, 20     | 6.0%          | 0.08         |
 -----------------------------------------------------------
 | 1Gbps     | 1, 5, 10      | 2.0%          | 0.07         |
 -----------------------------------------------------------
 | 1Gbps     | 5, 15, 45     | 2.0%          | 0.08         |
 -----------------------------------------------------------
 | OC12      | 5, 15, 20     | 5.0%          | 0.11         |
 -----------------------------------------------------------
 | OC12      | 1, 5, 10      | 2.0%          | 0.13         |
 -----------------------------------------------------------
 | OC12      | 5, 15, 45     | 0.0%          | 0.10         |
 -----------------------------------------------------------

 Table 3.4.  Sensitivity of 4 Mbps on-off SVD traffic to the virtual
 queue settings.  Note: OC12 = 622Mbps.

3.5.  Sensitivity to RTT

We performed a limited amount of sensitivity analysis of the
admission control algorithm with respect to the round-trip
propagation time (which is the dominant component of the control
delay in a typical environment using Pre-Congestion Notification).
We considered both the case where all flows in a given experiment had
the same RTT, and the case where different flows sharing a single
bottleneck link in a single experiment had round-trip delays ranging
between 22 and 220 ms.  The results were good for all types of
traffic tested, implying that the admission control algorithm is
sensitive to neither the absolute nor the relative value of the
round-trip propagation time, at least in the range of values tested.
We expect this to remain true for a wider range of round-trip
propagation times.

3.6.  Sensitivity to EWMA weight and CLE

This section presents the results of an investigation of the combined
effect of the EWMA weight and the CLE threshold setting at the
egress, in three types of settings:

   o  the Single Link topology of Fig. 2.1

   o  the RTT topology of Fig. 2.2 with 100 ingress links

   o  the PLT topologies of Fig. 2.3

We experimented with 3 values of the CLE threshold (0.05, 0.15, 0.25)
in combination with EWMA weights ranging from 0.1 to 0.9 (in steps of
0.2).  The overload (the ratio of the demand on the bottleneck link
to the configured admission threshold) was taken in the range between
0.95 and 5.  For brevity we present here only the results at the
endpoints of this overload interval; for intermediate values of
overload the results are even closer to the expected load than at the
two boundary loads.

For a PLT topology with N bottlenecks, we obtain N over-admission-
percentage values (one per bottleneck link).  We show here only the
worst-case values: in the overload experiments (1x-5x) the maximum of
the N over-admission percentages is displayed, while in the underload
case (0.95x) the minimum is displayed.

The simulation results reveal that for CBR, VBR and VTR the admission
control is rather insensitive to changes in the EWMA weight and the
CLE threshold.  So, instead of listing all 15 values (one for each
combination of weight and CLE), we display in Table 3.5 the 4-tuple
summaries (min, mean, max, standard deviation) across all experiments
with CBR, VBR and VTR.  These statistics show that the over-admission
percentages are rather similar, with the admitted load staying within
-3% to +2% of the desired admission threshold, with quite limited
variability.  Note that the load of 0.95 corresponds to the case
where the demand is below the configured admission rate, so the ideal
behavior of an admission control algorithm would be to admit all
flows requesting admission.
 ----------------------------------------------------------
 |    Over Admission Perc Stats  | Over |  Topo  | Type |
 |  Min  |  Mean |  Max  |  SD   | Load |        |      |
 ----------------------------------------------------------
 |   0   |   0   |   0   |   0   | 0.95 |        |      |
 |-------------------------------|------| S.Link |      |
 | 0.224 | 0.849 | 1.905 | 0.275 |  5   |        |      |
 |--------------------------------------|--------|      |
 |   0   |   0   |   0   |   0   | 0.95 |        |      |
 |-------------------------------|------|  RTT   | CBR  |
 | 0.200 | 0.899 | 1.956 | 0.279 |  5   |        |      |
 |--------------------------------------|--------|      |
 | -1.06 | -0.33 | -0.15 | 0.228 | 0.95 |        |      |
 |-------------------------------|------|  PLT   |      |
 | -0.58 | 0.740 | 1.149 | 0.404 |  5   |        |      |
 ----------------------------------------------------------
 | -1.45 | -0.98 | -0.86 | 0.117 | 0.95 |        |      |
 |-------------------------------|------| S.Link |      |
 | -0.07 | 1.405 | 1.948 | 0.421 |  5   |        |      |
 |--------------------------------------|--------|      |
 | -1.56 | -0.80 | -0.69 | 0.16  | 0.95 |        |      |
 |-------------------------------|------|  RTT   | VBR  |
 | -0.11 | 1.463 | 2.199 | 0.462 |  5   |        |      |
 |--------------------------------------|--------|      |
 | -3.49 | -2.23 | -1.80 | 0.606 | 0.95 |        |      |
 |-------------------------------|------|  PLT   |      |
 | -1.37 | 0.978 | 1.501 | 0.744 |  5   |        |      |
 ----------------------------------------------------------
 |   0   |   0   |   0   |   0   | 0.95 |        |      |
 |-------------------------------|------| S.Link |      |
 | -0.53 | 1.004 | 1.539 | 0.453 |  5   |        |      |
 |--------------------------------------|--------|      |
 |   0   |   0   |   0   |   0   | 0.95 |        |      |
 |-------------------------------|------|  RTT   | VTR  |
 | -0.21 | 1.382 | 1.868 | 0.503 |  5   |        |      |
 |--------------------------------------|--------|      |
 |   0   |   0   |   0   |   0   | 0.95 |        |      |
 |-------------------------------|------|  PLT   |      |
 | -0.86 | 0.686 | 1.117 | 0.452 |  5   |        |      |
 ----------------------------------------------------------

 Table 3.5  Summarized performance for CBR, VBR and VTR across
 different parameter settings and topologies

For SVD, the algorithm does show a certain sensitivity to these
parameters.  Table 3.6 records the over-admission percentage for each
combination of EWMA weight and CLE threshold for the SVD traffic
model.
 --------------------------------------------------------------------
 |  CLE   |             EWMA Weights              | Over |   Topo   |
 | Thresh |  0.1   |  0.3  |  0.5  |  0.7  |  0.9 | Load |          |
 --------------------------------------------------------------------
 |  0.05  | -4.87  | -3.05 | -2.92 | -2.40 | -2.40|      |          |
 |  0.15  | -3.67  | -2.99 | -2.40 | -2.40 | -2.40| 0.95 |          |
 |  0.25  | -2.67  | -2.40 | -2.40 | -2.40 | -2.40|      |  Single  |
 |--------------------------------------------------------|  Link    |
 |  0.05  | -4.03  |  2.52 |  3.45 |  5.70 |  5.17|      |          |
 |  0.15  | -0.81  |  3.29 |  6.35 |  6.80 |  8.13|  5   |          |
 |  0.25  |  2.15  |  5.83 |  6.81 |  8.62 |  7.95|      |          |
 --------------------------------------------------------------------
 |  0.05  | -11.77 | -8.35 | -5.23 | -2.64 | -2.35|      |          |
 |  0.15  | -9.71  | -7.14 | -2.01 | -2.21 | -1.13| 0.95 |          |
 |  0.25  | -5.54  | -6.04 | -3.28 | -0.88 | -0.27|      |   RTT    |
 |--------------------------------------------------------|          |
 |  0.05  | -5.04  | -0.65 |  4.21 |  6.65 |  9.90|      |          |
 |  0.15  | -1.02  |  1.58 |  7.21 |  8.24 | 10.07|  5   |          |
 |  0.25  | -0.76  |  1.96 |  7.43 |  9.66 | 11.26|      |          |
 --------------------------------------------------------------------
 |  0.05  | -2.51  | -0.85 | -0.63 | 0.025 | -0.00|      |          |
 |  0.15  | -1.50  | -0.63 | -0.02 | 0.052 | -0.02| 0.95 |          |
 |  0.25  | -0.26  | 0.122 | 0.041 | -0.02 | 0.132|      |   PLT    |
 |--------------------------------------------------------|          |
 |  0.05  | -3.50  | 0.422 | 1.899 | 3.339 | 3.770|      |          |
 |  0.15  | -1.04  | 2.016 | 3.251 | 3.880 | 3.991|  5   |          |
 |  0.25  | 0.449  | 2.965 | 3.066 | 4.107 | 4.737|      |          |
 --------------------------------------------------------------------

 Table 3.6  Over-admission percentage for SVD (rows: CLE threshold;
 columns: EWMA weight)

It follows from these results that, given the chosen values of the
remaining parameters, choosing the CLE threshold and the EWMA weight
in the middle of the tested ranges appears to be more beneficial for
the overall performance across the chosen range of overloads; at the
same time, performance is tolerable across the entire tested range of
both values, even for very small ingress aggregation.  The high-level
conclusion that can be drawn from Table 3.6 is that, predictably, the
high peak-to-mean-ratio SVD traffic is substantially more stressful
to the queue-based admission control algorithm, but a set of
parameters exists that keeps the over-admission within about -3% to
+10% of the expected load.

3.7.  Effect of Ingress-Egress Aggregation

In this section, we discuss the effect of ingress-egress aggregation
on the algorithm by comparing the Single Link results in Tables 3.5
and 3.6 with the corresponding RTT results.  As discussed earlier,
the actual choice of RTT values on the different ingress links does
not appear to have any significant effect on the simulation results.
We believe that any appreciable difference between the two topologies
relates to the degree of aggregation of each ingress-egress pair.
One of the outcomes of the results presented in Tables 3.5 and 3.6 is
that the admission control algorithm of
[I-D.briscoe-tsvwg-cl-architecture] seems relatively insensitive to
the level of ingress-egress aggregation.  This result is not entirely
intuitive, and requires further exploration.
Nevertheless, even if preliminary, these results are very
encouraging: while the assumption of reasonable aggregation of PCN
traffic at an internal bottleneck seems a relatively safe one, it is
much less clear that a high per ingress-egress aggregation level is a
safe assumption in reality.  In particular, a setup with only ~100
SVD flows taking up about 50% of a 1 Gbps bottleneck link, with all
100 flows coming from different ingresses, seems entirely plausible.
It is therefore encouraging that the algorithm appears sufficiently
robust under these circumstances.

3.8.  Effect of Multiple Bottlenecks

The results in Tables 3.5 and 3.6 can also be used to demonstrate the
effect of a multi-bottleneck topology.  As follows from these tables,
there seems to be no visible performance difference between the
multiple-bottleneck (PLT) topologies and the cases where only a
single bottleneck is traversed (the Single Link and RTT topologies).

Note that it may even seem from the data that, in the SVD case, PLT
outperforms Single Link.  In fact this is not the case.  The cause of
the better performance of the PLT topology compared to Single Link is
that the bottleneck links in PLT happen to be 2.4 times the size of
those in the Single Link and RTT cases.  Given the same admission
threshold relative to the link speed, this implies that the level of
bottleneck aggregation in PLT is 2.4 times that of Single Link, while
the ingress-egress pair aggregation levels of Single Link and PLT are
comparable.  Hence, the better results for PLT compared to Single
Link should be viewed as an effect of aggregation rather than an
effect of the multi-bottleneck topology.  (For RTT, the level of
ingress-egress aggregation is smaller, hence the further performance
degradation observed compared to Single Link.)

Since Tables 3.5 and 3.6 show only the worst-case value across all
bottlenecks in a topology, it is not possible to discern from those
tables what effect, if any, subsequent bottlenecks have on the
over-admission percentage of each individual bottleneck.  In Table
3.7, we show a snapshot of the behavior of all bottlenecks in the
5-bottleneck topology.  Here, the over-admission percentage displayed
is an average across all 15 experiments with different [weight, CLE]
settings.  (We observe very much the same behavior in each individual
experiment, so providing summarized statistics does not invalidate
the results.)  As seen from this table, there appears to be no
significant difference in over-admission percentages across the
different bottlenecks traversed by the "long-haul" flows in the PLT
topologies.

 --------------------------------------------------------
 | Traffic |          Bottleneck LinkId            | Over |
 |  Type   |   1   |   2   |   3   |   4   |   5   | Load |
 --------------------------------------------------------
 |  CBR    | 0.413 | 0.443 | 0.429 | 0.412 | 0.412 |  5   |
 --------------------------------------------------------
 |  VBR    | 0.599 | 0.595 | 0.579 | 0.590 | 0.634 |  5   |
 --------------------------------------------------------
 |  VTR    | 0.266 | 0.279 | 0.290 | 0.247 | 0.298 |  5   |
 --------------------------------------------------------

 Table 3.7  Over-admission percentage for PLT5, for all bottlenecks

4.  Pre-Emption

4.1.  Pre-emption Model and Key Parameters

We evaluate the Preemption algorithm on all the topologies described
in Section 2.
In all experiments, initially all ingresses but one generate traffic
such that the aggregate load on every bottleneck is substantially
smaller than the configured preemption threshold.  We refer to this
initial aggregate load as "base traffic".  Then, at some point in the
simulation, we emulate a network "failure" event by generating
additional "failure traffic" and directing it to the appropriate
bottleneck link(s).  Both "base" and "failure" traffic are generated
according to a Poisson process.  The sum of the base and failure
loads is what we refer to as the "bottleneck load".  In all
Preemption experiments presented in this document, we configure the
preemption threshold as 1/2 of the bottleneck link speed.  In all
cases, we generate the bottleneck load (after failure) to be roughly
3/4 of the bottleneck link capacity.

In the simulation, a router implementing PCN Preemption Marking
operates as described in [I-D.briscoe-tsvwg-cl-architecture], marking
all packets that find no tokens in the token bucket.  In the case of
multiple bottlenecks, only previously unmarked traffic is metered
against the token bucket.  When an egress gateway receives a marked
packet from an ingress, it starts measuring its Sustainable-
Aggregate-Rate for that ingress, if it is not already in preemption
mode.  If a marked packet arrives while the egress is already in
preemption mode, the packet is ignored.  The measurement is interval
based, with a 100ms measurement interval chosen in all simulations.
At the end of the measurement interval, the egress sends the measured
Sustainable-Aggregate-Rate to the ingress and leaves preemption mode.
When the ingress receives the sustainable rate from the egress, it
immediately starts its own measurement interval (unless it is already
in one) and measures its sending rate to that egress.  At the end of
that measurement interval, it preempts the necessary amount of
traffic.  The ingress then leaves preemption mode until the next time
it receives a sustainable rate estimate from the egress.  In all our
simulations the ingress used the same measurement interval length as
the egress.  The token bucket depth was set to 256 packets in all
experiments presented here.

We evaluate the performance of the algorithm using a metric called
the "over-preemption percentage", defined as (actual preempted
fraction - optimal preempted fraction) * 100%.  We apply this metric
in two contexts: (1) the aggregate amount of preempted traffic on a
given bottleneck link, and (2) the amount of preempted traffic of an
ingress-egress traffic aggregate.  The former relates to bottleneck
utilization, and is quite straightforward: optimal preemption would
preempt exactly the traffic above the configured preemption
threshold, so "optimal" preemption is defined simply by the
configured preemption threshold.  For the ingress-egress aggregates,
the notion of optimality is closely related to the notion of
fairness.  In general, fairness can be defined in many different
ways, and we do not attempt to argue for one definition being "more
optimal" than another.  In this draft we call the per-ingress-egress
preemption amounts optimal if the amount of preempted traffic is
distributed among all ingress-egress pairs sharing a bottleneck link
in proportion to their rates prior to preemption.  For brevity, we
omit the details of the definition for the multiple-bottleneck case,
as it is not central to the discussion in this draft.
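The sketch below (ours) illustrates one plausible reading of this
egress/ingress interaction.  In particular, measuring the
Sustainable-Aggregate-Rate as the rate of unmarked traffic received
from an ingress is our interpretation of
[I-D.briscoe-tsvwg-cl-architecture], and the flow-selection policy at
the ingress is arbitrary:

   class EgressPreemption:
       """Egress side of the preemption mechanism (illustrative).

       On the first Preemption-marked packet from a given ingress, the
       egress enters preemption mode and measures, over one 100ms
       interval, the rate of unmarked traffic received from that
       ingress (our reading of the Sustainable-Aggregate-Rate); it
       then reports that rate and leaves preemption mode."""
       INTERVAL_S = 0.100                   # measurement interval

       def __init__(self):
           self.in_preemption_mode = False
           self.interval_end = 0.0
           self.unmarked_bytes = 0

       def on_packet(self, now, size_bytes, preemption_marked):
           if self.in_preemption_mode:
               if not preemption_marked:    # further marks are ignored
                   self.unmarked_bytes += size_bytes
           elif preemption_marked:          # first mark: start measuring
               self.in_preemption_mode = True
               self.unmarked_bytes = 0
               self.interval_end = now + self.INTERVAL_S

       def maybe_report(self, now):
           """Return the sustainable rate (bytes/s) to be sent to the
           ingress at the end of the interval, else None."""
           if self.in_preemption_mode and now >= self.interval_end:
               self.in_preemption_mode = False
               return self.unmarked_bytes / self.INTERVAL_S
           return None

   def flows_to_preempt(flow_rates_bps, sending_bps, sustainable_bps):
       """Ingress side: after measuring its own sending rate to the
       egress over one interval, preempt whole flows until the excess
       over the sustainable rate is covered.  Preempting in arbitrary
       order is our simplification; the draft does not prescribe a
       flow-selection policy."""
       excess = sending_bps - sustainable_bps
       preempted = []
       for flow_id, rate in flow_rates_bps.items():
           if excess <= 0:
               break
           preempted.append(flow_id)
           excess -= rate
       return preempted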
4.2.  Effect of RTT Difference

Our experiments indicate that the absolute value of the RTT, within
the chosen range (up to 220 ms), has no effect on the performance of
the Preemption algorithm as long as the RTTs of the different
ingress-egress pairs are comparable.  This section investigates the
impact of the relative difference in the RTTs of flows sharing a
single bottleneck.  We show that, in principle, when both short- and
long-RTT ingress-egress pairs are present, the difference in RTT may
cause over-preemption.

To demonstrate this, we consider a simple RTT topology with two
ingresses carrying CBR traffic.  Table 4.3 shows the experiment setup
and the preemption results.  The overall traffic on the bottleneck
during the event is 1761 CBR flows, which constitutes 75% of an OC3
link.  Ingress 2 has an RTT around 50ms larger than that of
Ingress 1.  The actual preemption and the over-preemption percentage
are listed for each ingress separately.  The results show that
Ingress 1 over-preempts about 10% of its traffic, which results in
about 6% overall over-preemption at the bottleneck.

 ----------------------------------------------
 |Ingress|Bottleneck| RTT | Actual  | Over-Pre.|
 |       |Eventload |     | Preempt | Perc     |
 ----------------------------------------------
 |   1   |   1178   | 1ms |  0.405  |  9.59%   |
 ----------------------------------------------
 |   2   |    583   | 50ms|  0.302  | -0.51%   |
 ----------------------------------------------

 Table 4.3.  Summary of the RTT-difference results.

Figure 4.4 shows a time vs. load graph intended to capture the effect
of the preemption algorithm in this experiment.  The X axis is time,
with a number of important time points labeled t1 through t4 (the
actual times are listed in the accompanying table).  The Y axis is
the load on the bottleneck link.  The left plot shows the aggregate
load; the stacked plot on the right shows the behavior of each
individual ingress (the shaded region is the load contributed by
Ingress 1 and the clear region corresponds to Ingress 2).  The dotted
line represents the preemption threshold.

 [The ASCII-art plots of Fig. 4.4 are not reproduced here.  The
 aggregate load rises above the preemption threshold at t1 and steps
 back down to the threshold by t3; the stacked plot shows Ingress 1
 preempting at t2, Ingress 2 preempting at t3, and Ingress 1
 preempting a second time at t4, taking the load below the
 threshold.]

 ---------------------------------
 |  t1   |  t2   |  t3    |  t4   |
 ---------------------------------
 | 200.0 | 200.2 | 200.25 | 200.40|
 ---------------------------------

 Fig 4.4.  Time series of preemption events in the RTT-difference
 experiment

As the simulated failure event occurs at time t1 (200s), the load on
the bottleneck goes over the preemption threshold by 1/3, thereby
activating the preemption algorithm.  200ms afterwards, at t2 (the
sum of the egress measurement of the sustainable rate, 100ms, and the
subsequent ingress measurement of its current sending rate, 100ms),
Ingress 1, with its negligible RTT (1ms), starts preempting its
traffic.
50ms later, at t3, Ingress 2 preempts its share of the traffic.  Note
that at this point both ingresses have preempted the correct amount,
which is why the load on the bottleneck between times t3 and t4 is
exactly at the preemption threshold.  However, the stacked graph
shows that Ingress 1 did another round of preemption at t4 (200.4s),
which corresponds to its 10% over-preemption.  The reason for this
effect is that during the interval between t2 and t3, after Ingress 1
finished its preemption but before Ingress 2 started (due to its
longer RTT), the not-yet-preempted traffic from Ingress 2 caused a
further decrease in Ingress 1's sustainable rate as measured during
the interval (t2, t2+100ms).  This in turn caused Ingress 1 to
preempt at time t4 to compensate for those 50ms of excess traffic
from Ingress 2.  Our follow-up results indicate that this RTT effect
exists to some degree in every experiment with a sufficient
difference in ingress RTTs, independent of the traffic type.
Although for burstier traffic the over-preemption may be worse than
shown above, in our experiments we did not see over-preemption that
was drastically larger.  However, further investigation is needed to
assess whether other scenarios might lead to more substantial
over-preemption.

4.3.  Ingress-Egress Aggregation Experiments

4.3.1.  Motivation for the Investigation

While sufficiently high bottleneck aggregation is listed as one of
the underlying assumptions of [I-D.briscoe-tsvwg-cl-architecture],
there remains the question of whether or not a sufficient degree of
traffic aggregation per ingress-egress pair is also necessary.  We
saw that in our admission experiments the algorithm performed
reasonably well even with small ingress-egress aggregation levels, as
long as the bottleneck aggregation level was sufficiently high.  A
similar investigation needs to be performed for the case of
preemption.

Assuming a large degree of aggregation per ingress-egress pair is
less attractive: one can easily imagine a bottleneck link in a PCN
region carrying traffic from hundreds or thousands of ingresses, and
so one can easily construct cases where the traffic of an
ingress-egress pair is generated by a relatively small number of
flows.  This is especially true for high-rate SVD flows.  If the
number of flows in an ingress-egress pair is indeed small, there is a
theoretical concern that the granularity of preemption (which can
operate on an integer number of flows only) will result in large
inaccuracies in the amount of traffic preempted from a per-ingress-
egress aggregate, and consequently in a large amount of
over-preemption.

As an example of a situation creating this problem, suppose that a
bottleneck link is shared by 2N flows, each coming from a different
ingress-egress pair.  Suppose that only N flows can be supported at
the configured preemption rate, so that N of the 2N flows must be
preempted.  This means that half of the packets will get Preemption
marked.  If these marked packets are more or less uniformly
distributed among the flows sharing the bottleneck, one should expect
that every one of the 2N flows will have half of its packets marked.
That in turn would imply that each ingress would need to preempt half
of its traffic; since it only has one flow, it would have to preempt
that flow (if the number of flows to preempt is rounded up to the
nearest flow) or not preempt any flow at all (if it is rounded down).
In either case the outcome is quite pessimistic: either all flows are
preempted, or Preemption takes no effect at all.  Clearly, a similar
(although perhaps less drastic) effect would occur if a few flows,
rather than one, constitute an ingress-egress pair.  The effect
quickly disappears when the rate of an individual flow is
sufficiently small compared to the total rate of the ingress-egress
aggregate.

While a number of possible changes to the ingress behavior could be
considered to solve or alleviate this problem, we set out to
investigate whether this problem does in fact occur in practice.  The
key question in that respect is whether or not packets do indeed get
marked more or less uniformly among the different flows sharing a
bottleneck over the timescale of the ingress and egress measurement
intervals.  The results of this investigation are presented in the
following subsections.

4.3.2.  Detailed results

To investigate the effect of small ingress-egress aggregation, we
first performed experiments with three traffic types (CBR, VBR and
SVD) at different degrees of ingress aggregation.  All the
experiments in this section are carried out on the RTT topology; the
different ingress aggregation levels are obtained by varying the
number of ingress links in the topology.  All link RTTs are set to
1ms (to eliminate any potential RTT influence).  CBR and VBR voice
used an OC3 bottleneck link, while SVD used an OC48 link, with the
preemption threshold set at 50% of the link bandwidth in all cases.
The bottleneck aggregation was therefore quite high (with respect to
the corresponding link bandwidth), but the ingress-egress aggregation
was varied from 1 flow per ingress-egress pair to about 1/3 of the
number of flows at the bottleneck.  The results are summarized in
Table 4.1 below.

 ----------------------------------------------------------------
 |Traffic|BtleNeck|Number |Flows per| Preempt | Actual |Over-Pre.|
 | Model | Load   |Ingrs. | Ingress |Threshold| Preempt| Perc    |
 ----------------------------------------------------------------
 |  CBR  |  1789  |    2  |   582   |         |  0.321 |  0.05%  |
 |  CBR  |  1772  |   70  |     9   |  1215   |  0.328 |  1.41%  |
 |  CBR  |  1782  |  600  |     1   |         |  0.336 |  1.85%  |
 ----------------------------------------------------------------
 |  VBR  |  5336  |    2  |  1759   |         |  0.333 |  0.35%  |
 |  VBR  |  5382  |   70  |    26   |  3574   |  0.364 |  2.84%  |
 |  VBR  |  5405  | 1800  |     1   |         |  0.368 |  2.99%  |
 ----------------------------------------------------------------
 |  SVD  |   402  |    2  |   135   |   305   |  0.375 |  8.95%  |
 |  SVD  |   417  |   70  |     2   |         |  0.352 |  8.39%  |
 ----------------------------------------------------------------

 Table 4.1  Effect of ingress-egress aggregation.

In this table, the bottleneck load at failure is given as the number
of flows at the bottleneck after the simulated failure event has
occurred and before preemption takes place.  The "Number Ingrs."
column shows the number of ingresses in the RTT topology.
In all cases the algorithm should ideally preempt roughly 1/3 of the traffic after the failure event has occurred (the exact percentage differs slightly from experiment to experiment due to variability in the load-generation implementation). The second-to-last column shows the actual preemption percentage in each experiment, and the last column shows how far it deviates from the optimal value in terms of over-preemption percentage (where the optimal value is computed based on the actual traffic generated in each experiment).

The first conclusion that can be drawn from Table 4.1 is that in these experiments Preemption worked quite well for CBR and VBR, and even in the SVD case with just 2 flows per ingress the over-preemption is quite bounded. The second, far more unexpected, outcome of these results is that for all traffic types in these experiments the results show no appreciable effect of the degree of ingress-egress aggregation, as the preemption percentages do not differ significantly. Given the discussion in the previous section, which predicted substantial inaccuracy of Preemption in the case of a small number of flows per ingress, this result appears both unexpected and encouraging, but does require explanation and discussion.

Further analysis of the simulation traces of the CBR experiments in Table 4.1 helped us identify the cause of this phenomenon. It turned out that in all the simulation runs with CBR traffic, contrary to our expectation that Preemption marking would be more or less uniformly distributed among active flows, what actually happens is that some flows get all their packets marked, while other flows get no packets marked at all (we refer to this effect loosely as "synchronization" in the rest of this document). It is this phenomenon that, in the case of a single flow per ingress, made only the ingresses whose flows were marked preempt those flows, resulting in the correct amount of preemption.

Further analysis showed that this effect is not a simulation artifact, but a direct consequence of the periodicity of individual CBR flows in combination with the incidental choice of several parameters. As it happens, if the number of tokens arriving at the token bucket during the inter-packet interval of a single CBR flow is an integer multiple of the packet size, then once a packet of a flow is marked, all subsequent packets of that flow will find the same number of tokens in the token bucket and will also be marked. The proof of this fact is provided in the companion technical report.

It seems clear that in general this synchronization cannot be relied upon, and we expected that for the VBR case we would see much less of it. Again, we were in for a surprise: trace investigation of our initial results reported in Table 4.1 revealed that even though the token bucket state encountered by the packets of the same VBR flow was not quite the same, it was close enough that again a large number of flows were either fully marked or fully unmarked. We realized that the reason is that the number of flows in the on-period during the relevant measurement intervals is relatively stable, and hence much of the effect observed for the CBR flows approximately holds for the on-off traffic used in our VBR model.
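The following minimal sketch illustrates the synchronization effect with a simplified token-bucket marker. All parameters (packet size, inter-packet interval, token rate, bucket depth) and the marker logic itself are illustrative assumptions, not the settings of the simulated algorithm:

   import random

   PKT = 1000                  # packet size in bytes (assumed)
   T = 0.020                   # CBR inter-packet interval, 20 ms (assumed)
   N = 300                     # CBR flows sharing the bottleneck
   RATE = 0.5 * N * PKT / T    # token rate: 50% of offered load, so
                               # RATE*T = 150*PKT, an integer multiple
   DEPTH = 10 * PKT            # bucket depth (assumed)

   def run(jitter, sim_time=10.0):
       arrivals = []           # (time, flow) pairs; each flow has a phase
       for f in range(N):
           t = random.uniform(0, T)
           while t < sim_time:
               arrivals.append((max(0.0, t + random.uniform(-jitter, jitter)), f))
               t += T
       arrivals.sort()
       tokens, last = DEPTH, 0.0
       sent, marked = [0] * N, [0] * N
       for a, f in arrivals:
           tokens = min(DEPTH, tokens + RATE * (a - last))  # refill bucket
           last = a
           sent[f] += 1
           if tokens >= PKT:
               tokens -= PKT   # enough tokens: packet goes unmarked
           else:
               marked[f] += 1  # bucket exhausted: Preemption-marked
       partial = sum(0.05 < m / s < 0.95 for m, s in zip(marked, sent))
       print("jitter %gms: %d of %d flows partially marked"
             % (jitter * 1e3, partial, N))

   run(0.0)     # pure CBR: flows end up (almost) fully marked or unmarked
   run(0.002)   # +/- 2 ms jitter: marking spreads across many flows

With deterministic CBR arrivals the bucket state seen by each flow repeats period after period, so per-flow marking collapses to (nearly) all-or-nothing; jittering the arrivals breaks this periodicity and spreads the marking across flows, which is consistent with the randomized-CBR results reported below.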
Since the on-period had the same rate as our CBR model, and the packet size was the same for the two models, similar behavior was observed in both sets of experiments.

In the case of SVD, examination of the CBR portion of the on-period of an SVD flow reveals that only every 50th packet of the same flow will see the same token bucket state. This is reflected in the fact that the SVD experiments had a large number of partially marked flows, so synchronization could not have been responsible for the relatively bounded over-preemption of about 9% reported in Table 4.1. We believe this performance should instead be traced to the burstiness of our crude SVD traffic model at time scales commensurate with the measurement period.

In our quest to further understand the unexpectedly reasonable performance at small ingress-egress aggregation, we then tested the hypothesis that randomizing the packet inter-arrival time must surely break the synchronization of the CBR traffic, and to that end we modified our CBR traffic model to what we call "randomized CBR". As described in Section 2, randomized CBR is obtained from a CBR stream by randomly moving each packet by a small amount of time around its transmission time in the corresponding CBR flow. We applied the same "randomization" to the on-periods of the VBR, VTR and SVD flows. Table 4.2 summarizes the repeated experiments with randomized CBR, keeping all other settings the same. We also ran the same experiments with real video traces (VTR), also reported in Table 4.2.

   -----------------------------------------------------------------
   |Traffic|BtleNeck| Number |Flows per| Preempt | Actual |Over-Pre.|
   | Model |  Load  | Ingre. | Ingress |Threshold| Preempt|Perc (%) |
   -----------------------------------------------------------------
   |       |  1789  |     2  |   582   |         |  0.32  |   0.53  |
   |  CBR  |  1818  |    70  |     9   |  1215   |  0.37  |   3.80  |
   |       |  1780  |   600  |     1   |         |  0.45  |  13.19  |
   -----------------------------------------------------------------
   |       |  5340  |     2  |  1759   |         |  0.36  |   2.80  |
   |  VBR  |  5344  |    70  |    26   |  3574   |  0.38  |   4.49  |
   |       |  5363  |  1800  |     1   |         |  0.35  |   1.14  |
   -----------------------------------------------------------------
   |       |   963  |     2  |   318   |         |  0.32  |   3.29  |
   |  VTR  |   977  |    70  |     5   |   682   |  0.39  |   8.45  |
   |       |   958  |   300  |     1   |         |  0.43  |  14.28  |
   -----------------------------------------------------------------
   |       |   427  |     2  |   135   |         |  0.38  |  10.37  |
   |  SVD  |   425  |    70  |     2   |   304   |  0.36  |   9.56  |
   |       |   430  |   140  |     1   |         |  0.37  |   9.17  |
   -----------------------------------------------------------------

    Table 4.2 Effect of ingress-egress aggregation ("randomized CBR").

The results in Table 4.2 finally show a clearly observable aggregation effect in the cases of CBR and VTR, which now display substantially more over-preemption (~13% and ~14% respectively), confirming our expectation that the unexpectedly good performance cannot be relied upon in general at low ingress-egress aggregation levels. Yet again, randomization had almost no effect for VBR and SVD, which stubbornly continued to defy our expectations by showing little sensitivity to low ingress-egress aggregation levels. A further analysis of traces reveals that the unexpected "good performance" with VBR traffic is due to the nature of the traffic's burstiness and the chosen parameters for the on-off periods.
For instance, in the experiment with the 1800-ingress topology, a large number of single-flow ingresses did not preempt their only flow simply because the flow was not actually sending any traffic during the measurement period; hence there were no marked packets to trigger the preemption. In our experiment setup, both VBR and SVD have an on/off ratio of 1/3. Given that, a hypothesis is that even if every ingress has exactly one flow, the system (all ingresses together) can preempt no more than approximately 1/3 of its total flows in each round, because on average only 1/3 of the flows are active at a time. Hence the theoretical worst case of the aggregation effect does not occur in this setting.

For the CBR and VTR experiments that do exhibit the aggregation effect, we are interested in what level of ingress-egress aggregation is sufficient to remove the effect. To answer this question, we varied the number of ingresses in the RTT topology (hence changing the number of flows per ingress-egress pair), while keeping all other parameters the same. The results are summarized in Table 4.3 below. It shows that while only 4 flows per ingress already gives a tolerable over-preemption percentage (~5%) in the case of CBR, considerably more (35 flows per ingress) are required to achieve a comparable result for VTR.

   -----------------------------------------------------------------
   |Traffic|BtleNeck| Number |Flows per| Preempt | Actual |Over-Pre.|
   | Model |  Load  | Ingre. | Ingress |Threshold| Preempt|Perc (%) |
   -----------------------------------------------------------------
   |       |  1762  |     2  |   582   |         |  0.32  |   0.53  |
   |       |  1746  |    10  |    65   |         |  0.32  |   1.28  |
   |       |  1719  |    35  |    17   |         |  0.32  |   2.65  |
   |  CBR  |  1818  |    70  |     9   |  1215   |  0.37  |   3.80  |
   |       |  1687  |   140  |     4   |         |  0.34  |   5.78  |
   |       |  1690  |   300  |     2   |         |  0.38  |   9.60  |
   |       |  1780  |   600  |     1   |         |  0.45  |  13.19  |
   -----------------------------------------------------------------
   |       |   963  |     2  |   318   |         |  0.32  |   3.29  |
   |       |   960  |    10  |    35   |         |  0.34  |   5.22  |
   |  VTR  |   955  |    35  |     9   |         |  0.35  |   6.08  |
   |       |   977  |    70  |     5   |   682   |  0.39  |   8.45  |
   |       |   935  |   140  |     2   |         |  0.38  |  11.29  |
   |       |   958  |   300  |     1   |         |  0.43  |  14.28  |
   -----------------------------------------------------------------

      Table 4.3 Varying ingress-egress aggregation for CBR and VTR

4.3.3.  Discussion of the Ingress Aggregation Results

The fact that a slight randomization of CBR traffic increases over-preemption substantially even in the simple single-bottleneck topology suggests a strong need to look at this phenomenon in the context of a multi-hop network with multiple bottlenecks, since queuing at the multiple hops will perturb the strict periodicity of the CBR voice. Investigation of the sensitivity of Preemption accuracy at small ingress-egress aggregation levels for voice traffic should certainly include simulation of other voice codecs and their traffic mixes. In general, the unexpected sturdiness of the Preemption algorithm at small levels of aggregation warrants further investigation of this phenomenon, both from the theoretical point of view and through further simulation.
The results of the previous section suggest that in many realistic scenarios over-preemption is likely to be a common occurrence at low levels of ingress-egress aggregation, although its extent may not be as large as the worst-case arguments predict. Despite the unexpected difficulty in reproducing the predicted large over-preemption with the chosen traffic models, there remains a substantial concern that low ingress-egress aggregation levels may be the Achilles' heel of the excess-rate-based Preemption mechanism of [I-D.briscoe-tsvwg-cl-architecture].

4.4.  Multiple Bottlenecks Experiments

4.4.1.  Motivation for the Investigation

In this section, we focus our analysis on the multi-bottleneck effect, that is, how the Preemption algorithm performs when the flows from one (or more) ingress-egress pairs traverse multiple bottleneck links. For the rest of this section, we use the term "IE-aggregate" (IEA for short) to refer to the flow aggregate of a given ingress-egress pair.

In theory, we expect that IE-aggregates traversing more bottlenecks will be penalized more, resulting in over-preemption on a per-ingress-egress basis. We refer to this as the "beat-down" effect. The main consequence of the beat-down effect is excessive preemption at the upstream bottlenecks, leading to underutilization of those bottlenecks.

To illustrate the beat-down effect, consider the 2-bottleneck PLT setup in Figure 2.3(a). Recall that the two bottlenecks are links A-B and B-C, and both links have the same capacity. There are two short IE-aggregates, one from Ingress D to Egress E (IEA2), the other from Ingress E to Egress F (IEA3), each traversing a single bottleneck. At the time of the failure event, each short IEA carries a traffic load equal to 1/4 of the bottleneck link size (or 1/2 of the Preemption threshold, which in this case is set to 50% of the link bandwidth). The long IE-aggregate (IEA1), from Ingress A to Egress C, traverses both bottlenecks and carries twice as much traffic as the short ones.

Given that the Preemption threshold is configured to 1/2 of the link size, it is easy to see that letting every IEA preempt 1/3 of its flows gives the optimal result (which we refer to as "optimal-preemption"), in the sense that all bottleneck links will be fully utilized. However, what we expect to happen is the following. When the long IE-aggregate (IEA1) traverses the first bottleneck link, assuming uniformly random marking, 1/3 of its traffic will get Preemption-marked. (The short IEA2 will also get 1/3 of its traffic marked.) Next, the 2/3 of IEA1's traffic that is still unmarked, together with IEA3's traffic, results in a load of (2/3)*(1/2) + 1/4 = 7/12 on the second bottleneck. This implies that for IEA1 an additional fraction (7/12 - 1/2)/(7/12) = 1/7 of its remaining unmarked traffic will be marked, while for IEA3 only 1/7 (instead of 1/3) of its traffic will be marked. To summarize, the beat-down effect in this simple setting means we should see the following preemption behavior:

o  IEA1: 1/3 + 2/3 * 1/7 = 3/7 > 1/3

o  IEA2: 1/3

o  IEA3: 1/7 < 1/3

o  Bottleneck1: (3/7 * 1/2 + 1/3 * 1/4) / (3/4) = 25/63 > 1/3

o  Bottleneck2: (3/7 * 1/2 + 1/7 * 1/4) / (3/4) = 1/3

We refer to the above values as "expected-preemption".
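This computation is mechanical enough to express as a short sketch. The function below is our own construction for this draft (not part of the simulated algorithm); it reproduces the expected-preemption values above and extends to deeper PLT topologies:

   from fractions import Fraction as F

   def beat_down(long_load, short_load, threshold, n_bn):
       # Expected preemption under uniformly random marking: one long
       # IEA crosses n_bn bottlenecks; each bottleneck also carries a
       # fresh single-hop short IEA. Loads are fractions of link size.
       long_rest = long_load        # long-IEA traffic still unmarked
       long_marked = F(0)
       shorts = []                  # marked fraction of each short IEA
       for _ in range(n_bn):
           load = long_rest + short_load
           frac = max(F(0), (load - threshold) / load)
           long_marked += long_rest * frac
           long_rest *= 1 - frac    # survivors continue downstream
           shorts.append(frac)
       return long_marked / long_load, shorts

   # The 2-bottleneck example: threshold 1/2 of link size,
   # long IEA load 1/2, each short IEA load 1/4.
   long_frac, shorts = beat_down(F(1, 2), F(1, 4), F(1, 2), 2)
   print(long_frac)                     # 3/7 (IEA1)
   print([str(s) for s in shorts])      # ['1/3', '1/7'] (IEA2, IEA3)
   print((long_frac * F(1, 2) + shorts[0] * F(1, 4)) / F(3, 4))  # 25/63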
In general, the more bottlenecks an IEA traverses, the more over-preemption occurs both for the long IEA and at the upstream bottlenecks.

The goal of our experiments was to validate to what extent the beat-down effect is visible in practice, and how much underutilization of the upstream links will actually be seen. To that end, we used 2-, 3- and 5-bottleneck PLT topologies with various traffic types. We are interested in whether the actual-preemption exhibits the multi-bottleneck effect relative to the optimal-preemption, and in how much the actual-preemption deviates from our expected-preemption. The results of this investigation are presented in the following subsections.

4.4.2.  Detailed Results

For the first set of experiments, we use a setup similar to the example described in the last subsection. That is, at the time of the failure event, every bottleneck link has a load of roughly 3/4 of its link size, with the long IEA constituting 2/3 of this load and the short one 1/3.

Table 4.7 shows sample output for the multi-bottleneck experiments (in this case, CBR traffic on the 5-bottleneck PLT topology). The first row (labeled IEA1) represents the long IE-aggregate that traverses multiple bottlenecks (the exact count of bottlenecks is given in parentheses after the IEA's name). The remaining rows labeled IEA2 through IEA6 are the short IE-aggregates, each of which traverses only one bottleneck; these rows are ordered by the bottleneck they traverse (from upstream to downstream). The same information is shown for both IEAs and bottlenecks. The last two columns are of most interest, in that they show how far the actual-preemption deviates from the optimal and from the expected values.

   -----------------------------------------------------------------
   |           | Optimal  | Expected |  Actual  | A - O  | A - E  |
   |           |Preemption|Preemption|Preemption|        |        |
   -----------------------------------------------------------------
   | IEA1 (5H) |  0.3090  |  0.4432  |  0.4446  |  13.56 |  0.14  |
   | IEA2 (1H) |  0.3090  |  0.3090  |  0.3231  |   1.42 |  1.42  |
   | IEA3 (1H) |  0.3034  |  0.1181  |  0.1601  | -14.33 |  4.20  |
   | IEA4 (1H) |  0.3048  |  0.0541  |  0.0947  | -21.01 |  4.07  |
   | IEA5 (1H) |  0.3073  |  0.0293  |  0.0641  | -24.32 |  3.48  |
   | IEA6 (1H) |  0.3031  |  0.0049  |  0.0307  | -27.24 |  2.57  |
   | BN1       |  0.3090  |  0.3995  |  0.4051  |   9.61 |  0.56  |
   | BN2       |  0.3034  |  0.3392  |  0.3536  |   5.02 |  1.44  |
   | BN3       |  0.3048  |  0.3182  |  0.3322  |   2.73 |  1.40  |
   | BN4       |  0.3073  |  0.3092  |  0.3214  |   1.41 |  1.22  |
   | BN5       |  0.3031  |  0.3031  |  0.3123  |   0.92 |  0.92  |
   -----------------------------------------------------------------

    Table 4.7 Over-preemption percentage with 5-PLT topology and CBR

The following Table 4.8 summarizes the main results for the multi-bottleneck experiments. For each combination of traffic type and PLT topology, it shows (actual-preemption - optimal-preemption)*100% (labeled 'A-O') and (actual-preemption - expected-preemption)*100% (labeled 'A-E').
   -------------------------------------------------------------------
   |          |    CBR      |    VBR      |    VTR      |    SVD      |
   |          | A-O    A-E  | A-O    A-E  | A-O    A-E  | A-O    A-E  |
   -------------------------------------------------------------------
   |  IEA1(2H)|  7.61 -0.71 | 10.36  1.06 |  9.19  1.26 | 16.07  8.55 |
   |2 IEA2(1H)|  0.85  0.85 |  0.86  0.86 |  3.17  3.17 |  7.30  7.30 |
   |P IEA3(1H)|-14.4   4.07 |-12.39  6.42 |-10.27  7.26 | -1.74 13.81 |
   |L BN1     |  5.45 -0.21 |  7.20  1.00 |  7.24  1.88 | 13.9   8.15 |
   |T BN2     |  0.80  0.80 |  2.84  2.84 |  3.18  3.18 | 10.26 10.26 |
   -------------------------------------------------------------------
   |  IEA1(3H)| 10.8  -0.85 | 13.98  1.18 | 11.90  0.87 | 19.53  9.37 |
   |3 IEA2(1H)|  0.78  0.78 |  1.03  1.03 |  3.35  3.35 |  5.06  5.06 |
   |  IEA3(1H)|-14.09  3.98 |-14.07  4.79 |-10.45  6.78 | -2.65 13.63 |
   |P IEA4(1H)|-21.17  3.96 |-18.94  7.38 |-16.88  7.18 | -6.09 16.50 |
   |L BN1     |  7.9  -0.33 |  9.67  1.13 |  9.43  1.65 | 14.84  7.97 |
   |T BN2     |  2.82  0.71 |  4.77  2.38 |  4.80  2.75 | 12.43 10.75 |
   |  BN3     |  0.9   0.69 |  3.23  3.23 |  2.87  2.87 | 11.65 11.65 |
   -------------------------------------------------------------------
   |  IEA1(5H)| 13.56  0.14 | 16.30  0.91 | 14.77  1.82 | 23.31 11.37 |
   |  IEA2(1H)|  1.42  1.42 |  2.17  2.17 |  3.20  3.20 |  7.26  7.26 |
   |  IEA3(1H)|-14.33  4.20 |-13.65  5.35 |-11.71  6.55 | -8.05  8.44 |
   |  IEA4(1H)|-21.03  4.07 |-21.68  5.19 |-18.01  6.41 |-12.31  9.68 |
   |5 IEA5(1H)|-24.32  3.48 |-24.04  5.71 |-21.39  5.74 |-15.69  8.44 |
   |  IEA6(1H)|-27.24  2.57 |-24.69  4.57 |-23.20  5.20 |-15.31  9.78 |
   |P BN1     |  9.61  0.56 | 11.59  1.33 | 11.06  2.26 | 18.13 10.04 |
   |L BN2     |  5.02  1.44 |  6.53  2.38 |  6.91  3.30 | 13.86 10.44 |
   |T BN3     |  2.73  1.40 |  4.01  2.33 |  4.73  3.27 | 12.42 10.83 |
   |  BN4     |  1.41  1.22 |  3.12  2.50 |  3.54  3.06 | 11.08 10.43 |
   |  BN5     |  0.92  0.92 |  2.13  2.13 |  2.89  2.89 | 10.85 10.85 |
   -------------------------------------------------------------------

     Table 4.8 Summary of the PLT results for 2:1 long-to-short load
     ratio.

It is clear from the 'A-O' results that the beat-down effect is visible across all PLT topologies and traffic types. For instance, for the long IE-aggregate (IEA1), as it traverses 2, 3 and 5 bottlenecks, the degree of over-preemption increases (7.61, 10.85 and 13.56 respectively for CBR traffic); so does that of the most upstream bottleneck link (BN1). Furthermore, all the downstream short IEAs (IEA3 and beyond) experience under-preemption compared to their "optimal" value, while the long IEA preempts more than its "optimal" value.

Next we compare the actual-preemption with the level of preemption predicted by the theoretical beat-down effect under the assumption of uniformly random marking. Our experience reported in the previous section shows that the assumption of uniform marking may not always hold in the case of bursty traffic. As seen from Table 4.8, the results for CBR, VBR and VTR are reasonably close to those predicted by the beat-down effect (within 1% for CBR and within 3% for VBR and VTR). The larger discrepancy between the expected and actual results for SVD is most likely a consequence of the same burstiness effect that we observed in the previous section in the ingress-egress aggregation experiments.

Recall that in all of the above experiments the long IE-aggregate carried twice as much traffic as the short ones. We now investigate what happens if this load ratio changes.
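As a quick illustration of what to expect, the beat_down sketch from Section 4.4.1 can be re-run with equal long and short loads; assuming the total bottleneck load stays at 3/4 of the link size (i.e., 3/8 each, an assumption of ours rather than a stated simulation setting), the long IEA's expected preemption on the 2-bottleneck topology rises from 3/7 (about 0.43) to 7/15 (about 0.47):

   # 1:1 long-to-short ratio on the 2-bottleneck PLT (assumed loads)
   long_frac, shorts = beat_down(F(3, 8), F(3, 8), F(1, 2), 2)
   print(long_frac)                  # 7/15, up from 3/7 in the 2:1 case
   print([str(s) for s in shorts])   # ['1/3', '1/5'] (IEA2, IEA3)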
The same method as illustrated in the last subsection (and sketched above) can be used to obtain the expected-preemption for any given PLT topology. The expected trend is that, keeping all other conditions the same, the smaller the long IEA's share of the load, the more relative unfairness (percentage-wise) it will experience. In the following set of experiments we chose 1:1 (instead of 2:1) as the load ratio of the long and short aggregates, keeping the other settings unchanged. The results, (actual-preemption - optimal-preemption)*100%, are summarized in Table 4.9.

   -------------------------------------------------
   |            |      CBR       |      VTR       |
   |            |  2:1     1:1   |  2:1     1:1   |
   -------------------------------------------------
   |  IEA1(2H)  |   7.61   10.74 |   9.19   12.50 |
   |2 IEA2(1H)  |   0.85    0.77 |   3.17    2.18 |
   |P IEA3(1H)  | -14.49   -9.71 | -10.27   -7.23 |
   |L BN1       |   5.45    5.75 |   7.24    7.44 |
   |T BN2       |   0.80    0.84 |   3.18    2.85 |
   -------------------------------------------------
   |  IEA1(3H)  |  10.85   16.83 |  11.90   18.36 |
   |3 IEA2(1H)  |   0.78    0.77 |   3.35    2.69 |
   |  IEA3(1H)  | -14.09  -10.48 | -10.45   -7.24 |
   |P IEA4(1H)  | -21.17  -15.98 | -16.88  -12.19 |
   |L BN1       |   7.59    8.93 |   9.43   11.05 |
   |T BN2       |   2.82    3.81 |   4.80    5.78 |
   |  BN3       |   0.69    1.10 |   2.87    3.34 |
   -------------------------------------------------
   |  IEA1(5H)  |  13.56   23.23 |  14.77   24.78 |
   |  IEA2(1H)  |   1.42    1.06 |   3.20    2.06 |
   |  IEA3(1H)  | -14.33   -9.98 | -11.71   -6.19 |
   |  IEA4(1H)  | -21.01  -16.15 | -18.01  -13.35 |
   |5 IEA5(1H)  | -24.32  -20.17 | -21.39  -15.47 |
   |  IEA6(1H)  | -27.24  -23.06 | -23.30  -16.86 |
   |P BN1       |   9.61   12.47 |  11.06   13.84 |
   |L BN2       |   5.02    6.97 |   6.91    9.78 |
   |T BN3       |   2.73    3.94 |   4.73    6.00 |
   |  BN4       |   1.41    1.91 |   3.54    4.65 |
   |  BN5       |   0.92     .70 |   2.89    3.77 |
   -------------------------------------------------

     Table 4.9 Summary of the PLT results for 1:1 long-to-short load
     ratio.

The results confirm the expected behavior. For instance, the row giving the over-preemption of the IEA1 that traverses 3 bottleneck links shows that in the 1:1 setup the over-preemption of the long aggregate is much larger than in the 2:1 setup, and the problem grows more severe as the number of bottleneck links increases (see IEA1(5H)). Furthermore, the increase in over-preemption of the long IEA is also reflected on the bottleneck links: the aggregate over-preemption percentage on each bottleneck link increases accordingly. The 'A-E' part of the results is very similar to that in Table 4.8; that is, for CBR, VBR and VTR the actual-preemption is close to expectation.

A high-level conclusion of the results presented in this section is that the actual results closely confirm the predicted beat-down effect for CBR, VBR and VTR traffic. For SVD, the additional over-preemption at the bottleneck links is consistent with the effect of the burstiness of this on-off traffic with high peak-to-mean ratio, seen in other experiments.

5.  Summary of Results

The study presented here demonstrated that, overall, both the admission control and Preemption algorithms of [I-D.briscoe-tsvwg-cl-architecture] work reasonably well and are relatively insensitive to parameter variations. We can summarize the conclusions of the study so far as follows.

5.1.  Summary of Admission Control Results

o  We observed no significant benefit of using "ramp" marking instead of the simpler "step" marking.
o  There appears to be no appreciable sensitivity of the admission algorithm to either the absolute value of the round-trip time or to the relative round-trip times of different flows.

o  As a rule of thumb, the level of bottleneck aggregation necessary for tolerable performance even in the simplest network topology corresponds to links of about 10 Mbps or higher for voice traffic (CBR or VBR with silence compression), assuming at least 50% of the link speed is allocated to the PCN traffic. For the higher-rate bursty SVD flows, 50% of an OC48 or higher appears to be a reasonable rule of thumb. The higher the degree of bottleneck aggregation, the better the performance.

o  Even though larger per-ingress-egress-pair aggregation results in better performance of the admission control algorithm, performance remains reasonable even for very low ingress-egress aggregation levels (i.e., a single bursty SVD flow, or a small number of them, per ingress).

o  Poisson call arrivals have a visible effect on performance at lower levels of aggregation (10 Mbps for voice or lower), but are of less significance at higher levels of aggregation/link speeds.

o  The algorithm is relatively insensitive to variation of the key parameter settings at the internal nodes or the ingresses of the PCN domain, as long as the variations are kept within a reasonable range around "sensible" parameter settings.

o  As expected, the synthetic video traffic (SVD) was the most challenging for all topologies, and the performance with real video traces (VTR) was substantially better. Even for SVD, however, a range of parameters exists for which performance across all experiments considered is within reasonable bounds.

o  No performance degradation was observed in a multi-bottleneck topology where some flows traverse multiple bottlenecks in the presence of cross-traffic on each of the bottleneck links.

5.2.  Summary and Discussion of Pre-emption Results

The simulation results presented in this installment of the study further demonstrate that, at least in the simple one-bottleneck topology, the Preemption mechanism of [I-D.briscoe-tsvwg-cl-architecture] works reasonably well for a wide range of parameters for all the traffic models we considered.

The key thrust of this study was the investigation of how much ingress-egress aggregation is needed for tolerable performance of the algorithm (assuming a sufficient degree of bottleneck aggregation). We demonstrated that, contrary to our expectations, it was not easy to find cases with sufficiently bad performance. We traced some of this better-than-expected performance to the effect of synchronization of the token bucket state for certain combinations of parameter values. The question of whether this synchronization can be exploited to the benefit of the general operation of voice-only PCN regions remains open, but seems of substantial interest. Further investigation with other codecs and in a broader set of network conditions is warranted to address this question.

Our experiments demonstrated that the absolute value of the RTT of the flows sharing the same bottleneck did not have any appreciable effect as long as the RTTs of all flows were the same (or close). However, we have demonstrated that if the RTTs of different flows are substantially different, longer-RTT flows tend to over-preempt, resulting in overall over-preemption as well.
Although a similar effect (referred to as the "beat-down effect" in [I-D.briscoe-tsvwg-cl-architecture]) was theoretically expected in the multi-bottleneck case, the possibility that a form of "beat-down" of long-haul (longer-RTT) flows can occur even in a single-bottleneck case was not previously noticed. On the bright side, at least in the experiments we conducted, the magnitude of this over-preemption was relatively small.

6.  Future work

This draft is but an intermediate step in the investigation of the performance of Admission and Preemption approaches for a PCN region. Many aspects of real networks have not been addressed due to time and resource limitations. These include more general multiple-bottleneck topologies, more sophisticated and/or realistic traffic models and traffic mixes, and many more. These are the subject of ongoing investigation.

7.  IANA Considerations

This document places no requests on IANA.

8.  Security Considerations

There are no new security issues or considerations introduced by this document.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2.  Informative References

   [I-D.briscoe-tsvwg-cl-architecture]
              Briscoe, B., "An edge-to-edge Deployment Model for Pre-
              Congestion Notification: Admission Control over a
              DiffServ Region", draft-briscoe-tsvwg-cl-architecture-04
              (work in progress), October 2006.

   [I-D.briscoe-tsvwg-cl-phb]
              Briscoe, B., "Pre-Congestion Notification marking",
              draft-briscoe-tsvwg-cl-phb-03 (work in progress),
              October 2006.

   [I-D.briscoe-tsvwg-re-ecn-border-cheat]
              Briscoe, B., "Emulating Border Flow Policing using
              Re-ECN on Bulk Data",
              draft-briscoe-tsvwg-re-ecn-border-cheat-01 (work in
              progress), June 2006.

   [I-D.briscoe-tsvwg-re-ecn-tcp]
              Briscoe, B., "Re-ECN: Adding Accountability for Causing
              Congestion to TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-03
              (work in progress), October 2006.

   [I-D.davie-ecn-mpls]
              Davie, B., "Explicit Congestion Marking in MPLS",
              draft-davie-ecn-mpls-01 (work in progress),
              October 2006.

   [I-D.lefaucheur-emergency-rsvp]
              Le Faucheur, F., "RSVP Extensions for Emergency
              Services", draft-lefaucheur-emergency-rsvp-02 (work in
              progress), June 2006.

Authors' Addresses

   Xinyang (Joy) Zhang
   Cisco Systems, Inc. and Cornell University
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: joyzhang@cisco.com

   Anna Charny
   Cisco Systems, Inc.
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: acharny@cisco.com

   Vassilis Liatsos
   Cisco Systems, Inc.
   1414 Mass. Ave.
   Boxborough, MA  01719
   USA

   Email: vliatsos@cisco.com

   Francois Le Faucheur
   Cisco Systems, Inc.
   Village d'Entreprise Green Side-Batiment T3,
   400 Avenue de Roumanille
   06410 Biot Sophia-Antipolis
   France

   Email: flefauch@cisco.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.
   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).