PCN Working Group                                               D. Satoh
Internet-Draft                                               M. Ishizuka
Intended status: Informational                              O. Phanachet
Expires: May 22, 2009                                           Y. Maeda
                                                                  NTT-AT
                                                       November 18, 2008

       Single PCN Threshold Marking by using PCN baseline encoding
               for both admission and termination controls
                      draft-satoh-pcn-st-marking-00

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 22, 2009.

Abstract

   This document proposes an algorithm for marking and metering that
   uses pre-congestion notification (PCN) baseline encoding for both
   flow admission and flow termination.  The proposed algorithm uses
   threshold marking whose TBthreshold.threshold is set smaller than
   the token bucket size by the number of bits of a metered packet.
   Another threshold for the token bucket is required to change the
   Marking.frequency.  In the flow termination state, all packets are
   threshold-marked.  In the admission-stop state, 1/N of the packets
   are threshold-marked when the meter so indicates.  We
   evaluate the performance of the proposed algorithm by simulation.
   Our simulations indicate that its performance is almost the same as
   that of the CL [I-D.briscoe-tsvwg-cl-architecture] algorithm.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Single Threshold-Marking
     3.1.  Operation at the PCN-interior-node
     3.2.  Operation at the PCN-egress-node
     3.3.  Operation at the PCN-ingress-node
       3.3.1.  Admission Decision
       3.3.2.  Flow Termination Decision
   4.  Impact on PCN marking behaviour
   5.  Acknowledgements
   6.  Appendix A: Simulation Setup and Environment
     6.1.  Network and Signaling Models
     6.2.  Traffic Models
     6.3.  Performance Metrics
     6.4.  Parameter setting for STM
     6.5.  Parameter setting for CL
     6.6.  Simulation Environment
   7.  Appendix B: Admission Control Simulation
     7.1.  Parameter setting for admission control
     7.2.  Basic evaluation
     7.3.  Effect of Ingress-Egress Aggregation
       7.3.1.  CBR
       7.3.2.  VBR
       7.3.3.  SVD
     7.4.  Effect of multi-bottleneck
     7.5.  Fairness among different Ingress-Egress pairs
   8.  Appendix C: Flow termination
     8.1.  Basic evaluation
   9.  Appendix D: Another Flow Termination Control
   10. IANA Considerations
   11. Security Considerations
   12. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   Pre-congestion notification (PCN) gives information to support
   admission control and flow termination in order to protect the
   quality of service (QoS) of inelastic flows.  Although several
   algorithms (e.g., [I-D.briscoe-tsvwg-cl-phb],
   [I-D.charny-pcn-single-marking], [I-D.babiarz-pcn-3sm]) have been
   proposed to achieve PCN, only the single marking algorithm (SM)
   [I-D.charny-pcn-single-marking] meets the requirement of baseline
   encoding.

   This document proposes an algorithm for marking and metering that
   uses PCN baseline encoding for both flow admission and flow
   termination.  Our algorithm uses PCN-threshold marking, whereas SM
   uses PCN-excess-traffic marking.  Although our algorithm uses a more
   elaborate mechanism, it can choose the PCN-admissible and
   PCN-supportable rates independently.

2.  Terminology

   The terminology used in this document conforms to the terminology of
   [I-D.ietf-pcn-architecture] and [I-D.ietf-pcn-marking-behaviour].

3.  Single Threshold-Marking

   We propose an algorithm for marking and metering that uses PCN
   baseline encoding for both flow admission and flow termination.
   This marking algorithm uses only the PCN-threshold-rate, as the
   PCN-supportable-rate, and does not use the PCN-excess-rate.
   Fig. 1 is a schematic of how the PCN admission control and flow
   termination mechanisms operate as the rate of PCN-traffic increases,
   for a PCN-domain with three types of ratios of PCN-threshold-marked
   packets and three states of the marking ratio.  As shown in Fig. 1,
   no packets are PCN-marked at rates less than the PCN-admissible-
   rate.  Some packets are PCN-marked at rates between the PCN-
   admissible and PCN-supportable rates, and all packets are PCN-marked
   at rates greater than the PCN-supportable-rate.

   The single threshold-marking algorithm (STM) uses a PCN-threshold-
   rate with a very large TBthreshold.threshold, and one out of every N
   packets is marked (Marking.frequency: 1/N) according to PCN-
   threshold marking, so that some packets are PCN-marked.
   Marking.frequency is the ratio of actual marking to marking
   according to PCN-threshold marking.  Marking with Marking.frequency
   1/N is similar to the N-th packet marking of
   [I-D.westberg-pcn-load-control], although Marking.frequency is
   applied to threshold marking whereas N-th packet marking is applied
   to excess traffic marking.  To achieve having no packets PCN-marked
   below the PCN-admissible-rate, the marking switch is turned off
   below the PCN-admissible-rate.  To achieve having all packets PCN-
   threshold-marked, the proposed algorithm uses PCN-threshold marking
   with another TBthreshold.  In that case the Marking.frequency is 1,
   and the behavior is the same as described in
   [I-D.ietf-pcn-marking-behaviour] when this other TBthreshold is
   regarded as TBthreshold.threshold in
   [I-D.ietf-pcn-marking-behaviour].  We explain this in detail in the
   following subsections.
   PCN traffic
      rate
      100% |
           | Terminate some        All the packets        Marking.
           | admitted flows &      PCN-threshold-marked   frequency: 1
      PCN- | Block new flows
   Support-|
   able    |
      rate +-------------------------------------------------------
     (PCN- |
   threshold-
      rate)|
           | Block new flows       Some packets           Marking.
           |                       PCN-threshold-marked   frequency: 1/N
      PCN- |
   Admissi-|
   ble rate+-------------------------------------------------------
           |
           | Admit new flows       No packets             not defined
           |                       PCN-marked
        0% +-------------------------------------------------------

      Figure 1: Ratio of PCN-threshold-marked packets, control
                operations, and Marking.frequency

3.1.  Operation at the PCN-interior-node

   We assume that a token bucket mechanism is used; other
   implementations are possible.  We explain how marking is done by
   using two token buckets in cooperation.  The first token bucket is
   not used for marking but as a marking switch.  The second token
   bucket is used for marking.  When the marking switch is OFF, no
   packet is marked.  When the switch is ON, some or all packets are
   marked according to the second token bucket.

   We explain the first token bucket in detail.  Tokens of the first
   token bucket are added at the PCN-admissible-rate, and tokens equal
   to the size in bits of the metered packet are removed.  These
   additions and removals are independent of the second token bucket.
   If the metered traffic is sustained at a level greater than the
   PCN-admissible-rate, the marking switch is set to ON.  However, a
   short burst of traffic at greater than the PCN-admissible-rate may
   not trigger the switch.  If the burst is sufficiently long that the
   amount of tokens in the first token bucket falls below its threshold
   (TB1.threshold), the marking switch is set to ON.  Otherwise, the
   marking switch is OFF.
   The behavior of the first token bucket is similar to that of
   threshold-marking, although there is a difference between actually
   marking packets and turning the marking switch ON.  When the marking
   switch is ON, packets are PCN-threshold-marked according to the
   meter based on the second token bucket.  When the marking switch is
   OFF, no packet is marked, regardless of the meter of the second
   token bucket.

   We explain the second token bucket in detail.  Regardless of whether
   the marking switch is ON or OFF, tokens of the second token bucket
   are added at the PCN-threshold-rate, which is the PCN-supportable-
   rate, and tokens equal to the size in bits of the metered packet are
   removed.  These additions and removals are independent of the first
   token bucket.  The TBthreshold.threshold should be set smaller than
   the token bucket size by the number of bits of a metered packet,
   although this is not the *intermediate* depth described in
   [I-D.ietf-pcn-marking-behaviour].  This is the same as the second
   token bucket size before the removal of tokens for the arrived
   packet.

   If the marking switch is ON and Marking.frequency is 1, an arrived
   packet is marked.  If the marking switch is ON and Marking.frequency
   is 1/N, 1/N of arrived packets are marked.  The Marking.frequency
   has two values: 1 and 1/N.  If the amount of tokens in the second
   token bucket is not more than TB2.threshold, the Marking.frequency
   is 1.  Otherwise, it is 1/N.  TB2.threshold is set at a deep depth
   of the second token bucket.  The value of N MUST be the same
   throughout the PCN-domain.  This PCN-domain-wide constant does not
   impose any constraint except being a PCN-domain-wide constant.

   Note that marking by metering whether the amount of tokens in the
   second token bucket is less than the TBthreshold.threshold gives a
   good approximation of the utilization of a single link.  If packets
   arrive according to a Poisson process, the marking ratio equals the
   utilization of a single link [Kleinrock].
   Furthermore, if an arrival process is a superposition of renewal
   processes, the arrival process approaches a Poisson process as the
   number of superposed processes approaches infinity [Cox].
   Therefore, when the number of superposed renewal processes is
   sufficiently large, this metering gives a good approximation of the
   utilization of a single link.

   One out of every N arrived packets is PCN-threshold-marked when the
   marking switch is ON and the Marking.frequency is 1/N, that is, when
   the amount of tokens in the second token bucket is less than the
   TBthreshold.threshold.  If the amount of tokens in the second token
   bucket is less than a configured intermediate depth, termed
   TB2.threshold, the Marking.frequency is changed from 1/N to 1.  The
   Marking.frequency is changed from 1 to 1/N when the amount of tokens
   in the second token bucket is larger than TB2.threshold.  This
   marking is expressed by the following pseudo code:

   Input: PCN packet
   (markCnt persists between packets; Marking.frequency is 1/N
    initially)

   FOR i = 1 to 2
       TBi.fill = min(TBi.size,
                      TBi.fill + TBi.rate*(now - TBi.lastUpdate));
       TBi.lastUpdate = now;  // add tokens to the two token buckets
       TBi.fill = max(TBi.fill - packet.size, 0);
                              // remove tokens from each token bucket
   ENDFOR
   IF TB1.fill <= TB1.threshold THEN           // marking switch is ON
       IF TB2.fill < TB2threshold.threshold THEN
           IF TB2.fill > TB2.threshold THEN    // Marking.frequency = 1/N
               markCnt++;
               IF mod(markCnt, N) == 0 THEN
                   markCnt = 0;
                   packet.mark = TM;
               ENDIF
           ELSE                                // Marking.frequency = 1
               markCnt = 0;
               packet.mark = TM;
           ENDIF
       ENDIF
   ELSE                                        // marking switch is OFF
   ENDIF
   Output: void

   where TB2.fill represents TBthreshold.fill in
   [I-D.ietf-pcn-marking-behaviour], TB2.size represents
   TBthreshold.max, TB2.rate represents the PCN-threshold-rate (as the
   PCN-supportable-rate), TB2.lastUpdate represents
   TBthreshold.lastUpdate, and TB2threshold.threshold represents
   TBthreshold.threshold in [I-D.ietf-pcn-marking-behaviour].
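   The pseudo code above can also be expressed as a runnable sketch.
   The following Python version is illustrative only: the class names,
   field names, and float-based token accounting are our own
   simplifications, not part of the specified marking behaviour.

   ```python
   from dataclasses import dataclass

   @dataclass
   class TokenBucket:
       size: float         # bucket size in bits (TBi.size)
       rate: float         # token fill rate in bits/sec (TBi.rate)
       threshold: float    # depth threshold in bits (TBi.threshold)
       fill: float         # current amount of tokens (TBi.fill)
       last_update: float = 0.0

       def meter(self, now: float, packet_bits: float) -> None:
           # Add tokens at the configured rate, capped at the bucket
           # size, then remove tokens equal to the packet's size.
           self.fill = min(self.size,
                           self.fill + self.rate * (now - self.last_update))
           self.last_update = now
           self.fill = max(self.fill - packet_bits, 0.0)

   class STMMarker:
       """Single threshold-marking: TB1 acts as the marking switch
       (filled at the PCN-admissible-rate), TB2 drives the actual
       marking (filled at the PCN-supportable-rate)."""

       def __init__(self, tb1: TokenBucket, tb2: TokenBucket,
                    tb2_marking_threshold: float, n: int):
           self.tb1 = tb1
           self.tb2 = tb2
           # TB2threshold.threshold: bucket size minus one packet's bits
           self.tb2_marking_threshold = tb2_marking_threshold
           self.n = n            # denominator of Marking.frequency 1/N
           self.mark_cnt = 0

       def mark(self, now: float, packet_bits: float) -> bool:
           """Return True if this packet is PCN-threshold-marked."""
           self.tb1.meter(now, packet_bits)
           self.tb2.meter(now, packet_bits)
           if self.tb1.fill > self.tb1.threshold:
               return False                       # marking switch OFF
           if self.tb2.fill >= self.tb2_marking_threshold:
               return False    # not depleted below TB2threshold.threshold
           if self.tb2.fill > self.tb2.threshold:  # Marking.frequency 1/N
               self.mark_cnt += 1
               if self.mark_cnt % self.n == 0:
                   self.mark_cnt = 0
                   return True
               return False
           self.mark_cnt = 0                       # Marking.frequency 1
           return True
   ```

   With the marking switch forced ON and TB2 held in its intermediate
   region, every N-th call to mark() returns True.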
   TB2.threshold represents the threshold at which the
   Marking.frequency is changed from 1/N to 1 or from 1 to 1/N, as
   explained above; it is not defined in
   [I-D.ietf-pcn-marking-behaviour].  Threshold-marked (TM) denotes a
   packet PCN-threshold-marked according to the PCN-supportable-rate.

3.2.  Operation at the PCN-egress-node

   A PCN-egress-node measures the ratio of PCN-threshold-marked packets
   on a per-ingress basis and reports one value to the PCN-ingress-
   node.  The value is the Congestion Level Estimate (CLE), which is
   the fraction of marked traffic received from the ingress node and is
   exactly the same as that of CL, SM, and the three-state PCN marking
   algorithm (3sM).

   Furthermore, the PCN-egress-node watches for L sequentially marked
   packets per PCN-ingress-node.  When the PCN-egress-node receives L
   sequentially marked packets from a PCN-ingress-node, it starts
   measuring the receiving rate for this PCN-ingress-node.  L
   sequentially marked packets received during the measurement interval
   are ignored.  The PCN-egress-node sends a packet with information
   about the receiving rate to the PCN-ingress-node after the
   measurement interval.  After that, the PCN-egress-node starts
   measuring the receiving rate again if it receives L sequentially
   marked packets.

3.3.  Operation at the PCN-ingress-node

3.3.1.  Admission Decision

   Just as in CL and SM, the admission decision is based on the CLE.
   The ingress node stops admission of new flows if the CLE is greater
   than a predefined threshold (CLE threshold).  The CLE threshold is
   chosen below the following maximum value.  The maximum of the CLE
   threshold MUST be

      CLE threshold = (1/N)*rho ,                                  (1)

   where rho represents the minimum ratio between the PCN-admissible-
   rate and PCN-supportable-rate over all the links between the PCN-
   ingress and egress nodes, and N is the denominator of
   Marking.frequency 1/N.
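   As a numeric illustration of Eq. (1), the sketch below uses the
   rates adopted later in Appendix A (PCN-admissible-rate = half the
   link speed, PCN-supportable-rate = 90% of it); these values are
   simulation choices, not mandated by the algorithm, and the function
   name is ours.

   ```python
   def max_cle_threshold(n: int, admissible_rate: float,
                         supportable_rate: float) -> float:
       """Maximum CLE threshold from Eq. (1): (1/N) * rho, where rho
       is the ratio of PCN-admissible-rate to PCN-supportable-rate
       (the minimum over the path; a single bottleneck is assumed)."""
       rho = admissible_rate / supportable_rate
       return rho / n

   # With N = 3, admissible = 0.5 * link speed, supportable = 0.9 *
   # link speed: rho = 5/9, so the CLE threshold must not exceed
   # about 0.185.  The value 0.05 used in the simulations is below it.
   limit = max_cle_threshold(3, 0.5, 0.9)
   print(round(limit, 3))   # prints 0.185
   ```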
   Note that the marking ratio approaches the maximum CLE threshold
   described above when the load at a bottleneck link between the PCN-
   ingress and egress nodes equals the PCN-admissible-rate, as the
   number of superposed flows approaches infinity, as described in
   Section 3.1.

3.3.2.  Flow Termination Decision

   When a PCN-ingress-node receives a packet with information about a
   receiving rate from a PCN-egress-node, it immediately starts
   measuring its sending rate toward this PCN-egress-node.  If the PCN-
   ingress-node receives a new receiving rate from the same PCN-egress-
   node during the measurement interval, the new rate is ignored.  At
   the end of the measurement interval, the PCN-ingress-node terminates
   flows whose total bandwidth equals

      sending rate - (1-y)*receiving rate.

   (For example, with y = 0.05, a sending rate of 100 Mbps, and a
   receiving rate of 80 Mbps, flows totaling 100 - 0.95*80 = 24 Mbps
   are terminated.)  The value of y is small enough that the resulting
   over-termination is tolerable.  This termination takes more time
   when the PCN-supportable-rate is much less than the physical rate,
   because the sending rate is then almost the same as the receiving
   rate even under congestion.

4.  Impact on PCN marking behaviour

   The goal of this section is to propose several minor changes to the
   PCN-marking-behaviour framework as currently described in
   [I-D.ietf-pcn-marking-behaviour] in order to enable the single
   threshold-marking approach.  Threshold marking in this draft
   conforms to [I-D.ietf-pcn-marking-behaviour] except for the
   "intermediate depth".  We propose the addition of two marking
   parameters: Marking.frequency, the ratio between actual marking and
   marking according to PCN-threshold marking, and TB2.threshold, the
   token bucket depth at which Marking.frequency changes.

5.  Acknowledgements

   This research was partially supported by the National Institute of
   Information and Communications Technology (NICT), Tokyo, Japan.  We
   thank Mr. Takayuki Uchida of U-software Corporation for his help
   with the simulations and his technical advice, and Professor Shigeo
   Shioda of Chiba University for fruitful discussions.
6.  Appendix A: Simulation Setup and Environment

6.1.  Network and Signaling Models

   We used the three types of network topologies shown in the figures
   below for the simulations.  They are the same as those in
   [I-D.charny-pcn-single-marking], and the first two are the same as
   those in [I-D.briscoe-tsvwg-cl-phb].  The first type of network
   topology is a single link, as shown in Fig. A.1.  The second type is
   a multi-link network with a single bottleneck (termed "RTT"), as
   shown in Fig. A.2.  The third type is a range of multi-bottleneck
   topologies shown in Fig. A.3 (termed "Parking Lot").

   A single link between an ingress and an egress node is shown in
   Fig. A.1, where all flows enter at node A and depart from node B.
   This topology is used for the basic verification of the behavior of
   the algorithms with respect to a single ingress-egress aggregate
   (IEA) in isolation.

   A --- B

   Figure A.1: Simulated Single-Link Network.

   A
    \
   B - D - F
    /
   C

   Figure A.2: Simulated Multi-Link Network.

   A--B--C      A--B--C--D      A--B--C--D--E--F
   |  |  |      |  |  |  |      |  |  |  |  |  |
   D  E  F      E  F  G  H      G  H  I  J  K  L
     (a)            (b)               (c)

   Figure A.3: Simulated Parking Lot Networks.

   As shown in Fig. A.2, a set of ingresses (A, B, C) is connected to
   an interior node in the network (D).  This topology is used to study
   the behavior of the algorithm when many IEAs share a single
   bottleneck link.  The number of ingresses varied in different
   simulation experiments in the range of 10-1000.  All links generally
   have different propagation delays, so the propagation delays were
   chosen randomly in the range of 1 ms - 100 ms (although in some
   experiments all propagation delays are set the same).  Node D in
   turn is connected to the egress (F).  In this topology, different
   sets of flows between each ingress and the egress converge on a
   single link D-F, where the PCN algorithm is enabled.  The capacities
   of the ingress links are not limiting, and hence no PCN is enabled
   on them.
   The bottleneck link D-F is modeled with a 10 ms propagation delay in
   all simulations.  Therefore, the range of round-trip delays in the
   experiments is from 40 ms to 220 ms.

   Another type of network of interest is the parking lot (PLT)
   topology, which has multiple bottleneck links.  The simplest PLT,
   with 2 bottlenecks, is illustrated in Fig. A.3(a).  The traffic
   matrix on this topology is as follows:

   o  an aggregate of "2-hop" flows entering the network at A and
      leaving at C (via the two links A-B and B-C)

   o  an aggregate of "1-hop" flows entering the network at D and
      leaving at E (via A-B)

   o  an aggregate of "1-hop" flows entering the network at E and
      leaving at F (via B-C).

   In the 2-hop PLT shown in Fig. A.3(a), the points of congestion are
   links A-B and B-C.  The capacity of all other links is not limited.
   We also experimented with larger PLT topologies with 3 bottlenecks
   (see Fig. A.3(b)) and 5 bottlenecks (Fig. A.3(c)).  In all cases, we
   simulated one ingress-egress pair that carries the aggregate of
   "long" flows traversing all N bottlenecks (where N is the number of
   bottleneck links in the PLT topology), and N ingress-egress pairs
   that carry flows traversing a single bottleneck link and exiting at
   the next "hop".  In all cases, only the "horizontal" links in
   Fig. A.3 were the bottlenecks, and the capacities of all "vertical"
   links were non-limiting.  Propagation delays for all links in all
   PLT topologies are set to 1 ms.

   Our simulations concentrated primarily on the range of capacities of
   "bottleneck" links with sufficient aggregation - above 128 Mbps for
   voice and 2.4 Gbps for SVD.  It should generally be expected that
   higher link speeds will result in higher levels of aggregation, and
   hence generally better performance of the measurement-based
   algorithms.  Therefore it seems reasonable to believe that the
   studied link speeds provide meaningful evaluation targets.
   In the simulation model, a call request arrives at the ingress and
   immediately sends a message to the egress.  The message arrives at
   the egress after the propagation time plus link processing time (but
   no queuing delay).  When the egress receives this message, it
   immediately responds to the ingress with the current CLE.  If the
   CLE is below the specified CLE threshold, the call is admitted;
   otherwise it is rejected.  An admitted call sends packets according
   to one of the chosen traffic models for the duration of the call
   (see the next section).  Propagation delay from the source to the
   ingress and from the destination to the egress is assumed to be
   negligible and is not modeled.

   The Admissible and Supportable rates are half the link speed and 90%
   of the link speed, respectively, on all links.  The actual queue
   size is 80,000 packets; it is specified in packets rather than bytes
   because the NS2 simulator sets queue sizes only in packets.

6.2.  Traffic Models

   We use the same types of traffic as [I-D.briscoe-tsvwg-cl-phb] and
   [I-D.charny-pcn-single-marking].  These are CBR voice, on-off
   traffic approximating voice with silence compression (termed "VBR"),
   and on-off traffic with higher peak and mean rates (termed
   "Synthetic Video" (SVD)).  On-off traffic (VBR and SVD) is described
   by a two-state Markov chain, and the on and off periods are
   exponentially distributed with the specified means.  Flows arrive
   according to a Poisson process, and the flow duration is
   exponentially distributed with a mean of 1 min regardless of the
   traffic type, the same as in [I-D.briscoe-tsvwg-cl-phb] and
   [I-D.charny-pcn-single-marking].
   Traffic parameters for each type are summarized below:

   CBR voice

   o  Packet length: 160 bytes

   o  Packet inter-arrival time: 20 ms ((160*8)/(64*1000) sec)

   o  Average rate: 64 Kbps

   On-off traffic approximating voice with silence compression (VBR)

   o  Packet length: 160 bytes

   o  Packet inter-arrival time during on period: 20 ms

   o  Long-term average rate: 21.76 Kbps (64 Kbps * 340/(340+660))

   o  On period mean duration: 340 ms; during the on period traffic is
      sent with the CBR voice parameters described above

   o  Off period mean duration: 660 ms; no traffic is sent for the
      duration of the off period

   SVD

   o  Packet length: 1500 bytes

   o  Packet inter-arrival time during on period: 1 ms

   o  Long-term average rate: 4 Mbps

   o  On period mean duration: 340 ms

   o  Off period mean duration: 660 ms

6.3.  Performance Metrics

   In all our experiments, we use the percent deviation of the mean
   rate achieved in the experiment from the expected load level as the
   performance metric.  We term these the "over-admission" and
   "over-termination" percentages, depending on the type of the
   experiment.

   In our admission control experiments, we compare the actual achieved
   average throughput to the desired traffic load (the admissible rate)
   and measure the standard deviation of the throughput, sampled at
   500 ms intervals, from the desired traffic load.  The desired
   traffic load is not exactly the admissible rate, because a signaling
   packet is sent for each new flow, including flows that are not
   admitted.  Therefore, the desired traffic load is the admissible
   rate less the load of these signaling packets.
6.4.  Parameter setting for STM

   All the simulations were run with the following values:

   o  TB1.size = 20 ms at the Admissible rate,

   o  TB1.threshold = 10 ms at the Admissible rate,

   o  TB2.size = 10 ms at the Supportable rate,

   o  TB2.threshold = 8 ms at the Supportable rate,

   o  Marking.frequency = 1/3 and 1,

   o  CLE threshold = 0.05 (less than Marking.frequency times the ratio
      between the admissible and supportable rates),

   o  CLE weight = 0.0005,

   o  Number of sequentially marked packets for termination = 100,

   o  Extra percentage of the receiving rate, beyond the difference
      between the sending and receiving rates, terminated at one time
      = 5.0,

   o  Interval for measuring the sending rate or receiving rate =
      100 ms.

6.5.  Parameter setting for CL

   All the simulations were run with a token bucket size for
   Termination of 5 ms at the Supportable rate and the following
   virtual queue thresholds:

   o  min-marking-threshold: 5 ms at link speed,

   o  max-marking-threshold: 15 ms at link speed,

   o  virtual-queue-upper-limit: 20 ms at link speed.

   The virtual-queue-upper-limit puts an upper bound on how much the
   virtual queue can grow.  In the admission control simulations, the
   CLE threshold and CLE weight are chosen as 0.05 and 0.3,
   respectively.

6.6.  Simulation Environment

   We used NS2 for our simulation experiments.  Simulations were run on
   Red Hat Enterprise Linux (64 bit) on a Dell PowerEdge 1950 with an
   Intel quad-core Xeon 2.66 GHz and 32 GB RAM.

7.  Appendix B: Admission Control Simulation

7.1.  Parameter setting for admission control

   We evaluate over-admission percentages when the load of the
   bottleneck is five times the Admissible rate, that is, 2.5 times the
   link speed.  The simulation time is 300 sec, and simulation results
   during the time interval between 200 sec and 300 sec are used for
   the performance metric, to obtain results in the steady state.
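   The token bucket parameters of Sections 6.4 and 6.5 are expressed
   as times at a reference rate; the corresponding sizes in bits
   follow directly.  The sketch below assumes the 128 Mbps CBR
   bottleneck used in Appendix B, with the Admissible/Supportable
   rates of half and 90% of link speed from Section 6.1; the helper
   name is our own.

   ```python
   def bits_for(ms: float, rate_bps: float) -> float:
       """Convert a parameter given as 'X ms at rate R' into bits."""
       return rate_bps * ms / 1000.0

   link = 128e6              # 128 Mbps CBR bottleneck (Appendix B)
   admissible = 0.5 * link   # half the link speed
   supportable = 0.9 * link  # 90% of the link speed

   tb1_size      = bits_for(20, admissible)   # TB1.size      -> 1,280,000 bits
   tb1_threshold = bits_for(10, admissible)   # TB1.threshold ->   640,000 bits
   tb2_size      = bits_for(10, supportable)  # TB2.size      -> 1,152,000 bits
   tb2_threshold = bits_for(8,  supportable)  # TB2.threshold ->   921,600 bits
   ```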
   Egress measurement parameters for STM: In our simulations, the CLE
   is computed as an exponentially weighted moving average (EWMA) with
   a weight of 0.0005.  The CLE is computed on a per-packet basis.

   Egress measurement parameters for CL: In our simulations, the CLE
   threshold was chosen as 0.05.  The CLE is computed as an
   exponentially weighted moving average (EWMA) with a weight of 0.3.
   The CLE is computed on a per-packet basis.  These values are the
   same as those in Sect. 8.2.3 of [I-D.charny-pcn-single-marking].

7.2.  Basic evaluation

   Over-admission percentage statistics are evaluated using the Single
   Link topology.  Table B.1 gives the results of an admission control
   simulation when the load is five times the Admissible rate.  When
   the traffic type is CBR, the link speed is 128 Mbps and the load is
   the rate of 5000 flows; the call interval per Ingress-Egress
   Aggregate (IEA) is 0.012 sec.  When the traffic type is VBR, the
   link speed is 78.3 Mbps and the load is the rate of 9000 flows; the
   call interval per IEA is 0.0067 sec.  When the traffic type is SVD,
   the link speed is 2.45 Gbps and the load is the rate of 1500 flows;
   the call interval per IEA is 0.04 sec.  We show the average of the
   results of five simulations with different random seeds for each
   traffic type.  The performance of STM is very good, although it is
   inferior to that of CL.
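   The per-packet EWMA used for the CLE in both STM and CL can be
   sketched as follows (the function name is ours; 0.0005 and 0.3 are
   the STM and CL weights given above):

   ```python
   def update_cle(cle: float, packet_marked: bool, weight: float) -> float:
       """Per-packet EWMA of the marked fraction of traffic."""
       return (1.0 - weight) * cle + weight * (1.0 if packet_marked else 0.0)

   # A sustained run of marked packets drives the CLE toward 1; the
   # small STM weight (0.0005) reacts far more slowly than CL's 0.3.
   cle = 0.0
   for _ in range(10):
       cle = update_cle(cle, True, 0.3)
   # cle is now 1 - 0.7**10, roughly 0.972
   ```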
   -------------------------------------------------
   Type |  Over Admission %  | Standard deviation %
        |-------------------------------------------
        |   STM   |   CL     |   STM   |   CL
   -------------------------------------------------
   CBR  |  0.285  |  0.028   |  1.243  |  0.818
   -------------------------------------------------
   VBR  |  1.017  |  0.979   |  2.761  |  2.587
   -------------------------------------------------
   SVD  |  4.308  |  4.476   |  6.002  |  5.922
   -------------------------------------------------

   Table B.1: Over admission percentage and standard deviation
              statistics obtained with Single Link

7.3.  Effect of Ingress-Egress Aggregation

   We evaluated the effect of Ingress-Egress Aggregation using the RTT
   topology.  As in the simulations of [I-D.charny-pcn-single-marking],
   the aggregate load on the bottleneck is the same for each traffic
   type, with the aggregate load evenly divided among all ingresses.

7.3.1.  CBR

   Table B.2 shows simulation results under this traffic load condition
   when the traffic type is CBR.  In this case, the link speed of the
   bottleneck is 128 Mbps for all numbers of ingresses.
   ---------------------------------------------------------------------
   No. of   |No. of     |Call     |Over Admission  |Standard deviation
   ingresses|connections|Interval |      (%)       |        (%)
            |per IE pair|per IEA  |------------------------------------
            |           |(sec)    |  STM   |  CL   |   STM   |   CL
   ---------------------------------------------------------------------
      10    |    500    |  0.12   |  0.585 | 0.761 |  2.255  |  1.745
   ---------------------------------------------------------------------
      70    |     71    |  0.84   | -1.656 | 0.727 |  2.628  |  1.751
   ---------------------------------------------------------------------
     600    |      8    |  7.20   | -5.218 | 0.038 |  4.114  |  1.369
   ---------------------------------------------------------------------
    1000    |      5    | 12.0    | -6.030 |-0.179 |  4.342  |  1.262
   ---------------------------------------------------------------------

   Table B.2: Over admission percentage statistics with CBR, RTT

7.3.2.  VBR

   Table B.3 shows simulation results under this traffic load condition
   when the traffic type is VBR.  In this case, the link speed of the
   bottleneck is 78 Mbps for all numbers of ingresses.

   ---------------------------------------------------------------------
   No. of   |No. of     |Call     |Over Admission  |Standard deviation
   ingresses|connections|Interval |      (%)       |        (%)
            |per IE pair|per IEA  |------------------------------------
            |           |(sec)    |  STM   |  CL   |   STM   |   CL
   ---------------------------------------------------------------------
      10    |    900    |  0.07   | -0.753 | 1.345 |  3.381  |  3.102
   ---------------------------------------------------------------------
      70    |    128    |  0.47   | -3.249 | 1.462 |  4.034  |  3.130
   ---------------------------------------------------------------------
     600    |     15    |  4.00   | -5.427 | 0.699 |  4.954  |  3.051
   ---------------------------------------------------------------------
    1000    |      5    | 12.0    | -5.870 |-0.077 |  4.395  |  3.001
   ---------------------------------------------------------------------

   Table B.3: Over admission percentage statistics with VBR, RTT

7.3.3.  SVD

   Table B.4 shows simulation results under this traffic load condition
   when the traffic type is SVD.
   In this case, the link speed of the bottleneck is 2448 Mbps for all
   numbers of ingresses.

   ---------------------------------------------------------------------
   No. of   |No. of     |Call     |Over Admission  |Standard deviation
   ingresses|connections|Interval |      (%)       |        (%)
            |per IE pair|per IEA  |------------------------------------
            |           |(sec)    |  STM   |  CL   |   STM   |   CL
   ---------------------------------------------------------------------
      10    |    150    |  0.4    |  1.593 | 4.870 |  6.627  |  6.272
   ---------------------------------------------------------------------
      70    |     42    |  1.4    | -1.981 | 4.898 |  6.624  |  6.455
   ---------------------------------------------------------------------
     600    |     10    |  5.6    | -5.502 | 2.597 |  6.859  |  6.224
   ---------------------------------------------------------------------
    1000    |      5    | 12.0    | -7.192 | 0.221 |  6.644  |  6.431
   ---------------------------------------------------------------------

   Table B.4: Over admission percentage statistics with SVD, RTT

7.4.  Effect of multi-bottleneck

   We evaluated the effect of multiple bottlenecks with the PLT
   topology.  The over-admission percentage of each bottleneck ID is
   shown in Table B.5, and the standard deviation of over admission
   with multiple bottlenecks of each bottleneck ID is shown in
   Table B.6.  When CBR was used as the traffic type, the link speed of
   all the bottlenecks is 128 Mbps and the call interval per IEA is
   0.024 sec.  When VBR was used, the link speed of all the bottlenecks
   is 78 Mbps and the call interval per IEA is 0.01 sec.  When SVD was
   used, the link speed of all the bottlenecks is 2448 Mbps and the
   call interval per IEA is 0.08 sec.
-------------------------------------------------------------
Traffic |Algorithm|            Bottleneck LinkId
Type    |         |   1   |   2   |   3   |   4   |   5
-------------------------------------------------------------
CBR     |   STM   | 1.587 | 1.589 | 1.617 | 1.565 | 1.694
        |   CL    | 0.019 | 0.025 | 0.015 | 0.005 | 0.009
-------------------------------------------------------------
VBR     |   STM   | 2.567 | 2.320 | 2.168 | 2.269 | 2.341
        |   CL    | 0.560 | 0.540 | 0.585 | 0.614 | 0.620
-------------------------------------------------------------
SVD     |   STM   | 2.541 | 2.279 | 2.537 | 2.297 | 2.594
        |   CL    | 1.887 | 1.908 | 1.952 | 2.123 | 2.382
-------------------------------------------------------------

    Table B.5: Over admission percentage with multibottleneck

-------------------------------------------------------------
Traffic |Algorithm|            Bottleneck LinkId
Type    |         |   1   |   2   |   3   |   4   |   5
-------------------------------------------------------------
CBR     |   STM   | 2.270 | 2.245 | 2.329 | 2.293 | 2.287
        |   CL    | 1.002 | 1.105 | 1.036 | 0.975 | 0.945
-------------------------------------------------------------
VBR     |   STM   | 5.293 | 5.343 | 5.019 | 5.256 | 5.312
        |   CL    | 2.385 | 2.350 | 2.423 | 2.387 | 2.476
-------------------------------------------------------------
SVD     |   STM   | 6.474 | 6.772 | 6.243 | 6.125 | 6.524
        |   CL    | 6.042 | 5.551 | 5.844 | 6.030 | 6.021
-------------------------------------------------------------

       Table B.6: Standard deviation of over admission
                  with multibottleneck

7.5.  Fairness among different Ingress-Egress pairs

In [I-D.charny-pcn-single-marking], fairness is illustrated using
the ratio between the bandwidth of the long-haul aggregates and
that of the short-haul aggregates.  We used CBR traffic, with the
load of each bottleneck set to five times the admissible rate.
Table B.7 summarizes the fairness results for the different
topologies.
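As a minimal illustration of the fairness metric described above
(the ratio between the throughput of the short-haul aggregates and
that of the long-haul aggregates), the following Python sketch
computes the ratio from measured byte counts.  The function name and
sample values are our own illustration, not part of the simulation
code.

```python
def fairness_ratio(short_haul_bytes, long_haul_bytes, interval_sec):
    """Ratio of average short-haul throughput to average long-haul
    throughput over a measurement interval.  A value of 1.0 means
    the two aggregates share the bottleneck bandwidth equally;
    larger values mean the short-haul aggregates are favoured."""
    short_tput = short_haul_bytes / interval_sec  # bytes per second
    long_tput = long_haul_bytes / interval_sec
    return short_tput / long_tput

# Illustrative example: the short-haul aggregates carried 25% more
# bytes than the long-haul aggregates over a 10-second interval.
ratio = fairness_ratio(1.25e9, 1.0e9, 10.0)   # -> 1.25
```

A growing ratio over simulation time, as in Table B.7, indicates
that short-haul aggregates progressively crowd out long-haul ones.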
We measured the ratio of the average throughput of the short-haul
aggregates to that of the long-haul aggregates at the first
bottleneck during the simulation time, and we report the average
over the five simulation runs.  The same link speed and
call-interval conditions as in Table B.5 were used in this
experiment.

-----------------------------------------------------------------
    |Topo|                 Simulation Time (s)
    |    |  10  |  20  |  30  |  40  |  50  |  60  |  70  |  80
-----------------------------------------------------------------
STM |PLT2| 1.00 | 1.02 | 1.06 | 1.12 | 1.19 | 1.25 | 1.32 | 1.39
CL  |    | 1.02 | 1.04 | 1.08 | 1.15 | 1.22 | 1.30 | 1.38 | 1.46
-----------------------------------------------------------------
STM |PLT3| 1.02 | 1.02 | 1.11 | 1.23 | 1.36 | 1.49 | 1.62 | 1.74
CL  |    | 0.98 | 1.00 | 1.05 | 1.12 | 1.21 | 1.30 | 1.41 | 1.52
-----------------------------------------------------------------
STM |PLT5| 1.07 | 1.22 | 1.39 | 1.57 | 1.77 | 1.98 | 2.20 | 2.42
CL  |    | 1.02 | 1.05 | 1.14 | 1.28 | 1.43 | 1.59 | 1.75 | 1.93
-----------------------------------------------------------------

                Table B.7: Fairness performance

8.  Appendix C: Flow termination

We evaluated over-termination percentages when the load of the
bottleneck is 1.0, 1.5, and 3.0 times the link speed.  The load was
lower than the admissible rate at the beginning of the simulation.
The simulation time is 100 sec, and the load was changed at half
the simulation time, that is, at 50 sec.  Simulation results during
the time interval between 70 sec and 100 sec are used for the
performance metrics, so that the results reflect the steady state.
The rate of one IE pair was increased from 0 to a load of 1.0,
1.5, or 3.0 times the link speed at the bottleneck links, with no
admission control, within 2.0 sec.  This simulates a route change
after a failure: when a failure happens, a new Ingress-Egress pair
is generated, and its traffic is regarded as the traffic added by
the route change.
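The failure scenario above ramps up the new Ingress-Egress pair's
traffic within a 2.0-sec window.  A minimal sketch of such a flow
generator, assuming Poisson flow arrivals over the ramp window (the
function and variable names are illustrative, not the simulator's
actual code):

```python
import random

def generate_failure_flows(num_flows, ramp_sec=2.0, seed=None):
    """Return arrival times for the flows of a new Ingress-Egress
    pair after a simulated failure.  Arrivals follow a Poisson
    process whose rate is the number of flows divided by the ramp
    window; generation stops once num_flows flows exist, and the
    flows themselves never terminate (infinite duration)."""
    rng = random.Random(seed)
    rate = num_flows / ramp_sec        # mean arrivals per second
    arrivals, t = [], 0.0
    while len(arrivals) < num_flows:
        t += rng.expovariate(rate)     # exponential inter-arrival gap
        arrivals.append(t)
    return arrivals

# Example: 633 flows (the CBR load equal to 1.0 x link speed in
# Appendix C) arriving at an average rate of 633 / 2.0 per second.
times = generate_failure_flows(633, seed=1)
```

On average the last arrival falls near the end of the ramp window,
reproducing the sudden load increase used in the simulations.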
The flows of the new Ingress-Egress pair were generated according
to a Poisson process whose rate was the number of flows to be
generated divided by 2 sec.  Flow generation stopped once the
aggregate bandwidth of the generated flows equaled the load.  The
duration of all the flows is infinite.

8.1.  Basic evaluation

Over-termination percentages are evaluated with the Single Link
topology.  Table C.1 gives the results of a termination control
simulation when the load is 1.0, 1.5, and 3.0 times the link
speed.  When the traffic type is CBR, the link speed is 40.5 Mbps
and the load is the rate of 633, 950, and 1900 flows.  When the
traffic type is VBR, the link speed is 41.1 Mbps and the load is
the rate of 1886, 2830, and 5660 flows.  When the traffic type is
SVD, the link speed is 613 Mbps and the load is the rate of 153,
230, and 460 flows.  We show the average of the results of five
simulations with different random seeds for each traffic type.

------------------------------------------------------
Traffic | Load   |        Over Termination %
Type    |(x Link |----------------------------------
        | speed) |      STM      |       CL
------------------------------------------------------
CBR     |        |      5.54     |      3.57
VBR     |  1.0   |     17.66     |     29.03
SVD     |        |     17.59     |     21.87
------------------------------------------------------
CBR     |        |      4.60     |     10.07
VBR     |  1.5   |     28.34     |     47.95
SVD     |        |     33.43     |     46.26
------------------------------------------------------
CBR     |        |      3.86     |     21.02
VBR     |  3.0   |     30.82     |     48.65
SVD     |        |     38.21     |     56.55
------------------------------------------------------

      Table C.1: Over termination percentage statistics
                 obtained with Single Link

9.  Appendix D: Another Flow Termination Control

When the PCN-supportable rate is much less than the physical rate,
the flow termination control described in Sect. ?? takes a long
time to complete.  We therefore describe another flow termination
control.  A PCN-ingress node continuously measures the sending rate
towards each PCN-egress node.
If a PCN-egress node receives L sequentially marked packets, it
sends a termination message to the PCN-ingress node.  When the
PCN-ingress node receives the message, it terminates flows
corresponding to x % of the sending rate.  The PCN-ingress node
ignores further messages during a time interval after a
termination.  The value of x is small and the time interval is
short.

10.  IANA Considerations

TBD

11.  Security Considerations

TBD

12.  Informative References

[I-D.babiarz-pcn-3sm]
           Babiarz, J., "Three State PCN Marking", November 2007.

[I-D.briscoe-tsvwg-cl-phb]
           Briscoe, B., "Pre-Congestion Notification Marking",
           October 2006.

[I-D.charny-pcn-single-marking]
           Charny, A., "Pre-Congestion Notification Using Single
           Marking for Admission and Termination", November 2007.

[I-D.ietf-pcn-architecture]
           Eardley, P., "Pre-Congestion Notification Architecture",
           (work in progress), October 2008.

[I-D.ietf-pcn-baseline-encoding]
           Moncaster, T., Briscoe, B., and M. Menth, "Baseline
           Encoding and Transport of Pre-Congestion Information",
           (work in progress), October 2008.

[I-D.ietf-pcn-marking-behaviour]
           Eardley, P., "Marking behaviour of PCN-nodes", (work in
           progress), October 2008.

[I-D.westberg-pcn-load-control]
           Westberg, L., "LC-PCN: The Load Control PCN Solution",
           (work in progress), November 2008.

Authors' Addresses

Daisuke Satoh
NTT Advanced Technology Corporation
1-19-18, Nakacho
Musashino-shi, Tokyo  180-0006
Japan

Email: daisuke.satoh@ntt-at.co.jp

Mika Ishizuka
NTT Advanced Technology Corporation

Email: mika.ishizuka@ntt-at.co.jp

Oratai Phanachet
NTT Advanced Technology Corporation

Email: oratai.phanachet@ntt-at.co.jp

Yukari Maeda
NTT Advanced Technology Corporation

Email: yukari.maeda@ntt-at.co.jp
Full Copyright Statement

Copyright (C) The IETF Trust (2008).

This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.

This document and the information contained herein are provided on
an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.

Intellectual Property

The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed
to pertain to the implementation or use of the technology described
in this document or the extent to which any license under such
rights might or might not be available; nor does it represent that
it has made any independent effort to identify any such rights.
Information on the procedures with respect to rights in RFC
documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use
of such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository
at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard.  Please address the information to the IETF at
ietf-ipr@ietf.org.