Network Working Group                                        S. Vapiwala
Internet Draft                                                J. Karthik
Expires: June 2006                                          Cisco Systems
                                                               R. Papneja
                                                                  Isocore
                                                         December 8, 2005


      Methodology for benchmarking fast failover time with local
                               protection
             <draft-vapiwala-bmwg-frr-failover-meth-00.txt>


Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.
   This document may only be posted in an Internet-Draft.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on June 8, 2006.

Abstract

   This draft provides the methodology for benchmarking the failover
   time of local protection (also known as Fast Reroute).  The failover
   to a backup tunnel can happen at the headend of the primary tunnel
   or at a midpoint, and the backup can offer link or node protection.
   It is therefore vital to benchmark the failover time for all of
   these cases and combinations.  The failover time can also differ
   greatly based on the design and implementation, and on factors such
   as the number of prefixes carried by the tunnel, the number of
   primary tunnels affected by the event that caused the failover, the
   number of primary tunnels the backup protects, and the type of
   failure.  This document describes all the benchmarking criteria and
   the benchmarking topologies required for measuring the failover time
   of local protection.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1. Introduction...................................................3
   2. Existing definitions...........................................5
   3. Test Considerations............................................5
      3.1. Failover Events...........................................5
      3.2. Failure Detection [TERMID]................................6
      3.3. Use of Data Traffic for MPLS Protection Benchmarking......6
      3.4. LSP and Route Scaling.....................................7
      3.5. Selection of IGP..........................................7
      3.6. Reversion [TERMID]........................................7
      3.7. Traffic generation........................................7
   4. Test Setup.....................................................8
      4.1. Link Protection with 1 hop primary and 1 hop backup TE
           tunnels...................................................8
      4.2. Link Protection with 1 hop primary and 2 hop backup TE
           tunnels...................................................8
      4.3. Link Protection with 2+ hop primary and 1 hop backup TE
           tunnels...................................................9
      4.4. Link Protection with 2+ hop primary and 2 hop backup TE
           tunnels...................................................9
      4.5. Node Protection with 2 hop primary and 1 hop backup TE
           tunnels..................................................10
      4.6. Node Protection with 2 hop primary and 2 hop backup TE
           tunnels..................................................10
      4.7. Node Protection with 3 or more hops primary and 1 hop
           backup TE tunnels........................................11
      4.8. Node Protection with 3 or more hops primary and 2 hop
           backup TE tunnels........................................12
      4.9. Link Protection with 1 hop primary (from PLR) and 1 hop
           backup TE tunnels........................................12
      4.10. Link Protection with 1 hop primary (from PLR) and 2 hop
           backup TE tunnels........................................12
      4.11. Link Protection with 2+ hop (from PLR) primary and 1 hop
           backup TE tunnels........................................13
      4.12. Link Protection with 2+ hop (from PLR) primary and 2 hop
           backup TE tunnels........................................13
      4.13. Node Protection with 2 hop primary (from PLR) and 1 hop
           backup TE tunnels........................................14
      4.14. Node Protection with 2 hop primary (from PLR) and 2 hop
           backup TE tunnels........................................14
      4.15. Node Protection with 3+ hop primary (from PLR) and 1 hop
           backup TE tunnels........................................15
      4.16. Node Protection with 3+ hop primary (from PLR) and 2 hop
           backup TE tunnels........................................15
   5. Test Methodology..............................................16
   6. Reporting Format..............................................17
   7. Security Considerations.......................................18
   8. Acknowledgements..............................................18
   9. References....................................................18
   10. Author's Address.............................................19
   Appendix A: Fast Reroute Scalability Table.......................21

1. Introduction

   A link or node failure can occur at the headend or at a midpoint
   node of a given primary tunnel.  The time it takes to fail over to
   the backup tunnel is a key measurement, since it directly affects
   the traffic carried over the tunnel.  The failover can occur at the
   headend or at a midpoint of a primary tunnel, and the time it takes
   depends on a variety of factors such as the type of physical media,
   the method of the FRR solution (detour vs. facility), the number of
   primary tunnels, and the number of prefixes carried over the tunnel.
   Given all this, service providers certainly want to see a
   methodology to measure the failover time under all possible
   conditions.

   The following sections describe all the topologies and scenarios
   that should be used and considered to effectively benchmark the
   failover time.  The failure triggers, procedures, scaling
   considerations, and the reporting format of the results are
   discussed as well.
   In order to benchmark failover time, data plane traffic is used as
   mentioned in [SCOTT-IGP], since traffic loss is measured in a
   black-box test and is a widely accepted way to measure convergence.

   An important point to note when benchmarking the failover time is
   that, depending on whether PHP takes place (that is, whether or not
   implicit null is advertised by the tail end) and on the number of
   hops of the primary and backup tunnels, the packets switched onto
   the backup tunnel may carry zero, one, or more labels.

   All the benchmarking cases mentioned in this document apply to
   facility backup as well as to local protection enabled in detour
   mode.  The test cases and procedures described here should
   completely benchmark the failover time of a device under test in
   all possible scenarios and configurations.  The scenarios defined in
   this document are in addition to those considered in [FRR-METH].

   All the cases listed in this document can be verified in a single
   topology similar to the one shown in Figure 1.

            ----------------------------------
           |            ---------------------|--------------
           |           |                     |              |
           |           |                     |              |
        --------    --------    --------    --------    --------
    TG-|   R1   |--|   R2   |--|   R3   |  |   R4   |  |   R5   |-TA
       |        |--|        |--|        |--|        |--|        |
        --------    --------    --------    --------    --------
           |           |                        |           |
           |           |                        |           |
           |        --------                    |           |
            -------|   R6   |-------------------            |
                   |        |-------------------------------
                    --------

                    Fig.1: Fast Reroute Topology

   In Figure 1, TG and TA are the Traffic Generator and Traffic
   Analyzer, respectively.  The tester is placed outside the DUT; it
   sends and receives IP traffic along the working path and runs
   protocol emulations simulating real-world peering scenarios.  The
   tester MUST record the IP packet sequence numbers, departure time,
   and arrival time so that the metrics of Failover Time, Additive
   Latency, and Reversion Time can be measured.  The tester may be a
   single device or a test system.

2. Existing definitions

   For the sake of clarity and continuity, this document adopts the
   template for definitions set out in Section 2 of RFC 1242.
   Definitions are indexed and grouped together in sections for ease
   of reference.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

   The reader is assumed to be familiar with the commonly used MPLS
   terminology, some of which is defined in [MPLS-RSVP],
   [MPLS-RSVP-TE], and [MPLS-FRR-EXT].

3. Test Considerations

   This section discusses the fundamentals of MPLS protection testing:

   - The types of network events that cause failover
   - Indications for failover
   - The use of data traffic
   - Traffic generation
   - LSP scaling
   - Reversion of LSP
   - IGP selection

3.1. Failover Events

   Triggers for failover to a backup tunnel are link and node failures
   seen downstream of the PLR, as follows:
   - Shutdown interface on PLR side with POS Alarm
   - Shutdown interface on remote side with POS Alarm (widely used)
   - Shutdown interface on PLR side with RSVP hello
   - Shutdown interface on remote side with RSVP hello (widely used)
   - Shutdown interface on PLR side with BFD
   - Shutdown interface on remote side with BFD (widely used)
   - Fiber pull on PLR side
   - Fiber pull on remote side (widely used)
   - OIR on PLR side
   - OIR on remote side
   - Reload router in the node protection case

3.2. Failure Detection [TERMID]

   Local failures can be detected via a SONET failure on a directly
   connected LSR.  The failure indication may vary with the type of
   alarm: LOS, AIS, or RDI.  Different MPLS protection mechanisms and
   different implementations use different failure indications.
   Ethernet technologies such as Gigabit Ethernet have no Layer 2
   failure indication mechanism and therefore rely upon Layer 3
   signaling to indicate a failure.

   The test procedures in this document can be used against a local
   failure as well as against a remote failure, both for completeness
   of benchmarking and to evaluate failover performance independent of
   the implemented failure indication mechanism.

3.3. Use of Data Traffic for MPLS Protection Benchmarking

   Customers of service providers use packet loss as the metric for
   failover time.  Packet loss is an externally observable event that
   has a direct impact on customers' application performance.  MPLS
   protection mechanisms exist to minimize packet loss in the event of
   a failure.  For this reason it is important to develop a standard
   router benchmarking methodology for measuring MPLS protection that
   uses packet loss as a metric.  At a known forwarding rate, packet
   loss can be measured and used to calculate the failover time.
   Measurement of control plane signaling to establish backup paths is
   not enough to verify failover; failover is best determined when
   packets are actually traversing the backup path.

   An additional benefit of using packet loss for the calculation of
   failover time is that it enables black-box tests to be designed.
   Data traffic can be offered at line rate to the device under test
   (DUT), an emulated network event as described above can be forced
   to occur, and packet loss can be externally measured to calculate
   the convergence time.  Knowledge of the DUT architecture is not
   required, and there is no need to rely on the DUT to produce the
   test results.

3.4. LSP and Route Scaling

   Failover time performance may vary with the number of established
   primary and backup LSPs and the number of routes learned.  However,
   the procedure outlined here may be used for any number of LSPs (L)
   and any number of routes (R).  L and R must be recorded.  With Fast
   Reroute, it is intended that the sub-45 msec failover requirement be
   maintained when scaling the number of protected LSPs.

3.5. Selection of IGP

   The methodologies can be used with ISIS-TE or OSPF-TE.

3.6. Reversion [TERMID]

   Fast Reroute provides a method to restore traffic from the backup
   path to the original primary LSP upon recovery from the failure.
   This is referred to as Reversion, which can be implemented as Global
   Reversion or Local Reversion.  In all the test cases listed here,
   Reversion should not produce any packet loss.  Each of the test
   cases in this methodology document provides a step to verify that
   there is no packet loss.

3.7. Traffic generation

   It is suggested that at least three traffic streams be configured
   using a traffic generator.  In order to monitor DUT performance for
   recovery times, a set of route prefixes should be advertised before
   traffic is sent, and the traffic should be destined to these routes.
   A typical example is to configure the traffic generator to send
   traffic to the first and the last of the advertised routes.  In
   order to get a good understanding of the performance behavior, one
   may also choose to send traffic to the route lying in the middle of
   the advertised routes.  For example, if 100 routes are advertised,
   traffic should be sent to route prefix number 1, route prefix
   number 50, and the last advertised route prefix, which is number 100
   in this example, as illustrated in the sketch below.  It is
   recommended that the traffic not be generated in round-robin fashion
   across all of the prefixes.
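   The sketch below is illustrative only and is not part of the
   methodology; the base prefix, the route length, and the function
   name are assumptions made for this example.  It simply picks the
   first, middle, and last of a block of consecutively advertised
   prefixes as the destinations of the three suggested traffic streams.

      # Illustrative sketch (assumed names/prefixes): select the 1st,
      # middle, and last of `count` consecutively advertised routes as
      # the destinations for the three traffic streams.
      from ipaddress import ip_network

      def stream_destinations(base: str, route_len: int, count: int):
          """Return the first, middle, and last of `count` consecutive
          /route_len prefixes carved out of `base`."""
          routes = list(ip_network(base).subnets(new_prefix=route_len))
          routes = routes[:count]
          return [str(routes[0]),              # route prefix number 1
                  str(routes[count // 2 - 1]), # middle route prefix
                  str(routes[-1])]             # last route prefix

      # Example: 100 advertised /24 routes -> streams to routes 1, 50,
      # and 100.
      print(stream_destinations("10.0.0.0/16", 24, 100))
      # ['10.0.0.0/24', '10.0.49.0/24', '10.0.99.0/24']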
4. Test Setup

   Topologies to be used for benchmarking the failover time:

   The following network topologies are required to benchmark the
   failover time for local protection.  All of these 16 topologies can
   be mapped onto the master FRR topology shown in Figure 1.  The
   topologies shown in Sections 4.9 to 4.16 are the topologies required
   to benchmark the failover time when the DUT is a midpoint PLR.

4.1. Link Protection with 1 hop primary and 1 hop backup TE tunnels

        --------    P     --------
    TG-|Ingress |--------| Egress |-TA
       |DUT/PLR |--------|  Node  |
        --------    B     --------

           Figure 2: Representing section 4.1 setup

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          0                 0
   Layer3 VPN (PE-PE)           1                 1
   Layer3 VPN (PE-P)            2                 2
   Layer2 VC (PE-PE)            1                 1
   Layer2 VC (PE-P)             2                 2

4.2. Link Protection with 1 hop primary and 2 hop backup TE tunnels

        --------    P     --------
    TG-|Ingress |--------| Egress |-TA
       |DUT/PLR |        |  Node  |
        --------          --------
           |B                |
           |    --------     |
            ---| Backup |----
               |Midpoint|
                --------

           Figure 3: Representing section 4.2 setup

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          0                 1
   Layer3 VPN (PE-PE)           1                 2
   Layer3 VPN (PE-P)            2                 3
   Layer2 VC (PE-PE)            1                 2
   Layer2 VC (PE-P)             2                 3

4.3. Link Protection with 2+ hop primary and 1 hop backup TE tunnels

        --------    P     --------    P     --------
    TG-|Ingress |--------| Midpt  |--------| Egress |-TA
       |DUT/PLR |--------|  Node  |        |  Node  |
        --------    B     --------          --------

           Figure 4: Representing section 4.3 setup

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 1
   Layer3 VPN (PE-PE)           2                 2
   Layer3 VPN (PE-P)            3                 3
   Layer2 VC (PE-PE)            2                 2
   Layer2 VC (PE-P)             3                 3

4.4. Link Protection with 2+ hop primary and 2 hop backup TE tunnels

        --------    P     --------    P     --------
    TG-|Ingress |--------| Midpt  |--------| Egress |-TA
       |DUT/PLR |        |  Node  |        |  Node  |
        --------          --------          --------
           |B                |
           |    --------     |
            ---| Backup |----
               |Midpoint|
                --------

           Figure 5: Representing section 4.4 setup

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 2
   Layer3 VPN (PE-PE)           2                 3
   Layer3 VPN (PE-P)            3                 4
   Layer2 VC (PE-PE)            2                 3
   Layer2 VC (PE-P)             3                 4

4.5. Node Protection with 2 hop primary and 1 hop backup TE tunnels

        --------    P     --------    P     --------
    TG-|Ingress |--------| Midpt  |--------| Egress |-TA
       |DUT/PLR |        |  Node  |        |  Node  |
        --------          --------          --------
           |B                                   |
            ------------------------------------

           Figure 6: Representing section 4.5 setup

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 0
   Layer3 VPN (PE-PE)           2                 1
   Layer3 VPN (PE-P)            3                 2
   Layer2 VC (PE-PE)            2                 1
   Layer2 VC (PE-P)             3                 2

4.6. Node Protection with 2 hop primary and 2 hop backup TE tunnels

        --------    P     --------    P     --------
    TG-|Ingress |--------|MidPoint|--------| Egress |-TA
       |DUT/PLR |        |  Node  |        |  Node  |
        --------          --------          --------
           |B                                   |
           |         --------                   |
            --------| Backup |------------------
                    |Midpoint|
                     --------

           Figure 7: Representing setup for section 4.6

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 1
   Layer3 VPN (PE-PE)           2                 2
   Layer3 VPN (PE-P)            3                 3
   Layer2 VC (PE-PE)            2                 2
   Layer2 VC (PE-P)             3                 3

4.7. Node Protection with 3 or more hops primary and 1 hop backup TE
     tunnels

        --------  P  --------  P  --------  P  --------
    TG-|Ingress |---| Midpt  |---| Merge  |---| Egress |-TA
       |DUT/PLR |   |  Node  |   |  Node  |   |  Node  |
        --------     --------     --------     --------
           |B                         |
            --------------------------

           Figure 8: Representing setup in section 4.7

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 1
   Layer3 VPN (PE-PE)           2                 2
   Layer3 VPN (PE-P)            3                 3
   Layer2 VC (PE-PE)            2                 2
   Layer2 VC (PE-P)             3                 3

4.8. Node Protection with 3 or more hops primary and 2 hop backup TE
     tunnels

        --------  P  --------  P  --------  P  --------
    TG-|Ingress |---|MidPoint|---| Merge  |---| Egress |-TA
       |DUT/PLR |   |  Node  |   |  Node  |   |  Node  |
        --------     --------     --------     --------
           |B                         |
           |       --------           |
            ------| Backup |----------
                  |Midpoint|
                   --------

           Figure 9: Representing the setup for section 4.8

   Traffic               No of labels      No of labels
                         before failure    after failure

   IGP                          1                 2
   Layer3 VPN (PE-PE)           2                 3
   Layer3 VPN (PE-P)            3                 4
   Layer2 VC (PE-PE)            2                 3
   Layer2 VC (PE-P)             3                 4

4.9. Link Protection with 1 hop primary (from PLR) and 1 hop backup TE
     tunnels

        --------     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------| Egress |-TA
       |        |   | DUT/PLR|--------|  Node  |
        --------     --------    B     --------

           Figure 10: Representing the setup for section 4.9

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          0                 0
4.10. Link Protection with 1 hop primary (from PLR) and 2 hop backup
      TE tunnels

        --------     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------| Egress |-TA
       |        |   | DUT/PLR|        |  Node  |
        --------     --------          --------
                        |B                |
                        |    --------     |
                         ---| Backup |----
                            |Midpoint|
                             --------

           Figure 11: Representing setup for section 4.10

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          0                 1

4.11. Link Protection with 2+ hop (from PLR) primary and 1 hop backup
      TE tunnels

        --------     --------    P     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------| Midpt  |--------| Egress |-TA
       |        |   | DUT/PLR|--------|  Node  |        |  Node  |
        --------     --------    B     --------          --------

           Figure 12: Representing setup for section 4.11

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 1

4.12. Link Protection with 2+ hop (from PLR) primary and 2 hop backup
      TE tunnels

        --------     --------    P     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------| Midpt  |--------| Egress |-TA
       |        |   | DUT/PLR|        |  Node  |        |  Node  |
        --------     --------          --------          --------
                        |B                |
                        |    --------     |
                         ---| Backup |----
                            |Midpoint|
                             --------

           Figure 13: Representing the setup for section 4.12

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 2

4.13. Node Protection with 2 hop primary (from PLR) and 1 hop backup
      TE tunnels

        --------     --------    P     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------| Midpt  |--------| Egress |-TA
       |        |   | DUT/PLR|        |  Node  |        |  Node  |
        --------     --------          --------          --------
                        |B                                   |
                         ------------------------------------

           Figure 14: Representing the setup for section 4.13

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 0

4.14. Node Protection with 2 hop primary (from PLR) and 2 hop backup
      TE tunnels

        --------     --------    P     --------    P     --------
    TG-|Ingress |---| Mid-pt |--------|MidPoint|--------| Egress |-TA
       |        |   | DUT/PLR|        |  Node  |        |  Node  |
        --------     --------          --------          --------
                        |B                                   |
                        |         --------                   |
                         --------| Backup |------------------
                                 |Midpoint|
                                  --------

           Figure 15: Representing setup for section 4.14

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 1

4.15. Node Protection with 3+ hop primary (from PLR) and 1 hop backup
      TE tunnels

        --------    --------  P  --------  P  --------  P  --------
    TG-|Ingress |--| Mid-pt |---| Midpt  |---| Merge  |---| Egress |-TA
       |        |  | DUT/PLR|   |  Node  |   |  Node  |   |  Node  |
        --------    --------     --------     --------     --------
                       |B                         |
                        --------------------------

           Figure 16: Representing setup for section 4.15

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 1
4.16. Node Protection with 3+ hop primary (from PLR) and 2 hop backup
      TE tunnels

        --------    --------  P  --------  P  --------  P  --------
    TG-|Ingress |--| Mid-pt |---|MidPoint|---| Merge  |---| Egress |-TA
       |        |  | DUT/PLR|   |  Node  |   |  Node  |   |  Node  |
        --------    --------     --------     --------     --------
                       |B                         |
                       |        --------          |
                        -------| Backup |---------
                               |Midpoint|
                                --------

           Figure 17: Representing setup for section 4.16

   Traffic               No of labels      No of labels
                         before failure    after failure

   Any                          1                 2

5. Test Methodology

   The procedure described here applies to all 16 base test cases and
   their associated topologies.

   Objective

   To benchmark the MPLS failover time due to any of the failure
   events described in Section 3.1, as experienced by the device under
   test, which is the point of local repair (PLR).

   Test Setup

   - Select any one topology out of the 16 described in Section 4.
   - Select an overlay technology for the FRR test, e.g. IGP, VPN, VC,
     or mid-point LSPs.
   - The DUT will also have two interfaces connected to the traffic
     generator.

   Test Configuration

   1. Configure the number of primary and backup tunnels as required
      by the selected topology.
   2. Advertise prefixes (as per the FRR scalability tables described
      in Appendix A) from the tail end.

   Procedure

   1.  Establish the primary LSP required by the selected topology.
   2.  Establish the backup LSP required by the selected topology.
   3.  Verify that the primary and backup LSPs are up and that the
       primary is protected.
   4.  Verify Fast Reroute protection.
   5.  Set up three traffic streams as described in Section 3.7.
   6.  Send IP traffic at the maximum forwarding rate to the DUT.
   7.  Verify that traffic is switched over the primary LSP.
   8.  Trigger any choice of failure as described in Section 3.1.
   9.  Verify that the primary tunnel and its prefixes get mapped to
       the backup tunnel.
   10. Stop the traffic streams and measure the traffic loss.
   11. Calculate the failover time as defined in Section 6, Reporting
       Format (see the sketch following this procedure).
   12. Start the traffic streams again to verify reversion when the
       protected interface comes back up.  Traffic loss should be 0
       due to make-before-break or reversion.
   13. Enable the protected interface that was shut down (or the node,
       in the NNHOP case).
   14. Verify that the head end signals a new LSP and that protection
       is in place again.
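   The following sketch is illustrative only and is not part of the
   procedure; the function and variable names are assumptions made for
   this example.  It shows how the traffic loss measured in steps 10
   and 11 can be converted into a failover time using the formula
   defined in Section 6, assuming each stream is offered at a constant,
   known rate.

      # Illustrative sketch (assumed names): derive the failover time
      # from measured packet loss at a known, constant offered rate:
      #   failover time = (packets dropped / offered pps) * 1000 ms
      def failover_time_ms(tx_packets: int, rx_packets: int,
                           offered_pps: float) -> float:
          """Failover time in milliseconds for one traffic stream.

          tx_packets  -- packets sent by the tester on the stream
          rx_packets  -- packets received back on the same stream
          offered_pps -- constant offered rate (packets per second)
          """
          dropped = tx_packets - rx_packets
          return (dropped / offered_pps) * 1000.0

      # Example: 4,500 packets lost on a stream offered at 100,000
      # packets per second corresponds to a 45 ms failover time.
      print(failover_time_ms(tx_packets=1_000_000, rx_packets=995_500,
                             offered_pps=100_000))   # -> 45.0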
6. Reporting Format

   For each test, it is recommended that the results be reported in the
   following format.

   Parameter                                Units

   IGP used for the test                    ISIS-TE / OSPF-TE
   Interface types                          GigE, POS, ATM, etc.
   Packet sizes offered to the DUT          bytes
   IGP routes advertised                    number of IGP routes
   RSVP hello timers configured (if any)    milliseconds
   Number of FRR tunnels configured         number of tunnels
   Number of VPN routes in head-end         number of VPN routes
   Number of VC tunnels                     number of VC tunnels
   Number of BGP routes                     number of BGP routes
   Number of mid-point tunnels              number of tunnels

   Benchmarks

   1st prefix's failover time               milliseconds
   Mid prefix's failover time               milliseconds
   Last prefix's failover time              milliseconds
   1st prefix's reversion time              milliseconds
   Mid prefix's reversion time              milliseconds
   Last prefix's reversion time             milliseconds

   The failover time reported above is calculated using the following
   formula:

      (number of packets dropped / offered rate in packets per second)
      * 1000 milliseconds

7. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

8. Acknowledgements

   Thanks to Amrit Hanspal for his input to this document.

9. References

   [MPLS-LDP]      Andersson, L., Doolan, P., Feldman, N., Fredette,
                   A., and B. Thomas, "LDP Specification", RFC 3036,
                   January 2001.

   [MPLS-RSVP]     Braden, R., Ed., et al., "Resource ReSerVation
                   Protocol (RSVP) -- Version 1 Functional
                   Specification", RFC 2205, September 1997.

   [MPLS-RSVP-TE]  Awduche, D., et al., "RSVP-TE: Extensions to RSVP
                   for LSP Tunnels", RFC 3209, December 2001.

   [MPLS-FRR-EXT]  Pan, P., Atlas, A., and G. Swallow, "Fast Reroute
                   Extensions to RSVP-TE for LSP Tunnels", RFC 4090,
                   May 2005.

   [MPLS-ARCH]     Rosen, E., Viswanathan, A., and R. Callon,
                   "Multiprotocol Label Switching Architecture",
                   RFC 3031, January 2001.

   [RFC-WORDS]     Bradner, S., "Key words for use in RFCs to Indicate
                   Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC-IANA]      Narten, T. and H. Alvestrand, "Guidelines for
                   Writing an IANA Considerations Section in RFCs",
                   RFC 2434, October 1998.

   [TERMID]        Poretsky, S., Papneja, R., and T. Kimura,
                   "Benchmarking Terminology for Protection
                   Performance", draft-poretsky-protection-term-00.txt,
                   work in progress.

   [FRR-METH]      Poretsky, S., Papneja, R., Rao, S., and J.L. Le
                   Roux, "Benchmarking Methodology for MPLS Protection
                   Mechanisms",
                   draft-poretsky-mpls-protection-meth-04.txt, work in
                   progress.

10. Author's Address

   Samir Vapiwala
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 1484
   EMail: svapiwal@cisco.com

   Jay Karthik
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA 01719
   USA
   Phone: +1 978 936 0533
   EMail: jkarthik@cisco.com

   Rajiv Papneja
   Isocore
   12359 Sunrise Valley Drive, STE 100
   Reston, VA 20190
   USA
   Phone: +1 703 860 9273
   EMail: rpapneja@isocore.com

Full Copyright Statement

   Copyright (C) The Internet Society (2005).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.
   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE
   INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.

Appendix A: Fast Reroute Scalability Table

   This appendix provides the recommended numbers for evaluating the
   scalability of Fast Reroute implementations.  It recommends typical
   numbers of IGP/VPNv4 prefixes, LSP tunnels, and VC entries.  Based
   on the features supported by the device under test, the appropriate
   scaling limits can be used for the test bed.

   A 1. FRR IGP Table

   No of head-end TE LSPs        IGP prefixes

   1                             100
   1                             500
   1                             1000
   1                             2000
   1                             5000
   2 (load balance)              100
   2 (load balance)              500
   2 (load balance)              1000
   2 (load balance)              2000
   2 (load balance)              5000
   100                           100
   500                           500
   1000                          1000
   2000                          2000

   A 2. FRR VPN Table

   No of head-end TE LSPs        VPNv4 prefixes

   1                             100
   1                             500
   1                             1000
   1                             2000
   1                             5000
   1                             10000
   1                             20000
   1                             Max
   2 (load balance)              100
   2 (load balance)              500
   2 (load balance)              1000
   2 (load balance)              2000
   2 (load balance)              5000
   2 (load balance)              10000
   2 (load balance)              20000
   2 (load balance)              Max

   A 3. FRR Mid-Point LSP Table

   The number of mid-point TE LSPs could be configured at the following
   recommended levels:

   100
   500
   1000
   2000
   Max supported number

   A 4. FRR VC Table

   No of head-end TE LSPs        VC entries

   1                             100
   1                             500
   1                             1000
   1                             2000
   1                             Max
   100                           100
   500                           500
   1000                          1000
   2000                          2000
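   The following sketch is illustrative only and is not part of this
   appendix; the function name and output format are assumptions made
   for the example.  It enumerates the head-end TE LSP and IGP prefix
   combinations recommended in Table A 1, so that each combination can
   be run through the procedure of Section 5 and reported as described
   in Section 6.

      # Illustrative sketch (assumed names): enumerate the FRR IGP
      # scaling combinations recommended in Table A 1.
      def frr_igp_matrix():
          cases = []
          # 1 head-end TE LSP and 2 load-balanced LSPs, each combined
          # with an increasing number of IGP prefixes.
          for lsps in (1, 2):
              for prefixes in (100, 500, 1000, 2000, 5000):
                  cases.append((lsps, prefixes))
          # Equal numbers of head-end TE LSPs and IGP prefixes.
          for n in (100, 500, 1000, 2000):
              cases.append((n, n))
          return cases

      for lsps, prefixes in frr_igp_matrix():
          print(f"head-end TE LSPs: {lsps:5d}   IGP prefixes: {prefixes:5d}")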