Network Working Group                                       S. Poretsky
Internet Draft                                      Allot Communications
Expires: September 08, 2009
Intended Status: Informational                             Brent Imhoff
                                                        Juniper Networks
                                                          March 08, 2009


             Benchmarking Methodology for Link-State IGP
                     Data Plane Route Convergence


Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on September 8, 2009.

Copyright Notice

   Copyright (c) 2009 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents in effect on the date of
   publication of this document (http://trustee.ietf.org/license-info).
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.

ABSTRACT

   This document describes the methodology for benchmarking Interior
   Gateway Protocol (IGP) Route Convergence.  The methodology is to be
   used for benchmarking IGP convergence time through externally
   observable (black box) data plane measurements.  The methodology
   can be applied to any link-state IGP, such as ISIS and OSPF.

Table of Contents

   1. Introduction and Scope
   2. Existing Definitions
   3. Test Setup
      3.1 Test Topologies
      3.2 Test Considerations
      3.3 Reporting Format
   4. Test Cases
      4.1 Convergence Due to Local Interface Failure
      4.2 Convergence Due to Remote Interface Failure
      4.3 Convergence Due to Local Administrative Shutdown
      4.4 Convergence Due to Layer 2 Session Loss
      4.5 Convergence Due to Loss of IGP Adjacency
      4.6 Convergence Due to Route Withdrawal
      4.7 Convergence Due to Cost Change
      4.8 Convergence Due to ECMP Member Interface Failure
      4.9 Convergence Due to ECMP Member Remote Interface Failure
      4.10 Convergence Due to Parallel Link Interface Failure
   5. IANA Considerations
   6. Security Considerations
   7. Acknowledgements
   8. References
   9. Authors' Addresses
1. Introduction and Scope

   This document describes the methodology for benchmarking Interior
   Gateway Protocol (IGP) Route Convergence.  The motivation and
   applicability for this benchmarking are described in [Po09a].  The
   terminology used for this benchmarking is described in [Po09t].
   Service Providers use IGP Convergence time as a key metric of
   router design and architecture.  Customers of Service Providers
   observe convergence time as packet loss, so IGP Route Convergence
   is considered a Direct Measure of Quality (DMOQ).  The test cases
   in this document are black-box tests that emulate the network
   events that cause route convergence, as described in [Po09a].  The
   black-box test designs benchmark the data plane and account for all
   of the factors contributing to convergence time, as discussed in
   [Po09a].  Convergence times are measured at the Tester on the data
   plane by observing packet loss through the DUT.  The methodology
   (and terminology) for benchmarking route convergence can be applied
   to any link-state IGP, such as ISIS [Ca90] and OSPF [Mo98].  These
   methodologies apply to IPv4 and IPv6 traffic and IGPs.

2. Existing Definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in BCP 14, RFC
   2119 [Br97].  RFC 2119 defines the use of these key words to help
   make the intent of standards track documents as clear as possible.
   While this document uses these keywords, this document is not a
   standards track document.

   This document adopts the definition format in Section 2 of RFC 1242
   [Br91].  This document uses much of the terminology defined in
   [Po09t] and uses existing terminology defined in other BMWG work.
   Examples include, but are not limited to:

      Throughput                [Ref.[Br91], section 3.17]
      Device Under Test (DUT)   [Ref.[Ma98], section 3.1.1]
      System Under Test (SUT)   [Ref.[Ma98], section 3.1.2]
      Out-of-order Packet       [Ref.[Po06], section 3.3.2]
      Duplicate Packet          [Ref.[Po06], section 3.3.3]
      Packet Loss               [Ref.[Po09t], section 3.5]

3. Test Setup

3.1 Test Topologies

   Convergence times are measured at the Tester on the data plane by
   observing packet loss through the DUT.  Figure 1 shows the test
   topology to measure IGP Route Convergence due to local Convergence
   Events such as Link Failure, Layer 2 Session Failure, IGP Adjacency
   Failure, Route Withdrawal, and route cost change.  The test cases
   discussed in section 4 provide route convergence times that include
   the Event Detection time, SPF Processing time, and FIB Update time.

   Figure 2 shows the test topology to measure IGP Route Convergence
   time due to remote changes in the network topology.  These times
   are measured by observing packet loss in the data plane at the
   Tester.  In this topology the three routers are considered a System
   Under Test (SUT).  A Remote Interface [Po09t] failure on router R2
   MUST result in convergence of traffic to router R3.  NOTE: All
   routers in the SUT must be the same model and identically
   configured.

   ---------        Ingress Interface        ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |   Preferred Egress Interface    |       |
   |  DUT  |-------------------------------->| Tester|
   |       |                                 |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |   Next-Best Egress Interface    |       |
   ---------                                 ---------

       Figure 1.  Test Topology 1: IGP Convergence Test Topology
                  for Local Changes
                -----                          ---------
                |   |        Preferred         |       |
      -----     |R2 |------------------------->|       |
      |   |---->|   |     Egress Interface     |       |
      |   |     -----                          |       |
      |R1 |                                    |Tester |
      |   |     -----                          |       |
      |   |---->|   |        Next-Best         |       |
      -----     |R3 |~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        ^       |   |     Egress Interface     |       |
        |       -----                          ---------
        |                                          |
        --------------------------------------------
                     Ingress Interface

       Figure 2.  Test Topology 2: IGP Convergence Test Topology
                  for Convergence Due to Remote Changes

   ---------        Ingress Interface        ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |      ECMP Set Interface 1       |       |
   |  DUT  |-------------------------------->| Tester|
   |       |               .                 |       |
   |       |               .                 |       |
   |       |               .                 |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |      ECMP Set Interface N       |       |
   ---------                                 ---------

       Figure 3.  Test Topology 3: IGP Convergence Test Topology
                  for ECMP Convergence

   Figure 3 shows the test topology to measure IGP Route Convergence
   time with members of an Equal Cost Multipath (ECMP) Set.  These
   times are measured by observing packet loss in the data plane at
   the Tester.  In this topology, the DUT is configured with each
   Egress interface as a member of an ECMP set, and the Tester
   emulates multiple next-hop routers (one emulated router for each
   member).

   Figure 4 shows the test topology to measure IGP Route Convergence
   time with members of a Parallel Link.  These times are measured by
   observing packet loss in the data plane at the Tester.  In this
   topology, the DUT is configured with each Egress interface as a
   member of a Parallel Link, and the Tester emulates the single
   next-hop router.

   ---------        Ingress Interface        ---------
   |       |<--------------------------------|       |
   |       |                                 |       |
   |       |    Parallel Link Interface 1    |       |
   |  DUT  |-------------------------------->| Tester|
   |       |               .                 |       |
   |       |               .                 |       |
   |       |               .                 |       |
   |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
   |       |    Parallel Link Interface N    |       |
   ---------                                 ---------

       Figure 4.  Test Topology 4: IGP Convergence Test Topology
                  for Parallel Link Convergence

3.2 Test Considerations

3.2.1 IGP Selection

   The test cases described in section 4 MAY be used for any
   link-state IGP, such as ISIS or OSPF; the Route Convergence test
   methodology is identical for each.  The IGP adjacencies are
   established on the Preferred Egress Interface and Next-Best Egress
   Interface.

3.2.2 Routing Protocol Configuration

   The obtained results for IGP Route Convergence may vary if other
   routing protocols are enabled and routes learned via those
   protocols are installed.  IGP convergence times MUST be benchmarked
   without routes installed from other protocols.  When performing
   test cases, advertise a single IGP topology from Tester to DUT on
   the Preferred Egress Interface [Po09t] and Next-Best Egress
   Interface [Po09t] using the test setup shown in Figure 1.  These
   two interfaces on the DUT must peer with different emulated
   neighbor routers for their IGP adjacencies.  The IGP topology
   learned on both interfaces MUST be the same topology with the same
   nodes and routes.

3.2.3 IGP Route Scaling

   The number of IGP routes will impact the measured IGP Route
   Convergence.  To obtain results similar to those that would be
   observed in an operational network, it is RECOMMENDED that the
   number of installed routes and nodes closely approximate those of
   the network (e.g. thousands of routes with tens of nodes).  The
   number of areas (for OSPF) and levels (for ISIS) can also impact
   the benchmark results.
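   As a concrete illustration of this scale, the following Python
   sketch builds a route set of thousands of routes spread across tens
   of emulated nodes for the Tester to advertise.  It is illustrative
   only; the prefix plan, the function name, and the per-node split
   are hypothetical, and a real Tester would use its own
   topology-emulation interface.

      # Illustrative only: generate an emulated route set at the
      # scale recommended in section 3.2.3 (thousands of routes,
      # tens of nodes).  The 10.0.0.0/8 prefix plan is an assumption.
      import ipaddress

      def build_route_set(num_nodes: int = 20,
                          routes_per_node: int = 250):
          """Return {node_id: [IPv4 /24 prefixes]}; 20 x 250 = 5000
          routes, advertised identically on both egress interfaces
          per section 3.2.2."""
          subnets = ipaddress.ip_network("10.0.0.0/8").subnets(
              new_prefix=24)
          return {node: [next(subnets) for _ in range(routes_per_node)]
                  for node in range(num_nodes)}

      routes = build_route_set()
      assert sum(len(v) for v in routes.values()) == 5000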
3.2.4 Timers

   There are some timers that will impact the measured IGP Convergence
   time.  Benchmarking metrics may be measured at any fixed values for
   these timers.  It is RECOMMENDED that the following timers be
   configured to the minimum values listed:

      Timer                              Recommended Value
      -----                              -----------------
      Link Failure Indication Delay      <10 milliseconds
      IGP Hello Timer                    1 second
      IGP Dead-Interval                  3 seconds
      LSA Generation Delay               0
      LSA Flood Packet Pacing            0
      LSA Retransmission Packet Pacing   0
      SPF Delay                          0

3.2.5 Interface Types

   All test cases in this methodology document may be executed with
   any interface type.  For each test case, all interfaces MUST be of
   the same media type and have the same Throughput [Br91][Br99].  The
   type of media may dictate which test cases may be executed, because
   each interface type has a unique mechanism for detecting link
   failures, and the speed at which that mechanism operates will
   influence the measured results.  Media and protocols MUST be
   configured for minimum failure detection delay to minimize the
   contribution to the measured Convergence time.  For example,
   configure SONET with the minimum carrier-loss-delay.  All
   interfaces SHOULD be configured as point-to-point.

3.2.6 Packet Sampling Interval

   The Packet Sampling Interval [Po09t] value is the fastest
   measurable convergence time.  The RECOMMENDED value for the Packet
   Sampling Interval to be set on the Tester is 10 milliseconds.  The
   Packet Sampling Interval MUST be reported.

3.2.7 Offered Load

   The offered load MUST be the Throughput of the device as defined in
   [Br91] and benchmarked in [Br99] at a fixed packet size.  At least
   one packet per route for all routes in the FIB MUST be offered to
   the DUT within the Packet Sampling Interval.  Packet size is
   measured in bytes and includes the IP header and payload.  The
   packet size is selectable and MUST be recorded.  The Throughput
   MUST be measured at the Preferred Egress Interface and the
   Next-Best Egress Interface.  The duration of offered load MUST be
   greater than the convergence time.

   The destination addresses for the offered load MUST be distributed
   such that all routes are matched and each route is offered an equal
   share of the total Offered Load.  This requirement to distribute
   the Offered Load across all destinations in the route table creates
   a separate flow for each route offered to the DUT.  The capability
   of the Tester to measure packet loss for each individual flow
   (identified by the destination address matching a route entry), and
   the number of individual flows for which it can measure packet
   loss, should be considered when benchmarking Route-Specific
   Convergence [Po09t].
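   The arithmetic implied by these requirements can be checked with a
   short script.  The following Python sketch (illustrative only; the
   Throughput value shown is a hypothetical benchmarked result, not a
   requirement) derives the lowest offered load that still delivers
   one packet per route within each Packet Sampling Interval, and the
   equal per-flow share of the total load.

      # Illustrative only: Offered Load arithmetic from section 3.2.7.

      def minimum_offered_load_pps(num_routes: int,
                                   sampling_interval_s: float) -> float:
          """Lowest aggregate rate (packets/s) that still delivers at
          least one packet per route per Packet Sampling Interval."""
          return num_routes / sampling_interval_s

      def per_flow_rate_pps(total_offered_pps: float,
                            num_routes: int) -> float:
          """Equal share of the total Offered Load for each flow."""
          return total_offered_pps / num_routes

      # Example: 5000 IGP routes, RECOMMENDED 10 ms sampling interval.
      floor_pps = minimum_offered_load_pps(5000, 0.010)  # 500000 pps
      throughput_pps = 1_200_000       # hypothetical [Br99] result
      assert throughput_pps >= floor_pps
      share_pps = per_flow_rate_pps(throughput_pps, 5000)  # 240 pps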
3.2.8 Selection of Convergence Time Benchmark Metrics and Methods

   The methodologies in the section 4 test cases MAY be applied to
   benchmark Full Convergence Time, First Route Convergence Time,
   Reversion Convergence Time, and Route-Specific Convergence Time
   [Po09t].  The First Route Convergence Time benchmark metric MAY be
   measured while measuring any of these convergence benchmarks.  The
   benchmarking metrics may be obtained using either the Loss-Derived
   Convergence Method or the Rate-Derived Convergence Method (a sketch
   of both calculations follows the reporting table in section 3.3).

   It is RECOMMENDED that the Rate-Derived Convergence Method be used
   when benchmarking convergence times.  The Loss-Derived Convergence
   Method is not the preferred method because it can produce a result
   that is faster than the actual convergence time.  However, when the
   Packet Sampling Interval is too large, the Rate-Derived Convergence
   Method may produce a convergence time larger than the actual value;
   in such cases the Loss-Derived Convergence Method may produce a
   more accurate result.

3.2.9 Tester Capabilities

   It is RECOMMENDED that the Tester used to execute each test case
   have the following capabilities:

   1. Ability to establish IGP adjacencies and advertise a single IGP
      topology to one or more peers.
   2. Ability to produce convergence Event Triggers [Po09t].
   3. Ability to insert a timestamp in each data packet's IP payload.
   4. An internal time clock to control timestamping, time
      measurements, and time calculations.
   5. Ability to distinguish traffic load received on the Preferred
      and Next-Best Interfaces [Po09t].
   6. Ability to disable or tune specific Layer-2 and Layer-3 protocol
      functions on any interface(s).

   It is not required that the Tester be capable of making non-data
   plane convergence observations, nor of using those observations for
   measurements.

3.3 Reporting Format

   For each test case, it is recommended that the reporting table
   below be completed.  All time values SHOULD be reported with the
   resolution specified in [Po09t].

   Parameter                                 Units
   ---------                                 -----
   Test Case                                 test case number
   Test Topology                             (1, 2, 3, or 4)
   IGP                                       (ISIS, OSPF, other)
   Interface Type                            (GigE, POS, ATM, other)
   Packet Size offered to DUT                bytes
   IGP Routes advertised to DUT              number of IGP routes
   Nodes in emulated network                 number of nodes
   Packet Sampling Interval on Tester        milliseconds

   IGP Timer Values configured on DUT:
     Interface Failure Indication Delay      seconds
     IGP Hello Timer                         seconds
     IGP Dead-Interval                       seconds
     LSA Generation Delay                    seconds
     LSA Flood Packet Pacing                 seconds
     LSA Retransmission Packet Pacing        seconds
     SPF Delay                               seconds

   Forwarding Metrics
     Total Packets Offered to DUT            number of packets
     Total Packets Routed by DUT             number of packets
     Convergence Packet Loss                 number of packets
     Out-of-Order Packets                    number of packets
     Duplicate Packets                       number of packets

   Convergence Benchmarks
     Full Convergence
       First Route Convergence Time          seconds
       Full Convergence Time (Rate-Derived)  seconds
       Full Convergence Time (Loss-Derived)  seconds
     Route-Specific Convergence
       Number of Routes Measured             number of flows
       Route-Specific Convergence Time[n]    array of seconds
       Minimum R-S Convergence Time          seconds
       Maximum R-S Convergence Time          seconds
       Median R-S Convergence Time           seconds
       Average R-S Convergence Time          seconds
     Reversion
       Reversion Convergence Time            seconds
       First Route Convergence Time          seconds
       Route-Specific Convergence
         Number of Routes Measured           number of flows
         Route-Specific Convergence Time[n]  array of seconds
         Minimum R-S Convergence Time        seconds
         Maximum R-S Convergence Time        seconds
         Median R-S Convergence Time         seconds
         Average R-S Convergence Time        seconds
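   The following Python sketch illustrates the two calculation methods
   from section 3.2.8 and the Route-Specific statistics in the table
   above.  It is a sketch under stated assumptions, not a normative
   implementation: it assumes the Tester exports the forwarding rate
   observed in each Packet Sampling Interval, and the calculations
   follow the [Po09t] definitions in spirit rather than verbatim.

      # Illustrative only: convergence-time calculations and report
      # statistics.
      from statistics import mean, median

      def loss_derived_time_s(convergence_packet_loss: int,
                              offered_load_pps: float) -> float:
          """Loss-Derived Convergence Time: total Convergence Packet
          Loss divided by the Offered Load rate."""
          return convergence_packet_loss / offered_load_pps

      def rate_derived_time_s(rx_rate_samples_pps, offered_load_pps,
                              sampling_interval_s):
          """Rate-Derived Convergence Time: duration from the first
          sample where the received rate falls below the offered rate
          until it is restored.  Resolution is bounded by the Packet
          Sampling Interval; the 0.5% margin absorbs measurement
          jitter (an assumption, not a requirement)."""
          below = [i for i, r in enumerate(rx_rate_samples_pps)
                   if r < 0.995 * offered_load_pps]
          if not below:
              return 0.0
          return (below[-1] - below[0] + 1) * sampling_interval_s

      def route_specific_stats(times_s):
          """Min/Max/Median/Average rows of the reporting table."""
          return {"min": min(times_s), "max": max(times_s),
                  "median": median(times_s), "avg": mean(times_s)}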
4. Test Cases

   It is RECOMMENDED that all applicable test cases be performed for
   best characterization of the DUT.  The test cases follow a generic
   procedure tailored to the specific DUT configuration and
   Convergence Event [Po09t].  This generic procedure, illustrated as
   an automated test loop in the sketch that follows the list, is:

   1. Establish DUT configuration and install routes.
   2. Send offered load with traffic traversing Preferred Egress
      Interface [Po09t].
   3. Introduce Convergence Event to force traffic to Next-Best
      Egress Interface [Po09t].
   4. Measure First Route Convergence Time.
   5. Measure Full Convergence Time and, optionally, the
      Route-Specific Convergence Times.
   6. Wait the Sustained Convergence Validation Time to ensure there
      is no residual packet loss.
   7. Recover from Convergence Event.
   8. Measure Reversion Convergence Time, and optionally the First
      Route Convergence Time and Route-Specific Convergence Times.
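   The Tester and DUT control objects and their method names in the
   following Python sketch are hypothetical placeholders for whatever
   interfaces a real Tester and DUT management system provide.

      # Illustrative only: the eight-step generic procedure as a
      # driver loop.  All method names are hypothetical.

      def run_test_case(tester, dut, event, validation_time_s):
          dut.configure_and_install_routes()                   # step 1
          tester.start_offered_load(via="preferred-egress")    # step 2
          event.trigger()                                      # step 3
          results = {
              "first_route_s": tester.first_route_convergence(),  # 4
              "full_s": tester.full_convergence(),                # 5
              "route_specific_s": tester.route_specific_times(),  # 5
          }
          tester.validate_no_loss(duration_s=validation_time_s)  # 6
          event.recover()                                        # 7
          results["reversion_s"] = tester.reversion_convergence()  # 8
          return results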
4.1 Convergence Due to Local Interface Failure

   Objective

   To obtain the IGP Route Convergence time due to a local link
   failure event at the DUT's Local Interface.

   Procedure

   1. Advertise matching IGP routes and topology from Tester to DUT
      on the Preferred Egress Interface [Po09t] and Next-Best Egress
      Interface [Po09t] using the topology shown in Figure 1.  Set
      the cost of the routes so that the Preferred Egress Interface
      is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove link on DUT's Preferred Egress Interface.  This is the
      Convergence Event Trigger [Po09t] that produces the Convergence
      Event Instant [Po09t].
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT detects the link
      down event and converges all IGP routes and traffic over the
      Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore link on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time and Route-Specific
      Convergence Times, as DUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results

   The measured IGP Convergence time is influenced by the local link
   failure indication, SPF delay, SPF Hold time, SPF Execution Time,
   Tree Build Time, and Hardware Update Time [Po09a].

4.2 Convergence Due to Remote Interface Failure

   Objective

   To obtain the IGP Route Convergence time due to a Remote Interface
   Failure event.

   Procedure

   1. Advertise matching IGP routes and topology from Tester to SUT
      on Preferred Egress Interface [Po09t] and Next-Best Egress
      Interface [Po09t] using the topology shown in Figure 2.  Set
      the cost of the routes so that the Preferred Egress Interface
      is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to SUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove link on Tester's Neighbor Interface [Po09t] connected to
      SUT's Preferred Egress Interface.  This is the Convergence
      Event Trigger [Po09t] that produces the Convergence Event
      Instant [Po09t].
   5. Measure First Route Convergence Time [Po09t] as SUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as SUT detects the link
      down event and converges all IGP routes and traffic over the
      Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore link on Tester's Neighbor Interface connected to SUT's
      Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as SUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results

   The measured IGP Convergence time is influenced by the link failure
   indication, LSA/LSP Flood Packet Pacing, LSA/LSP Retransmission
   Packet Pacing, LSA/LSP Generation time, SPF delay, SPF Hold time,
   SPF Execution Time, Tree Build Time, and Hardware Update Time
   [Po09a].  This test case may produce Stale Forwarding [Po09t] due
   to microloops, which may increase the measured convergence times.

4.3 Convergence Due to Local Administrative Shutdown

   Objective

   To obtain the IGP Route Convergence time due to an administrative
   shutdown at the DUT's Local Interface.

   Procedure

   1. Advertise matching IGP routes and topology from Tester to DUT
      on Preferred Egress Interface [Po09t] and Next-Best Egress
      Interface [Po09t] using the topology shown in Figure 1.  Set
      the cost of the routes so that the Preferred Egress Interface
      is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Perform administrative shutdown on the DUT's Preferred Egress
      Interface.  This is the Convergence Event Trigger [Po09t] that
      produces the Convergence Event Instant [Po09t].
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT converges all IGP
      routes and traffic over the Next-Best Egress Interface.
      Optionally, Route-Specific Convergence Times [Po09t] MAY be
      measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore Preferred Egress Interface by administratively enabling
      the interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT detects the link up event and
      converges all IGP routes and traffic back to the Preferred
      Egress Interface.

   Results

   The measured IGP Convergence time is influenced by SPF delay, SPF
   Hold time, SPF Execution Time, Tree Build Time, and Hardware Update
   Time [Po09a].
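   The Results clauses of sections 4.1 through 4.3 list the components
   that contribute to the measured convergence time.  As a rough
   first-order model (not part of the methodology; the component
   values below are hypothetical, and real components may overlap),
   the measured Full Convergence Time can be sanity-checked against
   the sum of those components:

      # Illustrative only: additive sanity model for the convergence
      # components named in the Results clauses.

      def expected_convergence_s(failure_indication_s, spf_delay_s,
                                 spf_execution_s, tree_build_s,
                                 fib_update_s):
          """A measured Full Convergence Time far below this sum
          suggests a measurement artifact, e.g. a Packet Sampling
          Interval that is too coarse."""
          return (failure_indication_s + spf_delay_s + spf_execution_s
                  + tree_build_s + fib_update_s)

      estimate = expected_convergence_s(
          failure_indication_s=0.010,  # <10 ms per section 3.2.4
          spf_delay_s=0.0,             # SPF Delay 0 per section 3.2.4
          spf_execution_s=0.020,       # hypothetical
          tree_build_s=0.005,          # hypothetical
          fib_update_s=0.050)          # hypothetical; grows w/ routes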
4.4 Convergence Due to Layer 2 Session Loss

   Objective

   To obtain the IGP Route Convergence time due to a local Layer 2
   session loss.

   Procedure

   1. Advertise matching IGP routes and topology from Tester to DUT
      on Preferred Egress Interface [Po09t] and Next-Best Egress
      Interface [Po09t] using the topology shown in Figure 1.  Set
      the cost of the routes so that the Preferred Egress Interface
      is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Tester removes Layer 2 session from DUT's Preferred Egress
      Interface [Po09t].  It is RECOMMENDED that this be achieved
      with messaging, but the method MAY vary with the Layer 2
      protocol.  This is the Convergence Event Trigger [Po09t] that
      produces the Convergence Event Instant [Po09t].
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      Layer 2 session down event and begins to converge IGP routes
      and traffic over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT detects the Layer
      2 session down event and converges all IGP routes and traffic
      over the Next-Best Egress Interface.  Optionally,
      Route-Specific Convergence Times [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore Layer 2 session on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT detects the session up event
      and converges all IGP routes and traffic over the Preferred
      Egress Interface.

   Results

   The measured IGP Convergence time is influenced by the Layer 2
   failure indication, SPF delay, SPF Hold time, SPF Execution Time,
   Tree Build Time, and Hardware Update Time [Po09a].

4.5 Convergence Due to Loss of IGP Adjacency

   Objective

   To obtain the IGP Route Convergence time due to loss of the IGP
   Adjacency.

   Procedure

   1. Advertise matching IGP routes and topology from Tester to DUT
      on Preferred Egress Interface [Po09t] and Next-Best Egress
      Interface [Po09t] using the topology shown in Figure 1.  Set
      the cost of the routes so that the Preferred Egress Interface
      is the preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. Remove IGP adjacency from Tester's Neighbor Interface [Po09t]
      connected to the Preferred Egress Interface.  The Layer 2
      session MUST be maintained.  This is the Convergence Event
      Trigger [Po09t] that produces the Convergence Event Instant
      [Po09t].
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      loss of IGP adjacency and begins to converge IGP routes and
      traffic over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT detects the IGP
      adjacency loss and converges all IGP routes and traffic over
      the Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Restore IGP adjacency on DUT's Preferred Egress Interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT detects the adjacency
      recovery event and converges all IGP routes and traffic over
      the Preferred Egress Interface.

   Results

   The measured IGP Convergence time is influenced by the IGP Hello
   Interval, IGP Dead Interval, SPF delay, SPF Hold time, SPF
   Execution Time, Tree Build Time, and Hardware Update Time [Po09a].
4.6 Convergence Due to Route Withdrawal

   Objective

   To obtain the IGP Route Convergence time due to Route Withdrawal.

   Procedure

   1. Advertise a single IGP topology from Tester to DUT on Preferred
      Egress Interface [Po09t] and Next-Best Egress Interface [Po09t]
      using the test setup shown in Figure 1.  These two interfaces
      on the DUT must peer with different emulated neighbor routers
      for their IGP adjacencies.  The IGP topology learned on both
      interfaces MUST be the same topology with the same nodes and
      routes.  It is RECOMMENDED that the IGP routes be IGP external
      routes, for which the Tester would emulate a preferred and a
      next-best Autonomous System Border Router (ASBR).  Set the cost
      of the routes so that the Preferred Egress Interface is the
      preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. The Tester, emulating the neighbor node, withdraws one or more
      IGP leaf routes from the DUT's Preferred Egress Interface.  The
      withdrawal update message MUST be a single unfragmented packet.
      This is the Convergence Event Trigger [Po09t] that produces the
      Convergence Event Instant [Po09t].  The Tester MAY record the
      time it sends the withdrawal message(s).
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      route withdrawal event and begins to converge IGP routes and
      traffic over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT withdraws routes
      and converges all IGP routes and traffic over the Next-Best
      Egress Interface.  Optionally, Route-Specific Convergence Times
      [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Re-advertise the withdrawn IGP leaf routes to DUT's Preferred
      Egress Interface.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT converges all IGP routes and
      traffic over the Preferred Egress Interface.

   Results

   The measured IGP Convergence time is the SPF Processing and FIB
   Update time, as influenced by the SPF or route calculation delay,
   Hold time, Execution Time, and Hardware Update Time [Po09a].
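   When the Tester records the transmission time of the withdrawal
   message (step 4), that timestamp can serve as an approximation of
   the Convergence Event Instant.  The following Python sketch
   (hypothetical helper, not defined in [Po09t]) derives
   Route-Specific Convergence Times from it:

      # Illustrative only.  restored_by_flow maps each flow (route)
      # to the Tester timestamp at which its traffic is first
      # received again on the Next-Best Egress Interface.

      def route_specific_times_s(withdrawal_sent_s, restored_by_flow):
          return {flow: t - withdrawal_sent_s
                  for flow, t in restored_by_flow.items()}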
4.7 Convergence Due to Cost Change

   Objective

   To obtain the IGP Route Convergence time due to a route cost
   change.

   Procedure

   1. Advertise a single IGP topology from Tester to DUT on Preferred
      Egress Interface [Po09t] and Next-Best Egress Interface [Po09t]
      using the test setup shown in Figure 1.  These two interfaces
      on the DUT must peer with different emulated neighbor routers
      for their IGP adjacencies.  The IGP topology learned on both
      interfaces MUST be the same topology with the same nodes and
      routes.  It is RECOMMENDED that the IGP routes be IGP external
      routes, for which the Tester would emulate a preferred and a
      next-best Autonomous System Border Router (ASBR).  Set the cost
      of the routes so that the Preferred Egress Interface is the
      preferred next-hop.
   2. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   3. Verify traffic is routed over Preferred Egress Interface.
   4. The Tester, emulating the neighbor node, increases the cost for
      all IGP routes at DUT's Preferred Egress Interface so that the
      Next-Best Egress Interface has lower cost and becomes the
      preferred path.  The update message advertising the higher cost
      MUST be a single unfragmented packet.  This is the Convergence
      Event Trigger [Po09t] that produces the Convergence Event
      Instant [Po09t].  The Tester MAY record the time it sends the
      message advertising the higher cost on the Preferred Egress
      Interface.
   5. Measure First Route Convergence Time [Po09t] as DUT detects the
      cost change event and begins to converge IGP routes and traffic
      over the Next-Best Egress Interface.
   6. Measure Full Convergence Time [Po09t] as DUT detects the cost
      change event and converges all IGP routes and traffic over the
      Next-Best Egress Interface.  Optionally, Route-Specific
      Convergence Times [Po09t] MAY be measured.
   7. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   8. Re-advertise IGP routes to DUT's Preferred Egress Interface
      with the original lower cost metric.
   9. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT converges all IGP routes and
      traffic over the Preferred Egress Interface.

   Results

   It is possible that no measured packet loss will be observed for
   this test case.

4.8 Convergence Due to ECMP Member Interface Failure

   Objective

   To obtain the IGP Route Convergence time due to a local link
   failure event of an ECMP Member.

   Procedure

   1. Configure ECMP Set as shown in Figure 3.
   2. Advertise matching IGP routes and topology from Tester to DUT
      on each ECMP member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   4. Verify traffic is routed over all members of ECMP Set.
   5. Remove link on Tester's Neighbor Interface [Po09t] connected to
      one of the DUT's ECMP member interfaces.  This is the
      Convergence Event Trigger [Po09t] that produces the Convergence
      Event Instant [Po09t].
   6. Measure First Route Convergence Time [Po09t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other ECMP members.
   7. Measure Full Convergence Time [Po09t] as DUT detects the link
      down event and converges all IGP routes and traffic over the
      other ECMP members.  At the same time, measure Out-of-Order
      Packets [Po06] and Duplicate Packets [Po06].  Optionally,
      Route-Specific Convergence Times [Po09t] MAY be measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface connected to DUT's
      ECMP member interface.
   10. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored ECMP member.

   Results

   The measured IGP Convergence time is influenced by the local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po09a].
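   Step 7 of this test case, and of the two test cases that follow,
   additionally measures Out-of-Order Packets [Po06] and Duplicate
   Packets [Po06].  The following Python sketch counts both from
   per-flow sequence numbers; it assumes the Tester stamps each
   offered packet with a per-flow sequence number, and it is a
   simplified reading of the [Po06] definitions.

      # Illustrative only: count out-of-order and duplicate packets
      # for one flow from received sequence numbers in arrival order.

      def count_ooo_and_duplicates(rx_seq):
          seen = set()
          out_of_order = duplicates = 0
          highest = -1
          for s in rx_seq:
              if s in seen:
                  duplicates += 1
                  continue
              seen.add(s)
              if s < highest:  # arrived after a higher sequence number
                  out_of_order += 1
              else:
                  highest = s
          return out_of_order, duplicates

      # Packet 3 overtakes 2; packet 4 is received twice.
      assert count_ooo_and_duplicates([1, 3, 2, 4, 4, 5]) == (1, 1)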
4.9 Convergence Due to ECMP Member Remote Interface Failure

   Objective

   To obtain the IGP Route Convergence time due to a remote interface
   failure event for an ECMP Member.

   Procedure

   1. Configure ECMP Set as shown in Figure 2, in which the links
      from R1 to R2 and R1 to R3 are members of an ECMP Set.
   2. Advertise matching IGP routes and topology from Tester to SUT
      to balance traffic to each ECMP member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to SUT on
      Ingress Interface [Po09t].
   4. Verify traffic is routed over all members of ECMP Set.
   5. Remove link on Tester's Neighbor Interface to R2 or R3.  This
      is the Convergence Event Trigger [Po09t] that produces the
      Convergence Event Instant [Po09t].
   6. Measure First Route Convergence Time [Po09t] as SUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other ECMP members.
   7. Measure Full Convergence Time [Po09t] as SUT detects the link
      down event and converges all IGP routes and traffic over the
      other ECMP members.  At the same time, measure Out-of-Order
      Packets [Po06] and Duplicate Packets [Po06].  Optionally,
      Route-Specific Convergence Times [Po09t] MAY be measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface to R2 or R3.
   10. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as SUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored ECMP member.

   Results

   The measured IGP Convergence time is influenced by the link failure
   indication, Tree Build Time, and Hardware Update Time [Po09a].

4.10 Convergence Due to Parallel Link Interface Failure

   Objective

   To obtain the IGP Route Convergence time due to a local link
   failure event for a member of a Parallel Link.  The links can be
   used for data plane load balancing.

   Procedure

   1. Configure Parallel Link as shown in Figure 4.
   2. Advertise matching IGP routes and topology from Tester to DUT
      on each Parallel Link member.
   3. Send offered load at measured Throughput with fixed packet size
      to destinations matching all IGP routes from Tester to DUT on
      Ingress Interface [Po09t].
   4. Verify traffic is routed over all members of Parallel Link.
   5. Remove link on Tester's Neighbor Interface [Po09t] connected to
      one of the DUT's Parallel Link member interfaces.  This is the
      Convergence Event Trigger [Po09t] that produces the Convergence
      Event Instant [Po09t].
   6. Measure First Route Convergence Time [Po09t] as DUT detects the
      link down event and begins to converge IGP routes and traffic
      over the other Parallel Link members.
   7. Measure Full Convergence Time [Po09t] as DUT detects the link
      down event and converges all IGP routes and traffic over the
      other Parallel Link members.  At the same time, measure
      Out-of-Order Packets [Po06] and Duplicate Packets [Po06].
      Optionally, Route-Specific Convergence Times [Po09t] MAY be
      measured.
   8. Stop offered load.  Wait 30 seconds for queues to drain.
      Restart offered load.
   9. Restore link on Tester's Neighbor Interface connected to DUT's
      Parallel Link member interface.
   10. Measure Reversion Convergence Time [Po09t], and optionally
      measure First Route Convergence Time [Po09t] and Route-Specific
      Convergence Times [Po09t], as DUT detects the link up event and
      converges IGP routes and some distribution of traffic over the
      restored Parallel Link member.

   Results

   The measured IGP Convergence time is influenced by the local link
   failure indication, Tree Build Time, and Hardware Update Time
   [Po09a].

5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to production networks.
   This document attempts to formalize a set of common methodology for
   benchmarking IGP convergence performance in a lab environment.

7. Acknowledgements

   Thanks to Sue Hares, Al Morton, Kevin Dubray, Ron Bonica, David
   Ward, Kris Michielsen, Peter De Vriendt, and the BMWG for their
   contributions to this work.

8. References

8.1 Normative References

   [Br91]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, IETF, March 1991.

   [Br97]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", RFC 2119, IETF, March 1997.

   [Br99]  Bradner, S. and McQuaid, J., "Benchmarking Methodology for
           Network Interconnect Devices", RFC 2544, IETF, March 1999.

   [Ca90]  Callon, R., "Use of OSI IS-IS for Routing in TCP/IP and
           Dual Environments", RFC 1195, IETF, December 1990.

   [Ma98]  Mandeville, R., "Benchmarking Terminology for LAN Switching
           Devices", RFC 2285, IETF, February 1998.

   [Mo98]  Moy, J., "OSPF Version 2", RFC 2328, IETF, April 1998.

   [Po06]  Poretsky, S., et al., "Terminology for Benchmarking
           Network-layer Traffic Control Mechanisms", RFC 4689, IETF,
           November 2006.

   [Po09a] Poretsky, S., "Considerations for Benchmarking Link-State
           IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-app-17, work in
           progress, March 2009.

   [Po09t] Poretsky, S. and Imhoff, B., "Benchmarking Terminology for
           Link-State IGP Convergence",
           draft-ietf-bmwg-igp-dataplane-conv-term-17, work in
           progress, March 2009.

8.2 Informative References

   None

9. Authors' Addresses

   Scott Poretsky
   Allot Communications
   67 South Bedford Street, Suite 400
   Burlington, MA 01803
   USA
   Phone: + 1 508 309 2179
   Email: sporetsky@allot.com

   Brent Imhoff
   Juniper Networks
   1194 North Mathilda Ave
   Sunnyvale, CA 94089
   USA
   Phone: + 1 314 378 2571
   EMail: bimhoff@planetspork.com