Network Working Group                                       C. Popoviciu
Internet-Draft                                                  A. Hamza
Expires: August 17, 2006                                 G. Van de Velde
                                                           Cisco Systems
                                                             D. Dugatkin
                                                                    IXIA
                                                                 B. Kine
                                                                 Spirent
                                                       February 13, 2006

                     IPv6 Benchmarking Methodology

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 17, 2006.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   The benchmarking methodologies defined in RFC2544 [4] are IP version
   independent; however, they do not address some of the specifics of
   IPv6.  This document provides additional benchmarking guidelines
   which, in conjunction with RFC2544, will lead to a more complete and
   realistic evaluation of the IPv6 performance of network elements.

Table of Contents

   1.  Introduction
   2.  Tests and Results Evaluation
   3.  Requirements
   4.  Test Environment Set Up
       4.1.  Single-Port Testing
       4.2.  Multi-Port Testing
   5.  Test Traffic
       5.1.  Frame Formats and Sizes
             5.1.1.  Frame Sizes to be used on Ethernet
             5.1.2.  Frame Sizes to be used on SONET
       5.2.  Protocol Addresses Selection
             5.2.1.  DUT Protocol Addresses
             5.2.2.  Test Traffic Protocol Addresses
       5.3.  Traffic with Extension Headers
       5.4.  Traffic Set Up
   6.  Modifiers
       6.1.  Management and Routing Traffic
       6.2.  Filters
             6.2.1.  Filter Format
             6.2.2.  Filter Types
   7.  Benchmarking Tests
       7.1.  Throughput
       7.2.  Latency
       7.3.  Frame Loss
       7.4.  Back-to-Back Frames
       7.5.  System Recovery
       7.6.  Reset
   8.  IANA Considerations
   9.  Security Considerations
   10. Conclusions
   11. Acknowledgements
   12. References
       12.1. Normative References
       12.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements

1.  Introduction

   The benchmarking methodologies defined by RFC2544 [4] have proven
   very useful in consistently evaluating the IPv4 forwarding
   performance of network elements.  Adherence to these testing and
   result analysis procedures facilitates the objective comparison of
   IPv4 performance across products.

   While the principles behind the methodologies introduced in RFC2544
   are largely IP version independent, the protocol has continued to
   evolve, particularly in its version 6 (IPv6).  This document
   provides benchmarking methodology recommendations that address the
   specifics of IPv6.  An example of such an IPv6-specific
   consideration is the evaluation of the forwarding performance of
   traffic containing Extension Headers, as defined in RFC2460 [2].

   These recommendations are defined within the RFC2544 framework and
   are meant to complement, not replace, the test and result analysis
   procedures described in RFC2544.

   The terms used in this document remain consistent with those
   defined in "Benchmarking Terminology for Network Interconnection
   Devices" [1].  That terminology document SHOULD be consulted before
   using or applying the recommendations of this document.  Any
   methodology aspects not covered in this document SHOULD be assumed
   to be treated based on the RFC2544 recommendations.

2.  Tests and Results Evaluation

   The recommended performance evaluation tests are described in
   Section 7 of this document.  Not all of these tests are applicable
   to all network element types.  Nevertheless, for each evaluated
   device, all applicable tests described in Section 7 MUST be
   performed.  Test execution and results analysis MUST be performed
   while observing generally accepted testing practices regarding
   repeatability, variance, and the statistical significance of small
   numbers of trials.

3.  Requirements

   This document observes the requirements identified in RFC2544.  The
   words used to define the significance of each particular
   requirement are capitalized.  These words are:

   o  "MUST"  This word, or the words "REQUIRED" and "SHALL", mean
      that the item is an absolute requirement of the specification.

   o  "SHOULD"  This word or the adjective "RECOMMENDED" means that
      there may exist valid reasons in particular circumstances to
      ignore this item, but the full implications should be understood
      and the case carefully weighed before choosing a different
      course.

   o  "MAY"  This word or the adjective "OPTIONAL" means that this
      item is truly optional.  One vendor may choose to include the
      item because a particular marketplace requires it or because it
      enhances the product, for example; another vendor may omit the
      same item.

   An implementation is not compliant if it fails to satisfy one or
   more of the MUST requirements for the protocols it implements.
   An implementation that satisfies all the MUST and all the SHOULD
   requirements for its protocols is said to be "unconditionally
   compliant"; one that satisfies all the MUST requirements but not
   all the SHOULD requirements for its protocols is said to be
   "conditionally compliant".

4.  Test Environment Set Up

   The test environment setup recommended for IPv6 performance
   evaluation is the same as the one described by RFC2544, in both
   single-port and multi-port scenarios.  Single-port testing measures
   per-interface forwarding performance, while multi-port testing
   measures the scalability of this performance across the entire
   platform.

4.1.  Single-Port Testing

   With leading test tool vendors supporting the necessary IPv6
   features, the preferred setup for single-port testing is to have
   the Device Under Test (DUT) ingress and egress ports connected to
   the same Test Tool device (Tester), as shown in Figure 1.

                  +------------+
                  |            |
         +--------|   Tester   |--------+
         |        |            |        |
         |        +------------+        |
         |                              |
         |        +------------+        |
         |        |            |        |
         +--------|    DUT     |--------+
                  |            |
                  +------------+

                     Figure 1

   In addition, the DUT MUST be monitored for relevant parameters such
   as resource (CPU, memory, etc.) utilization.  This probe MUST
   collect its data out of band, independently of any management data
   that might be collected through the interfaces forwarding the test
   traffic.

   Several mechanisms have been defined to facilitate the transition
   to IPv6.  They include tunneling mechanisms and translation
   mechanisms of various types.  When evaluating the forwarding
   performance of a network element that is the head end of an IPv6
   over IPv4 tunnel, or of a device translating between IPv6 and IPv4,
   the test environment setup described in Figure 1 might not be
   practical.  This is the case when the test tool cannot terminate
   the tunnel type involved in the test or cannot efficiently
   correlate the IPv6 traffic on one side of the translation with the
   IPv4 traffic on the other.  In this situation, a practical
   alternative is to use a two-device test environment, as shown in
   Figure 2.

                        +-----------+
                        |           |
      +-----------------|  Tester   |------------------+
      |                 |           |                  |
      |                 +-----------+                  |
      |                                                |
      |     +----------+             +----------+      |
      |     |          |             |          |      |
      +-----|  DUT 1   |-------------|  DUT 2   |------+
            |          |             |          |
            +----------+             +----------+

                         Figure 2

   In the scenario described in Figure 2, the IPv6 over IPv4 tunnel is
   established between DUT 1 and DUT 2, or the translation is done on
   both DUTs.  The link between DUT 1 and DUT 2 is IPv4 only; its
   bandwidth should not become the unintentional performance
   bottleneck of the benchmark test.
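   As a quick sanity check of that link, the bandwidth needed between
   DUT 1 and DUT 2 can be estimated from the offered IPv6 load and the
   encapsulation overhead.  The sketch below is illustrative only: it
   assumes plain IPv6-in-IPv4 encapsulation (a 20-byte IPv4 header
   with no options) and ignores layer 2 framing on the inter-DUT link.

      # Sketch: estimate the bit rate the DUT1-DUT2 IPv4 link must
      # carry so that it does not become the bottleneck of the test.
      # Assumes IPv6-in-IPv4 encapsulation, which prepends a 20-byte
      # IPv4 header (no options) to each IPv6 packet; layer 2 framing
      # on the inter-DUT link is ignored.

      IPV4_ENCAP_OVERHEAD = 20  # bytes added per packet by the tunnel

      def required_link_rate(offered_rate_bps, ipv6_packet_size):
          """Bit rate the inter-DUT link must sustain for a given
          offered IPv6 load of fixed-size packets."""
          inflation = ((ipv6_packet_size + IPV4_ENCAP_OVERHEAD)
                       / ipv6_packet_size)
          return offered_rate_bps * inflation

      # 1 Gb/s of 64-byte IPv6 packets inflates by ~31% once tunneled.
      for size in (64, 128, 1518):
          print(size, round(required_link_rate(1_000_000_000, size)))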
   It is important to note that, typically, the performance of a
   device in the encapsulating direction differs from its performance
   in the de-encapsulating direction.  This difference can be masked
   in such a test environment.  The asymmetry can easily be measured
   in an environment such as the one described in Figure 1, as long as
   the test tool supports the necessary features.

   There are two possible ways in which the two DUTs can be selected:

   o  DUT 1 and DUT 2 are the same type of device, using the same
      types of interfaces, running the same software, and configured
      similarly.  In this case, the measured performance represents
      the lower of the values for the encapsulating and
      de-encapsulating traffic directions for the device under test.

   o  DUT 1 is a reference device with known performance for the setup
      being tested (tunnel type, translation, etc.), and DUT 2 is the
      actual device under test.  It is important to make sure that
      DUT 1 is a device with better performance than DUT 2.  In this
      scenario, the performance asymmetry between the encapsulating
      and de-encapsulating directions might be observed.

   While the performance asymmetry discussed in the case of transition
   mechanisms is an interesting characteristic of the device under
   test, ultimately the lower of the two values represents the
   practically useful information.  Whenever possible, it is
   preferable to execute the performance evaluation in an environment
   similar to the one described in Figure 1.

   Note: During testing, either static or dynamic Neighbor Discovery
   can be used.  The static option can be used as long as it is
   supported by the test tools.  The dynamic option is preferred,
   provided the test tool interacts with the DUT for the duration of
   the test in order to keep the respective neighbor caches active.

   The test scenarios described above assume that the test traffic end
   points, i.e., the IPv6 source and destination addresses, are not
   directly attached to the DUT but appear to be one hop beyond it, in
   order to avoid Neighbor Solicitation (NS) and Neighbor
   Advertisement (NA) storms due to the Neighbor Unreachability
   Detection (NUD) mechanism [3].

4.2.  Multi-Port Testing

   The test environment setup for multi-port testing is similar to the
   one described in RFC2544, and it SHOULD be subject to the same
   considerations and methodology.  Multi-port testing SHOULD follow a
   single-port performance evaluation in order to determine the
   scalability of the network element architecture.

5.  Test Traffic

   The traffic used for all tests described in this document SHOULD
   meet the requirements described in this section.  These
   requirements are designed to reflect the characteristics of IPv6
   unicast traffic in all its aspects.  Using such IPv6 traffic leads
   to a complete evaluation of the network element performance.

5.1.  Frame Formats and Sizes

   Two types of media are commonly deployed and SHOULD be tested:
   Ethernet and SONET.  This section identifies the frame sizes that
   SHOULD be used for each media type.  As with IPv4, small frame
   sizes help characterize the per-frame processing overhead of the
   DUT.  Note that the minimum size of a relevant IPv6 packet (one
   that carries minimal upper-layer information) is larger than that
   of an IPv4 packet, because the former has a 40-byte header while
   the latter has a minimum header of 20 bytes.

5.1.1.  Frame Sizes to be used on Ethernet

   Ethernet in all its types has become the most commonly deployed
   interface in today's networks.  The following table lists the layer
   2 packet sizes that SHOULD be used with all Ethernet interface
   types and the corresponding maximum throughput in frames per second
   (fps):

      Size     10Mb/s    100Mb/s    1000Mb/s    10000Mb/s
      Bytes    fps       fps        fps         fps

        64     12255     122549     1225490     12254902
       128      7530      75301      753012      7530125
       256      4252      42517      425170      4251701
       512      2273      22727      227273      2272727
      1024      1177      11770      117702      1177024
      1280       948       9484       94841       948407
      1518       803       8033       80334       803342

   The maximum throughput listed for each Ethernet type can be used as
   a reference in analyzing the test results.
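   The rates in the table above can be reproduced from the line rate
   and the per-frame overhead.  The sketch below is non-normative: the
   listed values are consistent with 38 bytes of overhead added to the
   Size column (for example, a 14-byte MAC header, 4-byte FCS, 8-byte
   preamble, and 12-byte interframe gap), an interpretation inferred
   from the table values rather than stated by the table itself.

      # Sketch: reproduce the maximum theoretical Ethernet frame rates
      # listed in the table above.  The 38-byte per-frame overhead is
      # an assumption inferred from the listed values; the computed
      # rates match the table to within a few frames per second.

      ETHERNET_OVERHEAD = 38  # bytes per frame (assumed, see above)
      LINE_RATES_BPS = (10**7, 10**8, 10**9, 10**10)

      def max_fps(line_rate_bps, size_bytes):
          """Maximum frames per second at full line rate."""
          return line_rate_bps / ((size_bytes + ETHERNET_OVERHEAD) * 8)

      for size in (64, 128, 256, 512, 1024, 1280, 1518):
          print(size, [round(max_fps(r, size)) for r in LINE_RATES_BPS])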
5.1.2.  Frame Sizes to be used on SONET

   Packet over SONET (PoS) interfaces are commonly used for core
   uplinks and high-bandwidth core links.  For this reason, it is
   recommended to evaluate the forwarding performance of such
   interfaces if they are present, or available as an option, in the
   DUT.  The recommended layer 2 packet sizes and the corresponding
   maximum throughput are listed below for the various PoS interface
   types:

      Size     OC-3        OC-48        OC-192
      Bytes    fps         fps          fps

        64     353,207     5,651,319    22,605,278
       128     159,986     2,559,770    10,239,079
       256      76,408     1,222,521     4,890,083
       512      37,365       597,842     2,391,367
      1024      18,480       295,686     1,182,744
      1280      14,751       236,021       944,087
      1518      12,423       198,761       795,045
      2048       9,191       147,052       588,209
      4096       4,583        73,322       293,290

   The maximum throughput listed for each PoS type can be used as a
   reference in analyzing the test results.

   In addition to the recommended Ethernet frame sizes listed above,
   testing MAY be performed with the IPv4 IMIX frame size profile,
   defined by the 7:4:1 distribution of Ethernet-encapsulated packet
   sizes of 64, 570, and 1518 bytes.  The average IMIX frame size is
   approximately 353 bytes.  Performance test data at frame sizes
   larger than the IMIX average might carry more practical
   significance.  An IPv6 IMIX may need to take the minimum IPv6 MTU
   of 1280 bytes into account.  Such an IMIX set could use a 10:6:1:1
   distribution of packet sizes of 64, 570, 1280, and 1518 bytes,
   respectively, yielding an IMIX average size of 381 bytes.
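   The IMIX averages quoted above follow directly from the weighted
   mean of the distributions.  A minimal sketch (the helper name is
   ours):

      # Sketch: weighted-average frame size of an IMIX profile.

      def imix_average(distribution):
          """distribution: list of (frame_size_bytes, count) pairs."""
          total_frames = sum(count for _, count in distribution)
          total_bytes = sum(size * count for size, count in distribution)
          return total_bytes / total_frames

      ipv4_imix = [(64, 7), (570, 4), (1518, 1)]              # 7:4:1
      ipv6_imix = [(64, 10), (570, 6), (1280, 1), (1518, 1)]  # 10:6:1:1

      print(imix_average(ipv4_imix))  # 353.8..., quoted as 353 above
      print(imix_average(ipv6_imix))  # 381.0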
5.2.  Protocol Addresses Selection

   There are two aspects of IPv6 benchmarking testing where IP address
   selection considerations MUST be analyzed: the selection of IP
   addresses for the DUT and the selection of addresses for the test
   traffic.

5.2.1.  DUT Protocol Addresses

   There is no IPv6 address range reserved for the Benchmarking
   Methodology Working Group (BMWG).  For this reason, the Global
   Unicast IPv6 addresses configured on the tested interfaces of the
   DUT SHOULD be within the range reserved by IANA for documentation
   purposes [5], namely 2001:DB8::/32.  Similar to RFC2544,
   Appendix C, addresses from the first half of this range SHOULD be
   used for the ports viewed as input, and addresses from the other
   half of the range for the output ports.

   The prefix length of the IPv6 addresses configured on the DUT
   interfaces MUST fall into one of the following two categories:

   o  Prefix length of /126, which simulates a point-to-point link for
      a core router.

   o  Prefix length less than or equal to /64.

   No prefix length SHOULD be selected between /64 and /128, except
   for the /126 value mentioned above.  Note that /126 prefixes might
   not always be the best choice for addressing point-to-point links,
   such as back-to-back Ethernet links, unless the autoprovisioning
   mechanism is disabled.  Also, not all network elements support this
   type of address.

   While with IPv6 the DUT interfaces can be configured with multiple
   global unicast prefixes, the methodology described in this document
   does not require testing such a scenario.  Such an evaluation is
   not expected to yield additional data about the performance of the
   network element.

   The Interface ID portion of the Global Unicast IPv6 DUT addresses
   MUST be set to ::1.  There are no requirements on the selection of
   the Interface ID portion of the Link Local IPv6 addresses.

   When benchmarking the performance of certain IPv6 migration
   mechanisms, such as IPv6 over IPv4 tunneling or IPv6-IPv4
   translation, IPv4 addresses will have to be configured on the DUT.
   For the IPv4 interfaces, addresses from the range 198.18.0.0
   through 198.19.255.255, reserved for the BMWG, SHOULD be used.  The
   recommendations made in RFC2544, Appendix C, SHOULD be observed.

   It is recommended that multiple iterations of the benchmark tests
   be conducted using the following prefix lengths: 32, 48, 64, 126,
   and 128.  Other prefix lengths can also be used if desired;
   however, the indicated range should be sufficient to establish
   baseline performance metrics.

5.2.2.  Test Traffic Protocol Addresses

   The IPv6 addresses used as sources and destinations for the test
   streams SHOULD belong to the IPv6 range reserved for documentation,
   as mentioned above.  The source addresses SHOULD belong to one half
   of the range and the destination addresses to the other, reflecting
   the DUT interface IPv6 address selection.  Tests SHOULD first be
   executed with a single stream leveraging a single
   source-destination address pair.  The tests SHOULD then be repeated
   with traffic using a random destination address in the
   corresponding range.  If the prefix lookup capabilities of the
   network element are evaluated, the tests SHOULD focus on the
   IPv6-relevant prefix boundaries: 0-64, 126, and 128.

   Special care needs to be taken with respect to the Neighbor
   Unreachability Detection (NUD) [3] process.  The reachable time
   advertised in Router Advertisements SHOULD be set to 30 seconds,
   allowing for 50% jitter.  The IPv6 source and destination addresses
   of the test traffic SHOULD appear not to be directly connected to
   the DUT, in order to avoid the Neighbor Solicitation (NS) and
   Neighbor Advertisement (NA) storms that NUD would otherwise trigger
   across multiple test traffic flows.
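   The address selection rules above can be automated when generating
   test configurations.  The sketch below shows one possible
   arrangement, not a mandated one: it splits the documentation /32
   into two halves, picks an illustrative /64 in each, and sets the
   DUT Interface ID to ::1 as required.

      # Sketch: derive DUT addresses from the documentation prefix
      # 2001:DB8::/32, with input-side addresses in one half of the
      # range and output-side addresses in the other.  The specific
      # /64 subnets chosen here are illustrative.

      import ipaddress

      DOC_PREFIX = ipaddress.IPv6Network("2001:db8::/32")

      # Two /33 halves: input/source side and output/destination side.
      input_half, output_half = DOC_PREFIX.subnets(prefixlen_diff=1)

      # One /64 per half; the DUT Interface ID MUST be ::1.
      in_net = next(input_half.subnets(new_prefix=64))
      out_net = next(output_half.subnets(new_prefix=64))
      dut_input_addr = in_net[1]    # Interface ID ::1
      dut_output_addr = out_net[1]

      print(in_net, dut_input_addr)
      print(out_net, dut_output_addr)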
5.3.  Traffic with Extension Headers

   Extension Headers (EH) are an intrinsic part of the IPv6
   architecture [2].  They are used in various types of practical
   traffic, such as fragmented traffic, mobility-based traffic, and
   authenticated and encrypted traffic.  For these reasons, all tests
   described in this document SHOULD be performed both with traffic
   that has no EH and with traffic carrying a set of EHs selected from
   the following list:

   o  Destination Options header
   o  Routing header
   o  Fragment header
   o  Authentication header
   o  Encapsulating Security Payload header
   o  Destination Options header
   o  Mobility header

   The Hop-by-Hop extension header was excluded because this type of
   header MUST be processed by each node; a test with traffic
   containing this EH would therefore measure not the forwarding
   performance of the DUT but rather its EH processing ability.

   All IPv6 traffic containing extension headers that is used in
   testing SHOULD contain the following:

   o  Destination Options header - 8 bytes
   o  Routing header - 24-32 bytes
   o  Fragment header - 8 bytes
   o  Authentication header - 16 bytes

   The total length of the EHs recommended for the test traffic SHOULD
   be up to 64 bytes.  These extension headers add extra bytes to the
   payload size of the IP packets, which MUST be factored in when
   testing at small frame sizes.  Their presence will modify the
   minimum frame size used in testing.  For a direct comparison
   between data obtained with traffic that has EHs and data obtained
   with traffic that does not, a common bottom frame size SHOULD be
   selected for both types of traffic.

   In most cases, network elements ignore the EHs when forwarding IPv6
   traffic.  For this reason, the EH-related performance impact is
   most likely to be observed when testing the DUT with traffic
   filters that contain matching conditions for upper-layer protocol
   information.  In those cases, the DUT MUST traverse the chain of
   EHs.  The performance of properly designed platforms SHOULD NOT be
   affected when IPv6 traffic containing EHs is forwarded through a
   filter.
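   The minimum comparable frame size follows from the fixed IPv6
   header plus the recommended EH chain.  A minimal sketch (the
   upper-layer payload size is an assumption, e.g., a bare UDP
   header):

      # Sketch: smallest IPv6 packet that carries the recommended EH
      # chain plus a minimal upper-layer payload.  EH sizes follow the
      # list above; the Routing header uses the low end of 24-32 bytes.

      IPV6_HEADER = 40  # bytes, fixed

      EH_CHAIN = {
          "destination-options": 8,
          "routing": 24,
          "fragment": 8,
          "authentication": 16,
      }

      def min_ipv6_packet(eh_chain=EH_CHAIN, upper_layer=8):
          eh_total = sum(eh_chain.values())
          assert eh_total <= 64, "EH total SHOULD be at most 64 bytes"
          return IPV6_HEADER + eh_total + upper_layer

      # A common bottom frame size for EH and non-EH runs must be at
      # least this large for the two data sets to be comparable.
      print(min_ipv6_packet())  # 104 bytes at the IP layer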
5.4.  Traffic Set Up

   All tests recommended in this document SHOULD be performed with
   bi-directional traffic.  For asymmetric situations, such as the DUT
   performing tunneling or translation, and where the test tool has
   the features necessary to perform the tests in a single-DUT
   environment (Figure 1), tests MAY be performed with uni-directional
   traffic for a more granular characterization of the DUT
   performance.  In these cases, bi-directional traffic testing would
   reveal only the worst of the two directions' performance
   (encapsulation, de-encapsulation).

   All other traffic profile characteristics described in RFC2544
   SHOULD be applied in this testing as well.

6.  Modifiers

   RFC2544 underlines the importance of evaluating the performance of
   network elements under certain operational conditions.  The
   conditions defined in RFC2544, Section 11, are common to IPv4 and
   IPv6, with the exception of Broadcast Frames; IPv6 does not use
   layer 2 or layer 3 broadcasts.  This section provides additional
   conditions that are specific to IPv6.  The suite of tests
   recommended in this document SHOULD first be executed in the
   absence of these conditions and then repeated under each of the
   conditions separately.

6.1.  Management and Routing Traffic

   The procedures defined in RFC2544, Sections 11.2 and 11.3, are
   applicable to IPv6 Management and Routing Update Frames as well.

6.2.  Filters

   The filters defined in Section 11.4 of RFC2544 apply to IPv6
   benchmarking as well.  The filter definitions, however, must be
   expanded to include upper-layer protocol information matching in
   order to analyze the handling of traffic with Extension Headers
   (EH), which are an important architectural component of IPv6.

6.2.1.  Filter Format

   The filter format defined in RFC2544 is applicable to IPv6 as well,
   except that the Source Addresses (SA) and Destination Addresses
   (DA) are IPv6 addresses.  In addition to these basic filters, the
   evaluation of IPv6 performance SHOULD analyze the handling of
   traffic with Extension Headers.  While the intent is not to
   evaluate a platform's capability to process the various extension
   header types, the goal is to measure the impact on performance when
   the network element must parse through the EHs in order to reach
   upper-layer information.  In IPv6, routers do not have to parse
   through the extension headers (other than Hop-by-Hop) unless the
   upper-layer information has to be analyzed, due to filters for
   example.  For this reason, in order to evaluate the network element
   handling of IPv6 traffic with EHs, the definition of the filters
   must be extended to include conditions applied to upper-layer
   protocol information.  The following filter format SHOULD be used
   for this type of evaluation:

      [permit|deny] [protocol] [SA] [DA]

   where "permit" or "deny" indicates the action to allow or deny a
   packet through the interface the filter is applied to.  The
   protocol field is defined as:

   o  ipv6: any IP version 6 traffic
   o  tcp: Transmission Control Protocol
   o  udp: User Datagram Protocol

   SA stands for the Source Address and DA for the Destination
   Address.

6.2.2.  Filter Types

   Based on the RFC2544 recommendations, two types of tests are
   executed when evaluating performance in the presence of modifiers:
   one with a single filter and one with 25 filters.  Examples of
   single filters are:

      Filter for TCP traffic  - permit tcp  2001:DB8::1 2001:DB8::2
      Filter for UDP traffic  - permit udp  2001:DB8::1 2001:DB8::2
      Filter for IPv6 traffic - permit ipv6 2001:DB8::1 2001:DB8::2

   The single-line filter case SHOULD verify that the network element
   permits all TCP/UDP/IPv6 traffic, with or without any number of
   Extension Headers, from IPv6 SA 2001:DB8::1 to IPv6 DA 2001:DB8::2,
   and denies all other traffic.

   Example of 25 filters:

      deny tcp 2001:DB8:1::1 2001:DB8:1::2
      deny tcp 2001:DB8:2::1 2001:DB8:2::2
      deny tcp 2001:DB8:3::1 2001:DB8:3::2
      ...
      deny tcp 2001:DB8:C::1 2001:DB8:C::2
      permit tcp 2001:DB8:99::1 2001:DB8:99::2
      deny tcp 2001:DB8:D::1 2001:DB8:D::2
      deny tcp 2001:DB8:E::1 2001:DB8:E::2
      ...
      deny tcp 2001:DB8:19::1 2001:DB8:19::2
      deny ipv6 any any

   The router SHOULD deny all traffic, with or without extension
   headers, except TCP traffic with SA 2001:DB8:99::1 and DA
   2001:DB8:99::2.
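   The expected behaviour of such a filter list can be modeled as a
   first-match evaluation.  The sketch below illustrates the intended
   permit/deny semantics only; real DUT filter syntax and matching
   rules vary by implementation, and the default deny for unmatched
   traffic is an assumption.

      # Sketch: first-match evaluation of an [action protocol SA DA]
      # filter list.  "ipv6" matches any IPv6 traffic; "any" is a
      # wildcard for addresses.  Unmatched traffic is denied (assumed).

      def first_match(filters, packet):
          proto, sa, da = packet
          for action, f_proto, f_sa, f_da in filters:
              if (f_proto in ("ipv6", proto)
                      and f_sa in ("any", sa)
                      and f_da in ("any", da)):
                  return action
          return "deny"

      filters = [
          ("deny",   "tcp",  "2001:DB8:1::1",  "2001:DB8:1::2"),
          ("permit", "tcp",  "2001:DB8:99::1", "2001:DB8:99::2"),
          ("deny",   "ipv6", "any",            "any"),
      ]

      print(first_match(filters,
            ("tcp", "2001:DB8:99::1", "2001:DB8:99::2")))  # permit
      print(first_match(filters,
            ("udp", "2001:DB8:99::1", "2001:DB8:99::2")))  # deny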
7.  Benchmarking Tests

   This document recommends the same benchmarking tests described in
   RFC2544, while observing the DUT setup and traffic setup
   considerations described above.  The following sections state the
   test types explicitly and highlight only the methodology
   differences that might exist with respect to those described in
   Section 26 of RFC2544.  These tests are applicable to native IPv6
   forwarding as well as tunneled IPv6 traffic.  It is important to
   note, however, that in the case of tunneling mechanisms, network
   elements perform two lookups, one on IPv6 and the other on IPv4.
   This observation MUST be considered when comparing results obtained
   for the two types of IPv6 forwarding (native and tunneled).

7.1.  Throughput

   Objective: To determine the DUT throughput as defined in RFC1242.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.  While reporting the network
   element performance at the minimum frame size might be of interest,
   the performance at the IMIX average frame size or higher SHOULD
   also be reported, as that data point has more useful value in
   practice.
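   For reference, the RFC2544 throughput procedure searches for the
   highest offered rate that results in zero frame loss.  The sketch
   below outlines one common form of that search; send_trial is a
   stand-in for a real tester API and is assumed to return the number
   of frames lost in a trial at the offered rate.

      # Sketch: RFC2544-style binary search for throughput.
      # send_trial(rate_fps) -> frames lost (assumed tester callback).

      def throughput_search(send_trial, line_rate_fps,
                            resolution_fps=1.0):
          """Highest frame rate (fps) with zero frame loss."""
          low, high = 0.0, float(line_rate_fps)
          best = 0.0
          while high - low > resolution_fps:
              rate = (low + high) / 2
              if send_trial(rate) == 0:   # no loss: search higher
                  best, low = rate, rate
              else:                       # loss observed: search lower
                  high = rate
          return best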
7.2.  Latency

   Objective: To determine the latency as defined in RFC1242.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.

7.3.  Frame Loss

   Objective: To determine the frame loss rate, as defined in RFC1242,
   of a DUT throughout the entire range of input data rates and frame
   sizes.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.

7.4.  Back-to-Back Frames

   Objective: To characterize the ability of a DUT to process
   back-to-back frames as defined in RFC1242.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.

7.5.  System Recovery

   Objective: To characterize the speed at which a DUT recovers from
   an overload condition.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.

7.6.  Reset

   Objective: To characterize the speed at which a DUT recovers from a
   device or software reset.

   Procedure: Same as RFC2544.

   Reporting Format: Same as RFC2544.

8.  IANA Considerations

   The value of reserving an IPv6 address range for the Benchmarking
   Methodology Working Group should be evaluated.  The use of the IPv6
   prefix range reserved by IANA for documentation is sufficient for
   the purposes of the test methodology described in this document.

9.  Security Considerations

   There are no security issues that are or need to be addressed in
   this document.

10.  Conclusions

   The Benchmarking Methodology for Network Interconnect Devices
   document, RFC2544 [4], is for the most part applicable to
   evaluating the IPv6 performance of network elements.  This document
   addresses the IPv6-specific requirements that MUST be observed when
   applying the recommendations of RFC2544.  These additional
   requirements stem from the architectural characteristics of IPv6.
   This document is not a replacement of, but a complement to,
   RFC2544.

11.  Acknowledgements

   The authors acknowledge the work done by Cynthia Martin and Jeff
   Dunn with respect to defining the terminology for IPv6
   benchmarking.  The authors would also like to thank Benoit
   Lourdelet.

12.  References

12.1.  Normative References

12.2.  Informative References

   [1]  Bradner, S., "Benchmarking Terminology for Network
        Interconnection Devices", RFC 1242, July 1991.

   [2]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
        (IPv6) Specification", RFC 2460, December 1998.

   [3]  Narten, T., Nordmark, E., and W. Simpson, "Neighbor Discovery
        for IP Version 6 (IPv6)", RFC 2461, December 1998.

   [4]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
        Network Interconnect Devices", RFC 2544, March 1999.

   [5]  Huston, G., Lord, A., and P. Smith, "IPv6 Address Prefix
        Reserved for Documentation", RFC 3849, July 2004.

Authors' Addresses

   Ciprian Popoviciu
   Cisco Systems
   Kit Creek Road
   RTP, North Carolina  27709
   USA

   Phone: 919 787 8162
   Email: cpopovic@cisco.com

   Ahmed Hamza
   Cisco Systems
   3000 Innovation Drive
   Kanata  K2K 3E8
   Canada

   Phone: 613 254 3656
   Email: ahamza@cisco.com

   Gunter Van de Velde
   Cisco Systems
   De Kleetlaan 6a
   Diegem  1831
   Belgium

   Phone: +32 2704 5473
   Email: gunter@cisco.com

   Diego Dugatkin
   IXIA
   26601 West Agoura Rd
   Calabasas  91302
   USA

   Phone: 818 444 3124
   Email: diego@ixiacom.com

   Bill Kine
   Spirent
   1515 Seal Way
   Seal Beach  90740
   USA

   Phone: 562 598 0631
   Email: Bill.Kine@SpirentCom.COM

Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Disclaimer of Validity

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE
   OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY
   IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
   PURPOSE.

Copyright Statement

   Copyright (C) The Internet Society (2006).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78, and except as set forth therein, the authors retain all
   their rights.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.