Internet Draft Document                                   Olen Stokes
Provider Provisioned VPN WG                          Extreme Networks
                                                        Vach Kompella
                                                     TiMetra Networks
                                                          Giles Heron
                                                  PacketExchange Ltd.
                                                        Yetik Serbest
                                                   SBC Communications
Expires December 2003                                       June 2003

          Testing Hierarchical Virtual Private LAN Services
            draft-stokes-vkompella-ppvpn-hvpls-oam-02.txt

Status of this Memo

   This document is an Internet-Draft and is in full conformance
   with all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   This document describes a methodology for testing the operation,
   administration and maintenance (OA&M) of a general VPN service,
   which is applied here to Hierarchical Virtual Private LAN Services
   (HVPLS) as described in [VPLS].  As part of this methodology, the
   MPLS ping concepts described in [LSP-PING] are extended to enable
   HVPLS spoke-to-spoke connectivity testing.  A method to provide
   the information necessary for this spoke-to-spoke OA&M is also
   proposed.
These are the goals of this draft:

   - checking connectivity between "service-aware" nodes of a
     network,
   - verifying data plane and control plane integrity,
   - verifying service membership

There are two specific requirements to which we call attention
because of their seemingly contradictory nature:

   - the checking of connectivity MUST involve the ability to use
     packets that look like customer packets
   - the OAM packets MUST NOT propagate beyond the boundary of the
     provider network

1 Table of Contents

   1     Table of Contents.......................................2
   2     Conventions.............................................3
   3     Placement of this Memo in Sub-IP Area...................3
   3.1   RELATED DOCUMENTS.......................................3
   3.2   WHERE DOES THIS FIT IN THE PICTURE OF THE SUB-IP WORK...4
   3.3   WHY IS IT TARGETED AT THIS WG...........................4
   3.4   JUSTIFICATION...........................................4
   4     Changes since last revision.............................4
   5     Overview................................................4
   5.1   Connectivity verification...............................5
   5.2   Service Verification....................................5
   5.3   Topology Discovery......................................5
   5.4   Performance Monitoring..................................6
   6     Terminology.............................................6
   7     Structural Model........................................6
   7.1   Identification..........................................6
   7.2   Addressing..............................................6
   7.3   FIB Traversal...........................................7
   7.4   Intermediate Failure Processing.........................7
   7.5   Control and Data Plane..................................7
   8     OA&M Functions..........................................7
   8.1   VPLS ping...............................................8
   8.1.1 Hierarchical L2 Circuit FEC Stack Sub-Type..............9
   8.1.2 L2 specific Sub-TLVs....................................9
   8.1.3 Reply mode.............................................11
   8.1.4 VPLS ping encapsulation................................11
   8.1.5 Egress node control plane processing...................12
   8.1.6 Error Code TLV.........................................13
   8.2   VPLS traceroute........................................14
   8.2.1 HVPLS operational requirements.........................14
   8.2.2 VC FEC label TTL processing............................14
   8.2.3 HVPLS spoke node considerations........................15
   8.2.4 VPLS traceroute format.................................16
   8.2.5 VPLS traceroute procedures.............................18
   8.3   Vendor-specific extensions.............................19
   9     Distribution of VPLS node information..................20
   9.1   Ethernet VPLS node TLV.................................20
   9.2   Ethernet VPLS node procedures..........................21
   9.3   Operation of limited HVPLS nodes.......................22
   10    General VPLS testing procedures........................22
   11    Load sharing considerations............................24
   12    Security Considerations................................26
   13    Scalability Issues.....................................27
   14    Intellectual Property Considerations...................27
   15    Full Copyright Statement...............................27
   16    Acknowledgments........................................28
   17    Normative References...................................28
   18    Informative References.................................28
   19    Authors' Addresses.....................................29

2 Conventions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   RFC 2119.
3 Placement of this Memo in Sub-IP Area

3.1 RELATED DOCUMENTS

   http://search.ietf.org/internet-drafts/draft-lasserre-vkompella-ppvpn-vpls-04.txt
   http://search.ietf.org/internet-drafts/draft-ietf-mpls-lsp-ping-02.txt
   http://search.ietf.org/internet-drafts/draft-ietf-pwe3-control-protocol-02.txt
   http://search.ietf.org/internet-drafts/draft-ietf-pwe3-iana-allocation-00.txt
   http://search.ietf.org/internet-drafts/draft-ietf-pwe3-ethernet-encap-02.txt
   http://search.ietf.org/internet-drafts/draft-ietf-mpls-rsvp-lsp-fastreroute-00.txt
   http://www.ietf.org/rfc/rfc3032.txt
   http://www.ietf.org/rfc/rfc3036.txt
   http://www.ietf.org/rfc/rfc3209.txt

3.2 WHERE DOES THIS FIT IN THE PICTURE OF THE SUB-IP WORK

   PPVPN

3.3 WHY IS IT TARGETED AT THIS WG

   The charter of the PPVPN WG includes L2 VPN services, and this
   draft specifies a method for testing Ethernet VPLS services over
   MPLS.

3.4 JUSTIFICATION

   There is no Internet document that fully provides a method for
   testing a Hierarchical VPLS.

4 Changes since last revision

   - incorporated changes to [LSP-PING]
   - incorporated changes to [VPLS]
   - incorporated changes to [PWE3-CTRL]

5 Overview

   This document describes a methodology for the operation,
   administration and maintenance (OA&M) of a Hierarchical VPLS
   (HVPLS) as described in [VPLS].  Service providers wanting to
   deploy such a VPLS need to be able to test its operation across
   the entire hierarchy of the VPLS.  While today's standards are
   able to test parts of an HVPLS, there are some aspects that
   cannot be tested.

   This OA&M methodology requires extensions to the MPLS ping
   concepts described in [LSP-PING] to enable VPLS spoke-to-spoke
   connectivity testing.  It also requires extensions to the HVPLS
   concepts in [VPLS].
We define the following four functions that are needed to provide
OA&M for services:

   - connectivity verification
   - service verification
   - topology discovery
   - performance monitoring

5.1 Connectivity verification

   There are five logical steps to verifying the operation of an
   HVPLS:

   - verify that all HVPLS nodes are operational
   - verify that all HVPLS peer sessions are operational
   - verify that all HVPLS tunnel LSPs are transporting data packets
     correctly
   - verify that data packets can be exchanged between any two
     spokes in the HVPLS
   - verify that actual customer devices can communicate with
     devices at any site

   Existing tests can verify the first three steps.  This document
   describes how to address the final two steps.

5.2 Service Verification

   Service verification determines whether the service-aware nodes
   have been consistently configured for the service.  These
   extensions will be described in a later version.

5.3 Topology Discovery

   Topology discovery determines the current layout of a VPN
   service, e.g., the PEs and MTUs participating in a VPLS.  These
   extensions will be described in Section 9.

5.4 Performance Monitoring

   Performance monitoring, such as determining round-trip times,
   jitter, average packet throughput, etc., will be described in a
   future version.

6 Terminology

   The following terms are used in this text:

   - Service point: a provider device that has knowledge of the
     service.  We note here that the transport tunnel, while an
     integral part of the service, only serves to carry the service.
     In contrast, service-aware nodes participate in the signaling
     and maintenance of the VPN service.
   - Service ping: a protocol for testing end-to-end service point
     connectivity through the data plane
   - Service traceroute: a protocol for identifying intermediate
     service points in an end-to-end service

7 Structural Model

   We describe below the structural model that implements OA&M for
   VPN services.
The encapsulation and traversal methodology is described.  We note
that while it is not possible to make an OA&M packet look
indistinguishable from customer frames, a reasonable verisimilitude
may be maintained.  This allows the lookup methods and encapsulation
used in the data plane to be preserved.

7.1 Identification

   We use loopback or "always on" IP addresses to identify the
   service points.  This allows service points to address each
   other, which is especially important in case of last-ditch error
   reporting.

7.2 Addressing

   We require the addressing in the OA&M packets to be configurable
   to match a customer VPN address, whether a MAC address, DLCI,
   private IP address, etc.  While addresses in the service provider
   network MAY be usable, they may not have relevance in proving
   that customer packets can traverse the service domain.

7.3 FIB Traversal

   We require that OA&M packets following the data plane traverse
   the service points using the forwarding lookup tables just as
   customer packets do.

7.4 Intermediate Failure Processing

   Failures at intermediate service points should be reported back
   using the control plane.  This provides the basis for a
   traceroute function.

7.5 Control and Data Plane

   Typically, connectivity should first be checked against the data
   plane.  If the request packet makes it to the destination service
   point, the reply packet should be sent along the data plane.
   Otherwise, after some interval, the sender should send another
   packet along the data plane, requesting a reply back on the
   control plane.  If this fails, a final attempt may be made, with
   the request sent along the control plane and the reply back along
   the control plane.  If this fails, then the network is probably
   partitioned.

8 OA&M Functions

   The OA&M functions described below have specific reference to a
   particular VPN service, i.e., HVPLS.  A later revision will deal
   with general VPN service OA&M.
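The escalation sequence of Section 7.5 can be sketched as follows.
This is an illustrative sketch only: the `send_data_plane` and
`send_control_plane` callbacks are hypothetical placeholders for a
node's transmit-and-wait logic, not anything defined by this draft.

```python
# Sketch of the Section 7.5 escalation: request/reply on the data
# plane, then data-plane request with a control-plane reply, then
# control plane for both.  Each hypothetical callback takes the
# requested reply mode and returns True when a matching reply
# arrives before its timeout.

REPLY_DATA, REPLY_CONTROL = "data", "control"

def check_connectivity(send_data_plane, send_control_plane):
    """Return the (sender, reply_mode) pair that succeeded, or None
    if all three attempts fail (network probably partitioned)."""
    attempts = [
        (send_data_plane, REPLY_DATA),       # 1: data plane both ways
        (send_data_plane, REPLY_CONTROL),    # 2: reply via control plane
        (send_control_plane, REPLY_CONTROL), # 3: control plane both ways
    ]
    for send, reply_mode in attempts:
        if send(reply_mode):
            return (send.__name__, reply_mode)
    return None
```

The ordering matters: a success at step 1 proves the data path in
both directions, while a first success at step 2 isolates the fault
to the return path.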
There are two parts to connectivity verification.  The first is
ensuring that a particular remote service point is reachable.  The
second is diagnosing failures when that service point is not
reachable.

The primary goal of connectivity verification is to check data
plane consistency.  This requires intermediate service points to
forward OAM messages using FIB lookups, next hop table matches,
etc., just as they would for a conventional customer packet.  We
call this the VPLS (service) ping function.

A secondary goal of connectivity verification is to provide
information about the service points in the service.  We call this
the VPLS (service) traceroute function.

In addition, it is good to have the ability to send replies through
the control plane, so as to allow for error reporting when all else
fails.

8.1 VPLS ping

   Verifying that data packets can be exchanged between any two
   spokes in the HVPLS detects problems with the "bridging" aspects
   of the HVPLS.  Doing a complete test requires each spoke node to
   verify that it can exchange packets with every other spoke node
   in the HVPLS.

   This requires that each spoke node know the MAC addresses of all
   of the other spokes.  Using this information, a spoke can
   encapsulate an MPLS ping frame using the same label stack as
   normal customer data, and then send the packet to a remote peer.
   When the packet is un-encapsulated at the remote end, the L2
   destination MAC address will be that of the remote spoke and the
   IP address will be in the range 127/8 [LSP-PING].

   Such a VPLS ping requires defining a new Target FEC Stack
   Sub-Type in [LSP-PING].  This type is defined as a "Hierarchical
   L2 circuit."  The complete list of Target FEC Sub-Types is shown
   below.
      Sub-Type #     Length     Value Field
      ----------     ------     -----------
               1          5     LDP IPv4 prefix
               2         17     LDP IPv6 prefix
               3         20     RSVP IPv4 Session Query
               4         56     RSVP IPv6 Session Query
               5                Reserved
               6         13     VPN IPv4 prefix
               7         25     VPN IPv6 prefix
               8         14     L2 VPN endpoint
               9         10     L2 circuit ID
              10   variable     Hierarchical L2 circuit ID

   New Layer 2 specific Sub-TLVs to be used with a Hierarchical L2
   Circuit FEC Stack Sub-Type are also required.  A new reply mode
   is also needed to specify if the reply should use the HVPLS,
   i.e., the data plane.

   For security purposes, a service point receiving a VPLS ping
   SHOULD check the sender information to ensure that the sender is
   known to be in the HVPLS.  See Section 12 for further discussion.

8.1.1 Hierarchical L2 Circuit FEC Stack Sub-Type

   The proposed Hierarchical L2 Circuit FEC Stack Sub-Type TLV is
   shown below.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                             VC ID                             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      Encapsulation Type       |         Must Be Zero          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                L2 specific Sub-TLV (Optional)                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The VC ID is the same as contained in the VC FEC TLV in [VPLS].
   The Encapsulation Type is the same Type field discussed in
   [PWE3-CTRL] and defined in [PWE3-IANA].

   Please note that [PWE3-CTRL] now defines two methods for
   identifying a pseudowire (PWid and Generalized ID).  Should
   [VPLS] be modified as a result, the above method of identifying
   the desired pseudowire will be subsequently updated.

8.1.2 L2 specific Sub-TLVs

   The "L2 specific Sub-TLVs" use the same Type field as defined in
   [PWE3-IANA].
These are:

      VC Type     Description
      -------     -----------
      0x0001      Frame Relay DLCI
      0x0002      ATM AAL5 VCC transport
      0x0003      ATM transparent cell transport
      0x0004      Ethernet VLAN
      0x0005      Ethernet
      0x0006      HDLC
      0x0007      PPP
      0x8008      CEM
      0x0009      ATM n-to-one VCC cell transport
      0x000A      ATM n-to-one VPC cell transport
      0x000B      IP Layer2 Transport
      0x000C      ATM one-to-one VCC cell transport
      0x000D      ATM one-to-one VPC cell transport
      0x000E      ATM AAL5 PDU VCC transport

   [VPLS] specifies that an Ethernet VC type is used for HVPLS.  For
   HVPLS, the L2 specific Sub-TLV should contain sufficient
   information to identify the target remote spoke and to allow the
   remote spoke to be able to respond.  The following Sub-TLV is
   proposed:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         Type (0x0005)         |            Length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 Target Ethernet MAC address                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                               |          802.1Q Tag           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 Sender Ethernet MAC address                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                               |          802.1Q Tag           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type

      This field MUST be set to a value of 0x0005 for Ethernet VC.

   Length

      This field specifies the total length of the Value fields in
      octets.

   Target Ethernet MAC address

      This field specifies the target Ethernet MAC of the remote
      spoke or customer device.  It is included here to assist
      implementations when processing echo replies and VPLS
      traceroute as discussed below.

   Sender Ethernet MAC address

      This field specifies the sender's Ethernet MAC that can be
      used in a reply.

   802.1Q Tag

      These fields specify the 802.1Q Tag associated with the Target
      and Sender Ethernet MACs described above.
If the Ethernet address is associated with a VLAN, the 802.1Q Tag
field MUST be set to the VLAN tag.  If the MAC address is not
associated with a VLAN (untagged), it MUST be set to zero.  Since
an 802.1Q tag is 12 bits, the high 4 bits of the fields MUST be set
to zero.

8.1.3 Reply mode

   [LSP-PING] defines multiple reply modes for MPLS pings.  For
   example, reply mode Value 2, "Reply via an IPv4 UDP packet,"
   allows the reply to be sent via an MPLS LSP but does not require
   it.

   For a VPLS ping, an additional reply mode is required.  Since a
   VPLS implies bidirectional operation, a reply mode is required
   that specifies that the reply packet be sent using the VPLS.
   Therefore, Value 5, "Reply via a VPLS IPv4 UDP packet," is
   proposed.  The complete list of reply modes is shown below.

      Value    Meaning
      -----    -------
          1    Do not reply
          2    Reply via an IPv4 UDP packet
          3    Reply via an IPv4 UDP packet with Router Alert
          4    Reply via the control plane
          5    Reply via a VPLS IPv4 UDP packet

   This provides four different reply methods to help in failure
   detection.  If the ping is successful using reply mode Value 2,
   but not successful using reply mode Value 5, this indicates that
   the ping is reaching the remote peer but there is a problem with
   the VPLS return path.

8.1.4 VPLS ping encapsulation

   It is desirable for both operational and security reasons to be
   able to easily recognize in the data plane that a received packet
   is a VPLS ping.  Therefore, when a VPLS ping is encapsulated, a
   special label is added below the VC FEC label.

   This method confines the checking for an OAM packet to the MPLS
   label stack.  Since the bottom-of-stack bit SHOULD be checked on
   every VC label, regular customer packets incur no additional
   processing.  But when the bottom-of-stack bit is off, either
   something has gone wrong or the packet is an OAM packet.

   It is proposed that the special label added be the Router Alert
   Label (value of 1) as defined in [LABEL-ENC].
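As a sketch of this encapsulation, the following builds a label
stack in which the Router Alert Label is pushed below the VC FEC
label, so the VC label's bottom-of-stack (S) bit is clear.  The
32-bit label stack entry layout (label:20, EXP:3, S:1, TTL:8)
follows [LABEL-ENC]; the function names and example label values
are illustrative, not part of this proposal.

```python
import struct

ROUTER_ALERT_LABEL = 1  # [LABEL-ENC]

def label_stack_entry(label, exp, s, ttl):
    """Encode one 32-bit MPLS label stack entry per [LABEL-ENC]:
    label (20 bits), EXP (3 bits), bottom-of-stack S (1 bit),
    TTL (8 bits), in network byte order."""
    return struct.pack("!I", (label << 12) | (exp << 9) | (s << 8) | ttl)

def vpls_ping_stack(tunnel_label, vc_label, ttl=255):
    """Tunnel label, then the VC FEC label with S=0, then the Router
    Alert Label with S=1.  A receiver that sees S=0 on the VC label
    knows an OAM label follows, without parsing past the stack."""
    return (label_stack_entry(tunnel_label, 0, 0, ttl) +
            label_stack_entry(vc_label, 0, 0, ttl) +
            label_stack_entry(ROUTER_ALERT_LABEL, 0, 1, ttl))
```

A customer data packet would carry the same tunnel and VC labels
but with the S bit set on the VC label, which is what makes the OAM
case cheap to detect in the data plane.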
It should be noted that this requires special handling of the
Router Alert Label, as it is not normally present at the bottom of
the label stack.

Note that this approach affects how the data plane is checked in
load sharing cases where the label stack is used in the load
sharing hashing algorithm (see Section 11 for more details).

As a packet exits a tunnel LSP, the node first checks if the
destination MAC address is a MAC belonging to this node.  Note that
it also checks its forwarding database associated with this HVPLS
to see if the target MAC is associated with a locally attached
customer network.  If the MAC is found to be local, then this node
is the egress node for this VPLS ping.  The packet SHOULD be passed
to the node's control plane.  If it cannot be passed to the control
plane, the packet SHOULD be discarded.  The packet MUST NOT be
forwarded to the customer network.  Note that the rate at which
packets are passed to the control plane should be regulated to
prevent overloading.

8.1.5 Egress node control plane processing

   When an egress node control plane receives a VPLS ping packet, it
   checks the Target Ethernet MAC address field to determine if it
   is a MAC address belonging to that node.  If it is, a successful
   reply is returned.

   If the MAC address does not belong to that node, an additional
   check is made of the forwarding database associated with the
   HVPLS indicated by the VC ID field.  If the MAC is present and is
   associated with a locally attached customer network for this
   HVPLS, a successful reply is returned.  If not, then the TTL in
   the VC FEC label is checked to determine if this is a VPLS
   traceroute packet as described in Section 8.2.

   The replies described above require new MPLS ping Return Code
   information.  To avoid defining codes that are specific to HVPLS,
   an encoding of the Error Code TLV is proposed.  This encoding is
   shown in Section 8.1.6.
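The egress decision in Sections 8.1.4 and 8.1.5 can be sketched as
follows.  The names `local_macs` and `fib` are hypothetical stand-ins
for a node's own MAC set and its per-HVPLS forwarding database; the
draft does not define these structures.

```python
# Sketch of egress-node processing for a received VPLS ping.
# local_macs: set of MAC addresses owned by this node.
# fib: dict mapping vc_id -> {mac: "local" or peer-id}, where
# "local" marks a MAC learned from a locally attached customer
# network.  Both names are illustrative.

def egress_disposition(target_mac, vc_id, local_macs, fib):
    """Classify a VPLS ping whose Target Ethernet MAC reached this
    node: reply success if the MAC is the node's own or locally
    attached; otherwise fall through to the TTL check that
    distinguishes a VPLS traceroute (Section 8.2)."""
    if target_mac in local_macs:
        return "reply-success"        # MAC belongs to this node
    entry = fib.get(vc_id, {}).get(target_mac)
    if entry == "local":
        return "reply-success"        # locally attached customer MAC
    return "check-ttl"                # possibly a VPLS traceroute
```

In a real implementation the "check-ttl" branch would also be where
rate limiting toward the control plane applies, as noted above.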
Note that this method of operation allows VPLS ping to check for
both remote node MACs and remote customer device MACs.

When a VPLS ping target MAC address is unknown in a core node, the
VPLS ping packet is flooded.  As such, it potentially reaches other
spoke nodes in addition to the spoke where the destination is
actually present.

8.1.6 Error Code TLV

   The proposed Error Code TLV for use with [LSP-PING] is shown
   below.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         Type (0x0004)         |            Length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Error Code Sub-TLV                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type

      This field MUST be set to a value of 0x0004 for Error Code
      TLV.

   Length

      This field specifies the total length of the Value fields in
      octets.  In this case the Value field is an Error Code
      Sub-TLV.

   Error Code Sub-TLV

      The proposed encoding for these Sub-TLVs is shown below.  The
      Error Code Sub-TLV uses the Target FEC Sub-Type values defined
      in [LSP-PING] as the type field value.  The proposed encoding
      for a Hierarchical L2 Circuit Sub-Type is shown below.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         Type (0x000A)         |            Length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |          Error Code           |         Must Be Zero          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type

      This field MUST be set to a value of 0x000A for Hierarchical
      L2 Circuit.

   Length

      This field specifies the total length of the Value field in
      octets.  In this case the Value field is an Error Code.
   Error Code

      The following codes are defined:

      Value    Meaning
      -----    -------
          1    Replying router is the egress for the given MAC
               address
          2    Replying router is not a member of the indicated
               HVPLS
          3    Replying router is a member of the indicated HVPLS,
               but is not one of the "Downstream Routers"
          4    Replying router is one of the "Downstream Routers"
               and it has the given MAC address in its forwarding
               database (but it is not the egress)
          5    Replying router is one of the "Downstream Routers"
               but the label mapping is incorrect
          6    Replying router is one of the "Downstream Routers"
               but it does not have the given MAC address in its
               forwarding database
          7    Replying router is one of the "Downstream Routers"
               and is a member of the indicated HVPLS, but it does
               not have the given MAC address and is unable to flood
               the request

   The Value 7 shown above is explained in Section 8.2.4.

8.2 VPLS traceroute

   When a VPLS ping using reply mode 5 (Reply via a VPLS IPv4 UDP
   packet) is unsuccessful, additional capabilities are required to
   identify the problem location.  A VPLS traceroute is therefore
   proposed.

8.2.1 HVPLS operational requirements

   To provide an HVPLS traceroute capability, the basic operation of
   an HVPLS needs further definition.  In particular, the handling
   of the TTL field in VC FEC labels needs to be defined.

8.2.2 VC FEC label TTL processing

   A data packet normally goes from an HVPLS spoke node to an HVPLS
   core node.  In the core HVPLS node, the VC FEC label received
   from the spoke node is used to determine the HVPLS with which the
   encapsulated frame is associated.  The frame is un-encapsulated
   and a check is made for the frame's destination MAC address.  If
   the destination MAC is known, the frame is forwarded to the core
   peer from which the MAC was learned [VPLS].
As the frame is forwarded, the frame is again encapsulated using
the VC FEC label received from the peer core node.  A similar
process occurs at the remote core peer.  The VC FEC label is
checked, the frame is un-encapsulated, the destination MAC is
checked, and the frame is again encapsulated and sent to the remote
spoke node.  The packet containing the newly encapsulated frame
uses the VC FEC label received from the destination spoke node.

Each time that this process occurs, the received VC FEC label TTL
value SHOULD be decremented and the result placed in the outgoing
VC FEC label TTL field.  If the resulting value is 0, the packet
SHOULD be passed to the node's control plane.  If it cannot be
passed to the control plane, the packet SHOULD be discarded.  Note
that the rate at which packets are passed to the control plane
should be regulated to prevent overloading.

If the destination MAC is not known and the packet is flooded to
multiple peer nodes or multiple spoke nodes, the same TTL described
above is placed in each VC FEC label.

In normal operation, the TTL value in the VC FEC label should be
set to a value larger than the number of "HVPLS Hops" through which
the data packet will pass, e.g., 255.  Note that this mode of
operation also provides some protection against the effects of
loops at the HVPLS level.

8.2.3 HVPLS spoke node considerations

   [VPLS] specifies that spoke nodes signal Ethernet encapsulation
   when signaling a VC FEC to a core node.  [PWE3-ENET] specifies
   that such spoke nodes will normally set a value of 2 for the VC
   FEC label TTL.  With the TTL processing described in Section
   8.2.2, this would cause packets to be erroneously discarded
   inside an HVPLS.  Therefore, we require the TTL setting at spoke
   nodes to be changed to 255 by default.

   As further discussed in Section 12, security considerations also
   require a change to detect OAM packets.
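The per-hop TTL handling of Section 8.2.2 can be sketched as
follows.  The function name and return values are illustrative; a
real forwarder would also apply the rate limiting toward the
control plane noted above.

```python
def forward_vc_ttl(received_ttl):
    """Apply Section 8.2.2 TTL processing at an HVPLS node:
    decrement the received VC FEC label TTL; if the result is 0,
    punt the packet to the control plane (or discard it if that is
    not possible); otherwise forward with the decremented TTL.
    Returns an (action, outgoing_ttl) pair."""
    ttl = received_ttl - 1
    if ttl <= 0:
        return ("punt-to-control-plane", 0)
    return ("forward", ttl)
```

With this processing, a spoke originating OAM (or data) packets
SHOULD use a TTL of 255 as required in Section 8.2.3; the value of
2 that [PWE3-ENET] would otherwise suggest is exhausted after the
first core hop.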
8.2.4 VPLS traceroute format

   A VPLS traceroute uses the same echo request packet described in
   Section 8.1 for a VPLS ping.  The echo request packet SHOULD
   contain one or more Downstream Mapping TLVs [LSP-PING].  In this
   case the Downstream Router ID field contains the IPv4 address of
   the next HVPLS node that would be used to reach the Target
   Ethernet MAC in the HVPLS indicated in the Hierarchical L2
   Circuit FEC Stack Sub-Type.

   The Downstream Label field contains the label stack used to reach
   the downstream peer.  This includes the label(s) for the
   underlying tunnel LSP and the VC FEC label from the targeted LDP
   session with the downstream peer.  The Protocol field for the VC
   FEC label indicates a value of 3 for (targeted) LDP.

   When the indicated Target Ethernet MAC is not known and a packet
   with this destination MAC would be flooded, the information for
   all HVPLS peers to which the packet would be flooded is added.
   For the case where the packet cannot be flooded (such as a limit
   on MAC addresses that has been exceeded for this HVPLS), a new
   MPLS ping Return Code is defined.  This new Value 7 is shown in
   Section 8.1.6.

   For the case where the destination MAC address is known, but the
   packet would not be forwarded, no Downstream Mapping TLV is
   included.  For example, the packet may have been received at one
   core node from another core node, but the MAC address was
   previously learned from a different core node.

   The Downstream Mapping TLV in [LSP-PING] includes a Hash Key Type
   to address ECMP considerations.  The Hash Keys defined deal with
   IP addresses and MPLS labels.  In an HVPLS, it is possible that a
   hash may be performed on the source and/or destination MAC
   addresses of the encapsulated L2 frame.  See Section 11 for a
   discussion of load sharing or ECMP with a MAC-based hash.  For
   the case where a hash is performed on MAC address(es) including
   the destination MAC address, new MAC-based Hash Keys are
   proposed.
The resulting list of Hash Keys is shown below.

      Hash Key Type           IP Address or Next Label
      --------------------    ------------------------
      0  no multipath         (nothing; M = 0)
      1  label                M labels
      2  IP address           M IP addresses
      3  label range          M/2 low/high label pairs
      4  IP address range     M/2 low/high address pairs
      5  no more labels       (nothing; M = 0)
      6  all IP addresses     (nothing; M = 0)
      7  no match             (nothing; M = 0)
      8  MAC address          M/2 MAC addresses
      9  MAC address range    M/4 low/high MAC pairs

   This results in the Downstream Mapping format shown below.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                   Downstream IPv4 Router ID                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      MTU      | Address Type  |           DS Index            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 Downstream Interface Address                  |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   | Hash Key Type |  Depth Limit  |       Multipath Length        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |            IP Address or Next label or MAC Address            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   .                                                               .
   .     (more IP Addresses or Next labels or MAC Addresses)       .
   .                                                               .
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |          Downstream Label             |    Protocol           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   .                                                               .
   .                                                               .
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |          Downstream Label             |    Protocol           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   In the case of a MAC address, the following format is used for
   the "IP Address or Next label or MAC Address" field(s):

   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                Destination Ethernet MAC address               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                               |          802.1Q Tag           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   The semantics of the new Hash Key Types and the associated MAC
   Address(es) are shown below.

      Value    Meaning
      -----    -------
          8    a list of single MAC addresses is provided, any one
               of which will cause the hash to match this MP path
          9    a list of MAC address ranges is provided, any one of
               which will cause the hash to match this MP path

   See Section 11 for additional ECMP considerations.

8.2.5 VPLS traceroute procedures

   To create a traceroute, an echo request is created using the
   destination Ethernet MAC address of the remote spoke or customer
   device.  The TTL in the VC FEC label is set successively to 1, 2,
   3, ...  As with MPLS traceroute, the echo request SHOULD contain
   one or more Downstream Mapping TLVs.

   For TTL=1, all the peer nodes (and corresponding VC FEC labels)
   for the sender with respect to the remote spoke being pinged
   SHOULD be sent in the echo request.  As the echo request may be
   flooded to multiple nodes, the sending node may receive replies
   from multiple remote nodes.  Thus, for n>1, the Downstream
   Mapping TLVs from all of the received echo replies for TTL=(n-1)
   are copied to the echo request with TTL=n.  Note that this allows
   an operator to determine which remote nodes would receive a flood
   of an actual customer data packet destined to the target MAC
   address.

   As a packet exits a tunnel LSP, the node first checks if the
   destination MAC address is a MAC belonging to this node.
Note that it also checks its forwarding database associated with
this HVPLS to see if the target MAC is associated with a locally
attached customer network.  If the MAC is found to be local, then
this node is the egress node for this VPLS traceroute.  The packet
SHOULD be passed to the node's control plane.

If the MAC is not for this node, it then checks the TTL value of
the VC FEC label.  If the value is 1, the packet SHOULD be passed
to the node's control plane.

If a packet cannot be passed to the control plane, the packet
SHOULD be discarded.  The packet MUST NOT be forwarded to the
customer network.  Note that the rate at which packets are passed
to the control plane should be regulated to prevent overloading.

In the control plane, an echo reply is created as described in
[LSP-PING] and Section 8.1.  This includes completing the
Downstream Mapping TLV as described in Section 8.2.4.  The reply is
sent based on the value indicated in the Reply Mode field of the
echo request.

For security purposes, before replying to a VPLS traceroute, a node
SHOULD check the sender information to ensure that the sender is
known to be in the specified HVPLS.  See Section 12 for further
discussion.

If the MAC is not for this node, and the value of the VC FEC label
TTL is greater than 1, the TTL is decremented and the result is
placed in the VC FEC label TTL for the resulting packet.  This
packet is encapsulated in the same manner as a customer data packet
and then passed to the downstream node that would normally be used
to reach the indicated Ethernet MAC.  If the packet is flooded to
multiple peer nodes, this same TTL value is placed in the VC FEC
label TTL of each of the packets.

8.3 Vendor-specific extensions

   A vendor TLV is defined for vendor-specific extensions.  One of
   the issues with defining TLVs for service definitions is that
   there is no standard for service definitions.
We may be able to exchange high-level information such as
operational status, but finer details are sometimes
vendor-specific. We therefore propose a new vendor-specific TLV
type. The resulting list of TLV types is shown below.

   Type    Value Field
   ----    -----------
     1     Target FEC Stack
     2     Downstream Mapping
     3     Pad
     4     Error Code
     5     Vendor-specific

The following format is also proposed:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Type = (0x0005)        |            Length             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                  Vendor OUI                   |   Reserved    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
~                             Value                             ~
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

A vendor MAY encode vendor-specific information in these vendor
TLVs. In general, if possible, the TLVs SHOULD contain values
that are strings, so that all vendors are able to display the
values. A request or response of a service ping or service
traceroute may contain one or more vendor TLVs. Any service
point MAY ignore and drop a vendor TLV that it does not
understand.

9 Distribution of VPLS node information

A spoke node normally knows the IP address of the core node with
which it peers. It typically does not know the IP addresses or
MAC addresses of the other spokes in the network. In order to
perform the MPLS ping described in Section 8.1, a spoke must know
the MAC addresses of the other spokes.

This information could be distributed via the targeted LDP
sessions between HVPLS nodes. [VPLS] defines a MAC TLV that can
be used with an LDP Address Withdrawal Message [MPLS-LDP]. In a
similar manner, an LDP Address Message [MPLS-LDP] can be used to
distribute the required information about HVPLS nodes.

9.1 Ethernet VPLS node TLV

To distribute HVPLS node information, a new TLV is proposed for
use with an LDP Address Message.
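An encoder for the proposed vendor TLV might look as follows.
This is a sketch only: the draft does not state whether Length
counts the OUI word, so including it in the length here is an
assumption.

```python
import struct

VENDOR_TLV_TYPE = 0x0005

def encode_vendor_tlv(vendor_oui, value):
    """Pack Type (16 bits), Length (16 bits), Vendor OUI (24 bits),
    Reserved (8 bits, zero), then the Value field."""
    if isinstance(value, str):
        # String values are recommended above so that any vendor
        # can display them.
        value = value.encode("ascii")
    # OUI occupies the top 24 bits of the word; Reserved byte is 0.
    body = struct.pack("!I", vendor_oui << 8) + value
    return struct.pack("!HH", VENDOR_TLV_TYPE, len(body)) + body
```

A receiver that does not recognize the Vendor OUI can simply skip
Length bytes, which is what lets service points ignore and drop
unknown vendor TLVs.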
This new Ethernet VPLS node TLV is shown below. Note that the
node's IP address is included to allow remote nodes to correlate
the node's MAC address with its IP address. In this manner an
operator can request a VPLS ping by specifying the IP address of
the remote spoke.

For IPv4:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|U|F|          Type             |            Length             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Reserved            |        Address Family         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 IPv4 Address of VPLS node #1                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 MAC address of VPLS node #1                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |         802.1Q Tag #1         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 IPv4 address of VPLS node #2                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                 MAC address of VPLS node #2                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                               |         802.1Q Tag #2         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The scope of the Ethernet VPLS node TLV is the VPLS specified in
the FEC TLV in the Address Message.

9.2 Ethernet VPLS node procedures

When a spoke and a core node establish a new peer relationship
for an HVPLS, they exchange one or more Address Messages with the
above information. The spoke node sends information about a
single HVPLS node (itself). The IP address should be the address
by which the node is known to its peers. The MAC address should
be a MAC address of the switch such that an un-encapsulated frame
received from an HVPLS peer with this destination MAC address
will be delivered to the node's control plane.

When two core nodes establish a peer relationship, they also
exchange one or more Address Messages with the above information.
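The per-node layout of the IPv4 TLV body (4-byte address, 6-byte
MAC, 2-byte 802.1Q tag, i.e. 12 bytes per node) can be sketched as
follows; the helper name and input conventions are ours, not from
the draft.

```python
import struct

def pack_vpls_node_entries(nodes):
    """Pack (IPv4 dotted-quad, colon-hex MAC, 802.1Q tag) triples
    into the 12-byte-per-node body of the Ethernet VPLS node TLV."""
    body = bytearray()
    for ipv4, mac, tag in nodes:
        body += bytes(int(octet) for octet in ipv4.split("."))
        body += bytes.fromhex(mac.replace(":", ""))
        body += struct.pack("!H", tag)   # 802.1Q tag, network order
    return bytes(body)
```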
In this case each core sends the MAC address information for all
of the spokes to which it is connected, as well as for itself.

Returning to the case of a core establishing a peer connection
with a spoke, the core node sends to the spoke all of the MAC
addresses that it has learned from all of its core peers (as well
as for itself). In this manner, each spoke will know the MAC
addresses of all of the other spokes in the HVPLS. Note that in
large networks multiple Address Messages may be required to
ensure that media MTU size limitations are not exceeded.

When a core node loses the connection to a spoke, it instructs
all of its core peers to remove the corresponding spoke MAC
address by sending the above information in an Address Withdrawal
Message [MPLS-LDP]. The core nodes in turn pass that information
along in Address Withdrawal Messages to their spokes. When a
core node loses the connection to another core node, it instructs
all of its spoke nodes to remove all of the corresponding MAC
addresses by sending the above information in an Address
Withdrawal Message.

Using the above capabilities, a spoke node can test the data
plane operation between itself and any other spoke site in the
HVPLS.

9.3 Operation of limited HVPLS nodes

There may be some implementations that do not want to maintain a
list of remote spoke IP and MAC addresses. In such an
environment, an operator can still invoke a VPLS ping or VPLS
traceroute if the MAC address of the remote spoke or customer
device is provided. There may be some cases where an operator
knows the IP address of a remote spoke, but not the MAC address.
A method to obtain the MAC address when only the IP address is
known may be described in a future version.

10 General VPLS testing procedures

When operating in a background mode, the checks listed in
Section 5.1 are normally performed on different time scales.
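From the spoke's side, the bookkeeping for these Address and
Address Withdrawal Messages reduces to a small table keyed by
remote-node IP address. The following is a toy model under
assumed names, not an implementation mandated by the draft.

```python
class SpokeAddressTable:
    """Toy model of the IP-to-MAC table a spoke node builds from
    LDP Address Messages and prunes on Address Withdrawal Messages
    (class and method names are illustrative)."""

    def __init__(self):
        self.mac_by_ip = {}

    def on_address_message(self, entries):
        # entries: iterable of (ip, mac) pairs relayed by the core,
        # covering itself and the other spokes it has learned.
        for ip, mac in entries:
            self.mac_by_ip[ip] = mac

    def on_address_withdrawal(self, ips):
        # A core losing a spoke (or a core peer) withdraws the
        # corresponding addresses down to its spokes.
        for ip in ips:
            self.mac_by_ip.pop(ip, None)

    def target_mac(self, ip):
        # The operator names the remote spoke by IP; resolve its
        # MAC for a VPLS ping or traceroute.
        return self.mac_by_ip.get(ip)
```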
Testing HVPLS nodes and peer sessions requires minimal network
impact since this is done on a per-machine and a per-peer basis.
Also, monitoring for these failures can be done with SNMP traps.
It is important to note that these tests are expected to catch a
significant percentage of the most likely failures.

Tunnel LSPs are maintained by their associated MPLS signaling
protocols, RSVP-TE [MPLS-RSVP] or LDP. Locally detected failures
such as link outages or node failures invoke MPLS recovery
actions. These actions can include local recovery as in the case
of [RSVP-FRR]. In the case of a successful local recovery of a
tunnel LSP, the HVPLS nodes need not be notified. If a local
recovery is not possible, then RSVP-TE or LDP notifies the HVPLS
node using the tunnel LSP of the failure. This in turn can
generate an SNMP trap or an operator error message.

In addition to failures and changes detected outside of MPLS,
both MPLS signaling protocols have control plane failure
detection mechanisms. Both have Hello protocols that can detect
link failures as well as MPLS control plane failures. LDP also
has a Keep Alive protocol. These Hello and Keep Alive protocols
run on the time frame of multiple seconds. They also provide
failure notification to the HVPLS node using the tunnel. As
above, this can generate an SNMP trap or an operator error
message. In current [PWE3-CTRL] or [VPLS] environments, loss of
a targeted LDP session to a peer is normally a key operator
notification.

While a failed tunnel LSP can generate a notification as
described above, these failures can be temporary in nature due to
routing transients. MPLS ping is designed to catch failures
where the control plane believes that a tunnel LSP is operational
but the LSP is in fact not functioning correctly. Such a
corrupted LSP is much less likely to occur than an LSP going down
"properly."
When used as a background check, MPLS ping should be used in
addition to the above tunnel LSP failure detection methods and
not as a replacement. When any of these methods detects a tunnel
LSP failure, the HVPLS node can switch to another LSP if one is
available. When the failure is detected by MPLS ping, MPLS
traceroute can be used to assist in failure isolation.

VPLS ping is designed to detect problems in the "bridging"
aspects of the VPLS operation. It detects flooding and/or MAC
learning problems in the network which are not checked in the
above tests. Note that the number of possible spoke-to-spoke
tests to check an entire HVPLS can be significant. Therefore,
care should be taken when executing VPLS ping as a background
test to avoid overloading the network or the HVPLS nodes. Note
also that an individual core node's operation is checked by
multiple spoke-to-spoke checks.

When a failure is detected by VPLS ping, VPLS traceroute can be
used to assist in problem isolation.

All of the above tests check the operation of an HVPLS to the
edge of the HVPLS. It is also possible to use VPLS ping and
traceroute to check for customer device MAC addresses. While not
specified by HVPLS, there is normally additional information
available to an operator to check for problems between the edge
of the HVPLS and the customer. These include:

- the state of the local customer VLAN or port - this is the
  simplest test and will normally catch the most likely failures
- the L2 MAC entries for the local customer VLAN or port
- the HVPLS transmit/receive statistics

As described above, VPLS ping and VPLS traceroute work with
previously defined MPLS tests to provide an end-to-end test
capability for an HVPLS. In addition, extensions to [VPLS] allow
VPLS ping to operate in a background mode with knowledge of the
remote sites that need to be checked.
11 Load sharing considerations

Some implementations provide for load sharing a tunnel LSP across
multiple LSPs. Such implementations have HVPLS test
implications. When customer data entering a VPLS at an ingress
node is transmitted to another node over multiple (load sharing)
tunnel LSPs, each of these LSPs SHOULD be tested. VPLS pings and
traceroutes SHOULD be sent over each of these LSPs.

There may also be multiple load sharing tunnel LSPs between a
core node which is not the traffic ingress and a downstream node
(which may or may not be the traffic egress). At such a core
node, a decision is made as to which load sharing tunnel LSP to
use to forward an HVPLS packet. This decision is often based on
a hash of some "random" field. There are at least three options.

One option is to hash on the IP addresses of an encapsulated IP
packet. This option would potentially need to be combined with
another option to handle non-IP frames.

A second option is to hash on the label stack of the received
HVPLS packet. This forces all packets received on a tunnel LSP
for the same HVPLS to use the same load sharing tunnel LSP to the
next core node. This method distributes traffic among the load
sharing tunnel LSPs on a per-HVPLS basis.

A third option is to hash on fields of the HVPLS packet after it
has been un-encapsulated (see Section 8.1.5). Such a hash could
use the destination and source MAC addresses of the
un-encapsulated packet. Thus, traffic received on a tunnel LSP
for the same HVPLS may use any of the load sharing tunnel LSPs to
the next core node. This method distributes traffic among the
load sharing tunnel LSPs on a MAC address pair basis.

The first and third options normally produce a more optimal
distribution of packets since IP addresses and MAC addresses
should be more random than HVPLS labels.
This advantage may be somewhat reduced for the first option if
customers' data contains a significant amount of non-IP traffic.
It may also be somewhat reduced for the third option if customers
use a single router at each site to connect to the HVPLS.

The second option has an advantage from an HVPLS testing
perspective. Since the label stack for a VPLS ping or traceroute
is the same as for customer traffic, the second option ensures
that VPLS pings and traceroutes are testing all of the LSPs used
by customer data. The first option has the disadvantage that a
hash on the IP address of an encapsulated ping/traceroute packet
uses an address in the 127/8 range and not a true customer IP
address. The third option has the disadvantage that a hash on
the MAC address of a spoke node may differ from the hash on a
true customer MAC address. However, remember that actual
customer MAC addresses can be used in a VPLS ping/traceroute and
these will use the same path as the customer data when using a
MAC-based hash.

Please note that a special use of the Router Alert label is
specified in Section 8.1.4 and that this label SHOULD NOT be
included in any hash algorithm based on the label stack.
Remember from above that VPLS pings and traceroutes SHOULD be
sent using all of the load sharing tunnel LSPs at the ingress
node.

Load sharing designs and hash algorithms remain implementation
options. There are trade-offs between optimal load sharing and
testability. Of course, testing using IP ping and traceroute has
similar exposures from the effects of equal cost multipath.

The methodology described thus far provides a means to verify
that all remote nodes can be reached. It also provides an
operator with a means to verify operation for particular customer
MAC addresses. It does not provide a means to verify all load
sharing paths in an HVPLS from a single node.
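The second and third hash options discussed above can be made
concrete with a short sketch. CRC32 merely stands in for whatever
hash a real forwarding plane uses; it is an assumption for
illustration, not something this draft specifies.

```python
import zlib

def pick_lsp_by_label_stack(label_stack, lsps):
    """Option 2: hash the received label stack. All traffic of one
    HVPLS maps to one load-sharing LSP (per-HVPLS spreading).
    MPLS labels are 20-bit values, so 3 bytes each suffice."""
    key = b"".join(label.to_bytes(3, "big") for label in label_stack)
    return lsps[zlib.crc32(key) % len(lsps)]

def pick_lsp_by_mac_pair(dst_mac, src_mac, lsps):
    """Option 3: hash the un-encapsulated frame's destination and
    source MACs, spreading one HVPLS across LSPs per MAC pair."""
    key = bytes.fromhex((dst_mac + src_mac).replace(":", ""))
    return lsps[zlib.crc32(key) % len(lsps)]
```

With the label-stack variant a VPLS ping carrying the same labels
as customer data provably exercises the same load-sharing LSP;
with the MAC-pair variant that only holds when the ping targets an
actual customer MAC address.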
The Multipath information contained in the Downstream Mapping TLV
in [LSP-PING] provides additional capabilities for verifying all
load sharing paths. Use of this information in a VPLS traceroute
environment, to test all load sharing paths in an HVPLS, will be
discussed in a future version.

12 Security Considerations

For security purposes, the edge of a provider's HVPLS network is
defined to be a spoke node or a PE that has directly attached
customers. Some customers and providers may desire that the
provider edge node participate in the customer network. This
could be the case when the customer is using the provider's node
as a default gateway. In such a configuration, the provider edge
node's IP address and Ethernet MAC address are known in the
customer network. However, no other provider network information
should be exposed to the customer network. When the provider is
not furnishing a default gateway function, no provider network
information should be exposed to the customer network.

The VPLS ping and VPLS traceroute capabilities described in
Sections 8.1 and 8.2 are transported inside the HVPLS in the same
manner as customer data. This is required to properly test the
HVPLS. However, care must be taken to prevent provider network
information contained in these test packets from being exposed to
the customer network. A test packet that is forwarded to the
customer network exposes provider network information to the
customer network. Therefore, spoke nodes SHOULD always check for
such test packets as described in Section 8.1.4. Any detected
test packet SHOULD NOT be forwarded to the customer network.

Another security concern is the receipt of a VPLS ping or
traceroute from a node that is not a member of the HVPLS. Should
an HVPLS node respond to a test request from a non-HVPLS member,
the response would improperly expose provider network
information.
To prevent this from happening, the HVPLS node MAY check to
ensure that the return Ethernet MAC address is one of the MAC
addresses that it has learned using the Ethernet VPLS node TLV
described in Section 9. Note that this requires maintaining the
MAC information during the entire operation of the HVPLS.

13 Scalability Issues

In [VPLS], targeted LDP sessions are used to distribute VPLS
labels. Between core nodes, a complete mesh of targeted LDP
sessions is required.

14 Intellectual Property Considerations

This document is being submitted for use in IETF standards
discussions.

15 Full Copyright Statement

Copyright (C) The Internet Society (2001). All Rights Reserved.

This document and translations of it may be copied and furnished
to others, and derivative works that comment on or otherwise
explain it or assist in its implementation may be prepared,
copied, published and distributed, in whole or in part, without
restriction of any kind, provided that the above copyright notice
and this paragraph are included on all such copies and derivative
works. However, this document itself may not be modified in any
way, such as by removing the copyright notice or references to
the Internet Society or other Internet organizations, except as
needed for the purpose of developing Internet standards in which
case the procedures for copyrights defined in the Internet
Standards process must be followed, or as required to translate
it into languages other than English. The limited permissions
granted above are perpetual and will not be revoked by the
Internet Society or its successors or assigns.
This document and the information contained herein is provided on
an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET
ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE
OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.

16 Acknowledgments

We would like to acknowledge the contributions of Sunil
Khandekar, Kevin Frick, Charles Burton, Charlie Hudnall, Greg
Henley, and Joe Regan in the development of these ideas.

17 Normative References

[VPLS] "Virtual Private LAN services over MPLS",
draft-lasserre-vkompella-ppvpn-vpls-04.txt, March 2003 (Work In
Progress)

[LSP-PING] "Detecting Data Plane Liveliness in MPLS",
draft-ietf-mpls-lsp-ping-02.txt, March 2003 (Work In Progress)

[PWE3-CTRL] "Transport of Layer 2 Frames Over MPLS",
draft-ietf-pwe3-control-protocol-02.txt, February 2003 (Work In
Progress)

[PWE3-ENET] "Encapsulation Methods for Transport of Ethernet
Frames Over IP/MPLS Networks",
draft-ietf-pwe3-ethernet-encap-02.txt, February 2003 (Work In
Progress)

[PWE3-IANA] "IANA Allocations for Pseudo Wire Edge to Edge
Emulation (PWE3)", draft-ietf-pwe3-iana-allocation-00.txt,
February 2003 (Work In Progress)

[MPLS-LDP] Andersson, et al., "LDP Specification", RFC 3036,
January 2001

[LABEL-ENC] Rosen, et al., "MPLS Label Stack Encoding", RFC 3032,
January 2001

18 Informative References

[MPLS-RSVP] Awduche, et al., "RSVP-TE: Extensions to RSVP for LSP
Tunnels", RFC 3209, December 2001

[RSVP-FRR] "Fast Reroute Extensions to RSVP-TE for LSP Tunnels",
draft-ietf-mpls-rsvp-lsp-fastreroute-00.txt (Work In Progress)

19 Authors' Addresses

Olen Stokes
Extreme Networks
630 Davis Drive, Suite 250
Morrisville, NC 27560
Email: ostokes@extremenetworks.com

Vach Kompella
274 Ferguson Dr.
Mountain View, CA 94034
Email: vkompella@timetranetworks.com

Giles Heron
PacketExchange Ltd.
The Truman Brewery
91 Brick Lane
LONDON E1 6QL
United Kingdom
Email: giles@packetexchange.net

Yetik Serbest
SBC Communications
9505 Arboretum Blvd.
Austin, TX 78759
Email: serbest@tri.sbc.com