Softwire WG                                                        M. Xu
Internet-Draft                                                     Y. Cui
Intended status: Standards Track                                   J. Wu
Expires: September 25, 2018                                       S. Yang
                                                     Tsinghua University
                                                                  C. Metz
                                                            Cisco Systems
                                                           March 24, 2018


     IPv4 Multicast over an IPv6 Multicast in Softwire Mesh Network
                 draft-ietf-softwire-mesh-multicast-20

Abstract

   During the transition to IPv6, there will be scenarios where a
   backbone network internally running one IP address family (referred
   to as the internal IP or I-IP family) connects client networks
   running another IP address family (referred to as the external IP or
   E-IP family).  In such cases, the I-IP backbone needs to offer both
   unicast and multicast transit services to the client E-IP networks.

   This document describes a mechanism for supporting multicast across
   backbone networks where the I-IP and E-IP protocol families differ.
   The document focuses on the IPv4-over-IPv6 scenario, due to the lack
   of real-world use cases for the IPv6-over-IPv4 scenario.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 25, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   3
     1.1.  Requirements Language . . . . . . . . . . . . . . . . . .   4
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   5
   3.  Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .   6
   4.  Mesh Multicast Mechanism  . . . . . . . . . . . . . . . . . .   7
     4.1.  Mechanism Overview  . . . . . . . . . . . . . . . . . . .   8
     4.2.  Group Address Mapping . . . . . . . . . . . . . . . . . .   8
     4.3.  Source Address Mapping  . . . . . . . . . . . . . . . . .   9
     4.4.  Routing Mechanism . . . . . . . . . . . . . . . . . . . .   9
   5.  Control Plane Functions of AFBR . . . . . . . . . . . . . . .  10
     5.1.  E-IP (*,G) and (S,G) State Maintenance  . . . . . . . . .  10
     5.2.  I-IP (S',G') State Maintenance  . . . . . . . . . . . . .  10
     5.3.  E-IP (S,G,rpt) State Maintenance  . . . . . . . . . . . .  11
     5.4.  Inter-AFBR Signaling  . . . . . . . . . . . . . . . . . .  11
     5.5.  SPT Switchover  . . . . . . . . . . . . . . . . . . . . .  13
     5.6.  Other PIM Message Types . . . . . . . . . . . . . . . . .  13
     5.7.  Other PIM States Maintenance  . . . . . . . . . . . . . .  13
   6.  Data Plane Functions of the AFBR  . . . . . . . . . . . . . .  13
     6.1.  Process and Forward Multicast Data  . . . . . . . . . . .  14
     6.2.  TTL . . . . . . . . . . . . . . . . . . . . . . . . . . .  14
     6.3.  Fragmentation . . . . . . . . . . . . . . . . . . . . . .  14
   7.  Packet Format and Translation . . . . . . . . . . . . . . . .  14
   8.  Softwire Mesh Multicast Encapsulation . . . . . . . . . . . .  15
   9.  Security Considerations . . . . . . . . . . . . . . . . . . .  16
   10. IANA Considerations . . . . . . . . . . . . . . . . . . . . .  16
   11. References  . . . . . . . . . . . . . . . . . . . . . . . . .  16
     11.1.  Normative References . . . . . . . . . . . . . . . . . .  16
     11.2.  Informative References . . . . . . . . . . . . . . . . .  17
   Appendix A.  Acknowledgements . . . . . . . . . . . . . . . . . .  17
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  17

1.  Introduction

   During the transition to IPv6, there will be scenarios where a
   backbone network internally running one IP address family (referred
   to as the internal IP or I-IP family) connects client networks
   running another IP address family (referred to as the external IP or
   E-IP family).

   One solution is to leverage the multicast functions inherent in the
   I-IP backbone to efficiently forward client E-IP multicast packets
   inside an I-IP core tree.  The I-IP tree is rooted at one or more
   ingress Address Family Border Routers (AFBRs) [RFC5565] and branches
   out to one or more egress AFBRs.

   [RFC4925] outlines the requirements for the softwire mesh scenario
   and includes support for multicast traffic.

   It is likely that client E-IP multicast sources and receivers will
   reside in different client E-IP networks connected to an I-IP
   backbone network.  This requires the client E-IP source-rooted or
   shared tree to traverse the I-IP backbone network.

   This could be accomplished by re-using the multicast VPN approach
   outlined in [RFC6513].  MVPN-like schemes can support the softwire
   mesh scenario and achieve a "many-to-one" mapping between the E-IP
   client multicast trees and the transit core multicast trees.  The
   advantage of this approach is that the number of trees in the I-IP
   backbone network scales less than linearly with the number of E-IP
   client trees.  Corporate enterprise networks, and by extension
   multicast VPNs, have been known to run applications that create too
   many (S,G) states [RFC7899].  Aggregation at the edge contains the
   (S,G) states for customers' VPNs, and these states need to be
   maintained by the network operator.  The disadvantage of this
   approach is the possibility of inefficient bandwidth and resource
   utilization when multicast packets are delivered to a receiving AFBR
   with no attached E-IP receivers.

   [RFC8114] provides a solution for delivering IPv4 multicast services
   over an IPv6 network, but it mainly focuses on the DS-Lite [RFC6333]
   scenario.  This document describes a detailed solution for the
   IPv4-over-IPv6 softwire mesh scenario, where client networks run
   IPv4 and the backbone network runs IPv6.

   Internet-style multicast is somewhat different from the [RFC8114]
   scenario in that the trees are source-rooted and relatively sparse.
   The need for multicast aggregation at the edge (where many customer
   multicast trees are mapped into one or more backbone multicast
   trees) does not exist and to date has not been identified.  Thus the
   need for alignment between the E-IP and I-IP multicast mechanisms
   emerges.

   [RFC5565] describes the "Softwire Mesh Framework".  This document
   provides a more detailed description of how one-to-one mapping
   schemes ([RFC5565], Section 11.1) for IPv4-over-IPv6 multicast can
   be achieved.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   Figure 1 shows an example of how a softwire mesh network can support
   multicast traffic.  A multicast source S is located in one E-IP
   client network, while candidate E-IP group receivers are located in
   the same or different E-IP client networks that all share a common
   I-IP transit network.  When E-IP sources and receivers are not local
   to each other, they can only communicate with each other through the
   I-IP core.  There may be several E-IP sources for a single multicast
   group residing in different client E-IP networks.  In the case of
   shared trees, the E-IP sources, receivers, and rendezvous points
   (RPs) might be located in different client E-IP networks.  In the
   simplest case, a single operator manages the resources of the I-IP
   core, although the inter-operator case is also possible and is not
   precluded.

                +---------+           +---------+
                |         |           |         |   +--------+
                |  E-IP   |           |  E-IP   |---|Source S|
                | network |           | network |   +--------+
                +---------+           +---------+
                     |                     |
                +----------+          +----------+
                |          |          | upstream |
              +-|   AFBR   |----------|   AFBR   |-+
              | +----------+          +----------+ |
              |                                    |
 E-IP         |                                    |
 Multicast    |                                    |
 packets are  |         I-IP transit core          |
 forwarded    |                                    |
 across the   |                                    |
 I-IP         | +----------+          +----------+ |
 transit core +-|downstream|          |downstream|-+
                |   AFBR   |----------|   AFBR   |
                +----------+          +----------+
                     |                     |
                +---------+           +---------+
   +--------+   |         |           |         |   +--------+
   |Receiver|---|  E-IP   |           |  E-IP   |---|Receiver|
   +--------+   | network |           | network |   +--------+
                +---------+           +---------+

             Figure 1: Softwire Mesh Multicast Framework

2.  Terminology

   Terminology used in this document:

   o  Address Family Border Router (AFBR) - A router interconnecting
      two or more networks using different IP address families.  In the
      context of softwire mesh multicast, the AFBR also runs E-IP and
      I-IP control planes to maintain E-IP and I-IP multicast states,
      respectively, and performs the appropriate encapsulation/
      decapsulation of client E-IP multicast packets for transport
      across the I-IP core.  An AFBR will act as a source and/or
      receiver in an I-IP multicast tree.

   o  Upstream AFBR: An AFBR that is located on the upper reaches of a
      multicast data flow.

   o  Downstream AFBR: An AFBR that is located on the lower reaches of
      a multicast data flow.

   o  I-IP (Internal IP): This refers to the IP address family that is
      supported by the core network.  In this document, the I-IP is
      IPv6.

   o  E-IP (External IP): This refers to the IP address family that is
      supported by the client network(s) attached to the I-IP transit
      core.  In this document, the E-IP is IPv4.

   o  I-IP core tree: A distribution tree rooted at one or more AFBR
      source nodes and branched out to one or more AFBR leaf nodes.  An
      I-IP core tree is built using standard IP or MPLS multicast
      signaling protocols operating exclusively inside the I-IP core
      network.  An I-IP core tree is used to forward E-IP multicast
      packets belonging to E-IP trees across the I-IP core.  Another
      name for an I-IP core tree is a multicast or multipoint softwire.

   o  E-IP client tree: A distribution tree rooted at one or more hosts
      or routers located inside a client E-IP network and branched out
      to one or more leaf nodes located in the same or different client
      E-IP networks.

   o  uPrefix46: The /96 unicast IPv6 prefix for constructing an
      IPv4-embedded IPv6 unicast address [RFC6052].

   o  mPrefix46: The /96 multicast IPv6 prefix for constructing an
      IPv4-embedded IPv6 multicast address.

   o  PIMv4, PIMv6: Refer to [RFC8114].

   o  Inter-AFBR signaling: A mechanism used by downstream AFBRs to
      send PIMv6 messages to the upstream AFBR.

3.  Scope

   This document focuses on the IPv4-over-IPv6 scenario, as shown in
   the following diagram:

                +---------+           +---------+
                |  IPv4   |           |  IPv4   |   +--------+
                | Client  |           | Client  |---|Source S|
                | network |           | network |   +--------+
                +---------+           +---------+
                     |                     |
                +----------+          +----------+
                |          |          | upstream |
              +-|   AFBR   |----------|   AFBR   |-+
              | +----------+          +----------+ |
              |                                    |
              |         IPv6 transit core          |
              |                                    |
              | +----------+          +----------+ |
              +-|downstream|          |downstream|-+
                |   AFBR   |----------|   AFBR   |
                +----------+          +----------+
                     |                     |
                +---------+           +---------+
   +--------+   |  IPv4   |           |  IPv4   |   +--------+
   |Receiver|---| Client  |           | Client  |---|Receiver|
   +--------+   | network |           | network |   +--------+
                +---------+           +---------+

                   Figure 2: IPv4-over-IPv6 Scenario

   In Figure 2, the E-IP client networks run IPv4 and the I-IP core
   runs IPv6.  Because of the much larger IPv6 group address space, the
   client E-IP tree can be mapped to a specific I-IP core tree.  This
   simplifies operations on the AFBR because it becomes possible to
   algorithmically map an IPv4 group/source address to an IPv6
   group/source address and vice versa.

   The IPv4-over-IPv6 scenario is an emerging requirement as network
   operators build out native IPv6 backbone networks.  These networks
   support native IPv6 services and applications, but in many cases,
   support for legacy IPv4 unicast and multicast services will also
   need to be accommodated.

4.  Mesh Multicast Mechanism

4.1.  Mechanism Overview

   Routers in the client E-IP networks have routes to all other client
   E-IP networks.  Through PIMv4 messages, E-IP hosts and routers have
   discovered or learned of (S,G) or (*,G) IPv4 addresses.  Any I-IP
   multicast state instantiated in the core is referred to as (S',G')
   or (*,G') and is kept separate from E-IP multicast state.

   Suppose a downstream AFBR receives an E-IP PIM Join/Prune message
   from the E-IP network for either an (S,G) tree or a (*,G) tree.  The
   AFBR translates the PIMv4 message into a PIMv6 message, with the
   latter being directed towards the I-IP IPv6 address of the upstream
   AFBR.  When the PIMv6 message arrives at the upstream AFBR, it is
   translated back into a PIMv4 message.  The result of these actions
   is the construction of E-IP trees and a corresponding I-IP tree in
   the I-IP network.  An example of the packet format and translation
   is provided in Section 7.

   In this case, it is incumbent upon the AFBRs to perform PIM message
   conversions in the control plane and IP group address conversions or
   mappings in the data plane.  The AFBRs perform an algorithmic,
   one-to-one mapping of IPv4 addresses to IPv6 addresses.
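
   The following non-normative sketch (in Python) illustrates the
   control-plane behaviour just described.  The JoinPrune class and the
   mapping callables are illustrative assumptions only; the concrete
   address construction is sketched after Section 4.4.

   <CODE BEGINS>
   # Non-normative sketch.  The JoinPrune class and the mapping
   # callables are assumptions made for illustration; they are not
   # defined by this document.

   from dataclasses import dataclass
   from typing import Callable

   @dataclass
   class JoinPrune:
       group: str     # group address (IPv4 on the E-IP side, IPv6 on the I-IP side)
       source: str    # source address, or the RP address for a (*,G) message
       is_join: bool  # True = Join, False = Prune

   def downstream_afbr_translate(msg_v4: JoinPrune,
                                 map_group: Callable[[str], str],
                                 map_source: Callable[[str], str]) -> JoinPrune:
       """PIMv4 Join/Prune from the E-IP network rewritten into a PIMv6
       Join/Prune whose addresses lead to the upstream AFBR."""
       return JoinPrune(group=map_group(msg_v4.group),
                        source=map_source(msg_v4.source),
                        is_join=msg_v4.is_join)

   def upstream_afbr_translate(msg_v6: JoinPrune,
                               unmap_group: Callable[[str], str],
                               unmap_source: Callable[[str], str]) -> JoinPrune:
       """The reverse rewrite performed when the PIMv6 message reaches
       the upstream AFBR."""
       return JoinPrune(group=unmap_group(msg_v6.group),
                        source=unmap_source(msg_v6.source),
                        is_join=msg_v6.is_join)
   <CODE ENDS>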

4.2.  Group Address Mapping

   A simple algorithmic mapping between IPv4 multicast group addresses
   and IPv6 group addresses is performed.  Figure 3 is provided as a
   reminder of the format:

   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   | 0-------------32--40--48--56--64--72--80--88--96-----------127|
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
   |                   mPrefix46                   | group address |
   +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+

           Figure 3: IPv4-Embedded IPv6 Multicast Address Format

   An IPv6 multicast prefix (mPrefix46) is provisioned on each AFBR.
   AFBRs will prepend the prefix to an IPv4 multicast group address
   when translating it to an IPv6 multicast group address.  The
   construction of the mPrefix46 for SSM is the same as the
   construction of the mPrefix64 described in Section 5 of [RFC8114].

   With this scheme, each IPv4 multicast address can be mapped into an
   IPv6 multicast address (with the assigned prefix), and each IPv6
   multicast address with the assigned prefix can be mapped back into
   an IPv4 multicast address.  The group address translation algorithm
   is described in Section 5.2 of [RFC8114].

4.3.  Source Address Mapping

   There are two kinds of multicast: ASM and SSM.  Considering that the
   I-IP network and E-IP network may support different kinds of
   multicast, the source address translation rules needed to support
   all possible scenarios may become very complex.  But since SSM can
   be implemented with a strict subset of the PIM-SM protocol
   mechanisms [RFC7761], we can treat the I-IP core as SSM-only to keep
   the solution as simple as possible.  There then remain only two
   scenarios to be discussed in detail:

   o  E-IP network supports SSM

      One possible way to make sure that the translated PIMv6 message
      reaches the upstream AFBR is to set S' to a virtual IPv6 address
      that leads to the upstream AFBR.  The unicast address translation
      should be performed according to [RFC6052].

   o  E-IP network supports ASM

      The (S,G) source list entry and the (*,G) source list entry only
      differ in that the latter has both the WildCard (WC) and RPT bits
      of the Encoded-Source-Address set, while with the former, the
      bits are cleared (see Section 4.9.5.1 of [RFC7761]).  So we can
      translate source list entries in (*,G) messages into source list
      entries in (S',G') messages by clearing both the WC and RPT bits
      at downstream AFBRs, and vice versa for the reverse translation
      at upstream AFBRs.

4.4.  Routing Mechanism

   With mesh multicast, PIMv6 messages originating from a downstream
   AFBR need to be propagated to the correct upstream AFBR, and every
   AFBR needs the /96 prefix in "IPv4-Embedded IPv6 Virtual Source
   Address Format".

   To achieve this, every AFBR MUST announce the address of one of its
   E-IPv4 interfaces in the "v4" field alongside the corresponding
   uPrefix46.  The announcement MUST be sent to the other AFBRs through
   MBGP [RFC4760].  Since the IPv4 address of each upstream AFBR's E-IP
   interface is different, every uPrefix46 that an AFBR announces MUST
   be different.  "uPrefix46" is an IPv6 prefix, and the distribution
   mechanism is the same as in the traditional mesh unicast scenario.
   However, as the "v4" field is an E-IP address, and BGP messages are
   not tunneled through softwires or any other mechanism specified in
   [RFC5565], AFBRs MUST be able to transport and encode/decode BGP
   messages that are carried over I-IP and whose NLRI and NH are of the
   E-IP address family.
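
   The following non-normative sketch illustrates the group and source
   address mapping of Sections 4.2 and 4.3 as an AFBR might implement
   it.  The /96 prefix values are illustrative placeholders only; real
   deployments provision mPrefix46 and uPrefix46 as described above and
   in [RFC6052] and [RFC8114].

   <CODE BEGINS>
   # Non-normative sketch.  The /96 prefix values below are
   # placeholders chosen for illustration only.

   import ipaddress

   MPREFIX46 = ipaddress.IPv6Network("ff3e:20::/96")     # example mPrefix46
   UPREFIX46 = ipaddress.IPv6Network("2001:db8:a::/96")  # example uPrefix46

   def v4_to_v6(prefix, addr_v4):
       """Embed an IPv4 address in the low-order 32 bits of a /96 prefix."""
       return ipaddress.IPv6Address(int(prefix.network_address) |
                                    int(ipaddress.IPv4Address(addr_v4)))

   def v6_to_v4(prefix, addr_v6):
       """Reverse translation at the other AFBR: strip the /96 prefix."""
       if addr_v6 not in prefix:
           raise ValueError("not an IPv4-embedded address for this prefix")
       return ipaddress.IPv4Address(int(addr_v6) & 0xFFFFFFFF)

   # G' and S' as used for the (S',G') state created in the I-IP core:
   g_prime = v4_to_v6(MPREFIX46, "233.252.0.1")   # G  -> G'
   s_prime = v4_to_v6(UPREFIX46, "192.0.2.33")    # S  -> S'
   assert v6_to_v4(MPREFIX46, g_prime) == ipaddress.IPv4Address("233.252.0.1")
   <CODE ENDS>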
   With these announcements in place, when a downstream AFBR receives
   an E-IP PIM (S,G) message, it can translate this message into
   (S',G') by looking up the IP address of the corresponding AFBR's
   E-IP interface.  Since the uPrefix46 of S' is unique, and is known
   to every router in the I-IP network, the translated message will be
   forwarded to the corresponding upstream AFBR, and the upstream AFBR
   can translate the message back to (S,G).

   When a downstream AFBR receives an E-IP PIM (*,G) message, S' can be
   generated according to the format specified in Figure 3, with the
   "source address" field set to the IPv4 address of the RP.  The
   translated message will be forwarded to the corresponding upstream
   AFBR.  Since every PIM router within a PIM domain MUST be able to
   map a particular multicast group address to the same RP (see
   Section 4.7 of [RFC7761]), when the upstream AFBR checks the "source
   address" field of the message, it finds the IPv4 address of the RP
   and ascertains that this was originally a (*,G) message.  The
   message is then translated back to a (*,G) message and processed.

5.  Control Plane Functions of AFBR

   AFBRs are responsible for the following functions:

5.1.  E-IP (*,G) and (S,G) State Maintenance

   E-IP (*,G) and (S,G) state maintenance for an AFBR is the same as
   E-IP (*,G) and (S,G) state maintenance for an mAFTR, as described in
   Section 7.2 of [RFC8114].

5.2.  I-IP (S',G') State Maintenance

   It is possible that the I-IP transit core runs another non-transit
   I-IP PIM-SSM instance.  Since the translated source address starts
   with the unique "Well-Known" prefix or the ISP-defined prefix that
   SHOULD NOT be used by other service providers, mesh multicast will
   not influence non-transit PIM-SSM multicast at all.  When an AFBR
   receives an I-IP (S',G') message, it MUST check S'.  If S' starts
   with the unique prefix, then the message is actually a translated
   E-IP (S,G) or (*,G) message, and the AFBR MUST translate this
   message back to a PIMv4 message and process it.

5.3.  E-IP (S,G,rpt) State Maintenance

   When an AFBR wishes to propagate a Join/Prune(S,G,rpt) message to an
   I-IP upstream router, the AFBR MUST operate as specified in
   Sections 5.4 and 5.5.

5.4.  Inter-AFBR Signaling

   Assume that one downstream AFBR has joined an RPT of (*,G) and an
   SPT of (S,G), and decided to perform an SPT switchover.  According
   to [RFC7761], it SHOULD propagate a Prune(S,G,rpt) message along
   with the periodic Join(*,G) message upstream towards the RP.
   However, routers in the I-IP transit core do not process (S,G,rpt)
   messages since the I-IP transit core is treated as SSM-only.  As a
   result, the downstream AFBR is unable to prune S from this RPT, so
   it will receive two copies of the same data for (S,G).  In order to
   solve this problem, we introduce a new mechanism by which downstream
   AFBRs can inform the upstream AFBR that a given S should be pruned
   from an RPT.

   When a downstream AFBR wishes to propagate an (S,G,rpt) message
   upstream, it SHOULD encapsulate the (S,G,rpt) message and then send
   the encapsulated unicast message to the corresponding upstream AFBR,
   which we call "RP'".  When RP' receives this encapsulated message,
   it SHOULD decapsulate the message, as in the unicast scenario, and
   retrieve the original (S,G,rpt) message.
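
   The exact encapsulation of inter-AFBR signaling messages is left to
   the implementation.  The following non-normative sketch assumes the
   UDP encapsulation suggested by the interface agent example in
   Figure 4 below; the port number and the callback are assumptions
   made only for illustration.

   <CODE BEGINS>
   # Non-normative sketch.  The UDP port number and the callback are
   # illustrative assumptions; this document leaves the actual
   # encapsulation to the implementation.

   import socket

   SIGNALING_PORT = 50000   # placeholder; not an assigned port

   def send_sg_rpt(rp_prime_addr, pim6_sg_rpt_msg):
       """Downstream AFBR: unicast the encapsulated (S,G,rpt) message
       to the corresponding upstream AFBR (RP')."""
       with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
           s.sendto(pim6_sg_rpt_msg, (rp_prime_addr, SIGNALING_PORT))

   def rp_prime_receive_loop(handle_sg_rpt):
       """RP': decapsulate each message and hand the original (S,G,rpt)
       message to the interface agent described below."""
       with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
           s.bind(("::", SIGNALING_PORT))
           while True:
               payload, src = s.recvfrom(65535)
               handle_sg_rpt(payload, src)
   <CODE ENDS>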
   The interface on which this (S,G,rpt) message arrives may be
   different from the outgoing interface that propagates multicast data
   to the corresponding downstream AFBR, and there may be other
   downstream AFBRs that need to receive multicast data of (S,G) from
   this incoming interface, so RP' SHOULD NOT simply process this
   message as specified in [RFC7761] on the incoming interface.

   To solve this problem, we introduce an "interface agent" to process
   all the encapsulated (S,G,rpt) messages the upstream AFBR receives.
   On behalf of RP', the interface agent SHOULD prune S from the RPT of
   group G only when no downstream AFBR is subscribed to receive
   multicast data of (S,G) along the RPT.  In this way, we ensure that
   downstream AFBRs will not miss any multicast data that they need.
   The cost of this is that multicast data for (S,G) will be duplicated
   along the RPT received by AFBRs affected by the SPT switchover, if
   at least one downstream AFBR exists that has not yet sent
   Prune(S,G,rpt) messages to the upstream AFBR.

   In certain deployment scenarios (e.g., if there is only a single
   downstream router), the interface agent function is not required.
   The mechanism used to achieve this is left to the implementation.
   The following diagram provides one possible solution for an
   "interface agent" implementation:

      +----------------------------------------+
      |                                        |
      |     +-----------+----------+           |
      |     |  PIM-SM   |   UDP    |           |
      |     +-----------+----------+           |
      |           ^            |               |
      |           |            |               |
      |           |            v               |
      |     +----------------------+           |
      |     |      I/F Agent       |           |
      |     +----------------------+           |
      |   PIM     ^   |        ^   | multicast |
      | messages  |   |        |   | data      |
      |           |   v        |   v           |
      |      +----------+    +----------+      |
      |      | I-IP I/F |    | I-IP I/F |      |
      |      +----------+    +----------+      |
      |          ^  |            ^  |          |
      |          |  |            |  |          |
      +----------|--|------------|--|----------+
                 |  v            |  v

        Figure 4: Interface Agent Implementation Example

   Figure 4 shows an example of an interface agent implementation using
   UDP encapsulation.  The interface agent has two responsibilities:

   In the control plane, it SHOULD work as a real interface that has
   joined (*,G), representing all the I-IP interfaces that are outgoing
   interfaces of the (*,G) state machine, and process the (S,G,rpt)
   messages received from all the I-IP interfaces.  The interface agent
   maintains downstream (S,G,rpt) state machines for every downstream
   AFBR and submits Prune(S,G,rpt) messages to the PIM-SM module only
   when every (S,G,rpt) state machine is in the Prune(P) or
   PruneTmp(P') state, which means that no downstream AFBR is
   subscribed to receive multicast data for (S,G) along the RPT of G.
   Once an (S,G,rpt) state machine changes to the NoInfo(NI) state,
   which means that the corresponding downstream AFBR has switched to
   receive multicast data of (S,G) along the RPT again, the interface
   agent SHOULD send a Join(S,G,rpt) to the PIM-SM module immediately.

   In the data plane, upon receiving a multicast data packet, the
   interface agent SHOULD first encapsulate it and then propagate the
   encapsulated packet on every I-IP interface.

   NOTICE: It is possible that an E-IP neighbor of RP' has joined the
   RPT of G, so the per-interface state machine for receiving E-IP
   Join/Prune(S,G,rpt) messages SHOULD be preserved.
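
   The following non-normative sketch summarizes the aggregation rule
   described above: the interface agent hands a Prune(S,G,rpt) to the
   PIM-SM module only when every downstream AFBR's (S,G,rpt) state
   machine is in the Prune(P) or PruneTmp(P') state, and re-issues a
   Join(S,G,rpt) as soon as any of them returns to NoInfo(NI).  The
   PIM-SM module object and its methods are hypothetical placeholders.

   <CODE BEGINS>
   # Non-normative sketch.  The PIM-SM module object and its
   # join_sg_rpt()/prune_sg_rpt() methods are hypothetical; state names
   # follow the (S,G,rpt) downstream state machine of [RFC7761].

   PRUNED_STATES = {"Prune", "PruneTmp"}   # P and P'

   class InterfaceAgent:
       def __init__(self, pim_sm_module):
           self.pim = pim_sm_module
           # (S,G) -> {downstream AFBR address -> (S,G,rpt) state}
           self.downstream = {}

       def update(self, sg, afbr, new_state):
           """Record an (S,G,rpt) state change for one downstream AFBR
           and decide what, if anything, to hand to the PIM-SM module."""
           states = self.downstream.setdefault(sg, {})
           was_all_pruned = bool(states) and all(
               st in PRUNED_STATES for st in states.values())
           states[afbr] = new_state

           if new_state == "NoInfo":
               # This AFBR needs (S,G) traffic on the RPT again: re-join.
               self.pim.join_sg_rpt(sg)
           elif not was_all_pruned and all(
                   st in PRUNED_STATES for st in states.values()):
               # No downstream AFBR is subscribed to (S,G) on the RPT.
               self.pim.prune_sg_rpt(sg)
   <CODE ENDS>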

5.5.  SPT Switchover

   After a new AFBR requests the receipt of traffic destined for a
   multicast group, it will receive all the data from the RPT at first.
   At this time, every downstream AFBR will receive multicast data from
   any source from this RPT, regardless of whether it has switched over
   to an SPT or not.

   To minimize this redundancy, it is recommended that every AFBR's
   SwitchToSptDesired(S,G) function employ the "switch on first packet"
   policy.  In this way, the delay in switching over to the SPT is kept
   as small as possible, and after every AFBR has performed the SPT
   switchover for every S of group G, no data will be forwarded along
   the RPT of G; thus, no more unnecessary duplication will be
   produced.

5.6.  Other PIM Message Types

   In addition to Join or Prune, other message types exist, including
   Register, Register-Stop, Hello, and Assert.  Register and
   Register-Stop messages are sent by unicast, while Hello and Assert
   messages are only used between directly connected routers to
   negotiate with each other.  It is not necessary to translate these
   messages for forwarding; thus, the processing of these messages is
   out of scope for this document.

5.7.  Other PIM States Maintenance

   In addition to the states mentioned above, other states exist,
   including (*,*,RP) and I-IP (*,G') state.  Since we treat the I-IP
   core as SSM-only, the maintenance of these states is out of scope
   for this document.

6.  Data Plane Functions of the AFBR

6.1.  Process and Forward Multicast Data

   Refer to Section 7.4 of [RFC8114].  If there is at least one
   outgoing interface whose IP address family is different from that of
   the incoming interface, the AFBR MUST encapsulate this packet with
   the mPrefix46-derived and uPrefix46-derived IPv6 addresses to form
   an IPv6 multicast packet (a non-normative sketch is provided at the
   end of this section).

6.2.  TTL

   Processing of TTL information in protocol headers depends on the
   tunneling technology and is out of scope of this document.

6.3.  Fragmentation

   The encapsulation performed by an upstream AFBR will increase the
   size of packets.  As a result, the outgoing I-IP link MTU may not
   accommodate the larger packet size.  As it is not always possible
   for core operators to increase the MTU of every link, fragmentation
   after encapsulation and reassembly of encapsulated packets MUST be
   supported by AFBRs [RFC5565].
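
   The following non-normative sketch illustrates the encapsulation
   step of Section 6.1: the IPv4 multicast packet is carried inside an
   IPv6 packet whose destination and source addresses are derived from
   mPrefix46 and uPrefix46.  The prefix values and the choice of plain
   IPv4-in-IPv6 encapsulation (IPv6 Next Header 4) are illustrative
   assumptions; Section 8 leaves the actual encapsulation mechanism
   open.

   <CODE BEGINS>
   # Non-normative sketch.  The prefixes are placeholders, and plain
   # IPv4-in-IPv6 is only one of the encapsulations permitted by
   # Section 8.

   import ipaddress
   import struct

   MPREFIX46 = ipaddress.IPv6Network("ff3e:20::/96")     # example only
   UPREFIX46 = ipaddress.IPv6Network("2001:db8:a::/96")  # example only

   def embed(prefix, addr_v4):
       return ipaddress.IPv6Address(int(prefix.network_address) |
                                    int(ipaddress.IPv4Address(addr_v4)))

   def encapsulate(ipv4_packet, src_v4, group_v4, hop_limit=64):
       """Wrap an E-IP (IPv4) multicast packet in an I-IP (IPv6) header
       whose source is uPrefix46+S and whose destination is mPrefix46+G."""
       src6 = embed(UPREFIX46, src_v4)
       dst6 = embed(MPREFIX46, group_v4)
       header = struct.pack("!IHBB16s16s",
                            0x6 << 28,          # version 6, TC and flow label 0
                            len(ipv4_packet),   # payload length
                            4,                  # next header: IPv4
                            hop_limit,
                            src6.packed,
                            dst6.packed)
       return header + ipv4_packet
   <CODE ENDS>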

7.  Packet Format and Translation

   Because the PIM-SM specification is independent of the underlying
   unicast routing protocol, the packet format in Section 4.9 of
   [RFC7761] remains the same, except that the group address and source
   address MUST be translated when traversing an AFBR.

   For example, Figure 5 shows the Register-Stop message format in the
   IPv4 and IPv6 address families.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |PIM Ver| Type  |   Reserved    |           Checksum            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        IPv4 Group Address (Encoded-Group format)              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        IPv4 Source Address (Encoded-Unicast format)           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

              (1).  IPv4 Register-Stop Message Format

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |PIM Ver| Type  |   Reserved    |           Checksum            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        IPv6 Group Address (Encoded-Group format)              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        IPv6 Source Address (Encoded-Unicast format)           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

              (2).  IPv6 Register-Stop Message Format

                Figure 5: Register-Stop Message Format

   In Figure 5, the semantics of the "PIM Ver", "Type", "Reserved", and
   "Checksum" fields are described in Section 4.9 of [RFC7761].

   IPv4 Group Address (Encoded-Group format): The encoded-group format
   of the IPv4 group address described in Section 4.2.

   IPv4 Source Address (Encoded-Unicast format): The encoded-unicast
   format of the IPv4 source address described in Section 4.3.

   IPv6 Group Address (Encoded-Group format): The encoded-group format
   of the IPv6 group address described in Section 4.2.

   IPv6 Source Address (Encoded-Unicast format): The encoded-unicast
   format of the IPv6 source address described in Section 4.3.

8.  Softwire Mesh Multicast Encapsulation

   Softwire mesh multicast encapsulation does not require the use of
   any one particular encapsulation mechanism.  Rather, it MUST
   accommodate a variety of different encapsulation mechanisms and
   allow the use of the encapsulation mechanisms mentioned in
   [RFC4925].  Additionally, all of the AFBRs attached to the I-IP
   network MUST implement the same encapsulation mechanism.

9.  Security Considerations

   The security concerns raised in [RFC4925] and [RFC7761] are
   applicable here.  In addition, the additional workload associated
   with some schemes could be exploited by an attacker to mount a DDoS
   attack.  Compared with [RFC4925], the security concerns SHOULD be
   considered more carefully: an attacker could potentially set up many
   multicast trees in the edge networks, causing too many multicast
   states in the core network.

10.  IANA Considerations

   This document includes no request to IANA.

11.  References

11.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC4291]  Hinden, R. and S. Deering, "IP Version 6 Addressing
              Architecture", RFC 4291, DOI 10.17487/RFC4291, February
              2006.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, DOI 10.17487/RFC4301,
              December 2005.

   [RFC4760]  Bates, T., Chandra, R., Katz, D., and Y. Rekhter,
              "Multiprotocol Extensions for BGP-4", RFC 4760,
              DOI 10.17487/RFC4760, January 2007.

   [RFC4925]  Li, X., Ed., Dawkins, S., Ed., Ward, D., Ed., and A.
              Durand, Ed., "Softwire Problem Statement", RFC 4925,
              DOI 10.17487/RFC4925, July 2007.

   [RFC5565]  Wu, J., Cui, Y., Metz, C., and E. Rosen, "Softwire Mesh
              Framework", RFC 5565, DOI 10.17487/RFC5565, June 2009.

   [RFC6052]  Bao, C., Huitema, C., Bagnulo, M., Boucadair, M., and X.
              Li, "IPv6 Addressing of IPv4/IPv6 Translators", RFC 6052,
              DOI 10.17487/RFC6052, October 2010.

   [RFC6333]  Durand, A., Droms, R., Woodyatt, J., and Y. Lee, "Dual-
              Stack Lite Broadband Deployments Following IPv4
              Exhaustion", RFC 6333, DOI 10.17487/RFC6333, August 2011.
   [RFC6513]  Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in MPLS/
              BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513, February
              2012.

   [RFC7761]  Fenner, B., Handley, M., Holbrook, H., Kouvelas, I.,
              Parekh, R., Zhang, Z., and L. Zheng, "Protocol
              Independent Multicast - Sparse Mode (PIM-SM): Protocol
              Specification (Revised)", STD 83, RFC 7761,
              DOI 10.17487/RFC7761, March 2016.

   [RFC7899]  Morin, T., Ed., Litkowski, S., Patel, K., Zhang, Z.,
              Kebler, R., and J. Haas, "Multicast VPN State Damping",
              RFC 7899, DOI 10.17487/RFC7899, June 2016.

   [RFC8114]  Boucadair, M., Qin, C., Jacquenet, C., Lee, Y., and Q.
              Wang, "Delivery of IPv4 Multicast Services to IPv4
              Clients over an IPv6 Multicast Network", RFC 8114,
              DOI 10.17487/RFC8114, March 2017.

11.2.  Informative References

   [RFC7371]  Boucadair, M. and S. Venaas, "Updates to the IPv6
              Multicast Addressing Architecture", RFC 7371,
              DOI 10.17487/RFC7371, September 2014.

Appendix A.  Acknowledgements

   Wenlong Chen, Xuan Chen, Alain Durand, Yiu Lee, Jacni Qin, and Stig
   Venaas provided useful input into this document.

Authors' Addresses

   Mingwei Xu
   Tsinghua University
   Department of Computer Science, Tsinghua University
   Beijing  100084
   P.R. China

   Phone: +86-10-6278-5822
   Email: xumw@tsinghua.edu.cn


   Yong Cui
   Tsinghua University
   Department of Computer Science, Tsinghua University
   Beijing  100084
   P.R. China

   Phone: +86-10-6278-5822
   Email: cuiyong@tsinghua.edu.cn


   Jianping Wu
   Tsinghua University
   Department of Computer Science, Tsinghua University
   Beijing  100084
   P.R. China

   Phone: +86-10-6278-5983
   Email: jianping@cernet.edu.cn


   Shu Yang
   Tsinghua University
   Graduate School at Shenzhen
   Shenzhen  518055
   P.R. China

   Phone: +86-10-6278-5822
   Email: yangshu@csnet1.cs.tsinghua.edu.cn


   Chris Metz
   Cisco Systems
   170 West Tasman Drive
   San Jose, CA  95134
   USA

   Phone: +1-408-525-3275
   Email: chmetz@cisco.com