Internet Engineering Task Force                               L. Kreeger
Internet-Draft                                                     Cisco
Intended status: Informational                                   D. Dutt
Expires: January 18, 2013                               Cumulus Networks
                                                               T. Narten
                                                                     IBM
                                                                D. Black
                                                                     EMC
                                                            M. Sridharan
                                                               Microsoft
                                                           July 17, 2012


       Network Virtualization Overlay Control Protocol Requirements
                      draft-kreeger-nvo3-overlay-cp-01

Abstract

   The "Overlays for Network Virtualization" problem statement document
   discusses the need for network virtualization using overlay networks
   in highly virtualized data centers.  The problem statement outlines
   a need for control protocols to facilitate running these overlay
   networks.  This document outlines the high-level requirements to be
   fulfilled by the control protocols.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 18, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Control Plane Protocol Functionality
     3.1.  Inner to Outer Address Mapping
     3.2.  Underlying Network Multi-Destination Delivery Address(es)
     3.3.  VN Connect/Disconnect Notification
     3.4.  VN Name to VN-ID Mapping
   4.  Control Plane Characteristics
   5.  Security Considerations
   6.  Acknowledgements
   7.  Informative References
   Authors' Addresses

1.  Introduction

   "Problem Statement: Overlays for Network Virtualization"
   [I-D.narten-nvo3-overlay-problem-statement] discusses the need for
   network virtualization using overlay networks in highly virtualized
   data centers and provides a general motivation for building such
   networks.  "Framework for DC Network Virtualization"
   [I-D.lasserre-nvo3-framework] provides a framework for discussing
   overlay networks generally and the various components that must work
   together in building such systems.  The reader is assumed to be
   familiar with both documents.

2.  Terminology

   This document uses the same terminology as found in
   [I-D.lasserre-nvo3-framework].  This section defines additional
   terminology used by this document.
   Network Service Appliance:  A stand-alone physical End Device or a
      virtual End Device that provides a network service, such as a
      firewall, load balancer, etc.  Such appliances may embed Network
      Virtualization Edge (NVE) functionality within them in order to
      more efficiently operate as part of a virtualized network.

   VDC:  Virtual Data Center.  A container for virtualized compute,
      storage and network services.  Managed by a single tenant, a VDC
      can contain multiple VNs and multiple Tenant End Systems (TES)
      that are connected to one or more of these VNs.

   VN Name:  A globally unique name for a VN.  The VN Name is not
      carried in data packets originating from TES, but must be mapped
      into an appropriate VN-ID for a particular encapsulating
      technology.  Using VN Names rather than VN-IDs to identify VNs in
      configuration files and control protocols increases the
      portability of a VDC and its associated VNs when moving among
      different administrative domains (e.g. switching to a different
      cloud service provider).

3.  Control Plane Protocol Functionality

   The problem statement [I-D.narten-nvo3-overlay-problem-statement]
   discusses the need for a control plane protocol (or protocols) to
   populate each NVE with the state needed to perform its functions.

   Note that an NVE may provide overlay encapsulation/decapsulation
   packet forwarding services to TESs that are co-resident within the
   same End Device (e.g. when the NVE is embedded within a hypervisor
   or a Network Service Appliance), or to TES that are externally
   connected to the NVE (e.g. a physical Network Service Appliance
   connected to an access switch containing the NVE).  The following
   figures show examples of scenarios in which the NVE is co-resident
   within the same End Device as the TES connected to a given VN, and
   in which the NVE is externally located within the access switch.
              Hypervisor
   +-----------------------+
   | +--+  +-------+---+   |
   | |VM|--|       |   |   |
   | +--+  |Virtual|NVE|------- Underlying
   | +--+  |Switch |   |   |    Network
   | |VM|--|       |   |   |
   | +--+  +-------+---+   |
   +-----------------------+

          Hypervisor with an Embedded NVE.  Figure 1


         Hypervisor                 Access Switch
   +------------------+          +-----+-------+
   | +--+  +-------+  |          |     |       |
   | |VM|--|       |  |   VLAN   |     |       |
   | +--+  |Virtual|-------------+ NVE |       +--- Underlying
   | +--+  |Switch |  |   Trunk  |     |       |     Network
   | |VM|--|       |  |          |     |       |
   | +--+  +-------+  |          |     |       |
   +------------------+          +-----+-------+

          Hypervisor with an External NVE.  Figure 2


     Network Service Appliance
   +---------------------------+
   | +------------+  +-----+   |
   | |Net Service |--|     |   |
   | |Instance    |  |     |   |
   | +------------+  | NVE |------- Underlying
   | +------------+  |     |   |    Network
   | |Net Service |--|     |   |
   | |Instance    |  |     |   |
   | +------------+  +-----+   |
   +---------------------------+

        Network Service Appliance (physical or virtual) with an
                       Embedded NVE.  Figure 3


     Network Service Appliance             Access Switch
   +--------------------------+         +-----+-------+
   | +------------+           |\        |     |       |
   | |Net Service |-----------| \       |     |       |
   | |Instance    |           |  \ VLAN |     |       |
   | +------------+           |   |-----+ NVE |       +--- Underlying
   | +------------+           |  / Trunk|     |       |     Network
   | |Net Service |-----------| /       |     |       |
   | |Instance    |           |/        |     |       |
   | +------------+           |         |     |       |
   +--------------------------+         +-----+-------+

   Physical Network Service Appliance with an External NVE.  Figure 4


   In the examples above where the NVE functionality is located in the
   physical access switch, the physical VLAN Trunk connecting the
   Hypervisor or Network Service Appliance to the external NVE only
   needs to carry locally significant (e.g. link local) VLAN tag
   values.  These tags are only used to distinguish between different
   VNs as packets cross the wire to the external NVE.
   When the NVE receives packets, it uses the VLAN tag to identify the
   VN the TES belongs to, strips the tag, and adds the appropriate
   overlay encapsulation for that VN.

   Given the above, a control plane protocol is necessary to provide an
   NVE with the information it needs to maintain its own internal state
   necessary to carry out its forwarding functions, as explained in
   detail below.

   1.  An NVE maintains a per-VN table of mappings from TES (inner)
       addresses to Underlying Network (outer) addresses of remote
       NVEs.

   2.  An NVE maintains per-VN state for delivering multicast and
       broadcast packets to other TES.  Such state could include a list
       of multicast addresses and/or unicast addresses on the
       Underlying Network for the NVEs associated with a particular VN.

   3.  End Devices (such as a Hypervisor or Network Service Appliance)
       utilizing an external NVE need to "attach to" and "detach from"
       an NVE.  Specifically, they will need a protocol that runs
       across the L2 link between the two devices that identifies the
       TES and VN Name for which the NVE is providing service.  In
       addition, such a protocol will identify a locally significant
       tag (e.g., an 802.1Q VLAN tag) that can be used to identify the
       data frames that flow between the TES and VN.

   4.  An NVE needs a mapping from each unique VN Name to the VN-ID
       value used within encapsulated data packets within the
       administrative domain in which the VN is instantiated.

3.1.  Inner to Outer Address Mapping

   When presented with a data packet to forward to an End System within
   a VN, the NVE needs to know the mapping of the TES destination
   (inner) address to the (outer) address on the Underlying Network of
   the remote NVE which can deliver the packet to the destination TES.
   A protocol is needed to provide this inner to outer mapping to each
   NVE that requires it and to keep the mapping updated in a timely
   manner.
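   As an illustration only (this is not part of any specification in
   this document), the per-VN inner-to-outer mapping state described
   above could be sketched as follows; all class names, VN Names, and
   addresses here are hypothetical:

   ```python
   # Illustrative sketch of per-VN NVE state (Section 3.1); not a
   # normative data model.  Inner addresses map to the outer
   # Underlying Network address of the remote NVE serving them.
   from dataclasses import dataclass, field
   from typing import Dict, Optional

   @dataclass
   class VnState:
       """State an NVE holds for one Virtual Network (VN)."""
       vn_id: int                                  # encapsulation VN-ID
       # TES (inner) address -> outer address of the remote NVE.
       inner_to_outer: Dict[str, str] = field(default_factory=dict)

   class Nve:
       def __init__(self) -> None:
           self.vns: Dict[str, VnState] = {}       # keyed by VN Name

       def learn_mapping(self, vn_name: str, inner: str,
                         outer: str) -> None:
           """Install or refresh a mapping pushed by the control plane."""
           self.vns[vn_name].inner_to_outer[inner] = outer

       def lookup_outer(self, vn_name: str,
                        inner: str) -> Optional[str]:
           """Return the remote NVE's outer address, or None if the
           mapping must first be acquired from the control plane."""
           vn = self.vns.get(vn_name)
           return vn.inner_to_outer.get(inner) if vn else None

   nve = Nve()
   nve.vns["tenant-a/web"] = VnState(vn_id=5001)
   nve.learn_mapping("tenant-a/web", "52:54:00:aa:bb:01", "192.0.2.10")
   print(nve.lookup_outer("tenant-a/web", "52:54:00:aa:bb:01"))
   # 192.0.2.10
   ```

   A lookup miss (None) is the point at which the control protocol
   would be consulted to acquire the mapping, rather than flooding.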
   Timely updates are important for maintaining connectivity between
   TES, for example when one TES is a VM that has moved to a different
   NVE.

   Note that one technique that could be used to create this mapping
   without the need for a control protocol is data plane learning.
   However, the learning approach requires packets to be flooded to all
   NVEs participating in the VN when no mapping exists.  One goal of
   using a control protocol is to eliminate this flooding.

3.2.  Underlying Network Multi-Destination Delivery Address(es)

   Each NVE needs a way to deliver multi-destination packets (i.e.
   broadcast/multicast) within a given VN to each remote NVE which has
   a destination TES for these packets.  Three possible ways of
   accomplishing this are:

   o  Use the multicast capabilities of the Underlying Network.

   o  Have each NVE replicate the packets and send a copy across the
      Underlying Network to each remote NVE currently participating in
      the VN.

   o  Use one or more distribution servers which replicate the packets
      on behalf of the NVEs.

   Whichever method is used, a protocol is needed to provide, on a per-
   VN basis, one or more multicast addresses (assuming the Underlying
   Network supports multicast), and/or one or more unicast addresses of
   either the remote NVEs which are not multicast reachable, or of one
   or more distribution servers for the VN.

   The protocol must also keep the list of addresses up to date in a
   timely manner if the set of NVEs for a given VN changes over time.
   For example, the set of NVEs for a VN could change as VMs power on/
   off or migrate to different hypervisors.

3.3.  VN Connect/Disconnect Notification

   As the previous figures illustrated, NVEs may be embedded within an
   End Device (such as a Hypervisor or Network Service Appliance), or
   within an external networking device (e.g. an access switch).
   Using an external network device as the NVE can offload the
   encapsulation/decapsulation function and its protocol overheads,
   which may provide performance improvements and/or resource savings
   to the client End Device making use of the external NVE.

   When an NVE is external, a protocol is needed between a client End
   Device making use of the external NVE and the NVE itself, in order
   to make the NVE aware of the changing VN membership requirements of
   the End Device.  A key driver for using a protocol rather than
   static configuration of the external NVE is that the VN connectivity
   requirements can change frequently as VMs are brought up, moved, and
   brought down on various hypervisors throughout the data center.

   The NVE must be notified when an End Device requires connection to a
   particular VN and when it no longer requires connection.  This
   protocol should also provide the inner TES addresses within the VN
   that the End Device contains (e.g. the virtual MAC address of a VM's
   virtual NIC) to the external NVE.  In addition, the external NVE
   must provide a local tag value for each connected VN for the End
   Device to use for the exchange of packets between the End Device and
   the NVE (e.g. a locally significant 802.1Q tag value).

   The identification of the VN in this protocol should preferably be
   made using a globally unique VN Name.  A globally unique VN Name
   facilitates portability of a Tenant's Virtual Data Center.  When a
   VN within a VDC is instantiated within a particular administrative
   domain, it can be allocated a VN-ID which only the NVE needs to use.
   An End Device that is making use of an offloaded NVE only needs to
   communicate the VN Name to the NVE, and get back a locally
   significant tag value.  Ideally, the VN Name should be compact as
   well as unique to minimize protocol overhead.
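   As an illustration only, the connect/disconnect exchange described
   above could be sketched as follows.  The class and method names, the
   tag-allocation policy, and all VN Names and addresses here are
   hypothetical, not drawn from any specified protocol:

   ```python
   # Illustrative sketch of the VN connect/disconnect exchange with an
   # external NVE (Section 3.3): the End Device presents a globally
   # unique VN Name plus its inner TES addresses, and the NVE returns
   # a locally significant 802.1Q tag for that VN.
   from typing import Dict, List

   class ExternalNve:
       def __init__(self) -> None:
           self.local_tag: Dict[str, int] = {}     # VN Name -> 802.1Q tag
           self.tes_addrs: Dict[str, List[str]] = {}
           self._next_tag = 2                      # skip reserved tags 0/1

       def vn_connect(self, vn_name: str,
                      inner_addrs: List[str]) -> int:
           """Handle a connect request; return the local tag to use."""
           if vn_name not in self.local_tag:
               self.local_tag[vn_name] = self._next_tag
               self._next_tag += 1
           self.tes_addrs.setdefault(vn_name, []).extend(inner_addrs)
           return self.local_tag[vn_name]

       def vn_disconnect(self, vn_name: str) -> None:
           """Release state when the End Device no longer needs the VN."""
           self.local_tag.pop(vn_name, None)
           self.tes_addrs.pop(vn_name, None)

   nve = ExternalNve()
   tag = nve.vn_connect("tenant-a/web", ["52:54:00:aa:bb:01"])
   # tag is now the locally significant 802.1Q value for this VN
   ```

   Note that the End Device never needs to learn the VN-ID; only the VN
   Name crosses this interface, which is what makes the VDC portable
   across administrative domains.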
   Using a Universally Unique Identifier (UUID), as discussed in RFC
   4122, would work well because it is both compact and of a fixed
   size, and can be generated locally with a high likelihood of being
   unique.

3.4.  VN Name to VN-ID Mapping

   Once an NVE (embedded or external) receives a VN connect indication
   with a specified VN Name, the NVE must find the VN-ID value to
   encapsulate packets with.  The NVE needs a way to get a VN-ID
   allocated, or to receive the already allocated VN-ID, for a given VN
   Name.  A protocol for an NVE to obtain this mapping may be a useful
   function.

4.  Control Plane Characteristics

   NVEs are expected to be implemented within hypervisors or access
   switches, or even within a Network Service Appliance.  Any resources
   used by these protocols (e.g. processing or memory) take away
   resources that could be better used by these devices to perform
   their intended functions (e.g. providing resources for hosted VMs).
   A large scale data center may contain hundreds of thousands of these
   NVEs (which may be several independent implementations); therefore,
   any savings in per-NVE resources can be multiplied hundreds of
   thousands of times.

   Given this, the control plane protocol(s) implemented by NVEs to
   provide the functionality discussed above should have the following
   characteristics:

   1.  Minimize the amount of state needed to be stored on each NVE.
       The NVE should only be required to cache state that it is
       actively using, and be able to discard any cached state when it
       is no longer required.  For example, an NVE should only need to
       maintain an inner-to-outer address mapping for destinations to
       which it is actively sending traffic, as opposed to maintaining
       mappings for all possible destinations.

   2.  Fast acquisition of needed state.
       For example, when a TES emits a packet destined to an inner
       address that the NVE does not have a mapping for, the NVE should
       be able to acquire the needed mapping quickly.

   3.  Fast detection/update of stale cached state information.  This
       only applies if the cached state is actually being used.  For
       example, when a VM moves such that it is connected to a
       different NVE, the inner-to-outer mapping for this VM's address
       that is cached on other NVEs must be updated in a timely manner
       (if it is actively in use).  If the update is not timely, the
       other NVEs will forward data to the wrong NVE until it is
       updated.

   4.  Minimize processing overhead.  This means that an NVE should
       only be required to perform protocol processing directly related
       to maintaining state for the TESs it is actively communicating
       with.  This requirement is for the NVE functionality only.  The
       network node that contains the NVE may be involved in other
       functionality that maintains connectivity for the underlying
       network but that the NVE is not actively using (e.g., routing
       and multicast distribution protocols for the underlying
       network).

   5.  Highly scalable.  This means scaling to hundreds of thousands of
       NVEs and several million VNs within a single administrative
       domain.  As the number of NVEs and/or VNs within a data center
       grows, the protocol overhead at any one NVE should not increase
       significantly.

   6.  Minimize the complexity of the implementation.  This argues for
       using the smallest number of protocols that achieves all the
       functionality listed above; ideally, a single protocol should
       suffice.  The less complex the protocol is on the NVE, the more
       likely interoperable implementations will be created in a timely
       manner.

   7.  Extensible.  The protocol should easily accommodate extension to
       meet related future requirements.  For example, access control
       or QoS policies, or new address families for either inner or
       outer addresses, should be easy to add while maintaining
       interoperability with NVEs running older versions.
   8.  Simple protocol configuration.  A minimal amount of
       configuration should be required for a new NVE to be
       provisioned.  Existing NVEs should not require any configuration
       changes when a new NVE is provisioned.  Ideally, NVEs should be
       able to auto-configure themselves.

   9.  Do not rely on IP Multicast in the Underlying Network.  Many
       data centers do not have IP multicast routing enabled.  If the
       Underlying Network is an IP network, the protocol should allow
       for, but not require, the presence of IP multicast services
       within the data center.

   10. Flexible mapping sources.  Inner-to-outer address mappings
       should be able to be created either by the NVEs themselves or by
       other third party entities (e.g. data center management or
       orchestration systems).  The protocol should allow for mappings
       created by an NVE to be automatically removed from all other
       NVEs if that NVE fails or is brought down unexpectedly.

   11. Secure.  See the Security Considerations section below.

5.  Security Considerations

   Editor's Note: This is an initial start on the security
   considerations section; it will need to be expanded, and suggestions
   for material to add are welcome.

   The protocol(s) should protect the integrity of the mappings against
   both off-path and on-path attacks.  They should authenticate the
   systems that are creating mappings, and rely on lightweight security
   mechanisms to minimize the impact on scalability and allow for
   simple configuration.

   Use of an overlay exposes virtual networks to attacks on the
   underlying network beyond attacks on the control protocol that is
   the subject of this draft.  In addition to the directly applicable
   security considerations for the networks involved, the use of an
   overlay enables attacks on encapsulated virtual networks via the
   underlying network.
   Examples of such attacks include injection of traffic into a virtual
   network by injecting encapsulated traffic into the underlying
   network, and modification of underlying network traffic to forward
   traffic among virtual networks that should have no connectivity.
   The control protocol should provide functionality to help counter
   some of these attacks, e.g., distribution of NVE access control
   lists for each virtual network to enable packets from non-
   participating NVEs to be discarded.  The primary security measures
   for the underlying network, however, need to be applied to the
   underlying network itself.  For example, if the underlying network
   includes connectivity across the public Internet, use of secure
   gateways (e.g., based on IPsec [RFC4301]) may be appropriate.

   The inner-to-outer address mappings used for forwarding data towards
   a remote NVE could also be used to filter incoming traffic, to
   ensure that a packet sourced from a given inner address came from
   the correct NVE source address, allowing access control to discard
   traffic that does not originate from the correct NVE.  This
   filtering functionality should be optional to use.

6.  Acknowledgements

   Thanks to the following people for reviewing and providing feedback:
   Fabio Maino, Victor Moreno, Ajit Sanzgiri, Chris Wright.

7.  Informative References

   [I-D.lasserre-nvo3-framework]
              Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for DC Network Virtualization",
              draft-lasserre-nvo3-framework-03 (work in progress),
              July 2012.

   [I-D.narten-nvo3-overlay-problem-statement]
              Narten, T., Sridharan, M., Dutt, D., Black, D., and L.
              Kreeger, "Problem Statement: Overlays for Network
              Virtualization",
              draft-narten-nvo3-overlay-problem-statement-03 (work in
              progress), July 2012.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.
Authors' Addresses

   Lawrence Kreeger
   Cisco

   Email: kreeger@cisco.com


   Dinesh Dutt
   Cumulus Networks

   Email: ddutt@cumulusnetworks.com


   Thomas Narten
   IBM

   Email: narten@us.ibm.com


   David Black
   EMC

   Email: david.black@emc.com


   Murari Sridharan
   Microsoft

   Email: muraris@microsoft.com