Internet Engineering Task Force W. Wang
Internet-Draft Zhejiang Gongshang University
Intended status: Standards Track E. Haleplidis
Expires: December 4, 2011 University of Patras
K. Ogawa
NTT Corporation
C. Li
Hangzhou BAUD Networks
J. Halpern
Ericsson
June 2, 2011
ForCES Logical Function Block (LFB) Library
draft-ietf-forces-lfb-lib-04
Abstract
This document defines basic classes of Logical Function Blocks (LFBs)
used in Forwarding and Control Element Separation (ForCES).  The
classes are defined according to the ForCES FE model [RFC5812] and
ForCES protocol [RFC5810] specifications.  These basic LFB classes
are scoped to meet the requirements of typical router functions and
are considered the base LFB library for ForCES.  Descriptions of
individual LFBs are presented, and detailed XML definitions are
included in the library.  Several use cases of the defined LFB
classes are also proposed.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on December 4, 2011.
Copyright Notice
Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.
Wang, et al. Expires December 4, 2011 [Page 1]
Internet-Draft ForCES LFB Library June 2011
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Terminology and Conventions . . . . . . . . . . . . . . . . . 4
1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4
2. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 5
3. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.1. Scope of the Library . . . . . . . . . . . . . . . . . . . 7
3.2. Overview of LFB Classes in the Library . . . . . . . . . . 9
3.2.1. LFB Design Choices . . . . . . . . . . . . . . . . . . 9
3.2.2. LFB Class Groupings . . . . . . . . . . . . . . . . . 9
3.2.3. Sample LFB Class Application . . . . . . . . . . . . . 11
3.3. Document Structure . . . . . . . . . . . . . . . . . . . . 12
4. Base Types . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1. Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2. Frame . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3. MetaData . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4. XML for Base Type Library . . . . . . . . . . . . . . . . 15
5. LFB Class Description . . . . . . . . . . . . . . . . . . . . 36
5.1. Ethernet Processing LFBs . . . . . . . . . . . . . . . . . 36
5.1.1. EtherPHYCop . . . . . . . . . . . . . . . . . . . . . 36
5.1.1.1. Data Handling . . . . . . . . . . . . . . . . . . 36
5.1.1.2. Components . . . . . . . . . . . . . . . . . . . . 37
5.1.1.3. Capabilities . . . . . . . . . . . . . . . . . . . 38
5.1.1.4. Events . . . . . . . . . . . . . . . . . . . . . . 38
5.1.2. EtherMACIn . . . . . . . . . . . . . . . . . . . . . . 38
5.1.2.1. Data Handling . . . . . . . . . . . . . . . . . . 38
5.1.2.2. Components . . . . . . . . . . . . . . . . . . . . 39
5.1.2.3. Capabilities . . . . . . . . . . . . . . . . . . . 40
5.1.2.4. Events . . . . . . . . . . . . . . . . . . . . . . 40
5.1.3. EtherClassifier . . . . . . . . . . . . . . . . . . . 40
5.1.4. EtherEncapsulator . . . . . . . . . . . . . . . . . . 42
5.1.5. EtherMACOut . . . . . . . . . . . . . . . . . . . . . 45
5.1.5.1. Data Handling . . . . . . . . . . . . . . . . . . 45
5.1.5.2. Components . . . . . . . . . . . . . . . . . . . . 45
5.1.5.3. Capabilities . . . . . . . . . . . . . . . . . . . 46
5.1.5.4. Events . . . . . . . . . . . . . . . . . . . . . . 46
5.2. IP Packet Validation LFBs . . . . . . . . . . . . . . . . 46
5.2.1. IPv4Validator . . . . . . . . . . . . . . . . . . . . 46
5.2.1.1. Data Handling . . . . . . . . . . . . . . . . . . 46
5.2.1.2. Components . . . . . . . . . . . . . . . . . . . . 48
5.2.1.3. Capabilities . . . . . . . . . . . . . . . . . . . 48
5.2.1.4. Events . . . . . . . . . . . . . . . . . . . . . . 48
5.2.2. IPv6Validator . . . . . . . . . . . . . . . . . . . . 48
5.2.2.1. Data Handling . . . . . . . . . . . . . . . . . . 48
5.2.2.2. Components . . . . . . . . . . . . . . . . . . . . 50
5.2.2.3. Capabilities . . . . . . . . . . . . . . . . . . . 50
5.2.2.4. Events . . . . . . . . . . . . . . . . . . . . . . 50
5.3. IP Forwarding LFBs . . . . . . . . . . . . . . . . . . . . 50
5.3.1. IPv4UcastLPM . . . . . . . . . . . . . . . . . . . . . 51
5.3.2. IPv4NextHop . . . . . . . . . . . . . . . . . . . . . 52
5.3.3. IPv6UcastLPM . . . . . . . . . . . . . . . . . . . . . 54
5.3.4. IPv6NextHop . . . . . . . . . . . . . . . . . . . . . 54
5.4. Redirect LFBs . . . . . . . . . . . . . . . . . . . . . . 54
5.4.1. RedirectIn . . . . . . . . . . . . . . . . . . . . . . 54
5.4.2. RedirectOut . . . . . . . . . . . . . . . . . . . . . 55
5.5. General Purpose LFBs . . . . . . . . . . . . . . . . . . . 55
5.5.1. BasicMetadataDispatch . . . . . . . . . . . . . . . . 56
5.5.2. GenericScheduler . . . . . . . . . . . . . . . . . . . 56
6. XML for LFB Library . . . . . . . . . . . . . . . . . . . . . 58
7. LFB Class Use Cases . . . . . . . . . . . . . . . . . . . . . 80
7.1. IP Forwarding . . . . . . . . . . . . . . . . . . . . . . 80
8. Contributors . . . . . . . . . . . . . . . . . . . . . . . . . 81
9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 82
10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 83
11. Security Considerations . . . . . . . . . . . . . . . . . . . 84
12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 85
12.1. Normative References . . . . . . . . . . . . . . . . . . . 85
12.2. Informative References . . . . . . . . . . . . . . . . . . 85
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 86
1. Terminology and Conventions
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
2. Definitions
This document follows the terminology defined by the ForCES
Requirements in [RFC3654] and by the ForCES framework in [RFC3746].
The definitions below are repeated for clarity.
Control Element (CE) - A logical entity that implements the ForCES
protocol and uses it to instruct one or more FEs on how to process
packets. CEs handle functionality such as the execution of
control and signaling protocols.
Forwarding Element (FE) - A logical entity that implements the
ForCES protocol. FEs use the underlying hardware to provide per-
packet processing and handling as directed/controlled by one or
more CEs via the ForCES protocol.
ForCES Network Element (NE) - An entity composed of one or more
CEs and one or more FEs. To entities outside an NE, the NE
represents a single point of management. Similarly, an NE usually
hides its internal organization from external entities.
LFB (Logical Function Block) - The basic building block that is
operated on by the ForCES protocol. The LFB is a well defined,
logically separable functional block that resides in an FE and is
controlled by the CE via ForCES protocol. The LFB may reside at
the FE's datapath and process packets or may be purely an FE
control or configuration entity that is operated on by the CE.
Note that the LFB is a functionally accurate abstraction of the
FE's processing capabilities, but not a hardware-accurate
representation of the FE implementation.
FE Topology - A representation of how the multiple FEs within a
single NE are interconnected. Sometimes this is called inter-FE
topology, to be distinguished from intra-FE topology (i.e., LFB
topology).
LFB Class and LFB Instance - LFBs are categorized by LFB Classes.
An LFB Instance represents an instantiation of an LFB Class (or
Type).  There may be multiple instances of the same LFB Class (or
Type) in an FE.  An LFB Class is represented by an LFB Class ID,
and an LFB Instance is represented by an LFB Instance ID.  As a
result, an LFB Class ID associated with an LFB Instance ID
uniquely identifies an LFB instance.
LFB Metadata - Metadata is used to communicate per-packet state
from one LFB to another, but is not sent across the network. The
FE model defines how such metadata is identified, produced and
consumed by the LFBs. It defines the functionality but not how
metadata is encoded within an implementation.
LFB Component - Operational parameters of the LFBs that must be
visible to the CEs are conceptualized in the FE model as the LFB
components. The LFB components include, for example, flags,
single parameter arguments, complex arguments, and tables that the
CE can read and/or write via the ForCES protocol (see below).
LFB Topology - Representation of how the LFB instances are
logically interconnected and placed along the datapath within one
FE. Sometimes it is also called intra-FE topology, to be
distinguished from inter-FE topology.
ForCES Protocol - While there may be multiple protocols used
within the overall ForCES architecture, the term "ForCES protocol"
and "protocol" refer to the Fp reference points in the ForCES
Framework in [RFC3746]. This protocol does not apply to CE-to-CE
communication, FE-to-FE communication, or to communication between
FE and CE managers. Basically, the ForCES protocol works in a
master-slave mode in which FEs are slaves and CEs are masters.
The ForCES protocol itself is defined in [RFC5810].
3. Introduction
RFC 3746 [RFC3746] specifies the Forwarding and Control Element
Separation (ForCES) framework.  In the framework, Control Elements
(CEs) configure and manage one or more separate Forwarding Elements
(FEs) within a Network Element (NE) by use of a ForCES protocol. RFC
5810 [RFC5810] specifies the ForCES protocol. RFC 5812 [RFC5812]
specifies the Forwarding Element (FE) model. In the model, resources
in FEs are described by classes of Logical Function Blocks (LFBs).
The FE model defines the structure and abstract semantics of LFBs,
and provides XML schema for the definitions of LFBs.
This document conforms to the specifications of the FE model
[RFC5812] and specifies detailed definitions of classes of LFBs,
including detailed XML definitions of LFBs. These LFBs form a base
LFB library for ForCES. LFBs in the base library are expected to be
combined to form an LFB topology for a typical router to implement IP
forwarding. It should be emphasized that an LFB is an abstraction of
functions rather than its implementation details. The purpose of the
LFB definitions is to represent functions so as to provide
interoperability between separate CEs and FEs.
More LFB classes with additional functions may be developed in the
future and documented by the IETF.  Vendors may also develop
proprietary LFB classes as described in the FE model [RFC5812].
3.1. Scope of the Library
The LFB classes described in this document are intended to provide
the functions of a typical router.  RFC 1812 [RFC1812] specifies
that a typical router is expected to provide functions to:
(1) Interface to packet networks and implement the functions required
by that network. These functions typically include:
o Encapsulating and decapsulating the IP datagrams with the
connected network framing (e.g., an Ethernet header and checksum),
o Sending and receiving IP datagrams up to the maximum size
supported by that network; this size is the network's Maximum
Transmission Unit (MTU),
o Translating the IP destination address into an appropriate
network-level address for the connected network (e.g., an Ethernet
hardware address), if needed, and
o Responding to network flow control and error indications, if any.
(2) Conform to specific Internet protocols including the Internet
Protocol (IPv4 and/or IPv6), Internet Control Message Protocol
(ICMP), and others as necessary.
(3) Receive and forward Internet datagrams.  Important issues in
this process are buffer management, congestion control, and fairness.
o Recognize error conditions and generate ICMP error and
information messages as required.
o Drop datagrams whose time-to-live fields have reached zero.
o Fragment datagrams when necessary to fit into the MTU of the next
network.
(4) Choose a next-hop destination for each IP datagram, based on the
information in its routing database.
(5) Usually support an interior gateway protocol (IGP) to carry out
distributed routing and reachability algorithms with the other
routers in the same autonomous system. In addition, some routers
will need to support an exterior gateway protocol (EGP) to exchange
topological information with other autonomous systems. For all
routers, it is essential to provide ability to manage static routing
items.
(6) Provide network management and system support facilities,
including loading, debugging, status reporting, exception reporting
and control.
A classical IP router utilizing the ForCES framework consists of a
CE running some controlling IGP and/or EGP function and FEs
implemented using Logical Function Blocks (LFBs) conforming to the
FE model [RFC5812] specification.  The CE, in conformance with the
ForCES protocol [RFC5810] and the FE model [RFC5812] specifications,
instructs the LFBs on the FE how to treat received/sent packets.
Packets in an IP router are received and transmitted on physical
media typically referred to as "ports".  Different physical port
media have different ways of encapsulating outgoing frames and
decapsulating incoming frames, as well as different attributes that
influence their behavior and how frames get encapsulated or
decapsulated.  This document deals only with Ethernet physical
media; future documents may deal with other types of media.  This
document also refers to a "port" as an abstraction that constitutes
a PHY and a MAC, as described by LFBs such as EtherPHYCop,
EtherMACIn, and EtherMACOut.
IP packets emanating from port LFBs are processed by a validation
LFB before being forwarded to the next LFB.  After the validation
process, the packet is passed to an LFB where the IP forwarding
decision is made.  In the IP forwarding LFBs, a Longest Prefix
Match LFB is used to look up the destination information in a
packet and select a next-hop index for sending the packet onward.
A next-hop LFB uses the next-hop index metadata to apply the proper
headers to the IP packets and direct them to the proper egress.
Note that, in processing IP packets, this document adheres to the
weak-host model [RFC1122], since that is the most usable model for
a packet-processing Network Element.
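The path above can be sketched as a chain of stages that pass a packet
together with a metadata dictionary.  This is purely an illustration of
the metadata flow, not a ForCES implementation; the function names echo
the LFB classes in this document, but the Python structures are
hypothetical.

```python
# Hypothetical sketch of the LFB datapath: each stage consumes a
# packet plus a metadata dict and produces metadata for the next LFB,
# mirroring how per-packet metadata accompanies packets between LFBs.

def ether_classifier(frame, metadata):
    # Decapsulate the Ethernet header; classify by EtherType.
    metadata["EtherType"] = frame["ethertype"]
    return frame["payload"]

def ipv4_validator(pkt, metadata):
    # A validator drops bad packets (here: TTL already expired).
    if pkt["ttl"] == 0:
        metadata["ExceptionID"] = "BadTTL"
        return None
    return pkt

def ipv4_ucast_lpm(pkt, metadata):
    # Longest prefix match selects a HopSelector (next-hop ID);
    # a single-route table is assumed here for brevity.
    metadata["HopSelector"] = 0
    return pkt

def ipv4_next_hop(pkt, metadata, next_hop_table):
    # The HopSelector metadata indexes the next-hop table.
    entry = next_hop_table[metadata["HopSelector"]]
    metadata["NexthopIPv4Addr"] = entry["NextHopIPAddr"]
    return pkt

metadata = {}
frame = {"ethertype": 0x0800, "payload": {"ttl": 64, "dst": "10.1.2.3"}}
pkt = ipv4_validator(ether_classifier(frame, metadata), metadata)
pkt = ipv4_ucast_lpm(pkt, metadata)
pkt = ipv4_next_hop(pkt, metadata, {0: {"NextHopIPAddr": "10.0.0.1"}})
```

The key point is that downstream LFBs never re-parse upstream state;
they consume it as metadata, which is exactly the coupling the FE
model's metadata definitions formalize.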
3.2. Overview of LFB Classes in the Library
It is critical to classify functional requirements into various
classes of LFBs and to construct a base LFB library that is typical
of, yet flexible enough for, various IP forwarding equipment.
3.2.1. LFB Design Choices
A few design principles factored into choosing what the base LFBs
look like.  These are:
o If a function can be implemented either as one LFB or as two or
more LFBs at the same cost, the choice is to go with two or more
LFBs, so as to provide more flexibility for implementers.
o When flexibility is not required, an LFB should take advantage of
its independence as much as possible and have minimal coupling
with other LFBs.  The coupling may come from LFB attribute
definitions as well as from physical implementations.
o Unless there is a clear difference in functionality, similar
packet processing should not be represented as two or more
different LFBs; otherwise, it may add an extra burden on
implementations trying to achieve interoperability.
3.2.2. LFB Class Groupings
The document defines groups of LFBs for typical router function
requirements:
(1) A group of Ethernet processing LFBs is defined to abstract the
packet processing for Ethernet as the port media type.  As the most
popular media type with rich processing features, Ethernet media
processing LFBs were a natural choice.  Definitions for processing of
other port media types like POS or ATM may be incorporated in the
library in future versions of this document or in a future separate
document.
The following LFBs are defined for Ethernet processing:
EtherPHYCop (section 5.1.1)
EtherMACIn (section 5.1.2)
EtherClassifier (section 5.1.3)
EtherEncapsulator (section 5.1.4)
EtherMACOut (section 5.1.5)
(2) A group of LFBs is defined for the IP packet validation process.
The following LFBs are defined for IP Validation processing:
IPv4Validator (section 5.2.1)
IPv6Validator (section 5.2.2)
(3) A group of LFBs is defined to abstract the IP forwarding process.
The following LFBs are defined for IP Forwarding processing:
IPv4UcastLPM (section 5.3.1)
IPv4NextHop (section 5.3.2)
IPv6UcastLPM (section 5.3.3)
IPv6NextHop (section 5.3.4)
(4) A group of LFBs is defined to abstract the process for the
redirect operation, i.e., data packet transmission between the CE
and FEs.
The following LFBs are defined for redirect processing:
RedirectIn (section 5.4.1)
RedirectOut (section 5.4.2)
(5) A group of LFBs is defined to abstract some general purpose
packet processing.  These processes are usually common to many
processing locations in an FE LFB topology.
The following LFBs are defined for general purpose processing:
BasicMetadataDispatch (section 5.5.1)
GenericScheduler (section 5.5.2)
3.2.3. Sample LFB Class Application
Although section 7 will present use cases for the LFBs defined in
this document, this section shows a sample LFB class application in
advance so that readers can get a quick overview of the LFB classes.
Figure 1 shows the typical LFB processing path for an IPv4 unicast
forwarding case with Ethernet media interfaces.  To focus on the IP
forwarding function, some inputs or outputs of LFBs in the figure
that are not related to that function are omitted.  Section 7.1 will
describe the LFB topology in more detail.
+-----+ +------+
| | | |
| |<---------------|Ether |<----------------------------+
| | |MACOut| |
| | | | |
|Ether| +------+ |
|PHY | |
|Cop | +---+ |
|#1 | +-----+ | |----->IPv6 Packets |
| | | | | | +----+ |
| | |Ether| | | | | |
| |->|MACIn|-->| |IPv4| | |
+-----+ | | | |-+->| | +---+ |
+-----+ +--+ | | |unicast +-----+ | | |
Ether | | |------->| | | | |
. Classifier| | |packet |IPv4 | | | |
. | | | |Ucast|->| |--+ |
. | | | |LPM | | | | |
+---+ | +----+ +-----+ | | | |
+-----+ | | | IPv4 +---+ | |
| | | | | Validator IPv4 | |
+-----+ |Ether| | |-+ NextHop | |
| |->|MACIn|-->| |IPv4 | |
| | | | | |----->IPv6 Packets | |
|Ether| +-----+ +---+ +----+ | |
|PHY | Ether | | | |
|Cop | Classifier | | +-------+ | |
|#n | | | | | | |
| | +------+ | |<--| Ether |<-+ |
| | | |<------| | | Encap | |
| |<---------------|Ether | ...| | +-------+ |
| | |MACOut| +---| | |
| | | | | +----+ |
+-----+ +------+ | BasicMetadataDispatch |
+-------------------------+
Figure 1: An IPv4 Forwarding Case
3.3. Document Structure
Base type definitions, including data types, packet frame types, and
metadata types, are presented first, ahead of the definitions of the
various LFB classes.  Section 4 (Base Types) provides a description
of the base types used by this LFB library.  So that these base
types can be used extensively by other LFB class definitions, the
base type definitions are provided as a separate XML library file,
distinct from the LFB definition library.
Within every group of LFB classes, a set of LFBs is defined, each
for an individual function.  Section 5 (LFB Class Descriptions)
provides textual descriptions of the individual LFBs.  Note that a
complete definition of an LFB requires both a textual description
and an XML definition.
LFB classes are finally defined in XML, following the specifications
and schema defined in the ForCES FE model [RFC5812].  Section 6
(XML LFB Definitions) provides the complete XML definitions of the
base LFB class library.
Section 7 provides several use cases on how some typical router
functions can be implemented using the base LFB library defined in
this document.
4. Base Types
The FE model [RFC5812] has specified the following data types as
predefined (built-in) atomic data types:
char, uchar, int16, uint16, int32, uint32, int64, uint64, string[N],
string, byte[N], boolean, octetstring[N], float16, float32, float64.
Based on these atomic data types and with the use of type definition
elements in the FE model XML schema, new data types, packet frame
types, and metadata types can further be defined.
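As a rough illustration of the fixed widths these atomic types imply
on the wire, the numeric types can be mapped to octet sizes.  The
mapping below is our own sketch for orientation, not part of the FE
model or its schema.

```python
# Hypothetical octet widths of the FE model's numeric atomic types,
# cross-checked against Python's struct module for one case.
import struct

ATOMIC_SIZES = {
    "char": 1, "uchar": 1,
    "int16": 2, "uint16": 2,
    "int32": 4, "uint32": 4,
    "int64": 8, "uint64": 8,
    "float16": 2, "float32": 4, "float64": 8,
}

# e.g., a uint32 occupies four octets in network byte order:
assert struct.calcsize("!I") == ATOMIC_SIZES["uint32"]
```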
To define a base LFB library for typical router functions, base
data types, frame types, and metadata types MUST be defined.  This
section provides a description of these types and detailed XML
definitions for the base types.
In order for the base type definitions to be used extensively by LFB
definitions other than this base LFB library, the base type
definitions are provided in a separate XML library file labeled
"BaseTypeLibrary".  Users can refer to this library by the
statement:

   <load library="BaseTypeLibrary" location="..."/>
4.1. Data
The following data types are currently defined and put in the base
type library:
(TBD)
4.2. Frame
According to the FE model [RFC5812], frame types are used in LFB
definitions to define the types of frames the LFB expects at its
input port(s) and emits at its output port(s).  The <frameDef>
element in the FE model is used to define a new frame type.
The following frame types are currently defined and put in the base
type library as base frame types for the LFB library:
(TBD)
4.3. MetaData
LFB metadata is used to communicate per-packet state from one LFB to
another.  The <metadataDef> element in the FE model is used to
define a new metadata type.
The following metadata types are currently defined and put in the
base type library as base metadata types for the LFB library
definitions:
(TBD)
4.4. XML for Base Type Library
The following frame types are defined in the base type library:

   EthernetAll    - Any kind of Ethernet frame
   EthernetII     - An Ethernet II frame
   ARP            - An ARP packet
   IPv4           - An IPv4 packet
   IPv6           - An IPv6 packet
   IPv4Unicast    - An IPv4 unicast packet
   IPv4Multicast  - An IPv4 multicast packet
   IPv6Unicast    - An IPv6 unicast packet
   IPv6Multicast  - An IPv6 multicast packet
   Arbitrary      - Any kind of frame
The following data types are defined in the base type library:

   IPv4Addr (byte[4])  - An IPv4 address.

   IPv6Addr (byte[16]) - An IPv6 address.

   IEEEMAC (byte[6])   - An IEEE MAC address.

   LANSpeedType (uint32) - Network speed values, with the special
      values:
      LAN_SPEED_10M   - 10M Ethernet
      LAN_SPEED_100M  - 100M Ethernet
      LAN_SPEED_1G    - 1000M Ethernet
      LAN_SPEED_10G   - 10G Ethernet
      LAN_SPEED_AUTO  - LAN speed auto-negotiated

   DuplexType (uint32) - Duplex types, with the special values:
      Auto         - Auto negotiation
      Half-duplex  - Port negotiated half duplex
      Full-duplex  - Port negotiated full duplex

   PortStatusValues (uchar) - The possible values of port status;
      used for both administrative and operational status.  Special
      values:
      Disabled  - The port is operatively disabled.
      UP        - The port is up.
      Down      - The port is down.
   MACInStatsType - The statistics for EtherMACIn; a struct of:
      NumPacketsReceived (uint64) - The number of packets received.
      NumPacketsDropped (uint64)  - The number of packets dropped.

   MACOutStatsType - The statistics for EtherMACOut; a struct of:
      NumPacketsTransmitted (uint64) - The number of packets
         transmitted.
      NumPacketsDropped (uint64)     - The number of packets dropped.

   EtherDispatchEntryType - The type of an EtherDispatch table entry;
      a struct of:
      LogicalPortID (uint32) - The logical port ID.
      EtherType (uint32)     - The EtherType value in the Ethernet
         header.
      OutputIndex (uint32)   - The group output port index.

   EtherDispatchTableType - The type of the EtherDispatch table; an
      array of EtherDispatchEntryType.

   VlanInputTableEntryType - The VLAN input table entry type; a
      struct of:
      IncomingPortID (uint32) - The incoming port ID.
      VlanID (uint32)         - The VLAN ID.
      LogicalPortID (uint32)  - The logical port ID.

   VlanInputTableType - The VLAN input table type; an array of
      VlanInputTableEntryType.

   EtherClassifyStatsType - Ethernet classification statistics; a
      struct of:
      EtherType (uint32)  - The EtherType value.
      PacketsNum (uint64) - The number of packets.

   EtherClassifyStatsTableType - The Ethernet classification
      statistics table type; an array of EtherClassifyStatsType.
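To show how the two classifier tables relate, the sketch below looks up
a logical port from (IncomingPortID, VlanID) and then a group output
index from the EtherDispatch table.  The `classify` helper and the
assumption that dispatch matches on both LogicalPortID and EtherType
are ours, purely for illustration.

```python
# Hypothetical EtherClassifier table lookups: the VLAN input table
# maps (IncomingPortID, VlanID) -> LogicalPortID, and the
# EtherDispatch table maps (LogicalPortID, EtherType) -> OutputIndex.

def classify(incoming_port, vlan_id, ether_type,
             vlan_input_table, ether_dispatch_table):
    logical_port = None
    for e in vlan_input_table:
        if (e["IncomingPortID"] == incoming_port
                and e["VlanID"] == vlan_id):
            logical_port = e["LogicalPortID"]
            break
    for e in ether_dispatch_table:
        if (e["LogicalPortID"] == logical_port
                and e["EtherType"] == ether_type):
            return logical_port, e["OutputIndex"]
    return logical_port, None  # no dispatch entry matched

vlan_input_table = [
    {"IncomingPortID": 1, "VlanID": 100, "LogicalPortID": 11}]
ether_dispatch_table = [
    {"LogicalPortID": 11, "EtherType": 0x0800, "OutputIndex": 2}]
```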
   IPv4ValidatorStatisticsType - The statistics type in
      IPv4Validator; a struct of:
      badHeaderPkts (uint64)      - Number of bad header packets.
      badTotalLengthPkts (uint64) - Number of bad total length
         packets.
      badTTLPkts (uint64)         - Number of bad TTL packets.
      badChecksumPkts (uint64)    - Number of bad checksum packets.

   IPv6ValidatorStatisticsType - The statistics type in
      IPv6Validator; a struct of:
      badHeaderPkts (uint64)      - Number of bad header packets.
      badTotalLengthPkts (uint64) - Number of bad total length
         packets.
      badHopLimitPkts (uint64)    - Number of bad Hop Limit packets.
   IPv4PrefixInfoType - IPv4 prefix information; the entry type for
      the IPv4 prefix table.  A struct of:
      IPv4Address (IPv4Addr) - An IPv4 address.
      Prefixlen (uchar)      - The prefix length.
      HopSelector (uint32)   - The next-hop ID, which points to the
         next-hop table.
      ECMPFlag (boolean)     - An ECMP flag for this route.  False:
         this route does not have multiple next hops; True: this
         route has multiple next hops.
      DefaultRouteFlag (boolean) - A default route flag, for
         supporting loose RPF.  False: this is not a default route;
         True: this route is a default route.

   IPv4PrefixTableType - The IPv4 prefix table type; an array of
      IPv4PrefixInfoType.

   IPv4UcastLPMStatsType - The statistics type in IPv4UcastLPM; a
      struct of:
      InRcvdPkts (uint64)  - The total number of input packets
         received.
      FwdPkts (uint64)     - IPv4 packets forwarded by this LFB.
      NoRoutePkts (uint64) - The number of IP datagrams discarded
         because no route could be found.
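The longest-prefix-match behavior that consumes such a prefix table can
be sketched as below.  The dict entries stand in for
IPv4PrefixInfoType fields; the lookup function itself is our
illustration, not a normative algorithm.

```python
# Illustrative longest prefix match over IPv4PrefixInfoType-like
# entries.  Addresses and prefixes are 32-bit integers; a Prefixlen
# of 0 with DefaultRouteFlag True models the default route.

def lpm_lookup(dst, prefix_table):
    """Return the HopSelector of the longest matching prefix,
    or None if no route exists (counted as NoRoutePkts)."""
    best = None
    for e in prefix_table:
        plen = e["Prefixlen"]
        mask = 0 if plen == 0 else (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        if (dst & mask) == (e["IPv4Address"] & mask):
            if best is None or plen > best["Prefixlen"]:
                best = e
    return None if best is None else best["HopSelector"]

prefix_table = [
    # Default route (matches everything at length 0).
    {"IPv4Address": 0, "Prefixlen": 0, "HopSelector": 9,
     "ECMPFlag": False, "DefaultRouteFlag": True},
    # 10.0.0.0/8 pointing at next-hop table entry 3.
    {"IPv4Address": 0x0A000000, "Prefixlen": 8, "HopSelector": 3,
     "ECMPFlag": False, "DefaultRouteFlag": False},
]
```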
   IPv6PrefixInfoType - IPv6 prefix information; the entry type for
      the IPv6 prefix table.  A struct of:
      IPv6Address (IPv6Addr) - An IPv6 address.
      Prefixlen (uchar)      - The prefix length.
      HopSelector (uint32)   - The next-hop ID, which points to the
         next-hop table.
      ECMPFlag (boolean)     - An ECMP flag for this route.  False:
         this route does not have multiple next hops; True: this
         route has multiple next hops.
      DefaultRouteFlag (boolean) - A default route flag.  False:
         this is not a default route; True: this route is a default
         route.

   IPv6PrefixTableType - The IPv6 prefix table type; an array of
      IPv6PrefixInfoType.

   IPv6UcastLPMStatsType - The statistics type in IPv6UcastLPM; a
      struct of:
      InRcvdPkts (uint64)  - The total number of input packets
         received.
      FwdPkts (uint64)     - IPv6 packets forwarded by this LFB.
      NoRoutePkts (uint64) - The number of IP datagrams discarded
         because no route could be found.
   IPv4NextHopInfoType - IPv4 next-hop information; the entry type
      for the IPv4 next-hop table.  A struct of:
      L3PortID (uint32) - The ID of the logical/physical output port
         passed on to the neighboring LFB instance.  This ID
         indicates which port leads to the neighbor, as defined by
         L3.
      MTU (uint32)      - The Maximum Transmission Unit of the
         outgoing port, used for deciding whether the packet needs
         fragmentation.
      NextHopIPAddr (IPv4Addr) - The next-hop IPv4 address.
      MediaEncapInfoIndex (uint32) - The index passed on to the
         neighboring LFB instance, used to look up a table
         (typically media-encapsulation related) further downstream.
      LFBOutputSelectIndex (uint32) - The LFB group output port
         index used to select the downstream LFB port.  Some
         possible downstream LFB instances are: a) EtherEncapsulator,
         b) another type of media LFB, c) a metadata dispatcher,
         d) a redirect LFB, etc.  Note: LFBOutputSelectIndex is the
         FromPortIndex for the port group "successout" in the
         LFBTopology table (of the FEObject LFB) as defined for the
         next-hop LFB.

   IPv4NextHopTableType - The IPv4 next-hop table type; an array of
      IPv4NextHopInfoType.

   IPv6NextHopInfoType - IPv6 next-hop information; the entry type
      for the IPv6 next-hop table.  A struct with the same
      components as IPv4NextHopInfoType (L3PortID, MTU,
      NextHopIPAddr, MediaEncapInfoIndex, LFBOutputSelectIndex),
      except that NextHopIPAddr is of type IPv6Addr.

   IPv6NextHopTableType - The IPv6 next-hop table type; an array of
      IPv6NextHopInfoType.

   EncapTableEntryType - The Ethernet encapsulation table entry
      type; a struct of:
      DstMac (IEEEMAC)  - The Ethernet MAC address of the neighbor.
      SrcMac (IEEEMAC)  - The source MAC address used in
         encapsulation.
      VlanID (uint32)   - The VLAN ID.
      L2PortID (uint32) - The output logical L2 port ID.

   EncapTableType - The Ethernet encapsulation table type; an array
      of EncapTableEntryType.
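The MediaEncapInfoIndex produced by a next-hop LFB is what indexes a
table of EncapTableEntryType-like entries downstream.  The sketch
below illustrates that hand-off; the function and dict layout are
hypothetical, not part of the LFB definitions.

```python
# Illustrative EtherEncapsulator step: the MediaEncapInfoIndex
# metadata from the next-hop LFB selects an encapsulation entry
# (DstMac, SrcMac, VlanID) and an output logical L2 port.

def ether_encapsulate(ip_packet, metadata, encap_table):
    e = encap_table[metadata["MediaEncapInfoIndex"]]
    header = {"DstMac": e["DstMac"], "SrcMac": e["SrcMac"],
              "VlanID": e["VlanID"]}
    # L2PortID selects the outgoing logical L2 port.
    return {"header": header, "payload": ip_packet}, e["L2PortID"]

encap_table = {
    7: {"DstMac": "00:11:22:33:44:55", "SrcMac": "66:77:88:99:aa:bb",
        "VlanID": 100, "L2PortID": 2}}

frame, out_port = ether_encapsulate(
    {"dst": "10.1.2.3"}, {"MediaEncapInfoIndex": 7}, encap_table)
```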
MetadataDispatchType
The entry type for Metadata dispatch table.
MetadataID
metadata ID
uint32
MetadataValue
metadata value.
uint32
OutputIndex
group output port index.
Wang, et al. Expires December 4, 2011 [Page 28]
Internet-Draft ForCES LFB Library June 2011
uint32
MetadataDispatchTableType
Metadata dispatch table type.
MetadataDispatchType
SchdDisciplineType
scheduling discipline type.
uint32
FIFO
First In First Out scheduler.
RR
Round Robin.
QueueDepthType
the entry type for queue depth
table.
QueueID
Queue ID
uint32
QueueDepthInPackets
the Queue Depth when the depth units
are packets.
uint32
QueueDepthInBytes
the Queue Depth when the depth units
are bytes.
uint32
QueueDepthTableType
the Depth of Queue table type.
QueueDepthType
PHYPortID
The physical port ID that a packet has entered.
1
uint32
SrcMAC
Source MAC Address
2
IEEEMAC
DstMAC
Destination MAC Address
3
IEEEMAC
LogicalPortID
ID of logical port.
4
uint32
EtherType
The value of EtherType.
5
uint32
VlanID
Vlan ID.
6
uint32
VlanPriority
The priority of Vlan.
7
uint32
NexthopIPv4Addr
Nexthop IPv4 address.
8
IPv4Addr
NexthopIPv6Addr
Nexthop IPv6 address.
9
IPv6Addr
HopSelector
HopSelector is the nexthop ID that points into the
nexthop table.
10
uint32
ExceptionID
Exception Types
11
uint32
Other
Any other exception.
BroadCastPacket
Packet with destination address equal to
255.255.255.255
BadTTL
The packet can't be forwarded as the TTL
has expired.
IPv4HeaderLengthMismatch
IPv4 Packet's header length is less
than 5.
LengthMismatch
The packet length reported by link layer
is less than the total length field.
RouterAlertOptions
Packet IP header includes Router Alert
options.
RouteInTableNotFound
There is no route in the route table
corresponding to the packet destination address.
NextHopInvalid
The NexthopID is invalid
FragRequired
The MTU for outgoing interface is less
than the packet size.
LocalDelivery
The packet is for a local interface.
GenerateICMP
ICMP packet needs to be generated.
PrefixIndexInvalid
The prefixIndex is wrong.
IPv6HopLimitZero
Packet with Hop Limit zero
IPv6NextHeaderHBH
Packet with next header set to Hop-by-Hop
ValidateErrorID
Validate Error Types
12
uint32
Other
Any other validation error.
InvalidIPv4PacketSize
Packet size reported is less than 20
bytes.
NotIPv4Packet
Packet is not IP version 4.
InvalidIPv4HeaderLengthSize
Packet's header length is less than 5.
InvalidIPv4Checksum
Packet with invalid checksum.
InvalidIPv4SrcAddr1
Packet with source address equal to
255.255.255.255.
InvalidIPv4SrcAddr2
Packet with source address 0.
InvalidIPv4SrcAddr3
Packet with source address of form
127.any.
InvalidIPv4SrcAddr4
Packet with source address in Class E
domain.
InvalidIPv6PacketSize
Packet size reported is less than 40
bytes.
NotIPv6Packet
Packet is not IP version 6.
InvalidIPv6SrcAddr1
Packet with multicast source address (the
MSB of the source address is 0xFF).
InvalidIPv6SrcAddr2
Packet with source address set to
loopback (::1).
InvalidIPv6DstAddr1
Packet with destination set to 0 or ::1.
L3PortID
ID of L3 port.
13
uint32
RedirectIndex
Redirect Output port index.
14
uint32
MediaEncapInfoIndex
The index for media encap table in Media encap LFB.
15
uint32
5. LFB Class Description
According to ForCES specifications, an LFB (Logical Function Block) is
a well-defined, logically separable functional block that resides in an
FE, and is a functionally accurate abstraction of the FE's processing
capabilities. An LFB Class (or type) is a template that represents a
fine-grained, logically separable aspect of FE processing. Most LFBs
are related to packet processing in the data path. LFB classes are
the basic building blocks of the FE model. Note that RFC 5810 has
already defined an 'FE Protocol LFB', which is a logical entity in
each FE to control the ForCES protocol. RFC 5812 has already defined
an 'FE Object LFB'. Information like the FE Name, FE ID, FE State,
and LFB Topology in the FE is represented in this LFB.
As specified in Section 3.1, this document focuses on the base LFB
library for implementing typical router functions, especially IP
forwarding. As a result, the LFB classes in the library are all
base LFBs used to implement router forwarding.
5.1. Ethernet Processing LFBs
As the most popular physical and data link layer protocol, Ethernet
is widely deployed. It is therefore a basic requirement for a
router to be able to process various Ethernet data packets.
Note that there exist different versions of Ethernet protocols, like
Ethernet V2, 802.3 RAW, IEEE 802.3/802.2, IEEE 802.3/802.2 SNAP.
There also exist varieties of LAN techniques based on Ethernet, like
various VLANs, MACinMAC, etc. Ethernet processing LFBs defined here
are intended to be able to cope with all these variations of Ethernet
technology.
There are also various types of Ethernet physical interface media.
Among them, copper and fiber media may be the most popular ones. As
a base LFB definition and a starting point, this document only
defines an Ethernet physical LFB with copper media. Specific LFBs
for other media interfaces may be defined in future versions of the
library.
5.1.1. EtherPHYCop
EtherPHYCop LFB abstracts an Ethernet interface physical layer with
media limited to copper.
5.1.1.1. Data Handling
This LFB is the interface to the Ethernet physical media. The LFB
handles Ethernet frames coming in from or going out of the FE.
Ethernet frames sent and received cover all packets encapsulated with
different versions of Ethernet protocols, like Ethernet V2, 802.3
RAW, IEEE 802.3/802.2, IEEE 802.3/802.2 SNAP, including packets
encapsulated with varieties of LAN techniques based on Ethernet, like
various VLANs, MACinMAC, etc. Therefore in the XML an EthernetAll
frame type has been introduced.
Ethernet frames are received from the physical media port and passed
downstream to LFBs such as EtherMACIn via a singleton output known as
"EtherPHYOut". A 'PHYPortID' metadatum, indicating the physical
port on which the frame arrived from the external world, is passed
along with the frame.
Ethernet packets are received by this LFB from upstream LFBs, such
as EtherMACOut, via the singleton input known as "EtherPHYIn" before
being sent out onto the external world.
5.1.1.2. Components
The AdminStatus component is defined for CE to administratively
manage the status of the LFB. The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus. The
default
value is set to 'Down'.
An OperStatus component captures the physical port operational
status. A PHYPortStatusChanged event is defined so the LFB can
report to the CE whenever there is an operational status change of
the physical port.
The PHYPortID component is a unique identification for a physical
port. It is defined as 'read-only' for the CE; its value is
enumerated by the FE. The component will be used to produce a
'PHYPortID' metadatum at the LFB output and to associate it with
every Ethernet packet this LFB receives. The metadatum will be
handed to downstream LFBs so they can use the PHYPortID.
A group of components are defined for link speed management. The
AdminLinkSpeed is for CE to configure link speed for the port and the
OperLinkSpeed is for CE to query the actual link speed in operation.
The default value for the AdminLinkSpeed is set to auto-negotiation
mode.
A group of components are defined for duplex mode management. The
AdminDuplexMode is for CE to configure proper duplex mode for the
port and the OperDuplexMode is for CE to query the actual duplex mode
in operation. The default value for the AdminDuplexMode is set to
auto-negotiation mode.
A CarrierStatus component captures the status of the carrier and
specifies whether the port is linked with an operational connector.
The default value for the CarrierStatus is 'false'.
5.1.1.3. Capabilities
The capability information for this LFB includes the link speeds that
are supported by the FE (SupportedLinkSpeed) as well as the supported
duplex modes (SupportedDuplexMode).
5.1.1.4. Events
This LFB is defined to be able to generate several events in which
the CE may be interested. There is an event for changes in the
status of the physical port (PhyPortStatusChanged). Such an event
will notify that the physical port status has been changed and the
report will include the new status of the physical port.
Another event captures changes in the operational link speed
(LinkSpeedChanged). Such an event will notify the CE that the
operational speed has been changed and the report will include the
new negotiated operational speed.
A final event captures changes in the duplex mode
(DuplexModeChanged). Such an event will notify the CE that the
duplex mode has been changed and the report will include the new
negotiated duplex mode.
5.1.2. EtherMACIn
EtherMACIn LFB abstracts an Ethernet port at the MAC data link
layer. It specifically describes Ethernet processing functions such
as checking MAC address locality, deciding whether Ethernet packets
should be bridged, providing Ethernet layer flow control, etc.
5.1.2.1. Data Handling
The LFB is expected to receive all types of Ethernet packets via a
singleton input known as "EtherMACIn"; these are usually output from
an Ethernet physical layer LFB, like an EtherPHYCop LFB, along with
a metadatum indicating the physical port ID on which the packet
arrived.
The LFB is defined with two separate singleton outputs. All output
packets are in Ethernet format, as received from the physical layer
LFB, and cover all types of Ethernet packets.
The first singleton output is known as "NormalPathOut". It usually
outputs Ethernet packets to some LFB like an EtherClassifier LFB for
further L3 forwarding processing, along with a PHYPortID metadatum
indicating which physical port the packet came from.
The second singleton output is known as "L2BridgingPathOut".
Although the LFB library this document defines is basically intended
to meet typical router functions, it attempts to be forward compatible
with future router functions. The "L2BridgingPathOut" is defined to
meet the requirement that L2 bridging functions may be optionally
supported simultaneously with L3 processing and some L2 bridging LFBs
that may be defined in the future. If the FE supports L2 bridging,
the CE can enable or disable it by means of an "L2BridgingPathEnable"
component in the FE. If it is enabled, by also instantiating some L2
bridging LFB instances following the L2BridgingPathOut, FEs are
expected to fulfill L2 bridging functions. The L2BridgingPathOut
output carries exactly the same packets as the NormalPathOut output.
This LFB can be set to work in a Promiscuous Mode, allowing all
packets to pass through the LFB without being dropped. Otherwise, a
locality check will be performed based on the local MAC addresses.
All packets that do not pass the locality check will be
dropped.
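The receive decision just described can be sketched as follows. This is a hypothetical Python illustration, not part of the LFB model; the function name and the representation of the component values are assumptions, though PromiscuousMode and LocalMACAddresses are the components this LFB defines:

```python
def ether_mac_in_accepts(dst_mac: str, promiscuous_mode: bool,
                         local_mac_addresses: list) -> bool:
    """Locality check performed by EtherMACIn on a received frame.

    In promiscuous mode every frame passes through the LFB; otherwise
    a frame is kept only if its destination MAC matches one of the
    configured local MAC addresses (frames failing the check drop).
    """
    if promiscuous_mode:
        return True
    return dst_mac.lower() in (m.lower() for m in local_mac_addresses)
```

The CE would toggle the first argument's behavior simply by writing the PromiscuousMode component.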
This LFB can perform Ethernet layer flow control. This is usually
implemented cooperatively by the EtherMACIn LFB and the EtherMACOut
LFB. The flow control is further distinguished by Tx flow control
and Rx flow control, separately for sending process and receiving
process flow controls.
5.1.2.2. Components
The AdminStatus component is defined for CE to administratively
manage the status of the LFB. The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus. The
default
value is set to 'Down'.
The LocalMACAddresses component specifies the local MAC addresses
on which locality checks will be based. This component is an array
of MAC addresses with 'read-write' access permission.
An L2BridgingPathEnable component captures whether the LFB is set to
work as an L2 bridge. An FE that does not support bridging will
internally set this flag to false and additionally set the flag
property to read-only. The default value is 'false'.
The PromiscuousMode component specifies whether the LFB is set to
work in promiscuous mode. The default value is 'false'.
The TxFlowControl component defines whether the LFB is performing
flow control on sending packets. The default value is 'false'.
The RxFlowControl component defines whether the LFB is performing
flow control on receiving packets. The default value is 'false'.
A struct component, MACInStats, defines a set of statistics for this
LFB, including the number of received packets and the number of
dropped packets.
5.1.2.3. Capabilities
This LFB does not have a list of capabilities.
5.1.2.4. Events
This LFB does not have any events specified.
5.1.3. EtherClassifier
EtherClassifier LFB abstracts the process of decapsulating Ethernet
packets and classifying them into various network layer data packets
according to information included in the Ethernet packet headers.
The input of the LFB expects all types of Ethernet packets,
including VLAN Ethernet types. The input is a singleton that may
connect to an upstream LFB like the EtherMACIn LFB. The input is
also capable of multiplexing to allow multiple upstream LFBs to be
connected. For instance, when the L2 bridging function is enabled
in the EtherMACIn LFB, some L2 bridging LFBs may be applied. In
this case,
some Ethernet packets may have to be input to the EtherClassifier
LFB for classification after L2 processing, while packets output
directly from EtherMACIn may simultaneously need to enter this LFB.
The input of this LFB can handle this case. Usually, every
input Ethernet packet is expected to be associated with a PHYPortID
metadatum, indicating the physical port the packet comes from. In
some cases, for instance in a MACinMAC case, a LogicalPortID
metadatum may also be expected to accompany the Ethernet packet to
further indicate which logical port the Ethernet packet belongs to.
Note that PHYPortID metadata is always expected while LogicalPortID
metadata is optionally expected.
A VLANInputTable component is defined in the LFB to classify VLAN
Ethernet packets. According to IEEE VLAN specifications, all
Ethernet packets can be recognized as VLAN types: a packet with no
VLAN encapsulation is treated as carrying VLAN tag 0. Therefore the
table actually applies to every input packet of the LFB. The table
assigns every input packet a new
LogicalPortID according to the packet incoming port ID and the VLAN
ID. A packet incoming port ID is defined as a physical port ID if
there is no logical port ID associated with the packet, or a logical
port ID if there is a logical port ID associated with the packet.
The VLAN ID is exactly the VLAN ID in the packet if it is a VLAN
packet, or 0 if it is not a VLAN packet. Note that a logical port ID
of a packet may be rewritten with a new one by the VLANInputTable
processing.
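The VLANInputTable lookup described above can be sketched as follows. This is a hypothetical Python illustration; representing the table as a dictionary keyed by (incoming port ID, VLAN ID) is an assumption, not the model's actual encoding:

```python
def classify_vlan_input(vlan_input_table: dict, phy_port_id: int,
                        logical_port_id=None, vlan_id: int = 0) -> int:
    """Sketch of the VLANInputTable lookup in EtherClassifier.

    The incoming port ID is the logical port ID when one accompanies
    the packet, otherwise the physical port ID; a non-VLAN packet is
    treated as VLAN ID 0.  The table maps (incoming port ID, VLAN ID)
    to the new LogicalPortID assigned to the packet.
    """
    incoming = logical_port_id if logical_port_id is not None else phy_port_id
    return vlan_input_table[(incoming, vlan_id)]
```

Note how a packet that already carries a logical port ID may have it rewritten by this lookup, as the text states.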
An EtherDispatchTable component is defined to dispatch every Ethernet
packet to a group of outputs according to the logical port ID
assigned by VLANInputTable to the packet and the Ethernet type in the
Ethernet packet header. By configuring the dispatch table, the CE
can make the LFB classify packets of various network layer protocol
types and output them at different output ports. The LFB can thus
be expected to classify packets according to protocols like IPv4,
IPv6, MPLS, ARP, ND, etc.
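The EtherDispatchTable lookup can likewise be sketched (hypothetical Python; the dictionary encoding and the example EtherType-to-index assignments are assumptions, illustrative of a CE-configured table):

```python
def dispatch_output(ether_dispatch_table: dict, logical_port_id: int,
                    ether_type: int) -> int:
    """Sketch of the EtherDispatchTable lookup in EtherClassifier.

    The FE uses the LogicalPortID assigned by VLANInputTable plus the
    EtherType from the frame header to pick the index of the group
    output port the packet leaves on.
    """
    return ether_dispatch_table[(logical_port_id, ether_type)]

# Example CE-configured table (values are illustrative only):
#   IPv4 (0x0800) on logical port 100 -> output index 0
#   IPv6 (0x86DD) on logical port 100 -> output index 1
#   ARP  (0x0806) on logical port 100 -> output index 2
example_table = {(100, 0x0800): 0, (100, 0x86DD): 1, (100, 0x0806): 2}
```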
The output of the LFB is hence defined as a group output. Because
there may be various types of protocol packets at the output ports,
the frame type produced is defined as arbitrary for the purpose of
wide extensibility in the future. A set of metadata is produced to
accompany every output packet for downstream LFBs to use. The
metadata contain normal information like the PHYPortID, as well as
the Ethernet type, source MAC address, and destination MAC address
of the original Ethernet packet. Moreover, they contain the logical
port ID assigned by this LFB. This metadata may be used by
downstream LFBs for packet processing. Lastly, the metadata may
conditionally contain information like VlanID and VlanPriority, on
the condition that the packet is a VLAN packet.
A MaxOutputPorts capability is defined for the LFB to indicate how
many classification output ports the LFB can support.
Note that the logical port IDs and physical port IDs mentioned above
are all originally configured by the CE and are globally effective
within a ForCES NE (Network Element). To distinguish a physical
port ID from a logical port ID in the incoming port ID field of the
VLANInputTable, physical port IDs and logical port IDs must be
assigned separate number spaces.
There are also some other components, capabilities, and events
defined in the LFB for various purposes. See Section 6 for detailed
XML definitions of the LFB.
5.1.4. EtherEncapsulator
EtherEncapsulator LFB abstracts the process of encapsulating IP
packets into Ethernet packets.
The input of the LFB expects IP packets, both IPv4 and IPv6 types.
The input is a singleton that may connect to an upstream LFB like an
IPv4NextHop, an IPv6NextHop, or any LFB that needs to output packets
for Ethernet encapsulation. The input is capable of multiplexing to
allow multiple upstream LFBs to be connected. For instance, an
IPv4NextHop and an IPv6NextHop LFB may exist concurrently, and some
L2 bridging LFBs may also output packets to this LFB simultaneously.
The input of this LFB can handle this case. Usually, every input
packet is expected to be associated with an output logical port ID
and a next hop IP address as its metadata. When the L2 bridging
function is implemented, an input packet may also optionally receive
a VLAN priority as its metadata; the default value for this metadata
is 0.
There are several outputs for this LFB. One singleton output is for
normal success packet output. Packets that have found Ethernet L2
information and have been successfully encapsulated into an Ethernet
packet are output from this port to the downstream LFB. Note that
this LFB specifies Ethernet II as its Ethernet encapsulation type.
The success output also produces an output logical port ID as a
metadatum for every output packet, allowing a downstream LFB to
decide which logical port the packet should go out on. The
downstream LFB usually dispatches packets based on the associated
output logical port ID. Hence, a generic dispatch LFB as defined in
Section 5.6.1 may be adopted for dispatching packets based on the
output logical port ID.
Note that in some implementations of LFB topology, the processing to
dispatch packets based on an output logical port ID may also take
place before Ethernet encapsulation, i.e., packets coming into the
encapsulator LFB have already been switched to individual logical
output port paths. In this case, the EtherEncap LFB success output
may be directly connected to a downstream LFB like an EtherMACOut
LFB.
Another singleton output is for IPv4 packets that are unable to find
Ethernet L2 encapsulation information in the ARP table in the LFB.
In this case, a copy of the packets may need to be redirected to an
ARP LFB in the FE, or to the CE if the ARP function is implemented
by the CE. More importantly, a next hop IP address metadatum should
be associated with every packet output here. When an ARP LFB or the
CE receives these packets and the associated next hop IP address
metadata, it may be expected to generate ARP protocol messages based
on the packets' next hop IP addresses to try to get L2 information
for these packets. Refreshed L2 information can then be added to
the ARP table in this encapsulator LFB by the ARP LFB or by the CE.
As a result, these packets can then successfully find L2
information, be encapsulated into Ethernet packets, and be output
via the normal success output to the downstream LFB. (Editorial
note 1: may
need discussion on what additional metadata these output packets
need. Note that the packets may be redirected to the CE, and the CE
should know what the purpose of the packets is; a metadatum may be
needed to indicate this. Editorial note 2: we may adopt another way
to address the case of packets unable to do ARP. The packets may be
redirected to the ARP LFB or CE without keeping a copy of them in
this encapsulator LFB. We expect that after the ARP LFB or CE has
processed ARP requests based on the packets, the packets will be
redirected back from the ARP LFB or CE to this encapsulator LFB for
Ethernet encapsulation. By that time, it is hoped, the ARP table
will have been refreshed with new L2 information that will make
these packets able to find it.)
One more singleton output is for IPv6 packets that are unable to
find Ethernet L2 encapsulation information in the neighbor table in
the LFB. In this case, a copy of the packets may need to be
redirected to an ND LFB in the FE, or to the CE if the IPv6 Neighbor
Discovery function is implemented by the CE. More importantly, a
next hop IP address metadatum should be associated with every packet
output here. When the ND LFB or CE receives these packets and the
associated next hop IP address metadata, it may be expected to
generate Neighbor Discovery protocol messages based on the packets'
next hop IP addresses to try to get L2 information for these
packets. Refreshed L2 information can then be added to the neighbor
table in this LFB by the ND LFB or by the CE. As a result, these
packets can then successfully find L2 information, be encapsulated
into Ethernet packets, and be output via the normal success output
to the downstream LFB. (Editorial note: may need discussion on what
additional metadata these output packets need. Note that the
packets may be redirected to the CE, and the CE should know what the
purpose of the packets is; a metadatum may be needed to indicate
this.)
A singleton output is specifically defined for exception packet
output. All packets that are abnormal during the operations in this
LFB are output via this port. Currently, only one abnormal case is
defined: packets that cannot find proper information in the VLAN
output table.
The VLAN output table is defined as a component of the LFB. The
table uses a logical port ID as an index to find a VLAN ID and a new
output logical port ID. In reality, the logical port ID applied
here is the output logical port ID received in the metadata
associated with every input packet. According to IEEE VLAN
specifications, all Ethernet packets can be recognized as VLAN types
by defining that if
there is no VLAN encapsulation in a packet, it is treated as
carrying VLAN tag 0. Therefore, every input IP packet actually has
to look up the VLAN output table to find a VLAN ID and a new output
logical port ID according to its original logical port ID.
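The VLAN output table lookup can be sketched as follows (hypothetical Python; the dictionary encoding is an assumption, not the model's representation):

```python
def vlan_output_lookup(vlan_output_table: dict,
                       output_logical_port_id: int):
    """Sketch of the VLAN output table lookup in the encapsulator.

    Every input IP packet consults the table, indexed by the output
    logical port ID carried in its metadata, to obtain the VLAN ID
    and the new output logical port ID used for encapsulation.
    """
    vlan_id, new_logical_port_id = vlan_output_table[output_logical_port_id]
    return vlan_id, new_logical_port_id
```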
The ARP table in the LFB is defined as a component of the LFB. The
table allows an IPv4 packet to find its next hop Ethernet layer MAC
addresses. An input IPv4 packet uses the output logical port ID
obtained by looking up the VLAN output table, together with the next
hop IPv4 address obtained from the upstream next hop applicator LFB,
to look up the ARP table for the Ethernet L2 information, i.e., the
source MAC address and destination MAC address.
The neighbor table is defined as another component of the LFB. The
table allows an IPv6 packet to find its next hop Ethernet layer MAC
addresses. Like the ARP table, an input IPv6 packet uses the output
logical port ID obtained from looking up the VLAN output table,
together with the packet's next hop IPv6 address obtained from the
upstream next hop applicator LFB, to look up the neighbor table for
the Ethernet source MAC address and destination MAC address.
As will be described in the address resolution LFBs section
(Section 5.4), Layer 2 address resolution protocols may be
implemented in two ways. One is in the FE, with a specific address
resolution LFB like an ARP LFB or an ND LFB. The other is to
redirect address resolution protocol messages to the CE for the CE
to implement the function.
As described in section 5.4, the ARP LFB defines the ARP table in
this encapsulator LFB as its alias, and the ND LFB defines the
neighbor table as its alias. This means that the ARP table or the
neighbor table will be maintained or refreshed by the ARP LFB or the
ND LFB when the LFBs are used.
Note that the ARP table and the neighbor table defined in this LFB
both have the read-write property. The CE can also configure the
tables via the ForCES protocol [RFC5810]. This makes it possible
for the IPv4 ARP protocol or the IPv6 Neighbor Discovery protocol to
be implemented on the CE side, i.e., after the CE runs an ARP or
Neighbor Discovery protocol and gets address resolution results, the
CE can configure them into the ARP or neighbor table in the FE.
With all the information obtained from the VLAN output table and the
ARP or neighbor table, input IPv4 or IPv6 packets can then be
encapsulated into Ethernet layer packets. Note that according to
IEEE 802.1Q, if an input packet carries non-zero VLAN priority
metadata, the packet will always be encapsulated with a VLAN tag,
regardless of whether the VLAN ID is zero or not. If the VLAN
priority and the VLAN ID are both zero, the packet will be
encapsulated without a VLAN tag.
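The tagging rule above can be captured in a one-line predicate (hypothetical Python; the function name is an assumption):

```python
def needs_vlan_tag(vlan_priority: int, vlan_id: int) -> bool:
    """Tagging rule described above (per IEEE 802.1Q).

    A frame is tagged whenever the VLAN priority is non-zero,
    regardless of the VLAN ID, and also whenever the VLAN ID is
    non-zero.  Only when both are zero is the frame sent untagged.
    """
    return vlan_priority != 0 or vlan_id != 0
```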
Successfully encapsulated packets are then output via the success
output port.
There are also some other components, capabilities, and events
defined in the LFB for various purposes. See Section 6 for detailed
XML definitions of the LFB.
5.1.5. EtherMACOut
EtherMACOut LFB abstracts an Ethernet port at the MAC data link
layer. This LFB describes the Ethernet packet output process.
Ethernet output functions are closely related to Ethernet input
functions; therefore, many components defined in this LFB are
aliases of EtherMACIn LFB components.
5.1.5.1. Data Handling
The LFB is expected to receive all types of Ethernet packets via a
singleton input known as "EtherPktsIn"; these are usually output
from an Ethernet encapsulation LFB, along with a metadatum
indicating the physical port ID that the packet will go through.
(Editorial note: need more discussion on whether the port ID is at
the physical layer or the L2 layer.)
The LFB is defined with a singleton output. All output packets are
in Ethernet format, possibly with various Ethernet types, along with
a metadatum indicating the physical port ID the packet is to go
through. This output links to a downstream LFB that is usually an
Ethernet physical LFB like the EtherPHYCop LFB.
This LFB can perform Ethernet layer flow control. This is usually
implemented cooperatively by the EtherMACIn LFB and the EtherMACOut
LFB. The flow control is further distinguished by Tx flow control
and Rx flow control, separately for sending process and receiving
process flow control.
Note that as a base definition, functions like multiple virtual MAC
layers are not supported in this LFB version. They may be supported
in the future by defining a subclass or a new version of this LFB.
5.1.5.2. Components
The AdminStatus component is defined for CE to administratively
manage the status of the LFB. The CE may administratively start up
or shut down the LFB by changing the value of AdminStatus. The
default
value is set to 'Down'. Note that this component is defined as an
alias of the AdminStatus component in the EtherMACIn LFB. This
implies that an EtherMACOut LFB usually coexists with an EtherMACIn
LFB, both of which share the same administrative status management
by the CE. Alias properties, as defined in the ForCES FE model (RFC
5812), will be used by the CE to declare the target component to
which this alias refers; they include the target LFB class and
instance IDs as well as the path to the target component. Note that
these properties are set by the CE only at run time and are outside
the XML definitions of this LFB.
The MTU component defines the maximum transmission unit.
The TxFlowControl component defines whether the LFB is performing
flow control on sending packets. The default value is 'false'.
Note that this component is defined as an alias of the TxFlowControl
component in the EtherMACIn LFB.
The RxFlowControl component defines whether the LFB is performing
flow control on receiving packets. The default value is 'false'.
Note that this component is defined as an alias of the RxFlowControl
component in the EtherMACIn LFB.
A struct component, MACOutStats, defines a set of statistics for this
LFB, including the number of transmitted packets and the number of
dropped packets.
5.1.5.3. Capabilities
This LFB does not have a list of capabilities.
5.1.5.4. Events
This LFB does not have any events specified.
5.2. IP Packet Validation LFBs
These LFBs are defined to abstract the IP packet validation
process. An IPv4Validator LFB is specifically for IPv4 protocol
validation and an IPv6Validator LFB for IPv6.
5.2.1. IPv4Validator
The IPv4Validator LFB performs IPv4 packet validation according to
RFC 1812.
5.2.1.1. Data Handling
This LFB performs IPv4 validation according to RFC 1812. The IPv4
packet is then output to the corresponding port depending on the
validation result: whether the packet is a unicast or a multicast
one, whether an exception has occurred, or whether the validation
failed.
This LFB always expects, as input, packets which have been indicated
as IPv4 packets by an upstream LFB, like an EtherClassifier LFB.
There is no specific metadata expected by the input of the LFB.
Note that, as a default provision of RFC 5812, in the FE model all
metadata produced by upstream LFBs pass through downstream LFBs by
default without being specified at the input or output port. Only
metadata that will be used (consumed) by an LFB is explicitly marked
at the input of the LFB as expected metadata. For instance, in this
LFB, even though there is no specific metadata expected, metadata
like the PHYPortID produced by an upstream physical layer LFB will
still pass through this LFB. If some component in the LFB needs the
metadata, it can still use it regardless of whether the metadata has
been marked as expected.
Four output ports are defined to output various validation results.
All validated IPv4 unicast packets will be output at the singleton
port known as "IPv4UnicastOut". All validated IPv4 multicast packets
will be output at the singleton port known as "IPv4MulticastOut"
port. There is no metadata specifically required to be produced at
these output ports.
A singleton port known as "ExceptionOut" is defined to output packets
which have been validated as exception packets. An exception ID
metadatum is produced to indicate what has caused the exception.
Currently defined exception types include:
o Packet with destination address equal to 255.255.255.255
o Packet with expired TTL
o Packet with header length more than 5 words
o Packet IP header including Router Alert options
Note that, although the TTL is checked in this LFB for validity,
operations on the TTL, like decrementing it, are performed only in a
subsequent forwarding LFB.
The final singleton port, known as "FailOut", is defined for all
packets that have failed the validation process. A validation error
ID metadatum is associated with every failed packet to indicate the
reason. Currently defined reasons include:
Wang, et al. Expires December 4, 2011 [Page 47]
Internet-Draft ForCES LFB Library June 2011
o Packet size reported is less than 20 bytes
o Packet whose version is not IPv4
o Packet with header length < 5 words
o Packet with total length field < 20 bytes
o Packet with invalid checksum
o Packet with source address equal to 255.255.255.255
o Packet with source address equal to 0
o Packet with source address of form {127, <any>} (loopback)
o Packet with source address in the Class E address space
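As a non-normative illustration, the checks listed above can be
sketched as a single validation routine. The reason strings and the
function names here are purely illustrative, not part of the LFB
definition:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over 16-bit words (RFC 1071 style)."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def validate_ipv4(packet: bytes):
    """Return a failure reason string, or None if the header passes."""
    if len(packet) < 20:
        return "packet shorter than 20 bytes"
    version = packet[0] >> 4
    if version != 4:
        return "version is not IPv4"
    ihl = packet[0] & 0x0F                  # header length in 32-bit words
    if ihl < 5:
        return "header length < 5 words"
    total_len = struct.unpack("!H", packet[2:4])[0]
    if total_len < 20:
        return "total length field < 20"
    if ipv4_checksum(packet[:ihl * 4]) != 0:
        return "invalid checksum"
    src = struct.unpack("!I", packet[12:16])[0]
    if src == 0xFFFFFFFF:
        return "source address is 255.255.255.255"
    if src == 0:
        return "source address is 0"
    if (src >> 24) == 127:
        return "source address is loopback"
    if (src >> 28) == 0xF:
        return "source address in Class E"
    return None
```

A packet that returns None here would continue to the unicast or
multicast classification; a non-None result corresponds to the
"FailOut" path with a validation error ID.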
5.2.1.2. Components
This LFB has only one struct component, the
IPv4ValidatorStatisticsType, which defines a set of statistics for
the validation process, including the number of bad header packets,
the number of bad total length packets, the number of bad TTL
packets, and the number of bad checksum packets.
5.2.1.3. Capabilities
This LFB does not have a list of capabilities.
5.2.1.4. Events
This LFB does not have any events specified.
5.2.2. IPv6Validator
The IPv6Validator LFB performs IPv6 packet validation according to
RFC 2460.
5.2.2.1. Data Handling
This LFB performs IPv6 validation according to RFC 2460. The IPv6
packet is then output to the corresponding port according to the
validation result: whether the packet is a unicast or a multicast
one, an exception has occurred, or the validation failed.
This LFB always expects, as input, packets which have been indicated
as IPv6 packets by an upstream LFB, like an EtherClassifier LFB.
No specific metadata is expected at the input of the LFB.
Similar to the IPv4Validator LFB, the IPv6Validator LFB also defines
four output ports to output packets according to the various
validation results.
All validated IPv6 unicast packets are output at the singleton port
known as "IPv6UnicastOut", and all validated IPv6 multicast packets
at the singleton port known as "IPv6MulticastOut". No metadata is
required to be produced at these output ports.
A singleton port known as "ExceptionOut" is defined to output packets
that have been validated as exception packets. An exception ID
metadatum is produced to indicate what caused the exception.
Currently defined exception types include:
o Packet with hop limit set to zero
o Packet with a link-local destination address
o Packet with a link-local source address
o Packet with destination all-routers
o Packet with destination all-nodes
o Packet with next header set to Hop-by-Hop
The final singleton port, known as "FailOut", is defined for all
packets that have failed the validation process. A validation error
ID metadatum is associated with every failed packet to indicate the
reason. Currently defined reasons include:
o Packet size reported is less than 40 bytes
o Packet whose version is not IPv6
o Packet with multicast source address (the MSB of the source
address is 0xFF)
o Packet with destination address set to 0 or ::1
o Packet with source address set to loopback (::1).
Note that, in the base type library, the definitions of the exception
ID and validation error ID metadata apply to both the IPv4Validator
and IPv6Validator LFBs; i.e., the two LFBs share the same metadata
definitions, with different ID assignments inside.
5.2.2.2. Components
This LFB has only one struct component, the
IPv6ValidatorStatisticsType, which defines a set of statistics for
the validation process, including the number of bad header packets,
the number of bad total length packets, and the number of bad hop
limit packets.
5.2.2.3. Capabilities
This LFB does not have a list of capabilities.
5.2.2.4. Events
This LFB does not have any events specified.
5.3. IP Forwarding LFBs
IP Forwarding LFBs are defined to abstract IP forwarding processes.
As a base LFB library, this document restricts the scope of its IP
forwarding LFB definitions to IP unicast forwarding. LFBs for
functions like IP multicast forwarding may be defined in future
versions of this document.
A typical IP unicast forwarding job is usually realized by looking up
a forwarding information table to find next-hop information and then,
based on that information, forwarding packets to specific output
ports. This usually takes two steps: first, look up a forwarding
information table using the longest prefix matching (LPM) rule to
find a next-hop index; then, use that index to look up a next-hop
information table to find enough information to submit packets to
output ports. This document abstracts the forwarding process mainly
according to this two-step model. Other models do exist, such as one
with a single forwarding information base in which next-hop
information is conjoined with the forwarding information. In such a
case, if ForCES technology is to be applied, some translation work
has to be done in the FE to translate the attributes defined by this
document into the attributes the implementation actually uses.
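The two-step model can be sketched as follows. This is a
non-normative Python sketch: the table contents, field names, and
dictionary layout are illustrative, not definitions from this
library:

```python
import ipaddress

# Step-1 table: (prefix, hop selector).  Step-2 table: hop selector
# -> next-hop information.  All entries are illustrative.
prefix_table = [
    ("0.0.0.0/0", 0),          # default route -> hop selector 0
    ("192.0.2.0/24", 1),
    ("192.0.2.128/25", 2),
]
next_hop_table = {
    0: {"port": 1, "next_hop": "203.0.113.1"},
    1: {"port": 2, "next_hop": "192.0.2.254"},
    2: {"port": 3, "next_hop": "192.0.2.129"},
}

def lpm_lookup(dst: str) -> int:
    """Step 1: longest prefix match over the prefix table."""
    addr = ipaddress.ip_address(dst)
    best_len, selector = -1, None
    for prefix, sel in prefix_table:
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_len, selector = net.prefixlen, sel
    return selector

def forward(dst: str) -> dict:
    """Step 2: resolve the hop selector into next-hop information."""
    return next_hop_table[lpm_lookup(dst)]
```

In the LFB model, step 1 corresponds to the IPv4UcastLPM/IPv6UcastLPM
LFBs (the hop selector travels as a metadatum) and step 2 to the
IPv4NextHop/IPv6NextHop LFBs.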
Based on this IP forwarding abstraction, two kinds of typical IP
unicast forwarding LFBs are defined: a unicast LPM lookup LFB and a
next-hop application LFB. They are further distinguished by the IPv4
and IPv6 protocols.
5.3.1. IPv4UcastLPM
This LFB abstracts the process of looking up an IPv4 unicast LPM
table. The input of the LFB expects IPv4 unicast packets. An IPv4
prefix table is defined as a component of the LFB to provide
forwarding information for every input packet. The destination IPv4
address of every packet is used as the key to look up the table under
the longest prefix matching (LPM) rule. The matching result is a hop
selector, which is output to downstream LFBs as an index into
next-hop information.
The normal output of the LFB is a singleton output, which outputs
IPv4 unicast packets that have passed the LPM lookup and obtained a
hop selector as the lookup result. The hop selector is associated
with the packet as a metadatum. The normal output of the LPM LFB is
usually followed by a next-hop applicator LFB, which receives packets
with their next-hop IDs and forwards the packets based on those IDs.
The hop selector associated with every packet from the normal output
directly acts as a next-hop ID for such a next-hop applicator LFB.
The LFB is defined to provide some facilities that support users in
implementing equal-cost multipath (ECMP) routing or reverse path
forwarding (RPF). However, this LFB itself provides neither ECMP nor
RPF. To implement ECMP or RPF, additional specific LFBs, such as a
dedicated ECMP LFB, have to be defined. This work may be done in a
future version of this document.
For the LFB to support ECMP, an ECMP flag is defined in the prefix
table entries. When the flag is set to true, it indicates that the
table entry is for ECMP only. In this case, the hop selector in the
table entry is used as an index for a downstream ECMP-specific LFB to
find the corresponding next-hop IDs. When ECMP is applied, multiple
next hops are usually obtained.
To distinguish normal output from ECMP output, a specific ECMP output
is defined. A packet that has matched a prefix table entry whose
ECMP flag is true is always output from this port, with the hop
selector as its lookup result. This output usually goes directly to
a downstream ECMP processing LFB. In the ECMP LFB, multiple next-hop
IDs may be found based on the hop selector, and further ECMP
algorithms may be applied to optimize the route. As its result, the
ECMP LFB outputs one or more optimized next-hop IDs to its downstream
LFB, which is usually a next-hop applicator LFB.
For the LFB to support RPF, a default route flag is defined in the
prefix table entry. When set to true, the prefix entry is identified
as a default route and also as a forbidden route for RPF. To
implement the various forms of RPF, one or more specific LFBs have to
be defined. This work may be done in a future version of the
library.
An exception output is defined to output exceptional packets,
including cases in which packets cannot find any route in the prefix
table.
There are also some other components defined in the LFB for various
purposes. See section 6 for detailed XML definitions of the LFB.
5.3.2. IPv4NextHop
This LFB abstracts the process of next hop information application to
IPv4 packets.
The LFB receives an IPv4 packet with an associated next-hop ID and
uses the ID to look up a next-hop table to find an appropriate output
port of the LFB. The LFB also performs the TTL operation and
checksum recalculation for every IPv4 packet received.
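The TTL decrement together with the checksum recalculation can be
sketched with the incremental update of RFC 1624, HC' = ~(~HC + ~m +
m'), where m is the old 16-bit word containing the TTL and m' the new
one. This is a non-normative sketch operating on raw IPv4 header
bytes:

```python
def decrement_ttl(header: bytearray) -> bool:
    """Decrement the IPv4 TTL and incrementally update the header
    checksum per RFC 1624.  Returns False if the TTL is already 0
    (the packet would then take the exception path)."""
    if header[8] == 0:
        return False
    old_word = (header[8] << 8) | header[9]   # TTL | protocol word
    header[8] -= 1
    new_word = (header[8] << 8) | header[9]
    hc = (header[10] << 8) | header[11]       # current checksum
    total = (~hc & 0xFFFF) + (~old_word & 0xFFFF) + new_word
    total = (total & 0xFFFF) + (total >> 16)  # fold carries
    total = (total & 0xFFFF) + (total >> 16)
    hc = ~total & 0xFFFF
    header[10], header[11] = hc >> 8, hc & 0xFF
    return True
```

The incremental form avoids re-summing the whole header for a
one-word change, which is why it is common in forwarding paths.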
The input of the LFB is a singleton input that expects IPv4 unicast
packets and hop selector metadata from an upstream LFB. Usually the
upstream LFB is an IPv4UcastLPM LFB, although other upstream LFBs are
possible; for instance, when ECMP is supported, the upstream LFB may
be a specific ECMP LFB.
The next-hop ID in the hop selector metadata of a packet is then used
as an index to look up a next-hop table defined in the LFB. Via this
table and the next-hop index, the information needed to forward the
packet is found. Every next-hop table entry includes the following
information:
output logical port ID, which will be used by downstream LFBs to
find proper output port.
next hop option, which decides whether packets matching this table
entry are destined to locally attached hosts. Locally attached
hosts are hosts in the same subnet as this router. The next hop
option is marked 'forwarded to locally attached host' if the
next-hop entry is for delivery to locally attached hosts; all
other next-hop entries are marked 'normal forwarding'. If a data
packet matches a next-hop entry whose option is 'forwarded to
locally attached host', the next-hop IP address metadata
associated with the data packet on output from the LFB is forced
to the destination IP address of the data packet. If a data
packet matches a next-hop entry whose option is 'normal
forwarding', the next-hop IP address metadata at output is set to
the next-hop IP address indicated by that entry. The advantage of
defining this option for locally attached hosts is that the number
of next-hop entries may be greatly reduced when there is a vast
number of locally attached hosts.
next hop IP address, which is used by downstream LFBs to find the
proper output port IP address for the packet. Note that when the
next hop option is set to 'forwarded to locally attached host',
this field becomes invalid; in that case, the next-hop IP address
is assigned directly from the destination IP address of the data
packet matching the entry.
encapsulation output index, which is used to find the proper
output of this LFB for the data packet. Usually, this index
indicates which encapsulation following the LFB is to be applied
to data packets matching this next-hop entry, classifying the
packets into different instances of a group output port. The
index can also be used purely to indicate an output port instance,
acting as a classifier based on next-hop IDs. For instance, a
next-hop table entry can be defined with its encapsulation output
index directed to an output port followed by LFBs that redirect
data packets to the Control Element (CE). A next-hop entry can
also be defined for data packets that need special local
processing in the Forwarding Element (FE). In such cases, the
index is not really acting as an encapsulation index, but rather
as a pure output index.
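The per-entry behavior described above can be sketched as follows.
The field and key names are illustrative mirrors of the next-hop
table attributes, not normative definitions:

```python
from dataclasses import dataclass

@dataclass
class NextHopEntry:              # illustrative field names
    logical_port_id: int         # output logical port ID
    local_host: bool             # 'forwarded to locally attached host'?
    next_hop_ip: str             # ignored when local_host is True
    encap_output_index: int      # success-output group instance

def apply_next_hop(entry: NextHopEntry, dst_ip: str) -> dict:
    """Produce the output port instance and the metadata that the
    success output group would carry for one packet."""
    # For locally attached hosts, the next-hop IP metadatum is the
    # packet's own destination address; otherwise it comes from the
    # table entry.
    next_hop = dst_ip if entry.local_host else entry.next_hop_ip
    return {
        "output_instance": entry.encap_output_index,
        "OutputLogicalPortID": entry.logical_port_id,
        "NextHopIPAddr": next_hop,
    }
```

This makes explicit why the 'locally attached host' option shrinks
the table: one entry serves every host on the subnet, since the
destination address itself supplies the next hop.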
As a result, the LFB is defined with two output ports, one for
success output and one for exception output. The success output is a
group output, with an index indicating which output instance in the
group is adopted; the index is exactly the encapsulation output index
described above. Downstream LFBs connected to the success output
group may be various encapsulation LFBs, such as LFBs for Ethernet or
PPP encapsulation, LFBs for local processing, and LFBs for
redirecting packets to the CE for processing. The next-hop table
uses the encapsulation output index to indicate to which port
instance in the output group a packet should go.
Every port instance of the success output group produces output
logical port ID and next-hop IP address metadata for every output
packet. These metadata are used by downstream LFBs to further
implement forwarding or local processing. Note that for a next-hop
option marked as 'forwarded to locally attached host', the next-hop
IP address metadatum for the packet is substituted with the
destination IP address of the packet.
The exception output of the LFB is a singleton output that outputs
packets in exceptional cases. An exception ID further indicates the
exception reason. An exception may happen when a hop selector is
found invalid, when ICMP packets need to be generated, etc. The
exception ID is also produced as a metadatum by the output, to be
transmitted to a downstream LFB.
There are also some other components defined in the LFB for various
purposes. See section 6 for detailed XML definitions of the LFB.
5.3.3. IPv6UcastLPM
This LFB abstracts the process of looking up an IPv6 unicast LPM
table. The definition of the IPv6UcastLPM LFB is identical to that
of the IPv4UcastLPM LFB, except that all related IP addresses are
IPv6 rather than IPv4 addresses. See Section 6 for the detailed XML
definition of this LFB.
5.3.4. IPv6NextHop
This LFB abstracts the process of next hop information application to
IPv6 packets.
The definition of the IPv6NextHop LFB is identical to that of the
IPv4NextHop LFB, except that all related IP addresses are IPv6 rather
than IPv4 addresses. See Section 6 for the detailed XML definition
of this LFB.
5.4. Redirect LFBs
Redirect LFBs abstract the data packet transport process between the
CE and the FE. Some packets output from some LFBs may have to be
delivered to the CE for further processing, and some packets
generated by the CE may have to be delivered to the FE, and further
to some specific LFBs, for data path processing. According to
RFC 5810 [RFC5810], data packets and their associated metadata are
encapsulated in ForCES redirect messages for transport between the CE
and FE. We define two LFBs to abstract the process: a RedirectIn LFB
and a RedirectOut LFB. Usually, in the LFB topology of an FE, only
one RedirectIn LFB instance and one RedirectOut LFB instance exist.
5.4.1. RedirectIn
A RedirectIn LFB abstracts the process for CE to inject data packets
into FE LFB topology so as to input data packets into FE data paths.
From the LFB topology point of view, the RedirectIn LFB acts as a
source point for data packets coming from the CE; therefore, the
RedirectIn LFB is defined with one output and no input.
The output of the RedirectIn LFB is defined as a group output.
Packets produced by this output have arbitrary frame types, decided
by the CE that generates the packets; possible frames include IPv4,
IPv6, or ARP packets. The CE may associate metadata to indicate the
frame types, and may also associate other metadata with data packets
to convey various information about them. Among these, there MUST
exist a 'RedirectIndex' metadatum, an integer acting as an index:
when the CE transmits the metadatum and its bound packet to a
RedirectIn LFB, the LFB reads the metadatum and outputs the packet to
the group output port instance whose index it indicates. The
detailed XML definition of the metadatum is in the XML for the base
type library in Section 4.4.
All metadata from the CE other than the 'RedirectIndex' metadatum are
output from the RedirectIn LFB along with their bound packets. Note
that a packet without an associated 'RedirectIndex' metadatum will be
dropped by the LFB.
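A minimal sketch of this dispatch, assuming a simple mapping from
port indices to per-instance queues (all names illustrative):

```python
def redirect_in(packet: bytes, metadata: dict, out_ports: dict):
    """Deliver a CE-injected packet to the group output instance
    named by its 'RedirectIndex' metadatum; drop the packet if that
    metadatum is absent (or names no existing instance, an assumed
    error handling here).  Remaining metadata travel with the
    packet."""
    idx = metadata.pop("RedirectIndex", None)
    if idx is None or idx not in out_ports:
        return None                       # packet dropped
    out_ports[idx].append((packet, metadata))
    return idx
```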
There is no component defined for the current version of the
RedirectIn LFB. The detailed XML definition of the LFB can be found
in Section 6.
5.4.2. RedirectOut
A RedirectOut LFB abstracts the process for LFBs in the FE to deliver
data packets to the CE. From the LFB topology point of view, the
RedirectOut LFB acts as a sink point for data packets going to the
CE; therefore, the RedirectOut LFB is defined with one input and no
output.
The input of the RedirectOut LFB is defined as a singleton input, but
it is capable of receiving packets from multiple LFBs by multiplexing
the singleton input. The input expects packets of arbitrary frame
types and all types of metadata. All metadata associated with the
input packets are delivered to the CE via the ForCES protocol
redirect message [RFC5810].
There is no component defined for the current version of the
RedirectOut LFB. The detailed XML definition of the LFB can be found
in Section 6.
5.5. General Purpose LFBs
5.5.1. BasicMetadataDispatch
A BasicMetadataDispatch LFB is defined to abstract the process in
which a packet is dispatched to some path based on the value of an
associated metadatum.
The LFB has a singleton input. Packets of arbitrary frame types can
enter the LFB, but every input packet is required to be associated
with the metadatum that the LFB uses for dispatch. A packet not
associated with this metadatum is dropped inside the LFB.
A group output is defined to output packets according to a
MetadataDispatchTable defined as a component of the LFB. The table
contains fields for a metadata ID, a metadata value, and an output
port index. A packet associated with a metadatum carrying that
metadata ID is output to the group port instance whose index
corresponds to the metadata value in the table. The metadata value
used by the table is required to be of an integer data type; this
means the LFB currently only allows a metadatum with an integer value
to be used for dispatch.
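A minimal sketch of the dispatch, assuming VlanID as the single
metadata ID adopted by the table (an illustrative choice; the LFB
does not prescribe which metadatum is used):

```python
# Hypothetical MetadataDispatchTable rows for one fixed metadata ID:
# metadata value -> output port index.
DISPATCH_METADATA_ID = "VlanID"          # illustrative choice
dispatch_table = {100: 0, 200: 1, 300: 2}

def dispatch(packet: bytes, metadata: dict):
    """Return the group output instance index for the packet, or
    None to signal that the packet is dropped (metadatum absent or
    value unmatched)."""
    value = metadata.get(DISPATCH_METADATA_ID)
    if value is None:
        return None
    return dispatch_table.get(value)
```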
Moreover, the LFB is defined to adopt only one metadatum for
dispatch; i.e., the metadata ID in the dispatch table is the same for
all table rows.
A more complex metadata dispatch LFB may be defined in a future
version of the library. In that LFB, multiple tuples of metadata may
be adopted to dispatch packets.
5.5.2. GenericScheduler
Various kinds of scheduling strategies exist, with various
implementations. As a base LFB library, this document defines only a
preliminary GenericScheduler LFB abstracting a simple scheduling
process. Users may use this LFB as a basic scheduler LFB from which
to construct more complex scheduler LFBs by means of inheritance, as
described in RFC 5812 [RFC5812].
The LFB describes scheduling process in the following model:
o It has a group input and expects packets of arbitrary frame types
to arrive for scheduling. The group input is capable of multiple
input port instances, each of which may be connected to a
different upstream LFB output. No metadata is expected with the
input packets.
o Multiple queues reside at the input side, with every input port
instance connected to one queue.
o Every queue is marked with a queue ID, and the queue ID is exactly
the same as the index of the corresponding input port instance.
o Scheduling disciplines are applied to all queues and also all
packets in the queues.
o Scheduled packets are output from a singleton output port of the
LFB.
Two LFB components are defined to further describe the above model.
A scheduling discipline component is defined for the CE to specify a
scheduling discipline to the LFB. The currently defined scheduling
disciplines include only FIFO and round robin (RR). For FIFO, the
model above is restricted: when the FIFO discipline is applied, only
one input port instance of the group input is allowed. If a user
accidentally defines multiple input port instances for FIFO
scheduling, only packets from the input port with the lowest port
index are scheduled to the output port, and packets at all other
input port instances are ignored.
We specify that if the GenericScheduler LFB is defined with only one
input port instance, the default scheduling discipline is FIFO. If
the LFB is defined with more than one input port instance, the
default scheduling discipline is round robin (RR).
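The two disciplines can be sketched as a single round-robin loop over
per-port queues, which degenerates to FIFO when only one queue
exists. A non-normative sketch with illustrative method names:

```python
from collections import deque

class GenericScheduler:
    """Round robin over per-input-port queues.  With a single queue
    this degenerates to FIFO, matching the LFB's default
    disciplines."""

    def __init__(self, num_queues: int):
        # One queue per input port instance; queue ID == port index.
        self.queues = [deque() for _ in range(num_queues)]
        self._next = 0

    def enqueue(self, port_index: int, packet):
        self.queues[port_index].append(packet)

    def schedule(self):
        """Emit one packet from the next non-empty queue in RR
        order; return None when all queues are empty."""
        for _ in range(len(self.queues)):
            q = self.queues[self._next]
            self._next = (self._next + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```

A priority scheduler, as suggested for future inheritance, would
replace the rotating `_next` pointer with a scan in priority order.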
A current queue depth component is defined to allow the CE to query
the status of every queue of the scheduler. Using the queue ID as
the index, the CE can query every queue for its occupied length, in
units of packets or bytes.
Several capabilities are defined for the LFB. A queue number limit
specifies the maximum number of queues the scheduler supports, which
is also the maximum number of input port instances. A
supported-disciplines capability provides the scheduling discipline
types supported by the FE to the CE. A queue length limit provides
the storage capacity of every queue.
More complex scheduler LFBs with more complex scheduling disciplines
may be defined by inheriting from this LFB. For instance, a priority
scheduler LFB may be defined simply by inheriting from this LFB and
defining a component that indicates the priorities of all input
queues.
See Section 6 for detailed XML definition for this LFB.
6. XML for LFB Library
EtherPHYCop
The LFB describes an Ethernet port abstracted at
physical layer. It limits its physical media to copper.
Multiple virtual PHYs are not supported in this LFB version.
1.0
EtherPHYIn
The input port of the EtherPHYCop LFB. It
expects any kind of Ethernet frame.
[EthernetAll]
EtherPHYOut
The output port of the EtherPHYCop LFB. It
can produce any kind of Ethernet frame and along with
the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
PHYPortID
The ID of the physical port that this LFB
handles.
uint32
AdminStatus
Admin status of the LFB
PortStatusValues
2
OperStatus
Operational status of the LFB.
PortStatusValues
AdminLinkSpeed
The link speed that the admin has requested.
LANSpeedType
0x00000005
OperLinkSpeed
The actual operational link speed.
LANSpeedType
AdminDuplexMode
The duplex mode that the admin has requested.
DuplexType
0x00000001
OperDuplexMode
The actual duplex mode.
DuplexType
CarrierStatus
The status of the Carrier. Whether the port
is linked with an operational connector.
boolean
false
SupportedLinkSpeed
Supported Link Speeds
LANSpeedType
SupportedDuplexMode
Supported Duplex Modes
DuplexType
PHYPortStatusChanged
When the status of the Physical port is
changed,the LFB sends the new status.
OperStatus
OperStatus
LinkSpeedChanged
When the operational speed of the link
is changed, the LFB sends the new operational link
speed.
OperLinkSpeed
OperLinkSpeed
DuplexModeChanged
When the operational duplex mode
is changed, the LFB sends the new operational mode.
OperDuplexMode
OperDuplexMode
EtherMACIn
An LFB abstracts an Ethernet port at MAC data link
layer. It specifically describes Ethernet processing functions
like MAC address locality check, deciding if the Ethernet
packets should be bridged, provide Ethernet layer flow control,
etc. Multiple virtual MACs are not supported in this LFB
version.
1.0
EtherMACIn
The input port of the EtherMACIn. It
expects any kind of Ethernet frame.
[EthernetAll]
[PHYPortID]
NormalPathOut
The normal output port of the EtherMACIn.
It can produce any kind of Ethernet frame and along
with the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
L2BridgingPathOut
The Bridging Output Port of the EtherMACIn.
It can produce any kind of Ethernet frame and along
with the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
AdminStatus
Admin status of the port
PortStatusValues
2
LocalMACAddresses
Local Mac addresses
IEEEMAC
L2BridgingPathEnable
Is the LFB doing L2 Bridging?
boolean
false
PromiscuousMode
Is the LFB in Promiscuous Mode?
boolean
false
TxFlowControl
Transmit flow control
boolean
false
RxFlowControl
Receive flow control
boolean
false
MACInStats
MACIn statistics
MACInStatsType
EtherClassifier
This LFB abstracts the process to decapsulate
Ethernet packets and classify the data packets into
various network layer data packets according to information
included in the Ethernet packets headers.
1.0
EtherPktsIn
Input port for data packet.
[EthernetAll]
[PHYPortID]
[LogicalPortID]
ClassifyOut
Output port for classification.
[Arbitrary]
[PHYPortID]
[SrcMAC]
[DstMAC]
[EtherType]
[VlanID]
[VlanPriority]
EtherDispatchTable
Ether classify dispatch table
EtherDispatchTableType
VlanInputTable
Vlan input table
VlanInputTableType
EtherClassifyStats
Ether classify statistic table
EtherClassifyStatsTableType
EtherEncapsulator
This LFB abstracts the process to encapsulate IP
packets to Ethernet packets according to the L2 information.
1.0
EncapIn
A Single Packet Input
[IPv4]
[IPv6]
[MediaEncapInfoIndex]
[VlanPriority]
SuccessOut
Output port for Packets which have found
Ethernet L2 information and have been successfully
encapsulated to an Ethernet packet.
[IPv4]
[IPv6]
[L2PortID]
ExceptionOut
All packets that fail with the other
operations in this LFB are output via this port.
[IPv4]
[IPv6]
[ExceptionID]
[MediaEncapInfoIndex]
[VlanPriority]
EncapTable
Ethernet Encapsulation table.
EncapTableType
EtherMACOut
EtherMACOut LFB abstracts an Ethernet port at MAC
data link layer. It specifically describes Ethernet packet
output process. Ethernet output functions are closely related
to Ethernet input functions, therefore some components
defined in this LFB are actually alias of EtherMACIn LFB.
1.0
EtherPktsIn
The Input Port of the EtherMACIn. It expects
any kind of Ethernet frame.
[EthernetAll]
[PHYPortID]
EtherMACOut
The Normal Output Port of the EtherMACOut. It
can produce any kind of Ethernet frame and along with
the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
AdminStatus
Admin status of the port. It is the alias of
"AdminStatus" component defined in EtherMACIn.
PortStatusValues
MTU
Maximum transmission unit.
uint32
TxFlowControl
Transmit flow control. It is the alias of
"TxFlowControl" component defined in EtherMACIn.
boolean
RxFlowControl
Receive flow control. It is the alias of
"RxFlowControl" component defined in EtherMACIn.
boolean
MACOutStats
MACOut statistics
MACOutStatsType
IPv4Validator
An LFB that performs IPv4 packets validation
according to RFC 1812. At the same time, IPv4 unicast and
multicast packets are classified in this LFB.
1.0
ValidatePktsIn
Input port for data packet.
[Arbitrary]
IPv4UnicastOut
Output for IPv4 unicast packet.
[IPv4Unicast]
IPv4MulticastOut
Output for IPv4 multicast packet.
[IPv4Multicast]
ExceptionOut
Output for exception packet.
[IPv4]
[ExceptionID]
FailOut
Output for failed validation packet.
[IPv4]
[ValidateErrorID]
IPv4ValidatorStats
IPv4 validator statistics information.
IPv4ValidatorStatisticsType
IPv6Validator
An LFB that performs IPv6 packets validation
according to RFC 2460. At the same time, IPv6 unicast and
multicast packets are classified in this LFB.
1.0
ValidatePktsIn
Input port for data packet.
[Arbitrary]
IPv6UnicastOut
Output for IPv6 unicast packet.
[IPv6Unicast]
IPv6MulticastOut
Output for IPv6 multicast packet.
[IPv6Multicast]
ExceptionOut
Output for exception packet.
[IPv6]
[ExceptionID]
FailOut
Output for failed validation packet.
[IPv6]
[ValidateErrorID]
IPv6ValidatorStats
IPv6 validator statistics information.
IPv6ValidatorStatisticsType
IPv4UcastLPM
An LFB that performs IPv4 Longest Prefix Match
lookup. It is defined to provide some facilities to support
users to implement equal-cost multipath routing (ECMP) or
reverse path forwarding (RPF).
1.0
PktsIn
A Single Packet Input
[IPv4Unicast]
NormalOut
This output port is connected with
IPv4NextHop LFB
[IPv4Unicast]
[HopSelector]
ECMPOut
This output port is connected with ECMP LFB,
if there is ECMP LFB in the FE.
[IPv4Unicast]
[HopSelector]
ExceptionOut
The output for the packet if an exception
occurs
[IPv4Unicast]
[ExceptionID]
IPv4PrefixTable
The IPv4 prefix table.
IPv4PrefixTableType
IPv4UcastLPMStats
Statistics for IPv4 Unicast Longest Prefix
Match
IPv4UcastLPMStatsType
IPv6UcastLPM
An LFB that performs IPv6 Longest Prefix Match
lookup. It is defined to provide some facilities to support
users to implement equal-cost multipath routing (ECMP) or
reverse path forwarding (RPF).
1.0
PktsIn
A Single Packet Input
[IPv6Unicast]
NormalOut
This output port is connected with
IPv6NextHop LFB
[IPv6Unicast]
[HopSelector]
ECMPOut
This output port is connected with ECMP LFB,
if there is ECMP LFB in the FE.
[IPv6Unicast]
[HopSelector]
ExceptionOut
The output for the packet if an exception
occurs
[IPv6Unicast]
[ExceptionID]
IPv6PrefixTable
The IPv6 prefix table.
IPv6PrefixTableType
IPv6UcastLPMStats
Statistics for IPv6 Unicast Longest Prefix
Match
IPv6UcastLPMStatsType
IPv4NextHop
This LFB abstracts the process of selecting the IPv4
next hop action. It receives an IPv4 packet with an
associated next hop ID, and uses the ID to look up a next
hop table to find an appropriate output port from the LFB.
1.0
PktsIn
A Single Packet Input
[IPv4Unicast]
[HopSelector]
SuccessOut
The output for the packet if it is valid to be
forwarded
[IPv4Unicast]
[OutputLogicalPortID]
[NextHopIPv4Addr]
ExceptionOut
The output for the packet if an exception
occurs
[IPv4Unicast]
[ExceptionID]
IPv4NextHopTable
The next hop table.
IPv4NextHopTableType
IPv6NextHop
The LFB abstracts the process of next hop
information application to IPv6 packets. It receives an IPv6
packet with an associated next hop ID, and uses the ID to
look up a next hop table to find an appropriate output port
from the LFB.
1.0
PktsIn
A single packet input.
[IPv6Unicast]
[HopSelector]
SuccessOut
The output for the packet if it is valid to
be forwarded
[IPv6Unicast]
[OutputLogicalPortID]
[NextHopIPv6Addr]
ExceptionOut
The output for the packet if an exception
occurs
[IPv6Unicast]
[ExceptionID]
IPv6NextHopTable
The next hop table.
IPv6NextHopTableType
RedirectIn
The RedirectIn LFB abstracts the process by which the CE
injects data packets into the FE LFB topology, so as to input
data packets into FE data paths. The CE may associate metadata
with the data packets to indicate various information about
the packets. Among the metadata, there MUST exist a
'RedirectIndex' metadatum, which is an integer acting as an
output port index.
1.0
PktsOut
This output group sends the redirected packet
in the data path.
[Arbitrary]
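The role of the 'RedirectIndex' metadatum can be sketched as follows; this is illustrative only, and the function name and error handling are assumptions:

```python
# Non-normative sketch: RedirectIn uses the mandatory 'RedirectIndex'
# metadatum supplied by the CE as the index into its PktsOut output group.
def select_output_port(metadata, group_size):
    """Return the PktsOut group index named by 'RedirectIndex'."""
    idx = metadata.get("RedirectIndex")
    if idx is None or not 0 <= idx < group_size:
        raise ValueError("'RedirectIndex' missing or out of range")
    return idx
```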
RedirectOut
The LFB abstracts the process by which LFBs in the
FE deliver data packets to the CE. All metadata
associated with the input packets will be delivered to the CE
via the redirect message of the ForCES protocol [RFC5810].
1.0
PktsIn
This input receives packets to send to
the CE.
[Arbitrary]
BasicMetadataDispatch
This LFB dispatches input packets to a group
output according to a metadata value and a dispatch table.
This LFB currently only allows a metadata with an integer
value to be used for dispatch.
1.0
PktsIn
Input port for data packet.
[Arbitrary]
[Arbitrary]
PktsOut
Data packet output.
[Arbitrary]
MetadataDispatchTable
Metadata dispatch table.
MetadataDispatchTableType
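The dispatch step can be sketched as follows; non-normatively, the dispatch table is modeled as a simple mapping from the integer metadata value to an index in the PktsOut output group:

```python
# Non-normative sketch of BasicMetadataDispatch: an integer metadata
# value selects an instance of the PktsOut output group.
def dispatch(dispatch_table, metadata_value):
    """Return the output group index for the metadata value, or None."""
    return dispatch_table.get(metadata_value)

dispatch_table = {1: 0, 2: 1}  # metadata value -> PktsOut group index
```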
GenericScheduler
This is a preliminary generic scheduler LFB
abstracting a simple scheduling process. Users may use this
LFB as a basic scheduler from which to construct more
complex scheduler LFBs by means of inheritance, as described
in RFC 5812.
1.0
PktsIn
Input port for data packet.
[Arbitrary]
PktsOut
Data packet output.
[Arbitrary]
QueueCount
The number of queues to be scheduled.
uint32
SchedulingDiscipline
The scheduling discipline in use.
SchdDisciplineType
CurrentQueueDepth
Current depth of all queues.
QueueDepthTableType
QueueLenLimit
Maximum length of each queue, in bytes.
uint32
QueueScheduledLimit
Maximum number of queues that can be scheduled
by this scheduler.
uint32
DisciplinesSupported
The scheduling disciplines supported.
SchdDisciplineType
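As a non-normative illustration, round-robin service, one possible value of SchedulingDiscipline, could be sketched as:

```python
from collections import deque

# Non-normative sketch: round-robin service over a set of queues, one
# possible discipline a GenericScheduler instance might run.
def round_robin(queues):
    """Yield packets from non-empty queues in round-robin order."""
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()

queues = [deque(["a1", "a2"]), deque(["b1"])]
```

Other disciplines listed in DisciplinesSupported would replace only the selection rule, leaving the queue abstraction unchanged.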
7. LFB Class Use Cases
This section gives examples of how the LFB classes defined by the
base LFB library in Section 6 are applied to achieve typical router
functions.
As mentioned in the overview section, typical router functions can be
briefly categorized into the following functions:
o IP forwarding
o address resolution
o ICMP
o network management
o running routing protocols
To achieve these functions, processing paths composed of
interconnected instances of the LFB classes should be established
in the FE. In general, the CE controls and manages the processing
paths by use of the ForCES protocol.
Note that the LFB class use cases shown in this section are only
examples to demonstrate how typical router functions can be
implemented with the defined base LFB library. Users and
implementers should not be limited by these examples.
7.1. IP Forwarding
TBD
8. Contributors
The authors would like to thank Jamal Hadi Salim, Ligang Dong, and
Fenggen Jia who made major contributions to the development of this
document.
Jamal Hadi Salim
Mojatatu Networks
Ottawa, Ontario
Canada
Email: hadi@mojatatu.com
Ligang Dong
Zhejiang Gongshang University
149 Jiaogong Road
Hangzhou 310035
P.R.China
Phone: +86-571-28877751
EMail: donglg@mail.zjgsu.edu.cn
Fenggen Jia
National Digital Switching Center (NDSC)
Jianxue Road
Zhengzhou 452000
P.R.China
EMail: jfg@mail.ndsc.com.cn
9. Acknowledgements
This document is based on earlier documents from Joel Halpern, Ligang
Dong, Fenggen Jia and Weiming Wang.
10. IANA Considerations
(TBD)
11. Security Considerations
These definitions, if used by an FE to support ForCES, create
manipulable entities on the FE. Manipulation of such objects can
produce almost unlimited effects on the FE. FEs should ensure that
only properly authenticated ForCES protocol participants are
performing such manipulations. Thus the security issues with this
protocol are defined in the ForCES protocol [RFC5810].
12. References
12.1. Normative References
[RFC5810] Doria, A., Hadi Salim, J., Haas, R., Khosravi, H., Wang,
W., Dong, L., Gopal, R., and J. Halpern, "Forwarding and
Control Element Separation (ForCES) Protocol
Specification", RFC 5810, March 2010.
[RFC5812] Halpern, J. and J. Hadi Salim, "Forwarding and Control
Element Separation (ForCES) Forwarding Element Model",
RFC 5812, March 2010.
12.2. Informative References
[RFC1812] Baker, F., "Requirements for IP Version 4 Routers",
RFC 1812, June 1995.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2629] Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
June 1999.
[RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC
Text on Security Considerations", BCP 72, RFC 3552,
July 2003.
[RFC3654] Khosravi, H. and T. Anderson, "Requirements for Separation
of IP Control and Forwarding", RFC 3654, November 2003.
[RFC3746] Yang, L., Dantu, R., Anderson, T., and R. Gopal,
"Forwarding and Control Element Separation (ForCES)
Framework", RFC 3746, April 2004.
[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an
IANA Considerations Section in RFCs", BCP 26, RFC 5226,
May 2008.
Authors' Addresses
Weiming Wang
Zhejiang Gongshang University
18 Xuezheng Str., Xiasha University Town
Hangzhou, 310018
P.R.China
Phone: +86-571-28877721
Email: wmwang@zjgsu.edu.cn
Evangelos Haleplidis
University of Patras
Patras,
Greece
Email: ehalep@ece.upatras.gr
Kentaro Ogawa
NTT Corporation
Tokyo,
Japan
Email: ogawa.kentaro@lab.ntt.co.jp
Chuanhuang Li
Hangzhou BAUD Networks
408 Wen-San Road
Hangzhou, 310012
P.R.China
Phone: +86-571-28877751
Email: chuanhuang_li@zjgsu.edu.cn
Joel Halpern
Ericsson
P.O. Box 6049
Leesburg, VA 20178
Phone: +1 703 371 3043
Email: joel.halpern@ericsson.com