Internet DRAFT - draft-wright-load-distribution


tewg                                                           S.Wright
Internet Draft                                                BellSouth
Document: draft-wright-load-distribution-00.txt                R.Jaeger
Category: Informational                                 LTS, U.Maryland
                                                             IP Highway

                                                              July 2000

                Traffic Engineering of Load Distribution

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026 [1].

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts. Internet-Drafts are draft documents valid for a maximum of
   six months and may be updated, replaced, or obsoleted by other
   documents at any time. It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

1. Abstract

   Online traffic load distribution for a single class of service has
   been explored extensively, given that extensions to IGPs can provide
   loading information to network nodes. To perform traffic
   engineering of load distribution for multi-service networks, or
   offline traffic engineering of single-service networks, a control
   mechanism for provisioning bandwidth according to some policy must
   be provided. This draft identifies the mechanisms that affect load
   distribution and the controls for those mechanisms, to enable
   policy-based traffic engineering of the load distribution. This
   draft also presents guidelines for the use of these load
   distribution mechanisms in the context of a single IP network
   administration.

2. Conventions used in this document

Wright           Informational - Expires January 2001                1
               Traffic Engineering Of Load Distribution      July 2000

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC-2119 [2].

3. Introduction

   The traffic load that an IP network supports may be distributed in
   various ways within the constraints of the topology of the network
   (i.e. avoiding routing loops). The default mechanism for load
   distribution is to rely on an IGP (e.g. IS-IS, OSPF) to identify a
   single "shortest" path between any two endpoints of the network.
   "Shortest" is typically defined in terms of a minimization of an
   administrative weight (e.g. hop count) assigned to each link of the
   network topology. Having identified a single shortest path, all
   traffic between those endpoints then follows that path until the IGP
   detects a topology change. While often called dynamic routing
   (since it changes in response to topology changes), this might be
   better characterized as topology-driven route determination.
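
   As a non-normative illustration, topology-driven route
   determination can be sketched as Dijkstra's algorithm over
   administrative link weights. The node names and the
   `shortest_path` helper below are hypothetical, not router code.

```python
import heapq

def shortest_path(topology, src, dst):
    """Topology-driven route determination: select the single path
    that minimizes the sum of administrative link weights, as a
    link-state IGP (e.g. OSPF) would.  `topology` maps each node to
    a dict of {neighbour: administrative_weight}."""
    dist = {src: 0}
    prev = {}
    queue = [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, weight in topology.get(node, {}).items():
            nd = d + weight
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # Walk the predecessor chain back from dst to recover the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hop-count weights (all links cost 1): traffic a->d takes the
# two-hop path through b; all a->d traffic then follows that path
# until the topology changes.
topo = {"a": {"b": 1, "c": 1}, "b": {"d": 1}, "c": {"e": 1},
        "e": {"d": 1}}
print(shortest_path(topo, "a", "d"))  # → ['a', 'b', 'd']
```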

   This default IGP mechanism works well in a wide variety of
   operational contexts. Nonetheless, there are operational
   environments in which network operators may wish to use additional
   controls to affect the distribution of traffic within their
   networks. These may include:

     a) service-specific routing (e.g. voice service may utilize delay-
        sensitive routing, but best effort service may not)
     b) customer-specific routing (e.g. VPNs)
     c) tactical route changes where peak traffic demands exceed single
        link capacity
     d) tactical route changes for fault avoidance

   This draft assumes the existence of some rationale for greater
   control of the load distribution than that provided by the default
   IGP mechanism.
   The main objectives of this draft include identifying (within the
   context of a single IP network administration):

     1. mechanisms that affect load distribution
     2. controls for mechanisms that affect load distribution
     3. engineering guidelines in the use of these controls (future)

   This document has evolved from a previous draft [3] in an effort to
   meet comments from the interim tewg for a broader discussion of load
   distribution. The intention is for this document to evolve toward a
   working group informational RFC. Further comments and suggestions
   towards that objective are requested.

   This document is structured as follows. Section 4 provides an
   overview of load distribution within the context of a single IP
   network administration. Route determination is discussed in section
   5. Traffic classification in multipath routing configurations is
   considered in section 6. Section 7 discusses traffic-engineered load
   balancing in MPLS. Security considerations are provided in section
   8.

4. Load Distribution

   The traffic load distribution may be considered on a service-
   specific basis or aggregated across multiple services. In
   considering the load distribution, one should also distinguish
   between a snapshot of the network's state (i.e. a measurement) and
   the (hypothetical) network state, that may result from some
   (projected) traffic demand. Measurement of the traffic load
   distribution is discussed further in section 4.1.

   Load distribution has two main components: the identification of
   the routes over which traffic flows, and, in the case of multipath
   routing configurations (where multiple acyclic paths exist between
   common endpoints), the classification of traffic that determines
   the distribution of flows among those routes. Control of the load
   distribution is discussed in section 4.2. The traffic engineering of
   the load distribution within the context of the traffic engineering
   framework is considered in section 4.3.

4.1 Traffic Load Definition and Measurement

   With modern node equipment supporting wire-speed forwarding, in this
   draft we assume traffic load is a link measurement, although in some
   cases node constraints (e.g. packet forwarding capacity) may be more
   significant.
   Traffic load is measured in units of network capacity. Network
   capacity is typically measured in units of bandwidth (e.g. with a
   magnitude dimensioned bits/second or packets/second). However,
   bandwidth should be considered a vector quantity providing both a
   magnitude and a direction. Bandwidth magnitude measurements are
   typically made at some specific (but often implicit) point in the
   network where traffic is flowing, in a specific direction, (e.g.
   between two points of a unicast transmission). The significance
   comes in distinguishing between bandwidth measurements made on a
   link basis and bandwidth demands between end-points of a network.

   A snapshot of the current load distribution may be identified
   through relevant measurements available on the network. The
   available traffic metrics for determining the load distribution
   include generic IP traffic metrics (see e.g. RFC 2679
   [4] or RFC 2722 [5]). The measurements of network capacity
   utilization can be combined with the information from the routing
   database to provide an overall perspective on the traffic

   distribution within the network. This information may be combined at
   the routers (and then reported back) or aggregated in the management
   system for dynamic traffic engineering.
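
   As a non-normative illustration, combining per-link measurements
   with the routing database to obtain a per-route view of the
   distribution might look as follows. All structures and values
   here are hypothetical.

```python
def utilization_report(route_db, link_load, link_capacity):
    """Combine per-link load measurements with the routing database:
    for each (ingress, egress) route, report the utilization of
    every link the route traverses."""
    report = {}
    for endpoints, path in route_db.items():
        links = list(zip(path, path[1:]))  # links along the route
        report[endpoints] = {
            link: link_load[link] / link_capacity[link]
            for link in links
        }
    return report

# Hypothetical snapshot: route a->d via b; loads and capacities
# in Mb/s.
routes = {("a", "d"): ["a", "b", "d"]}
load = {("a", "b"): 30.0, ("b", "d"): 80.0}
cap = {("a", "b"): 100.0, ("b", "d"): 100.0}
print(utilization_report(routes, load, cap))
# → {('a', 'd'): {('a', 'b'): 0.3, ('b', 'd'): 0.8}}
```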

   A peak demand value of the traffic load magnitude (over some time
   interval, and in the context of a specific traffic direction) may be
   used for network capacity planning purposes (beyond the scope of
   this document).  Considering the increasing deployment of asymmetric
   host interfaces (e.g. ADSL) and application software architectures
   (e.g. client-server), the traffic load distribution is not
   necessarily symmetric between the opposite directions of
   transmission for any two endpoints of the network.

4.2 Load Distribution Controls

   In order for a traffic engineering process to impact the network,
   there must be adequate controls within the network to implement the
   results of the offline traffic engineering processes. In this draft,
   we generally assume the physical topology (links and nodes) to be
   fixed, while considering the traffic engineering options for
   affecting the distribution of a traffic load over that topology. The
   addition of new nodes and links is considered a network capacity
   planning problem beyond the scope of this draft.

   The fundamental load affecting mechanisms are:
        (1) the identification of suitable routes and
        (2) (in the case of multipath routing) the allocation of
            traffic to a specific path.

   For traffic engineering purposes, the control mechanisms available
   can affect either of these mechanisms.

4.3 Control of the Load Distribution in the Context of the Traffic
    Engineering Framework

   Given the assumption of the existence of a need for control of the
   load distribution, it follows that the values of the control
   parameters are unlikely to be static. Within the TE Framework's
   taxonomy [6] of traffic engineering systems,
   (control of) load distribution may be:

     a) dependent on time or network state (either local or global
        state) - e.g. based on IGP topology information
     b) based on algorithms executed offline or online
     c) impacted by open or closed loop network control
     d) centralized or distributed control of the distributed route set
        and traffic classification functions
     e) prescriptive (i.e. a control function) rather than simply
        descriptive of network state.

   We assume network feedback to be an important part of the dynamic
   control of load distribution within the network. While the offline
   algorithms to compute a set of paths between ingress and egress
   points in the administrative domain may rely on historic load data,
   online adjustments to the traffic engineered paths will rely in part
   on this load information reported by the nodes.

   The traffic engineering framework identifies process model
   components for:
     1. measurement
     2. modeling, analysis, and simulation
     3. optimization.

   Traffic load distribution measurement was discussed in section 4.1.
   Modeling, analysis and simulation of the load distribution expected
   in the network is typically performed offline. Such analyses
   typically produce individual results of limited scope (e.g. valid
   for a specific demanded traffic load, or fault condition etc.).
   However the accumulation of a number of such results can provide an
   indication of the robustness of a particular network configuration.

   The notion of optimization of the load distribution implies the
   existence of some objective optimization criteria and constraints.
   Load distribution optimization objectives may include:
        I.   elimination of overload conditions on links / nodes
        II.  equalization of load on links / nodes
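
   For illustration only, the two objectives above can be expressed
   as simple checks over link measurements. The threshold value and
   all data below are hypothetical.

```python
def overloaded_links(link_load, link_capacity, threshold=1.0):
    """Objective I: identify links whose utilization meets or
    exceeds an overload threshold (the threshold is a hypothetical
    policy choice)."""
    return [link for link in link_load
            if link_load[link] / link_capacity[link] >= threshold]

def load_imbalance(link_load, link_capacity):
    """Objective II: the spread between the most and least utilized
    links; 0.0 would indicate perfectly equalized load."""
    utils = [link_load[link] / link_capacity[link]
             for link in link_load]
    return max(utils) - min(utils)

# Hypothetical measurements in Mb/s on two 100 Mb/s links.
load = {("a", "b"): 75.0, ("a", "c"): 25.0}
cap = {("a", "b"): 100.0, ("a", "c"): 100.0}
print(overloaded_links(load, cap, threshold=0.7))  # → [('a', 'b')]
print(load_imbalance(load, cap))                   # → 0.5
```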

   A variety of load distribution constraints may be derived from
   equipment, network topology, operational practices, service
   agreements etc. Load distribution constraints may include:
     a) current topology / route database
     b) current planned changes to topology / route database
     c) capacity allocations for planned traffic demand
     d) capacity allocations for network protection purposes
     e) service level agreements (SLAs) for bandwidth and delay
        sensitivity of flows

   Within the context of the traffic-engineering framework, control of
   the load distribution is a core capability for enabling traffic
   engineering of the network.

5. Route Determination

   Routing protocols have been studied extensively in the IETF and
   elsewhere. This section is not intended to replicate existing
   specifications, but rather to focus on specific operational aspects
   of controlling those routing protocols towards a traffic-engineered
   load distribution. In this draft, we assume that a traffic-
   engineered load distribution typically relies on something other
   than a default IGP route set, and typically requires support for
   multiple path configurations.

   The set of routes deployed for use within a network is not
   necessarily monolithic. Not all routes in the network may be
   determined by the same system. Routes may be static or dynamic.
   Routes may be determined by:
     1. a topology-driven IGP
     2. explicit specification
     3. capacity constraints (e.g. link / node / service bandwidth)
     4. constraints on other desired route characteristics (e.g.
        delay, diversity / affinity with other routes etc.)

   Combinations of the methods are possible:
     - partial explicit routes where some of the links are selected by
          the topology driven IGP
     - some routes may be automatically generated by the IGP. Others
          may be explicitly set by some management system.

   Explicit routes are not necessarily static. Consider that explicit
   routes may be generated periodically by an offline traffic
   engineering tool and provisioned into the network. MPLS provides
   efficient mechanisms for explicit routing and bandwidth reservation.

   Note that link capacity may be reserved for a variety of protection
   strategies as well as for planned traffic load demands and in
   response to signaled bandwidth requests (e.g. RSVP). When allocating
   capacity, the sequence in which capacity on specific routes is
   allocated may affect the overall traffic load capacity. It is
   important during path selection to choose paths that
   have a minimal effect on future path setups (see e.g. [7]).
   Aggregate capacity required for some paths may exceed the capacities
   of one or more links along the path, forcing the selection of an
   alternative path for that traffic. Constraint-based routing
   approaches may also provide mechanisms to support additional
   constraints (other than capacity-based constraints).

   IGPs (e.g. IS-IS, OSPF) have had enhancement proposals to support
   additional network state information for traffic engineering
   purposes (e.g. available link capacity). Alternatively, routers can
   report relevant network state information, both raw and processed,
   directly to the management system.

   In other networks (e.g. PSTN), some symmetry in the routing of
   traffic flows and aggregate demand may be assumed. In the case of
   the public Internet, symmetry is unlikely to be achieved in routing
   (e.g. due to peering policies sending responses to different peering
   points than queries).

   Controls over the determination of routes form an important aspect
   of traffic engineering for load distribution. Since the routing
   operates over a specific topology, any control of the topology
   abstraction used provides some control of the set of possible
   routes.

5.1 Control of the topology abstraction

   There are two major controls available on the topology abstraction -
   the use of hierarchical routing and link bundling concepts.

5.1.1 Hierarchical Routing

   Hierarchical routing provides a mechanism to abstract portions of
   the network in order to simplify the topology over which routes are
   being selected. Hierarchical routing examples in IP networks
   include:
     a) use of an EGP (e.g. BGP) and an IGP (e.g. IS-IS)
     b) MPLS label stacks

   Such hierarchies provide both a simplified topology and a coarse
   classification of traffic.

5.1.2 Link Bundling

   In some cases a pair of LSRs may be connected by several (parallel)
   links. From the MPLS Traffic Engineering point of view, for the
   purpose of scalability, it may be desirable to treat all these links
   as a single IP link - an operation known as Link Bundling (e.g. ref.
   [8]). With load balancing, the load to be balanced is spread across
   multiple LSPs, which in general do not require physical topology
   adjacency of the LSRs. The techniques are complementary. Link
   bundling provides a local optimization that is particularly suited
   to aggregating low-speed links. Load balancing is targeted at
   larger-scale network optimizations.

5.2 Operational controls over route determination

   The default topology driven IGP provides the least administrative
   control over route determination. The main control available in this
   case is the ability to modify the administrative weights. This has
   network wide effects and may result in unanticipated traffic shifts.

   A route set comprised entirely of completely-specified explicit
   routes is the opposite extreme, i.e. complete offline operational
   control of the routing. A disadvantage of using explicit routes is
   the administrative burden and potential for human induced errors
   from using this approach on a large scale. Management systems (e.g.
   policy-based management) may be deployed to ease these operational
   concerns, while still providing more precise control over the routes
   deployed in the network. In MPLS enabled networks, explicit route
   specification is feasible and a finer grained approach is possible
   for classification, including service differentiation.

6. Traffic Classification in Multipath Routing Configurations

   Given multiple paths between two endpoints, there is a choice to be
   made of which traffic to send down a particular path. This choice
   could be affected by:

     1. traffic source preferences (e.g. expressed as marking - DSCPs)
     2. traffic destination preferences (e.g. peering arrangements)
     3. network operator preferences (e.g. time of day routing,
        scheduled facility maintenance, policy)
     4. network state (e.g. link congestion avoidance)

   There are a number of potential issues related to the use of multi-
   path routing [9], including:
     a) variable path MTU
     b) variable latencies
     c) increased difficulty in debugging
     d) sequence integrity

   These issues may be of particular concern when traffic from a single
   "flow" is routed over multiple paths, or during the transition of
   traffic flow between paths. Some effort [10] has been made to
   consider these effects in the development of hashing algorithms for
   use in multipath routing. However, the transient effects of flow
   migration for other than best-effort flows have not been resolved.

   The choice of traffic classification algorithm can be delegated to
   the network (e.g. load balancing - which may be done based on some
   hash of packet headers and/or random numbers). This approach is
   taken in Equal Cost Multipath [ECMP] and Optimized Multipath [11].
   Alternatively, a policy-based approach has the advantage of
   permitting greater flexibility in the packet classification and path
   selection. This flexibility can be used for more sophisticated load
   balancing algorithms, or to meet churn in the network optimization
   objectives from new service requirements.
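
   A non-normative sketch of the hash-based classification delegated
   to the network: every packet of a flow hashes to the same path,
   preserving per-flow sequence integrity while spreading distinct
   flows across the path set. CRC32 stands in for whatever hash a
   real router implements; the header field names are illustrative.

```python
import zlib

def select_path(header, paths):
    """Hash the invariant header fields of a packet and use the
    result to pick one of the available equal-cost paths."""
    key = "|".join(str(header[f]) for f in
                   ("src", "dst", "sport", "dport", "proto"))
    return paths[zlib.crc32(key.encode()) % len(paths)]

flow = {"src": "10.0.0.1", "dst": "10.0.1.9",
        "sport": 1234, "dport": 80, "proto": 6}
paths = ["LSP-1", "LSP-2"]
# Repeated lookups for the same flow always yield the same path,
# so packets of that flow stay in sequence.
assert select_path(flow, paths) == select_path(flow, paths)
```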

   Multipath routing, in the absence of explicit routes, is difficult
   to traffic engineer as it devolves to the problem of adjusting the
   administrative weights. MPLS networks provide a convenient and
   realistic context for multipath classification examples using
   explicit routes. One LSP could be established along the default IGP
   path. An additional LSP could be provisioned (in various ways) to
   meet different traffic engineering objectives.

7. Traffic Engineered Load Distribution in Multipath MPLS networks

   In this section we focus mainly on load balancing as a specific sub-
   problem within the topic of load distribution. Load-balancing
   essentially provides a partition of the traffic load across the
   multiple paths in the MPLS network. The basis for partitioning the
   traffic can be static or dynamic. Dynamic load balancing can be
   based on a dynamic administrative control (e.g. Time of Day), or it
   can form a closed control loop with some measured network parameter.

   Static partitioning of the load can be based on information carried
   within the packet header (e.g. source / destination addresses,
   source / destination port numbers, packet size, protocol ID, DSCP,
   etc.). Static partitioning can also be based on other information
   available at the LSR (e.g. the arriving physical interface).

   A control-loop based load-balancing scheme seeks to balance the load
   close to some objective, subject to error in the measurements and
   delays in the feedback loop etc. The objective may be based on a
   fraction of the input traffic to be sent down a link (e.g. 20% down
   LSP (abd) and 80% down LSP (acd) in Figure 1), in which case some
   measurement of the input traffic is required. The objective may also
   be based on avoiding congestion loss in which case some loss metric
   is required.
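
   The fractional objective above (20% down LSP (abd), 80% down LSP
   (acd)) can be illustrated with a smoothed weighted round robin:
   each decision credits every LSP with its target fraction, picks
   the LSP with the most accumulated credit, and debits it one unit.
   This is a sketch under hypothetical LSP names, not a recommended
   router algorithm.

```python
class WeightedBalancer:
    """Partition input traffic across LSPs according to target
    fractions, e.g. 20% down LSP (abd) and 80% down LSP (acd)."""

    def __init__(self, targets):        # {lsp_name: target_fraction}
        self.targets = targets
        self.credit = {lsp: 0.0 for lsp in targets}

    def select(self):
        # Credit every LSP, pick the most-deserving, debit it.
        for lsp, fraction in self.targets.items():
            self.credit[lsp] += fraction
        chosen = max(self.credit, key=self.credit.get)
        self.credit[chosen] -= 1.0
        return chosen

balancer = WeightedBalancer({"abd": 0.2, "acd": 0.8})
picks = [balancer.select() for _ in range(100)]
print(picks.count("abd"), picks.count("acd"))  # → 20 80
```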

   The metrics required for control loop load balancing may be derived
   from information available locally at the upstream LSR, or may be
   triggered by events distributed elsewhere in the network. In the
   latter case, the metrics must be delivered to the Policy Decision
   Point.  Obviously, locally derived trigger conditions would be
   expected to avoid the propagation delays etc. associated with the
   general distributed case. Frequent notification of the state of
   these metrics increases network traffic which may be undesirable.
   This draft does not seek to provide guidance on the appropriate rate
   of notification for metric updates.

   Consider the case of a single large flow that must be load balanced
   across a set of links. In this case policies based solely on the
   packet headers may be inadequate and some other approach (e.g. based
   on a random number generated within the router) may be required.
   Note that sequence integrity of the aggregate FEC forwarded over a
   set of load balancing LSPs may not be preserved under such a regime.
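
   The router-generated random number approach mentioned above might
   be sketched as a per-packet weighted random selection. As the
   text notes, sequence integrity across the load-balanced LSPs is
   not preserved: consecutive packets may take different paths. The
   function and LSP names are hypothetical.

```python
import random

def per_packet_select(paths, weights, rng=random.Random()):
    """Weighted random path selection for a single large flow that
    header-based policies cannot split."""
    return rng.choices(paths, weights=weights)[0]

# Each packet is assigned independently; no flow stickiness.
print(per_packet_select(["LSP-1", "LSP-2"], [0.2, 0.8]))
```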

   ECMP and OMP embed the load balancing optimization problem in the
   IGP implementation. This may be appropriate in the context of a
   single service if the optimization objectives and constraints can be
   agreed. ECMP approaches apply equal cost routes, but do not provide
   guidance on allocating load between routes with different
   capacities. OMP attempts a network wide routing optimization
   (considering capacities) but assumes that all network services can
   be reduced to a single dimension of capacity. For networks requiring
   greater flexibility in the optimization objectives and constraints,
   policy based approaches may be appropriate.

7.1 Policy-Based MPLS Network Context for Load Balancing

   Figure 1 illustrates a generic policy based network architecture in
   the context of an MPLS network. In this example, we have two LSPs
   established:
      LSP #1 that follows the path (abd)
      LSP #2 that follows the path (acd).

   In this draft we do not consider the problems of establishing the
   LSPs. We assume that a variety of mechanisms may be used for either
   manual (e.g. LSPs provisioned as explicit routes) or automated (e.g. LSPs
   based on topology driven or data driven shortest path routes)
   establishment of the LSPs. Future versions of this or another draft
   may address the issue of establishing LSPs via policy mechanisms
   (e.g. using COPS).

                      ++++++++++++++
                      +   Policy   +
                      + Management +
                      +    Tool    +
                      ++++++++++++++
                        |        |
                        |        | (e.g. JAVA, LDAP)
                        |       ++++++++++++++
                        |       +   Policy   +
                        |       + Repository +
                        |       ++++++++++++++
                        |        |
                        |        | (e.g. LDAP)
                      ++++++++++++++
                      +   Policy   +
                      +  Decision  +
                      +    Point   +
                      ++++++++++++++
                       /   / |    \
                      /   /  |     +-------+
                     /   /   |              \   (e.g. COPS, SNMP)
        +++++++++++++++  | ++++++++++++++   +++++++++++++++
        + ELSR(a)     +----+ LSR (b)    +---+ ELSR (d)    +
        +++++++++++++++  | ++++++++++++++   +++++++++++++++
                      \  |                    /
                       \ +--\                /
                        \  ++++++++++++++   /
                         \-+ LSR (c)    +--/

       Figure 1 LSR as PEP

   The load balancing operation is performed at the LSR containing the
   ingress of the LSPs to be load balanced. This LSR ((a) in Figure 1)
   is acting as the Policy Enforcement Point for load-balancing
   policies related to these LSPs. In this context the load-balancing
   problem concerns the selection of suitable policies to control the
   classification and admission of packets and/or of flows to both
   LSPs.
   The admission decision for an LSP is reflected in the placement of
   that LSP as the Next Hop Label Forwarding Entry (NHLFE) within the
   appropriate routing tables within the LSR. Normally, there is only
   one NHLFE corresponding to each FEC; however, there are some
   circumstances where multiple NHLFEs may exist for an FEC.

   The conditions for the policies applying to the set of LSPs to be
   load balanced should be consistent. For example, if the condition
   used to allocate flows between LSPs is the source address range,
   then the set of policies applied to the set of LSPs should account
   for the disposition of the entire source address range.

   For policy-based MPLS networks, traffic engineering policies would
   also be able to utilize, for both conditions and actions, the
   parameters available in the standard MPLS MIBs, i.e.:

        MPLS Traffic Engineering MIB [12],
        MPLS LSR MIB [13],
        MPLS Packet Classifier MIB [14].

   MIB elements for additional traffic metrics are for further study,
   beyond the scope of this internet draft.

7.2 Load Balancing at Edge of MPLS Domain

   Flows admitted to an LSP at the edge of an MPLS domain are described
   by the set of Forwarding Equivalence Classes (FECs) that are mapped
   to the LSPs in the FEC to NHLFE (FTN) table.

   The load-balancing operation may be considered as redefining the
   FECs to send traffic along the appropriate path. Rather than sending
   all the traffic along a single LSP, the load balancing policy
   operation results in the creation of new FECs which effectively
   partition the traffic flow among the LSPs in order to achieve some
   load balance objective.

   Consider as an example, two simple point-to-point LSPs, (a) and (b),
   with the same source and destination LSRs, over which we are to load
   balance some aggregate FEC (z). The aggregate FEC (z) is the union
   of FEC (a) and FEC (b). The load balancing policy may adjust the FEC
   (a) and FEC (b) definitions such that the aggregate FEC (z) is
   partitioned between the two LSPs to meet the load balance objective.
7.3 Load Balancing at interior of MPLS Domain

   Flows admitted to an LSP at the interior of an MPLS domain are
   described by the set of labels that are mapped to the LSPs in the
   Incoming Label Map (ILM).

   A Point-to-Point LSP that simply transits an LSR at the interior of
   an MPLS domain does not have an LSP ingress at this transit LSR.

   Merge points of a Multipoint-to-Point LSP may be considered as
   ingress points for the next link of the LSP.

   A label stacking operation may be considered as an ingress point to
   a new LSP.

   The above conditions, which may map multiple incoming LSPs onto
   different outgoing LSPs, may require load balancing at the interior
   node. The FEC of an
   incoming flow may be inferred from its label. Hence load-balancing
   policies may operate based on incoming labels to segregate traffic
   rather than requiring the ability to walk up the incoming label
   stack to the packet header in order to reclassify the packet. The
   result is a coarse load balancing of LSPs (not flows) onto one of a
   number of LSPs from the LSR to the egress LSR.
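
   The label-based segregation described above can be sketched as a
   direct ILM lookup: the incoming label identifies the FEC, so
   whole LSPs (not flows) are assigned to onward LSPs without
   walking the label stack to the packet header. The ILM contents
   here are hypothetical.

```python
def interior_balance(incoming_label, ilm):
    """Coarse load balancing at an interior LSR: map an incoming
    label directly to an outgoing (label, LSP) entry from the
    Incoming Label Map."""
    out_label, out_lsp = ilm[incoming_label]
    return out_label, out_lsp

# Hypothetical ILM: incoming labels 17 and 18 are segregated onto
# different onward LSPs toward the same egress.
ilm = {17: (42, "LSP-1"), 18: (43, "LSP-2")}
print(interior_balance(17, ilm))  # → (42, 'LSP-1')
```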

7.4 MPLS Policies for Load Balancing

   MPLS load balancing partitions an incoming stream of traffic across
   multiple LSPs. The load balancing policy, as well as the ingress LSR
   where the policy is enforced, must be able to distinctly identify
   LSPs that will deliver flows towards the destination. We do not, in
   this draft, address the issue of how LSPs are
   established. It is assumed, however, that the PDP that installs the
   load balancing policy has knowledge of the existing LSPs and is
   able to identify them in policy rules. One way to achieve this is
   through the binding of a label to an LSP.

   An example MPLS load-balancing policy may state, for the simple case
   of balancing across two LSPs, "If traffic matches classifier C,
   then forward on LSP L1, else forward on LSP L2." Classification
   can be done on a number of parameters, such as packet header fields,
   incoming labels, etc. The classification conditions of an MPLS load-
   balancing policy are thus effectively constrained to be able to
   specify the FEC in terms that can be resolved into MPLS packet
   classification MIB parameters.
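
   For illustration only, the two-LSP rule above could be modeled as
   follows. The classifier (matching on a DSCP value) and the LSP
   names are hypothetical.

```python
def apply_policy(packet, classifier, lsp_match, lsp_default):
    """The example policy: if traffic matches classifier C, forward
    on LSP L1, else forward on LSP L2."""
    return lsp_match if classifier(packet) else lsp_default

def is_ef(pkt):
    """Hypothetical classifier C: matches the Expedited Forwarding
    DSCP (46)."""
    return pkt.get("dscp") == 46

print(apply_policy({"dscp": 46}, is_ef, "L1", "L2"))  # → L1
print(apply_policy({"dscp": 0}, is_ef, "L1", "L2"))   # → L2
```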

   Forwarding traffic on an LSP can be achieved by tagging the traffic
   with the appropriate label corresponding to the LSP. MPLS load-
   balancing policy actions typically result in the definition of an
   alternative FEC to be forwarded down a specific LSP. This would
   typically be achieved by appropriate provisioning of the FEC and
   routing tables (the FTN and ILM tables in the MPLS architecture) -
   e.g. via the appropriate MIBs.

8. Security Considerations

   The policy system and the MPLS system both have their inherent
   security issues, which this document does not attempt to resolve.

   The policy system provides a mechanism to configure the LSPs within
   LSRs. Anything that can be configured can also be incorrectly
   configured, with potentially disastrous results. The policy system
   can help to secure the MPLS system by providing appropriate controls
   on the LSP life cycle. Conversely, if the security of the policy
   system is compromised, then this may impact any MPLS systems
   controlled by that policy system.

   The MPLS network is not expected to impact the security of the
   Policy system.

   Further security considerations of policy-enabled MPLS networks are
   for further study.

9. References

   [1]  Bradner, S., "The Internet Standards Process -- Revision 3",
      BCP 9, RFC 2026, October 1996.

   [2]  Bradner, S., "Key words for use in RFCs to Indicate Requirement
      Levels", BCP 14, RFC 2119, March 1997.

   [3]  Wright, S., Reichmeyer, F., Jaeger, R., Gibson, M., "Policy
      Based load balancing in Traffic Engineered MPLS Networks", work-
      in-progress, draft-wright-mpls-te-policy-00.txt, June 2000.

   [4]  Almes, G., Kalidindi, S., Zekauskas, M., "A One-way Delay
      Metric for IPPM", RFC 2679, September 1999.

   [5]  Brownlee, N., Mills, C., Ruth, G., "Traffic flow Measurement:
      Architecture", RFC 2722, October 1999.

   [6]  Awduche, D., Chiu, A., Elwalid, A., Widjaja, I., Xiao, X., "A
      Framework for Internet Traffic Engineering", draft-ietf-tewg-
      framework-01.txt, work in progress, May 2000.

   [7]  Kodialam, M., Lakshman, T.V., "Minimum Interference Routing
      with Applications to MPLS Traffic Engineering", Proc. INFOCOM
      2000.

   [8]  Kompella, K., Rekhter, Y., " Link Bundling in MPLS Traffic
      Engineering", work-in-progress, draft-kompella-mpls-bundle-
      00.txt, February 2000.

   [9]  Thaler, D., Hopps, C., "Multipath Issues in Unicast and
      Multicast Next-Hop Selection", draft-thaler-multipath-05.txt,
      work in progress, February 2000.

   [10]  Hopps, C., "Analysis of an Equal-Cost Multi-Path Algorithm",
      draft-hopps-ecmp-algo-analysis-04.txt, work-in-progress, February
      2000.

   [11]  Villamizar, C., "MPLS Optimized Multipath (MPLS-OMP)", work in
      progress, 1999.

   [12]  Srinivasan, C., Viswanathan, A., Nadeau, T., "MPLS Traffic
      Engineering Management Information Base Using SMIv2", work-in-
      progress, draft-ietf-mpls-te-mib-03.txt, March 2000.

   [13]  Srinivasan, C., Viswanathan,A., Nadeau,T.,"MPLS Label Switch
      Router Management Information Base Using SMIv2", work-in-
      progress,  draft-ietf-mpls-lsr-mib-04.txt, April 2000.

   [14]  Nadeau,T., Srinivasan, C., Viswanathan,A., "Multiprotocol
      Switching Packet Classification Management Information Base Using
      SMIv2", work-in-progress, draft-nadeau-mpls-packet-classifier-
      mib-00.txt, March 2000.

10.  Acknowledgments

11. Authors' Addresses

   Steven Wright
   Science & Technology
   BellSouth Telecommunications
   41G70 BSC
   675 West Peachtree St. NE.
   Atlanta, GA 30375
   Phone +1 (404) 332-2194

   Robert Jaeger
   Laboratory for Telecommunications Science,
   2800 Powder Mill Road, Bldg 601, Room 131
   Adelphi, MD 20783
   Phone +1 (301) 688-1420

   Francis Reichmeyer
   IPHighway, Inc.
   55 New York Avenue
   Framingham, MA 01701
   Phone +1 (201) 655-8714

Full Copyright Statement

   Copyright (C) The Internet Society (2000). All Rights Reserved.

   This document and translations of it may be copied and furnished to
   others, and derivative works that comment on or otherwise explain it
   or assist in its implementation may be prepared, copied, published
   and distributed, in whole or in part, without restriction of any
   kind, provided that the above copyright notice and this paragraph
   are included on all such copies and derivative works. However, this
   document itself may not be modified in any way, such as by removing
   the copyright notice or references to the Internet Society or other
   Internet organizations, except as needed for the purpose of
   developing Internet standards in which case the procedures for
   copyrights defined in the Internet Standards process must be
   followed, or as required to translate it into languages other than
   English.

   The limited permissions granted above are perpetual and will not be
   revoked by the Internet Society or its successors or assigns.

   This document and the information contained herein is provided on an
   "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
   TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
   BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
   HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.